Considerations in establishing a post-mortem brain and tissue bank for the study of myalgic encephalomyelitis/chronic fatigue syndrome: a proposed protocol Background Our aim, having previously investigated through a qualitative study involving extensive discussions with experts and patients the issues involved in establishing and maintaining a disease specific brain and tissue bank for myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), was to develop a protocol for a UK ME/CFS repository of high quality human tissue from well characterised subjects with ME/CFS and controls suitable for a broad range of research applications. This would involve a specific donor program coupled with rapid tissue collection and processing, supplemented by comprehensive prospectively collected clinical, laboratory and self-assessment data from cases and controls. Findings We reviewed the operations of existing tissue banks from published literature and from their internal protocols and standard operating procedures (SOPs). On this basis, we developed the protocol presented here, which was designed to meet high technical and ethical standards and legal requirements and was based on recommendations of the MRC UK Brain Banks Network. The facility would be most efficient and cost-effective if incorporated into an existing tissue bank. Tissue collection would be rapid and follow robust protocols to ensure preservation sufficient for a wide range of research uses. A central tissue bank would have resources both for wide-scale donor recruitment and rapid response to donor death for prompt harvesting and processing of tissue. Conclusion An ME/CFS brain and tissue bank could be established using this protocol. Success would depend on careful consideration of logistic, technical, legal and ethical issues, continuous consultation with patients and the donor population, and a sustainable model of funding ideally involving research councils, health services, and patient charities. This initiative could revolutionise the understanding of this still poorly-understood disease and enhance development of diagnostic biomarkers and treatments. Post-mortem examinations have greatly helped to clarify the aetiology and pathogenesis of a wide range of medical disorders, e.g. Creutzfeldt-Jakob disease (CJD) and Parkinson's disease, opening possibilities for better diagnosis and in many cases treatment. The poor understanding and lack of consensus on the causes and pathophysiological mechanisms involved in ME/CFS and the lack of specific animal models suggest its suitability for study through pathology. However, and rather surprisingly, only a very small number of (ad hoc) pathological studies have been conducted in people who have died with ME/CFS. A few but not all autopsies conducted have shown nervous system abnormalities, including inflammation in the dorsal root ganglia [2] and evidence of viral encephalitis [20,21]. These do not, however, constitute sufficient evidence to confirm the role of the central or peripheral nervous system in its aetiology or pathogenesis. Calls for brain banks to study ME/CFS have been made in the US [22] and Australia [23] as early as 1996 and 1999. The CFS Peer Reviews for the Centers for Disease Control (CDC) in the US recommended the establishment of a CFS patient brain bank for "neuropathological analysis for tissues not available by other mechanisms" [24]. 
Nevertheless no action from the CDC or others has yet resulted from that recommendation, and no initiatives have been reported towards systematically addressing the use of pathology for the study of ME/CFS. The only related initiative has come from the Sun Health Research Institute in Sun City, Arizona, US [25], which reported the creation in 1997 of the first and still incipient tissue bank for the study of fibromyalgia, a condition closely related to ME/CFS. A study carried out by two of the authors (LN, EL) confirmed that most patients with ME/CFS favour the establishment of a post-mortem tissue bank [26]. We hypothesise that the better understanding of the aetiology and pathology of ME/CFS following systematic tissue collection and examination would lead to improved diagnosis and disease recognition. Moreover, it would allow the development of new treatment options and interventions specifically targeting possible causes of disease and thus potentially resulting in clinical improvement of patients. This paper discusses considerations in setting up and maintaining a brain and tissue bank for the study of ME/CFS and outlines a protocol for accomplishing this. The aim of the study is to review the fundamental requirements for and to develop a protocol for establishing an internationally unique facility of high quality human tissue from well characterised subjects with ME/CFS and controls, suitable for a broad range of research applications. This will be achieved through a specific donor program coupled with a system for rapid tissue collection and processing, supplemented by comprehensive prospectively collected clinical, laboratory and self-assessment data from cases and controls. Procedures will meet high technical and ethical standards and legal requirements in relation to recruitment and follow-up of donors and the collection, handling, preservation and disposal of tissue. The protocol and procedures adopted will be based on the recommendations of the MRC UK brain banks network [27], of which the ME/CFS Tissue Bank will become a member. We will develop close collaboration with other tissue banks to facilitate the supply of control CNS tissue for research, subject to appropriate consents being available. Specific objectives include: To establish a cohort and donor scheme of well characterized cases of ME/CFS and controls for eventual retrieval of nervous system and other tissue post-mortem; To establish a post-mortem brain and tissue bank comprising samples from well characterized cases of ME/CFS and controls; To enable high quality pathological research in ME/CFS and identify biomarkers; and To disseminate the resource to the international research community and other potential users. The study, which was approved by the London School of Hygiene and Tropical Medicine (LSHTM) Ethics Committee, did not involve any human subjects, so no consent was required or obtained. Materials and methods We developed a protocol for a UK-based repository of tissues from people with ME/CFS. This resulted from extensive discussions with a range of experts and patients and the results of a qualitative study a reported in detail elsewhere [26], which aimed at determining the acceptability and feasibility of establishing a tissue bank for the study of ME/CFS. These were complemented by a review of the literature on tissue bank operation and of internal protocols and standard operating procedures (SOPs) from established tissue banks. 
This qualitative study was approved by the London School of Hygiene and Tropical Medicine (LSHTM) Ethics Committee and was performed in accordance with the ethical standards laid down in the Declaration of Helsinki 2008. All participants gave prior informed consent to the use of their inputs in the preparation of reports.
Results
We were able to establish the desirability and feasibility of establishing a tissue bank for the study of ME/CFS and to develop a protocol for its establishment. Our conclusions were that this would best be met by situating the ME/CFS Tissue Bank within an operational tissue bank, thus benefiting from existing infrastructure and experience whilst optimizing the use of resources. This protocol is generic and could be adopted by any appropriate facility within the UK or, with relevant adaptations, internationally. The Addenbrooke's Hospital Brain and Tissue Bank in Cambridge is very well placed for this purpose due to its long-term experience in tissue banking and previous involvement in ME/CFS research, and is intended as the initial site for the ME/CFS Tissue Bank. In addition, the involvement of patients and their representatives, and particularly of charities working with people with ME/CFS (PWME), would provide the necessary framework to ensure that the research is truly participatory and that ongoing recruitment safeguards the long-term sustainability of the Bank. The protocol we developed as a result of this study is detailed below.
Overview of procedures
Sources and procedures for recruitment of tissue donors
Potential donors will be able to access information about the Tissue Bank and targeted-donor scheme primarily from the project webpage and from GPs and other health professionals, including at selected NHS ME/CFS clinics, particularly those situated within the catchment area of the Tissue Bank, i.e. in the East of England. Depending on availability of funding, the intention would be to expand the catchment area to include other regions of the country, with the potential to include other services as receptors of tissues. Other sources of participants may include disease-specific charities and support groups, related newsletters and magazines, and scientific and group meetings with patients, as well as GPs and other health professionals; however, these avenues of publicity, and any expansion of the geographical area of recruitment, could only be pursued in line with the capacity of the team to absorb the potentially high levels of demand for donations. Information packs will be sent to potential donors in response to requests for information. These will include an information booklet, the donor consent form, a form for close relatives of the donor to complete indicating their agreement with the proposed donation, a health questionnaire, our latest newsletter, and a self-addressed and stamped or freepost envelope. Potential donors, including cases and controls, will have the opportunity to phone Tissue Bank staff to clarify any doubts before deciding to enrol. In addition, they will have the opportunity to discuss any issues directly with the Tissue Bank staff collecting their blood samples. Those deciding to register as donors will be issued a donor card containing the contact details of the Tissue Bank and will be asked to inform their close relatives and GPs of their donor status.
Inclusion and exclusion criteria for donors Donors will include individuals over 18 years old with a confirmed diagnosis of ME/CFS for at least two years using the Fukuda ('CDC-94') and Canadian Consensus Criteria, which are standard, widely-accepted case definitions [18,19], as ascertained by a health professional with knowledge of ME/CFS, e.g. a specialist working in an ME/CFS clinic or a GP with a special interest in ME/ CFS. We will ask patients to confirm they have been diagnosed by a medical professional with expertise in ME/ CFS and to include a letter from their GP or another health professional confirming their diagnosis has been formally made. Controls will include individuals of the same age group and without present or past history of chronic fatigue or other neurological, psychiatric, immune and inflammatory diseases or major morbidity, such as cancer. These will be sought from friends, relatives of cases and other volunteers. Since the control donors will be recruited as a direct result of their relationship with affected individuals, recruitment of control donors would be necessarily targeted at those who have knowledge of (and possibly indirect experience of) the condition. We will ensure that consent by control subjects is given freely, that they have sufficient time for reflection, and that they can access more information about what is involved in the Tissue Bank independently of cases in simple and straightforward ways, using the same channels of information and response to queries as cases. Donors will also be asked to give consent as 'control subjects' for studies conducted for other diseases, in connection with other tissue banks in the network. Individuals with confirmed previous ME/CFS, but who are asymptomatic and no longer fulfil diagnostic criteria, will be accepted as donors and form a separate group (controls with previous ME/CFS). The same procedure will apply to the minority of those who had ME/CFS at recruitment, but who no longer have the condition at the time of death. We will encourage recruitment of family members with ME/CFS as cases. Procedures for recruitment and follow-up of donors Recruitment procedures A self-completed recruitment form will establish detailed information on diagnosis, clinical symptoms, potential exposures and socio-demographic variables. Donors will be invited to donate up to 100 ml of blood for freezing and long-term storage b , which may be used for specific research projects and as a further source of information for the characterisation of cases post-mortem. Follow-up of donors Donors will be followed up at least every two years through postal or electronic questionnaires. Information from the questionnaires will be complemented by clinical data, e.g. those provided to the ME/CFS Disease Register and UK ME/CFS Biobank, clinical notes from assessments by GPs, research physicians and specialist doctors, and from laboratory and other tests, where appropriate. This complementary information will usually be obtained post-mortem. Table 1 illustrates the potential yield of effective tissue donations over 5 years, according to the number of PWME who register as donors (as a percentage of total number of people estimated to have the condition), based on an average UK population mortality. A similar yield for controls of the same age group and sex could be expected within the same periods of time. The actual numbers would depend on resources and interest. 
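Table 1 is not reproduced here, but the yield calculation it describes can be approximated as follows. This is a minimal sketch, assuming an illustrative UK ME/CFS prevalence, illustrative registration fractions, and an average all-cause mortality rate of roughly 0.9% per year; none of these figures are taken from the paper.

```python
# Hypothetical sketch of the calculation behind Table 1: expected tissue donations
# over a 5-year period from a registered donor cohort, assuming donors die at an
# average UK all-cause mortality rate. All numeric inputs are illustrative
# assumptions, not figures from the paper.

UK_PWME_ESTIMATE = 250_000        # assumed number of people with ME/CFS in the UK
ANNUAL_MORTALITY_RATE = 0.009     # assumed average all-cause mortality (~0.9%/year)
YEARS = 5

def expected_donations(registered_fraction: float) -> float:
    """Expected number of donor deaths (potential donations) over the period."""
    registered_donors = UK_PWME_ESTIMATE * registered_fraction
    # Probability that a registered donor dies at some point within YEARS years.
    p_death = 1 - (1 - ANNUAL_MORTALITY_RATE) ** YEARS
    return registered_donors * p_death

for fraction in (0.001, 0.005, 0.01):   # 0.1%, 0.5%, 1% of PWME register as donors
    print(f"{fraction:.1%} registered -> ~{expected_donations(fraction):.0f} donations in {YEARS} years")
```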
Post-mortem procedures Communication of death, inclusion and exclusion criteria The ME/CFS Tissue Bank will adopt a rapid response system following the death of a registered donor. The system will usually be initiated by a relative or carer telephoning the 24 hr emergency donor line (information found on the donor card), which is linked to a pager system, notifying the Tissue Bank of the death of a registered donor and leaving contact details. In response to the page a member of the Tissue Bank team will immediately call the contact person for further details. An on-call rota system will guarantee that the Tissue Bank will answer every call made to the emergency donor line promptly. Tissue also will be collected from those who have not previously registered when possible, for example, if family members contact the ME/CFS Tissue Bank team at the time of 'imminent or actual death'. This will be subject to the requirement of the Human Tissue Act that 'a decision of [the potential donor] to consent to the activity was in force immediately before he died' c [28]. In these cases, donations will be arranged if the documented agreement of relatives and tissue procurement can be organised within the appropriate time frame. For all donors, tissue donations will not be possible in the following circumstances: (i) if the death has been referred to the coroner and the coroner does not authorise the donation to proceed, or the tissue will not be available in the appropriate time frame, for example due to a prolonged post-mortem interval; and (ii) if it is possible that there is the presence of an infectious disease such as CJD, HIV, MRSA, septicaemia, tuberculosis, or hepatitis B or C. Acquisition and transportation of bodies, tissues and organs to the Tissue Bank Donors who live in the Tissue Bank area will, in the event of their death, have their whole body transported to the local mortuary for the autopsy and retrieval of relevant tissue samples [29]. On all other occasions tissues will be harvested in a mortuary close to the place of death. Tissues will be either collected as fresh samples from the mortuary by a member of the Tissue Bank team or transported to the tissue bank from the mortuary by authorised couriers following an appropriate period of tissue fixation. We aim to collect our cases within 24 hours of death whenever possible with a 5 day maximum post mortem interval (if refrigerated for histology) to ensure the suitability of tissue for the widest possible range of scientific techniques. Nevertheless we appreciate the difficulties in achieving a very rapid collection of samples, particularly in cases where coroner's involvement is required, and therefore will consider the inclusion of cases with longer interval periods from death to retrieval of tissues. Tissue preparation and storage We aim to provide tissue specimens that have been optimally prepared and stored for a variety of techniques as follows: i. Fresh frozen tissue blocks for mRNA, protein extraction and PCR techniques; ii. Fresh frozen tissue for cryosectioning for in situ hybridisation, immunohistochemistry and cell-based adhesion assays; and iii. Formaldehyde fixed and paraffin embedded tissue blocks for histology and immunohistochemistry. For techniques less commonly used, such as ultrastructural studies and cell culture from fresh tissue, arrangements and protocols for tissue provision and collection will be agreed following discussions of requirements with the research investigators concerned. 
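The post-mortem procedures above amount to a small set of decision rules (coroner authorisation, infection screen, post-mortem interval). The sketch below illustrates that logic; the field names and the strictness of the cut-offs are assumptions for illustration only, and the protocol itself explicitly allows case-by-case inclusion of longer post-mortem intervals.

```python
# Illustrative sketch of the donation-eligibility rules described above. Not part
# of the published protocol; cut-offs and field names are assumed for illustration.
from dataclasses import dataclass

EXCLUDING_INFECTIONS = {"CJD", "HIV", "MRSA", "septicaemia", "tuberculosis",
                        "hepatitis B", "hepatitis C"}

@dataclass
class DeathNotification:
    hours_since_death: float
    body_refrigerated: bool
    coroner_referral: bool
    coroner_authorised: bool
    suspected_infections: set

def donation_permitted(n: DeathNotification) -> bool:
    if n.coroner_referral and not n.coroner_authorised:
        return False                          # coroner has not authorised the donation
    if n.suspected_infections & EXCLUDING_INFECTIONS:
        return False                          # possible infectious disease excludes donation
    # Target collection within 24 h; up to 5 days accepted if the body is refrigerated.
    max_interval_hours = 24 * 5 if n.body_refrigerated else 24
    return n.hours_since_death <= max_interval_hours

# Example: refrigerated body collected 3 days after death, no exclusions -> permitted.
print(donation_permitted(DeathNotification(72, True, False, False, set())))  # True
```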
Outline of tissue processing Immediately upon arrival, tissues will be weighed and the pH of tissue and cerebrospinal fluid (CSF) sample measured. A digital photograph of the lateral, inferior and superior views of the intact brain and lateral views of the spinal cord will be taken at this stage. Any gross pathological changes will be documented. When the brain is processed the whole brain is cut in 'half' by a sagittal cut, resulting in half the brain being fixed and half remaining for processing for freezing. In each case at the time of processing a further cut is made through the brainstem to separate the brain stem and cerebellum from the rest of the brain. Then the cerebral hemisphere and the cerebellar hemispheres are sliced and blocked independently. A small piece of tissue (<1 cm 3 ) will be taken for total RNA extraction using the guanidinium thiocyanate and CsCl method [30]. This will be used to determine the degree of mRNA preservation for each brain and given an (RIN) number. Following 6 weeks' fixation, 1 cm thick coronal sections will be sectioned according to international standards [31,32] and digitally photographed before further processing. Blocks from the 'half' brain which is frozen will all be photographed and stored. Hence, there is the potential to retrieve samples retained from the whole or one side of the brain. For long-term storage, tissue blocks will be placed in air-tight containers and stored in −80°C freezers with CO 2 backup and 24 hr monitoring using remote equipment linked to the on-call pager system. Clinical and tissue bank databases Information gained during the recruitment of donors, the harvesting of brains and the subsequent tissue analysis will be kept securely as three datasets linked in a relational password protected database, which will be accessed only by selected members of the tissue bank team. The first dataset will comprise detailed information of past clinical history (taken from follow-up of donors and complemented by retrospective clinical data obtained from the attending physician's post-mortem). The GP and specialist notes will be abstracted and added to this dataset. Each donor will be given a number, which will be translated into a brain tissue (donation) number at autopsy. The second dataset will contain information obtained at autopsy concerning patient details at time of death, macroscopic brain analysis, documentation of slice and block nomenclature. Information obtained from the routine neuropathological screening and further microscopic analysis of tissues will be added to this dataset when it becomes available. The third dataset will be the image database containing digital images of the gross brain appearance (4 images per brain), images of the whole coronal slices (20-30 images per brain), images of the blocked slices (both fixed and frozen; 20-30 images) and microscopic images of the routine pathological screening. This database design will allow secure and fast sharing of information and easy transfer of information between datasets. The databases will be fully searchable using adequate query language and updated as required [33]. Scientific strategy The post-mortem ME/CFS Tissue Bank team will conduct research with tissues collected from donors and controls, mainly investigating evidence of brain and spinal cord (and its dorsal root ganglia) inflammation and infection. 
In addition, tissues will be made available to researchers conducting ethically approved studies in the UK and internationally, and following evaluation and approval by a Steering Committee, whose constituents will include professionals with experience in ME/CFS, pathologists, ethicists, patients, and their representatives. Future research will be dictated by advances in scientific knowledge and according to research groups' interest and expertise. We will develop closer collaboration with other tissue banks in the UK and abroad to facilitate the supply of tissues for both cases and control subjects and to support a wide range of research activities. Governance and ethical issues The operation of the tissue bank The collection and use of human tissue samples for research raises a number of legal and ethical issues. In the past, particular concern has been expressed about the use of post-mortem samples without consent, and in response, legislation has been enacted in the UK to establish a framework for obtaining consent to address this issue [28]. The legislation also establishes systems for licensing tissue banks and for ensuring that the relevant legislation and codes of practice are being adhered to. By using an established tissue bank, the steering committee can be confident that all relevant conditions have been satisfied. Another means of minimising potential concerns from donors and their families is to promote transparency in the methods and procedures used so as to prevent potential ethical problems; doing so may also optimise recruitment of donors and research outputs. One way this has been achieved to date is by involving patients and other relevant stakeholders in all stages of the process, starting with the planning of the Tissue Bank. This participatory approach will be continued with the implementation and evaluation of the Tissue Bank. Standard operating procedures will be based on well established procedures and the best available international guidance. Those contracted to collect or process samples on behalf of the tissue bank staff will be required to complete written agreements detailing the procedures for samples to be collected, initial processing and storage, transport of samples to a central storage facility, and the protocol for tissue examination. Strict procedures will be followed in relation to consent, data protection, and other legal and ethical issues, including Human Tissue Authority licensing. Researchers processing tissue from a donor will be required to ensure that a valid consent covering all proposed uses of tissue has been obtained. Researchers will also be required to keep patient identifiable information confidential as a contractual condition. Electronic sharing of data will be over secure networks, and will, wherever possible, only include coded information on the identity of the donor. Access to these networks will be password protected. All paper records detailing the encryption key will be kept locked on hospital or university premises. Arrangements for follow-up of prospective donors Although there is a risk that prospective donors might find follow-up intrusive, the arrangements for follow-up will be fully documented in the information booklet provided at the outset. At each point of contact, individuals will be reminded of their continuing rights to withdraw should they so wish. They will also be reminded that they can withdraw without compromising or affecting the medical care that might be available to them. 
If a potential donor decides to withdraw from the register of prospective donors, any data collected prior to withdrawal, as well as all retrievable biological samples, will be destroyed. Consent form and status of tissue samples The information and consent forms will make clear to prospective donors that their tissue will be treated as a 'gift' to the Tissue Bank and that they will not retain any rights of ownership over the tissue (including any intellectual property rights that may be generated through research on those samples). The consent form will describe in general terms the scope of the possible research uses and the types of tissue and data that might be collected. It will also include information about how samples and data will be encrypted and the measures taken to protect participants' confidentiality, and on the restricted access of the databases to selected members of the Research and Tissue Bank teams. Where appropriate, it might also refer to samples or data being sent to jurisdictions which have less robust data protection regimes than exist in the UK. Where samples are released to researchers outside the research team, the terms of use will be set out in a licence agreement. This agreement is likely to include that when samples are provided to researchers by the Tissue Bank, it will be on a 'not for profit' basis. However, where appropriate, administrative fees may be charged to cover the costs of collecting, processing, handling, and shipping samples, including extra costs incurred in sending samples outside the UK, where applicable. It will also provide that patient confidentiality must be respected. General and ethical oversight A Steering Group will be formed with members including potential donors with ME/CFS and their representatives, e. g. carers; ME/CFS charity members; other lay-members, such as those from patients and public involvement groups and or others with interest in human research and ethics; experts in pathology, ME/CFS clinical care and research; and lawyers and ethicists with experience in Human Tissue legislation. The group will meet every 4 months to oversee project developments and to deliberate on requests from researchers for tissue provision. The success of the Tissue Bank will be judged on the number of patients and controls recruited into the donor program, number of donations, quality of post-mortem tissues, number of research projects supplied with tissue, number and quality of publications and communications resulting from such research, and feedback from users and researchers as to the level of service from the Tissue Bank. Discussion The Cambridge Brain Bank, situated at Addenbrooke's Hospital (Cambridge University Hospitals NHS Foundation) is the longest running Brain Bank in the UK [34]. It was established in 1974 to enable research into nervous system disorders such as dementia (Alzheimer's, fronto-temporal), motor neurone disease, Huntington's disease, multiple sclerosis and others. Since then, various other brain banks have been created and more recently most collections at Addenbrooke's have been obtained from patients at that hospital and from residents in Cambridgeshire and East Anglia. 
Its experience in tissue management, status as a centre of reference, and full compliance with Human Tissue Authority regulations make it particularly suitable to host the ME/CFS Tissue Bank, which will benefit from its existing well-established infrastructure and resources and also from the infrastructure created for the ME/CFS Disease Register [35] d . Membership of the MRC UK Brain Banks Network will add value to the Tissue Bank and enable it to benefit from and contribute to existing initiatives and proposed developments of the national strategy for the collection of control brain material [27], which is often in short supply. The still poor understanding of the pathophysiology and aetiology of ME/CFS, added to the difficulties of obtaining tissue samples in vivo and the absence of animal models, makes it particularly suited for post-mortem pathology studies. This is especially true in light of recent methodological advances in pathology, genomics and proteomics, which enhance the potential of biomedical research. There has been growing evidence pointing to central nervous and autonomic nervous system dysfunctions and disrupted immunity, including impaired functioning of NK cells and increased levels of proinflammatory cytokines. These abnormalities may be triggered by viral infections and other stressors, and possibly by persistent infection. The pathoaetiology of ME/CFS has been reviewed by Shepherd and Chaudhuri [36]. Nevertheless, a very small number of studies on the neuropathology of ME/CFS have been conducted so far. One of the authors (DGOD) at Addenbrooke's Hospital demonstrated inflammatory changes at the dorsal root ganglia in post-mortem samples of affected patients [37]. The neurosensorial tract changes found are compatible with a central origin of the pain and fatigue experienced by patients. Further evidence for neurological involvement comes from brain pathology case studies [2,20,21], neuroimaging studies [3,4,7-9,38,39] and the many CNS-type symptoms, such as cognitive dysfunction, hyperacusis, photophobia and headache, presented by patients [8,38,40]. However, the abnormalities and mechanisms behind clinical and imaging findings still require elucidation through pathology studies, something that would be enabled by the proposed brain and tissue bank and the generation of a sizable number of donations. ME/CFS poses unique challenges. It appears to be a heterogeneous condition with no confirmatory diagnostic tests or neuropathological markers. Its study has been hampered by the multitude and lack of specificity of diagnostic criteria in use, which are based mainly on reported symptoms. Good clinical data from donors and the linking of pathological with clinical and laboratory data will enable the search for a pathological biomarker at the same time as the development of better diagnostic criteria and sub-grouping of cases according to clinical and pathological findings. Our initiative will help address not only the tremendous gaps in knowledge in ME/CFS, but also the general scarcity of CNS 'control' tissues, specifically through the sharing of tissues within brain tissue bank networks. The outputs will be enhanced by robust methods, and the use of protocols common to other brain and tissue banks.
For optimum results, we will strive to minimise the post-mortem interval, which for deaths occurring within the geographical area covered by the Tissue Bank will be helped by the relationships developed with local coroners/mortuaries and the possibility of quick transfer of whole bodies for immediate processing by a dedicated team. Moreover, for deaths occurring outside the Tissue Bank coverage area, it will be possible for a member of the research team to travel to the site and rapidly transport material to the Tissue Bank laboratory. This will maximise the potential for a wide range of investigations, including molecular studies and those requiring optimum preservation of the biological and chemical nature of tissue. This proposal has benefited from extensive discussions with individuals with ME/CFS and charities working in this field. These discussions helped form the donor programme strategy and research protocol and highlighted areas where further debate was required. Participatory research ensures the desires and needs of patients are addressed. It also optimises the acceptability, appropriateness, effectiveness and overall quality of research [26]. ME/CFS charities are already involved in seeking funding for the long term sustainability of the Tissue Bank once support for its implementation from research councils and the NHS is ensured. Conclusion In conclusion, we have established the need for the structured collection and examination of nervous system human tissue of people who have died with ME/CFS. Based on the experience at Addenbrooke's Hospital and other brain banks, and building on information given by experts and by patients themselves, we have developed a protocol for the first ME/CFS Tissue Bank in the world, including carefully chosen approaches for recruiting and following up donors and for collecting, storing and examining post-mortem tissue samples. This initiative has the potential to revolutionise the understanding of this still poorly recognised disease and greatly help the development of more precise case definitions, diagnostic biomarkers, and treatments. Endnotes a The qualitative study included interviews with 'key informants' and focus group discussions with people with ME/CFS and the results were reviewed in a workshop with a group of experts, including ME/CFS clinicians, researchers, epidemiologists, pathologists, a lawyer and patient representatives. b These patients will be invited to become part of the linked UK ME/CFS Biobank. c The Human Tissue Act, at s3(6)(c), distinguishes between consent for anatomical examination for which a written consent is required and consent for research for which an oral consent or the consent of a 'qualifying relative' is sufficient [28]. d This is an ongoing project aimed at the population wide recruitment of people with ME/CFS for clinical and epidemiological studies and at being a source of cases for future research involvement.
Evaluation of Different Treatments for Appendiceal Abscess in Children
Introduction
Appendicitis is the most common disease requiring emergent abdominal surgery in children [1]. The lifetime risk of developing appendicitis is 8.7% for boys and 6.7% for girls [2]. Despite its high incidence, the diagnosis is often delayed in children [1]. This can partly be explained by children often presenting with more diffuse symptoms compared to the adult population, making the disease more difficult to diagnose. The most common atypical features include absence of fever, and as many as one third of pediatric patients have no pain in the right lower quadrant. The diagnosis might also be delayed owing to the difficulty of carrying out a proper examination of pediatric patients and difficulties in communication [3]. A delayed diagnosis may lead to perforation and abscess formation [1]. Appendicitis exacerbated by perforation or abscess, referred to as complicated appendicitis, continues to be a common occurrence in the younger child. In children less than 4 years of age, more than 50% have a perforated appendicitis at presentation [4]. The perforation rate increases in frequency as the age of the patient decreases and the duration of symptoms lengthens. Perforation leads to an increase in the hospital length of stay and the rate of abscess formation [5,6].
The treatment of an appendicular abscess is still a debatable subject and studies disagree on what strategy to use. Some pediatric surgeons prefer immediate operation [7], whereas others advocate conservative management with interval appendectomy [8,9]. Some authors also support conservative management without late interval appendectomy [10]. The decision on what treatment to use is complicated by differences in presentation. Patients can present with a nearly asymptomatic right lower quadrant mass, or with clinical signs of toxicity or diffuse peritonitis [11]. While the treatment of the latter is straightforward, with the patient proceeding directly to the operating room [12], the optimal treatment of the relatively asymptomatic child with appendicular abscess remains controversial [7-10]. The aim of this study was to evaluate patients with appendicular abscess, collect information on the outcome at a single center to enable benchmarking, and to possibly identify the best treatment algorithm.
Study Design
All pediatric patients (< 18 years of age) treated for appendicitis between January 2010 and August 2014 were retrospectively searched for using ICD-10 procedure codes (JAK00, JEA00, JEA01, JEA10) and ICD-10 diagnostic codes (K35.2, K35.3, K35.8). The diagnosis of appendicular abscess was based on the radiology findings and/or the operative findings. All patients with a walled-off abscess found either during surgery or by radiology were included. The initial decision to operate or perform percutaneous drainage was at the discretion of the attending surgeon, as was the decision to choose non-operative management. The medical records were examined and the following characteristics were registered: age, gender, the time from onset of symptoms to seeking care, presence of diarrhea, fever and general peritonitis; white blood cell count (WBC), absolute neutrophil count (ANC), the level of C-reactive protein (CRP), vital parameters (respiratory rate, oxygen saturation, heart rate, systolic blood pressure); size, location and number of abscesses; presence of an appendicolith, type of treatment used (operation, percutaneous drainage or conservative), treatment failure, length of hospital stay, and complications. Patients were divided into groups based on the type of management (conservative or surgical treatment). Further subgroups were created of the patients undergoing appendectomy, based on when the diagnosis was set (pre- or postoperatively). The groups were compared to each other in three ways: a) the preoperatively diagnosed surgical group to the conservatively treated group; b) the preoperatively diagnosed surgical group to the per-operatively diagnosed surgical group; and c) all surgically managed patients compared to the conservatively treated group.
Definitions
The time from onset of symptoms to seeking care was estimated in hours based on information from the caregivers. Fever was considered present if it was either measured in the emergency room (ER) or if fever above 38 degrees was included in the patient history. The WBC and ANC were analyzed according to age, as were the vital parameters (Table 1). Sepsis was defined as present if two or more of the following criteria were met: a) fever above 38 degrees or temperature less than 36 degrees, b) tachypnea, c) tachycardia, d) leukocytosis. Conservative treatment was defined as management with antibiotics without any surgical intervention such as operation or percutaneous drainage. Patients were considered to have a treatment failure if the abscess did not respond to treatment, or if new abscesses developed during treatment. Complications included intestinal obstruction, formation of pleural fluid, and wound infection. Reoperation was defined as new surgery or drainage. The length of hospital stay was defined as the number of days the patient had a bed in the hospital during the first stay, including days with home permissions and excluding an interval appendectomy.
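The sepsis definition above is a simple "two or more of four criteria" rule, with tachypnea, tachycardia and leukocytosis judged against the age-specific reference limits of Table 1 (not reproduced here). A minimal sketch of that rule, with the age-specific limits passed in as assumed parameters, could look like this:

```python
# Minimal sketch of the sepsis screen described in the Definitions paragraph:
# sepsis is recorded when two or more of the four criteria are met. The
# age-specific upper limits are assumed inputs standing in for Table 1.

def sepsis_present(temp_c: float,
                   resp_rate: float, resp_rate_upper: float,
                   heart_rate: float, heart_rate_upper: float,
                   wbc: float, wbc_upper: float) -> bool:
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,   # a) fever or hypothermia
        resp_rate > resp_rate_upper,       # b) tachypnea (age-adjusted limit)
        heart_rate > heart_rate_upper,     # c) tachycardia (age-adjusted limit)
        wbc > wbc_upper,                   # d) leukocytosis (age-adjusted limit)
    ]
    return sum(criteria) >= 2

# Example with illustrative limits for a school-age child (not from the paper):
print(sepsis_present(38.6, 28, 25, 130, 120, 18.0, 13.5))  # True (3 of 4 criteria met)
```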
Ethical Consideration
This study was performed according to the Helsinki declaration and approved by the Regional Ethical Review Board (registration number 2010/49). The data were anonymized prior to calculations, and are presented in such a way that it is impossible to identify any single patient. Therefore, it was not necessary to obtain approval from the individual patients' guardians. Intention to treat was the main diagnostic strategy and was used for all patients. All evaluations, treatments, and procedures described in this report were standard of care. No protocols were exercised that would have required appropriate informed consent or approval of an institutional review board.
Statistical Consideration
Statistical analyses were performed using SPSS Statistics. Categorical variables were compared using the Fisher exact test. The Mann-Whitney U test was used to compare nonparametric, continuous variables. A p-value of < 0.05 was considered significant.
Study Population
A total of 49 patients were diagnosed with appendicular abscess during the study period. Among them, 28 patients were found to have an abscess during the operation and the remaining 21 patients were diagnosed by means of radiology before the start of treatment. These 21 patients formed the main study population and were further divided into two main groups: the surgical group (N = 8) with patients operated on with appendectomy or with a percutaneous drainage of the abscess; and the conservatively treated group (N = 13) with patients only treated with antibiotics (Figure 1). No patients were excluded from our study. There were no significant differences between the surgical group and the conservatively treated group regarding age or gender (Table 2).
Clinical Presentation
No significant differences were found between the surgical group and the conservatively treated group when comparing preoperative data such as duration of symptoms, leukocytosis, neutrophilia, CRP levels, presence of general peritonitis, diarrhea, sepsis, and hypotension (Table 3). Approximately 50% in both groups had an appendicolith. Further, the abscess characteristics were similar between the two groups, with no significant difference in size, number, or location of the abscesses (Table 4).
Surgeons and Operation Technique
Six surgeons were involved in the treatment of the included children. Since the operation technique can affect the data, this was standardized. Diagnostic laparoscopy is the gold standard when appendicitis is suspected, and is practiced by all the surgeons. However, if an abscess is found during the diagnostic laparoscopy, the operation is converted to open surgery; hence all the patients were operated on with open appendectomy. Regarding the surgical management of the abscess, it was never irrigated with antibiotics added to the irrigation. The drain was not left in as a routine; however, a few patients received a drain.
Antibiotic Usage
All the patients were on the same antibiotic protocol, previously published by our center [13]. They were all treated initially during the same period of time and with the same antibiotics. If complications occurred or the clinical picture revealed a more severe infection/inflammation, the antibiotic treatment was prolonged. Upon discharge, all the patients were prescribed the same oral antibiotics, scheduled for at least 7 days.
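The analysis plan in the Statistical Consideration subsection above (Fisher's exact test for categorical variables, Mann-Whitney U for continuous variables, p < 0.05) can be reproduced outside SPSS. The following is a hedged sketch using SciPy rather than the software actually used in the study, with invented illustrative numbers rather than the study's data:

```python
# Sketch of the analysis plan from the Statistical Consideration subsection,
# implemented with SciPy instead of SPSS. All numbers below are illustrative only.
from scipy import stats

ALPHA = 0.05

# Categorical outcome (e.g. complications) in surgical vs conservative groups:
# rows = group, columns = [complication, no complication]
table = [[3, 5],
         [0, 13]]
_, p_cat = stats.fisher_exact(table)

# Continuous outcome (e.g. length of hospital stay in days) per group:
surgical = [8, 9, 12, 7, 60, 10, 8, 9]
conservative = [5, 4, 6, 5, 7, 4, 5, 6, 5, 4, 6, 5, 7]
p_cont = stats.mannwhitneyu(surgical, conservative, alternative="two-sided").pvalue

print(f"Fisher exact p = {p_cat:.3f}, significant: {p_cat < ALPHA}")
print(f"Mann-Whitney U p = {p_cont:.4f}, significant: {p_cont < ALPHA}")
```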
Postoperative Characteristics
Treatment failure occurred in 25% of the patients in the surgical group and in none of the patients in the conservatively treated group. However, these data did not reach statistical significance (p = 0.133). Complications occurred in three patients; two had a postoperative accumulation of pleural fluid, and one child had intestinal obstruction. These three patients belonged to the surgical group and the difference was significant compared to the conservatively treated patients. Further, the duration of hospital stay was also significantly longer in the surgical group (Table 5).
Elective Appendectomy
Among the patients treated conservatively, 69% underwent an interval appendectomy after a median of 57 days (range 38-154 days). None of the patients that were treated conservatively had a recurrence. All the patients were followed up for more than one year.
Surgical Treatment for Appendiceal Abscess: Comparison of Preoperatively Diagnosed Patients with Patients Diagnosed During Operation
The 28 patients with an abscess diagnosed first during the appendectomy were compared to the eight patients diagnosed before the start of the surgical treatment. There was no difference between the two groups regarding age or gender. The duration of symptoms was significantly shorter in the per-operatively diagnosed group: 96 (range 12-168) hours compared to 108 (range 72-264) hours (p-value = 0.006). The other preoperative data (leukocytosis, neutrophilia, CRP level, presence of general peritonitis, diarrhea, presence of sepsis, and presence of hypotension) did not differ significantly between the two groups. Regarding abscess characteristics, the location of the abscess differed between the two groups. The abscesses were more frequently localized in the right lower quadrant among the per-operatively diagnosed patients (82% compared to 38%, p = 0.02), and a pelvic localization was more common among the preoperatively diagnosed patients (38% compared to 4%, p = 0.03). The complication rate was higher among the patients diagnosed preoperatively (36% vs 4%, p = 0.028). The number of treatment failures and reoperations did not differ significantly between the two groups. The duration of hospital stay was longer among the patients who were diagnosed preoperatively: 8.5 (range 5-60) days compared to 5.5 (range 1-31) days (p-value = 0.01).
Overall Surgically Treated Patients Compared to Conservatively Treated Patients
The patients (N = 36) who had surgical treatment, both pre- and per-operatively diagnosed, were compared to the patients (N = 13) treated conservatively. The duration of symptoms among the surgically treated patients was shorter than among those treated conservatively: median 72 hours (range 12-264) compared to a median of 144 hours (range 72-168) (p = 0.001). There was no treatment failure in the conservatively treated group compared to 22% of the surgically treated patients; however, the data did not reach statistical significance (p = 0.09).
Discussion
We studied the treatment of pediatric patients with appendicular abscesses to evaluate different treatment algorithms. The group of children studied is small. On the other hand, the patients were registered prospectively and treated at one center only, where all information is collected. Among the patients diagnosed before the onset of treatment, there was a significantly poorer outcome in the surgically managed group, with a significantly longer duration of hospital stay and significantly more complications than in the conservatively treated patients. Furthermore, treatment failure seemed to be more common in surgically managed patients than in those conservatively treated, regardless of whether the comparison was with the pre- or the intraoperatively diagnosed surgical group.
We did not experience any treatment failure among the patients treated conservatively, and this is to be compared to a 25% failure rate among the operated patients with preoperatively diagnosed appendiceal abscess, and a 22% failure rate for all the patients treated surgically. This contrasts with the prospective study by Samuel et al., where 11% of the patients did not respond to conservative management [7]. In the study by Gillick et al., as many as 15.8% of the patients did not respond to conservative treatment [12]. In the present study, the surgically treated patients had significantly more complications and a longer duration of hospital stay. This also differs from the study by Samuel et al. [7], who concluded that early surgical intervention was more beneficial than non-operative management since it resulted in a shorter overall length of hospital stay and reduced morbidity. Other studies have shown no difference in the duration of hospital stay [11]. We have, unlike other studies [7,11], not included the interval appendectomy in the total length of hospital stay.
Conservative management is not currently what is being promoted for these children in all countries and regions of the world. There are current data, from much larger populations of patients, that promote operating on these children, with shorter hospitalizations and complications equivalent to the conservative group. However, there are also several studies reporting conservative treatment (with or without interval appendectomy) as an option to appendectomy, with similar results [14-18].
The complication rate of 36% amongst the patients treated with early surgical intervention is similar to that in the study by Erdogan et al. [9], where 26% of the patients who were operated on immediately had complications, compared with none of the patients who were treated conservatively. Another study, by Roach JP et al. [11], showed a complication rate requiring readmission to the hospital of 10% in the operated patients, which was significantly higher than in the conservatively treated children.
In our study, no patients had any recurrence of appendicitis. A study by Svensson et al.
showed a recurrence rate of 2.4-10%, depending on whether the patients who were surgically treated within one month were excluded or not [10]. They had a median follow-up of 5.1 years, whereas our follow-up ranged from 1-3.5 years, or until a scheduled interval appendectomy was performed. Their conclusion was that the incidence of recurrent acute appendicitis was very low after successful non-operative treatment of appendiceal abscesses in children. Therefore, they doubt whether there is a role for interval appendectomy as part of an institutional treatment protocol. In contrast, Erdogan et al. [9] promoted interval appendectomy after conservative treatment; they estimated the risk of recurrence to be 76.2%. Samuel et al. [7] have also concluded that interval appendectomy is recommended after nonsurgical treatment. Gillick et al. [12] advocate elective appendectomy after conservative treatment of appendiceal abscess in children. They performed histological examinations of the specimens they removed at elective appendectomy and found two carcinoid tumors (out of 331 patients) that probably would not have been found so promptly otherwise. We cannot draw any conclusions regarding recurrence after conservatively treated appendiceal abscess because of the low number of patients (N = 13) in the present study, and because 69% of the patients underwent an interval appendectomy.
In the present study, we found no difference in preoperative clinical data or in abscess characteristics between the surgically and the conservatively treated patients. Hence, taking into account that the parameters were retrospectively analyzed, the two groups arrived at the hospital with equally severe conditions. As with other retrospective studies, the results are dependent on accurate coding. The information collected is interpreted through several stages, which can lead to misinterpretation. Prospective studies have an advantage as the information is collected by the examiner. The patients in the present study were not randomized and were treated according to the decision of the attending surgeon. Randomization from the start reduces the risk of unaccounted-for variables affecting the results. A bigger, randomized, prospective study is called for.
In order to avoid bias and skewing of the data by adding the more severely ill patients who failed conservative therapy to the surgical group, the patients who failed conservative therapy were not excluded from the study. Furthermore, the patients who failed conservative therapy and underwent surgery were not added to the surgical group data. Doing so would make the surgical group data worse and promote the conclusion of the superiority of conservative treatment.
We did not look into the progression of the symptoms or the vital parameters after the children had left the emergency room. It would be interesting to conduct a study with an even more thorough preoperative evaluation to determine which parameters influence the surgeon to choose appendectomy instead of conservative treatment. Furthermore, we had a limited number of patients in our study. However, we think that more patients would only have brought more significance to the data; for example, a significant difference in treatment failure.
Conclusion
Conservative management seems to be more beneficial than early surgical intervention in children with appendiceal abscess. The high number of per-operatively discovered appendicular abscesses suggests that preoperative radiological work-up should be used more often to rule out an appendiceal abscess before taking the child to appendectomy. This routine can be implemented in clinical practice. By doing this, conservative treatment could be selected and complications avoided. Larger, prospective studies with randomization of patients are needed.
[Tables not reproduced. Table 1: Reference intervals for vital parameters according to age. Table 2: Patient demographics. Remaining table footnotes: values are given as the absolute number (n) and percentage of patients; CRP: C-reactive protein; a: only 6 patients with data; b: only 11 patients with data; * Mann-Whitney U test, ** Fisher's exact test, two-tailed.]
Small nucleolar RNA SNORA71A promotes epithelial‐mesenchymal transition by maintaining ROCK2 mRNA stability in breast cancer Metastasis is the primary reason of death in patients with cancer. Small nucleolar noncoding RNAs (snoRNAs) are conserved 60–300 nucleotide noncoding RNAs, involved in post‐transcriptional regulation of mRNAs and noncoding RNAs. Despite their essential roles in cancer, the roles of snoRNAs in epithelial‐mesenchymal transition (EMT)‐induced metastasis have not been studied extensively. Here, we used small RNA sequencing to screen for snoRNAs related to EMT and breast cancer metastasis. We found a higher expression of SNORA71A in metastatic breast cancer tissues compared to nonmetastatic samples. Additionally, SNORA71A promoted the proliferation, migration, invasion and EMT of MCF‐7 and MDA‐MB‐231 cells. Mechanistically, SNORA71A elevated mRNA and protein levels of ROCK2, a negative regulator of TGF‐β signaling. Rescue assays showed ROCK2 abrogated the SNORA71A‐mediated increase in proliferation, migration, invasion and EMT. Binding of SNORA71A to mRNA stability regulatory protein G3BP1, increased ROCK2 mRNA half‐life. Furthermore, G3BP1 depletion abolished the SNORA71A‐mediated upregulation of ROCK2. In vivo, SNORA71A overexpression promoted breast tumor growth, and SNORA71A knockdown inhibited breast cancer growth and metastasis. We suggest SNORA71A enhances metastasis of breast cancer by binding to G3BP1 and stabilizing ROCK2. Introduction Breast cancer is one of the most common carcinomas in women worldwide. In 2018, approximately 2.1 million breast cancer cases were diagnosed and caused 626 679 deaths [1]. The mean age of diagnosis for breast cancer is 62, and it is estimated that one out of eight females might develop breast cancer at some point in their lives [2]. Metastatic breast cancer needs to be treated according to the subtypes to prolong life and reduce symptoms [3]. Although significant advances have been made in the treatment of breast cancer, the prognosis of patients with metastasis is still poor. The median overall survival for metastatic triplenegative breast cancer and the other two subtypes (hormone receptor positive and ERBB2 positive) are about 1 and 5 years, respectively [3]. Development of cancer metastasis implicates various mechanisms, including angiogenesis, migration, invasion, and epithelial-mesenchymal transition (EMT) [4]. Therefore, there is an urgent need to investigate the underlying mechanisms of EMT to develop more accurate prognostic markers and effective therapeutic strategies. Epithelial-to-mesenchymal transition indicates the phenotypic conversion of epithelial cells to mesenchymal cells, which is pivotal for invasion and metastasis of cancer cells [5]. This alternation in cell behavior is mediated by key transcription factors, among which transforming growth factor (TGF-b) signaling plays a predominant role [6]. TGF-b induces the phosphorylation activity of SMAD 2/3 to mediate the regulation of target genes on the transcription level, thus promoting EMT, metastasis and tumorigenesis [7,8]. Activation of EMT ultimately leads to the loss of epithelial markers, including E-cadherin and cytokeratin, as well as to an increase of mesenchymal markers such as Vimentin, smooth muscle actin (SMA), and matrixdegrading enzymes. Small nucleolar noncoding RNAs (snoRNAs) are a group of conserved noncoding RNAs with the length of 60-300 nucleotides (nt). 
They are widely distributed in the eukaryotic cell nucleolus and are primarily divided into box C/D snoRNAs and box H/ACA snoRNAs [9]. snoRNAs direct the 2′-O-methylation and pseudouridylation of ribosomal RNA through base-pairing and are involved in post-transcriptional modification of snRNAs, tRNAs, and mRNAs [10,11]. snoRNAs have been shown to play essential roles in cancer. For instance, SCARNA13 facilitates metastasis of hepatocellular carcinoma by regulating SOX9 [12]. SNORA71A increases migratory and invasive capacity in lung carcinoma by mediating the MEK and ERK1/2 phosphorylation levels [13]. H/ACA box small nucleolar RNA 7B has been shown to promote breast cancer [14,15]. However, the role of snoRNAs in EMT progression of breast cancer has not been studied extensively. Here, we aimed to identify and evaluate the role of snoRNAs as potentially suitable therapeutic targets for TGF-β-mediated EMT in breast cancer. We explored EMT-related snoRNAs by small RNA sequencing in TGF-β-stimulated breast cancer cells and control cells. The function of a key snoRNA in EMT was verified by CCK-8, transwell, and western blot analyses, and the mechanisms were investigated by RNA sequencing, real-time PCR, and western blotting. Ethics approval and consent to participate Ethical approval for this study was granted by the Ethical Review Committees of Tongji Medical College, Huazhong University of Science and Technology, China. This study was performed in accordance with the Declaration of Helsinki. Written informed consent was obtained from all the patients. The animal experiments in this study and all procedures involving the handling and treatment of mice were approved by the Ethical Review Committees of Tongji Medical College, Huazhong University of Science and Technology. All the experiments were performed according to the guidelines of the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Human tissues Breast cancer tissues and adjacent healthy tissues were obtained following curative surgical resections from 39 patients with breast cancer at the hospital. The patient information is shown in Table S1. Plasmid construction and cell transfection The siRNA fragments targeting SNORA71A were synthesized by GenePharma (Shanghai, China), and a scramble siRNA was used as a negative control (NC). SNORA71A was overexpressed by cloning its full-length sequence into the pLVX-EGFP-IRES-Puro vector, using EcoRI (GAATTC) and XbaI (TCTAGA) cloning sites. The ROCK2 overexpression plasmid was obtained by amplifying its complete sequence and cloning it into the overexpression vector pcDNA3.1. For cell transfection, cells were seeded in 6-well plates at a density of 3 × 10⁵ cells/well, reaching 80-90% confluence after 24 h. Next, cells were cultured with fresh medium and transfected with siRNA fragments or overexpression plasmids using Lipofectamine™ 2000 (Invitrogen, Carlsbad, CA, USA). The siRNA sequences are shown in Table S2. RNA isolation Total RNA of cells and tissues was extracted via the TRIzol method (Invitrogen). The quality and quantity of total RNA were assessed on a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Waltham, MA, USA). RNA samples with A260/A280 > 1.9 were used for real-time PCR and cDNA library construction. snoRNA library preparation and sequencing TGF-β-treated MCF-7 and MDA-MB-231 cells and the corresponding control cells were subjected to snoRNA sequencing.
snoRNA libraries were constructed with the NEB Small RNA Library Prep kit (cat. no. E7560S; New England Biolabs, Inc., Ipswich, MA, USA). Briefly, total RNA was ligated with 3′- and 5′-adapters, and cDNA was synthesized via PCR. DNA fragments of 180-420 bp (including 120 bp of adapters) were extracted with the QIAquick gel extraction kit (Qiagen, Valencia, CA, USA). The four libraries were sequenced on an Illumina HiSeq 2500 (Illumina, San Diego, CA, USA) at Ying Biotech (http://www.yingbio.com). mRNA library construction and sequencing mRNA sequencing was performed on SNORA71A-overexpressing MDA-MB-231 cells and negative control cells. Libraries were constructed with the VAHTS™ Total RNA-seq (H/M/R) Library Prep Kit (#NR603; Vazyme Biotech, Nanjing, China) following the user guide. In brief, mRNA was extracted using oligo-dT magnetic beads and fragmented under divalent cation and high-temperature conditions. First-strand cDNA was synthesized by reverse transcriptase using random primers. The cDNA libraries were constructed via PCR and purified using AMPure beads (Beckman, Brea, CA, USA). The quality of the DNA fragments was measured on an Agilent 2200 (Agilent, Santa Rosa, CA, USA). Finally, RNA sequencing was performed using an Illumina HiSeq 2500 platform. Sequencing data analysis Short reads (< 15 nt), adaptor sequences, and low-quality sequences (> 50% of bases with Q scores ≤ 10) were removed from the raw sequencing data using FastQC (v0.11.7) (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). The clean data were mapped to the human genomic reference (GRCh38), and snoRNAs were identified by mapping to the snoRNA database (RNAcentral V18). Differentially expressed snoRNAs between the TGF-β and control groups were identified with DEGseq in MCF-7 and MDA-MB-231 cells, respectively, with |fold change| > 1.5 and FDR < 0.05 considered significant. Differentially expressed mRNAs were identified based on |fold change| > 2 and FDR < 0.05. Functions of the differentially expressed genes were classified using the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. Heatmaps were generated from the differentially expressed genes. GSEA was performed using all identified genes (including genes that did not differ significantly) for enrichment analysis. Analysis of expression data from databases The expression data of snoRNAs from 1077 breast cancer samples and 105 normal samples were downloaded from The Cancer Genome Atlas (TCGA) database (http://bioinfo.life.hust.edu.cn/SNORic/download/). Among the cancer samples, 1076 had clinical information, which was used for survival analysis. We used the median score in the training set as the cutoff to divide the data into low-risk and high-risk groups. The Kaplan-Meier (KM) and log-rank methods were used to compare the survival rate between the low- and high-risk groups via the R 'survival' package (version 3.5.2). All data were publicly available and were downloaded for research purposes. Cell proliferation assay The Cell Counting Kit-8 (CCK-8) assay was used to evaluate cell proliferation. Cells were seeded in 96-well plates at 3 × 10³ cells/well. Cells were transfected with overexpression plasmids or siRNA fragments and cultivated for 24, 48, and 72 h, respectively, before the addition of 10 µL CCK-8 reagent (Dojindo, Kumamoto, Japan) and further incubation for 3 h at 37°C.
The proliferation potential was evaluated by determining the OD value at 450 nm using a microplate reader (Infinite M1000; Tecan, Männedorf, Switzerland). Cell cycle analysis Breast cancer cells were starved for 24 h and then cultured in fresh medium containing 10% FBS for 24 h. We then collected the breast cancer cells and fixed them in 75% ethanol at 4°C overnight. Cells were resuspended and stained in propidium iodide (PI)/RNase staining buffer (BD Biosciences, San Jose, CA, USA) in the dark for 15 min, and a flow cytometer (BD Biosciences) was used to analyze the cell cycle. Cell migration and invasion assays Cell migration assays were performed using 0.8-µm 24-well chambers (353097; Falcon, Corning), and invasion assays were performed with BioCoat™ Matrigel® 0.8-µm 24-well chambers (354480; Corning). A total of 700 µL of medium containing 10% serum was added to the lower chamber, and 500 µL of cell suspension was added to the upper chamber and cultured for 24 h. The liquid in the upper chamber was removed, and the cells attached to the bottom of the upper chamber were stained using 800 µL crystal violet dye (Sigma, St. Louis, MO, USA) for 30 min at 20°C. Results were evaluated in three random fields under the microscope. Ethynyldeoxyuridine analysis An ethynyldeoxyuridine (EdU) detection kit (RiboBio, Guangzhou, China) was used to assess cell proliferation according to the manufacturer's instructions. Cells were cultured in 96-well plates at 5 × 10³ cells/well. Ten microliters of EdU labeling medium was added to the 96-well plates, which were then incubated at 37°C under 5% CO₂ for 2 h. After treatment with 4% paraformaldehyde (PFA; Sigma) and 0.5% Triton X-100 (Sigma), the cells were stained with the anti-EdU working solution and Hoechst 33342 (Sigma). Subsequently, the cells were visualized using a fluorescence microscope (Olympus, Tokyo, Japan). The EdU incorporation rate was calculated as the ratio of the number of EdU-positive cells (green cells) to the total number of Hoechst 33342-positive cells (blue cells). RNA pull-down assay The SNORA71A probe was synthesized by in vitro transcription using the complete cDNA sequence of SNORA71A as a template, via a T7 in vitro transcription kit (Thermo Fisher Scientific, Waltham, MA, USA). The antisense sequence of SNORA71A was used as a negative control probe. The Magnetic RNA-Protein Pull-Down Kit (Thermo Fisher Scientific) was used for RNA pull-down according to the manufacturer's instructions. MDA-MB-231 cell lysates were incubated with purified biotinylated transcripts for 1 h at 25°C. The biotin-coupled RNA complexes were isolated with streptavidin agarose beads (Invitrogen). The pellets were washed and then boiled with loading buffer, and the pull-down material was analyzed. Mass spectrometry (MS) The specific protein bands were excised and subjected to washing, decolorization, and dehydration. Samples were supplemented with 10 µL enzymatic hydrolysate for 30 min and 20 µL enzymatic hydrolysate cover solution, and subjected to enzymolysis in a water bath at 37°C for 16 h. The dried polypeptide sample was redissolved in Nano-HPLC Buffer A; the C18 column was activated using 40 µL methanol and equilibrated using 40 µL Nano-HPLC Buffer A, and the sample was desalted using 40 µL Nano-HPLC Buffer A and washed with 40 µL Nano-HPLC Buffer B. The enzymatic hydrolysis products were separated by high-performance liquid chromatography on an Agilent liquid chromatograph and analyzed by mass spectrometry on a Q-Exactive HF mass spectrometer (Thermo Scientific).
The data were processed with Proteome Discoverer software (version 2.1; Thermo Fisher). RNA immunoprecipitation assay The Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore, Bedford, MA, USA) was used for the RNA immunoprecipitation (RIP) assay according to the manufacturer's protocol. Complete RIP lysis buffer was used to lyse MDA-MB-231 cells, which were then centrifuged at 16 000 g for 10 min. Cell lysates were incubated with magnetic beads conjugated with anti-G3BP1 (1:20, ab181150; Abcam, Cambridge, UK) or control anti-immunoglobulin G (IgG) antibody. Beads were washed three times with RIP buffer and once with PBS buffer. The immunoprecipitated RNA was purified and subjected to quantitative real-time PCR. Actinomycin D experiment Actinomycin D was used to examine the effect of SNORA71A on the stability of ROCK2 mRNA. In brief, Actinomycin D (5 µg·mL⁻¹, A4262; Sigma) was added to MDA-MB-231 cells, and real-time PCR was performed at 0, 3, 6, 9, and 12 h after Actinomycin D treatment. Real-time PCR Complementary DNA was synthesized from the RNA using a reverse transcription kit (Thermo Bio, Waltham, MA, USA). PCR was performed using a 2× Master Mix kit (Roche, Basel, Switzerland) following the manufacturer's instructions and assessed on an ABI Q6 (Applied Biosystems Inc., Carlsbad, CA, USA) thermocycler. The amplification program was 95°C for 10 min, followed by 40 cycles at 95°C for 15 s and 60°C for 60 s. The primer sequences are shown in Table S2. Expression levels were calculated using the 2^(−ΔΔCt) method. The expression level of GAPDH was used as the reference gene for normalization. Immunofluorescence assay Cells were seeded on coverslips and fixed with 4% paraformaldehyde at 20°C for 20 min. The attached cells were washed three times with PBS and permeabilized with 0.1% Triton X-100 for 3 min. The coverslips were blocked with blocking buffer for 1 h and incubated in the presence of primary antibodies against E-cadherin (1:1000; Cell Signaling Technology) and Vimentin (1:1000; Cell Signaling Technology) for 2 h. Cells were then treated with a fluorescently labeled secondary antibody (Abcam, London, UK) for 30 min. Cell nuclei were visualized by staining with DAPI for 5 min. Images were obtained using a fluorescence microscope. RNA-protein double labeling by FISH and immunofluorescence assay Before the immunofluorescence assay, cells were incubated with prehybridization solution at 37°C for 1 h. After removing the prehybridization solution, the hybridization solution containing the SNORA71A or ROCK2 probe was added and hybridized overnight. After washing with SSC, cells were incubated with the primary anti-G3BP1 antibody (1:500; ab181150; Abcam) overnight at 4°C and then washed with PBS for 3 × 5 min; the corresponding secondary antibody was added and incubated at room temperature for 50 min, followed by washing with PBS for 3 × 5 min. Nuclei were stained with DAPI for 8 min in the dark. Images were obtained using a fluorescence microscope. Xenograft assay MDA-MB-231 cells stably transduced with lentiviral vectors carrying sh-SNORA71A, sh-NC, oe-SNORA71A, or empty vectors were used for the establishment of the xenograft mouse model. Female 5- to 6-week-old BALB/c nude mice, purchased from Wuhan Cloud Clone Animal Co., Ltd (Wuhan, China), were used in this study. Mice were housed at regular housing temperatures, under a constant 12-h light/dark cycle with food and water available ad libitum.
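The Actinomycin D chase described above produces relative ROCK2 mRNA levels at 0, 3, 6, 9 and 12 h. As a rough illustration of how such a time course can be turned into a half-life estimate, the sketch below fits a first-order decay model to hypothetical values; the numbers, the log-linear fitting approach and the use of NumPy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical relative ROCK2 mRNA levels (fraction of the 0 h value) measured
# by real-time PCR after Actinomycin D treatment; real values would come from
# the 2^(-ddCt) analysis described in the methods.
time_h = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
rel_mrna = np.array([1.00, 0.78, 0.60, 0.47, 0.36])

# Assume first-order decay, N(t) = N0 * exp(-k * t), so ln N(t) is linear in t.
slope, intercept = np.polyfit(time_h, np.log(rel_mrna), deg=1)
k = -slope                  # decay constant (per hour)
half_life = np.log(2) / k   # t1/2 = ln(2) / k

print(f"decay constant k = {k:.3f} per hour")
print(f"estimated ROCK2 mRNA half-life = {half_life:.1f} h")
```

Comparing the fitted half-life between SNORA71A-overexpressing (or G3BP1-silenced) cells and controls would correspond, in outline, to the stability comparison reported later in the results.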
For overexpression, mice were randomly divided into two groups (n = 5): the SNORA71A group and the NC group; for knockdown, mice were randomly divided into two groups (n = 6): the sh-SNORA71A group and the sh-NC group. Cells were resuspended in PBS at 6 × 10⁶ cells per 100 µL, and 100 µL of cell suspension was injected subcutaneously into each mouse. The experiments lasted for 3 weeks, and the tumor size was measured every 3 days. Mice were euthanized by CO₂ asphyxiation. The tumor volume was calculated as width² × length/2. Receiver operating characteristic curve analysis was used to evaluate the specificity and sensitivity of the snoRNA expression-based diagnostic signature, and P < 0.05 represented statistical significance. Statistical analysis Statistical analysis was carried out with GraphPad Prism 6 (GraphPad Software, La Jolla, CA, USA). Statistical comparison between two groups was performed by Student's t-test (two-tailed), and one-way ANOVA with Tukey's post hoc test was applied for comparisons between more than two groups. All cell experiments were performed in three biological replicates. Significance was identified at P < 0.05, and data are shown as mean ± SD in the figures. Expression profile of snoRNAs in breast cancer EMT models To establish an EMT model, breast cancer cells were stimulated with TGF-β, which is a crucial molecule for the EMT process [15,16]. TGF-β stimulation resulted in the transition from an epithelial to a mesenchymal-like phenotype in both MCF-7 and MDA-MB-231 cells (Fig. S1A). Moreover, TGF-β increased the mRNA and protein levels of Vimentin while decreasing those of E-cadherin in MCF-7 and MDA-MB-231 cells (Fig. S1B-E). To obtain the expression pattern of snoRNAs, the EMT and control cells were sequenced, generating 13-21 million clean reads (Table S3). On average, 4.8 million reads per library were mapped to snoRNAs, with a 29.52% mapped rate and a 17.3% uniquely mapped rate (Table S4). The length of the snoRNAs primarily ranged from 55 to 85 bp, with a peak at 71 bp (Fig. S2A). Moreover, 193 snoRNAs were identified, including 66 H/ACA box snoRNAs, 101 C/D box snoRNAs, and 26 unknown snoRNAs. The abundance of C/D box snoRNAs was the highest, accounting for 80.04% of the total counts (Fig. S2B). SNORA71A is upregulated in metastatic breast cancer tissues and cells A total of 20 differentially expressed snoRNAs were identified between the TGF-β and control groups, among which four snoRNAs (including SNORA71A) were consistently upregulated in both cell lines and two were commonly downregulated (Fig. 1A, Table S5). We verified the six snoRNAs that were differentially expressed in both cell lines and found that the real-time PCR data supported the sequencing data (Fig. S2C). To identify snoRNAs important for breast cancer metastasis, we further analyzed these snoRNAs using the TCGA database (http://bioinfo.life.hust.edu.cn/SNORic/download/). We found SNORA71A was also significantly upregulated in breast cancer tumor tissues compared to normal tissues (Fig. 1B). Surprisingly, high expression of SNORA71A correlated significantly with poor prognosis of patients with breast cancer (P = 0.025, Fig. 1C). Real-time PCR analysis confirmed the upregulated expression of SNORA71A in TGF-β-treated MCF-7 and MDA-MB-231 cells compared with their corresponding parental cells (Fig. 1D). Furthermore, real-time PCR showed that the expression of SNORA71A was significantly increased in breast cancer tissues compared to the adjacent peritumoral tissues (Fig. 1E).
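Relative expression in these qPCR experiments is calculated with the 2^(−ΔΔCt) method described in the real-time PCR section, with GAPDH as the reference gene. A minimal worked example is sketched below; the Ct values and sample names are hypothetical and only illustrate the arithmetic.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression calculation.
# All Ct values below are hypothetical and for illustration only.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref              # normalise to GAPDH in the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalise to GAPDH in the control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: SNORA71A in a tumour sample versus adjacent peritumoral tissue.
fold_change = relative_expression(ct_target=24.1, ct_ref=18.0,
                                  ct_target_ctrl=26.5, ct_ref_ctrl=18.2)
print(f"relative SNORA71A expression: {fold_change:.2f}-fold")
```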
Receiver operating characteristic (ROC) curve analysis showed that SNORA71A might serve as a biomarker for breast cancer, with an area under the curve (AUC) of 0.72 (P = 0.0006) (Fig. 1F). At the cutoff value, the sensitivity and specificity of breast cancer diagnosis were 76.92% and 61.54%, respectively (Fig. 1F). We further focused on the role of SNORA71A in breast cancer metastasis. SNORA71A was upregulated in tumor tissues from patients with metastasis compared to those without metastasis (Fig. 1G). The ROC curve indicated that tissue SNORA71A may serve as a biomarker for metastasis, with an AUC of 0.86 (P = 0.0009) (Fig. 1H). At the cutoff value, the sensitivity and specificity of metastasis diagnosis were 60% and 92.86%, respectively. When patients with breast cancer were divided into groups with high (n = 20) or low (n = 19) SNORA71A expression according to the real-time PCR analysis, a significant correlation between SNORA71A expression and tumor stage, ER status and lymph node metastasis was observed (Table S1). In contrast, SNORA71A expression was not significantly associated with the clinical parameters of age, tumor grade, PR status, or HER-2 expression status (Table S1). Furthermore, the expression of SNORA71A was evaluated in normal breast epithelial cells and in breast cancer cells with different metastatic potential. SNORA71A expression was increased in the breast cancer cells compared to the normal breast epithelial cells (Fig. 1I). Importantly, SNORA71A expression was remarkably higher in the highly metastatic breast cancer cell line MDA-MB-231 than in the less metastatic breast cancer cell line MCF-7. SNORA71A promotes the EMT process of breast cancer cells To evaluate the role of SNORA71A in breast cancer metastasis, we overexpressed and silenced SNORA71A in breast cancer cells. Overexpression of SNORA71A decreased the expression of the epithelial marker E-cadherin and increased that of the mesenchymal marker Vimentin in MCF-7 cells (Fig. 2G). Conversely, downregulation of SNORA71A increased E-cadherin expression and decreased Vimentin expression in MDA-MB-231 cells (Fig. 2H). Similarly, western blotting revealed that SNORA71A suppressed E-cadherin expression and promoted Vimentin expression (Fig. 2I and Fig. S3D). Moreover, SNORA71A suppressed cell apoptosis in MCF-7 and MDA-MB-231 cells (Fig. S4). The EdU assay verified that SNORA71A promoted the proliferation of breast cancer cells (Fig. S5A). However, upregulation or downregulation of SNORA71A had no significant effect on the cell cycle (Fig. S5B). Since SNORA71A affects proliferation, to prevent confounding effects the migration and invasion experiments were also conducted in the presence of aphidicolin (final concentration 1 mg·L⁻¹). The results showed that, in the presence of aphidicolin, SNORA71A still promoted the migration and invasion ability of breast cancer cells (Fig. S6). Additionally, another siRNA was used to verify the promoting effect of SNORA71A on migration, invasion, and EMT (Fig. S7). We also tested whether SNORA71A expression is necessary for TGF-β-mediated EMT, by knocking down SNORA71A and treating cells with TGF-β. The results showed that deficiency of SNORA71A significantly abrogated the promoting effect of TGF-β on EMT (Fig. S8). SNORA71A upregulates ROCK2 in the TGF-β signaling pathway To investigate the genes and pathways involved in SNORA71A-mediated EMT, we performed mRNA-seq in SNORA71A-overexpressing cells and control cells. GSEA using the complete set of identified genes revealed that TGF-β signaling was the most enriched pathway (Fig. 3A).
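The ROC-based biomarker evaluation reported above (AUC, cutoff, sensitivity and specificity) follows a standard recipe; a hedged sketch is given below using hypothetical expression values and labels, with scikit-learn chosen purely as one convenient implementation and Youden's index as one common way to pick the cutoff.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: 1 = tumour (or metastatic) sample, 0 = control sample,
# with a relative SNORA71A expression value per sample.
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
expression = np.array([3.2, 2.7, 1.9, 2.4, 1.1, 0.8, 1.4, 0.6, 1.0, 0.9])

auc = roc_auc_score(labels, expression)
fpr, tpr, thresholds = roc_curve(labels, expression)

# One common cutoff choice: maximise Youden's index (sensitivity + specificity - 1).
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}")
print(f"cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```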
Among the genes enriched in TGF-β signaling, ROCK2 was the most significantly enriched. Additionally, genes including ROCK1, Smad3, MYC and MAPK1, which are generally known to function in neoplastic processes, were also significantly enriched in TGF-β signaling. To investigate whether SNORA71A is involved in TGF-β-mediated EMT by regulating these genes, we verified their expression by real-time PCR in SNORA71A-overexpressing cells. The results revealed that high expression of SNORA71A significantly increased the mRNA and protein levels of ROCK2 (Fig. 3B,C). Similarly, silencing of SNORA71A significantly decreased the expression of ROCK2 (Fig. 3D). To clearly establish the importance of SNORA71A in the induction of ROCK2 during TGF-β stimulation, cells were stimulated with TGF-β with or without siSNORA71A, and ROCK2 levels were measured. The results showed that SNORA71A significantly induced ROCK2 during TGF-β stimulation in MCF-7 and MDA-MB-231 cells (Fig. 3E,F). Furthermore, using the Kaplan-Meier Plotter and TIMER 2.0 databases, we found that patients with high expression of ROCK2 had lower recurrence-free survival (RFS) and overall survival (OS) times compared to those with low expression of ROCK2 (Fig. 3G,H). Real-time PCR showed that ROCK2 was increased in breast cancer tissues compared to adjacent normal tissues and showed a significantly positive correlation with SNORA71A expression (Fig. 3I,J). SNORA71A controls breast cancer EMT via ROCK2 To investigate whether ROCK2 mediated the function of SNORA71A, we first performed the invasion and migration experiments with ROCK2 overexpression only. The results showed that ROCK2 was successfully overexpressed using the pcDNA3.1 vector in MDA-MB-231 cells (Fig. 4A), and that ROCK2 itself could promote the migration and invasion of breast cancer cells (Fig. S9). Although SNORA71A silencing significantly inhibited the proliferation of MDA-MB-231 cells, high levels of ROCK2 remarkably restored the proliferative potential (Fig. 4B, Fig. S10). Upon SNORA71A deficiency, the migration and invasion of MDA-MB-231 cells were remarkably inhibited, while overexpression of ROCK2 significantly abrogated this effect (Fig. 4C,D). Importantly, overexpression of ROCK2 significantly reversed the increase in E-cadherin and the decrease in Vimentin induced by SNORA71A deficiency (Fig. 4E,F). Furthermore, we overexpressed SNORA71A and knocked down ROCK2 to further verify that SNORA71A controls breast cancer EMT via ROCK2. We observed that ROCK2 knockdown significantly blocked the ability of SNORA71A to enhance the proliferation, migration, invasion and EMT of MDA-MB-231 cells (Fig. S11). These data indicate that SNORA71A might control EMT progression of breast cancer cells, partly by regulating ROCK2. SNORA71A upregulates ROCK2 via the mRNA stability regulatory protein G3BP1 To determine the mechanism by which SNORA71A mediates the expression of ROCK2, we performed RNA pull-down and mass spectrometry assays to uncover the binding proteins of SNORA71A. Compared to the control antisense probe, SNORA71A specifically bound to histones, translation initiation factors, and several RNA-binding proteins, such as G3BP1 (Fig. 5A, Table S6). The binding of SNORA71A to G3BP1 was also verified by western blotting (Fig. 5B). Based on the eCLIP-seq database, we found that G3BP1, which has been shown to regulate mRNA stability [17,18], significantly bound to the mRNA of ROCK2 (Fig. 5C).
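The expression correlations reported here (ROCK2 versus SNORA71A in patient tissues, and G3BP1 versus ROCK2 in the database analysis that follows) are simple pairwise correlations; a minimal sketch with hypothetical paired expression values is shown below, using Pearson's r as one reasonable choice of statistic.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired relative expression values for the same tumour samples,
# e.g. from the real-time PCR measurements described in the methods.
snora71a = np.array([1.2, 2.5, 3.1, 0.8, 1.9, 2.8, 3.6, 1.1])
rock2 = np.array([0.9, 2.1, 2.8, 1.0, 1.7, 2.5, 3.2, 1.3])

r, p_value = pearsonr(snora71a, rock2)
print(f"Pearson r = {r:.2f}, P = {p_value:.4f}")
```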
Moreover, the GEPIA database showed that the expression of G3BP1 had a significant positive correlation with the expression of ROCK2 in breast cancer (Fig. 5D). Therefore, we further investigated whether SNORA71A mediates the binding of G3BP1 to ROCK2, thus regulating the mRNA stability of ROCK2. RIP-PCR showed that G3BP1 could bind to the mRNA of ROCK2, while SNORA71A silencing significantly inhibited this binding (Fig. 5E). Interestingly, deficiency of G3BP1 strongly decreased the mRNA level of ROCK2 in MDA-MB-231 cells (Fig. 5F-H). We next used Actinomycin D to examine the effect of SNORA71A on the stability of ROCK2 mRNA. As expected, overexpression of SNORA71A remarkably increased the mRNA stability of ROCK2, while silencing of G3BP1 showed the opposite effect (Fig. 5I). A rescue assay showed that overexpression of SNORA71A increased the mRNA and protein levels of ROCK2, whereas G3BP1 silencing remarkably abrogated this regulatory function of SNORA71A (Fig. 5J,K). FISH and immunofluorescence assays showed that SNORA71A was mainly located in the cytoplasm and colocalized with G3BP1 protein (Fig. 6). Moreover, ROCK2 mRNA and G3BP1 protein were also colocalized in the cytoplasm; overexpression of SNORA71A and ROCK2 increased the signals of SNORA71A and ROCK2 but had no effect on their localization. Taken together, SNORA71A might recruit the mRNA stability-regulating protein G3BP1 to bind to the mRNA of ROCK2, thus enhancing the mRNA stability of ROCK2. SNORA71A promotes tumor growth in vivo To verify the carcinogenic role of SNORA71A in vivo, we injected mice with SNORA71A-overexpressing MDA-MB-231 cells (Fig. 7A). Tumor weight and tumor volume were significantly increased in the SNORA71A-overexpressing mice compared to the control mice (Fig. 7B,C). Notably, the mRNA levels of ROCK2 were significantly increased in the tumor tissues of mice injected with SNORA71A-overexpressing MDA-MB-231 cells compared to those in the control mice (Fig. 7D). High expression of SNORA71A significantly suppressed E-cadherin and increased Vimentin in the tumor tissues of mice (Fig. 7E). Conversely, knockdown of SNORA71A had the opposite effect on tumor size and on the expression of ROCK2 and EMT markers (Fig. 7F-J). We next investigated whether knockdown of SNORA71A affected migration and invasion in vivo. A lung metastasis model was established by tail vein injection of sh-NC and sh-SNORA71A MDA-MB-231 cells, and images of lung tissues were taken after 5 weeks. Knockdown of SNORA71A notably decreased the number of metastatic nodules of breast cancer cells in vivo (Fig. 7K). Discussion In cancer, malignant cells can escape the primary tumor through EMT, invade surrounding tissues, and colonize distant sites through the blood or lymph, resulting in metastasis [19]. The expression profiles and roles of snoRNAs in the EMT program remain unclear. In the present study, we first revealed the snoRNA expression profiles in EMT-activated breast cancer cells and identified a snoRNA that functions as a positive regulator of EMT by upregulating ROCK2 expression in the TGF-β signaling pathway. Emerging evidence indicates that snoRNAs play a pivotal role in multiple physiological processes and diseases, including cancer. For example, SNORD50A and SNORD50B inhibit tumorigenesis by directly binding and inhibiting K-Ras in human cancer [20]. SNORD89 enhances tumorigenesis by mediating the Notch1/c-Myc pathway in patients with ovarian cancer [21].
SNORD78 functions as an oncogene by increasing the proliferation of non-small cell lung cancer cells via regulation of the G0/G1 cell cycle [22]. Interestingly, the expression levels of C/D box snoRNAs are remarkably related to the frequency of leukemia stem cells in patients with primary acute myelogenous leukemia [11]. In our study, we observed that SNORA71A was upregulated in metastatic tissues and in MDA-MB-231 cells, which have high metastatic potential, and promoted the proliferation, migration, invasion and EMT of breast cancer cells. Similarly, SNORA71A drives proliferation, invasion, and migration of lung tumors by regulating the phosphorylation of MEK and ERK1/2 [13]. These data show that SNORA71A promotes EMT development and might act as a therapeutic target for metastasis in breast cancer. snoRNAs have been implicated in multiple regulatory mechanisms, including rRNA processing, RNA splicing, translation regulation, and the oxidative stress response. For instance, snoRNA HBII-52 targets a silencing element of the exon via complementary base-pairing and mediates the alternative splicing of 5-HT2CR [23,24]. snoRNAs preferentially bind and interact with the DNA-binding domain of PARP-1 and stimulate the catalytic activity of PARP-1 to promote cell proliferation [25]. The most classic mechanism of snoRNA activity is control of rRNA processing and biogenesis by guiding modifications at specific rRNA positions [26]. In addition to guiding rRNA modification, snoRNAs also promote 2′-O-methylation of mRNA, which increases mRNA expression while inhibiting translation [10]. In this study, we found that SNORA71A overexpression elevates ROCK2, which can rescue the EMT phenotypes mediated by SNORA71A. ROCK2 is a member of the TGF-β signaling pathway and has been widely reported to directly regulate the EMT program in cancer [27-30]. Therefore, we speculate that SNORA71A is involved in TGF-β-induced EMT via ROCK2. The mechanism by which SNORA71A regulates ROCK2 may involve the aforementioned snoRNA mechanisms. For instance, SNORA71A may regulate the processing of ROCK2-related ribosomes or directly modulate ROCK2 mRNA modification, processing, or stability [10,31]. We excluded regulation of ROCK2 by SNORA71A through microRNA-like functions [32], given its activating effect on ROCK2. We found that SNORA71A can bind to the RNA-binding protein G3BP1 and promote the binding of G3BP1 to ROCK2 mRNA. Moreover, silencing of G3BP1 abolished the upregulatory effect of SNORA71A on ROCK2. Previous studies have shown that G3BP1 mediates mRNA stability [17,18]. Moreover, G3BP1 has been implicated in the TGF-β/Smad and p53 signaling pathways and contributes to tumor progression and metastasis [33,34]. This evidence supports a mechanism whereby SNORA71A recruits G3BP1 to the mRNA of ROCK2, thus increasing the mRNA stability of ROCK2 and thereby promoting EMT development in breast cancer through TGF-β signaling (Fig. 8). Interestingly, we also found that SNORA71A could bind to VIM by MS. Previous studies have reported that RNAs, such as circular RNAs [35], can directly bind to VIM. At present, there are no literature reports that snoRNAs can bind to VIM. We speculate that SNORA71A may also promote EMT by directly regulating VIM protein. Nevertheless, the detailed mechanisms of SNORA71A activity in metastasis development need to be further investigated. In the present study, TGF-β dramatically activated the expression of SNORA71A.
There are many transcription factors downstream of TGF-β, including Smad3, Sp1, and Myc. It has been reported that TGF-β can activate the expression of noncoding RNAs; for instance, TGF-β activates the lncRNA LINC00115, which is a critical regulator of glioma stem-like cell tumorigenicity [36]. We hypothesized that TGF-β may promote the transcription of SNORA71A through these downstream transcription factors. However, future research will be needed to clarify how TGF-β induces SNORA71A expression. Molecular biomarkers provide an effective method for the early detection of breast cancer and contribute to personalized treatment for patients. Currently, multiple biomarkers have been developed and are used as routine prognostic markers to identify cancer types and guide treatment in breast cancer, for instance, TP53 mutation, immune biomarkers (programmed death-ligand 1, PD-L1), breast cancer susceptibility gene 1 or 2 (BRCA1/2) and PI3K/AKT/mTOR [37]. However, the incidence of recurrence, distant organ metastasis and breast cancer-related death after treatment remains high. Therefore, it is urgent to find new biomarkers and molecular therapeutic targets. Noncoding RNAs have been reported as diagnostic markers for breast cancer [38,39], but the role of snoRNAs in breast cancer remains largely unknown. We showed that SNORA71A could distinguish between normal samples and tumor samples, as well as between patients with nonmetastatic breast cancer and patients with metastasis, indicating that SNORA71A might act as a novel biomarker for breast cancer. Conclusion In summary, we demonstrated that SNORA71A was upregulated in breast cancer tissues and was associated with poor prognosis in patients with breast cancer. SNORA71A promotes tumor growth and metastasis in breast cancer. Moreover, SNORA71A increases the mRNA stability of ROCK2 via binding to G3BP1. Chemically modified anti-SNORA71A agents that suppress the metastasis of breast cancer may have therapeutic potential and represent a novel strategy for treatment of metastasizing cancers. Supporting information Additional supporting information may be found online in the Supporting Information section at the end of the article. Fig. S1. Establishment of the EMT model in breast cancer cells. Fig. S2. Expression profiles of snoRNAs in TGF-β-induced breast cancer cells. Fig. S3. SNORA71A promotes cell proliferation, migration, invasion and EMT in breast cancer cells. Fig. S4. SNORA71A inhibits cell apoptosis in breast cancer cells. Fig. S5. Effect of SNORA71A on cell proliferation and cell cycle. Fig. S6. Migration and invasion experiments conducted in the presence of aphidicolin (1 mg/L). Fig. S7. Effect of SNORA71A siRNA-2 on migration, invasion and EMT. Fig. S8. Deficiency of SNORA71A abrogated the promoting effect of TGF-β on EMT. Fig. S9. ROCK2 promotes the migration and invasion of breast cancer cells. Fig. S10. ROCK2 overexpression reversed the effect of SNORA71A silencing on the proliferation of breast cancer cells. Table S1. Clinical features of breast cancer patients, and the correlation between SNORA71A expression and different clinical features. Table S2. Primer and siRNA sequences used in this study. Table S3. Statistical analysis of reads and bases from snoRNA sequencing data. Table S4. Mapping statistics of snoRNAs from small RNA sequencing data. Table S5. The commonly upregulated and downregulated snoRNAs in the two breast cancer cell lines. Table S6. Mass spectrometry of the SNORA71A pull-down product.
The"Entry name" shows theproteins may bind to SNORA71A.
Chyle Leak after Right Axillary Lymph Node Dissection in a Patient with Breast Cancer Background A female patient was diagnosed with a right-sided chyle leak following right skin sparing mastectomy, axillary lymph node dissection, and immediate tissue expander placement in the setting of invasive ductal carcinoma status post neoadjuvant chemotherapy. Summary. Our patient underwent a level I and II right axillary lymph node dissection followed by an axillary drain placement. On the first postoperative day, a change from serosanguinous to milky fluid in this drain was noted. The patient was diagnosed with a chyle leak based on the milky appearance and elevated triglyceride levels in the fluid. While chyle leaks are rare after an axillary dissection and even rarer to present on the right side, it is a complication of which breast surgeons should be aware. The cause of this complication is thought to be due to injury of the main thoracic duct, its branches, the subclavian duct, or its tributaries. Management is usually conservative; however, awareness of this potential complication even on the right side is of the utmost importance Conclusion Chyle leaks are an uncommon complication of axillary node dissections and even rarer for them to present on the right side. It can be diagnosed by monitoring the drainage for changes in appearance and volume and by conducting supporting laboratory tests. Conservative management is generally suggested. Introduction Postoperative chyle leak in the setting of breast cancer and surgery is an exceedingly rare complication of axillary lymph node dissection. It has a reported incidence rate of 0.36-0.84%, and the majority of the cases published report a leak on the left side with few previous reports of a chyle leak following a right-sided axillary dissection [1][2][3][4]. We present a case of chyle leak following right skin sparing mastectomy, axillary lymph node dissection, and immediate tissue expander placement in a patient with right breast invasive ductal carcinoma status post neoadjuvant chemotherapy. Case Report A 57-year-old female patient presented with the diagnosis of cT3N1 right breast invasive ductal carcinoma. No distant metastatic disease was noted on staging imaging. After completing neoadjuvant chemotherapy, the patient underwent a right skin-sparing mastectomy, sentinel lymph node biopsy and targeted axillary lymph node dissection, and immediate stage I breast reconstruction with tissue expander placement. Intraoperatively, 1 of 3 axillary lymph nodes was positive for malignancy on frozen section and a level I and II right axillary lymph node dissection was completed. Multiple bulky and soft nodes were found in the axilla. The wound was irrigated, and no bleeding or lymphatic leakage was noted. The remainder of the operation was completed without complications. One drain was placed in the right axillary dissection region, and two additional drains were placed at the right breast following the tissue expander placement. On the first postoperative day, all three drains were found to have serosanguinous drainage at 12 hours postoperatively. At approximately 18 hours postoperatively, the patient was noted to have 100 CC of milky fluid in the right axillary drain. The differential diagnosis for the milky drainage included lymphatic obstruction due to the bulky lymph nodes found at the time of dissection and a right-sided chyle leak. Triglyceride level was sent on the drainage and returned elevated at 1,425 mg/dl. 
Drain fluid triglyceride levels > 100 mg/dl confirm the diagnosis of chyle leak [5]. The patient was managed in a conservative fashion with pressure dressings to the right axilla. She was instructed to follow a low-fat diet and was discharged home on postoperative day 2. The patient followed up in clinic on postoperative day 12. She reported that the axillary drain fluid had turned serous and decreased in output to approximately 50 cc/day. It was recommended that the patient continue conservative management with the drain in place and pressure dressing. One week later, the patient reported 27 cc of serous drainage in the past 24 hours. The patient was advised to discontinue the pressure dressing at this time. The patient presented two days later with scant serous discharge in her drain. The drain was removed on postoperative day 20. Discussion The thoracic duct is the structure responsible for lymph drainage for most of the body. It rises superiorly from the cisterna chyli, ascends through the aortic hiatus of the diaphragm, crosses over the midline to the left side of the thorax at the level of the aortic arch, ascends above the left clavicle, and finally descends to empty into the junction of the left subclavian and internal jugular veins [6]. Many anatomic variations have been described, including branches to the right jugular or subclavian veins [6]. A chyle leak during the postoperative period after head and neck surgery is a known phenomenon. For example, this risk following surgery for multinodular goiter ranges from 0.5 to 2% [7]. It is thought to be caused by injury to the thoracic duct and thus commonly presents on the left side [1]. The same phenomenon after axillary node dissection or left-sided sentinel node biopsy in the setting of breast cancer is very rare. While not completely understood, there seem to be two main hypotheses on how this occurs: injury to an aberrant branch of the thoracic duct, and chyle reflux due to injury of the subclavian duct [4]. The typical anatomy as outlined above is only present in about 50% of individuals. Some of the anatomical variants include termination of the thoracic duct into the external jugular vein, vertebral vein, brachiocephalic vein, suprascapular vein, and transverse cervical vein, and termination as a single vessel, bilateral ducts, or several terminal branches. Additionally, it has been shown that the duct empties on the right in 2-3% of cases and bilaterally in 1.5% of cases [6]. Chylous drainage may be noted postoperatively due to the presence of microscopic tributaries, specifically in the setting of occlusion due to a mediastinal mass. No mass was seen on staging imaging in this patient. As the anatomical location of the thoracic duct is generally remote from the axilla, many use these variations to explain how injury to the duct during dissection might be possible. Taylor et al. conclude that an injury to an aberrant branch of the thoracic duct is possible during axillary node dissection and can present with a chyle leak [8]. Others, however, disagree that even an aberrant branch of the thoracic duct could be injured during axillary node dissection. Singh et al. postulate that injury to the main duct or its branches is not possible, as variations to the normal anatomy of the thoracic duct occur within 1 cm of the jugulovenous junction and an axillary node dissection does not extend to this level [9]. Instead, they propose that chyle reflux in the setting of this dissection occurs due to injury of the subclavian duct or a tributary [9].
In their case report, Daggett et al. present 37 known cases of chyle leak after axillary node dissection, with only three presenting on the right side [4]. Daggett et al. reviewed the literature from 1993 to 2011, suggesting that this case is, to our knowledge, the first report of a right-sided leak since their review. Chyle leak following axillary node dissection most commonly presents in the first few days postoperatively, as was the case for this patient. It presents with drainage of a milky, nonpurulent fluid in the drain in place of serosanguinous or serous fluid [10,11]. Laboratory testing is used to confirm the presence of a chyle leak, using tests such as triglycerides, protein, cell count, lipoprotein electrophoresis, cholesterol, or pH [4]. Management requires adequate drainage and pressure dressings and can also include dietary modification such as a low-fat diet. Surgery is rarely required but may be necessary if the leak is identified intraoperatively or if it persists after conservative management [8]. Our patient responded favorably to conservative management. It is standard practice to leave a drain in place following an axillary node dissection. This allows for monitoring of the quality of the output as well as avoidance of seroma formation. The duration of drain placement is surgeon dependent; however, it is our practice to remove axillary drains when serous output is <30 cc/day for two consecutive days. We recommend that, in the rare instance of chylous drainage, conservative management be attempted for at least 3-4 weeks. This will likely require the drain to remain in place for a longer duration, and infection risk must be considered in this setting. Although the incidence of chyle leak is low in this commonly performed procedure, surgeons should be aware of this potential complication, even on the right side, know how to manage it, and inform patients accordingly. Conclusion This case report represents a rare occurrence of right-sided chyle leak following axillary node dissection. A chyle leak following an axillary node dissection is an unusual complication that generally occurs on the left side but has occasionally presented on the right. Though the axilla is remote from the site of the thoracic duct, injury to the thoracic duct, one of its multiple described aberrant branches, the subclavian duct, or its tributaries could cause this complication. Breast surgeons should be aware that further variation from normal anatomy, including emptying of the thoracic duct on the right or bilaterally, can explain the occurrence of a chyle leak on the right side, and should know how to identify and manage this complication. Lessons Learned A chyle leak is a rare but possible complication of axillary node dissection and even more unusual on the right side. Breast surgeons should be aware that variations of normal anatomy may explain the occurrence of a chyle leak on the right side. They should thus understand this possibility and how to identify and manage this complication. Data Availability Data are available in the manuscript.
Codesigning implementation strategies to improve evidence-based stroke rehabilitation: A feasibility study Abstract Introduction People with lived experience are rarely involved in implementation science research. This study was designed to assess the feasibility of codesigning and delivering implementation strategies with people with lived experience of stroke and health professionals to improve evidence-based stroke rehabilitation. Methods We used Experience-Based CoDesign to design and deliver strategies to implement Stroke Clinical Guideline recommendations at one Australian inpatient stroke rehabilitation unit. Workgroups were formed with health professionals and people with 6-12 months experience of living with stroke (survivors and carers). Feasibility of the codesign approach (focusing on acceptability, implementation fidelity and signal of promise) was evaluated using mixed methods, drawing on data from interviews, observations and inpatient self-reported outcomes. Results Of 18 people with stroke invited, eight (44%) agreed to join the lived experience workgroup. All disciplines with ≥1 full-time staff member on the stroke unit were represented on the health professional workgroup. Median workgroup attendance over 6 months was n = 8 health professionals, n = 4 survivors of stroke and n = 1 carer. Workgroup members agreed to focus on two Guideline recommendations: information provision and amount of therapy. Workgroup members indicated that the codesign approach was enjoyable and facilitated effective partnerships between health professionals and lived experience workgroup members. Both cohorts reported contributing valuable input to all stages of the project, with responsibility shifting between groups at different project stages. The codesigned strategies signalled promise for improving aspects of information provision and creating additional opportunities for therapy. We could not compare patient-reported outcomes before and after the implementation period due to high variability between the preimplementation and postimplementation patient cohorts. Conclusion It is feasible to codesign implementation strategies in inpatient rehabilitation with people with lived experience of stroke and health professionals. More research is required to determine the effect of the codesigned strategies on patient and service outcomes. Patient or Public Contribution People with lived experience of stroke codesigned and evaluated implementation strategies. Author F. C. has lived experience of stroke and of being an inpatient at the inpatient rehabilitation service, and has provided input into analysis of the findings and preparation of this manuscript. | INTRODUCTION Quality healthcare is reliant on health professionals delivering evidence-based and person-centred care, which increases the likelihood of positive health outcomes.1 Knowledge about which interventions are associated with positive health outcomes is steadily increasing.2 With an increasing volume of research evidence, methodologies have been developed to package research information into useable formats such as clinical guidelines, to assist health professionals and patients to decide about appropriate healthcare.3 Use of guidelines and the delivery of evidence-based care remains inconsistent internationally,4,5 and a whole field of research, described as implementation science, is dedicated to investigating how to improve implementation of evidence-based healthcare.6
In the field of stroke, there is a vast body of research evidence for interventions to reduce disability and improve function. In Australia, this evidence has been synthesised into 'living' Clinical Guidelines for Stroke Management (hereafter referred to as Guidelines), which are updated as new evidence becomes available.7 However, in 2020, only 54% of Australian inpatient stroke rehabilitation facilities reported adhering to the Guidelines.8 Addressing the problem of suboptimal delivery of stroke rehabilitation is not straightforward. There is scant new knowledge being generated in this area: less than 3% of published stroke rehabilitation research in leading stroke rehabilitation journals evaluates the implementation of evidence-based practices in healthcare.9 Further, a recent Cochrane systematic review about implementation strategies in stroke rehabilitation10 highlighted the need for more high-quality research. Many Guideline recommendations align directly with the preferences of survivors of stroke and carers, which have been collated in systematic reviews.11,12 For instance, survivors of stroke want more therapy, carers want to be involved in therapy sessions, and the Guidelines include recommendations that rehabilitation should provide as much scheduled therapy as possible and that survivors should be encouraged to do extra practice on their own or with family assisting.7 Carers and survivors want information about stroke,11,12 and the Guidelines include a strong recommendation that survivors and carers should be provided information tailored to meet their individual needs. Given these commonalities, we wanted to involve survivors of stroke and carers in partnership with health professionals to develop strategies to implement Guideline recommendations. Codesign approaches offer the potential to improve efficiency and reduce research waste by ensuring that key stakeholders are involved in prioritising research questions and designing suitable strategies.13 However, involvement of people with lived experience in research to improve the delivery of evidence-based practice is not routine. In a systematic review of published literature from 2004 to 2019, only 16 publications were identified where people with lived experience were involved in research to change health professional behaviour to improve implementation of evidence-based practice, and only four studies involved people with lived experience beyond the development phase.14 Similarly, patient experience is rarely used to inform quality improvement activities in rehabilitation; a recently published scoping review identified 10 publications that used patient experience to improve the quality of rehabilitation services, but patient experience was collected solely in the form of retrospective survey data.15 The aim of this research was to assess the feasibility of using Experience-Based CoDesign16 to design and deliver implementation strategies to improve adherence with Guideline recommendations and enhance patient and carer experience at one inpatient stroke rehabilitation unit. Three common focus areas of feasibility studies17 were selected and explored in this study: acceptability, implementation fidelity and limited efficacy (signal of promise). The specific research questions were: 1. Was the codesign approach acceptable to health professionals and people with lived experience of stroke? 2. How was the codesign approach implemented? Could the principles of codesign be realised in inpatient rehabilitation? 3.
Did application of the codesigned strategies show promise for improving patient/carer experience and adherence to Guideline recommendations? | Mixed-methods study The programme logic of the codesign approach for improving the delivery of evidence-based practice and patient/carer experience is presented in Figure 1 (PDSA, Plan-Do-Study-Act). Quantitative and qualitative data were collected and triangulated to inform strategy development and evaluate feasibility of the codesign approach. | Setting Australia provides universal healthcare. The recommended pathway after stroke is admission to an acute stroke unit, assessment for rehabilitation needs and referral to an appropriate rehabilitation service if rehabilitation needs are identified.18 Inpatient rehabilitation is provided to approximately 40% of survivors of stroke,19 transfer occurs 1-2 weeks poststroke, and the median length of stay in inpatient rehabilitation is 21 days.20 In this study, the participating inpatient rehabilitation hospital was affiliated with a large acute hospital in metropolitan Adelaide, South Australia. Survivors of stroke requiring inpatient rehabilitation were transferred to the rehabilitation hospital when medically stable and colocated on one 25-bed ward. Rehabilitation was provided by a multidisciplinary team including rehabilitation physicians, nurses, physiotherapists, occupational therapists, speech pathologists, social workers, clinical- and neuro-psychologists, exercise physiologists and dietitians. Therapy sessions were provided on weekdays only. | Participants and recruitment Three cohorts of stakeholders were recruited for different stages of the project: lived experience workgroup members, health professional workgroup members, and stroke survivor and carer (current inpatient) participants. Workgroup members and potential participants were informed about the project verbally and in writing. All participants provided written consent. | Lived experience workgroup members Survivors of stroke and/or their carers who had been discharged from inpatient rehabilitation in the previous 6-12 months were eligible to join the lived experience workgroup. A staff member from the rehabilitation service reviewed lists of discharged patients to identify people with stroke who varied in terms of their age, living arrangements, cultural background and stroke-related impairments. The staff member telephoned potential participants to inform them of the study and, if they expressed interest in participating, sent written information via post or email. | Health professional workgroup members Health professionals from all disciplines working in the inpatient stroke rehabilitation unit were invited to participate in an individual interview or focus group and attend monthly workgroup meetings. We aimed to recruit an experienced representative from each discipline. | Stroke survivor and carer study participants People with stroke who were receiving inpatient rehabilitation and their informal carers (partners, family members, close friends), who were aged 18 or over, were invited to complete questionnaires. Carers or next-of-kin could provide proxy consent for people who were unable to understand the consent process due to aphasia or cognitive changes.
| Baseline data collection Observational fieldwork was conducted by the principal author between May and August 2019 to collect data about social, professional and organisational practice and patient activity levels within the rehabilitation unit. Members of the health professional workgroup were interviewed about current practice and areas needing service improvement, with a focus on information provision, goalsetting, discharge care planning and intensity of practice. Lived experience workgroup members participated in video-recorded interviews about their experiences of inpatient rehabilitation, particularly pertaining to information provision, goalsetting, discharge care planning and intensity of practice. The principal author edited the interviews to create one 30-min video to be shared with the lived experience and health professional workgroups. Patients with stroke and their carers who were participating in inpatient rehabilitation were invited to complete questionnaires (see Section 2.7). | Codesigning strategies Experience-Based CoDesign16 was used to codesign the implementation strategies, by building on the experiences of patients, carers and staff through filmed interviews, discussions and observation. A series of facilitated workshops were conducted between May and December 2019. Initially, all workgroup members were asked to introduce themselves and share their motivation for participating in the project. 'Ways of working' were tabled by author E. A. L. and accepted by all workgroup members; these outlined that each person, regardless of background, brought valuable expertise to the group and had an equal right to speak and be heard. Health professional workgroup members met to receive feedback on the observations and to identify priorities to improve how rehabilitation was delivered and improve adherence with Guideline recommendations. Lived experience workgroup members met to receive feedback on the observations, view the 30-min video and identify priorities to improve rehabilitation delivery and improve patient experience. At the third workshop, health professional workgroup members and lived experience workgroup members came together, viewed the video and joined a facilitated discussion to identify the top three priorities to address to improve both patient experience and adherence to Guideline recommendations. Following identification of the priorities, workgroup members formed task groups for each priority area; each task group comprised at least one health professional and at least one person with stroke/carer. Task groups developed action plans (i.e., strategies) to address each priority area to improve rehabilitation service delivery in terms of both adherence to Guideline recommendations and patient experience. Codesign activities within the workgroups commonly included brainstorming and facilitated discussion (E. A. L. and M. W. acting as facilitators), with a particular emphasis on seeking feedback from lived experience workgroup members about the ideas and strategies being suggested and whether they considered these would improve patient experience. Refreshments (cold and hot drinks, finger food) were served at all meetings. | Delivery and refinement of the strategies All workgroup members were invited to attend monthly codesign meetings for 6 months to review progress and refine the strategies to improve rehabilitation service delivery, using Plan-Do-Study-Act cycles.21
21 Progress on each priority area was presented to the entire workgroup, then attendees were divided into task groups to refine strategies. Most strategies were delivered by health professionals as part of core business or quality improvement, with support from the project team. One staff member (M. W.) was employed as the project facilitator 6 h/week to support strategy development and delivery. The principal author attended all workshops and monthly meetings and visited the site ad hoc as requested by workgroup members.

| Outcomes and data collection

During the 6 months when strategies were developed, delivered and refined, data were collected via field notes and codesign meeting minutes. On completion of the project, semistructured interviews were conducted with workgroup members to discuss feasibility of the codesign approach and its component elements (Figure 1), focusing on acceptability, implementation fidelity and signal of promise of being successful (limited efficacy). 17 People with stroke and their carers who were receiving inpatient rehabilitation before the study period (preintervention) and patients on the ward 6 months after the codesigned strategies were introduced (postintervention) were invited to complete a series of questionnaires collecting patient-reported experience measures and patient-reported outcome measures. These data were collected to determine the feasibility of the intended data collection methods and whether the codesigned strategies showed promise of improving implementation of evidence-based recommendations and patient experience. Figure 2 illustrates the data collected during the study.

| Research question 1: Acceptability of the codesign approach

Interview data about workgroup members' satisfaction with the codesign approach, its perceived appropriateness, fit within organisational culture and perceived effects on the organisation were mapped to 'acceptability'. Quantitative data regarding the proportion of eligible people (lived experience and health professionals) who consented to join workgroups and attended meetings were recorded.

| Research question 2: Implementation fidelity of the codesign approach

Interview data about how power was shared between health professionals and lived experience workgroup members and how workgroup members worked together were mapped to 'implementation fidelity'. Patient-reported outcome measures and patient-reported experience measures were obtained to measure patient experience (Picker Patient Experience Questionnaire [PPEQ]), 22 anxiety and depression (Hospital Anxiety and Depression Scale [HADS]) 23 and quality of life (EQ.5D). 24 To measure feasibility of these data collection methods, we evaluated the proportion of consented patient participants for whom full data were collected.

| Analysis

Directed content analysis 25 of the interview transcripts was conducted; the analysis was checked by M. W. and F. C. (project facilitator and lived experience workgroup member) for consistency with their experience of the codesign process. The coding tree is available in the Supporting Information. Illustrative quotes are presented in the text. Quantitative data were entered into Excel, imported to SPSS 28 and descriptively analysed. We compared data from patients admitted before the study period (preimplementation) and patients on the ward 6 months after the implementation strategies were introduced (postimplementation). We planned to conduct χ² tests for categorical and t-tests for continuous variables to compare outcomes and characteristics of the groups recruited preimplementation and postimplementation.
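As an illustration of this analysis plan, a minimal sketch of such a pre/post comparison is shown below. It is illustrative only: the variable names and the small data frame are hypothetical and are not drawn from the study data (and, as reported below, these tests were ultimately not conducted because the cohorts differed markedly at baseline).

```python
# Illustrative sketch of the planned pre/post comparison (hypothetical data,
# not the study dataset): chi-square for categorical variables and an
# independent-samples t-test for continuous variables.
import pandas as pd
from scipy import stats

# Hypothetical patient-level data: cohort label, sex and age.
df = pd.DataFrame({
    "cohort": ["pre"] * 4 + ["post"] * 4,
    "male":   [1, 0, 1, 1, 0, 1, 0, 1],
    "age":    [58, 61, 63, 55, 79, 81, 84, 76],
})

# Categorical variable (e.g., sex): chi-square test on the 2 x 2 table.
table = pd.crosstab(df["cohort"], df["male"])
chi2, p_cat, dof, _ = stats.chi2_contingency(table)

# Continuous variable (e.g., age): independent-samples t-test.
pre_age = df.loc[df["cohort"] == "pre", "age"]
post_age = df.loc[df["cohort"] == "post", "age"]
t_stat, p_cont = stats.ttest_ind(pre_age, post_age)

print(f"chi-square p = {p_cat:.3f}, t-test p = {p_cont:.3f}")
```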
| RESULTS Over the course of the project, 15 health professionals contributed as workgroup members.Ten health professional workgroup members participated in interviews (six in a face-to-face group interview, three As part of the action cycles, workgroup members were asked to identify how to measure the delivery of the targeted evidence-based practice.No data were routinely collected on amount of self-directed practice, so instead the workgroup agreed to measure whether individualised self-directed exercise programmes were provided to patients by therapists and whether exercise sheets were completed by patients.Similarly, no data were routinely collected about the provision of tailored information, so workgroup members decided to conduct an audit of bedside documents to determine whether staff were recording information about the treating team and key milestone dates.Further, items within the PPEQ, such as 'When you had important questions to ask a doctor/nurse, did you get answers that you could understand?' were considered suitable to measure information provision by the multidisciplinary team.Strategies trialled by the workgroup are presented in Table 1. | Acceptability of the codesign approach The codesign approach was acceptable, with lived experience and health professional workgroup members reporting the approach was satisfying, enjoyable, appropriate and valuable to the health service.Workgroup members reported that the processes used to codesign, implement, evaluate and refine strategies were appropriate for improving rehabilitation service delivery.Health professional workgroup members reported that feeding back to other workgroup members each month about progress assisted in accountability for action plans. In order for projects to be realised I think … there needs to be a drive that holds you accountable, and I think this is quite a good model for that.(HP3) T A B L E 1 Strategies developed and delivered to address priority areas. 
Strategies to address information provision for patients and families Creation of rehabilitation folder-relevant, up-to-date written information stored in one place and provided to each patient Content of routinely provided written materials updated-site information including map, ward processes, general rehabilitation information, therapies and activities Yes Format of routinely provided written materials updated-larger font, more white space, site logo Yes Purchase of folders to keep written information together, information sheets laminated to allow reuse between patients Yes Folder provided to each patient on the ward Yes Volunteer staff oriented to new materials within information folder Yes Volunteer staff sit with patients and families and discuss written resources when admitted to ward Yes Individual information eg therapy programmes added to folder Yes Ward staff encouraged patients to take folder with them to therapies, so when patients had questions, staff could provide answers and simultaneously refer to relevant section in the rehabilitation folder Inconsistent Bags made to carry folder on patient wheelchairs, supplied to each patient using a wheelchair Yes Information about staff Photo board on entry to ward with photo of each staff member, their name and role Yes Laminated form created and placed on the wall by each patient's bed with space for name and contact details of treating team members Yes Whiteboard markers tied with string to laminated form to enable staff to record required information Yes Treating team members to fill in name and contact details when patients are admitted to a ward Details not routinely filled in The median number of health professional workgroup members to attend each 90-min monthly codesign meeting was eight. Joining the codesign workgroup appeared less acceptable to people with lived experience.Eighteen people with stroke who had completed their rehabilitation were purposively invited to participate in interviews about their rehabilitation experience and attend the codesign workgroups.More than half (56%) chose not to participate. The busyness and stress of learning to cope with life after stroke was mentioned by numerous people with stroke who were contacted but declined to be interviewed.Eight survivors of stroke and three carers (two spouses, one child) agreed to participate in video-recorded interviews, conducted at their place of residence.Travel costs to the codesign meetings (held at the hospital) were covered by the research team for all lived experience workgroup members, travel arrangements were organised and communicated to workgroup members with cognitive changes (including written reminders and phone calls), communication partners were encouraged to accompany people with aphasia and translators were offered for the workgroup member who did not speak English at home.Five survivors of stroke and two carers attending one or more of the subsequent codesign meetings (median attendance at meetings was four survivors of stroke, one carer).Selected demographic features of workgroup members with stroke are presented in Table 2. | Implementation fidelity of codesign approach Workgroup members indicated that the project adhered to the core principles of Experience-Based CoDesign, describing effective partnerships between health professional and lived experience workgroup members.However, differing levels of contributions by lived experience workgroup members were highlighted by both health professionals and people with lived experience. 
I think some of them struggled to know … how to be

| Signal of promise of codesigned strategies being successful (limited efficacy)

Some codesigned strategies were implemented as intended (data collected during monthly workgroup meetings, see Table 1) and mostly centred around the production of written information resources, exercise programme prescription and environmental restructure. Strategies requiring clinicians to change their behaviour were inconsistently implemented; exercise programmes were routinely provided, but documents (treating team contact list, key milestone dates) were inconsistently completed. Improving continuity of nursing care was nominated and strongly supported by all lived experience workgroup members to enhance the way information was provided and therapy was supported on the ward. Lived experience workgroup members reported receiving care from different nurses every day during their inpatient stay and reported feeling that the allocated nurse did not understand their care needs or rehabilitation goals. This contrasted with the experience of one workgroup member who had received rehabilitation at a prior site, where each patient would usually receive care from a small team of nurses using a continuity of care model. Numerous attempts were made to change the way nursing allocations were organised, including meetings with nursing management and a presentation to staff by a lived experience workgroup member, but by the end of the codesign period, nursing allocation patterns were unchanged, with all newly admitted patients receiving care from different nursing staff over their first 3 days.

Twenty-three people with stroke (60% male, median age 60 years) who were participating in inpatient rehabilitation agreed to complete questionnaires before the intervention occurred on the ward, and 20 (50% male, median age 80 years) consented at the end of the 6-month intervention period. All recruited participants completed the EQ.5D and HADS, whereas only 88% (21 in the preintervention, 17 in the postintervention cohort) completed the PPEQ. All participants who did not complete the PPEQ had aphasia. The feasibility study was not powered to detect a change in our selected outcome measures and there were significant differences in key demographic and stroke-related characteristics of the preimplementation and postimplementation cohorts (see Table 3). People in the preimplementation cohort were younger (mean age 60 vs. 80 years), less likely to have had an ischaemic stroke (39% vs. 90%) and were less likely to have active hand movement on admission to hospital (17% vs. 60%) when compared to people in the postimplementation cohort. Accordingly, we did not conduct statistical comparisons between the two cohorts.

Spoke language other than English at home 1 0
Could not read or write 1 0
Lived in residential aged care 1 0
Reliant on a wheelchair for mobility 4 1
Aged under 65 years 1 0
[28]

Results for the EQ.5D, HADS and PPEQ are presented in Table 4. The mean visual analogue scale scores of self-reported health status were 6.1 and 6.8 in the preimplementation and postimplementation cohorts respectively. Depression, as per a score of ≥8 on the HADS, was not uncommon (preimplementation median 8; postimplementation median 7.5), whereas anxiety scores were generally lower (median anxiety scores of 7 and 4 for the preimplementation and postimplementation cohorts). While most participants reported consistently being treated with respect (67% preimplementation, 82% postimplementation), most also wanted to be more involved in care decisions (71% and 53%).
| DISCUSSION

In this single-site evaluation conducted in Australia, we were able to demonstrate that partnering with patients to prioritise and codesign implementation strategies in an inpatient stroke rehabilitation setting was acceptable to health professionals, people with stroke and their carers, and the approach could be implemented as intended. We were unable to determine whether the approach is effective for improving outcomes in survivors of stroke because the feasibility study was not powered, and the pre- and postimplementation cohorts were markedly different on key demographic variables that are independently associated with our outcome measures.

T A B L E 3 Participant demographics and health status pre- and postimplementation period.

The codesign process was valuable for developing information resources and templates to be delivered by the multidisciplinary team. However, health professional workgroup members did not anticipate barriers to documenting information on the templates, and implementation strategies (other than provision of templates and pens) were deemed unnecessary. Unfortunately, our evaluation indicated that information was not routinely documented. This contrasted with the process for providing therapy programmes, which was a new initiative for physiotherapists and occupational therapists, and strategies were developed to ensure consistent programme provision. The lack of specific strategies for information documentation was an obvious oversight; implementation activities should systematically evaluate performance when new initiatives are introduced and develop specific strategies to support behaviour change when required. 29

Many people were reluctant to become involved in a long-term research project within the first year of stroke (less than 50% agreed to join the lived experience workgroup), despite purposively inviting former patients that staff considered would be comfortable to share their experiences and suggest service improvements. Survivors of stroke and caregivers frequently face challenges following discharge from hospital such as struggling to navigate ongoing care and rehabilitation, 30 experiencing a sense of loss, 30 impaired function or anxiety and depression. 31 Time to be involved is a commonly cited barrier preventing people with lived experience from contributing to research projects as co-researchers or consultants, 32,33 particularly when fitting the project in around other life commitments. 33

The partnerships formed between health professional workgroup members and people with lived experience of stroke who attended the codesign meetings were valued by both cohorts. The careful selection of people who were invited to become workgroup members likely contributed to effective collaboration, because partnerships are enhanced when stakeholders have skills in creativity, communication and teamwork.
34,35Implementation of evidence-based practices can be supported when the people recommending the change are deemed to be reputable, credible and trustworthy, 36,37 and the esteem with which the health professionals held the lived experience workgroup members was evident during the project and in the postintervention interviews.Lived experience workgroup members spoke openly about the shared power between health professionals and people with lived experience of stroke.The dynamics between the different workgroup members contrast with reports of healthcare institutional culture wherein patient experience and constructive feedback are not taken seriously, 38 and power imbalances between patients and healthcare providers are entrenched. 39,40Using Experience-Based CoDesign, the experiences and perspectives of former patients and carers were explicitly acknowledged at each meeting by the project facilitator as central to the success of the project and all workgroup members were able to reframe interactions to genuinely recognise each other's contributions. A limitation of this study is that we did not collect data on amount of scheduled therapy (not routinely collected at the participating site), which was selected as a priority Guideline recommendation.The codesign workgroup did not endorse having the project facilitator spend her allocated project time on collecting this information because they did not believe that scheduled therapy would address boredom on the ward.Instead, the codesign workgroup opted to measure whether alternative opportunities to increase the dose of therapy were provided and used, emphasising self-directed and carer-supported activities.While not Guideline recommendations, these are both good practice statements (recommendations based on consensus opinion in the absence of evidence) in the Guidelines. 413][44] Nonetheless, the codesign workgroup was motivated to provide opportunities for more activity to improve patient experience.This was an instance where there was a mismatch between measuring adherence with Guideline recommendations (scheduled therapy time) and measuring actions to improve patient experience (activities to promote recovery and reduce boredom), and the codesign team made the considered decision to focus on enhancing the patient experience, which is the overarching philosophy of Experience-Based CoDesign.A further limitation is that the study was conducted on a single inpatient rehabilitation unit, so findings may not be transferable to other sites due to variations in contextual and personal factors. Health professional workgroup members were interviewed individually or in groups, face-to-face or via Zoom, depending on individual preference.Lived experience workgroup members were interviewed individually or with their carer (when stroke survivor and carer were both workgroup members) via phone or Zoom.The interview guide was developed by E. A. L., M. W., D. A. C. and G. H.All interviews were audio-recorded and then transcribed by an independent transcription service. 2. 7 . 3 | Research question 3: Signal of promise for improving implementation and experience measures (limited efficacy) Workgroup members were asked to identify how to measure a change in delivery of evidence-based practice for the priority areas identified in stage 2.5. of the interview transcripts was conducted by two reviewers (E. A. L. and L. N. B.) 
who read through the transcripts and coded data to the predetermined codes of acceptability and implementation fidelity of the codesign approach.Subcategories were inductively identified independently by each reviewer and then discussed and refined.Analysis was checked by M. F I G U R E 2 Data collected during study.LYNCH ET AL. | 5 of 14 via individual face-to-face interviews and one emailed answers to a series of questions) on project completion.Eleven people with lived experience of stroke (eight stroke survivors, three carers) initially agreed to contribute to the project as lived experience workgroup members.Seven people with lived experience of stroke (five stroke survivors, two carers) attended monthly codesign meetings.Five lived experience workgroup members (four stroke survivors, one carer) participated in telephone interviews at the end of the 6-month period.Forty-three people with stroke participating in inpatient rehabilitation responded to questionnaires (23 preimplementation, 20 postimplementation).The Guideline recommendations the workgroup chose to address were scheduled therapy and information provision.In line with strong feedback from lived experience workgroup members about intense and overwhelming boredom on the ward, particularly on weekends, the Guideline about scheduled therapy provision was adapted towards creating more opportunities for therapy that could occur outside scheduled sessions.One survivor of stroke explained, I used to get out of bed and in my [wheel]chair, and go round the block [hospital corridors] and that's what I used to do to amuse myself, to try and help myself do something, because there was just nothing there … You're there Saturday and Sunday, and what do you do?There's nothing to do there.(Survivor of stroke [SS] 1) Health professional workgroup members particularly valued working with people with lived experience to improve patient experience rather than solely concentrating on performance indicators such as reducing length of stay Health Professional (HP)1: [Working with lived experience workgroup members] added a bit of personal, feel-good value, for me … HP2: Yeah … I feel like it's almost been a bit of self-care Similarly, lived experience workgroup members reported high levels of enjoyment from working with health professionals to improve the rehabilitation service, and from connecting with others with similar experiences.The thing that really touched my heart was the fact that you people, who were already putting in a full day's work … wanted to improve it and do the best … We were all really touched by that….It was a very heart-warming experience.(SS2) What I particularly enjoyed was the interpersonal relationship, particularly with some members of the group.(SS3) | 7 of 14 Health professional workgroup members reported that hearing from the lived experience workgroup members about what could be improved, enhanced their motivation to create change in the workplace.Actually getting their opinions face-to-face has been really powerful and motivating.You know, I understand how hearing from a client how things could be done better, how you're more likely to put something in action if it comes from the client directly.(HP4) Further, discussions within the monthly meetings and having a collective goal were seen to facilitate teamwork within the codesign workgroup as well as strengthening networks within the ward healthcare team.So, having that wide variety of people involved … I could throw out the idea and then someone could go 'oh, have 
you thought about contacting such and such?' so it just sort of worked in this web.(HP4) Consent rates to join the professional workgroup were highrepresentatives from all disciplines with 1.0 full-time equivalent or more staffing on the ward (medicine, nursing, physiotherapy, occupational therapy, speech pathology, social work, clinical psychology, neuropsychology) agreed to join the codesign workgroup.Nursing was represented by three individuals who worked in different roles (clinical care provision, staff education, quality improvement).Some health professional workgroup members were replaced by another professional from their discipline when staff members rotated off the unit. involved, and how to provide feedback and what ideas to bring.(HP3) I feel like I contributed something, not a lot….Some of the others were able to talk far more … than what I could … A lot of things just didn't bother me one way or the other.(SS4) Both cohorts reported they were able to contribute valuable input to all stages of the project, even though responsibility shifted between health professionals and people with lived experience at different stages.For instance, health professionals tended to defer to T A B L E 1 (Continued) Achieved (yes/no) Document created to record independent practice, stored in folder reviewed with therapy staff Yes Patients to record amount of practice on exercise programme to record adherence to exercise programme Amount of practice not routinely recorded Facilitate more activity outside therapy times by encouraging attendance at the physiotherapy gym for self-directed or carer-directed practice Staff verbally invited patients and carers to attend the physiotherapy gym outside their individual therapy sessions for self-directed or carer-supported exercise Yes Posters created and fixed to walls advertising 'open gym' for self-directed practice outside individual therapy session times Yes Patients and carers who want to do more exercise attend the physiotherapy gym outside their individual therapy sessions to perform additional self-directed exercises Yes lived experience workgroup members when identifying priorities for improvement.The agenda was set through listening to clients and their carer … and we went from there.(Carer 1) [Lived experience workgroup members] really helped narrow down what was the most important thing, where we should be directing the energy, what ideas we should be following through on.(HP5) In contrast, health professional workgroup members would frequently nominate strategies to address areas for improvement and seek advice from lived experience workgroup members about which strategies to trial first.Health professionals usually assumed responsibility for planning how to assess whether the change was successful.I think it was mostly that [the lived experience workgroup members] resolved disagreements between staff … so staff might go 'oh, we could do this, we could do that.This would be more practical, this would be less practical' and then we'd sort of go 'what do you guys think?What would have worked best for you?' (HP5) I think [health professional workgroup members] made a really good effort to accommodate all those changes that we suggested, with a few exceptions, and some of them were related practicability and finance … and availability of resources.So, the thing was we were considered equals in the process.(SS3) T A B L E 2 Sociodemographic details of workgroup team members with stroke. 
While we faced challenges in recruiting people with lived experience who had interest, time and energy to participate in monthly workgroup meetings in this early poststroke period, we had excellent retention of the people who joined; all seven lived experience workgroup members (five people with stroke, two carers) who attended the first codesign meeting continued to contribute to the project over 6 months until its completion.

5 | CONCLUSION

Partnerships between people with lived experience of stroke and health professionals are feasible to codesign and deliver implementation strategies in inpatient rehabilitation. There was very good retention of workgroup members (people with lived experience of stroke and health professionals) who attended one or more codesign meetings. The effect of the codesigned strategies on patient experience and delivery of evidence-based stroke rehabilitation is unclear. Further research is warranted to measure the effect of the codesigned strategies on patient and service outcomes.

AUTHOR CONTRIBUTIONS

Elizabeth A. Lynch contributed to design, ethics approval processing, intervention, securing funding, data collection, analysis and write-up of the manuscript. Lemma N. Bulto contributed to data analysis and write-up of the manuscript. Maria West assisted with data collection and analysis and critically reviewed the manuscript through the lens of a working health professional. Fawn Cooper assisted with data analysis and critically reviewed the manuscript through the lens of a person with lived experience of stroke. Dominique A. Cadilhac and Gillian Harvey assisted with study design, securing funding and critically reviewed the manuscript.

T A B L E 4 Participant outcomes and experiences.
2023-11-23T06:17:48.569Z
2023-11-21T00:00:00.000
{ "year": 2023, "sha1": "ac67781d3fa772d8907fc05c529b8c330bb133d0", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/hex.13904", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "00cd5b762c9149c789f0a1493a46ab07d115338e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256037051
pes2o/s2orc
v3-fos-license
Noether charge, black hole volume, and complexity In this paper, we study the physical significance of the thermodynamic volumes of AdS black holes using the Noether charge formalism of Iyer and Wald. After applying this formalism to study the extended thermodynamics of a few examples, we discuss how the extended thermodynamics interacts with the recent complexity = action proposal of Brown et al. (CA-duality). We, in particular, discover that their proposal for the late time rate of change of complexity has a nice decomposition in terms of thermodynamic quantities reminiscent of the Smarr relation. This decomposition strongly suggests a geometric, and via CA-duality holographic, interpretation for the thermodynamic volume of an AdS black hole. We go on to discuss the role of thermodynamics in complexity = action for a number of black hole solutions, and then point out the possibility of an alternate proposal, which we dub “complexity = volume 2.0”. In this alternate proposal the complexity would be thought of as the spacetime volume of the Wheeler-DeWitt patch. Finally, we provide evidence that, in certain cases, our proposal for complexity is consistent with the Lloyd bound whereas CA-duality is not. Introduction The laws of black hole thermodynamics, at least in their traditional formulation [1][2][3][4], do not include a pressure-volume conjugate pair. This conspicuous absence is perhaps related to the difficulty of defining the volume of a black hole in a coordinate-invariant way: unlike the area of the horizon, a naïve integration over the interior of a black hole depends on the foliation of spacetime. A number of relativists [5][6][7][8][9], and more recently high energy physicists [10], have suggested that the pressure should be identified as the cosmological constant. In this framework, dubbed the extended black hole thermodynamics or "black hole chemistry", the ADM mass of the black hole is reinterpreted as the enthalpy H of the system rather than its internal energy U . The volume, then, can be defined in the usual thermodynamical way to be: JHEP03(2017)119 Fascinatingly enough, in simple cases such as the AdS-Schwarzschild or AdS-Reissner-Nordstrom (AdS-RN) black hole, the thermodynamic volume coincides with a naïve integration over the "black hole interior": In more complicated cases, such as rotating holes or solutions with hair, the thermodynamical volume is less intuitive, and it is an interesting question to ask how the volume arises as an integral of some local quantity over some region of spacetime, in a way similar to (1.2). For a selection of work on or related to this topic, we refer to [11][12][13][14][15][16][17]. Our main goal in this paper is two-fold: on one hand, we attempt to shed light on the meaning of the thermodynamic volume as a geometrical quantity, which, a priori, is an abstract notion of volume associated to the black hole and does not correspond to the actual volume of any spatial region. On the other hand, we will relate the thermodynamic volume to holography, and in particular to the quantum complexity of the boundary state. Why should we believe that the thermodynamic volume has a role to play in the holographic context ? To this question, we will offer four answers, which we list one after another below. The first reason to believe that the thermodynamic volume has a place in holography is that, as we will demonstrate below, this quantity is derivable from the Noether charge (or Iyer-Wald) formalism [18,19], or a slight twist thereof. 
This is the main finding of section 2. A powerful way to derive the first law of black hole thermodynamics, the Iyer-Wald formalism has yielded deep insights into the nature of black hole entropy, so it is a natural step to extend the formalism to derive the thermodynamic volume. In recent years, the Iyer-Wald formalism has proved useful to holographers as a means to translate between the geometry in the bulk to quantum information theoretic quantities on the boundary, starting with [20] where the formalism was used to derive the linearized equation of motion in the bulk from the first law of entanglement on the boundary in pure AdS. To give a few more examples, the formalism was used in [21] to relate matter in the bulk to the relative entropy on the boundary, in [22] to relate canonical energy in the bulk to the quantum Fisher information on the boundary, in [23] to relate quantum information inequalities to gravitational positive energy theorems, and finally in [24] in conjunction with the kinematic space program to clarify the emergence of gravity from entanglement. The second reason to suspect that the thermodynamic volume is relevant to holography is that various notions of volume in the bulk have been identified with quantum information theoretic quantities. In particular, the size of the Einstein-Rosen bridge of a 2-sided eternal AdS black hole is believed to capture the complexity of the thermofield double state [25,26]. Furthermore, the complexity of subregions of the boundary CFT [27][28][29][30] and the fidelity [31] have been related to the volume of a constant time slice in the bulk enclosed between the Ryu-Takayanagi surface and the boundary. In the light of these ideas, it is suggestive that the thermodynamical volume also admits a quantum information theoretic interpretation, and we will find in this paper that it indeed seems so. In the second part of the paper (sections 3 and 4), we will relate the thermodynamic volume (and also the Smarr relation) to the complexity as per the proposals in [25,26]. JHEP03(2017)119 The third motivation to study the thermodynamic volume in holography is the question of to what extent holography knows about the black hole interior. In [32], the black hole interior was probed with minimal surfaces which cross the horizon, and the notion of "vertical entanglement" as well as a tensor network picture of black hole interiors were formulated. Like the quantum complexity, the vertical entanglement could serve as a useful way to keep track of time in the dual CFT. In the case of the complexity, the key observation is that the size of the ERB grows linearly with the boundary time at late time, a behaviour conjectured to be true of the quantum complexity at exponentially late time. However this linear growth can be observed for various geometrical entities crossing the wormhole, and to correctly pick out one among them is a non-trivial problem. This leads us to the fourth and last motivation of this paper: how to correctly quantify the size of an ERB and capture the complexity ? Two ways to achieve this have been considered in the literature [25,26,33,34]. The first way, first proposed in [34] and dubbed "complex-ity=volume" or CV-duality, postulates that the complexity is dual to the volume of the maximal spatial slice crossing the ERB. This proposal, while capturing the linear growth at late time, has the minor problem that a length scale has to be introduced "by hand", for which there seems to be no unique, natural choice. 
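Schematically, and in the normalisation most often used in the literature (the precise factors should be treated as indicative), this first proposal reads

\[ C_V\big(|\psi(t_L,t_R)\rangle\big) \;\sim\; \max_{\Sigma}\;\frac{\mathrm{Vol}(\Sigma)}{G\,\ell}\,, \]

where the maximisation is over bulk slices \(\Sigma\) anchored at the two boundary times, and \(\ell\) is the length scale (for instance the AdS radius or the horizon radius) that has to be chosen by hand, as just noted.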
The second way, first proposed in [25] and dubbed "complexity=action" or CA-duality, postulates that the complexity is dual to the bulk action evaluated on the Wheeler-DeWitt (WDW) patch. This proposal solves the lengthscale problem of CV-duality, and has in addition the practical advantage that the WDW patch is easier to work with than the maximal volume. We will see in this paper that the thermodynamic volume is intimately related to the linear growth of the WDW patch at late time. We will also point out a third possibility, dubbed "Complexity = Volume 2.0" in which complexity is identified with the spacetime volume of the WDW patch rather than the action. This is potentially even easier to work with and will be discussed in more detail further in the paper. The paper is organized as follows: in section 2, we briefly review the Iyer-Wald formalism (with varying cosmological constant) and apply it to derive the thermodynamic volume of two solutions: the charged BTZ black hole and the R-charged black hole. In section 3, we move on to discuss the connection between the extended thermodynamics and the complexity in the simple case of the AdS-Schwarzschild black hole. In section 4, we extend this connection with the complexity to a black hole with conserved charges (i.e. electrically charged and rotating holes). In section 5, we contrast our proposal for the complexity against CA-duality, and show that, in certain cases, our proposal can help fix problems which ail CA-duality. In section 6, we conclude and discuss future work. Volume and Iyer-Wald formalism In this section we will present a slight generalization of the Iyer-Wald formalism [18,19,35] which will allow us to derive the volume. The formalism requires a diffeomorphism invariant action S = L = L , together with a solution with a bifurcate timelike Killing vector JHEP03(2017)119 field ξ. We will follow the notation used in [20]. 1 Consider some general variation of the Lagrangian. For an action including a cosmological constant which is allowed to vary, one writes: is the symplectic potential current, is the equation of motion form for the field φ, and where the sum over φ runs over the entire field content of the theory. In the case where this variation is due to applying a diffeomorphism generated by a vector field ζ, this becomes In this case, since our action is diffeomorphism invariant, we may apply Noether's theorem to derive the conserved current and a Noether charge form Q(ζ) such that on-shell Replacing our general vector field ζ by the killing vector field ξ, and considering some general other variation δφ, we now define By an algebraic computation we may see that on-shell In particular, denotes the usual volume form in d dimensions: We will find it useful to define the additional forms: JHEP03(2017)119 Applying Stokes' theorem to this form on a region Σ of a constant time slice bounded by the bifurcate killing horizon and the conformal boundary at infinity then yields In the case of a black hole spacetime, this reduces to the extended first law of black hole thermodynamics upon evaluation of the integrals, where roughly speaking, the second term from the left above gives rise to the V dP . 
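For reference, the relations invoked in this section can be summarised as follows (written in the standard conventions of the extended thermodynamics; normalisations may differ slightly from the paper's own). The pressure and thermodynamic volume are

\[ P = -\frac{\Lambda}{8\pi G}, \qquad V \equiv \left(\frac{\partial M}{\partial P}\right)_{S,\,Q_i,\,J_i}, \]

and the relation obtained from (2.10), once the horizon and boundary integrals are evaluated, is the extended first law

\[ \delta M = T\,\delta S + \sum_i \Phi_i\,\delta Q_i + \sum_i \Omega_i\,\delta J_i + V\,\delta P , \]

with the \(V\,\delta P\) term arising from the explicit \(\delta\Lambda\) (equivalently \(\delta L\)) dependence described above. For AdS-Schwarzschild in four dimensions this yields \(V = \tfrac{4}{3}\pi r_+^3\), the naïve interior volume quoted in the introduction.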
Application to Einstein-Maxwell: charged BTZ black hole Next, we apply the above formalism to the Einstein-Maxwell theory: (2.11) After some algebra, we find the symplectic potential current, Noether current and Noether charge to be: Let us now specialize to a solution of the Einstein-Maxwell system: the charged BTZ black hole in 3 dimensions. The metric together with the gauge field are given by: Here we use units where 4G = 1. This solution has two horizons, an outer horizon at r = r + and an inner horizon at r = r − . Both outside the outer horizon (r > r + ) and inside the inner horizon (r < r − ), ∂ t is a bifurcate time-like Killing vector field, whose killing horizons are given by r = r + and r = r − respectively. For this choice of Killing vector field, we find that the Noether charge only has one nonzero component: JHEP03(2017)119 and all other independent components of Q µν vanish. For a perturbation defined by perturbing the AdS length L of the solution above and leaving the other parameters fixed, we find that with the other two components zero. From these we find the only nonzero component of χ to be: (2.14) Integrating this form over any surface of constant t and r yields Evaluating this on the outer horizon r + we can recongnize this as T δS after some algebra. On the other hand, the integral of χ diverges as r → ∞, but putting in a large r cutoff R and adding we get a cutoff independent result also equal to T δS. Rewriting this in terms of δP instead of δL, the first law with m and q fixed then becomes and so we have the volume We could equally well have done this in region inside the inner horizon. Evaluating equation (2.15) at r = r − once again yields T δS for this horizon, and evaluating at the singularity gives − q 2 4L δL. On the other hand, All in all, we obtain the first law: where the (. . . ) − is to emphasize that the quantity enclosed pertains to the inner horizon. Once again trading δL for δP we read off the volume for the inner horizon: In this subsection, we consider a more complicated example, and derive the volume of the R-charged black hole in 4 dimensions. The thermodynamics of R-charged black holes has been studied in [36]. In (3+1) dimension, the action is given by: and The metric together with the matter fields are given by: The thermodynamical quantities are: JHEP03(2017)119 In the extended phase space, the pressure is the cosmological constant, which is also the bottom of the scalar potential: As mentioned in the introduction, the ADM mass is now reinterpreted as the enthalpy and the black hole's volume can be computed using the familiar thermodynamic formula: In particular, the AdS-RN black hole is a special case when all 4 charges coincide q 1 = q 2 = q 3 ≡ q. In this case, the above reduces to: Also, the radial coordinate has to be redefined by r → r − q in order to recover the usual Schwarzschild-like form of the AdS-RN metric. We then recognize the volume of the AdS-RN black hole in the form of equation (1.2). A note here is in order about coordinate dependence. While the thermodynamic volume can take different forms depending on the radial coordinate used (as illustrated in the example above), we stress that the volume is coordinate-invariant quantity. The fact that it is not the actual volume of some spatial region, combined with the fact that spatial volumes in General Relativity depend on the foliation, can make this coordinate invariance not so obvious. 
The cleanest way to see this coordinate invariance is to go back to the definition of V as a partial derivative (1.1). The function M (S, P ), which represents an equation of state so to say, is a relation between coordinate invariant quantities (M , S and P ), and so is the partial derivative (1.1). The paper [36] asks the interesting question of what integral over the black hole interior would give rise to the volume (2.36). To answer this question, one can recast the above in the form: where V (r) is the function defined in equation (2.36) (with r + relabeled to r), and r 0 is taken to be the largest root of the equation V (r) = 0. We then find that r 0 is the largest root of a cubic polynomial: As for the integral V (r), it was pointed out in [36] that it is essentially the scalar potential: Two aspects of this formula are remarkable: first, the fact that the integrand admits a clean interpretation in terms of the scalar potential; and secondly, the integral does not run over the whole of the black hole's interior. As one can generally expect the volume to JHEP03(2017)119 have something to do with the scalar potential, the second aspect is perhaps a bit more mysterious than the first one. We now proceed to apply the extended Iyer-Wald formalism to compute the volume, and we will see how the formalism sheds light on the two mysterious aspects as described above. The symplectic potential current and Noether charge for this theory are given by: Here we only give the on-shell form of these expressions. Next, let us perturb the coupling g 2 . By noting that equations (2.25) and (2.26) are g-independent, it is clear that the profiles of the matter fields are unaffected, and only the gravity part contributes to δQ and Θ. Moreover, equations (2.29) and (2.30) are also g-independent, so neither the ADM mass M nor the charges Q i are affected by the g-variation. This implies that the (extended) first law of thermodynamics: If we now compare with equation (2.10), we can identify the T δS term with the integral of χ over the horizon, and the V δP term as arising from a combination of the two other terms. The fact that T δS corresponds to the integral of χ over the horizon is to be expected from the Iyer-Wald formalism: roughly speaking, it is because the form χ evaluated on the bifurcation surface reduces to the surface binormal, and hence its integral over the bifurcation surface gives the area (or the entropy). Let us next compute the form χ. After some algebra, we find that the only nonzero component of χ is: The integral of χ over infinity diverges. If we regularize by a radial cutoff r c r + , we find: Next, let us focus on the δΛ term in the extended first law. By differentiating the Lagrangian with respect to the coupling g 2 , we have: JHEP03(2017)119 Notice that we have an integral of the scalar potential on the right-hand side! We emphasize here that the extended Iyer-Wald formalism makes this fact manifest, in contrast with the approach described in equation (2.38). As usual, the upper limit of integration above will diverge and we have to regularize by a radial cutoff r c . Evaluating the integral, we then find: If we now compare the divergent terms in (2.46) and (2.48), we then find that they cancel pairwise, and we are left with a finite answer which consists of two parts: (1) the finite term in (2.46) and the horizon term (the lower limit of integration) in (2.48). We then obtain: and we recover equation (2.36). 
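As a quick cross-check of the partial-derivative definition in the single-charge (AdS-Reissner-Nordström) limit discussed above, one can verify symbolically that \(\partial M/\partial P\) at fixed \(S\) and \(Q\) reproduces the naïve volume \(\tfrac{4}{3}\pi r_+^3\). The sketch below is illustrative, not taken from the paper, and assumes the standard four-dimensional conventions with \(G = 1\).

```python
# Symbolic check (SymPy) that the thermodynamic volume of the 4d AdS-RN black
# hole, V = (dM/dP) at fixed S and Q, equals the naive volume 4/3 pi r_+^3.
import sympy as sp

r_p, Q, L, S, P = sp.symbols('r_p Q L S P', positive=True)

# Blackening factor f(r) = 1 - 2M/r + Q^2/r^2 + r^2/L^2 (with G = 1).
# Solving f(r_+) = 0 gives the mass in terms of the outer horizon radius.
M_of_rp = sp.Rational(1, 2) * (r_p + Q**2 / r_p + r_p**3 / L**2)

# Rewrite M as a function of entropy S = pi r_+^2 and pressure P = 3/(8 pi L^2).
M_SP = M_of_rp.subs({r_p: sp.sqrt(S / sp.pi), L: sp.sqrt(3 / (8 * sp.pi * P))})

# Thermodynamic volume versus the naive geometric volume.
V_thermo = sp.simplify(sp.diff(M_SP, P))
V_naive = sp.Rational(4, 3) * sp.pi * (S / sp.pi)**sp.Rational(3, 2)

print(sp.simplify(V_thermo - V_naive))  # prints 0
```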
Notice in particular, that, from the viewpoint of the extended Iyer-Wald formalism, the lower limit of integration r 0 in (2.40) arises from the finite term in the integral of χ at infinity. Moreover, the Iyer-Wald formalism has taught us that the volume of the black hole is perhaps best thought of as arising from an integral over the exterior of the black hole rather than its interior. 2 To summarize, the volume arises as the integral of the scalar potential over the whole black hole exterior, but it is regularized by the Iyer-Wald form χ at infinity in a nontrivial way. Thermodynamic volume and complexity: Schwarzschild-AdS From the viewpoint of the Iyer-Wald formalism, as we have seen above, the black hole volume arises as an integral over the exterior of the black hole. This observation naturally begs the question of whether the thermodynamic volume has something to do with the black hole interior. Moreover, it remains unclear as to what the information contained in the volume can teach us about the dual CFT. It is generally pointed out in the literature [5, 9-11, 13, 15, 16, 37] that varying the cosmological constant in the bulk corresponds to varying the rank of SU(N ) or the central charge on the field theory side, and that the volume can be thought of as a chemical potential-like quantity corresponding to the degrees of freedom counted by the central charge. In this section, we bring the two questions above together (the black hole interior and the CFT interpretation) and attempt to answer them through the notion of complexity of quantum states. In a series of elegant papers [25,26] Wheeler-DeWitt (WdW) patch. In particular, this quantity grows linearly with time at late time, and we will see in the first half of this section that the thermodynamical volume is a contribution to this growth. In the section half of this section, we switch gear and consider the possibility that it is the spacetime volume, rather than the action, of the WDW patch which is dual to the complexity. We will show that the spacetime volume of the WDW patch is intimately related to the thermodynamic volume, and that, in the Schwarzschild-AdS case, the spacetime volume and action behave in very similar fashions and both proposals should work equally well. Review of Brown et al. Let us start by reviewing the proposal by Brown et al. in some level of details. The Wheeler-DeWitt patch is a region in the maximally extended black hole spacetime defined with respect to two choices of time, one on each boundary. For simplicity let us first consider the AdS-Schwarzschild black hole in 4 dimensions. We will denote the time on the left boundary as t L and the time on the right boundary as t R . From these two points on the boundary (see figure 1 for a depiction), we draw four null rays, and the WdW patch is the region in the bulk enclosed between rays (and possibly the past and future singularities). 3 On the CFT side, picking out two times t L and t R is equivalent to choosing a quantum state: The WdW patch as described here extends all the way to the boundary, and therefore the action evaluated on the WdW is divergent. To extract a finite answer, we have to choose a regularization. One could simply cut off the patch at some large radius r cutoff r+. Alternatively, one could move the two corners of the WdW patch on the boundary to r cutoff , as done in [38]. The regularization introduces terms which drop out when we take the time derivative of the complexity, and for this reason we leave the regularization unspecified. 
JHEP03(2017)119 where H L and H R are the Hamiltonian on the left and right boundaries, respectively, and |T F D is the thermofield double state: The thermofield double state has the properties that it is close to being maximally entangled, and that the reduced density matrix on either side is the usual thermal state. The complexity of a quantum state is, roughly speaking, the minimal number of quantum gates needed to produce the state from some universally agreed-upon starting point. The statement of CA-duality is that: where A is the bulk action evaluated on the WDW patch. At late t L , it follows from CAduality that the rate of growth of the complexity approaches the mass of the black hole: is a convincing piece of evidence for CA-duality. This is because it is reminiscent of a conjectured upper bound on the rate of computation by Lloyd ([39]), according to which the rate of computation is bounded above by the energy. Let us briefly review the motivation for the Lloyd bound. The Lloyd bound takes inspiration from another bound known as the Margolus-Levitin theorem [43]. This latter states that the time τ ⊥ it takes for a quantum state to evolve into a state orthogonal to it is bounded below by: where E is the average energy of the state. If we take the reciprocal of both sides, and re-interpret the left-hand side (which has unit of frequency) as the rate of change of the complexity, we then arrive at the statement that this rate is bounded above by the energy of the system:Ċ which is the Lloyd bound. We point out that, while the Margolus-Levitin theorem can be proved using elementary techniques, the Lloyd bound is a conjecture. If we now compare the Lloyd bound with the prediction of CA-duality ( 3.4) for the late-time complexification rate, we see that the ADM mass of the black hole plays the role of energy in the Lloyd bound, and that the bound is saturated at late time. That the bound is saturated is another conjecture but is appealing: black holes seem to excel at information-related tasks, since they saturate the Bekenstein bound [44,45] and are believed to be the fastest scramblers in nature [46]. JHEP03(2017)119 3.2 CA-duality, through the lens of black hole chemistry In this subsection, we take a closer look at the gravity calculation of CA-duality to derive equation (3.4). This computation itself, of course, can be found in [25,26]. 4 Our contribution in this subsection is to show that the thermodynamic volume (together with the pressure) arises naturally from the calculation. First, since the WdW patch is a region with boundary, the action is the sum of the Einstein-Hilbert action and the Gibbons-Hawking(-York) term: When we shift t L to t L + δt L , the WdW patch loses a thin rectangle and gains another thin rectangle as described in dark orange in figure 1. Thus, to compute the rate of change of the action we have to evaluate the action above on the two orange rectangles. Observe that all the sides of these two rectangles are null except at the singularity, and the paper [38] gives a detailed argument that the null boundaries do not contribute to the Gibbons-Hawking term. Also, since the boundary is not smooth at the corners of the rectangles, we have to take into account the contributions localized at these corners (named B and B' in figure 1). 
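For orientation, the key formulas referred to above and below take, schematically, the following standard forms (the numerical factors follow the usual conventions of [25, 26] and should be treated as indicative). CA-duality and the Lloyd bound read

\[ C = \frac{A_{\rm WDW}}{\pi\hbar}, \qquad \dot{C} \le \frac{2E}{\pi\hbar}, \]

the Margolus-Levitin bound is \(\tau_\perp \ge \pi\hbar/(2\langle E\rangle)\), and the gravitational action being evaluated on the WDW patch is

\[ S = \frac{1}{16\pi G}\int_{\mathcal{V}} d^4x\,\sqrt{-g}\,(R - 2\Lambda) \;+\; \frac{1}{8\pi G}\oint_{\partial\mathcal{V}} d^3x\,\sqrt{|h|}\,K . \]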
Thus, we see that the Gibbons-Hawking term contributes at the singularity, at B and at B (all of which are depicted in blue in figure 1): Note that V 1 and V 2 denote the upper and lower dark orange slivers from figure 1 respectively, and that a = ln |k ·k| where k andk are the null normals to the corner pieces. Let us consider first the difference between the two rectangles S V 1 − S V 1 . Note that the Ricci scalar of the AdS-Schwarzschild solution is a constant: This readily follows from the fact that AdS-Schwarzschild is a vacuum solution of Einstein-Hilbert theory. Thus, if we evaluate the Einstein-Hilbert action on the AdS-Schwarzschild background, we immediately see that we have something proportional to the spacetime volume: Thus, after one evaluates the integrals above, we expect to see something which is schematically the product of a spatial volume and the infinitesimal time interval δt: A technical remark is in order here. The method of computation in [25,26] was questioned by [38], where the calculation was redone with a more careful treatment of the boundary of the WDW patch. However the conclusion 3.4 remains unchanged. In this section we will follow the more rigorous treatment of the boundary term as presented in [38]. JHEP03(2017)119 Let now us do the integral for S V 1 − S V 2 explicitly. When we do this, two remarkable things happen. The is that the part of the upper rectangle which is outside the future horizon always cancels with the part of the lower rectangle which is outside the past horizon, and this happens for any t L thanks to boost symmetry of the black hole. 5 Thus, whatever quantity comes out to be the spatial volume in equation (3.11) only receives contribution from the black hole interior. The second is that the integral evaluates to: where r B is the r coordinate of the 2-sphere sitting at B. In the late time limit, we can easily see by inspection of figure (1) that r B tends to r + . Thus, in the late time limit, the integral above can be interpreted in the language of the extended thermodynamics as: where, in the last equality, we used P = − Λ 8πG , Λ = − 3 L 2 and V = 4 3 πr 3 + . Thus, we have seen how the thermodynamic volume arises from the action evaluated on the WDW patch. Put differently, the WDW patch provides an interpretation of hte thermodynamic volume as a measure of the black hole interior, and in the same time, relates it to the late-time rate of growth of the complexity. Let us now evaluate the remaining contributions in (3.8) (The algebraic details are found again in [38]). The contribution of the Gibbons-Hawking term at the singularity is essentially the ADM mass: As for the contribution at the two corners B and B , one finds: where K is a constant. 
In the late time limit, where r B → r + , the second term above vanishes and: Putting everything together, we find the time derivative of the action at late time to be: Next, recall the Smarr relation for AdS-Schwarzschild in 4 dimensions: Using the Smarr relation above, the time derivative of the action simplifies to: JHEP03(2017)119 If we now turn the logics around, we can reinterpret the following slight rewriting of the Smarr relation: as a way to keep track of the different contributions to the complexity growth: the lefthand side corresponds to the total growth, the term with M on the right-hand side is the contribution from the singularity of the WdW patch, the term with T S is the corner contributions which end up on the horizon at late time, and finally the term with P V is the contribution from the black hole interior away from the singularity. Complexity = volume 2.0 As we have learned from (3.13), in the case of AdS-Schwarzschild, the late-time rate of change of the bulk action evaluated on the WDW patch gives the product P V , or equivalently the late-time rate of change of the spacetime volume of the WDW patch is the thermodynamic volume V . These observations beg the question of whether P and V can serve as the basis for a new, alternative proposal for the complexity alongside with CAduality. In this subsection, we will make the case that a possible holographic dual to the complexity is the spacetime volume of the WDW patch. As previously noted, what we are looking for in proposing a holographic dual to the complexity is a linear growth at late time, together with consistency with the Lloyd bound. On the information-theoretical side, the linear growth of the complexity at late time is generally believed to be true but is surprisingly hard to prove. 6 It is straightforward to see that the complexity of the thermofield double state is bounded above by a linear function of time: C(|ψ(t) ) < t · poly(K) (3.21) almost by definition of the complexity. To see this, recall that the complexity is the smallest number of quantum gates needed build a state, hence any way to build the state automatically establishes an upper bound on the complexity. In particular, time-evolving the thermofield double state the usual way in quantum mechanics establishes the upper bound (3.21). To establish that the complexity grows linearly at late time, one needs to also bound the complexity from below by a linear function of time. This is a highly non-trivial task, but there are two promising directions. One of them is a recently proved theorem by Aaronson and Susskind [41] which establishes a lower bound for the complexity (modulo the possibility that an improbable statement in complexity theory is true). The other direction is Nielsen's idea of the complexity geometry [42] where finding the complexity reduces to the problem of finding geodesics on a curved manifold. On the gravity side, as noted in the introduction already, one can associate various geometrical quantities to the ERB which all grow linearly in size at late time, so this property of the complexity alone allows for quite some freedom in proposing a holographic dual. A simple illustration of this non-uniqueness phenomenon (given in [34]) is a geodesics JHEP03(2017)119 in the BTZ black hole anchored at boundary times t L and t R . The length of such a geodesic is given by (for the case r + = L): If we keep t R fixed and send t L → ∞, we find that indeed the leading term is linear in t L . 
Complexity = volume 2.0

As we have learned from (3.13), in the case of AdS-Schwarzschild, the late-time rate of change of the bulk action evaluated on the WDW patch gives the product PV, or equivalently the late-time rate of change of the spacetime volume of the WDW patch is the thermodynamic volume V. These observations beg the question of whether P and V can serve as the basis for a new, alternative proposal for the complexity alongside CA-duality. In this subsection, we will make the case that a possible holographic dual to the complexity is the spacetime volume of the WDW patch.

As previously noted, what we are looking for in proposing a holographic dual to the complexity is linear growth at late time, together with consistency with the Lloyd bound. On the information-theoretic side, the linear growth of the complexity at late time is generally believed to be true but is surprisingly hard to prove.6 It is straightforward to see that the complexity of the thermofield double state is bounded above by a linear function of time: C(|ψ(t)⟩) < t · poly(K) (3.21) almost by definition of the complexity. To see this, recall that the complexity is the smallest number of quantum gates needed to build a state, hence any way to build the state automatically establishes an upper bound on the complexity. In particular, time-evolving the thermofield double state in the usual way in quantum mechanics establishes the upper bound (3.21). To establish that the complexity grows linearly at late time, one needs to also bound the complexity from below by a linear function of time. This is a highly non-trivial task, but there are two promising directions. One of them is a recently proved theorem by Aaronson and Susskind [41] which establishes a lower bound for the complexity (modulo the possibility that an improbable statement in complexity theory is true). The other direction is Nielsen's idea of complexity geometry [42], where finding the complexity reduces to the problem of finding geodesics on a curved manifold.

On the gravity side, as noted in the introduction already, one can associate various geometrical quantities to the ERB which all grow linearly in size at late time, so this property of the complexity alone allows for quite some freedom in proposing a holographic dual. A simple illustration of this non-uniqueness phenomenon (given in [34]) is a geodesic in the BTZ black hole anchored at boundary times t_L and t_R. The length of such a geodesic is given by (for the case r_+ = L): If we keep t_R fixed and send t_L → ∞, we find that indeed the leading term is linear in t_L. Another geometrical entity whose size grows linearly at late time is the maximal surface spanning the wormhole. As previously noted, this quantity served as the basis for an earlier proposal by Brown et al. known as CV-duality [34].

Taking inspiration from CV-duality and CA-duality, we would like to propose now that the complexity is dual to the spacetime volume of the WDW patch (more precisely, the spacetime volume multiplied by the pressure): In the late time regime, by design we will have: We will refer to this proposal as "complexity = volume 2.0".

Next, we ask the question of whether "complexity = volume 2.0" satisfies the Lloyd bound. Naively, it might seem that the Lloyd bound favors CA-duality over our proposal, because we have the mass M coming out of the CA-duality calculation as opposed to PV, and the Lloyd bound refers to the energy of the system. However, we can form three quantities with the dimension of energy out of the standard thermodynamic variables, by multiplying each variable by its conjugate. Thus we have: M, TS and PV. While M seems to be the "correct" energy from the viewpoint of the Lloyd bound, it is TS which should be identified as the complexification rate from the viewpoint of quantum circuits [34]. To see this, [34] argues that if we think about the CFT as a quantum circuit of K qubits, then the complexity grows linearly in time with slope K: To convert between the quantum circuit picture and the field theory picture, we identify K with the entropy S of the CFT, and use the temperature T to convert between the CFT time and the quantum circuit time. Thus, we find after the translation:

On the other hand, one could make similar arguments to make the case that the complexification rate should be PV. The complexity should again be proportional to the number of degrees of freedom, which for a discretized CFT is roughly the central charge times the number of lattice sites. Now by the holographic dictionary we know that the central charge is dual to what we have been calling the pressure. For example, in 3 bulk dimensions we have the Brown-Henneaux formula [47]. It furthermore seems reasonable that the volume would roughly encode the number of sites. Thus, one can schematically write down: (3.29) and the complexification rate at late time is PV.

We end this section by noting that for most black holes all three quantities M, TS and PV have the same order of magnitude. To see this, we express these quantities as functions of r_+ and L: For r_+ ≫ L (i.e. large black holes), we then find: Interestingly, the two quantities M and TS differ by an O(1) numerical factor, while M and PV become the same quantity! Thus, for high temperatures at least, it does not make much of a difference whether the rate of growth of the complexity is thought of as M, as TS or as PV. Given that there are ambiguities associated with defining the complexity (such as overall numerical factors), the discrepancies between M, TS and PV seem relatively easy to accommodate.
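To make the last statements concrete, the three energies for 4-dimensional Schwarzschild-AdS (G = 1) and their large-black-hole limits are, in standard conventions,

\[ M=\frac{r_+}{2}\Big(1+\frac{r_+^{2}}{L^{2}}\Big),\qquad TS=\frac{r_+}{4}+\frac{3r_+^{3}}{4L^{2}},\qquad PV=\frac{r_+^{3}}{2L^{2}}, \]
\[ r_+\gg L:\qquad M\simeq\frac{r_+^{3}}{2L^{2}},\qquad TS\simeq\tfrac{3}{2}\,M,\qquad PV\simeq M, \]

and the Brown-Henneaux relation referred to above is c = 3L/2G₃ for AdS₃. None of this is specific to the paper's conventions beyond the choice G = 1.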
Thermodynamic volume and complexity: conserved charges

Given the clean connection between the Schwarzschild-AdS WDW patch, the thermodynamic volume and the complexity, it is natural to ask whether we can also establish similar connections for charged and rotating solutions. Unfortunately, within the framework of CA-duality, the situation for charged and rotating black holes is not as clean, and the gravity calculation does not respect the Lloyd bound. In this section, however, we will simply present the computation of the complexity according to "complexity = volume 2.0" for a variety of charged black holes, and demonstrate that, as in the uncharged case, the thermodynamic volume and the pressure emerge naturally from the late-time rate of growth. We will relegate the interesting question of consistency with the Lloyd bound to the next section.

Figure 2. The Penrose diagram of a charged and/or rotating black hole and a Wheeler-DeWitt patch (depicted in orange). When t_L is shifted to t_L + δt_L, the patch loses a sliver and gains another one (depicted in darker orange). The singularity is in red, and the horizons are dashed.

On the gravity side, for both charged and rotating black holes, the Penrose diagram is qualitatively the same. In figure 2, we depict their Penrose diagram together with the WDW patch. Note that the WDW patch is qualitatively different from that of the Schwarzschild-AdS solution: the upper part of the patch no longer runs into a singularity, but approaches the inner horizon at late time.

Electrically charged black holes

Let us start with the Reissner-Nordström black hole in n + 2 dimensions. The metric together with the gauge field are given by: As mentioned in the introduction, the thermodynamic volume is well known and looks like the geometric volume of a ball in flat space: where the subscript ± of course refers to either horizon. The spacetime volume of the WDW patch takes the form: where the ellipsis stands for terms which are time-independent (and therefore drop out of the time derivative of the complexity) or are exponentially suppressed at late time. We recognize the difference between thermodynamic volumes in the equation above. At late time, then, we have as advertised: Note the slight difference compared with the Schwarzschild-AdS case: the late-time complexification rate is now proportional to the difference between the two thermodynamic volumes.

Let us end this subsection by mentioning the 3-dimensional case of the charged BTZ black hole. Here there is potential for some surprise, since the volume takes the somewhat different form: But in the end, the second term on the right-hand side above drops out of the difference V_+ − V_−, and the late-time rate of change of the complexity still takes the form (4.6).

Rotating black hole

Next, we move on to discuss rotating black holes. Like the Schwarzschild-AdS case, rotating black holes are vacuum solutions of the Einstein-Hilbert action, and this again implies that the on-shell Einstein-Hilbert action (ignoring boundary contributions) is proportional to the pressure multiplied by the spacetime volume of the WDW patch: Thus, like for the Schwarzschild-AdS case, the distinction between the bulk action (i.e. without the boundary term) and the spacetime volume is not very important here. In the simple case of the rotating BTZ black hole, the metric reads: The thermodynamic volume can be found to be (see for example [8] for the outer horizon volume): V_± = πr_±² (4.10). After some calculation, the late-time rate of complexification is again found to be proportional to the difference between the two thermodynamic volumes:
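Since the displayed result is missing here, a quick check in the standard rotating-BTZ conventions (which may differ from the paper's by factors of G) illustrates the statement:

\[ M=\frac{r_+^{2}+r_-^{2}}{8GL^{2}},\quad J=\frac{r_+r_-}{4GL},\quad \Omega=\frac{r_-}{Lr_+},\quad P=\frac{1}{8\pi GL^{2}},\quad V_\pm=\pi r_\pm^{2}, \]
\[ P\,(V_+-V_-)=\frac{r_+^{2}-r_-^{2}}{8GL^{2}}=M-\Omega J, \]

so the pressure times the difference of thermodynamic volumes equals the combination M − ΩJ, consistent with the general expectation (made precise for charge in the next section) that conserved charges slow the late-time growth.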
Next, we move on to discuss the case of rotating black holes in higher dimensions (Kerr-AdS). This case is substantially richer and more interesting, as the analysis of the thermodynamics is somewhat different depending on whether the spacetime dimension is odd or even (see [36]), and there are two possible notions of volume one can identify. For simplicity, we will focus on the 4-dimensional case. The solution is given by: Δ_r = (r² + a²)(1 + g²r²) − 2mr (4.13), Δ_θ = 1 − a²g² cos²θ (4.14). Here a = J/M is the ratio of the angular momentum to the mass. The late-time growth of the bulk Einstein-Hilbert action was computed in [48]: which again is proportional to the spacetime volume by virtue of the solution being a vacuum solution.

As for the thermodynamic volume, we have two different notions of volume depending on whether the analysis is done in a non-rotating or a rotating frame at infinity. Following [36], we refer to the volume in the non-rotating frame as the thermodynamic volume and the one in the rotating frame as the geometric volume. The latter admits a geometrical interpretation:7 where A is the area of the horizon: Putting the two equations above together, we have: We also note here that the thermodynamic quantities derived in the rotating frame obey the Smarr relation [36] but not the first law. On the other hand, the thermodynamic quantities derived in the non-rotating frame at infinity do obey a first law (in addition to a Smarr relation) and can be derived from the Iyer-Wald formalism.

Figure 3. Given M and L, we vary the angular-momentum-to-mass ratio a and for each value solve numerically for V = V_+ − V_−. Notice that a = 0, which reduces to the Schwarzschild case, has the maximal V. As we approach extremality, which here occurs as the plots flatten out on the left (in flat space extremality occurs for a = 1, but this is modified by the AdS length dependence of the metric), V tends towards zero. In the plot, green is for M = 5, L = 1, blue is for M = 2, L = 3, and red is for M = 1, L = 2.

As in the previous cases, we can define a second volume V_− associated to the inner horizon by the replacement r_+ → r_− in V_+: Comparing equations (4.17) and (4.20), and converting from the bulk action to the spacetime volume, we finally find: To help gain intuition, in figure 3 we plot the angular-momentum-to-mass ratio a versus V_+ − V_− for fixed M and L.

Action or volume?

In this paper we have discussed two different possible holographic identifications of the complexity of the boundary thermofield double state. One could identify this complexity on the one hand with the action of the Wheeler-DeWitt patch, and on the other hand with the spacetime volume of the same. These two quantities behave in a rather similar fashion, and one is naturally led to ask whether any advantage can be identified for one or the other. One advantage of the volume is that there are no boundary terms, which in higher-curvature theories could have problems near a singularity.8 In this section we seek to answer this question as regards the Lloyd bound [39].
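Before the chemical potential is introduced, it is worth recalling the uncharged bound in the normalisation this literature typically uses; the 2/π is itself a convention, which is part of the motivation for the pre-factor discussion at the end of this section.

\[ \frac{dC}{dt}\;\le\;\frac{2E}{\pi\hbar}\;=\;\frac{2M}{\pi}\quad(\hbar=1), \]

which the late-time CA-duality result for neutral Schwarzschild-AdS, dI/dt → 2M together with C = I/(πℏ), saturates exactly.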
The Lloyd bound with conserved charge

In this subsection, let us derive the Lloyd bound in the presence of a conserved charge. As argued in [26], the existence of conserved charges puts constraints on the system and implies that the rate of growth of the complexity at late time is slower than in the case without charges. Let us start by generalizing the thermofield double state to include a chemical potential μ: This state time-evolves by the Hamiltonian H_L + μQ_L on the left, and H_R − μQ_R on the right: Based on this, one would guess that the appropriate generalization of the Lloyd bound is: This, however, violates our intuition that, being a zero-temperature system, an extremal black hole should have zero complexification rate, and that the bound should reflect this. It thus seems appropriate to modify the above to: where (M − μQ)_gs is nothing but M − μQ evaluated in the appropriate ground state, which will be either empty AdS or an extremal black hole depending on the case under consideration. If we think of our system as being in the grand canonical ensemble, it is most natural to take the ground state to correspond to the geometry whose chemical potential is the same as that of the black hole under consideration. As it happens, this is nothing but pure AdS for black holes with μ ≤ 1, but for μ > 1 it corresponds to some extremal black hole (in units where G = 1).

Bound violation: near the ground state

Now we will check whether the Lloyd bound is obeyed by the two proposals at hand. For simplicity we restrict our attention to 4 dimensions, and work in units where G = 1. First we consider the case where μ > 1. Expanding the outer horizon radius near extremality, we find that: where δM := M − M_e, M is the total mass of the black hole, and r_e and M_e are the radius and mass, respectively, of an extremal black hole with the same chemical potential as the one we are considering. We may likewise expand the inner horizon as: From these we can expand Ċ under both proposals as: On the other hand, the bound is given by: We thus see that both proposals must violate the Lloyd bound near extremality. This is in agreement with [26].

Next we consider μ ≤ 1, in which case we expand around empty AdS. Here the bound becomes, to lowest order: But Ċ becomes, under each proposal: We see immediately that the bound is satisfied in CV-duality sufficiently near extremality, but the bound is far from saturated. The situation with CA-duality is a bit more complex: Ċ_A exactly saturates the bound to lowest order in M, and so the lower-order behavior becomes important. Expanding the bound violation (i.e. the difference between Ċ_A and the bound) directly, we find to lowest order: As this term is positive definite, we see that the bound is violated as we approach empty AdS. This would seem to put CV-duality in a slightly better position than CA-duality, though the expectation that the bound should be saturated, or nearly so, is not met in this case.

Bound violation: exact results

In the μ ≤ 1 case, one can in fact do better. We can find the bound violations as exact functions of the inner and outer horizon radii. Note of course that these are only valid over the region in (r_+, r_−) space where μ ≤ 1. The exact expressions in 4 dimensions are: The second expression is clearly positive definite, and so under CA-duality the bound is always violated for μ ≤ 1. Now the exact expression for the chemical potential is: From this we may conclude that: and so: and so we may conclude that C_V respects the bound whenever μ ≤ 1.
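Because the metric function and the chemical potential are not displayed above, the following 4-dimensional AdS-Reissner-Nordström expressions (G = 1, with the common gauge choice A_t = Q/r − Q/r_+, so μ = Q/r_+) indicate where the dividing value μ = 1 comes from; the paper's normalisation could differ by numerical factors.

\[ f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+\frac{r^{2}}{L^{2}},\qquad M=\frac{r_++r_-}{2}\Big(1+\frac{r_+^{2}+r_-^{2}}{L^{2}}\Big),\qquad Q^{2}=r_+r_-\Big(1+\frac{r_+^{2}+r_+r_-+r_-^{2}}{L^{2}}\Big). \]

In the extremal limit r_+ → r_− = r_e this gives μ² = 1 + 3r_e²/L², so μ → 1 only as r_e → 0: extremal black holes exist only for μ > 1, and for μ ≤ 1 the grand-canonical ground state at fixed chemical potential is empty AdS, as stated above.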
Generalizing further to arbitrary dimension d > 3, we find that: which is a positive-definite quantity. This in fact recovers a result already derived in [48]. Being, for now, a bit less ambitious with the CV quantity, we find in 5 dimensions: and from which we get: And so CV-duality in 5 dimensions respects the bound whenever μ² < 1. We conjecture, without proof, that CV-duality respects the Lloyd bound whenever μ² ≤ 1 for all AdS-RN spacetimes.

Altering the bound by a pre-factor

We have considered the Lloyd bound in its usual form: It would seem, however, due to the arguments leading to this bound, that the bound should only be trusted up to an overall factor. It would be interesting, therefore, to see how robust the above discussion is under the insertion of some pre-factor; for example, under which proposals and sets of circumstances does Ċ ≤ αE/π (5.24) hold for various values of α. For example, for α = 1 for the AdS-RN case we find: and, using the inequality (5.17) again, we find: Hence the value α = 1 is also consistent with the bound. We leave further exploration of other values of α to future research.

Conclusion

Let us summarize the main findings of this paper and sketch out some future directions. In the first part of the paper, we analyzed the notion of thermodynamic volume from the viewpoint of the Iyer-Wald formalism. Using a slight generalization of this formalism, we present a systematic way to derive the volume and illustrate it in two cases: the charged BTZ black hole and the R-charged black hole. In the latter case, our method explains several interesting and intriguing features of the thermodynamic volume, and we believe that it will prove useful for computing the volume of many more complicated black hole solutions in the future. Of particular interest are Lifschitz black holes [49, 50]. Even though the computation of the volume for the R-charged black hole was a bit involved, it is still relatively simple, since we saw that perturbing the coupling g leaves all the matter fields unchanged. In comparison, we do not have this luxury in the case of Lifschitz solutions: a generic feature of these spacetimes is the fact that the profiles of the matter fields depend explicitly on the cosmological constant (this property is somehow related to the fact that these spacetimes are not asymptotically AdS), so varying the cosmological constant will affect the matter fields.

In the second part of the paper, we related the thermodynamic volume to the holographic proposals for the complexity. In particular, we showed that the thermodynamic volume (together with its conjugate, the pressure) is intimately related to the WDW patch of an eternal AdS black hole, and this holds for a large class of AdS black holes. This intimate relationship can be stated cleanly in two different ways: on the one hand, the rate of change of the WDW spacetime volume in the late-time limit is precisely the thermodynamic volume (if there is only one horizon) or the difference of thermodynamic volumes (if there are two horizons). On the other hand, the bulk action evaluated on the WDW patch (ignoring boundary contributions) is the sum of "work terms" involving pressure-volume and charge-potential.

The several different ways to arrive at the thermodynamic volume presented in this paper may be a little confusing to the reader, so let us state again the relationship between them: the thermodynamic volume may be defined in the usual thermodynamic fashion as the partial derivative of the ADM mass with respect to the pressure. The volume computed by the Iyer-Wald formalism is by construction the same quantity.
We conjecture that this should further correspond to the late-time value of the time derivative of the spacetime volume of the Wheeler-DeWitt patch, and have checked several examples, but have no proof that this holds generally.

How can this story be taken further? As mentioned in the introduction, a tensor network picture of the black hole interior was introduced by Hartman and Maldacena in [32]. Tensor networks are a topic of much recent interest for holographers [51-54], with an eye on the emergence of spacetime. Thus one can ask the question: can the pressure-volume variables be understood in the language of tensor networks or quantum circuits? Also, according to black hole complementarity [55, 56], the black hole interior is an example of emergent space par excellence. Thus, one can hope that the pressure and volume variables will prove helpful to our understanding of quantum gravity in the future.
2023-01-21T14:21:20.139Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "cf3ed201d33a8c629260e6e80a8c65769aa4b414", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP03(2017)119.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "cf3ed201d33a8c629260e6e80a8c65769aa4b414", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
54859547
pes2o/s2orc
v3-fos-license
Research on P System with Chain Structure and Application and Simulation in Arithmetic Operation

Considering the advantages of distribution and maximum parallelism of membrane computing and the ability of discrete Morse theory to deal with discrete structures, in this paper, combining discrete Morse theory and membrane computing, a novel membrane structure, the P system with chain structure, is proposed, which is constructed on the basis of the discrete gradient vector path of discrete Morse theory. At the theoretical level, owing to its unique chain structure, its structure, objects, and rules are described in detail and compared with those of the traditional P system. On the practical side, a specific application example, the P chain system for arithmetic operation, is presented to demonstrate the superiority, computational efficiency, and ability of the P system with chain structure. Moreover, a simulation system for arithmetic operations based on the P chain system is designed, giving a visual display of the implementation of the P chain system for arithmetic operation and verifying the feasibility and effectiveness of the P chain system.

Introduction

Membrane computing is a new computational model proposed by the Romanian scientist Pȃun in 1998; because it was first introduced by Pȃun, it is also called the P system [1]. Membrane computing abstracts the cell as the computational unit, permitting every computational unit to compute independently and the whole system to operate in the mode of maximum parallelism, which markedly improves computational efficiency [2-5]. It has been proved that the computational capacity of membrane computing is equivalent to that of Turing machines, and thanks to its strong parallel computing power it has become a highlight of recent research.

Morse theory [6] is a useful tool in differential topology, applied to investigate the topology of smooth manifolds, in particular for computer graphics, and has been a focus of research. Forman [7] extended it to the discrete setting, which provides an effective tool to describe the topology of discrete objects, such as simplices and simplicial complexes, and plays a vital role in pure and applied mathematics. Concepts in discrete Morse theory, such as the simplex [8] and the discrete gradient vector path, provide useful tools for studying the topology of discrete structures.

Generally, the structure of a P system is abstracted from the nested structure of the cell wall and organelles, so it is a kind of nested and hierarchical structure. Of course, there are many other structures, such as the reticular formation of a neural network. Up to now, there are three main P systems, the cell-like P system, the tissue-like P system, and the neural P system [3], and the study of the cell-like P system is well developed. In [4], Pȃun pointed out that the focus of the next stage of membrane computing research should be the nonhierarchical arrangement of membranes. In [9], the author proposed a P system based on simplicial complexes, which is an innovative attempt at a nonhierarchical membrane structure. Inspired by this, in this paper a P system with chain structure is introduced, which combines membrane computing with discrete Morse theory and constructs a P system based on the discrete gradient vector path, forming a new kind of nonhierarchical membrane structure, the P chain system. Attempting to arrange a novel nonhierarchical structure of P system not only makes a contribution to knowledge but also makes a clear methodological contribution.
The remainder of this paper is organized as follows. Section 2 is the theoretical part, reviewing the theories and properties of discrete Morse theory and P systems, which are the foundation of the P system with chain structure, and then giving a specific description of the structure, objects, and rules of the P chain system. In Section 3, a practical application of the P chain system, the P chain system for arithmetic operation, is proposed, presenting the four kinds of P chain systems for +, −, *, and /. In Section 4, the specific implementation of the P chain system for arithmetic operation is demonstrated through a computer simulation, showing the computational efficiency and power of the P system with chain structure. Section 5 contains the summary and prospects.

Here are some core definitions in discrete Morse theory, which are also essential in this paper [6-9].

Definition 2 (simplex with orientation). For a p-simplex σ, there are (p + 1)! permutations of its p + 1 vertices; when p > 0, these permutations fall into two classes: any two permutations of the same class differ by an even number of transpositions, while any two permutations of different classes differ by an odd number. These two classes are called the two orientations of σ. A simplex which has been given an orientation is called a simplex with orientation, one orientation denoted σ and the other −σ.

Definition 3 (q-chain). Suppose {s_i} is the set of fundamental constituents (the q-simplices) of a complex K; for an integer q, a q-chain is a linear combination of the s_i with integer coefficients.

Definition 4 (discrete gradient vector field). The gradient vector field on a complex is a collection of ordered pairs of simplices, denoted {(σ^(p), τ^(p+1))}.

P System Theory. Membrane computing is a novel computational model abstracted from biochemical reactions in living cells, whose merit is internal maximum parallelism. The essential components of a P system are the membrane structure, objects, and rules. The formal definition of a P system is as follows:

Π = (V, T, C, μ, w_1, ..., w_m, (R_1, ρ_1), ..., (R_m, ρ_m)).  (1)

Here V is the alphabet, representing the objects; T is the output alphabet, T ⊆ V; C is the catalyst set, C ⊆ V − T; μ is the membrane structure; w_i is the object multiset of membrane i, where i is the membrane label; R_i is the set of evolutionary rules (some rules are applied to reflect chemical reactions, such as rewriting rules, and others are employed to simulate biological processes, such as communication rules); and ρ_i is the priority relation over these rules.

Definitions and Properties of P Chain System

Definition 5 (P system with chain structure). A generalized P system with chain structure is written as a chain Σ_i c_i σ_i, where each coefficient c_i is an integer and |c_i| represents the number of copies of the membrane σ_i. Additionally, if every c_i > 0, the chain represents a positive generalized P system with chain structure; in particular, if every c_i = 1, it denotes a positive standard P chain system. Moreover, if every c_i < 0, it represents a negative generalized P chain system; in particular, if every c_i = −1, it denotes a negative standard P chain system. Furthermore, if some c_i > 0 and some c_j < 0, it represents a multiply generalized P chain system; in particular, if every nonzero coefficient is 1 or −1, it denotes a multiply standard P chain system.

Generally, what we call the P system with chain structure refers to the standard P chain system. Here is an example of the above definition. For the complex of Figure 1(a), the 0-simplices are its vertices, so there are a 0-chain 2 + 1 + 1 + 2 + 2 + 1 whose units come from the 0-simplices and a 1-chain 2 1 + 1 1 + 1 2 + 2 2 + 2 1 whose units belong to the 1-simplices. Membranes from the 0-chain or from the 1-chain can then compose a P system with chain structure.
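Before turning to the rules, it may help to see the chain structure and the multiset objects in code. The following is a minimal Python sketch (the simulator described later in the paper is written in C#); the class and field names are my own illustrative choices, not the paper's.

from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Membrane:
    label: int
    orientation: int                                     # +1 for a positive membrane, -1 for a negative one
    objects: Counter = field(default_factory=Counter)    # multiset of objects, e.g. Counter({'a': 3})

@dataclass
class PChainSystem:
    membranes: List[Membrane]                            # list order encodes the chain
    output: Counter = field(default_factory=Counter)     # the output region/membrane

    def precursor(self, i: int) -> Optional[Membrane]:
        return self.membranes[i - 1] if i > 0 else None

    def subsequent(self, i: int) -> Optional[Membrane]:
        return self.membranes[i + 1] if i + 1 < len(self.membranes) else None

# Example: a positive standard P chain system with two membranes
system = PChainSystem([
    Membrane(1, +1, Counter({'a': 4})),
    Membrane(2, +1, Counter({'a': 3})),
])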
2 + 1 + 1 + 2 + 2 + 1 is an example of a positive standard P chain system. 5 2 1 + (− 1 1 ) + 1 2 + (−3 2 2 ) + 2 1 can be called a multiply generalized P chain system, shown in Figure 1(b), where the two-way arrow represents the repeated appearance of a membrane and the number denotes how many times the membrane in front of it appears; that is to say, 1 appears 5 times and 3 3 times, and the coefficient 1 is usually omitted.

Property 1 (oriented property of P chain system). There are three orientations of a P chain system, arising from the orientation of the simplices. A P chain system with a single direction is one in which all the membranes have the same orientation. Specifically, in order to determine the direction of a P chain system, we need to predefine the orientation of the complex; that is to say, if we stipulate a positive surface, all membranes from the positive simplices {σ_i} form a P chain system with positive orientation, marked "+", and its membranes are called positive membranes. On the contrary, all membranes from the negative simplices {−σ_i} constitute a P chain system with negative orientation, marked "−", and the corresponding membranes are called negative membranes. Moreover, if a P chain system contains membranes from positive simplices and from negative simplices at the same time, it is called a P chain system with multiply orientation, marked "×"; that is to say, in a P chain system with multiply orientation there are both positive membranes and negative membranes. Take the P chain system of Figure 1 for example: if we define the clockwise orientation as the positive direction, the P chain system

Property 2 (additive property of P chain system). The sum of two q-chains is also a q-chain. Under addition, all the q-chains of a complex form a free Abelian group, which is called the q-chain group of the complex.

Property 3 (precursory and subsequent relationships between adjacent membranes in P chain system). Because a P chain system is based on a discrete gradient vector path, the order of membranes is determined; that is, for a given P system with chain structure, the relationship between adjacent membranes is precursor or subsequent. For a P chain system

The algorithm for constructing the P system with chain structure is shown in Algorithm 1.

INPUT: a discrete Morse function and the number of membranes in the P chain system. OUTPUT: a P system with chain structure. (1) Find a discrete gradient vector path from the given discrete Morse function by the algorithm proposed in [10]; (2) find the ordered set consisting of all the p-simplices and the ordered set consisting of all the (p − 1)-simplices of the discrete gradient vector path, noting that both sets are ordered by the sequence in which the simplices appear along the path; (3) according to the number of membranes in the P chain system, choose adjacent simplices from one of these sets as membranes of the P chain system, constituting a p-dimensional or (p − 1)-dimensional P system with chain structure. Algorithm 1: Construction algorithm of P system with chain structure.

Description of Structure, Object, and Rule of P Chain System. For the structure of the P chain system, we know that it is nonhierarchical; specifically, it is a chain structure based on a discrete gradient vector path. As for the objects, they are similar to those of former P systems and are denoted by multisets, meaning that a string of symbols is used to represent the objects of each membrane. But considering the oriented property of the P chain system, membranes are divided into positive and negative ones, so objects in membranes with different orientations are different too; for example, supposing that an object in a positive membrane is marked a, then the corresponding object in a negative membrane is marked ā. a and ā are antimatter, meaning that they cannot coexist; when they encounter each other, they counteract immediately. This is similar to the positive and negative spikes in spiking neural P systems, where the rule aā → λ makes their coexistence impossible.

Based on the rules of former P systems, and combining the particularities of the P system with chain structure, there are three main kinds of rules in a P chain system: rewriting rules, communication rules, and forgetting rules.
A rewriting rule has the form u → v, where u and v, over the alphabet, are objects representing multisets, and the rule can be used in a membrane when and only when the object multiset of that membrane contains u. Rewriting rules are used to control the type and number of objects in a membrane. Here are some target indications used to manage the movement of objects. They include tar_1 = {here, out, in}, where "here" indicates that the object remains in the membrane, "out" denotes that the object leaves the membrane into its subsequent, and "in" shows that the object is sent out to its precursor. So a rewriting rule with target indication has the form u → v, where u is a string representing a multiset of objects from a given set, and v is a string of pairs (a, tar), where a is an object and tar is here (usually omitted), in, or out. For example, suppose a P chain system has the object 3 2 4 and the rule 2 3 → 2 (, out)(, in) in membrane 2; the result of executing the rule is to produce 4, 1, 2, 1, where 1 and 1 are sent into the subsequent membrane 3 and 1 and 1 are sent into the precursor membrane 1, eventually leaving 3 2 2 in membrane 2.

Communication rules are used to manage communication across the membranes, and include symport rules and antiport rules. Here another target indication tar_2 = {pre, sub} is introduced, where "pre" indicates movement to the precursor of the membrane and "sub" denotes movement to its subsequent. So in communication rules the target indication appears as a pair (tar_1, tar_2), where tar_1 = {here, out, in} and tar_2 = {pre, sub}, and tar_2 can be omitted, meaning that objects are sent into the precursor or the subsequent at random. In a P chain system where the structure is determined, defining that the membrane where the rule resides is and its subsequent is (in, tar_2)) elaborates the specific precursor and subsequent. In fact, the target indication (tar_1, tar_2) can be simplified as tar_1 tar_2; for example, the symport rule (u, (out, sub)) in a membrane can be expressed as (u, out_{+1}); in the same way, the antiport rule (u, (out, tar_2); v, (in, tar_2)) can be shown as (u, out or (u, (out, tar_2); v, (in, tar_2)), where (u, out; v, in) means that object u from the membrane is sent out into membrane −1 or membrane +1 at random while, at the same time, object v from membrane −1 or the membrane, while rule (u, (out, tar_2); v,

A forgetting rule has the form a → λ, where a is an object over the alphabet representing a multiset and λ is null. The function of a forgetting rule is to make certain objects in a membrane disappear. A typical example of a forgetting rule concerns the objects a and ā, which are antimatter in membranes of different orientation: when they encounter each other they counteract immediately, as a result of executing the rule aā → λ. In most situations, the forgetting rule aā → λ does not appear explicitly but is executed by default with the highest priority. Note that when this rule is executed, if there are objects a or ā in the membrane where the rule resides, those objects are used first; if not, it is supposed that there are enough objects a or ā in the environment to counteract the antimatter.
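Continuing the Python sketch above, the following shows how a rewriting rule with the target indications here/out/in could be applied to a membrane of the chain. The example rule a²b → b(c, out)(d, in) is purely illustrative and is not taken from the paper.

from collections import Counter

def apply_rule(system, i, lhs, products):
    """Apply a rewriting rule once in membrane with index i of a PChainSystem.

    lhs:      Counter of objects consumed, e.g. Counter({'a': 2, 'b': 1})
    products: list of (object, target) pairs, target in {'here', 'out', 'in'};
              'out' sends to the subsequent membrane (or the output region if there
              is none), 'in' sends to the precursor.
    Returns True if the rule was applicable (lhs contained in the membrane)."""
    mem = system.membranes[i]
    if any(mem.objects[o] < n for o, n in lhs.items()):
        return False                       # rule not applicable: multiset not contained
    mem.objects.subtract(lhs)
    for obj, target in products:
        if target == 'here':
            mem.objects[obj] += 1
        elif target == 'out':
            dest = system.subsequent(i)
            (dest.objects if dest else system.output)[obj] += 1
        elif target == 'in':
            dest = system.precursor(i)
            if dest:
                dest.objects[obj] += 1
    return True

# e.g. apply a^2 b -> b (c, out)(d, in) once in the membrane with index 1:
# apply_rule(system, 1, Counter({'a': 2, 'b': 1}), [('b', 'here'), ('c', 'out'), ('d', 'in')])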
Formalization Definition of P Chain System

The formalization definition of the standard P system with chain structure is described as follows:

Π = (O, T, C, H, σ_0, σ_1, ..., σ_m, syn, in, out).  (2)

(1) O is the alphabet, and an element of O is called an object; (2) T is the output alphabet, T ⊆ O; (3) C is the catalyst, C ⊆ O − T; the elements of C are unchanged during the evolution of the system, that is to say, the symbols of C neither disappear nor are newly generated, but their participation is necessary for certain rules to be executed; (4) H is the set of membrane labels; each membrane and its enclosed area are denoted by a label from H = {1, 2, ..., m}; (5) σ_1, ..., σ_m denote the m membranes of the chain structure, and membrane i is represented as σ_i = (A_i, B_i), 0 ≤ i ≤ m, where A_i ≥ 0 denotes the number of objects of membrane i at the beginning of the calculation and B_i is the finite set of rules of membrane i, of which there are three main forms. (i) Rewriting rules have the form u → v, where u, over the alphabet O, represents a multiset, and v is a string of pairs (a, tar), where a is an object from O and tar is here (usually omitted), out, or in. (ii) Communication rules include symport rules and antiport rules; symport rules have the form (u, (tar_1, tar_2)) and antiport rules the form (u, (tar_1, tar_2); v, (tar_1, tar_2)), where u, v are objects of the alphabet representing multisets, tar_1 = {here, out, in}, tar_2 = {pre, sub}, tar_1 controls the direction of objects into or out of the membrane, and tar_2 indicates into which membrane the object is sent. (iii) Forgetting rules have the form a → λ, where a is an object over the alphabet representing a multiset and λ is null; the execution of forgetting rules decreases the number of objects in the membrane.

P Chain System for Arithmetic Operation

The computing power of P systems has attracted wide attention and research, and previous studies have proved that NP-hard problems can be solved by P systems in polynomial time [5]. So there is crucial theoretical value and heightened practical significance in taking advantage of the maximum parallelism of P systems to improve computational efficiency for a series of calculation problems, such as arithmetic operations. The fundamental arithmetic operations include addition, subtraction, multiplication, and division, which are the basis of other, more complex operations. We try to achieve arithmetic operations with the P chain system in order to explore more complex operations that could be fulfilled by P chain systems. Reference [11] has proved the possibility of fulfilling arithmetic operations by P systems. Based on this, a P chain system for arithmetic operation is proposed here; compared with the former method, there are improvements in both time performance and space performance. When designing the P chain system for arithmetic, the standard P system with chain structure is chosen; that is to say, each membrane in the P chain system appears only once. Moreover, we suppose that there are enough objects in the environment for the execution of certain rules. Note that the P chain system for arithmetic operation designed here can only achieve arithmetic operations between any two natural numbers, and the result of the calculation is given as the number of objects in the output membrane.

P Chain System for Addition. In order to fulfill addition by the P chain system, we select a P chain system with a single direction; that is to say, there are only membranes with positive orientation or only membranes with negative orientation. Here the P chain system with positive orientation is used as the example to carry out the addition operation. The P system with chain structure for addition is designed as in Figure 2, and its formalization expression is described as follows: +: shows that it is a P chain system with positive orientation; here the output membrane does not appear in Figure 2 and is represented by the output alphabet.
As shown in Figure 2, there are two membranes with positive orientation, where and represent the two natural numbers to be added and and are the objects of the P chain system; the change in the numbers of the objects is used to achieve the addition between any two natural numbers, and the number of objects in the output membrane represents the calculating result. The specific process of the P system with chain structure to fulfill addition is stated as follows. First, since there are applicable rules in both membrane 1 and membrane 2, rule 1 is executed in membrane 1 and generates an object sent into membrane 2; at the same time, rule 2 in membrane 2 is performed too and produces an object sent into the output membrane. Reactions in membranes 1 and 2 take place at the same time in the mode of maximum parallelism, and this continues until all the objects in membrane 1 have been transformed and sent into membrane 2. Then membrane 1 reaches a stable condition, and in membrane 2 the rules continue to be used until all objects originating from membrane 1 or membrane 2 have been converted and sent into the output membrane. Now membrane 2 is stable, meaning that the whole P system is stable and the calculation ends, leaving in the output membrane a number of objects equal to the sum. At that point, the P chain system has completed the addition of the two arbitrary natural numbers.

P Chain System for Subtraction. In order to achieve subtraction by the P chain system, we select a P chain system with multiply orientation; that is, there are both a membrane with positive orientation and a membrane with negative orientation. The P chain system for subtraction is designed as in Figure 3 and its formalization expression is described as follows.

As Figure 3 shows, there is one membrane with positive orientation and the other with negative orientation, where and represent the two natural numbers to be subtracted (specifically, one is the subtrahend and the other the minuend), and , and , which are antimatter, are the objects of the P chain system; the change in the numbers of the objects is used to achieve the subtraction between any two natural numbers, and the number of objects in the output membrane represents the calculating result. The process of the P system with chain structure to achieve subtraction is described as follows. At first, rule 11 in membrane 1 with positive orientation is used, where "/" denotes the condition for the rule to be executed; that is, only when the condition is met can the rule be used. So rule 11 is performed, producing − 1 objects, which are sent into membrane 2 with negative orientation, and when the antimatter objects encounter each other they disappear immediately by the default rule aā → λ. Then only one object is left in membrane 1, which satisfies the execution condition of rule 12, and the objects generated are sent into membrane 2. The appearance of this object activates rule 2, and the objects left in membrane 2, whose number equals the difference, are sent into the output membrane. At that point, the P chain system has completed the subtraction of the two natural numbers.
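Because the concrete object symbols in Figures 2 and 3 are not legible here, the following Python sketch only mirrors the behaviour just described, with every applicable rule firing in maximal parallelism within a step; the object name 'a' and the degree of parallelism per step are my own reading and affect only the step count, never the result.

def simulate_addition(m, n):
    """Membrane 1 starts with m objects, membrane 2 with n objects.
    Each step, every object in membrane 1 is rewritten and sent to membrane 2,
    and every object present in membrane 2 at the start of the step is sent to
    the output membrane."""
    mem1, mem2, out = m, n, 0
    steps = 0
    while mem1 > 0 or mem2 > 0:
        to_mem2 = mem1           # membrane 1 empties into membrane 2
        to_out = mem2            # membrane 2 empties into the output membrane
        mem1 -= to_mem2
        mem2 += to_mem2 - to_out
        out += to_out
        steps += 1
    return out, steps            # out == m + n

# simulate_addition(4, 3) ends with 7 objects in the output region.

Subtraction works analogously, except that the objects arriving in the negatively oriented membrane annihilate against its antimatter objects, so only the difference survives to be sent to the output.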
P Chain System for Multiplication. In order to complete multiplication by the P chain system, we select a P chain system with a single orientation, that is, with membranes of positive orientation or of negative orientation. Here we take the P chain system with positive orientation as the example; the P chain system for multiplication is designed as in Figure 4 and its formalization expression is described as follows: +: shows that it is a P chain system with positive orientation; = {, }; 1 = {

As shown in Figure 4, there are two membranes with positive orientation, where and represent the two natural numbers to be multiplied and and are the objects of the P chain system; the change in the numbers of the objects is used to achieve the multiplication between any two natural numbers, and the number of objects in the output membrane represents the calculating result. The process of the P system with chain structure to fulfill multiplication is explained as follows. First, the existence of the object needed by rule 1 makes it usable in membrane 1; owing to the execution of the rule → (, out_2), the object produced is sent into membrane 2, which triggers rule 21 in membrane 2. Here rule 2 is a chained rule, which contains two rule vectors. Following [12], a chained rule is a vector of rules: if the first rule of the vector is applied in the membrane, then the rest of the rules of the vector are applied in order in consecutive steps, and if any rule of a vector that has already started cannot be used, the execution of the vector is dropped, that is, for the current application, the remaining rules are not executed any more. So rule 21 generates an object, and then the execution condition of rule 22 is satisfied. So in membrane 2 rule 22 is used continuously until the objects in membrane 1 are used up, and the whole system reaches a stable state; the calculation ends with a number of objects equal to the product left in the output membrane. At that point, the P chain system has completed the multiplication of the two arbitrary natural numbers. The specific implementation is shown in Table 1.

P Chain System for Division. In order to achieve division by the P chain system, we use a P chain system with a single orientation. Here, taking the P chain system with positive orientation as the example, the P chain system for division is designed as in Figure 5 and its formalization definition is described as follows: +: shows that it is a P chain system with positive orientation; here the output membrane does not appear in Figure 5 and is represented by the output alphabet.

As shown in Figure 5, there are two membranes with positive orientation, where and represent the two natural numbers to be divided and and are the objects of the P chain system; the change in the numbers of the objects is used to achieve the division between any two natural numbers, and the number of objects in the output membrane represents the calculating result. Because at present the objects defined in the P chain system can only represent nonnegative integers, the division considered here ignores the remainder. The process of the P system with chain structure to fulfill division is explained as follows. First, there are applicable objects for rule 1 in membrane 1, so the rule → (, out_2) is used, generating an object sent into membrane 2, which activates rule 21. Here rule 2 is a chained rule too. So rule 21 produces one object, and then the execution condition of rule 22 is met. So in membrane 2 rule 22 is performed continuously until the objects are used up or fewer than the divisor remain and rule 1 is not executed any more; the whole system reaches a stable state, the calculation ends, and the result is the number of objects left in the output membrane. At that point, the P chain system has completed the division of the two arbitrary natural numbers. The specific implementation of exact division is shown in Table 2, and Table 3 gives the computational process of an example of nonexact division, 9/2.
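Because the concrete object symbols and rule labels in Figures 4 and 5 are not legible here, the following Python sketch reproduces only the behaviour described in the last two subsections (multiplication as repeated copying triggered from membrane 1, division as repeated removal of the divisor); it is an illustration of the intended computation, not the authors' exact rule sets.

def simulate_multiplication(m, n):
    # Each object consumed in membrane 1 acts as a trigger; membrane 2 answers
    # every trigger by emitting one copy of each of its n objects to the output.
    mem1, mem2, out = m, n, 0
    while mem1 > 0:
        mem1 -= 1
        out += mem2
    return out              # m * n objects in the output membrane

def simulate_division(m, n):
    # Membrane 2 repeatedly removes n objects and emits one object to the
    # output until fewer than n remain; the remainder is ignored, as in the text.
    mem2, out = m, 0
    while mem2 >= n:
        mem2 -= n
        out += 1
    return out              # m // n objects in the output membrane

For example, simulate_multiplication(3, 4) returns 12 and simulate_division(9, 2) returns 4, matching the nonexact division example 9/2 of Table 3.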
The Analysis of System Performance. The time complexity used in [11] refers to the number of executions of all rules. Following this convention, the comparison of the time complexity of the two different methods is shown in Table 4. We can see that the time complexity of arithmetic operations fulfilled by the P system with chain structure is clearly improved, especially for division, which testifies to the efficiency and improvement of the new method. Meanwhile, the design of the membrane structure and rules is simpler, more effective, and more conducive to simulation than in the former P system.

Simulation of Arithmetic Operation P Chain System

The simulation of P systems has vital practical significance, and there is a series of successful P system simulation software packages that can be obtained from the P system web page [13]. Based on this, in this paper we develop a simulation program of the P chain system for arithmetic operation; as a result, its lower time cost and higher efficiency stand out significantly. The development platform of the system is Microsoft Visual Studio 2008 on Windows 7 and the development language is C#. The whole system operates normally and can produce an executable file, which has portability and robustness.

Description of Storage Mode of the Simulation System. During the development of the system, we need to choose the appropriate storage mode for the different elements, and the specification is demonstrated as follows [14]. We know that we need to store the membrane structure, the objects, and the rules. Here we abstract these contents of each P system instance as an input file, with "pcs" as the extension name; for example, the input file of a P system to achieve the addition operation is named "Addition.pcs". Taking the addition operation as the example, the input file, which includes the definition of the membrane structure, objects, and rules, is shown in Table 5. Having explained the structure, objects, and rules that a P chain system needs to store, how to store them is the next consideration, which is also the problem that needs to be resolved in the process of system initialization.

For the object multisets in the P chain system, they are stored in the form of nonnegative integers. The reason we do not choose chars is that, when the number of objects is large, the char string may be very long, not only occupying large storage space but also consuming much time when read. By using an INT array, however, a huge amount of data can be denoted briefly.
For the rules in the P chain system, they are input into the system in the form of strings. If we want to fulfill the correspondence between the rules and the object multisets, some rule transformations are needed. The way is as follows: first, determine the alphabet used in the P chain system. Taking the P chain system for the addition operation as the example, with the alphabet set {, } and the rule → (, out_2) involved in membrane 1, we need to divide the rule into two parts, namely the former and the latter, and store them respectively. The former part of the rule is presented as [1, 0], meaning that there are 1 object and 0 object, and the latter part of the rule is denoted as [0, 0; 1] and [0, 1; 2], noting that the numbers 1 and 2 after the semicolon indicate membranes 1 and 2, while the part before the semicolon shows the change in the numbers of objects. For the rule → (, out_2), we know that after the operation of the rule there is 1 object consumed and no consumption of the other object in membrane 1, while in membrane 2 the number of the one object is unchanged and the other object is increased by one.

For the storage of the structure of the P chain system, owing to the special chain structure, we choose a linear structure. Specifically, it is a kind of linear list; that is to say, two elements which are adjacent logically are also adjacent physically. Here we use the vector representation, and there are three marks in each storage unit: the node mark, the precursor node mark (Pre), and the subsequent node mark (Sub). Taking the P chain system 1 → 2 → 3 → 4 → 5 → for example, its storage form is shown in Table 6.

The following is the code of the objects and rules of the P chain system for the addition operation, to illustrate the specific storage mode. Here lines (1) and (2) are the storage of the objects, num1 and num2 being the two arbitrary natural numbers to add; lines (3)-(4) are the storage of the rule → (, out_2), using a two-dimensional string array, where the first dimension shows that the rule is divided into three parts and the second dimension gives the numbers of the two objects and the label of the membrane. Taking line (3) as an example, the last {1} means this part concerns membrane 1, the {1, 0} denotes that before rule execution there are 1 of the first object and 0 of the second, and the {0, 0} explains that after rule execution there are 0 objects left in membrane 1.

Description of Rules Selection Algorithm. The strong computing power of membrane computing is largely due to its maximum parallelism: in the theoretical setting, the rules in a membrane are allowed to be performed freely until no rules can be selected. But in the specific process of simulating a P system, we need to consider the order of rule execution and the phenomenon of "competition for resource" among the rules. For the order of rule execution, there are many exploratory researches; for example, in [12] the chained rule was proposed, which is a new way to design rules in a P system, where rules are arranged in the form of a chain, meaning they are used in the sequence of the chain. Supposing that there is a chained rule which contains several rule vectors, then if the first rule of the chained rule is applied in the membrane, in the next steps the rest of the rules are applied in order in consecutive steps. Only when the condition of a certain vector rule is not satisfied, or there are no more vector rules that can be used, does the chained rule halt.
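As a rough illustration of the storage encoding and the chained-rule semantics just described, one could mirror them in Python as below; the field names and the example alphabet are stand-ins, and the authors' actual C# layout may differ.

alphabet = ('a', 'b')                   # hypothetical two-symbol alphabet

rule = {
    'membrane': 1,
    'former': (1, 0),                   # counts of (a, b) consumed in membrane 1
    'latter': [((0, 0), 1),             # change of (a, b) delivered to membrane 1
               ((0, 1), 2)],            # change of (a, b) delivered to membrane 2
}

def applicable(rule, contents):
    # contents[i] is the (a, b) count vector currently stored for membrane i
    return all(have >= need for have, need in zip(contents[rule['membrane']], rule['former']))

def run_chained(vector, contents, fire):
    """Chained-rule semantics: fire the rules of the vector in order on consecutive
    steps, dropping the remainder as soon as one rule is not applicable."""
    for r in vector:
        if not applicable(r, contents):
            break
        fire(r, contents)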
In fact, all the ways of exploring the order of rule execution aim to solve the priority problem. There are two main rule priority relations, the strong relationship and the weak one. The strong relationship means that a rule can be used when and only when no rule of higher priority can be used, while the weak relationship means that, for a rule, if there are rules of higher priority whose execution conditions are also met, then whether or not those higher-priority rules are actually used, the rule cannot be used. For example, for the objects and rules 1: → ; 2: → ; 3: → ; 4: → with priorities 2 > 3 and 2 > 4, under the strong relationship there are three possible rule choices, {1, 3}, {1, 4} and {2}, but under the weak relationship there are only two choices, {1} and {2}; this is because the condition that makes rule 1 applicable also makes rule 2 applicable, and even if rule 2 is not used, when rule 1 is executed, rules 3 and 4 cannot be used. In this paper, we choose the weak relationship as the standard in the simulation of the P chain system. Some tentative algorithms are also given to solve the problem of "competition for resource". The algorithm of rule selection for simulating the P chain system is shown in Algorithm 2.

INPUT: the object multisets of the membrane, the rule set, and the priority set of rule execution P (P is represented by ordered pairs of rule labels, such as 1 > 2, denoted ⟨1, 2⟩; for chained rules, P is written as ⟨k, k + 1⟩, k = 1, 2, ..., meaning that the rules are to be performed in order); OUTPUT: the rule set chosen to be used. (1) For every rule in the set, if each value in the INT vector corresponding to the former part of the rule is no more than the corresponding value of the membrane's object vector, the rule is put into the candidate rule set. (2) For each ordered pair ⟨i, j⟩ in the priority set P, if both rules are in the candidate set, then if they form a chained pair ⟨k, k + 1⟩, both are kept in the candidate set, in that order; otherwise the less prior rule is deleted. Algorithm 2: Rule selection algorithm for simulation of P chain system.

Now we have set the execution priority of the rules, which reduces the problem of "competition for resource" to some extent, but for rules without priority the problem still exists. Here we make full use of another significant characteristic of P systems, "randomness", meaning that if two rules compete for a resource, one of them is selected at random to be used. The specific algorithm to handle "competition for resource" further is shown in Algorithm 3.

INPUT: the rule set A (the output of Algorithm 2); OUTPUT: a rule set without competition for resource. (1) Select one rule from A at random, add it to the output set, and delete it from A at the same time. (2) For every rule remaining in A, if there is competition for resource between it and the selected rule, delete it from A. If A is not empty, return to (1); else output the output set. Algorithm 3: Rule selection algorithm with "competition for resource".
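The weak-priority selection of Algorithms 2 and 3 can be summarised by the following Python sketch; the function names are mine, and the random resolution of resource competition matches the "randomness" just described.

import random

def select_rules(applicable, priority_pairs, competes):
    """applicable:     set of rule labels whose left-hand sides are covered by the membrane contents
    priority_pairs: iterable of (hi, lo) pairs meaning rule hi has priority over rule lo
    competes:       function (r1, r2) -> bool, True if the two rules compete for objects
    Under the weak relation, a rule is discarded as soon as a higher-priority rule
    is applicable, whether or not that higher-priority rule is itself chosen."""
    candidates = set(applicable)
    for hi, lo in priority_pairs:
        if hi in applicable and lo in candidates:
            candidates.discard(lo)           # weak relation: lo is blocked by mere applicability of hi
    chosen = []
    pool = list(candidates)
    random.shuffle(pool)                     # competing rules are resolved at random
    for r in pool:
        if all(not competes(r, c) for c in chosen):
            chosen.append(r)
    return chosen

With applicable = {1, 2, 3, 4}, priority pairs (2, 3) and (2, 4), and rules 1 and 2 competing for the same object (which the two listed outcomes suggest), this returns either [1] or [2], in line with the weak-relationship example above.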
Demonstration of Simulation of P Chain System. The interface of the simulation software of the P chain system for implementing arithmetic operation is shown in Figure 6(a). Here the first number and the second number represent the two positive integers used in the operation, and there are four kinds of operators, +, −, *, /, to be chosen. When an operator has been chosen and the "calculate" button clicked, the calculating result is shown in the result box, and when the "clear" button is used, the data are eliminated from the text boxes and the next calculation is ready.

Summary and Prospect

In this paper, a novel P system with chain structure is introduced, which combines membrane computing and discrete Morse theory and takes advantage of the discrete gradient vector path. As a new kind of P system, it makes some theoretical and practical contributions. First, the definition of the structure, objects, and rules of the P system with chain structure is proposed, with emphasis on its differences from former P systems, which mainly stem from its chain structure, such as the oriented property, the additive property, and the precursor and subsequent relationships. This contributes to enriching the theoretical framework of P system research. Second, the application of the P chain system to arithmetic operation is demonstrated, and the specific model and rules of the P chain system used are shown in detail. Compared with former methods, the new approach is more effective in both time and space complexity. Last, the simulation system of the P chain system for arithmetic operation is given and its feasibility is verified, which demonstrates the computational feasibility and effectiveness of the P system with chain structure. This is a good start for simulating P systems and is also an exploratory way to display and take full advantage of P systems in solving practical problems.

We should admit that there are some insufficiencies in this paper; for example, the application to arithmetic operation is not connected very closely with the rules proposed and does not display the computing power of the P chain system strikingly, and the functionality of the simulation system is rather simple. All of these should be taken into consideration in further study. Based on the work in this paper, many other studies can be carried out. For example, the P chain system has some special properties, and how to use these particularities to enhance the computational power of P systems is a question worth considering. Also, whether there are special applications that could be solved more efficiently by the P chain system is a meaningful question too. Moreover, the simulation of the P system with chain structure to fulfill more functions is a big challenge that needs further study.

Figure 1: The structure of P chain system. Figure 2: P chain system for addition. Figure 3: P chain system for subtraction. Figure 4: P chain system for multiplication. Figure 5: P chain system for division. Figure 6: Simulation interface of P chain system for arithmetic operation.
Table 1: The implementation of P chain system for multiplication. Table 3: The implementation of P chain system for nonexact division 9/2. Table 4: Comparison of time complexity of two methods. Table 6: Storage of P chain system.
2018-12-14T01:04:07.558Z
2015-01-21T00:00:00.000
{ "year": 2015, "sha1": "d0ae2082cb004d169d94e7f1415ad4c55a930288", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ddns/2015/123960.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d0ae2082cb004d169d94e7f1415ad4c55a930288", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
232772511
pes2o/s2orc
v3-fos-license
Determinants of Disability in Minority Populations in Spain: A Nationwide Study

Some population groups could be especially vulnerable to the effects of population ageing. The Global Activity Limitation Indicator (GALI) has been proposed as a measure of disability, but it has not been used in minority groups. The aim of this study is to estimate the prevalence of disability using the GALI and to analyse its determinants in immigrant and Roma populations. Data from the Spanish National Health Survey 2017 and the National Health Survey of the Roma Population 2014 were used, including adults aged 50 and above. The prevalence of disability was estimated, and odds ratios were calculated using logistic regression models to assess the association between disability and demographic, socioeconomic, and health variables. The prevalence of disability was estimated at 39.4%, 30.6%, and 58.7% in the native, immigrant, and Roma populations, respectively. Gender was a common determinant for the native and Roma populations. On the other hand, among immigrants, the risk of disability increased with the time residing in Spain. There were significant interactions with age and gender in the native population. Disability has different determinants in the three population groups. Public health measures to protect the health of the Roma population and of immigrants should be considered.

Introduction

Population ageing and the increase in life expectancy have led to a rise in disability and long-term illnesses [1]. This process has been occurring along with an increase in health inequalities, since there is a close relationship between disability and poverty and lack of resources [2]. The World Health Organisation (WHO) considers social exclusion as one of the main causes of health inequalities [3]. Moreover, some studies suggest that racial and ethnic segregation, as well as different forms of discrimination, may negatively impact the health and disability status of some groups [4,5]. Socially excluded people suffer from deprivation and lack of resources, which may affect health. However, a holistic view should be taken of the Social Determinants of Health (SDH), considering conditions of daily life and social structures [3]. In addition, although the SDH approach focusses on indirect causes of health problems, such as ethnicity and migrant status, it has proved to be essential for improving health equity [6,7]. In Europe, minority groups, such as Roma and immigrant populations, are among the most socially vulnerable groups and have less access to the health system, which could lead to a worse health and disability status [8,9].

In 2017, the immigrants residing in Spain represented 13.3% of the population. Of that percentage, 38.3% came from South and Central America, 18.2% came from Africa, and 7.2% from Asia, whereas 34.5% came from other European countries (29.1% from the EU-28 and 5.4% from the rest of Europe), and the remaining came from North America and the

The result variable considered was the GALI (Disabled/Not Disabled), which is collected through the same question in both health surveys: "For at least the past 6 months, to what extent have you been limited because of a health problem in activities people usually do?" It has three possible answers: severely limited, limited but not severely, and not limited at all. The first two answers were grouped into a category representing those who were disabled, and the third answer represented those who were not disabled.
The following demographic and socioeconomic variables were considered: sex (Male/ Female), age (50 to 64/65+ years), educational level (No studies/Primary/Secondary or University), employment status (Working/Unemployed/Retired/Other situations), and household income (Low/Medium/High). The category "other situations", in the employment situation variable, included students, people with an incapacity to work, and people who do household work. The household income variable has been collected through slightly different response categories in the two surveys. In the ENSE, it has been grouped by the following categories: less than 1050 euros/from 1050 to 1800 euros/more than 1800 euros per month, and in the ENSPG: less than 950 euros/from 950 to 1950/more than 1950 euros per month, being denominated as low, medium, and high income. For the immigrant population, the time of residence in the country was included in the analyses (Less than 10 years/More than 10 years). Health variables included were self-rated health status (SRH) (Healthy/Unhealthy), overweight/obesity (Yes/No), and physical (Yes/No) and mental (Yes/No) illnesses. The physical illnesses variable included an affirmative response to at least one of the 12 illnesses on the list in both surveys: high blood pressure, osteoarthritis, chronic allergy, asthma, chronic bronchitis, emphysema, chronic obstructive pulmonary disease (COPD), diabetes, stomach or duodenal ulcer, high cholesterol, migraine, and osteoporosis. Mental illnesses included depression, anxiety, and other mental illnesses. To describe the demographic, socioeconomic, and health characteristics in all three populations, frequencies and percentages were calculated with their 95% confidence intervals (95% CI). The prevalence of disability and their 95% CIs were also calculated for all three populations. As a measure of association, simple and adjusted Odds Ratios (OR) were calculated using binary logistic regression models, including as explanatory variables the demographic, socioeconomic, and health variables with a significant effect (p < 0.05). Due to the complex sample design of the surveys, the weights provided in the surveys were used to produce all the estimations. The statistical programme SPSS v.25 ® was used for the computations. Table 1 shows the characteristics of the native, immigrant, and Roma populations according to demographic, socioeconomic, and health variables. According to sex, a balanced distribution was observed in the native and Roma populations, in contrast to the immigrant population, which shows a higher percentage of women (59.2%). Immigrants were the youngest population group, with an average age of 60.3 years, compared to an average of 65.9 years in the native population and 61.8 years in the Roma population. Immigrants and Roma were the largest population groups in working ages (74.1% and 66.4% respectively), and immigrants had the highest percentage of the working population (43.4% compared to 29.2% in the native population and 25.9% in the Roma). With regard to educational level, the immigrants had the highest proportion of secondary or university studies (73.6% as opposed to 53.1% in the native population), and the percentage was similar among those with no studies (3.2% as opposed to 2.9%). However, among the Roma, the percentage of those without studies reached 27.3%, and only 1.8% had secondary or university studies. The percentage of the Roma with low incomes (83.8%) was much higher than the rest. 
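As an illustration of the weighted binary logistic regression described in the statistical-analysis paragraph above (the study itself used SPSS), here is a minimal Python sketch that treats the survey weights as frequency weights and exponentiates the coefficients to obtain odds ratios; it also includes a sex-by-age interaction term of the kind tested later in the results. The variable names, simulated data, and weighting shortcut are assumptions; a design-based analysis would additionally account for the complex sampling design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated stand-in for the survey microdata (hypothetical columns).
df = pd.DataFrame({
    "disabled": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "age65plus": rng.integers(0, 2, n),
    "mental_illness": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 2.0, n),   # survey calibration weight
})

# Weighted logistic regression; survey weights are used as frequency weights
# here, which reproduces point estimates but not design-correct standard errors.
model = smf.glm(
    "disabled ~ female * age65plus + mental_illness",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["weight"],
).fit()

# Odds ratios with 95% confidence intervals.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```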
Table 2 shows the prevalence of limitations in native, immigrant, and Roma populations. On analysing the difference between immigrants and natives, it can be observed that immigrants had a lower prevalence of limitations than natives (30.6% vs. 39.4%). In the case of the Roma population, this prevalence reached 58.7%. By sex, it can be observed that women had a higher prevalence of limitations than men, particularly in natives (44.5% vs. 33.8%). Likewise, Roma women had a higher prevalence of limitations than men (68.8% vs. 47.9%), although among immigrants, the difference was slight (31.0% vs. 29.8%). According to age, a higher prevalence of limitations was noticed in people aged 65 and above, particularly in the native population (51.0%) and in the Roma population (75.5%), while among immigrants, there were scarce differences between age groups (31.9% vs. 30.2%). In overall terms, there is a clear social gradient, both in the native and in the Roma populations. People who were not working, had a low educational level, or had a low household income had a higher prevalence of disability. In the case of immigrants, there is no clear gradient, with a higher prevalence of disability in unemployed people (32.6%), in other situations (47.9%), in those with primary education (41.5%), and in people with medium incomes (39.0%). It was also observed that immigrants who had been residing in Spain for 10 years or more had a greater prevalence of disability (32.7% vs. 18.9%).

Describing the prevalence according to health variables, it was higher among those who had health problems in all three populations: those who had overweight/obesity (40.5% natives, 36.8% immigrants, and 55.3% among Roma), suffered from physical illnesses (46.1%, 38.7%, and 64.2% respectively), and especially those who suffered from mental health problems (65.0%, 61.2%, and 88.3% respectively). According to self-rated health, the prevalence of disability in those who had a bad perception of their health was 74.5% among Roma, while in natives, it was 69.5% and in immigrants, it was 60.0% (see Table 2).

Tables 3 and 4 show the association between disability and demographic, socioeconomic, and health variables in all three populations. In the native population, a statistically significant adjusted association was observed between disability and the demographic, socioeconomic (except for household income), and health variables (except for overweight). Women show a higher disability risk than men, and people aged over 65 show a higher risk than the younger population. In addition, a clear risk gradient was also noticed among people who were not working. In particular, a higher risk was observed among people who were in other situations or were retired. People with a lower level of education were at more risk than those with a high educational level. Similarly, people who suffered from physical or mental illnesses and especially those who had a bad perception of their health had a greater risk of disability. By excluding self-rated health from the model, it is shown that sex stopped being significantly associated (p = 0.052), but the value of the Odds Ratios in the physical and mental illness variables increased.
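For readers who want to reproduce prevalence estimates like those narrated above, the sketch below shows one simple way to compute a weighted prevalence with an approximate 95% confidence interval. The effective-sample-size correction is a rough stand-in; the published estimates rest on the surveys' full complex-design weighting, which this sketch does not reproduce.

```python
import numpy as np

def weighted_prevalence(disabled, weights):
    """Weighted prevalence with an approximate (Wald) 95% CI.

    `disabled` is a 0/1 array and `weights` the survey weights. The effective
    sample size below is a crude substitute for a design-based variance.
    """
    disabled = np.asarray(disabled, dtype=float)
    weights = np.asarray(weights, dtype=float)
    p = np.sum(weights * disabled) / np.sum(weights)
    n_eff = np.sum(weights) ** 2 / np.sum(weights ** 2)  # effective n
    se = np.sqrt(p * (1 - p) / n_eff)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Toy example with invented values.
rng = np.random.default_rng(1)
disabled = rng.integers(0, 2, 300)
weights = rng.uniform(0.5, 2.0, 300)
prev, ci = weighted_prevalence(disabled, weights)
print(f"prevalence = {prev:.1%}, 95% CI = ({ci[0]:.1%}, {ci[1]:.1%})")
```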
In the case of the immigrant population, no demographic (except years of residence) or socioeconomic variable was significant. Immigrants who resided in Spain for 10 years or more had a greater risk of disability than those who resided less time in Spain. Oppositely, all the health variables were significant, and persons with overweight/obesity presented a greater risk of disability. Moreover, people who suffered from physical illnesses, mental illnesses, and especially those who had a bad perception of their health, had a greater risk of disability. As in the native population, by not including self-rated health, the risk of disability increased among people who were overweight/obese and physically and mentally ill. In the Roma population, women were at greater risk of disability than men. With regard to the socioeconomic variables, the only significant variable was the employment situation, with retirement being a protective factor and other work situations (housework, inability to work, study, or others) being a risk factor. Among the health variables, those suffering from mental and physical illnesses and those with poor self-rated health had a greater risk of disability. By excluding self-rated health from the adjusted model, a slight increase in the risk of disability in women is noticed. People aged 65 and above were also at greater risk of disability than the younger group. In addition, the risk of disability increased in people who were physically and mentally ill. As a result of the findings regarding sex, the interactions between this variable and the rest of the explanatory variables in the three populations were tested. A significant interaction between age and sex was only found in the native population, and it was observed that women presented a greater risk of disability (OR = 1.85 (1.58-2.15)) than men (OR = 1.42 (1.21-1.67)) in the older age group. Complementarily, the differences in the risk of disability between men and women were analysed disaggregating by the age groups in the most advanced ages: 65 to 74 years, 75 to 84 years, and over 85 years. It was shown that the risk of disability increased with age in both men and women, although the risk of limitations in women increased to a greater extent in the older age groups. In the immigrant population, no significant interaction between the explanatory variables and sex was found. Although in the Roma population, the interaction was not significant, it was noticed that the association between age groups and disability varies in men and women in a similar pattern as in the native population. Women aged 65 and above had a higher risk of disability (OR = 2.13 (0.77-5.83)) than men of the same age group (OR = 1.45 (0.62-3.44)). On the other hand, it was observed that retirement was only a protective factor for women, while for men, it was not significant. Discussion This study aimed to analyse the determinants of disability in minority groups in Spain (immigrant and Roma populations) and in the native population. The disability measure used was the GALl, which is a subjective indicator of participation restriction, meaning that it relies on the individual and social perception of illness and disability. Disabled people are discriminated against in a unique form, having less access to healthcare, employment, and education. In this sense, disability can be considered a SDH, leading to worse health outcomes, for example in mental health and obesity [3,38]. 
When interacting with other SDH such as ethnicity, migrant status, or gender, particular types of discrimination could be observed [7]. This is the first study in Spain that compares those three population groups for the analysis of disability. In the three population groups studied, objective health problems, such as physical or mental illness, as well as the poor perception of health, were found to be a disability risk factor. In addition, important differences were observed in the prevalence of disability, with the Roma population being the most affected and the immigrant population having an even lower prevalence than the native one. However, the results suggest that there are differences in the determinants of disability between the three populations in Spain. A clear social gradient was observed in the native population (more risk of disability according to sex, educational level, employment situation, and income level), which was not observed in immigrants and Roma when adjusting for other variables. As a social construction, gender is a crosscutting determinant, leading to worse health outcomes and often interacting with other social determinants, such as ethnicity [44]. One aspect to emphasise in the results is the fact that women had a greater risk of disability than men in native and Roma populations, but not in the immigrant population. According to a study by Padkapayeva et al., the differences in work activity limitations between men and women could be fully explained by the mediation of chronic diseases and the type of occupation [45]. In addition, a report from the Canadian National Survey indicated that more than half of women with activity limitations needed help with household work, while only one-third of men needed it [46]. This may suggest that the types of occupations and the greater burden of household work may lead to a greater risk of limitations for women. Indeed, the double burden of housekeeping and employed work out of the house, in addition to violence and discrimination against women, might mediate in gender health inequities [3]. In addition, immigrant women from Western and Northern European countries may have unknown protective factors, since populations from those countries have shown a non-significant association between sex and functional limitations [47]. A large proportion of immigrants over 50 from those countries could explain that there is no difference by sex among immigrants. Finally, the greater impact of ageing on women's risk of suffering limitations has also been reported in the United States in several racial groups [48]. In the case of the immigrant population, disability was not associated with their economic or educational level. This contradicts the hypothesis that the existence of the healthy migrant effect in Spain is due to their socioeconomic characteristics (higher education level compared with immigrant populations in other European countries) [26,28]. This could indicate that in this group, there are different social patterns that make a different perception of disability, especially considering the great cultural diversity that exists in this population group. A plausible explanation could be that those who migrate to work in other countries are those who have better health and no limitations, especially among those with the least qualified and lowest income jobs, due to the difficulties and risks of migration [19,20]. 
On the other hand, people from wealthy countries residing in preferred retirement areas in Spain have shown better health outcomes than natives, while immigrants from those countries residing in the rest of Spain have shown an unhealthy migrant effect, so community context has shown to be a strong determinant among immigrants [49]. A large proportion of those European retirement migrants among the immigrant population over 50 could explain why there is no increase in disability risk in older and retired people, as immigrants with disabilities are more likely to return to their countries [19,20]. In any case, despite the type of occupation and the mediation of physical and mental problems, newcomer immigrants seem to have some disability protective factors. Acculturation and social behavior assimilation could be determinants in the disabling process [24,25]. New evidence will be needed in order to test these hypotheses. To the best of the authors' knowledge, this is the first study that evaluates the GALI in the Roma population, so there is no background with which to compare the results. However, considering the social exclusion they suffer, those results were expected. Evidence shows that Roma people are one of the most discriminated minority groups in Spain, being usually prejudged and having greater difficulties in finding a house or a job [32]. This has led to a tendency to cluster in the same neighborhoods and jobs, with notable social segregation [33]. In these cultural contexts, women often assume very rigid family-caring roles and have difficulties in being cared for [50]. In fact, the gender pattern observed in disability contrasts with other studies describing poorer health in women [34,51]. Furthermore, the results show that the measures of socioeconomic and health inequality used for the native population are insufficient to explain the differences in disability of this ethnic minority. The fact that the Roma are one of the ethnic groups with the worst health results in Spain is also reinforced, which makes public health action a priority in order to make visible the situation of social exclusion suffered by Roma and to promote inclusive policies to avoid multiple discrimination and multi-dimensional exclusion. These measures should include political action to protect all workers, in particular immigrants and ethnic minorities, and to enforce existing legislation, surveillance and health promotion at workplaces, improvements in occupational healthcare access, and improvements in communication with preventive health workers [52]. It is also necessary to implement social, labour, and health policies in order to integrate the Roma population more effectively. Institutions and civil society should take action to render the health needs visible and to implement measures on different levels, from economic integration policies to the training of health professionals in the specific problems of the Roma community. Similarly, it is also necessary to promote the active participation of the Roma population in their own health. Researchers should also make an effort to improve the instruments to monitor this population, given the limitation in the information to study the Roma community [53]. This study has the inherent limitations of a cross-sectional study, as well as limitations due to data sources. The first data source limitation is due to the fact that the data from both surveys have been collected in different years, which could slightly affect the comparability of the results. 
In addition, the ENSE does not ask about ethnicity or culture of origin. So, another limitation is that, as said in the methods, the Roma population could be included to some extent in the native population, not being possible to analyse the information of this population group separately. It was also not possible to study the immigrant population according to the culture of origin. One limitation is due to the ENSPG, since even if it was designed to be comparable with the ENSE, there are considerable differences. It includes a limited list of illnesses, compared to the list of 29 illnesses in the ENSE. In this study, only the physical and mental illnesses that are common in both surveys had to be considered (12 illnesses included in the ENSPG). Furthermore, it is not asked whether these diseases have been confirmed by a doctor, so the prevalence could be overestimated. Then again, the list of mental illnesses only asks about depression or others, not asking about anxiety, so the prevalence may be underestimated. Conclusions In conclusion, exclusion is a strong determinant of disability, in particular in ethnic minorities and immigrants. Groups such as the Roma suffer a higher risk of disability due to multiple marginalising factors. In addition, when interacting with gender, the effects of social exclusion increase. Then, intersectional approaches should be adopted in order to better understand this process. On the other hand, there is a healthy migrant effect in Spain, showing a lower prevalence of disability, independently of the different demographic, socioeconomic, and health variables. Nevertheless, this effect tends to disappear over the time of residence. Given the existing cultural and ethnic diversity and the large number of immigrants who are long-term residents in Spain, it is necessary to consider the findings of this study and to take occupational health and preventive measures so that the immigrants' health does not deteriorate over time. The results of this study constitute a new set of evidence that the high levels of disability among the Roma, especially among women, as compared to the rest of the population, are a result of the years of social exclusion through which they have lived.
Hypothalamic food intake regulation in a cancer-cachectic mouse model Background Appetite is frequently affected in cancer patients leading to anorexia and consequently insufficient food intake. In this study, we report on hypothalamic gene expression profile of a cancer-cachectic mouse model with increased food intake. In this model, mice bearing C26 tumour have an increased food intake subsequently to the loss of body weight. We hypothesise that in this model, appetite-regulating systems in the hypothalamus, which apparently fail in anorexia, are still able to adapt adequately to changes in energy balance. Therefore, studying changes that occur on appetite regulators in the hypothalamus might reveal targets for treatment of cancer-induced eating disorders. By applying transcriptomics, many appetite-regulating systems in the hypothalamus could be taken into account, providing an overview of changes that occur in the hypothalamus during tumour growth. Methods C26-colon adenocarcinoma cells were subcutaneously inoculated in 6 weeks old male CDF1 mice. Body weight and food intake were measured three times a week. On day 20, hypothalamus was dissected and used for transcriptomics using Affymetrix chips. Results Food intake increased significantly in cachectic tumour-bearing mice (TB), synchronously to the loss of body weight. Hypothalamic gene expression of orexigenic neuropeptides NPY and AgRP was higher, whereas expression of anorexigenic genes CCK and POMC were lower in TB compared to controls. In addition, serotonin and dopamine signalling pathways were found to be significantly altered in TB mice. Serotonin levels in brain showed to be lower in TB mice compared to control mice, while dopamine levels did not change. Moreover, serotonin levels inversely correlated with food intake. Conclusions Transcriptomic analysis of the hypothalamus of cachectic TB mice with an increased food intake showed changes in NPY, AgRP and serotonin signalling. Serotonin levels in the brain showed to correlate with changes in food intake. Further research has to reveal whether targeting these systems will be a good strategy to avoid the development of cancer-induced eating disorders. Electronic supplementary material The online version of this article (doi:10.1007/s13539-013-0121-y) contains supplementary material. Introduction Anorexia affects 60-80 % of all patients with cancer and considerably contributes to disease-related malnutrition and cachexia, which in turn strongly affect patient's morbidity, mortality and quality of life [1]. Anorexia is often linked to cachexia, a complex metabolic syndrome associated with underlying illness which is characterised by progressive loss of muscle (muscle wasting) with or without loss of fat mass resulting in weight loss [2]. Although anorexia and cachexia are likely to be initiated by similar pathologies, several lines of evidence suggest that both conditions progress via distinct mechanisms. However, the presence of cachexia makes it difficult to disentangle the primary underlying mechanisms of cancer anorexia since this might be due to tumour growth, cachexia progression or other diseaserelated mechanisms. Cancer anorexia is generally considered to be a multifactorial condition. Contributing to its complexity is the observation that evolution has developed powerful physiological mechanisms favouring food intake. It has been shown that upon shifting the balance to anorexia, pathways can become redundant when they are not functioning properly. 
This is for example shown by data obtained from studying knockout animals for well-known food intake regulators, the NPY knockout mouse [3], the AgRP knockout mouse [4] or the ghrelin knockout mouse [5]. These mice display regular food intake and body weight regulation despite the loss of a significant key modulator in appetite regulation. The difficulties encountered in studying cancer anorexia inspired us to approach the problem from a different angle. Cancer-induced anorexia is suggested to be predominantly caused by the inability of the hypothalamus to respond adequately to pivotal peripheral signals involved in appetite regulation [6]. This hypothalamic resistance to peripheral neuroendocrine signals is believed to be due to the increase in proinflammatory cytokines resulting from tumour growth [6]. In this study, we report on hypothalamic gene expression profiles in a cancer-cachectic model with increased food intake. In this model, appetite-regulating systems, which apparently fail in anorexia, are still able to adapt adequately to changes in energy balance. By applying transcriptomics, many appetite-regulating systems in the hypothalamus could be taken into account. Here, we provide an overview of changes that occur in the hypothalamus during tumour growth which could be important in the development of cancer-induced eating disorders. Animals were individually housed 1 week before start of the experiment in a climate-controlled room (12:12 dark-light cycle; 21°C ± 1°C). Mice were placed on a standard ad libitum diet (AIN93M, research Diet Services, The Netherlands) and had free access to water. Murine C26 adenocarcinoma cells were cultured and suspended as described previously [7]. Under general anaesthesia (isoflurane/N2O/O2), tumour cells in 0.2 ml HBSS were inoculated subcutaneously into the right inguinal flank. Controls were sham-injected with 0.2 ml HBSS. All experimental procedures were approved by the Animal Ethical Committee (DEC, Bilthoven, The Netherlands) and complied with the principles of good laboratory animal care. Experimental design On day 0, tumour cells were injected. BW, food intake and tumour size were measured three times a week. Tumour size was determined by measuring the length and width of the tumour with a calliper. On day 20, body composition was determined by DEXA (Lunar, PIXImus). Subsequently, blood was collected by cardiac puncture. After sacrifice, brain, hypothalamus, organs and lower leg skeletal muscles were weighted and frozen at −80°C. Two studies were performed with similar settings: study A was a pilot study to optimise experimental conditions and was followed by study B. Table 1 shows the number of tumour cells used for inoculation in the different groups that were included in the two studies. Blood plasma amino acids and cytokines Amino acids were measured by using HPLC with orthophthalaldehyde as derivatization reagent and L-norvaline as internal standard (Sigma Aldrich). The method was adapted from van Eijk et al. [8]. Serotonin and dopamine levels Hypothalamic samples were used for microarray experiments, while remaining brain parts were used to determine serotonin and dopamine levels. Brains were homogenized in 1 ml containing 40 mM Tris, 1 mM EDTA, 5 mM EGTA, 0.50 % Triton X-100 and PhosSTOP phosphatase inhibitor (Roche Nederland, The Netherlands). Citric acid (1 %) was added to prevent serotonin oxidation. Serotonin and dopamine levels were measured using enzyme-immunoassay kits (BAE-5900, BAE-5300, LDN, Nordhorn, Germany). 
Statistics Data was analysed by statistical analysis of variance followed by a post hoc Tukey's multiple comparison/Bonferroni test or by a Student's t test. Differences were considered significant at a two-tailed P <0.05. Statistical analyses were performed using Graphpad Prism 5. For statistical analysis of microarray data, see microarray section (below). Microarray studies Total RNA from the hypothalamus was isolated by using RNeasy Lipid tissue kit (Qiagen, Venlo, The Netherlands). RNA concentrations were measured by absorbance at 260 nm (Nanodrop). RNA quality was checked using the RNA 6000 Nano assay on the Agilent 2100 Bioanalyzer (Agilent Techologies, Amsterdam, The Netherlands) according to the manufacturer's protocol. For each mouse, total RNA (100 ng) was labelled using the Ambion WT expression kit (Life Technologies, Bleiswijk, The Netherlands). Microarray For both studies A and B, samples were pooled for each group. Also, individual samples from study B were included in a subsequent microarray experiment to confirm the findings on appetite regulators and canonical pathways. In this microarray experiment, four control samples and five samples from tumour-bearing mice were included in this experiment; however, one control sample gave various spots on the array and was therefore excluded from analysis. Array data were analysed using an in-house online system [9]. Shortly, probe sets were redefined according to Dai et al. [10] using remapped CDF version 15.1 based on the Entrez Gene database. In total, these arrays target 21,225 unique genes. Robust multi-array analysis was used to obtain expression values [11,12]. For study B, we only took genes into account that had an intensity >20 on at least two arrays, had an interquartile range throughout the samples >0.1 and had at least seven probes per genes. In total, 8,763 genes passed the filter. Genes were considered differentially expressed at P < 0.05 after intensity based moderated t-statistics [13]. Further functional interpretation of the data was performed through the use of IPA (Ingenuity® Systems, www.ingenuity.com). Canonical pathway analysis identified the pathways from the IPA library of canonical pathways that were most significant to the data set. Genes from the data set that met the cutoff of 1. 3-fold change and p value cutoff of 0.05 and were associated with a canonical pathway in the Ingenuity Knowledge Base were considered for the analysis. Array data have been submitted to the Gene Expression Omnibus accession number GSE44082. Body weight and food intake In study A, tumour size and tumour weight did not increase correspondingly to the number of tumour cells injected (Fig. 1b, c). However, carcass weight, epididymal fat pad weight and skeletal muscle weight decreased proportionally to the number of tumour cells injected, suggesting that body wasting increases with tumour load despite the weight of the tumour being similar (Supplementary table S1). Food intake in all tumour-bearing animals was found to increase after 15 days. At day 19, tumour-bearing (TB) mice in TB-0.5 and TB-1 groups ate approximately 45 % more than the controls. An increase of food intake in TB mice was again noticed in subsequent study B (Fig. 1a, d). In this study, food intake of TB mice was 40 % higher than controls at day 19. On day 13, after tumour inoculation, TB mice started to lose body weight (BW). 
Synchronously to the decline in body weight, an increase in food intake in TB mice was measured, suggesting compensatory eating by TB mice in order to cope with loss of BW. The loss of lean mass, fat mass and skeletal muscle weight in TB mice in study B was comparable with that of study A, showing that the level and severity of cachexia developed in TB animals was similar in both studies (Supplementary table S1).

Microarray analysis of the hypothalamus
The heat map in Fig. 2 shows fold changes of orexigenic and anorexigenic gene expressions. Orexigenic neuropeptide Y (NPY) and agouti-related protein (AgRP) expression were found to be significantly higher by 1.9 and 1.6-fold, respectively, in TB mice. Orexigenic ghrelin expression was comparable between TB mice and controls. However, expression of the growth hormone-secretagogue receptor (GHsR), which mediates ghrelin signalling, showed to be slightly higher by 1.2-fold. In addition, growth hormone (GH) expression, which also acts via GHsR and stimulates food intake, showed to be highly upregulated in TB mice. Expression of anorexigenic somatostatin showed to be 1.2-fold higher in TB mice compared to controls. Somatostatin is a strong negative feedback regulator of GH, suggesting that its upregulation could be a result of increased GH expression. Anorexigenic pro-opiomelanocortin (POMC) and cholecystokinin (CCK) expressions were slightly lower in TB by 1.1-fold and 1.2-fold, respectively. PYY, leptin and glucagon expression were not included in the analysis because absolute expressions were below threshold.

In addition to analysis of appetite regulators, a list of highly upregulated genes was generated. Genes that were upregulated with a fold change above 1.5 in both studies A and B resulted in a list of 19 genes that were highly upregulated in both studies (Supplementary table S2). Lipocalin 2 and leucine-rich α2-glycoprotein 1 are both discussed for their role in tumour progression and for being potential biomarkers for cancer progression [14,15], and secretoglobin (Scgb3a1) is considered a strong tumour suppressor [16]. Lipocalin 2 expression in the hypothalamus has been reported to be strongly elevated upon influenza infection in mice, suggesting that lipocalin 2 in the brain is able to sense inflammatory stressors from the periphery [17]. The strong upregulation of other inflammatory genes, such as the interleukin 1 receptor and the oncostatin M receptor, in both studies contributes to the idea of an elevated inflammatory status in the hypothalamic area.

[Fig. 2 legend: Heat map representation of fold changes of orexigenic and anorexigenic genes in the hypothalamus in studies A and B, measured on Affymetrix arrays. Each row represents a gene and each column a group of animals; fold changes are relative to the corresponding control group. In study A, RNA samples were pooled per group (TB-0.5, injected with 0.5×10⁶ tumour cells; TB-1, injected with 1×10⁶ tumour cells). In study B (1×10⁶ tumour cells), both pooled samples (pools) and the mean of individual replicates (mean) were analysed. Red, expression higher than control; green, lower than control; black, similar to control; grey, filtered out (NA) because absolute expression was below the predefined thresholds (see Methods). ID, Entrez ID; R, receptor; NA, not analysed.]
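As a small illustration of a fold-change heat map of the kind summarised in the Fig. 2 legend above, the sketch below draws such a plot with matplotlib. The gene list, groups, and log2 fold-change values are invented for demonstration and do not reproduce the published data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical log2 fold changes (TB vs. control) for a few appetite genes
# across the study-A and study-B groups; values are illustrative only.
genes = ["Npy", "Agrp", "Ghsr", "Gh", "Sst", "Pomc", "Cck"]
groups = ["A: TB-0.5", "A: TB-1", "B: pools", "B: mean"]
log2fc = np.array([
    [0.8, 0.9, 0.9, 0.9],
    [0.6, 0.7, 0.7, 0.6],
    [0.2, 0.3, 0.3, 0.2],
    [1.5, 1.8, 1.6, 1.7],
    [0.2, 0.3, 0.3, 0.2],
    [-0.1, -0.2, -0.1, -0.1],
    [-0.3, -0.2, -0.3, -0.2],
])

fig, ax = plt.subplots(figsize=(5, 4))
# Reversed RdYlGn colormap: high fold changes appear red, low ones green.
im = ax.imshow(log2fc, cmap="RdYlGn_r", vmin=-2, vmax=2)
ax.set_xticks(range(len(groups)), labels=groups, rotation=45, ha="right")
ax.set_yticks(range(len(genes)), labels=genes)
fig.colorbar(im, ax=ax, label="log2 fold change vs. control")
fig.tight_layout()
plt.show()
```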
Pathway analysis: serotonin and dopamine signalling
Pathway analysis using Ingenuity Systems showed that the serotonin (5-HT) receptor signalling pathway was significantly altered (P < 0.05) in the hypothalamic tissues of TB mice (Supplementary figure 1). Expression of genes involved in both 5-HT synthesis and 5-HT degradation showed to be lower in TB mice than in controls, pointing towards a compensatory mechanism regulating expression of these enzymes. Pathway analysis further showed that besides 5-HT signalling, also dopamine (DA) signalling was altered (Supplementary figure 1). Several genes involved in 5-HT signalling are also of importance in dopamine signalling. Changes in these shared genes between the 5-HT and DA pathways are therefore likely to have an effect on both neurotransmitters. Expression of gch1, qdpr and ddc, which are involved in the synthesis of both 5-HT and DA, was strongly downregulated. Also, the transporter vmat, which is important in transporting 5-HT and DA into the neuronal synapse, showed to be 1.7-fold lower in TB mice compared to controls. Tryptophan hydroxylase (tph) and tyrosine hydroxylase (th), rate-limiting enzymes in the synthesis of 5-HT or DA, respectively, were also strongly downregulated. In addition, SERT and DAT, the re-uptake transporters that terminate 5-HT and DA activation in the synaptic cleft, respectively, showed to be more than twofold lower in TB mice. This indicates that, besides the genes shared between the 5-HT and DA pathways, genes specifically involved in either DA or 5-HT synthesis were also altered. Figure 3a shows an overview of genes involved in 5-HT and DA signalling and their fold changes. To determine the effects of these changes in gene expression, 5-HT and DA levels were measured. Serotonin levels showed to be significantly lower in the TB mice, whereas DA levels showed not to be different in TB mice compared to control animals (Fig. 3c-e). Since both DA and 5-HT have been discussed for their role in food intake and feeding behaviour, correlations between these neurotransmitters and food intake were studied. Serotonin levels were found to correlate with food intake in both C and TB mice, while this correlation could not be made for DA and food intake (Fig. 3d-f).

Plasma amino acid levels and immune parameters
In study B, levels of various amino acids in plasma were measured (Supplementary table S3). TRP levels relative to branched-chain amino acids (BCAA) are often used as a predictor for 5-HT status in the brain. Surprisingly, TRP/BCAA ratios showed to be significantly higher in TB animals compared to controls (Fig. 4a). TNFα levels showed to decrease, while pro-inflammatory mediators IL-6 and PGE2 showed to be significantly elevated.

Discussion
In the present study, we report on the hypothalamic gene expression profile in C26 tumour-bearing mice that show an increase in food intake concomitant with body weight loss. It is likely that in these TB mice, hypothalamic appetite-regulating systems respond and adapt adequately to changes in energy balance resulting from tumour growth, although other causes for this compensatory eating behaviour (e.g. stress from the tumour) are difficult to exclude. At the same time, in situations where cancer anorexia develops, food intake regulation seems to fail.
By studying changes in the hypothalamus in response to disturbed energy balance during tumour growth, we aim to discover new targets for prevention or treatment of cancer anorexia. Here, we show expression of important orexigenic genes to be increased, while expression of anorexigenic genes decreased. The downregulation of the complete serotonin signalling cascade in TB mice is remarkable. To our knowledge, this is the first study showing that serotonin synthesis, degradation and synaptic release are affected during tumour growth and that subsequent changes in serotonin levels are correlated to changes in food intake.

The observed increase in food intake in these C26 TB mice in both experiments has not yet been reported. The C26 cancer-cachexia mouse model as described in 1990 by Tanaka et al. [18] is often referred to as "the standard" for the C26 model. With this setup, cachexia develops, which is reflected in a decrease in muscle weight as well as adipose tissue depletion. Our findings on cachexia in the present model correspond to the results found by Tanaka et al., but not with regard to food intake.

[Fig. 3 legend: Canonical pathway analysis with IPA (Ingenuity® Systems) revealed the serotonin receptor signalling and dopamine receptor signalling pathways as significantly changed in tumour-bearing mice in both studies. a Overview of the serotonin and dopamine signalling pathways and their overlapping genes (Gch1, Qdpr, DDC, VMAT and the degrading enzymes MAO and ALDH); genes necessary for the synthesis of serotonin/dopamine, as well as genes involved in terminating serotonin/dopamine signalling in the synapse, showed to be downregulated. b Heat map of fold changes of the numbered genes; genes 1-5 are involved in both serotonin and dopamine signalling. c Serotonin level in brain relative to control mice. d Correlation of serotonin with food intake. e Dopamine level in brain relative to control mice. f Correlation of dopamine with food intake. Values are mean ± SEM; C, sham-injected control; TB, injected with 1×10⁶ tumour cells; *P < 0.05 vs. C.]

A specific characteristic for this model is that in this particular setting, food intake of TB mice does not change and is not different from that of healthy controls. However, in the meantime, various research groups have reported a strong decrease in food intake in mice injected with these C26 cancer cells [19,20], suggesting that changes in morphology of the cell line, variation in the strain of mice and differences in the number of tumour cells used for inoculation might lead to these discrepancies in findings on food intake. It has already been reported that C26-induced cachexia and anorexia can vary according to the inoculation site [21] and origin of C26 cells [22], and that the use of solid tumour fragments or cell suspensions for inoculation can also cause variation [23].
Also, adaptation of C26 cells to in vitro culture conditions can cause mutations in the cell line leading to changes in cell characteristics, sensitivity to chemotherapy, metastatic potential and tumour-induced cachexia in mice, suggesting that C26 cells can differentiate to different variants and change tumour characteristics despite being derived from the same source [24]. Subsequently, the extent and type of inflammatory response that is induced by tumour growth might play a role in the severity of cachexia and anorexia. Differences in tumourdriven inflammation, might therefore explain differences between various cancer models. To confirm tumour-induced inflammation in our model, various cytokines and PGE 2 levels in blood plasma were measured. IL-6 and PGE 2 showed to be elevated in TB mice, which also has been reported previously [25]. However, in contrast to previous results, TNFα levels in blood plasma were not elevated in TB mice compared to control mice. Elevated concentrations of TNFα are reported to decrease food intake [26], suggesting that the absence of TNFαmediated inflammation might play a role in compensating feeding behaviour in this model. All together, we would like to propose the hypothesis that although the "C26 model" is referred to as such, in fact the model is heterogeneous with many varieties. Small differences in experimental settings and spontaneous mutations in the cell line used might lead to great changes in characteristics of the model. In the present study, pathway analysis indicates serotonin (5-HT) and dopamine (DA) signalling to be altered in TB mice compared to controls. DA and 5-HT are both important neurotransmitters involved in eating behaviour. The signalling pathways of 5-HT, DA and DA metabolites norepinephrine and epinephrine are closely linked by shared synthesising enzymes and transporters. Therefore, it is very likely that changes in these shared genes will propose these comprehensive effects. Since both pathways were predicted to be altered, we measured 5-HT and DA levels in whole brain homogenates. Serotonin levels were found to be significantly lower in TB mice compared to control. This might be caused by decreased TPH and SERT expression, which have been directly correlated to lowered 5-HT levels in other studies [27,28]. However, DA levels in TB mice showed not to be different from levels in controls, suggesting that effects on expression of shared genes are of greater impact on 5-HT synthesis than on DA synthesis. In addition, in relation to changes in food intake in TB mice, only 5-HT levels showed to inversely correlate with food intake whereas DA levels did not. A limitation of the present setup is that gene pathway analysis was based on hypothalamic transcripts whereas analysis of 5-HT and DA levels took place in homogenates of remaining brain material. Therefore, levels of these neurotransmitters reflect an indication and local differences in the various regions of the brain in both DA and 5-HT cannot be ruled out. Overall, our results suggest that primarily 5-HT is associated with altered food intake regulation caused by tumour growth. This is consistent with findings reported by other research groups. In MCA tumour-bearing rats which showed clear anorexia, 5-HT levels were elevated in the PVN of the hypothalamus [29]. This elevation of 5-HT showed to be clearly tumour-driven since it did not occur in the pair-fed controls. In addition, 5-HT levels were restored after tumour resection. 
On the other hand, also DA levels in the hypothalamus were reported to be decreased in that study. However, this decrease in DA level was also found in the pair-fed control group to a similar extent as observed in TB rats. This suggests that the decrease DA in the hypothalamus was a consequence and not a direct cause of decreased food intake. Dopamine has shown to play a role in mechanisms induced during and after feeding, such as rewarding mechanisms [30]. For example, DA levels in the hypothalamus have been shown to increase directly after eating and the magnitude of DA response is relative to the size of meal ingested [31]. Our results, together with the existing literature, suggest that DA is not a direct causative factor in the development of cancer anorexia since it is not induced by tumour growth but decreases subsequent to a reduction in food intake. However, DA is likely to play a role in sustaining cancer anorexia once this has been manifested. Long-term alterations in DA in the hypothalamus are suggested to affect feeding pattern [32] and treatment with Ldopa, precursor of DA, has been shown to be beneficial in restoring appetite in severely anorectic cancer patients [33]. In addition, an increase in hypothalamic expression of several DA receptors (DRD), including DRD2, during tumour growth in anorectic TB rats might play a role in sustainment of cancer anorexia [32]. Our results support this finding, as we found a decreased expression of this receptor in TB mice with compensatory feeding behaviour. In summary, our results suggest that changes in 5-HT signalling and 5-HT levels contribute to compensatory eating during tumour growth. Serotonin is considered an important mediator in the regulation of satiety and hunger [34]. High brain levels induce satiation, whereas lowered levels stimulate food intake. In cancer, elevated brain serotonin has been suggested to play a crucial role in the development of anorexia [6,29]. On the other hand, lowered serotonin levels and downregulation of SERT are discussed for their role in eating abnormalities and hyperphagia in obesity [35,36]. Next to changes on 5-HT signalling and 5-HT levels, also tryptophan (TRP) metabolism appeared to change in TB mice. TRP/BCAA ratios in plasma showed to be increased in TB animals. TRP, precursor of 5HT, competes with BCAA at the blood brain barrier. Therefore, plasma TRP/BCAA ratio is used as predictor for 5-HT levels in the brain and is often linked to food intake. From this perspective, an elevated TRP/ BCAA ratio would result in increased TRP availability for serotonin synthesis in the brain and subsequently higher brain 5-HT levels. However, inconsistencies in this theory have been reported. Several reports show that plasma TRP levels do not predict TRP in brain and consequently brain 5-HT levels [37,38]. In addition, plasma TRP/BCAA ratio as predictor for changes in food intake [39], appetite [40] and satiety [41] has been reported to fail in several studies. Amino acid profiles in blood reflect skeletal muscle status and total protein metabolism in the body and is dependent on the physical status of the subject [42]. In the case of severe cachexia, it could be that large metabolic alterations in muscle [43] and the presence of insulin resistance in the muscle [44] might distort amino acid profiles in blood in order to predict brain 5-HT levels via TRP ratios adequately. In the present study, various appetite regulators were studied for their role in the observed increased food intake in TB mice. 
AgRP and NPY expressions were highly upregulated in TB mice. Central infusion of AgRP in cachectic C26 tumourbearing mice results in an increase in food intake [45], which supports our findings. However, increased expression of NPY and its relation to potentiate feeding in this study is more difficult to interpret, as messenger NPY has been reported to not correlate with NPY levels in the hypothalamus in cancercachectic conditions [46]. Several studies have shown that in cachectic and anorectic TB mice [47] and rats [46], messenger NPY is also elevated. However, translation of messenger NPY or transport of NPY to NPY terminals showed not to correspond to mRNA changes shown by measurements of NPY levels and immunohistochemistry [46]. Serotonin has been discussed to play a role in this imbalance between messenger NPY and NPY signalling in feeding behaviour in cancer anorexia [29]. Inhibition of 5-HT signalling showed to increase NPY levels [48], while induction of 5-HT signalling reduced NPY levels in rats [49]. All together, this suggests that 5-HT signalling can interfere with NPY synthesis or transport. Therefore, it could be that in the current study, decreased 5-HT levels and lowered 5-HT signalling might preserve NPY signalling. In this study, we report on the transcriptomic analysis of a cancer-cachectic model with an increased food intake. In this model, appetite-regulating systems, of which failure might contribute to anorexia, are able to adapt properly to changes in energy balance. We showed that alterations in NPY, AgRP and serotonin signalling are likely to explain compensatory eating behaviour of mice bearing a C26 tumour. Therefore, targeting these systems might offer promising strategies to avoid the development of cancer-induced anorexia.
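As a closing illustration of the correlation analysis between brain serotonin and food intake reported above, here is a minimal sketch using scipy; the per-animal values are simulated, since the underlying measurements are not part of this text, and the inverse relationship is built into the toy data only to mirror the direction reported by the authors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated per-animal data: brain serotonin (hypothetical units) and average
# daily food intake (g/day) for control and tumour-bearing mice.
serotonin = np.concatenate([rng.normal(100, 10, 8), rng.normal(75, 10, 8)])
food_intake = 6.0 - 0.02 * serotonin + rng.normal(0, 0.3, 16)

r, p = stats.pearsonr(serotonin, food_intake)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # expect a negative (inverse) correlation
```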
Risk Factors Associated with Mechanical Ventilation in Critical Bronchiolitis The American Academy of Pediatrics (AAP) recommends supportive care for the management of bronchiolitis. However, patients admitted to the intensive care unit with severe (critical) bronchiolitis define a unique group with varying needs for both non-invasive and invasive respiratory support. Currently, no guidance exists to help clinicians discern who will progress to invasive mechanical support. Here, we sought to identify key clinical features that distinguish pediatric patients with critical bronchiolitis requiring invasive mechanical ventilation from those that did not. We conducted a retrospective cohort study at a tertiary pediatric medical center. Children ≤2 years old admitted to the pediatric intensive care unit (PICU) from January 2015 to December 2019 with acute bronchiolitis were studied. Patients were divided into non-invasive respiratory support (NRS) and invasive mechanical ventilation (IMV) groups; the IMV group was further subdivided depending on timing of intubation relative to PICU admission. Of the 573 qualifying patients, 133 (23%) required invasive mechanical ventilation. Median age and weight were lower in the IMV group, while incidence of prematurity and pre-existing neurologic or genetic conditions were higher compared to the NRS group. Multi-microbial pneumonias were diagnosed more commonly in the IMV group, in turn associated with higher severity of illness scores, longer PICU lengths of stay, and more antibiotic usage. Within the IMV group, those intubated earlier had a shorter duration of mechanical ventilation and PICU length of stay, associated with lower pathogen load and, in turn, shorter antibiotic duration. Taken together, our data reveal that critically ill patients with bronchiolitis who require mechanical ventilation possess high risk features, including younger age, history of prematurity, neurologic or genetic co-morbidities, and a propensity for multi-microbial infections. Introduction Although viral bronchiolitis is usually a self-limited affliction characterized by lowgrade fever, congestion, and rhinorrhea, severity of symptoms can be both variable and unpredictable, resulting in a 2-3% admission rate amongst infants, making it the most common cause of hospitalization in the first year of life [1][2][3][4][5]. Despite the wide spectrum of etiologic causes, respiratory syncytial virus (RSV), which accounts for up to 80% of cases, has been linked with more severe disease compared to non-RSV bronchiolitis, especially in the premature population [1,3,[6][7][8]. Non-invasive respiratory support (NRS) systems, including heated high-flow nasal cannula (HHFNC), continuous positive airway pressure (CPAP), bilevel positive airway pressure (BiPAP), and RAM cannula, are commonly used to address acute respiratory failure, both in the acute care and critical care settings. Although RAM cannula is approved as a class 1 oxygen delivery device, proper fitting prongs that occupy 60-70% of the nares can deliver positive pressure in neonates and infants [13]. Randomized trials in adults have shown a reduction in intubation rates with the use of NRS for acute respiratory failure, which has translated into practice guidelines by the European Respiratory Society and the American Thoracic Society [14][15][16]. Similarly in pediatrics, NRS has shown success in decreasing the need for invasive mechanical ventilation (IMV) [17,18]. 
In critical bronchiolitis, HHFNC alone was shown to dramatically reduce intubation rates [19][20][21]. Thus, NRS has become a favored mode of treating respiratory failure with hypoxia and/or hypercarbia secondary to critical bronchiolitis [22][23][24][25]. Even with NRS support, a subset of patients with bronchiolitis tend to worsen over their disease course and eventually require mechanical ventilation, representing between 2-10% of all bronchiolitis admissions. Amongst critically-ill children, intubation rates vary from 10-15% for RSV bronchiolitis to 25% for non-RSV bronchiolitis [9,10]. In assessing responsiveness to NRS, a single-site study in a small cohort of patients identified an FiO 2 of >0.8 for up to 60 min as a criterion for failure [26]. However, no other guidelines or clinical characteristics have been identified to distinguish pediatric patients with acute respiratory failure who may need invasive mechanical ventilation to adequately address their oxygenation and ventilatory needs. In this study, we examined patients with acute respiratory failure in the setting of critical bronchiolitis managed on NRS to identify clinical and demographic features that differentiated the NRS group from the IMV group. We also compared patients intubated early in their disease course with those intubated later to determine if there were inherent differences between these groups. With our findings, we produced a predictive model using selected demographic and clinical data that may prove helpful in early identification of pediatric patients with critical bronchiolitis likely to require mechanical ventilation. Setting The Children's Hospital & Medical Center (CHMC), Omaha, is a 145-bed tertiary pediatric medical center and the only free-standing pediatric hospital in Nebraska. CHMC houses a 32-bed combined cardiac/non-cardiac pediatric intensive care unit with an annual admission rate of approximately 1100 patients, an average daily census of 21, and a standardized mortality ratio of 0.87. Study Design Our study received approval from the Institutional Review Board (IRB) of the University of Nebraska Medical Center (UNMC) as a minimal risk study with a waiver of informed consent (Protocol: 655-17-EP). We adhered strictly to the ethical principles outlined in the Declaration of Helsinki (2013) and were HIPAA compliant. For this retrospective cohort study, the electronic medical record (EMR) was interrogated for all pediatric patients admitted to the PICU with a diagnosis of acute bronchiolitis from January 2015 to December 2019. Eligibility For inclusion in the study, patients had to be: (1) ≥37 weeks corrected gestational age, older than 72 h, and ≤2 years old, (2) carry an ICD-9 or ICD-10 diagnosis of acute bronchiolitis (refer to Appendix A), and (3) be managed on NRS excluding HHFNC, i.e., CPAP, BiPAP, and/or RAM cannula, or IMV. Patients were excluded for: (1) never requiring higher NRS than HHFNC during their PICU stay, (2) baseline chronic ventilatory support, (3) congenital heart disease with single ventricle physiology, and (4) immediate post-operative status. Variables Demographic and historical information included age and weight at PICU admission, gender, gestational age at delivery, any pre-existing neurologic or genetic conditions, selfreported race and ethnicity, and insurance type (as a surrogate for socioeconomic status). 
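To make the eligibility rules described above concrete, the sketch below applies the stated inclusion and exclusion criteria to a toy EMR extract and assigns patients to the NRS or IMV group. The column names, support labels, and toy values are assumptions for illustration, not the study's actual extraction code.

```python
import pandas as pd

# Hypothetical EMR extract; one row per PICU admission (toy values).
df = pd.DataFrame({
    "age_months":         [3, 15, 30, 5],
    "corrected_ga_weeks": [39, 37, 40, 35],
    "age_hours":          [2000, 9000, 20000, 3000],
    "icd_bronchiolitis":  [True, True, True, True],
    "max_support":        ["IMV", "BiPAP", "CPAP", "RAM"],
    "chronic_vent":       [False, False, False, False],
    "single_ventricle":   [False, False, False, False],
    "post_op":            [False, False, False, False],
})

# Inclusion: <=2 years old, >=37 weeks corrected gestational age, >72 h old,
# a bronchiolitis ICD code, and support beyond HHFNC (CPAP/BiPAP/RAM) or IMV.
# Exclusion: chronic ventilation, single-ventricle physiology, post-operative status.
eligible = (
    (df["age_months"] <= 24)
    & (df["corrected_ga_weeks"] >= 37)
    & (df["age_hours"] > 72)
    & df["icd_bronchiolitis"]
    & df["max_support"].isin(["CPAP", "BiPAP", "RAM", "IMV"])
    & ~df["chronic_vent"] & ~df["single_ventricle"] & ~df["post_op"]
)
cohort = df[eligible].copy()
cohort["group"] = (cohort["max_support"] == "IMV").map({True: "IMV", False: "NRS"})
print(cohort[["age_months", "max_support", "group"]])
```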
Hospital course information included duration of intubation, ventilator-free days, PICU length of stay (LOS), severity of illness based on the pediatric index of mortality-III risk of mortality (PIM-III ROM) score [27], vasoactive medication use, and in-hospital mortality. Data related to infecting pathogens included number and type of infecting pathogens, white blood cell (WBC) count closest to the time of intubation, percent bands closest to the time of intubation, procalcitonin closest to the time of intubation, use and duration of antibiotics, and timing of antibiotic initiation relative to intubation. Pathogen positivity in the NRS group was based on respiratory viral panel testing (turnaround ~60 min) at the time of admission, which tests for the following pathogens: adenovirus, coronavirus (229E, HKU1, NL63, OC43), human metapneumovirus, rhinovirus/enterovirus, influenza A and B, parainfluenza 1-4, respiratory syncytial virus, Bordetella pertussis, Chlamydophila pneumoniae, and Mycoplasma pneumoniae. In the IMV group, tracheal aspirates were obtained at the time of intubation or admission (if transferred intubated) and sent for respiratory culture and Gram stain. Definitions Critical bronchiolitis was defined as any acute bronchiolitis diagnosis necessitating management in the PICU for risk of impending respiratory failure. At our institution, transfer to the PICU is usually triggered when a child's respiratory support exceeds 2 L/kg/min of flow through a high-flow nasal cannula delivery system. Acute respiratory failure was defined as needing non-invasive respiratory support to maintain oxygen saturation ≥ 88% and/or to address work of breathing. Non-invasive respiratory support (NRS) included CPAP, BiPAP, and RAM cannula; HHFNC was excluded since many patients at our center on HHFNC are managed on the medical/surgical floor, as at many other centers [5,28,29], and thus would not meet a diagnosis of critical bronchiolitis. We deliver NRS via RAM cannula by assigning a peak inspiratory pressure (PIP), positive end-expiratory pressure (PEEP), respiratory rate, inspiratory time (i-time), and fraction of inspired oxygen (FiO 2 ), using a conventional mechanical ventilator for delivery. The nasal prongs are fitted to occupy ≥60% of the diameter of the patient's nares in order to approximate delivery of positive pressure [13]. CPAP, continuous positive airway pressure, and BiPAP, bilevel positive airway pressure, are provided through a conventional mechanical ventilator via nasal or face mask. Any patient requiring invasive mechanical ventilation was placed in the "IMV" group; within the IMV group, patients intubated within 24 h of admission were placed in the "early IMV" group, while those intubated greater than 24 h after admission were placed in the "late IMV" group. The early IMV group was further subdivided into patients who arrived intubated vs. those who progressed to invasive mechanical ventilation after admission. Ventilator-free days were defined as the number of days alive and off the ventilator in the 28 days following intubation [30]. Statistical Analysis All continuous variables are presented with medians and interquartile ranges or mean and standard deviation, while categorical variables are presented using frequencies and percentages. The chi-square test of independence was used to compare categorical data, and the Wilcoxon rank-sum test was used to compare continuous variables when they were not normally distributed.
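To make these univariate comparisons concrete, the snippet below is a minimal sketch (not the study's actual code; the group sizes and variable values are synthetic stand-ins) of how a continuous and a categorical variable can be compared between the NRS and IMV groups with SciPy:

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the NRS and IMV groups; values are illustrative only.
rng = np.random.default_rng(0)
age_nrs = rng.gamma(shape=2.0, scale=2.5, size=440)   # age in months
age_imv = rng.gamma(shape=1.5, scale=2.0, size=133)
premature_nrs = rng.binomial(1, 0.15, size=440)        # history of prematurity (0/1)
premature_imv = rng.binomial(1, 0.35, size=133)

# Continuous variable: Wilcoxon rank-sum (Mann-Whitney U) test
u_stat, p_age = stats.mannwhitneyu(age_nrs, age_imv, alternative="two-sided")
print(f"age: median NRS={np.median(age_nrs):.1f}, IMV={np.median(age_imv):.1f}, p={p_age:.4f}")

# Categorical variable: chi-square test of independence on a 2x2 table
table = np.array([
    [premature_nrs.sum(), len(premature_nrs) - premature_nrs.sum()],
    [premature_imv.sum(), len(premature_imv) - premature_imv.sum()],
])
chi2, p_prem, dof, _ = stats.chi2_contingency(table)
print(f"prematurity: chi2={chi2:.2f}, dof={dof}, p={p_prem:.4f}")
```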
Multivariate logistic regression models were built to predict the need for intubation. Stepwise selection, with entry and stay p-value thresholds set at 0.1, was used to select the variables most strongly associated with the need for intubation. A receiver operating characteristic (ROC) curve for the prediction model was computed, and the area under the curve (AUC) was used to summarize the overall predictive power of the model. Statistical significance was established at p < 0.05. Results We identified a total of 775 patients on EMR interrogation with a diagnosis of acute bronchiolitis. After excluding 202 patients based on the established criteria, 573 eligible patients remained, of whom 133 (23%) required IMV, while 440 (77%) were managed non-invasively. Upon subgroup analysis of the IMV group, 96 patients (17%) were identified as early IMV and 37 (6%) were intubated later in their PICU stay. Further subdividing patients in the early IMV group, 52 patients (9%) were intubated prior to arrival, while the rest (8%) progressed to invasive mechanical ventilation within 24 h of admission (Figure 1). When examining demographic data between all groups, there were no differences in sex, race, ethnicity, or insurance type. However, the NRS and IMV groups differed significantly in age (5 months vs. 3 months, p = 0.002) and weight (7.0 kg vs. 5.3 kg, p < 0.001); within the IMV group, these differences were neither discernible between early and late intubation (Table 1) nor between those intubated prior to or after admission (Table S1). In comparing clinical characteristics between the NRS and IMV groups, differences were noted in gestational age at delivery, pre-term status at birth, and presence of pre-existing genetic or neurologic conditions. More specifically, the IMV group had a significantly younger median gestational age at delivery (37 weeks vs. 38 weeks, p < 0.001), a higher proportion of patients with a history of prematurity (p < 0.001), and a higher incidence of pre-existing genetic (12% vs. 5%, p = 0.0042) and neurologic (20% vs. 6%, p < 0.001) conditions. Within the IMV group, a higher incidence of pre-existing genetic conditions was noted in the late IMV group (22% vs. 8%, p = 0.035) (Table 2). These differences were not discerned amongst patients intubated prior to versus after arrival to the PICU (Table S2). Next, we compared hospital course between groups. Expectedly, the IMV group experienced a longer PICU length of stay (8 days vs. 2 days, p < 0.001) and higher severity of illness (1.1% vs. 1.0%, p = 0.007), as evidenced by vasoactive drug usage (12% vs. 0%, p < 0.001) and overall in-hospital mortality (1.5% vs. 0%, p = 0.01). Similarly, the late IMV group experienced a longer ICU length of stay (10 days vs. 7 days, p < 0.001) and more vasoactive medication usage (24% vs. 7%, p = 0.007) than the early IMV group, albeit with a lower severity of illness score (1% vs. 1.1%, p = 0.008) and no difference in mortality (Table 3). The late IMV group also had a longer duration of intubation (7 days vs.
6 days, p = 0.043) and, in turn, fewer ventilator-free days (21 days vs. 22 days, p = 0.041) (Table 3). Within the early IMV group, patients intubated after admission experienced a longer duration of intubation (7 days vs. 5 days, p = 0.002) and, in turn, fewer ventilator-free days (21 days vs. 23 days, p = 0.002), associated with a longer PICU LOS (8 days vs. 6 days, p = 0.005) but not with higher illness severity, vasoactive usage, or mortality (Table S3). Delving into the pathogen characteristics between groups, several interesting observations emerged. While there were no differences in pathogen positivity between groups, the pathogen burden was significantly higher in the intubated cohort, with a median difference of 1 pathogen (p < 0.001). Furthermore, the NRS group was significantly more likely to have single pathogens (75%) and only viral pathogens identified (98%), while ≥70% of the IMV group had evidence of multi-microbial or mixed (viral + bacterial) infections (p < 0.001). The only discernible difference within the IMV group was a higher pathogen load in the late IMV group (3 vs. 2 pathogens, p = 0.007) (Tables 3 and S3). As noted in multiple prior studies [2,31,32], RSV and rhino/enterovirus were the top causes of bronchiolitis in all patients; H. influenzae, M. catarrhalis, and S. pneumoniae were the most frequently isolated bacterial pathogens in all IMV groups (Tables S4 and S5 and Figure S1). In comparing RSV vs. non-RSV infections, younger patients were, overall, more often afflicted with RSV than with non-RSV viruses. Moreover, patients with RSV bronchiolitis experienced longer hospitalizations compared to non-RSV bronchiolitis in the NRS group, which corroborates prior published observations [7,8]; this difference was lost in the IMV group (Table S6). Finally, when examining antibiotic usage, a significantly higher proportion of patients in the IMV group received antibiotics, and for a longer duration, than the NRS group (p < 0.001). Although the median duration of antibiotics was 7 days in both groups, the NRS group used antibiotics for an average of 6.8 days, while the IMV group used antibiotics for an average of 9.5 days (p < 0.001). We also found that those in the late IMV subgroup received longer courses of antibiotics, with a median difference of 3 days (p = 0.017) (Table 4). Within the early IMV group, those intubated after arrival experienced later initiation of antibiotics (7 h vs. 0 h, p < 0.001) and a longer duration of antibiotics (10 days vs. 7 days, p = 0.007) (Table S7). No discernible differences were observed in lab findings amongst groups. Using our data, we generated a model for determining the probability of intubation in critical bronchiolitis patients. The most important variables included weight, gestational age ≤ 36 weeks, and presence of a pre-existing neurologic or genetic condition. The latter was found to have the highest odds ratio in our model (Figure 2A). The area under the receiver operating characteristic curve of 0.72 was acceptable [33] for determining whether mechanical ventilation is likely based on the chosen variables (Figure 2B). We separately conducted the same analysis including pathogen data, which took into account both bacterial and viral pathogen positivity and burden. This dramatically improved the AUC to 0.86. However, in the clinical context, we typically identify bacterial pathogens after intubation. Hence, its practical utility in a predictive model is limited.
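The general shape of such a predictive model is easy to sketch. The code below is purely illustrative: the cohort is synthetic, the coefficients used to generate outcomes are made up, and the resulting odds ratios and AUC are not the study's; it only shows how a logistic regression with the named predictors (weight, prematurity, pre-existing neurologic/genetic condition) and a ROC AUC could be computed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort loosely shaped like the one described above (n = 573).
rng = np.random.default_rng(1)
n = 573
weight_kg = rng.normal(6.5, 2.0, n).clip(2.5, 14)
premature = rng.binomial(1, 0.25, n)            # gestational age <= 36 weeks
neuro_genetic = rng.binomial(1, 0.15, n)        # pre-existing neurologic/genetic condition

# Made-up relationship, used only so the example has an outcome to fit.
logit = -1.5 - 0.25 * (weight_kg - 6.5) + 0.8 * premature + 1.2 * neuro_genetic
intubated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([weight_kg, premature, neuro_genetic])
X_tr, X_te, y_tr, y_te = train_test_split(X, intubated, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("odds ratios:", np.exp(model.coef_).round(2), "AUC:", round(auc, 2))
```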
That analysis nevertheless highlighted the important role multi-microbial infections likely play in the need for invasive mechanical support (Figure S2). Discussion As the most common cause of hospitalization in infants, bronchiolitis poses a large burden on our healthcare system [1,3,5,34]. Critical bronchiolitis represents a unique subgroup of patients with a distinct disease trajectory punctuated by the need for intensive care in up to a quarter of admitted cases; 10-25% of these patients will go on to require invasive mechanical ventilation [5,9,10]. However, clinical characteristics that distinguish the IMV group from their NRS counterparts are lacking. By identifying risk factors that are independently associated with the need for mechanical ventilation in critical bronchiolitis, intensivists may better manage these patients with targeted approaches. We observed a 23% incidence of mechanical ventilation in critically ill bronchiolitis patients admitted to our unit; removing patients intubated prior to arrival, our observed intubation rate for patients managed on NRS and progressing to IMV is 14%, similar to prior studies [9,10]. Almost three-quarters of patients in the IMV group were intubated within 24 h of presentation, and, of those intubated early, nearly half were first managed on NRS and then progressed to requiring invasive mechanical support. Expectedly, intubated patients had more complicated hospitalizations, with longer lengths of stay, higher severity of illness, and higher mortality. We were able to home in on risk factors at admission that may help identify these children early, with the strongest predictors for IMV being weight, which relates to age at admission, history of prematurity, and having a pre-existing genetic or neurologic condition.
Our data are corroborated by prior studies examining RSV bronchiolitis in infants, identifying risk factors that portend more complicated hospitalizations, including prematurity, low birthweight, young age, and co-morbidities [1-3,5,7,11,12,35]. Our study adds new insight by highlighting not only the contribution of the same risk factors but also the role of multi-microbial infections in the eventual need for invasive mechanical ventilation amongst critically ill bronchiolitis patients. A potentially discerning aspect in the need for IMV in critical bronchiolitis may arise from complicating bacterial pneumonias. The AAP discourages the routine use of antibiotics in this population unless there is a clear source of bacterial infection [8,36]. In intubated critical bronchiolitis patients, despite practice variations across the country, early antibiotic initiation was associated with shorter duration of mechanical ventilation and shorter hospitalizations [37]. However, this study did not identify etiologic bacterial pathogens in these patients. Within our IMV cohort, 72% had evidence of a bacterial infection. Moreover, these patients had a higher pathogen load and multi-microbial infections compared to the NRS group. Of note, at our institution, we only obtain tracheal aspirates for culture in intubated patients; in addition, our respiratory PCR panel for nasopharyngeal samples checks for a limited cadre of bacterial pathogens, including Mycoplasma pneumoniae, Chlamydia pneumoniae, and Bordetella pertussis and parapertussis. Despite these limitations, it is noteworthy that a minority of unintubated patients received antibiotics (29% in NRS vs. 94% in IMV), yet experienced shorter PICU stays and lower illness severity compared to those who were intubated. In fact, antibiotic usage in the NRS group had no effect on PICU length of stay. Taken together, these data associate invasive mechanical ventilation with bacterial co-infections in critical bronchiolitis. While data are not available to conduct intergroup comparisons of bacterial pathogen load between the NRS and IMV groups, the lower antibiotic usage and less severe hospitalization in the former would support this association. What role does timing of intubation play in disease trajectory? Based on our data, delayed intubations were associated with more severe hospitalizations, longer PICU LOS, higher vasoactive usage, and longer duration of antibiotics. Our seemingly divergent finding of lower severity of illness in the late IMV group reflects similar findings by Kopp et al., who noted that patients on NRS for ≥24 h before progressing to IMV had a lower risk of mortality but longer duration of IMV than their age-matched cohorts who did not receive NRS prior to IMV [38]. Of note, ≥50% of patients in our early IMV group were intubated prior to arrival to the PICU. As we are a referral center for a large catchment area of rural hospitals that may not have a full range of NRS options for pediatric patients, we cannot ascertain the proportion of patients in the early IMV group who truly needed IMV and who could have instead benefitted from NRS. Moreover, whether physicians in rural settings were influenced to intubate patients based on factors within our predictive model remains unknown. As a single-center retrospective study, we were constrained by the information recorded in the EMR.
For example, duration of illness prior to admission and history of palivizumab administration are important variables that were not accurately captured. Moreover, given the high proportion of Haemophilus influenzae and Streptococcus pneumoniae in the IMV group and their young median age, vaccination status would have been an interesting additional variable to examine. In addition, risk factors highlighted by prior studies for NRS failure have included FiO 2 requirements, respiratory rate changes following NRS initiation, and ventilation/perfusion mismatch [18,26]. While not examined in our study, these factors should be considered in prospective trials. Finally, the prediction model we generated would need validation in a prospective clinical trial; for now, it simply highlights the risk factors identified by this study that indicate which patients with critical bronchiolitis may require mechanical ventilation. Conclusions Within the critical bronchiolitis cohort of pediatric patients, those requiring invasive mechanical ventilation are clinically distinct from those managed non-invasively. These patients are typically younger at admission, have a history of prematurity, and can have pre-existing neurologic or genetic co-morbidities. Their need for mechanical ventilation may be related to bacterial co-infections, potentially leading to complicated multi-microbial pneumonias, which can worsen disease trajectory and prolong hospitalization. Future studies should focus on determining early signs of deterioration that may be acted upon to potentially reduce the need for invasive ventilation and, in turn, improve outcomes.
2021-11-14T16:28:43.330Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "fd86afb213b4800a815bebf65be3618f314c9dcf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9067/8/11/1035/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "adbf55aac7d645ffeaa8b8d43dd6a85985030ef3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233914216
pes2o/s2orc
v3-fos-license
Trans-obturator Cable Fixation for Disrupted Pubic Symphysis: A Reasonable Option Compared to Plating? Abstract Operative treatment of ruptured pubic symphysis by plating is often accompanied by complications. Trans-obturator cable fixation might be a more reliable technique; however, it has not yet been tested for stabilization of the ruptured pubic symphysis. This study compares symphyseal trans-obturator cable fixation versus plating through biomechanical testing and evaluates safety in a cadaver experiment. APC type II injuries were generated in synthetic pelvic models and subsequently separated into three different groups. The anterior pelvic ring was fixed using a four-hole steel plate in Group A, a stainless steel cable in Group B, and a titanium band in Group C. Biomechanical testing was conducted with a single-leg-stance model using a material testing machine under physiological load levels. A cadaver study was carried out to analyze the trans-obturator surgical approach. Peak-to-peak displacement, total displacement, plastic deformation, and stiffness revealed a tendency for higher stability for trans-obturator cable/band fixation, but no statistical difference to plating was detected. The cadaver study revealed a safe zone for cable passage with sufficient distance to the obturator canal. Trans-obturator cable fixation has the potential to become an alternative for symphyseal fixation with fewer complications. Introduction Disruption of the pubic symphysis is commonly seen in pelvic ring injuries of trauma patients [1,2]. The disruption often occurs in combination with a posterior pelvic ring impairment of variable severity. When diastasis of the disrupted symphysis pubis exceeds a certain displacement, stabilization of the anterior pelvic ring is recommended [3,4]. Adequate reduction and stable fixation can restore pelvic ring alignment and allow early patient mobilization [5]. Currently, symphyseal plating represents the most common technique for anterior pelvic ring fixation in such conditions [6]. This plate fixation, however, represents a static fixation of what is actually a dynamic junction. The pubic symphysis comprises a fibrocartilaginous disc between the articular surfaces of the pubic bones, encapsulated and reinforced by surrounding ligaments allowing limited movement [7]. From the healing perspective, the currently preferred treatment is not the ideal one, as it does not provide a dynamic fixation [8] and the plate is placed in an unfavorable, perpendicular orientation to the load vectors that affect the ruptured symphysis. Other disadvantages of plating include hardware breakage or implant loosening, which lead to recurrent instability of the pubic diastasis. These implant failures are the main reasons for revision surgery and may be linked to the rigid character of the fixation technique [9]. More dynamic fixation techniques like a trans-obturator cable cerclage could bypass this problem but have not been taken up in daily practice. Clinical and laboratory studies on pubic wiring are inconsistent and often small in size [10-15]. Recent studies only consider simple wiring instead of a more stable cable-system fixation [16]. The aim of this study is to analyze dynamic trans-obturator cable fixation as an alternative stabilization for the disrupted pubic symphysis. The hypothesis is that, compared to plating, trans-obturator cable fixation provides an equal or even superior fixation strength.
If so, the cable fixation could be a dynamic, less invasive alternative to symphyseal plating with fewer complications. Specimen and Fracture Generation Thirty synthetic pelvises (Pelvis Complete, Synbone, Art. No. 4060) were used. In this model, the sacroiliac joint and the symphysis are joined using flexible plastic foam. There are no ligaments or muscles attached to the pelvis. An anterior-posterior compression injury (Young and Burgess APC II; OTA/AO: 61-B2.3d) was simulated, in which the pubic symphysis and the sacroiliac joint connection on one side were disrupted. Preparation and fixation of the posterior instability was conducted in all specimens using the same technique. A 3.5-mm drill was used to open the cortical bone and create an osseous corridor to S1 and S2 while the pelvis was still intact. Final fixation was done with two partially threaded 7.3-mm-diameter, 90-mm-long cannulated titanium screws, including washers. Experimental Groups Group A, the control group, received traditional plate fixation (N = 10). Here, the pubic disruption was fixed using a four-hole stainless steel symphysis plate (Symphyseal Plate 3.5 with coaxial combi-holes, DePuy Synthes; Art. No. 02.100.004) and 3.5-mm self-tapping, non-locking cortex screws. The medial and lateral screws were 55 mm and 60 mm long, respectively. Medial screws were placed with lateral inclination and lateral screws were tilted towards the symphysis [17]. The pubic symphysis disruption in Group B (N = 10) was fixed with a trans-obturator cable, using a 1.7-mm-diameter interwoven stainless steel cable and cable crimp (cable with crimp, DePuy Synthes, Art. No. 298.800.01S). A medium-size cable passer was used to tunnel the cable twice through the obturator foramen and embrace the pubic symphysis. The cable tensioner, in combination with the provisional tensioning device and attachment bit, pre-tensioned the cable wire up to 40 kg. Care was taken that the cable did not fully cut into the cortex. Final fixation was conducted with the cable crimper. The cable was shortened near the tip with an adequate cutter. A titanium cerclage band was used in Group C (N = 10). The cerclage band was 5.8 mm wide and 240 mm long (Titanium Cerclage Band according to Thabe, Link, Art. No. L63-4300/02), and was used with a cerclage band guide of the appropriate size. The cerclage was passed through the obturator foramen around the pubic symphysis, ensuring broad contact of the cable band to the bone. A cable tensioner was used for repositioning and tightening. The cable was locked using a hexagon screwdriver and shortened using a cutter (Figs. 1 and 2). Biomechanical Test Setup A single-leg-stance model was established and testing was performed using a universal testing machine (Z020; Zwick/Roell) and testXpert II software (Version 3.6; Zwick/Roell) [18,19]. Pelvic samples were attached at the sacrum to the testing machine at a physiological 45° tilt using a custom-made aluminum device. A hemiarthroplasty prosthesis was used for articulation with the acetabulum at 15° adduction on one side to simulate unilateral axial load. The femoral stem was wedged in a steel quiver attached to the bottom of the machine. Photographic documentation of the pubic symphysis was conducted throughout the testing. In contrast to previous studies by the authors, a cable-pulley system simulated the abductor muscles to increase maximum load levels [18-20]. A series of pretests was performed to establish the protocol and summarize the data for power analysis.
The pretests encompassed load levels of 50 to 1,000 N and test cycles of 500 to 3,000 repetitions. All pretests indicated that 200, 400, and 600 N were valuable load levels, and that test cycles of 500/1000/1500 repetitions were adequate to show differences between the experimental groups. The decision to use such load levels was based on in vivo measurements, the literature, and previous work by the authors [18-23] (Fig. 3). The main tests were started with 10 setting cycles at 0- to 10-N loads at a rate of 50 mm/min. A load-displacement curve was generated during the testing. Outcome measurements were peak-to-peak displacement at 200, 400, and 600 N, total displacement, plastic deformation, and stiffness. Surgical Approach A cadaver study was conducted to understand the relationship of the trans-obturator cerclage to surrounding structures. Fresh frozen male and female cadavers (one of each) were placed in a supine position. Using a Pfannenstiel approach, a slightly curved 15-cm-long horizontal incision was made, centered about 2 cm cranial to the pubic symphysis. The anterior portion of the rectus sheath was prepared and the rectus sheath was divided to identify the rectus abdominis and the pyramidalis muscles. The muscle was separated from the attachment at the pubic bone. For better exposure, dissection was continued laterally to the external inguinal ring, the spermatic cord/round ligament, and the vascular and muscular lacuna. Dissection of the obturator foramen was conducted, including the obturator canal and membrane. A 1.7-mm cable wire was passed through the medial part of the obturator foramen and fixed. The distance to the surrounding anatomical structures was evaluated. Statistical Analysis A power analysis performed using a power of 80% and a significance level of 5% showed that the sample size was adequate. The results are presented as mean values with standard deviation. All data underwent statistical analysis for normal distribution using the Shapiro-Wilk test. Analysis of variance was used to compare the means. A p value < 0.05 was considered to indicate significance. As no statistical differences were found, further tests were not conducted. Results Peak-to-Peak Displacement at 200, 400, and 600 N Peak-to-peak displacement was measured for the entire pelvic ring. Under 200 N, there was a mean peak-to-peak displacement of 0.27 ± 0.10 mm in group A, 0.24 ± 0.98 mm in group B, and 0.20 ± 0.06 mm in group C. Under 400 N, the mean peak-to-peak displacement was 0.57 ± 0.21 mm in group A, 0.50 ± 0.16 mm in group B, and 0.45 ± 0.09 mm in group C. Under a 600-N load, the values were 1.16 ± 0.71 mm in group A, 1.34 ± 1.0 mm in group B, and 0.78 ± 0.15 mm in group C. There were no significant differences between the groups (p = 0.29; p = 0.27; p = 0.25; Fig. 4). Total Displacement The mean total displacement was 3.96 ± 0.42 mm in Group A, 3.58 ± 0.54 mm in Group B, and 3.70 ± 0.43 mm in Group C. There were no significant differences among the groups (p = 0.20). Plastic Deformation Plastic deformation was measured using a load-displacement curve. It represents the irreversible deformation of the whole pelvic ring at 200-N load. The mean plastic deformation was 0.63 ± 0.20 mm in group A, 0.55 ± 0.17 mm in group B, and 0.56 ± 0.19 mm in group C. There was no significant difference among the groups (p = 0.58). Stiffness Stiffness was measured using the slope of the load-displacement curve at 400 N. The mean stiffness was 105 ± 13 N/mm in group A, 118 ± 18 N/mm in group B, and 107 ± 16 N/mm in group C. There was no significant difference among the groups (p = 0.21; Fig. 5).
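As a rough illustration of how the stiffness and displacement outcomes above can be derived from a load-displacement curve, the sketch below uses entirely synthetic data; the actual curves come from the testing machine's software, and the exact evaluation windows used in the study are not specified here.

```python
import numpy as np

# Synthetic load-displacement data standing in for one test specimen.
displacement = np.linspace(0.0, 5.0, 500)                # mm
load = 110.0 * displacement + 5.0 * np.sin(displacement)  # N, made-up response

# Stiffness: slope of the load-displacement curve in a window around 400 N.
window = (load > 350) & (load < 450)
slope, _ = np.polyfit(displacement[window], load[window], deg=1)
print(f"stiffness ~ {slope:.1f} N/mm")

# Peak-to-peak displacement of one load cycle: displacement at the cycle's
# upper load minus displacement at its lower load (illustrative values).
disp_at_upper_load = 3.40   # mm
disp_at_lower_load = 2.95   # mm
print(f"peak-to-peak ~ {disp_at_upper_load - disp_at_lower_load:.2f} mm")
```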
Surgical Approach A cadaver study was conducted to understand the relationship of the trans-obturator cable to surrounding structures (Figs. 6 and 7). The obturator foramen is closed by a fibrous membrane. A cable can be passed through the weak medial part of the membrane. The distance between the cable and the obturator canal was 3.0-3.5 cm. Interference with the spermatic cord/round ligament or the femoral vessels should be considered [24]. Discussion Disruption of the pubic symphysis of more than 2.5 cm (1.8-4.5 cm) is a severe injury for which many orthopedic trauma surgeons would agree that operative stabilization is the preferred treatment [25]. In case of minor diastasis, conservative treatment is an option, but it was abandoned in severe cases due to the disadvantages caused by long-term immobilization [26]. Wiring of the pubic symphysis was performed in the past [27], but the available clinical data is limited, outdated, and only consists of small case series [10,13,28]. Besides the above-mentioned clinical reports, several biomechanical studies have explored the use of a simple wiring technique: Tile et al. fixed the pubic symphysis in one group with double plating (4.5-mm plates), in the second group with four trans-osseous wire loops, and in the third group with absorbable suture material. Wiring resulted in significantly less symphyseal motion than the other methods, and they concluded that symphyseal wiring can oppose the tensile load caused during bilateral stance loading at the symphysis [12]. Hofmann compared pubic reconstruction by plate to wiring using cadavers (N = 83). While plate stabilization was stiffer, he reported no other difference compared with the wiring technique [11]. Meißner et al. performed a biomechanical study that compared wiring and plating in a cadaver model (N = 24) under physiological conditions. Cut-out of the wires was seen in many specimens, and the highest stability was provided by plating of the pubic symphysis, which raises concerns about the stability of the wiring technique [14]. These are some of the reasons why treatment ultimately evolved to plate fixation over the years [5,29,30]. Unfortunately, plating is also a procedure that carries a significant risk of complications. Most frequently, subclinical implant loosening (31-80%) is noted in follow-up X-rays [9,31-34]. In case of complete failure or recurrence of the diastasis (>10 mm), revision surgery is required (3-21%) [9,32-34]. Because clinical and biomechanical studies have focused only on simple symphyseal wiring, the role of modern cable systems for this purpose has remained unclear. Lenz et al. confirmed a superior stability of cable systems over wiring when fixing long bones. In their biomechanical study, a double loop cable cerclage, as used in our Group B, had a mean load-to-failure of 2734 N (±330 N), compared with a single loop wire that only bears 606 N (±109 N) [16]. Our study confirms the high stability of double loop cable fixation of the pubic symphysis; however, we did not compare it to simple wiring. While passing a cable through the obturator foramen, knowing the location of the obturator canal is important for safety. The obturator canal is a pathway through the obturator membrane containing an artery, vein, and nerve and can be found at the cranio-lateral margin.
The obturator artery originates in most cases from the internal iliac vessel (62%) [35], sometimes forming a connection to the external iliac vessel system, called corona mortis [36]. After passing the obturator canal, the obturator artery divides into a medial and a lateral branch to form a vascular circle [37], or splits into small tributaries without forming such a circle [38]. The obturator nerve runs from the obturator canal inferomedially between the adductor longus and brevis muscles and divides into an anterior and a posterior division that mainly innervate adductor muscles [37]. Jo et al. conducted a morphometric study of the nerve exit zone in the obturator foramen and showed a median distance to the pubic tubercle of 30 mm [39]. This makes it possible to identify a safe zone (Fig. 6). Trans-obturator surgical procedures in the safe zone are also known from other disciplines, such as vascular surgery (bypass) [40], general surgery (obturator hernia repair) [37], and gynecology (midurethral sling procedure) [41,42]. Besides plating or trans-obturator cable fixation, other innovative stabilization techniques for the pubic symphysis have been described in the past. Chen et al. published an article about 45 patients with pubic disruption treated using percutaneous screw fixation. The authors recommend this technique because of lower blood loss and a better functional outcome [43]. The technique was confirmed by biomechanical findings [44]. Limitations The use of synthetic pelvic models instead of cadaver specimens is controversial in orthopaedic research, but there are some reasons why synthetic bones are increasingly used. They are affordable at a justifiable budget, have biomechanical properties similar to cadaver specimens, are easily available without logistical problems, have almost no variability between specimens, and are ethically unproblematic [48]. In contrast, there is a high variability in biomechanical properties between cadaver specimens, and the specimens themselves disproportionately represent the elderly population [49]. Therefore, cadaver specimens often do not adequately simulate the biomechanical behavior of orthopaedic implants. Polyurethane foam-based bone models in particular have been extensively tested and validated for biomechanical studies [50]. Still, the synthetic pelvic model used here has no ligaments or muscular attachments. The load vectors are therefore expected to differ in cadaver models. In summary, a synthetic pelvis model is useful for comprehensive preliminary statistical testing, but cadaver studies should be considered before a first-in-human study. Further, the monoaxial, single-leg load applied in the biomechanical testing does not simulate bilateral strain. No implant loosening was observed in the plating group, which may be a result of the low number of test cycles, despite the high number used in our pretests. Moreover, loosening of the cable fixation was not detected, making it difficult to verify that the cable provides a more dynamic fixation. Placing the cerclage in the desired position could be more difficult in real surgery than it is in a synthetic pelvic model. The Pfannenstiel approach only allows for easy dissection of the cranial corner of the foramen through which to pass the wire. Further studies are required to assess the ideal surgical approach [51]. Another concern is loosening of the cable system leading to displacement of the pubic bones in the axial and sagittal planes caused by continuous movement in the vertical and horizontal axes of the pelvis (Fig. 8).
Conclusion Our study confirms adequate stability of symphyseal plating and demonstrates similar biomechanical properties of trans-obturator cable fixation. Passing a cable at the medial margin of the obturator foramen seems possible and relatively safe. Declarations Acknowledgment We thank Dr. R. Wagner for sharing his clinical experience with pubic symphysis wiring and A. Stow for editing service. Conflict of Interest Statement: No author has any association or financial involvement (i.e., consultancies/advisory board, stock ownership/options, equity interest, patents received or pending, royalties/honoraria) with any organization or commercial entity having a financial interest in or financial conflict with the subject matter or research presented in the manuscript. Figures Figure 1 Different groups tested. A) Stabilization with a four-hole 3.5-mm stainless steel plate. B) Trans-obturator wire using a 1.7-mm cable system. C) Trans-obturator fixation using a broad titanium band. Ten specimens were tested in each group. Figure 2 Stepwise application of a trans-obturator fixation. A) Instruments and titanium band. B) Tensioner. C) Fixation of the clamp using a screwdriver. D) Final position. Total displacement represents the final change of the specimen after all load levels were applied. Plastic deformation represents the permanent and irreversible strain of the whole construct. The plate and cable systems showed only minimal amounts of deformation. Even though group B had the highest stiffness, no statistical difference was found. Schematic illustration of the obturator foramen. The obturator artery divides into medial and lateral branches as it emerges from the obturator canal. An acetabular branch rises from the lateral part and runs towards the hip joint. The obturator nerve divides into anterior (pectineus m., adductor longus m., adductor brevis m., gracilis m.) and posterior divisions (obturator externus m., adductor magnus m.). At the medial border, a safe zone can be identified in which the cable wire can be passed with limited risk.
2021-05-08T00:03:32.886Z
2021-02-19T00:00:00.000
{ "year": 2021, "sha1": "bb918cd4b518daf89f254ec2921bdbe63da1e722", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-200179/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "c96a65e7c543aa351a25ea25db8e963114ffde28", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3518205
pes2o/s2orc
v3-fos-license
Beyond Pixels: Leveraging Geometry and Shape Cues for Online Multi-Object Tracking Fig. 1. An illustration of the proposed method. The first two rows show object tracks in frames t and t+1. The bottom row depicts how 3D position and orientation information is propagated from frame t to frame t+1. This information is used to specify search areas for each object in the subsequent frame, which greatly reduces the number of pairwise costs that have to be computed. Abstract-This paper introduces geometry and novel object shape and pose costs for multi-object tracking in road scenes. Using images from a monocular camera alone, we devise pairwise costs for object tracks, based on several 3D cues such as object pose, shape, and motion. The proposed costs are agnostic to the data association method and can be incorporated into any optimization framework to output the pairwise data associations. These costs are easy to implement, can be computed in real-time, and complement each other to account for possible errors in a tracking-by-detection framework. We perform an extensive analysis of the designed costs and empirically demonstrate consistent improvement over the state-of-the-art under varying conditions that employ a range of object detectors, exhibit a variety in camera and object motions, and, more importantly, are not reliant on the choice of the association framework. We also show that, by using the simplest of association frameworks (two-frame Hungarian assignment), we surpass the state-of-the-art in multi-object tracking on road scenes. More qualitative and quantitative results can be found at https://junaidcs032.github.io/Geometry_ObjectShape_MOT/. Code and data to reproduce our experiments and results are now available at https://github.com/JunaidCS032/MOTBeyondPixels. I. INTRODUCTION Object tracking in road scenes is an important component of urban scene understanding. With the advent and subsequent surge in autonomous driving technologies, accurate multi-object trackers are desirable in several tasks such as navigation and planning, localization, and traffic behavior analysis. In this paper, we focus on designing a simple and fast, yet accurate and robust solution to the Multi-Object Tracking (MOT) problem in an urban road scenario.
The dominant approach to multi-object tracking is tracking-by-detection, where the entire process is divided into two phases. The first phase comprises object detection, where bounding boxes of objects of interest are obtained in each frame of the video sequence. The second phase is the data association phase, which is often the hardest step in the tracking-by-detection paradigm. Spurious or missing detections, repeat detections, occlusions, and target interactions all confound this data association phase. Although several approaches [1], [2], [3], [4], [5] exist for accurate online tracking of moving vehicles from a moving camera, most of them [6], [2] use handcrafted cost functions that are either based on primitive features such as bounding box position in the image and color histograms, or are highly sophisticated and non-intuitive in design (e.g., ALFD [6]). On the other hand, we propose costs that are intuitive, easy to compute and implement, and provide complementary cues about the target. We exploit the fact that road scenes have a unique geometry and use this prior information to design costs. The proposed costs capture 3D cues arising from this scene geometry, as well as appearance-based information. Further, we introduce a novel cost that captures similarity of 3D shapes and poses of target hypotheses. To this end, we leverage recent work on shape priors for object detection and localization from monocular sequences [7], [8]. To the best of our knowledge, such pairwise costs have not been incorporated in multi-object tracking frameworks. The efficacy of the monocular 3D cues is best portrayed in Fig. 1. In this figure, the first two rows illustrate the objects with their bounding boxes in two successive frames at t and t+1. Upon lifting the objects at t to 3D and ballooning their locations to account for large uncertainties in ego motion, we project them into the image observed at t+1. This gated/overlapping area, shown in the respective colors in the last row of Fig. 1, significantly reduces the search area for each such object, thereby reducing the number of pairwise costs to be computed. Backprojecting only the detections that lie within this gated area into 3D and computing data association costs based on 3D volume overlap significantly improves tracking accuracy, even with a straightforward Hungarian data association scheme. The proposed costs are not too dependent on the choice of data association framework. We demonstrate the superiority of the proposed costs over monocular video sequences of urban road scenes that capture a wide range of camera and target motions, and also consistent improvement over other costs regardless of the choice of the object detector. We perform an extensive evaluation of various modes of the proposed costs on the KITTI Tracking benchmark [9] and obtain state-of-the-art performance, beating previous approaches by using a simple two-frame Hungarian association scheme. The approach is tested on the KITTI online evaluation server and outperforms the previously published approaches significantly. Naturally, more complex data association schemes, such as network flow based algorithms [10], [11], [12], [13], can result in much better performance boosts upon incorporation of the proposed pairwise costs. The paper contributes as follows. 1) It introduces novel data association cues based on single view reconstruction of objects that result in the best tracking performance reported thus far on the KITTI training dataset.
It outperforms the nearest reported values in training data [14], [1], [6] by at least 12%. The approach is tested on the KITTI Tracking online evaluation server, where it outperforms the published approaches by a margin of over 6%. 2) Through ablation studies, it shows that such improvements are largely detector-agnostic and repeatable over baseline appearance tracking with object detectors such as [15] and [16]. 3) Finally, it identifies settings in which 3D pose and shape cues improve tracking performance. Monocular 3D cues, especially those based on single-view geometry, can often be unreliable. However, when computed effectively, they can be used reliably and repeatably, even in challenging sequences such as KITTI. This constitutes the central theme of this effort. II. RELATED WORK In this section, we review relevant work on multi-object tracking, and compare and contrast it with the proposed approach. A. Global Tracking Many approaches to tackle the association problem are global [10], [12], [17], [11], [18], [19], in the sense that they assume detections from all frames are available for processing. Most global methods operate by mapping the tracking problem to a min-cost network flow problem. The original idea was proposed in [10] and also provides a method for explicit occlusion reasoning. An efficient variant is an approach based on generalized minimum clique graphs [12], where associations are solved for one object at a time while other objects are implicitly incorporated. Another class of global methods attempts to construct small chunks of trajectories (called tracklets) and compose them hierarchically to form longer trajectories, rather than solving for a min-cost flow over a densely connected graph. B. Online MOT In contrast to this, online trackers [4], [20], [3], [21] do not assume any knowledge of future frames and operate greedily, only with the data available up to the current instant. Such trackers often formulate the association problem as that of bipartite matching, and solve it via the Hungarian algorithm. A recent variant proposes near-online trackers [6], in an attempt to provide the best of both worlds, i.e., to combine the capability of global methods to handle long-term occlusions and still achieve very low output latencies. Geiger et al. [13] propose a memory- and computation-bounded variant of network flow using dynamic programming. Both these paradigms rely on handcrafted pairwise costs being fed into the association framework. Most of these are sophisticated in design and do not end up capturing 3D information that is easily available in road scenes. C. Learning Costs for MOT Significant attention has also been devoted to the task of learning pairwise costs for target tracking problems. In [3], a structured SVM was used to learn pairwise costs for a bipartite matching data association framework. Other works have used graphical models and divide-and-conquer strategies, and have also learned unary costs. A more recent work [1] learns all costs using a deep neural network. On the other hand, we show that our simple, yet clean and efficient cost function designs significantly improve performance without the need for extensive hyperparameter search or cost learning. III. PROBLEM FORMULATION We adopt the tracking-by-detection paradigm, where we assume that we are provided with a monocular video sequence of F frames {I f } for f ∈ {1..F }, and a set of object detections D f for each frame I f .
Each detection set D f contains the bounding boxes detected in frame I f ; note that D f can also be an empty set, in the case where no objects are detected in a frame. Each detection D i f is parametrized by its bounding box in the image together with a score s i f , the detector's confidence in the bounding box (a greater value indicates higher confidence). The multi-object tracking problem is to associate each bounding box to a target trajectory T k such that the following constraints are met. • Each target trajectory T k comprises a set of bounding boxes (all from different frames) belonging to a unique target in the scene. • There are exactly as many trajectories K as there are targets to be tracked. • In all frames where a target is visible, it is detected and assigned to the corresponding unique trajectory for the object. • All spurious bounding box detections are unassigned to any target trajectory. The tracking problem formulated above is usually solved in a min-cost network flow framework (global tracking), a moving window dynamic programming framework (near-online tracking), or a bipartite matching framework (online tracking). Note that these are not the only available frameworks, but a representative set covering most tracking approaches. All these frameworks (and the others not mentioned here) use pairwise costs to define affinity across pairs of detections. The association framework then computes a Maximum A Posteriori (MAP) estimate of the target trajectories T k , given the detection hypotheses D = (D 1 , D 2 , ..., D F ) and an affinity matrix that gives the likelihood of each detection in each frame corresponding to every detection in every other frame. IV. GEOMETRY AND OBJECT SHAPE COSTS The core contribution of this paper is to design intuitive pairwise costs that are efficient to compute and accurate for tracking. We focus on urban driving scenarios and demonstrate how the geometry of urban road scenes can be exploited to infer 3D cues for tracking. Typical costs in tracking algorithms include bounding box locations, trajectory priors, optical flow, bounding box overlap, and appearance information (color histograms or patch-based cross-correlation measures). These costs require careful handcrafting, fine-tuning, and hyperparameter estimation. We propose to use a set of simple complementary costs that are readily available from recent monocular 3D object localization systems [7], [8]. We also introduce a novel cost based on the 3D shape and pose of the target. We show that this cost, apart from improving data association performance, also assists in discarding false detections without incurring large computational overhead. A. System Setup We focus on autonomous driving scenarios, where the video sequence is from a monocular camera mounted on a car moving on the road plane, and the targets to be tracked are also moving on the road. Feature-based odometry is run on a background thread (for rough frame-to-frame motion estimation). Also, we make use of a recent approach that goes beyond bounding boxes and estimates the 3D shape and pose of objects, given just a single image [7]. This is done by lifting discriminative parts in 2D (keypoints) to 3D. Fig. 2. Illustrating the concept of 3D-2D and 3D-3D costs. Two subsequent frames, t and t+1, are shown on the left. For each detection in frame t, we compute and propagate (with uncertainty) its 3D bounding box into the next frame t+1. These boxes are projected to 2D in frame t+1. The intersection between the detection boxes of frame t+1 and these projections constitutes the 3D-2D cost. The intersection of the corresponding 3D bounding boxes in 3D constitutes the 3D-3D cost, as shown on the right; propagated bounding boxes are colored to match their respective 2D boxes in frame t, and the 3D bounding boxes of detections in frame t+1 are numbered respectively.
These keypoints are a set of points chosen so that they are common across all object instances (e.g., for a car, the centers of wheels, headlights, taillights, etc.). The authors use a CNN architecture [8] to localize these keypoints in 2D, given a detection. The 3D shape of the object is parametrized as the sum of the mean shape (for the object category) and a linear combination of so-called basis shapes. Mathematically, S = S̄ + Σ b λ b V b , where S is the shape of a particular instance, S̄ is the mean shape for the object category, and V b is the deformation basis (a set of eigenvectors) that characterizes deformation directions of the mean shape. We use the same model as in [7] and denote the shape vector of an object by Λ = [λ 1 ..λ B ] T , where B is the number of vectors in the deformation basis (typically, B = 5). The pipeline in [7] also estimates the 3D pose of the object, which is parametrized as an axis-angle vector ω. Moreover, an estimate of object dimensions (height, width, and length) is also returned. B. 3D-2D Cost Given the height h cam of the camera above the ground, and assuming that the bottom line of each bounding box detection d i f in frame f is on the road plane, a depth estimate of the car in the current camera coordinates can be obtained by backprojection via the road plane as in [22], using X f = π −1 G (x) (2), where x is the bottom center of the detected bounding box, K is the camera intrinsic matrix, and π −1 G is used as shorthand for backprojection via the ground plane. This backprojection is only accurate when x is known precisely, which is not usually the case. Hence, we estimate the uncertainty in the 3D location X f by using a linearized version of (2) and assuming that the detection noise is an isotropic 2D Gaussian, i.e., (x i f , y i f ) T ∼ N (0, σ 2 I 2×2 ). This region is expanded (anisotropically) by the estimates of the target dimensions returned by the system [7]. Now, assume we have another detection d j f' in frame f' with which we wish to compute the pairwise affinity of d i f . We obtain a rough estimate of the camera motion from frame f to frame f' using a feature-based odometry thread running in the background. Using this estimate of the camera motion, we transport X f to the camera coordinates of frame f', while duly accounting for the uncertainty in the camera motion estimate and in the backprojection via the road plane. The obtained coordinates are then projected down to the image frame f' to obtain a 2D search area in which potential matches for X f are expected to be found, as shown in the frame t+1 of Fig. 2. The 3D-2D cost for two detections d i f and d j f' then measures a (weighted) overlap between this 2D region in which the target is expected in frame f' and the detection d j f' . Here, π denotes the projection operator that projects a 3D point to image pixel coordinates, g(ξ, X) denotes a rigid-body motion ξ ∈ se(3) applied to a 3D point X ∈ R 3 , and φ(X, s) denotes the function that estimates the uncertainty of the 3D point X according to a linearized form of (2) and the detector confidence s. Most importantly, this cost is evaluated only for detections d j f' that lie inside the expected target area π(g(ξ, φ(π −1 G (y i f ), s i f ))). This significantly reduces the number of comparisons that need to be made among target pairs.
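The two geometric ingredients of these costs, backprojecting the bottom center of a box onto the road plane and measuring overlap on the XZ (road) plane, can be sketched as follows. The flat-ground formula, the camera intrinsics, the camera height, and the fixed object footprint below are illustrative assumptions, not necessarily the exact form used in (2) or [22]; the paper's implementation additionally propagates detection and ego-motion uncertainty, which is omitted here.

```python
import numpy as np

# Illustrative, KITTI-like intrinsics and camera/road geometry (assumed values).
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)
h_cam = 1.65                    # camera height above the road plane (m), assumed
n = np.array([0.0, 1.0, 0.0])   # road-plane normal in camera coordinates, assumed

def backproject_ground(u, v):
    """Backproject the bottom-center pixel of a box onto the road plane."""
    ray = K_inv @ np.array([u, v, 1.0])
    depth_scale = h_cam / (n @ ray)   # distance along the ray to the plane
    return depth_scale * ray          # 3D point in camera coordinates

def xz_overlap(center_a, center_b, half_size=2.0):
    """Crude gating: overlap of two axis-aligned square footprints on the XZ plane."""
    dx = max(0.0, 2 * half_size - abs(center_a[0] - center_b[0]))
    dz = max(0.0, 2 * half_size - abs(center_a[2] - center_b[2]))
    return dx * dz

X_prev = backproject_ground(650.0, 230.0)   # detection in frame t
X_curr = backproject_ground(660.0, 228.0)   # candidate in frame t+1 (ego motion ignored here)
print("3D points:", X_prev.round(2), X_curr.round(2))
print("XZ overlap:", round(xz_overlap(X_prev, X_curr), 2))
```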
C. 3D-3D Cost Although useful in reducing the number of candidate detections to be evaluated, the 3D-2D cost has frequent confounding cases. This is because we still measure overlap in the image space. To mitigate this drawback, we define a 3D-3D cost, which, instead of measuring 2D overlap, measures overlap in 3D, as shown in Fig. 2 (right side). Here, we backproject each candidate d j f' via the road plane, and measure overlap with respect to the transformed 3D volume from frame f, given by g(ξ, φ(π −1 G (y i f ), s i f )). The 3D-3D cost for two detections d i f and d j f' is then defined as the overlap between these two 3D volumes. In order to speed up evaluation of the 3D overlap, we exploit the inherent geometry of road scenes. Since all objects of interest are on the road plane (the XZ plane in our case), it is sufficient to measure overlap in the XZ plane. This is because all objects are at nearly constant heights above the ground and hence have similar overlap in the Y direction. D. Appearance Cost In [8], the authors train a stacked-hourglass CNN architecture to localize a discriminative set of keypoints on an image. This deep CNN architecture captures various discriminative features for each detection, along with the keypoint evidence. We use a weighted combination of activation maps from the output of the layers of the hourglass network as a feature descriptor for each detection, as shown in Fig. 3, and compute a similarity score between detections using the L2 norm between descriptors from the image patch inside each of the bounding boxes. If ψ(·) denotes the feature descriptor of a detection, the appearance cost is defined as the L2 distance between ψ(d i f ) and ψ(d j f' ), scaled by a normalization constant η app . E. Shape and Pose Cost We use a novel shape and pose cost based on the single-image shape and pose returned by the pipeline of [7]. Shape is parameterized as a vector comprising deformation coefficients Λ = [λ 1 ..λ B ] T , where B is the number of deformation basis vectors (usually 5). Each possible value of Λ denotes a unique class of object instances and hence carries useful information about the 3D shape of the target. For instance, varying certain parameters of Λ may represent a shape that is more SUV-like than sedan-like, and so on. Pose is parametrized as an axis-angle vector ω. For detections d i f and d j f' , the shape and pose cost (Eq. 6) compares their shape vectors Λ and their pose vectors ω, with η s and η p as normalization constants. The overall pairwise cost term is a weighted linear combination of all the aforementioned costs. The weights of the linear combination are determined by four-fold cross-validation on the train set. V. RESULTS In this section, we present an account of the experiments we performed, and we report and analyze the findings thereof. In a nutshell, we evaluate our tracking framework on a variety of challenging urban driving sequences and demonstrate a substantial performance boost over the state-of-the-art in multi-object tracking, by using the simplest of tracking frameworks, viz. bipartite matching using the Hungarian algorithm. A. Dataset We evaluate the proposed multi-object tracking framework on the popular KITTI Tracking benchmark, on both the training and testing datasets [9]. As prescribed in [1], [6], [9], we divide the training dataset, which contains 21 sequences, into four splits for cross-validation.
The cross-validation splits are chosen so that each split contains a similar distribution of the number of vehicles per sequence, occlusion and truncation levels, and relative motion patterns between the camera and the targets. Cross-validation is used to tune the weight of each proposed cost when computing the final cost matrix, and the best-performing combination of these weighted costs is used for reporting results on the KITTI Tracking benchmark. Multiple vehicles moving at varying speeds, variation in the ego-camera motion, and target objects appearing in unexpected image locations make the KITTI Tracking dataset [9] a truly challenging one. We report results on the Car class.

B. Evaluation Metrics

To evaluate the performance of our approach, we adopt the widely used CLEAR MOT metrics [25]. The overall performance of the tracker is summed up in two intuitive metrics, viz. Multi-Object Tracking Accuracy (MOTA) and Multi-Object Tracking Precision (MOTP). While MOTA is concerned with tracking accuracy, MOTP deals with object localization precision.

C. System Overview

The proposed approach is a tracking-by-detection approach and hence assumes per-frame bounding box detections as input. We choose two recent object detectors: Recurrent Rolling Convolution (RRC) [15] and SubCNN [16]. Each of these detectors provides multiple detections per frame. A threshold is applied to the detection scores, and detections whose confidence scores are lower than the threshold are pruned. In addition, we run a non-maximum suppression (NMS) scheme to subdue multiple detections around the same object. The retained detections are used to compute pairwise costs as outlined in the previous section. These pairwise costs constitute a cost matrix that is used by a bipartite matching algorithm to associate detections across two frames. In practice, bipartite matching is performed using the O(n³) Hungarian algorithm [26] (a minimal sketch of this assignment step appears at the end of this section).

E. Performance Evaluation

We evaluate the performance of our approach against the current best competitors on the KITTI Tracking benchmark. While [6], [21], [13] rely on complex handcrafted costs, [1] learns all unary and pairwise costs that are input to a network-flow-based tracker. Moreover, the data association steps of [6], [21], [13] rely on complex optimization routines. The proposed approach is also evaluated on the KITTI Tracking evaluation server. Results on the training sequences are summarized in Table I, where we compare our two-frame approach with the other competitors: using the best-performing object detector [15] and a judicious combination of appearance, 3D, pose and shape cues, the best results on the KITTI training sequences are achieved in terms of MOTA (91.4%) and MOTP (89.84%). Although our method suffers from ID switches and fragmentations, this is typical of online trackers, and more so of two-frame greedy trackers. Using the proposed pairwise costs in a slightly more sophisticated tracker such as [6], [13] would naturally reduce ID switches and fragmentations as well. Results on the test set are summarized in Table II, where we compare our two-frame approach with the other published approaches on the KITTI Tracking online server. We outperform the next best competitor by a margin of 6% on the test set, achieving state-of-the-art results in terms of MOTA (84.24%), MOTP (85.73%), MT (73.23%) and ML (2.77%).

F. Ablation Study

We then perform a thorough ablation analysis of the various cues used for computing pairwise costs across two distinct object detectors: RRC [15] and SubCNN [16]. Results are summarized in Table III.
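As referenced in the system overview, the data-association step reduces to a linear assignment over the pairwise cost matrix. The sketch below is an illustration with made-up numbers, not the authors' code; it uses SciPy's linear-assignment solver (which returns the same optimal assignment the Hungarian algorithm would) together with a gating threshold so that poor matches are left unassigned and can start new tracks.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost_matrix, max_cost=1.0):
    """Two-frame association via minimum-cost bipartite matching.

    cost_matrix[i, j] is the combined pairwise cost between detection i in frame f
    and detection j in frame f+1 (lower is better).  Pairs whose optimal cost
    exceeds max_cost are treated as unmatched; the threshold and the weights that
    built the matrix are assumed to come from cross-validation, as in the text.
    """
    rows, cols = linear_sum_assignment(cost_matrix)          # optimal assignment
    matches = []
    unmatched_prev = set(range(cost_matrix.shape[0]))
    unmatched_next = set(range(cost_matrix.shape[1]))
    for i, j in zip(rows, cols):
        if cost_matrix[i, j] <= max_cost:                    # gate out implausible pairs
            matches.append((i, j))
            unmatched_prev.discard(i)
            unmatched_next.discard(j)
    return matches, sorted(unmatched_prev), sorted(unmatched_next)

# Toy example: 3 existing targets, 3 new detections; target 2 has no good match
C = np.array([[0.1, 0.9, 2.0],
              [0.8, 0.2, 1.5],
              [2.5, 2.2, 3.0]])
print(associate(C))   # ([(0, 0), (1, 1)], [2], [2])
```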
The ablation analysis captures the importance of each of the proposed cues and demonstrates that their combination is crucial for overall performance. Notice how each cue improves the performance of our system in terms of MOTA, ID switches and fragmentations. Even with an underperforming detector such as [16], there is a tangible performance boost from using a combination of monocular 3D cues; this is shown in the ablation analysis for the SubCNN detector in Table III. Furthermore, the performance gain from these cues is repeatable across baseline detection methods. There exist subsequences where the shape and pose cues become particularly relevant. While in a typical lane-driving scene the pose cues are not discriminative (as the vehicles are aligned with the lane direction), they become discriminative in areas such as intersections and roundabouts, where pose and viewpoint changes are heterogeneous. This is showcased in Table IV. Here, we select particular frames from the KITTI Tracking dataset containing cars moving through intersections, which capture different viewpoints and shapes of cars. Using detections from the weaker detector [16], a simple combination of 2D-2D cues with the shape and pose cues performs better than the stand-alone 2D cue on sequences in which cars appear under varied viewpoints across frames.

G. Qualitative Results

Finally, we present qualitative results from challenging sequences in Fig. 4 and Fig. 5. These results clearly indicate the ability of the proposed pairwise costs to disambiguate and track across viewpoint variations, clutter, and varying relative motion between the camera and the target. For example, the first column of Fig. 4 shows cars occluded on either side of the road accurately tracked almost to the horizon. The second column shows efficient tracking of cars at varying depths and poses at an intersection, while the third column shows precise tracking of occluding cars as well as a car that is being overtaken from the right by the ego car; in fact, in the fourth frame only a very small portion of that car is visible, yet it is accurately tracked.

H. Summary of Results

The cornerstone of this effort is that single-view monocular 3D cues, obtained through formalisms developed on the basis of single-view geometry, can be effectively exploited to track vehicles in challenging scenes. This is illustrated in the various tabulations of this section. Table I shows significant improvements over many of the current state-of-the-art methods, with a tracking accuracy in excess of 90%. We also test our approach on the KITTI Tracking online server: Table II shows significant improvements over the published approaches, with a tracking accuracy over 84%. The ablation studies in Table III showcase the repeatability of the 3D cues in improving appearance-only baseline tracking across detectors; while not as large as with the RRC baseline [15], the improvement over the SubCNN object detector [16] can also be gleaned from Table III. The improvement in ID switches and fragmentations as a consequence of the 3D cues can likewise be seen over both detector baselines. Table IV shows the relevance of the pose and shape cues over a subsequence where association costs based on such cues improve baseline performance.

VI. CONCLUSIONS

Most state-of-the-art tracking formalisms have not explored the role of 3D cues, and when they have, those cues have come from immediately available stereo depth.
This paper showed, for the first time, that monocular 3D cues obtained from single-view geometry, together with pose and shape cues, result in the best tracking performance on popular object-tracking datasets. These cues yield a set of simple, intuitive pairwise costs for multi-object tracking in a tracking-by-detection setting. Despite being more difficult to compute than readily available stereo depth data, monocular 3D cues have a role to play in diverse on-road applications, including object and vehicle tracking. Beyond the quantitative results, the qualitative results also demonstrate the advantage of these cues in challenging scenes involving considerable occlusion, objects that are barely visible, and objects distant enough to appear near the horizon. Although we demonstrated results using a simple Hungarian-method-based tracker, incorporating more sophisticated trackers would yield even higher performance gains.
Cerebral Salt-Wasting Syndrome Caused by Minor Head Injury A 34-year-old woman was admitted to hospital after sustaining a head injury in a motor vehicle accident (day 1). No signs of neurological deficit, skull fracture, brain contusion, or intracranial bleeding were evident. She was discharged without symptoms on day 4. However, headache and nausea worsened on day 8, at which time serum sodium level was noted to be 121 mEq/L. Treatment with sodium chloride was initiated, but serum sodium decreased to 116 mEq/L on day 9. Body weight decreased in proportion to the decrease in serum sodium. Cerebral salt-wasting syndrome was diagnosed. This case represents the first illustration of severe hyponatremia related to cerebral salt-wasting syndrome caused by a minor head injury. Introduction Hyponatremia resulting from cerebral salt-wasting syndrome (CSWS) can occur after severe brain injury, severe cerebrovascular disease, or surgery [1][2][3][4][5][6][7][8][9]. Hyponatremia can result in brain edema and secondary nausea, headache, altered consciousness, and sometimes death. Close monitoring of serum Na levels and immediate correction of electrolyte abnormalities are therefore necessary after severe brain damage. If left untreated without correct diagnosis, severe hyponatremia may result in seizures and worsening cerebral edema [10]. However, no previous reports have described hyponatremia of CSWS occurring after minor head injury in the absence of intracranial bleeding, skull fracture, or brain contusion. This report describes the case of a patient with minor head injury who developed severe hyponatremia due to CSWS. Case Report A 34-year-old woman with no significant past medical history sustained an injury to the right forehead in a motor vehicle accident (day 1). She was not taking any regular medications. Physical examination revealed no traumatic wounds other than a thin subcutaneous hematoma on the right forehead. She presented with headache and nausea, and Glasgow coma scale score was 14 (E3, V5, M6), but no obvious focal neurological signs were present, including amnesia. Furthermore, computed tomography (CT) of the head revealed no skull fracture, intracranial hemorrhage, or brain contusion. Complete blood cell (CBC) count and serum biochemistry revealed no abnormalities, and serum Na concentration was normal (141 mEq/L). She was hospitalized for observation under a diagnosis of brain concussion. By day 2, Glasgow coma scale score had normalized and symptoms of headache and nausea had almost resolved. On day 3, serum Na was still within the normal range but had decreased to 135 mEq/L. CBC and serum biochemistry revealed no abnormalities, and the patient was discharged without symptoms on day 4. After discharge from hospital, she began to feel severe and gradually worsening fatigue and nausea and finally presented to the emergency department on day 8. Head CT revealed no abnormal findings. Blood testing disclosed serum Na of 121 mEq/L and serum Cl of 90 mEq/L, while serum biochemistry showed no other abnormalities. Skin turgor was slightly diminished, suggesting decreased circulating plasma volume. She was therefore hospitalized for evaluation and management of hyponatremia. Treatment was initiated via intravenous saline and oral administration of salt with frequent monitoring of serum Na levels. 
Body weight was measured daily to help distinguish CSWS from the syndrome of inappropriate antidiuretic hormone secretion (SIADH). The laboratory findings, the diminished skin turgor, and the decrease in body weight indicated a diagnosis of CSWS and the absence of renal failure, thyroid dysfunction, and adrenal insufficiency. Blood testing showed serum Na of 129 mEq/L on day 11 (urine Na: 60 mEq/L) and serum Na of 136 mEq/L (urine Na: 55 mEq/L) on day 12. Serum Na subsequently remained within the normal range. On day 16, intravenous saline infusion was terminated. Fatigue and nausea resolved as serum Na concentrations increased. After day 20, body weight started to improve towards baseline. She was discharged on day 24 without any subsequent recurrence of hyponatremia. Serum BNP level on day 27 had completely normalized to 10 pg/L (reference range: 0-18 pg/L). The course of treatment is shown in Figure 1.

Discussion

CSWS is characterized by renal loss of sodium following intracranial disorders, resulting in hyponatremia and hypovolemia [11-13]. CSWS ordinarily occurs after severe brain injury, severe cerebrovascular disease, or surgery [1-9], and no previous reports have described CSWS after minor head injury. Intriguingly, lightning injury [14] and therapeutic barbiturate coma [15] may also cause CSWS. While many studies have described aspects of CSWS, the pathogenesis of renal salt wasting derived from cerebral disease is not fully understood. The most probable process involves disruption of neural inputs to the kidney and/or central production of a circulating natriuretic factor [6,7,11,12,16,17]. In addition, some authors have indicated that ANP and BNP exert biologic effects that could lead to CSWS [3,6,7,18-20]. The time from traumatic brain injury to development of CSWS can vary from 2 days to 2 months [2,3,6,18,21,22]. Regarding severity, highly invasive surgery and severe findings on head CT are associated with more severe CSWS [9,21]; in other words, CSWS is unlikely to occur in the absence of severe brain damage. Differentiating between CSWS and SIADH is critical for appropriate treatment of hyponatremia, because the therapeutic strategies for the two syndromes differ markedly. When hyponatremia is treated inappropriately, the patient is at increased risk of delayed ischemic deficits and/or osmotic demyelination leading to disability and mortality [23]. The primary distinction between CSWS and SIADH is whether the circulating blood volume is decreased or increased [6,12,24,25]. Since some CSWS cases do not present with characteristic physical findings, comprehensive judgment is frequently needed to reach a definitive diagnosis. In the present case, the serum Na concentrations, osmolality, ANP, BNP, ADH, diminished skin turgor, decreasing body weight, and prolonged natriuresis despite hyponatremia were consistent with the diagnosis of CSWS rather than SIADH. As a matter of fact, serum Na levels and symptoms were greatly improved with substantial hydration and NaCl administration. CSWS is generally caused by severe brain injury or severe cerebrovascular disease, and head CT and magnetic resonance imaging typically reveal abnormal findings. In the present case, head CT was notable for the absence of intracranial bleeding and brain contusion. This case shows that even minor head injuries can cause disruptions to the neural inputs to the kidney and/or central production of circulating natriuretic factors that eventually contribute to CSWS.
Clinicians should be aware that even minor head injury may result in CSWS, hyponatremia, and secondary symptoms.
Critical p=1/2 in percolation on semi-infinite strips We study site percolation on lattices confined to a semi-infinite strip. For triangular and square lattices we find that the probability that a cluster touches the three sides of such a system at the percolation threshold has the continuous limit 1/2 and argue that this limit is universal for planar systems. This value is also expected to hold for finite systems for any self-matching lattice. We attribute this result to the asymptotic symmetry of the separation lines between alternating spanning clusters of occupied and unoccupied sites formed on the original and matching lattice, respectively. Introduced as a model of transport through a random medium [1], percolation has attracted attention as one of the simplest, purely geometrical models with a phase transition.In its basic version it is defined on a lattice with either nodes or edges being chosen with some probability p to be "open" to transport.If one considers the limit of the system size going to infinity (also known as the continuous or thermodynamic limit), then below some critical value p c the probability that the system as a whole is permeable is 0, whereas for p > p c it is 1. Several rigorous results have been obtained for percolation so far.The concepts of matching and dual lattices were applied to predict the exact values of p c for site percolation on the triangular lattice and bond percolation on the square, triangular and honeycomb lattices [2].A rigorous proof that p c = 1/2 for bond percolation on the square lattice was given in [3].A mapping was found between a class of percolation models and corresponding models of statistical physics, most notably the Potts model [4].The values of several critical exponents as well as of so called crossing probabilities [5] were rigorously established for the site percolation on the triangular lattice [6,7], and are believed to be universal for a wide class of planar percolation models.Similar universality is believed to hold also in higher dimensions, an important ingredient of advanced numerical methods [8][9][10][11][12]. In numerical simulations of planar percolation rectangular systems are preferred.The crossing probability for such geometry is defined for p = p c as the probability that there exists a percolating cluster that spans two opposite sides of the rectangle.Pruessner and Moloney [13] considered also the probability that a cluster spans three sides of a rectangle: two long and a short one.Using extensive simulations, they conjectured that this probability, which we denote as p 3 , tends to 1/2 as the rectangle's aspect ratio r diverges to infinity, but gave no justification for this limit.This conjecture raises several questions.Could such a simple result be exact?If so, to what extent it is universal?In particular, is it valid only in the thermodynamic limit or perhaps it is also valid for some finite lattices? To answer these questions, we start from numerical analysis of the system considered in [13]; however, to get the thermodynamic limit we use extrapolation of finite-size results rather than rely on simulations of large systems.We study site percolation on a square lattice restricted to an elongated rectangle of height H and length L (lattice units), with L ≫ H. 
Using the 64bit Mersenne-Twister random number generator [14] we generate a sequence of L columns of height H, keeping in the computer memory only the last two of them.These columns have their sites marked as occupied with probability 0.592 746 050 792 10, the best known value of p c for the site percolation on the square lattice [15].The occupied sites form clusters-we identify them with the union-find algorithm [9,16], assuming free boundary conditions on all system's edges.While adding columns, we update some bits of information (specified below) related to the clusters.Even though eventually we are interested only in percolating clusters (i.e., those spanning the two longer sides), all clusters are monitored, because it is not known beforehand whether a given cluster will develop into a percolating one.This contrasts to the method used in [17], where a special cluster labeling technique was developed and the information on only the clusters reaching the last added column was kept in memory.As soon as L columns have been added, we store (append) in a disk file the information about those of the clusters that percolate.This is repeated until a preset number of percolating clusters has been generated for given H. For each percolating cluster i we store four integer parameters: its mass (i.e., the number of the sites it occupies) m(i) and parameters l 0 (i), l 1 (i), and l 2 (i), as defined in the caption for Fig. 1.We then define the cluster width w(i) ≡ l 2 (i) − l 0 (i) + 1, and the gap between two consecutive clusters, g(i) ≡ l 0 (i + 1) − l 2 (i) − 1.Clearly, w(i) ≥ 1, whereas g(i) can be positive, zero or negative.Their average, in the limit of L → ∞, will be denoted as w H and g H , respectively. We also define two additional integer parameters: If one cuts vertically the system along any of the columns contributing to γ + (i), so that this column becomes the right-hand-side edge of the system, then cluster i will be touching at least three sides of the rectangle: top, bottom and right (Fig. 1).Since only one cluster can touch three FIG. 1.An exemplary percolation cluster for H = 7. Parameters l0 and l2 are defined as the column numbers of its leftmost and rightmost sites, respectively, and l1 is the smallest column number such that the cluster between l0 and l1 percolates.The dots mark this minimal percolating cluster.The dashed line marks a line splitting the system into two. given sides of a rectangle, the intervals defining γ + (i) are mutually disjoint.Hence, if we neglect the vicinity of the system's vertical borders or consider an infinite system along the horizontal direction, any column contributes exactly once to either γ + or γ − for some cluster i.Let γ ± H be the average of γ ± (i) over all percolating clusters i in the limit of L → ∞.Then is the probability that if in a horizontally infinite strip of hight H one randomly selects a column and splits the system into two semi-infinite ones so that this column becomes the right-hand-side edge of a semi-infinite rectangle, this column will contain at least one site belonging to a vertically percolating cluster.For example, if one cuts the system shown in Fig. 
1 along the dashed vertical line, the cluster that was percolating in the original infinite system would still be percolating and this line would contribute to γ + H and, consequently, to p 3 .Selecting this line anywhere between l 1 (i) and l 2 (i) would have the same effect.However, if the cutting line was selected to the left of l 1 (i), down to l 2 (i − 1), it would split the cluster into at least two non-percolating ones or even miss any percolating cluster, and such cuttings would contribute to γ − H , or 1 − p 3 (H).We performed the simulations for systems with 2 ≤ H ≤ 1000 and L/H varying from ≈ 400 for H = 1000 to ≈ 10 7 for H = 10.The value of H is limited by the computation time, which is of order of H 2 per percolating cluster, whereas L is restricted by the available computer memory per CPU core.The number of clusters generated in our simulations varied between ≈ 2 × 10 9 for H ≥ 100 and > ∼ 5 × 10 10 for H < 100.In Fig. 2 we present a log-log plot of p 3 (H) − 1/2 as a function of H.It suggests that p 3 (H) actually converges to 1/2, as conjectured in [13], and the convergence is of power-law type, p 3 (h) − 1/2 ∝ L −ω with ω ≈ 1.To investigate the convergence, we approximate p 3 using Some authors use a similar formula with an additional term ∝ L −ω , where ω is a non-integer correction-toscaling exponent [11,12,18].However, we found no evidence that such correction is necessary for p 3 .Fitting our data to (2) with M = 3 we found that the minimum value of the regression standard error, s = χ 2 /dof (the square root of the chi-squared statistic per degree of freedom), is obtained if the fit is performed for H ≥ H min = 17, in which case s ≈ 0.6.We used H min as the lower bound for the fits.In this way we obtained a 0 = 1.4(15) × 10 −6 , a 1 = 0.3035(2), a 2 = −0.499(6), and a 3 = 0.67 (6).This is a strong evidence that p 3 (H) indeed tends to 1/2 as H → ∞, with the uncertainty of this limit, 1.5 × 10 −6 , being two orders of magnitude smaller than 4 × 10 −4 reported in [13]. The values of p 3 can be readily determined analytically for H = 1 and 2: ).Our numerical result for p 3 (2), 0.574 954 8 (12), is consistent with this formula, which indirectly validates our numerical procedure. Incipient percolating clusters are known to be fractals with their mass scaling as λ d f , where λ is a characteristic length and d f is the fractal dimension, d f = 91/48.For the model considered here, λ ≡ H and so we expect mH ≡ m H /H 2 ∝ H −5/48 .This can be used to do an additional check of correctness of our simulations.We fitted our data to obtaining b = 0.1050(4), which agrees with the expected value 5/48 ≈ 0.1042.The uncertainty of mH is signifi-FIG.3. A distribution of occupied (filled circles) and unoccupied (empty circles) sites on a triangular lattice with H = 6.Solid lines mark the boundaries between alternating regions dominated by occupied and unoccupied sites.Parameters l0, l1, and l2 characterize the percolating cluster marked with black filled circles, whereas l ′ 0 , l ′ 1 , and l ′ 2 refer to the leftadjacent cluster of unoccupied sites.cantly higher than that found for p 3 .The fit is rather poor, with the regression standard error s being of order of 10 (included into the uncertainty estimation).However, previous attempts to determine d f numerically showed that the convergence of mH to the limit H → ∞ is slow and the uncertainties reported there were even larger than ours [19,20]. 
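The finite-size extrapolation just described can be illustrated with a short fitting script. The exact form of the paper's Eq. (2) is not reproduced in the text, so the sketch below assumes a simple power series in 1/H around the limit 1/2, consistent with the four coefficients a_0, ..., a_3 quoted above; the p_3(H) values here are made up purely to show the fitting step and are not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def p3_ansatz(H, a0, a1, a2, a3):
    """Assumed finite-size form p3(H) = 1/2 + a0 + a1/H + a2/H^2 + a3/H^3,
    with a0 measuring how far the H -> infinity limit deviates from 1/2."""
    return 0.5 + a0 + a1 / H + a2 / H**2 + a3 / H**3

rng = np.random.default_rng(1)
H = np.array([17, 25, 50, 100, 250, 500, 1000], dtype=float)
# Synthetic measurements roughly consistent with the quoted coefficients, plus noise
p3 = 0.5 + 0.3035 / H - 0.499 / H**2 + 0.67 / H**3 + rng.normal(0.0, 1e-5, H.size)

popt, pcov = curve_fit(p3_ansatz, H, p3)
a0_err = np.sqrt(np.diag(pcov))[0]
print(f"extrapolated limit: {0.5 + popt[0]:.7f} +/- {a0_err:.1e}")   # close to 0.5
```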
Just as we consider clusters of occupied sites, we can also investigate clusters of unoccupied sites.A finite cluster of one species (occupied or unoccupied) is surrounded by sites occupied by the other species.Triangular lattice has a special property that all these surrounding sites are connected and hence belong to the same cluster.This property suffices to show that at any configuration, either occupied or unoccupied sites percolate and to predict the value of the percolation threshold, p c = 1/2 [2].Moreover, if we draw a line separating a cluster of occupied sites from unoccupied ones, all sites on one of its sides will belong to this cluster, whereas all sites on the other side will belong to another, single cluster of empty sites (Fig. 3).Thus, a rectangular system at or near p c is composed of alternating regions dominated by occupied or unoccupied sites.The border of each such region is determined by the border of a percolating cluster, which may surround nonpercolating clusters of the opposite species, which, in turn, may contain nonpercolating clusters of the original species and so on.A region dominated by a percolating cluster of occupied or unoccupied sites will be called "conductive" or "nonconducting", respectively. Let l 0 , l 1 , and l 2 characterize a cluster of occupied sites and l ′ 0 , l ′ 1 , and l ′ 2 -the preceding cluster of empty sites (open magenta and filled black circles in Fig. 3, respectively).Recall that l 1 is the x coordinate of the first column such that if the cluster is cut vertically right af-FIG.4. Percolation on an infinite (planar) strip generates alternating regions "dominated" by occupied (dark green) and unoccupied (light green) regions.We conjecture that at pc for any periodic lattice l2 ter column l 1 , the cluster percolates and touches three edges of the system.For the triangular lattice, l 1 is the maximum value of the x coordinate of the sites adjacent to its left-hand side border.This, in turn implies because l ′ 2 is the largest x coordinate of the empty sites adjacent to the same border from the other side.Thus, to determine the values of l ′ 2 and l 1 , it suffices to consider only the sites adjacent to the line separating the two clusters.Plugging (4) into the denominator of (1) yields l 2 (i) − l 1 (i) + 1 + l ′ 2 (i) − l ′ 1 (i − 1) + 1 , where . . .denotes the average over the clusters (i) in the thermodynamic limit.However, the occupied and unoccupied regions must have identical statistical properties at p c = 1/2, hence the two averages must be equal to each other.This implies that the site percolation on the triangular lattice is characterized by for any H.This can be independently verified for H = 1, as then p 3 = p c , and for H = 2, in which case, after some cluster counting, p 3 = (2 − p c )p 2 c /(1 − p c + p 2 c ) = 1/2.While equation ( 5) was derived for the triangular lattice, it sheds a new light onto the case of lattices that are not self-matching, including the square lattice.An infinite strip on periodic lattice at (or near) p c can be divided into altering conductive and nonconducting regions (Fig. 
4).Let us for a moment consider the square lattice.Just as for the triangular lattice, one can define the line separating two consecutive regions.The occupied sites on the border of the conductive phase will belong to the same percolating cluster.The situation in the nonconducting regions appears to be more complex unless one realizes that the unoccupied sites on the square lattice may be considered as "connected" directly if they are nearest (NN) or next-neighbor neighbors (NNN).Then, the conductive regions will be controlled by standard site percolation of occupied sites on the square lattice, whereas the nonconducting ones-by site percolation of unoccupied sites on the square lattice with NN and NNN neighbors.Within this interpretation, (4) will still hold, but (5) can no longer be taken for granted, as there is no trivial symmetry between the square NN and NN + NNN lattices.Our numerical results (Fig. 2) suggest that for the square lattice Eq. ( 5) is valid only in the limit of H → ∞: Simplicity of this limit suggests some deeper relation between the conducting and unconducting phases.Indeed, the square lattices with NN and NN + NNN connections form a pair of mutually matching graphs [2] and we conjecture that this can ba generalized to other periodic lattices: the nonconducting phase can be understood as being built on the lattice matching the lattice of the conductive phase.This is in agreement with the observation that the percolation thresholds of a lattice and its matching lattice always sum up to 1 [2]. As the system size goes to infinity, the line separating the two phases gets more and more complicated, eventually forming a fractal [21].If this line is conformally invariant in the continuous limit then it can be described as a stochastic Loewner evolution SLE 6 [22,23].Validity of this assumption was shown rigorously for the triangular lattice [7].Since the hull of an incipient percolation cluster is believed to be conformally invariant irrespective of the underlying lattice, we expect SLE 6 to be a universal description of the lines separating occupied and unoccupied "phases" in the limit of H → ∞.This limiting line is symmetric in the sense that one cannot tell which of its sides is taken by a cluster of "occupied" sites, and which by "unoccupied".Therefore we expect that the alternating regions of occupied and unoccupied "phases" are statistically indistinguishable in the limit of H → ∞.This, in turn, leads to (6) as a universal property of percolation on planar lattices.For systems built on self-matching lattices this line is symmetric also for finite system sizes, which implies the stronger condition (5) to be valid for all self-matching lattices.This symmetry, however, is broken for finite incipient spanning clusters built on a lattice that is not self-matching.As can be easily verified e.g. for the square lattice, the lines separating different, non-matching phases are asymmetric with respect to the possibility of self-touching (or forming loops without self-cutting) and touching of two adjacent lines.This explains why p 3 (H) = 1/2 for such systems. 
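The matching-lattice picture on the square lattice has a simple computational illustration (a sketch, not the authors' streaming union-find code): on a finite rectangle with free boundaries, a cluster of occupied sites spans left to right under NN connectivity exactly when no cluster of unoccupied sites spans top to bottom under NN+NNN connectivity. The grid size and seed below are arbitrary.

```python
import numpy as np
from scipy import ndimage

FOUR_CONN = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]])          # NN connectivity (original square lattice)
EIGHT_CONN = np.ones((3, 3), dtype=int)    # NN + NNN connectivity (matching lattice)

def spans(mask, structure, axis):
    """True if some cluster of `mask` touches both opposite edges along `axis`."""
    labels, _ = ndimage.label(mask, structure=structure)
    if axis == 0:
        first, last = labels[0, :], labels[-1, :]
    else:
        first, last = labels[:, 0], labels[:, -1]
    return len(set(first[first > 0]) & set(last[last > 0])) > 0

rng = np.random.default_rng(0)
grid = rng.random((64, 256)) < 0.592746    # site percolation near p_c on the square lattice

occ_spans_lr = spans(grid, FOUR_CONN, axis=1)      # occupied phase, left-right, NN
empty_spans_tb = spans(~grid, EIGHT_CONN, axis=0)  # unoccupied phase, top-bottom, NN+NNN
print(occ_spans_lr, empty_spans_tb)                # exactly one of the two is True
```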
In summary, we have presented evidence that the probability p_3 that there exists a cluster touching the three sides of a semi-infinite strip at the percolation threshold p_c has a universal continuous limit: 1/2. The convergence of p_3 for the site percolation on the square lattice at p_c is quick and robust, which suggests that it may be an effective parameter for determining p_c and related parameters numerically. We attribute the value of the limit, 1/2, to the symmetry of the separation lines between "conductive" and "nonconducting" phases, which are believed to be universally described by the SLE_6 process. We acknowledge helpful discussions with Grzegorz Kondrat.

FIG. 2 (caption): Convergence of p_3 to 1/2 as a function of the system height, H. The solid line is a fit to (2) for H ≥ 17, and the dashed line is a guide to the eye with slope −1. Inset: the difference between the simulation data and the fit.
Morphology of Nasonov and Tergal Glands in Apis mellifera Rebels Simple Summary Communication in a colony of social insects, such as the honeybee, is possible thanks to the pheromones secreted by all individuals. Pheromones are produced and secreted by the glands. Examples of such structures are Nasonov and tergal glands. Nasonov glands are characteristic of worker bees, while tergal glands are primarily found in queens. There are situations in the colony in which the queen and her pheromones are missing. In these instances, the larvae develop into rebels, which are reproductive workers. We therefore assumed that the rebels would have a reduced Nasonov gland and developed tergal glands. Our assumption turned out to be correct. These discoveries bring us closer to explaining the evolutionary formation of different castes of honeybees. Abstract Social insect societies are characterized by a high level of organization. This is made possible through a remarkably complex array of pheromonal signals produced by all members of the colony. The queen’s pheromones signal the presence of a fertile female and induce daughter workers to remain sterile. However, the lack of the queen mandibular pheromone leads to the emergence of rebels, i.e., workers with increased reproductive potential. We suggested that the rebels would have developed tergal glands and reduced Nasonov glands, much like the queen but contrary to normal workers. Our guess turned out to be correct and may suggest that the rebels are more queen-like than previously thought. The tergal gland cells found in the rebels were numerous but they did not adhere as closely to one another as they did in queens. In the rebels, the number of Nasonov gland cells was very limited (from 38 to 53) and there were fat body trophocytes between the glandular cells. The diameters of the Nasonov gland cell nuclei were smaller in the rebels than in the normal workers. These results are important for understanding the formation of the different castes of Apis mellifera females, as well as the division of labor in social insect societies. Introduction Social insect societies are characterized by a high level of organization. This is exemplified by the division of labor in reproductive bees and worker bees based on their life expectancies. This order is mediated through a remarkably complex array of pheromonal signals produced by all members of the colony and regulated by social contexts [1,2]. Pheromone signals in honeybees are often enhanced by synergy and the context in which Insects 2022, 13, 401 2 of 12 they are deployed and mediated through both temporal and spatial distribution [3][4][5][6]. Nowadays, around 50 chemical substances are known to be essential to the functioning of the society [3,7]. Evolutionary changes in chemical production have been instrumental to the emergence of interactions both within and between species, with behaviors as diverse as chemical defense, pheromonal communication and parental care relying on the transmission of information or resources embedded in chemical secretions [8]. Queen pheromones, which signal the presence of a fertile female and induce daughter workers to remain sterile, are considered to play a key role in regulating the reproductive division of labor in insect societies. 
Although queen pheromones were long thought to be highly taxon-specific, recent studies have shown that structurally related long-chain hydrocarbons act as conserved queen signals across several independently evolved lineages of social insects. These results imply that social insect queen pheromones are ancient and are likely derived from an ancestral signaling system that was present in their common solitary ancestors [9]. It is unclear whether this conservative character only applies to the compounds that comprise pheromones, or if it also applies to the morphology of the cells in which pheromones are produced and secreted. Raguso et al. [10], Tittiger [11], and Brückner and Parker [8] emphasized that knowledge of cytology, morphology and molecular mechanisms, as well as an understanding of the chemical release mechanisms of cells (the identities of molecular components that regulate subcellular exchange and secretion of chemical signals) is absent for the majority of gland cell types. Our research may help to fill this gap in our knowledge. In a honeybee Apis mellifera colony, the secretion of one pheromone stimulates the reaction and secretion of another in individuals of the same or another caste. The pheromones of mandibular (QMP) and tergal glands in queens, as well as the secretion of workers' Nasonov glands, are an example of such caste actions [12,13]. Tergal gland (Renner and Baumann glands, located on tergites II-IV) pheromones support QMP functions [14][15][16]. Secretions from the queen's mandibular and tergal glands evoke the retinue behavior of workers, as well as the effect of ovarian development inhibition in workers [17][18][19]. Moreover, the secretions from these three glands have a cohesive effect in instances of swarm clustering [20,21]. After swarming, when the old queen leaves the nest accompanied by a group of workers to establish a new colony, the remaining workers in the old nest care for the eggs, the larvae of younger workers and developing sister queens [22]. Woyciechowski et al. [23] suggested that information regarding the absence of the queen and her pheromones is transmitted via trophallaxis to worker larvae, which can then change their developmental strategy. As a result, rebels develop from the worker larvae. In contrast to the normal sterile workers, the rebels are primed to reproduce rather than participate in the rearing of the next generation of sister-queen offspring [24]. They have more ovarioles in their ovaries, as well as better developed mandibular glands and underdeveloped hypopharyngeal glands. Moreover, their ovaries are activated regardless of whether they live in queen-less or queen-right colonies [24][25][26][27]. Since the rebels are so anatomically and behaviorally different from the normal workers and more queen-like, the following questions arise: How do their gland cells function, and are they morphologically similar to the gland cells of queens or workers? Since the tergite glands are characteristic of A. mellifera queens and the Nasonov glands of workers, disturbances in these systems lead to an imbalance in reproductive dominance in the colony [28][29][30]. To answer these questions, we dissected cells from the tergal glands of rebels and compared them with those of queens and normal workers. We also dissected cells from the Nasonov glands of rebels and compared them with those of normal workers. 
Materials and Methods This study was performed at the apiary of the University of Life Sciences in Lublin, Poland (51.224039 N-22.634649 E). We used four colonies of A. m. carnica honeybees; three of them-the source colonies-were used to obtain larvae of known ages to rear normal workers and rebels and one (colony 4) was used for rearing queens. Experimental Design The queens were taken from each of the three unrelated source colonies, each of which populated two-box hives (Dadant Blatt; 20 frames; 435 × 150 mm 2 ). They were caged within a queen-excluder comb-cage containing two empty combs (C1 and C2) for 24 h, with the purpose of laying eggs. On the third day after the end of egg laying, 50 one-day-old (12-24-h-old) larvae from C1 and C2 were grafted into queen cell cups suspended vertically in the colony (No. 4, according to Büchler et al.'s [31] method). After the larvae were grafted, C1 and C2 were restored to their source colonies with the remaining larvae. Next, each of the source colonies was divided into two equal parts with each in a separate box, according to Woyciechowski and Kuszewska's [24] procedure. The first part (top box), containing the queen, workers, brood and C1, was used for rearing normal (non-rebel) workers, whereas the other part (bottom box), without a queen but with workers, brood and C2, served for rearing rebels. After sealing the larval cells in C1 and C2, the two boxes were put together again, respectively, to restore each of the three source colonies. After 15 days from the moment the eggs were laid, sealed queen cells were placed in an incubator (temperature 34.5 • C, relative humidity 60%). Soon after, the one-day-old queens were placed in mini-hives with about 200 nursing workers. The seven-day-old queens were used for the morphological analyses. After 18 days, brood combs C1 and C2 were also placed in the incubator. Freshly emerged rebels and normal workers, marked with different colors on the thorax, returned to their colonies. For the morphological analyses, 20 seven-day-old rebels and 20 seven-day-old normal workers were captured from each of the three source colonies. Morphological Analyses of the Gland Cells The Nasonov glands were dissected (Stereo Zoom Microscope Olympus SZX16; Camera: Olympus DP72; Warsaw, Poland) from each of the 60 rebels and 60 normal workers. The tergal glands from the third, fourth and fifth tergites were dissected from each of the 60 queens, as well as from each of the 60 rebels and 60 normal workers (the same as those used in the Nasonov gland preparation). Each of the glands was placed on glass slides in 0.6% natrium chloratum (pro inj.) and covered with cover-glasses. The gland cells were observed and photographed with an Olympus DP 72 camera (Microscope Olympus BX61; magnification × 40) with a DIC attachment. This method enables the undistorted visualization of living tissues (see [32]). The diameters of the gland cell nuclei were measured using the Olympus software. Examination of Anatomical Parameters In order to confirm whether the emerging bees were normal workers or rebels and verify the queen status, Woyciechowski and Kuszewska's [24] method was used to determine the number of ovarioles (ovarian tubules) in both ovaries. The highest number of tubes was found in all the dissected queens (199.6 ± 25.4). The normal workers had fewer ovarioles (5.1 ± 1.1) than the rebels (12.4 ± 1.8). 
Significant differences between these results allowed us to continue our research and compare the glands in three different groups of females.

Statistical Analysis

The results were analyzed using Statistica, version 13.3 (2017) for Windows, StatSoft Inc., USA. The mixed-model two-way ANOVA followed by the post hoc Tukey HSD test was used to compare the number of ovarioles and the diameters of the Nasonov gland cell nuclei between the rebel and normal workers, as well as the queens. The fixed effect was the phenotype of the female (queen, rebel workers, and normal workers). In order to compare the gland-cell nuclei of the tergal gland, the mixed-model three-way ANOVA was used, followed by the post hoc Tukey HSD test. The fixed effects were the phenotype of the female (queen and rebel workers) and the location of the tergal gland (GIII, tergal glands from the third tergites; GIV, tergal glands from the fourth tergites; GV, tergal glands from the fifth tergites). A rough code sketch of this type of analysis is given at the end of this section.

The Morphology of the Nasonov Gland

The Nasonov gland, located just below the intersegmental membrane between the 6th and 7th tergite of the abdomen (Figure S1), forms cells whose exit ducts are located in the duct region (Figures 1-3). In normal workers, the package of these cells was stretched to a length of about 1500-2000 µm (Figure 1a); the cells were large with a centrally located nucleus (Figures 1c, 2a and 3). Many cells (from 160 to 277) closely adhered to one another and the ducts departed from each of them (Figure 2c). On the other hand, in the rebels, the number of glandular cells was very limited (from 38 to 53), with a strand length of about 800-1000 µm (Figure 1b). Additionally, there were fat body trophocytes between the glandular cells (Figures 1d and 2b,d). The diameters of the cell nuclei were smaller in the rebels than in the normal workers (Figure 4).
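The original analysis was run in Statistica; as a rough Python analogue of the design described above (phenotype as the fixed effect, source colony as a grouping factor, followed by Tukey HSD), one could proceed as in the sketch below. The data frame, its column names and the numbers in it are made up solely to show the call pattern and are not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical rows: one per bee, with nucleus diameter (µm), phenotype and source colony.
df = pd.DataFrame({
    "diameter": [23.1, 22.4, 24.0, 23.5, 22.8, 23.9, 11.9, 11.2, 12.3, 10.8, 11.5, 12.0],
    "phenotype": ["normal"] * 6 + ["rebel"] * 6,
    "colony": [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3],
})

# Mixed model: phenotype as the fixed effect, source colony as the grouping factor.
fit = smf.mixedlm("diameter ~ C(phenotype)", data=df, groups=df["colony"]).fit()
print(fit.summary())

# Post hoc pairwise comparisons between phenotypes (Tukey HSD).
print(pairwise_tukeyhsd(endog=df["diameter"], groups=df["phenotype"], alpha=0.05).summary())
```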
The Morphology of the Tergal Glands

Packages of tergal gland cells located underneath the abdominal tergites III to V (Figure 5a) were stretched over a length of about 2500-4500 µm in the queens (Figure 5d) and about 1500-3000 µm in the rebels. Normal workers were observed to have 1-3 tergal gland cells, which were very delicate and quickly burst, making it impossible to register their images.

Figure 4 (caption): Diameters of the cell nuclei in the Nasonov and tergal glands.

The queens were found to have a lot of cells (25-32 glandular cells on the 200 µm² tissue surface) that closely adhered to one another, with a centrally located nucleus (Figure 5b). From each cell departed the outlet ducts (Figure 5c,d), from which pheromones were emitted with pulsating movements (Video 1). No differences were observed in the morphological images between the glands from the various tergites (III, IV and V), but these glandular cells differed in their nucleus diameters: the largest nuclei were found in the third tergite and the smallest ones in the fifth (Figure 4). The glandular cells in the rebels were numerous (15-21 glandular cells on the 200 µm² tissue surface) but they did not adhere as closely to one another (Figure 6a-c). Their cell nuclei were centrally located (Figure 6d) and their diameters did not differ between the third and fourth tergites. The diameters of the cell nuclei in the rebels were smaller in comparison with those in the third and fourth tergites of the queens (Figure 4).

Discussion

The lack of queen pheromones during the larval development of workers has far-reaching effects, not only on their anatomy [23] and behavior [27,33-36] but also on the morphology of the emerged rebels (Figures 1-6). Rebels are focused on their own reproduction [24]. Hence, at the stage of preimaginal development, there must already have been changes in their epigenome [37,38] which lead to the development of tergal glands and the reduction in Nasonov glands (Figures 1-6). It can be concluded that the rebels changed their life strategy in order to become as queen-like as possible and achieve personal reproductive success by avoiding worker policing [39]. Billen et al. [15] and Wossler et al. [16] reported that some workers may have tergite glands, but the number of these cells in workers was smaller than in queens.
In our experiment, we also observed 1-3 tergal gland cells in normal workers. The morphological structure of these cells was similar to that of the queens. These cells were very fragile, they quickly broke and their measurement and visualization were not possible, contrary to what was observed in the A. m. carnica queens (Figure 5). A. m. scutellata workers also possessed very few gland cells (mean ± SD; 0.9 ± 0.6), ranging in size from 255 µm² to 1327 µm², whereas A. m. capensis workers had on average ten times more cells (9.3 ± 1.7), ranging in size from 723 µm² to 2200 µm² [16]. It can be calculated that the diameters of these cells ranged from 18.02 to 41.11 µm and from 30.34 to 52.93 µm, respectively. In our experiment, we analyzed the diameters of the glandular cell nuclei as a measure of their metabolic activity. This gland assumed considerable sizes in workers with increased reproductive potential, such as rebels (Figure 6), and consisted of numerous active cells, as indicated by the diameter of the cell nuclei (Figure 4) at a mean of 11.61 µm (± 0.78; SD). It is, however, worth emphasizing that the diameters of the nuclei in the rebels did not differ between the third and fourth tergites. The largest nuclei in the queens were observed in the third tergites (32.7 ± 0.8 µm), and the smallest in the fifth tergites (10.5 ± 0.6 µm; Figure 4). This may indicate the functional adaptation and secretory specialization of these cells in queens. Most likely, not all cells in each tergite are simultaneously activated and the cycles of their metabolic activities are probably rotational.
Our research shows that the higher the reproductive potential of the female, the greater the specialization and organization of these glands, depending on the segment in which they are located (Figure 4). It is surprising that the rebels had larger cell nuclei in the fifth tergite than the queens. This observation requires further research and explanation. Okosun et al. [40,41] suggested that the workers' tergal gland secretions included the three ethyl esters (ethyl palmitate, ethyl oleate and ethyl stearate) which have both primer and releaser effects. Due to the presence of esters, the pheromone mixture is attractive for other bees and regulates reproduction, whether it is emitted by the queen [18,42,43] or by workers [41]. The queen-like glandular secretions of reproductively dominant workers allow for the determination of their reproductive dominance [28,41,44]. In queens, this dominance is very strong, due to the gland size as well as the number and types of compounds (long-chain fatty acids, long-chain esters and a series of unsaturated and saturated hydrocarbons as components); [18,43]). Since the number of components in worker pheromones is limited [41], this may explain the lack of differences in nucleus diameters between the third and fourth tergites in the rebels ( Figure 4). Thus, tergal gland secretions act synergistically with mandibular gland secretions, which are more developed in the rebels in comparison to normal workers. Moreover, the rebels have underdeveloped hypopharyngeal glands [24], which suggests low production of brood food [45] and restricted nursing activity [46]. In turn, the reduction in the number of Nasonov gland cells, overgrowing with trophocytes and the reduction in the diameter of cell nuclei (Figures 1, 2 and 4) in rebels most likely affects the composition and number of released pheromones. This may suggest a disturbance in the correct orientation of the bees. Thus, intraspecific reproductive parasitism in the rebels is perhaps not only the result of a high reproductive potential and their reproductive strategy, as suggested by Kuszewska et al. [27], but also arises from morphological and functional changes in their Nasonov glands. The limited engagement of the rebels in raising the next generation of bees could result in the reduction in their Nasonov glands (Figures 1 and 2) and pheromone concentrations, as suggested by Al-Kahtani and Bienefeld [21]. Moreover, Bortolotti and Costa [12] stated that the release of the Nasonov pheromone is only stimulated by sugar concentrations. This suggests that the Nasonov pheromone is mainly used to attract workers toward water sources and is involved in nectar source location. This may explain why the rebels display a delayed onset of foraging and a stronger tendency to collect nectar in comparison to normal workers [35]. In other words, by investing in their own egg laying and living a longer life [26] the rebels delay risky tasks, such as foraging [33,[47][48][49]. Moreover, dysfunction of the Nasonov glands in rebels can lead to functional disorders related to social immunity (e.g., the removal of dummies) [50]. These mechanisms are controlled by the juvenile hormone and vitellogenin, the concentrations of which in rebels are much higher than in normal workers [51]. 
All the above-mentioned facts regarding the morphology and anatomy of the rebels are important to clarify the evolutionary strategy of reproduction in workers, which results from the assumption of kin selection theory [52,53] and can also explain the altruistic strategies of colony members [54,55], as well as certain conflicts and behaviors between individuals in a nest [39,56]. Additionally, we wish to draw the reader's attention to the fact that the main difficulty in analyzing glandular cells, especially those from invertebrates, is their short survival rate. In most studies, such cells are fixed immediately after dissection and only then viewed under a microscope. Our work presents a pioneering approach in imaging glandular cells that can be viewed, measured, and even observed for their release of chemical compounds while maintaining their viability (detailed protocols are described in the Materials and Methods section). By using this method, we have expanded the knowledge of the morphology of the Nasonov and tergal glands in workers with increased reproductive potential, such as the rebels, which in this respect are more like queens than normal workers. Conclusions In order to become as queen-like as possible, rebel honeybees have developed tergal glands and reduced Nasonov glands. Developed tergal glands in rebels, from which pheromones are secreted, are one of the reasons for their temporary reproductive domination in the colony. Moreover, the higher the reproductive potential of the female, the greater the specialization and organization of these glands, depending on the segment in which they are located. The reduction in the number of Nasonov gland cells, overgrowing with trophocytes and the reduction in the diameter of cell nuclei most likely affect the composition and number of released pheromones and suggest a disturbance in the correct orientation of the rebels. Therefore, rebel honeybees are focused on personal reproductive success instead of performing tasks for the colony. Data Availability Statement: The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
2022-04-25T15:02:45.487Z
2022-04-22T00:00:00.000
{ "year": 2022, "sha1": "3e8228b057e8cdb1dc69ce1f2e60d12a132ad91b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4450/13/5/401/pdf?version=1650619982", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a253b786ce69bff3e16033f419e604a24055c606", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246677845
pes2o/s2orc
v3-fos-license
Resveratrol ameliorates nutritional steatohepatitis through the mmu-miR-599/PXR pathway The aim of the present study was to elucidate the effect of resveratrol on non-alcoholic steatohepatitis (NASH), and the molecular basis in mice and Hepa1-6 cells, in order to verify its therapeutic effect. C57BL/6J mice were fed a methionine-choline-deficient (MCD) diet to induce steatohepatitis and were treated with resveratrol. Mouse sera were collected for biochemical analysis and enzyme-linked immunosorbent assay, and livers were obtained for histological observation, and mmu-microRNA (miR)-599 and inflammation-related gene expression analysis. Hepa1-6 cells were treated with palmitic acid to establish a NASH cell model, and were then treated with resveratrol, or transfected with mmu-miR-599 mimic, mmu-miR-599 inhibitor or recombinant pregnane X receptor (PXR) plasmid. Subsequently, the cells were collected for mmu-miR-599 and inflammation-related gene expression analysis. Reverse transcription-quantitative polymerase chain reaction and western blotting were used to assess mmu-miR-599 expression levels, and the mRNA and protein expression levels of PXR and inflammation-related genes. The binding site of mmu-miR-599 in the PXR mRNA was verified by the luciferase activity assay. Mice fed an MCD diet for 4 weeks exhibited steatosis, focal necrosis and inflammatory infiltration in the liver. Resveratrol significantly reduced serum aminotransferase and malondialdehyde levels, and ameliorated hepatic injury. These effects were associated with reduced mmu-miR-599 expression, enhanced PXR expression, and downregulated levels of nuclear factor-κB, tumour necrosis factor-α, interleukin (IL)-1β, IL-6, NOD-like receptor family pyrin domain-containing protein 3 and signal transducer and activator of transcription 3. Administration of the mmu-miR-599 mimic inhibited PXR expression in Hepa1-6 cells, whereas the mmu-miR-599 inhibitor exerted the opposite effect. A binding site for mmu-miR-599 was identified in the PXR mRNA sequence. Furthermore, overexpression of PXR inhibited the expression of inflammatory factors in Hepa1-6 cells. The present study provided evidence for the protective role of resveratrol in ameliorating steatohepatitis through regulating the mmu-miR-599/PXR pathway and the consequent suppression of related inflammatory factors. Resveratrol may serve as a potential candidate for steatohepatitis management. Introduction Non-alcoholic steatohepatitis (NASH) is the severe stage of non-alcoholic fatty liver disease (NAFLd), which may progress further to cirrhosis and even hepatocellular carcinoma (1,2). The management of NASH is based on lifestyle changes, with a focus on reducing body weight through physical exercise and adherence to a healthy diet (3), which are hard to sustain. There is currently no effective pharmacological treatment for NASH. Bioactive food constituents are considered potential treatment approaches for NASH. Resveratrol is a natural polyphenol; it is a phytoalexin that is synthesized by a number of plants in response to damage, and can be found in grapes, berries, legumes, peanuts, tea and wine, although in the low milligram range (4). In a clinical trial of patients with NAFLd, daily administration of 500 mg resveratrol for 12 weeks significantly improved anthropometric parameters and liver injury, and decreased inflammatory markers (5). 
Resveratrol has shown promising antisteatotic, antioxidant and anti-inflammatory effects in NASH (6); however, the detailed mechanism has not been fully clarified. MicroRNAs (miRNAs/miRs) are small, endogenous, noncoding RNAs that interact with the 3'-untranslated regions (UTRs) of target mRNAs, resulting in the inhibition of translation or the promotion of mRNA degradation (7). miRNAs have been shown to be involved in the development of NAFLD (8,9). miR-599 has been reported to promote the apoptosis of papillary thyroid carcinoma cells, and to promote interleukin (IL)-1β-induced chondrocyte apoptosis and inflammation in osteoarthritis (10,11). Hepatocyte apoptosis and the inflammatory response are important pathological processes in NASH that cause hepatocyte death and liver damage, promoting disease progression (12). However, to the best of our knowledge, whether miR-599 is involved in the development of NASH remains to be determined. Pregnane X receptor (PXR) is a liver-enriched xenobiotic-responsive transcription factor that exerts anti-inflammatory effects; treatment with the PXR agonist pregnenolone 16α-carbonitrile has been reported to suppress the increased plasma transaminase activity and neutrophil infiltration in the liver of carbon tetrachloride-treated mice (13). It has also been reported that ligand-activated PXR exerts anti-inflammatory effects by antagonizing nuclear factor-κB (NF-κB) (14). Activation of the NF-κB pathway promotes the development of NASH by enhancing the release of numerous proinflammatory cytokines, such as tumour necrosis factor-α (TNF-α), IL-1β and IL-6, which further aggravate inflammatory damage in the liver (15-18). Therefore, the role of PXR in NASH is worth investigating. The present study aimed to elucidate the therapeutic role of resveratrol in NASH, and to further investigate whether mmu-miR-599, PXR and the related inflammatory genes were involved in this effect. In addition, the present study assessed whether an interaction existed between mmu-miR-599 and PXR, in order to adequately clarify the potential molecular mechanism underlying the effects of resveratrol. Materials and methods Animals and treatments. A total of 18 male C57BL/6J mice (age, 8 weeks; body weight, 20-25 g) were bred in a temperature-controlled animal facility (22±2˚C, 50-60% relative humidity) under a 12-h light-dark cycle. The mice had free access to water, and were allowed to adapt to their food and environment for 1 week before the start of the experiment. The mice were randomly divided into three groups (n=6 mice/group): i) control group mice were fed a diet supplemented with choline bitartrate (2 g/kg) and DL-methionine (3 g/kg) (ICN Biomedicals, Inc.); ii) methionine-choline-deficient (MCD) group mice were fed an MCD diet (ICN Biomedicals, Inc.); and iii) MCD + resveratrol group (RES group) mice were fed an MCD diet supplemented with resveratrol (0.4 g/kg daily; Shanghai Macklin Biochemical Co., Ltd.). The duration of the experiment was 4 weeks. During the experiment, the body weight and rate of diet consumption of the mice were recorded. Animals were euthanized via an intraperitoneal injection of pentobarbital sodium (200 mg/kg) after overnight fasting at the end of the experiment. Blood samples (0.3-0.6 ml/mouse) were collected from the femoral artery for biochemical analysis and enzyme-linked immunosorbent assay.
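Purely as an illustration (the group labels and field names below are ours, not the authors'), the feeding design described above can be summarised as a small configuration table:

```python
# Hypothetical summary of the feeding design described in the text; labels are ours.
GROUPS = {
    "Control": {"diet": "choline bitartrate 2 g/kg + DL-methionine 3 g/kg", "resveratrol_g_per_kg_day": 0.0},
    "MCD":     {"diet": "methionine-choline-deficient (MCD) diet",          "resveratrol_g_per_kg_day": 0.0},
    "RES":     {"diet": "methionine-choline-deficient (MCD) diet",          "resveratrol_g_per_kg_day": 0.4},
}
DURATION_WEEKS = 4
N_PER_GROUP = 6  # 18 mice randomised into three groups

def describe(group: str) -> str:
    g = GROUPS[group]
    dose = g["resveratrol_g_per_kg_day"]
    extra = f" + resveratrol {dose} g/kg/day" if dose else ""
    return f"{group}: {g['diet']}{extra}, {DURATION_WEEKS} weeks (n={N_PER_GROUP})"

print(describe("RES"))
```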
Livers were weighed and fixed in 10% formalin for 24 h at room temperature for histological analysis, or snap-frozen in liquid nitrogen followed by storage at -80˚C until required. All protocols and procedures were performed following the guidelines of the Hebei Committee for Care and Use of Laboratory Animals and were approved by the Animal Experimentation Ethics Committee of Hebei Medical University (Shijiazhuang, China). Cell culture and resveratrol intervention. Hepa1-6 cells (Cell Resource Center, Peking Union Medical College) were maintained in high-glucose Dulbecco's modified Eagle's medium (DMEM) supplemented with 100 U/ml penicillin, 100 µg/ml streptomycin and 10% foetal bovine serum (FBS) (all from Gibco; Thermo Fisher Scientific, Inc.). The cells were incubated at 37˚C in a humidified atmosphere containing 5% CO₂. Mmu-miR-599 transfection in Hepa1-6 cells. Untreated Hepa1-6 cells were seeded at a density of 2×10⁵/ml and the medium was replaced with fresh DMEM without FBS and antibiotics. A total of 50 nM mmu-miR-599 mimic or 100 nM mmu-miR-599 inhibitor (Thermo Fisher Scientific, Inc.) was transfected into Hepa1-6 cells using Lipofectamine® 2000 (Invitrogen; Thermo Fisher Scientific, Inc.) for gain- and loss-of-function experiments for mmu-miR-599, respectively. The same concentrations of the corresponding negative control sequences were used in the miRNA experiments. After 24 h of culture at 37˚C with the transfection mix, the cell culture medium was replaced with DMEM with 10% FBS and antibiotics (100 U/ml penicillin and 100 µg/ml streptomycin). A total of 48 h after transfection, cells were harvested by mild trypsinization and washed in phosphate-buffered saline. All experiments were repeated in triplicate. The sequences of the mmu-miR-599 mimic, mmu-miR-599 inhibitor, mimic control and inhibitor control are shown in Table I. Overexpression of PXR in Hepa1-6 cells. The full-length coding sequence of PXR was PCR-amplified using Thermo Scientific Phusion Flash High-Fidelity PCR Master Mix (Thermo Fisher Scientific, Inc.) and subcloned into the pcDNA3.1 plasmid (Invitrogen; Thermo Fisher Scientific, Inc.). The recombinant plasmid (0.5 µg) was transfected into 70% confluent Hepa1-6 cells on a 6-well plate using Lipofectamine 2000 for PXR overexpression; the empty pcDNA3.1 plasmid was used as the negative control. A total of 12 h before transfection, the cell culture medium was replaced with antibiotic-free medium. The cells were transfected at 37˚C for 6 h, then the culture medium was replaced with complete medium. After 24 h, cells were collected and subjected to subsequent experiments. The PXR expression levels were confirmed by reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis and western blotting. RT-qPCR analysis. RNA was isolated from liver tissues and cells using a Total RNA Extraction kit (Promega Corporation) according to the manufacturer's protocol. The isolated RNA was then reverse transcribed into cDNA at 25˚C for 10 min and then at 42˚C for 60 min with dNTPs (10 mM) and 5X M-MLV buffer using the M-MLV RT kit (Promega Corporation) with either miRNA-specific stem-loop primers (Promega Corporation) or oligo dT primers (Sangon Biotech Co., Ltd.). Differential RT-qPCR was performed on an ABI 7500 Real-Time PCR system (Applied Biosystems; Thermo Fisher Scientific, Inc.) using SYBR-Green master mix (Promega Corporation). The thermocycling conditions for qPCR were as follows: 95˚C for 5 min, followed by 40 cycles of 95˚C for 30 sec and 60˚C for 1 min.
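A minimal sketch (ours, not the authors' code) of the relative-quantification step described in the next passage, i.e. the 2^(-ΔΔCq) method with U6 or β-actin as the internal reference; the Cq values are invented for illustration:

```python
import statistics

def fold_change_ddcq(cq_target_sample, cq_ref_sample, cq_target_control, cq_ref_control):
    """Relative expression by the 2^(-delta-delta-Cq) method.

    Each argument is a list of quantification-cycle (Cq) values from replicate wells;
    'ref' is the internal reference gene (e.g. U6 for miRNAs, beta-actin for mRNAs).
    """
    d_cq_sample = statistics.mean(cq_target_sample) - statistics.mean(cq_ref_sample)
    d_cq_control = statistics.mean(cq_target_control) - statistics.mean(cq_ref_control)
    dd_cq = d_cq_sample - d_cq_control
    return 2.0 ** (-dd_cq)

# Invented example numbers (not data from this study):
print(fold_change_ddcq([26.1, 26.3, 26.0], [18.2, 18.1, 18.3],
                       [27.9, 28.1, 28.0], [18.0, 18.2, 18.1]))  # ~3.9-fold of the control level
```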
The relative abundance of miRNA was normalized to the small nuclear RNA U6, and the expression levels of the genes were normalized to the endogenous reference gene β-actin. The relative amounts of the miRNAs and genes were measured using the 2^(-ΔΔCq) method (21). RT-qPCR was conducted in triplicate, and the primers used are shown in Table II. Results Effect of resveratrol on inflammatory liver injury and oxidative stress in mice with NASH. As shown in Fig. 1, mice fed an MCD diet had significantly higher serum ALT and AST levels, indicating hepatic injury. Significant reductions in serum ALT and AST levels were observed following resveratrol treatment. Hepatic MDA was analysed as a marker of oxidative stress. The increased hepatic lipid peroxidation level induced by the MCD diet was significantly reduced by resveratrol (Fig. 1C). Effect of resveratrol on body weight and liver weight changes. Administration of the MCD diet caused body weight loss and an increase in the liver-to-body weight ratio in mice. Resveratrol had no effect on body weight or the liver-to-body weight ratio in NASH mice (Table III). Reversal of hepatic pathological changes after resveratrol treatment in mice with NASH. The liver sections obtained from mice in the MCD group exhibited disordered lobule structure, severe macrosteatosis, spot or focal hepatocyte necrosis, and mixed inflammatory cell infiltration (Fig. 2A). Accordingly, mice fed an MCD diet exhibited higher histological scores for liver steatosis and inflammation compared with control mice (Fig. 2B). Resveratrol administration markedly ameliorated hepatic steatosis and necrotic inflammation. Ectopic expression of mmu-miR-599 and PXR in mice. As revealed in Fig. 3, the MCD diet significantly upregulated the expression levels of mmu-miR-599, and downregulated the mRNA and protein expression levels of PXR. Mice treated with resveratrol showed reduced mmu-miR-599 expression and increased PXR expression. Effect of resveratrol on inflammation-related genes in mice with NASH. To identify the mechanism underlying the amelioration of liver injury induced by resveratrol administration, the serum and hepatic expression levels of proinflammatory factors were assessed. Serum levels of IL-1β, TNF-α and IL-6 (Fig. 4), and hepatic expression levels of NF-κB, IL-1β, TNF-α, IL-6, NLRP3 and STAT3 (Fig. 5), were increased in NASH mice. Treatment with resveratrol significantly reduced the hepatic mRNA expression levels of these inflammatory factors. Concomitant with the reduction in mRNA expression, the serum protein levels of IL-1β, TNF-α and IL-6, and the hepatic protein expression levels of NF-κB, IL-1β, TNF-α, IL-6, NLRP3 and STAT3, were also downregulated by resveratrol. Effect of resveratrol on the expression of mmu-miR-599 and PXR in Hepa1-6 cells. The regulatory effects of resveratrol on the expression of mmu-miR-599 and PXR in Hepa1-6 cells were further verified. As demonstrated in Fig. 6, resveratrol downregulated the expression levels of mmu-miR-599, but upregulated the mRNA and protein expression levels of PXR, compared with the model group. Effect of mmu-miR-599 on the expression of PXR in Hepa1-6 cells. To determine the association between mmu-miR-599 and PXR, mmu-miR-599 was regulated in Hepa1-6 cells via transfection with mmu-miR-599 mimic or inhibitor (Fig. 7), and the expression levels of PXR were detected. As shown in Fig.
8c and d, PXR mRNA and protein expression levels were increased in Hepa1-6 cells transfected with mmu-miR-599 inhibitor, whereas their expression levels were decreased in cells transfected with mmu-miR-599 mimic. Binding site of mmu-miR-599 in the PXR mRNA sequence. The binding site of mmu-miR-599 in the PXR mRNA sequence was verified by the site mutation in luciferase activity assay, which confirmed that miR-599 suppressed PXR expression at mRNA level. The predicted binding site was shown in Fig. 8A, which showed that six nucleotides in the seed region of mmu-miR-599 were complementary to bases in PXR. To determine whether mmu-miR-599 directly binds to the predicated sites in the PXR 3'UTR, a luciferase reporter assay was performed. Transfection with the mmu-miR-599 mimic significantly reduced PXR 3'UTR-dependent luciferase activity but did not affect mutant reporter luciferase activity. In addition, the mimic control had no effect on wild-type or mutant reporter luciferase activity (Fig. 8B). Discussion Feeding mice a Mcd diet is a representative and very reproducible nutritional model of NASH, which can lead to body weight loss due to methionine and choline deficiency in the diet (22,23). c57BL/6J mice fed an Mcd diet for 4 weeks rapidly and consistently developed steatohepatitis. Resveratrol administration attenuated liver injury, as evidenced by the diminished histological injury and decreased aminotransferase levels. Resveratrol has been reported to reduce hepatic lipogenesis by activating the adenosine monophosphate-activated protein kinase (AMPK)/silent information regulation-2 homologue 1 (SIRT1) pathway and consequently inhibiting adipogenesis-related genes, including sterol regulatory element-binding protein-1c, acetyl-coA carboxylase and fatty acid synthase (24)(25)(26). In addition to the anti-steatotic effect through AMPK/SIRT1 activation, resveratrol has antioxidant and anti-inflammatory effects, and these effects may act in unison, combating different pathological injuries in the pathogenesis of NASH development (5,6). Pathological accumulation of lipids in the liver can induce lipid peroxidation and excessive reactive oxygen species generation, which can lead to oxidative stress (27). It has been demonstrated that liver lipoperoxide levels are elevated in mice with steatohepatitis. Oxidative stress can mediate inflammatory recruitment directly or indirectly by activating inflammatory cytokines, which can result in cellular injury and inflammatory recruitment (27). In the present study, resveratrol attenuated oxidative stress, as evidenced by decreased MdA levels, and improved hepatic inflammation in Mcd diet-induced steatohepatitis. The anti-inflammatory effect may be the result of the decreased expression levels of the key inflammatory markers NF-κB, IL-1β, TNF-α and IL-6. Previously, miRNAs have been shown to play important roles in NASH (9). In the present study, mmu-miR-599 expression was increased in the livers of NASH model mice. miR-599 has been reported to promote the inflammatory response. Proinflammatory marker production reflected consistent inflammation, as miR-599 promoted the expression of TNF-α and IL-6 (11). In addition, the present study revealed that resveratrol downregulated the expression levels of mmu-miR-599 in NASH model mice and Hepa1-6 cells. Therefore, it was hypothesized that the anti-inflammatory effects of resveratrol may be mediated by mmu-miR-599; however, the detailed mechanisms need to be explored. 
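As background to the binding-site and luciferase experiments above, a minimal sketch of how a seed match between a miRNA and a 3'UTR can be screened computationally. The sequences below are invented placeholders and are not the real mmu-miR-599 or PXR sequences:

```python
def revcomp(rna: str) -> str:
    """Reverse complement of an RNA string written with A/U/C/G."""
    return rna.translate(str.maketrans("AUCG", "UAGC"))[::-1]

def seed_sites(mirna: str, utr: str, seed_len: int = 6) -> list:
    """0-based positions in the 3'UTR that match the reverse complement of the
    miRNA seed (taken here as nucleotides 2 to seed_len+1, a common definition)."""
    seed = mirna[1:1 + seed_len]
    site = revcomp(seed)  # the sequence a target site must contain
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

# Placeholder sequences for illustration only:
mirna = "UUGUGUCAGUUUAUCAAAC"           # NOT the real mmu-miR-599 sequence
utr = "AAACUGACACUUGCAAAGCUGACACAAU"    # NOT the real PXR 3'UTR
print(seed_sites(mirna, utr))           # -> [20], i.e. one candidate seed-match site
```

In the study itself, the predicted interaction was verified experimentally by mutating the candidate site and comparing wild-type and mutant reporter luciferase activity, as described above.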
The nucleotides of the seed sequence of miRNAs match the target mRNA, and miRNAs silence specific genes by inhibiting the translation of the target mRNA or affecting the stability of the mRNA (7). In the present study, PXR expression was decreased in the livers of mice with NASH. Resveratrol increased PXR expression in mice and Hepa1-6 cells; the effects of resveratrol on PXR were in contrast to those on mmu-miR-599. In Hepa1-6 cells, the mmu-miR-599 mimic significantly suppressed the mRNA and protein expression levels of PXR, whereas the mmu-miR-599 inhibitor induced the opposite effect. The mmu-miR-599 binding site was further detected in the mRNA sequence of PXR, indicating that the expression of PXR was inhibited by mmu-miR-599 at the transcriptional level. Therefore, the effect of resveratrol on NASH may be mediated by the mmu-miR-599/PXR pathway. To explore the downstream genes regulated by mmu-miR-599/PXR, the expression of PXR in Hepa1-6 cells was enhanced, and it was revealed that the overexpression of PXR downregulated inflammatory factors. In line with the present results, previous studies revealed that PXR can exert anti-inflammatory effects by suppressing NF-κB and inhibiting its function of regulating the expression of target genes, such as TNF-α and IL-1β (28,29). Reporter gene assays with NF-κB-binding motifs have suggested that NF-κB-dependent gene transcription can be prevented by human PXR binding with its ligand in a dose-dependent manner (30). Additionally, it has been reported that PXR may reduce Toll-like receptor 4 (TLR4) expression by decreasing its mRNA stability. [Figure legend fragment: "... TNF-α, IL-6, NLRP3 and STAT3 in the three treatment groups. Data are expressed as the mean ± SD (n=6 per group). ***P<0.001 compared with the control group; ##P<0.01 and ###P<0.001 compared with the empty plasmid group. NF-κB, nuclear factor-κB; IL, interleukin; TNF-α, tumour necrosis factor-α; NLRP3, NOD-like receptor protein 3; STAT3, signal transducer and activator of transcription 3; PXR, pregnane X receptor."] The JNK1/activator protein 1 (AP-1) pathway is another downstream pathway of TLR4. Inflammation-related genes are also regulated by the transcription factor AP-1 (33). NF-κB and AP-1 work with each other to respond to inflammatory signals. Okamura et al (13) reported that TNF-α-induced expression of the chemokine CXCL2 was suppressed by PXR. In addition, an NF-κB inhibitor or mutation of an NF-κB-binding motif partly reduced the PXR-dependent suppression of CXCL2, whereas mutation of both the NF-κB and AP-1 sites abolished this effect. Consistently, AP-1-dependent gene transcription was suppressed by PXR with a construct containing AP-1 binding motifs. Therefore, these findings indicated that PXR may exert anti-inflammatory effects by suppressing both NF-κB- and AP-1-dependent inflammatory cytokine and chemokine expression. The expression of NLRP3 was increased in NASH mice in the present study. In recent years, the NLRP3 inflammasome has been identified as an important trigger of liver inflammation in NASH (34). IL-1β signalling acts as an important contributor to liver injury resulting from NLRP3 activation (35). Notably, an interaction between NLRP3 and the TLR4 pathway has been identified. The induction of NLRP3 and pro-IL-1β depends on NF-κB; in turn, IL-1β, which binds its cognate receptor, activates NF-κB (36-38). Therefore, this interaction leads to a vicious cycle of proinflammatory signalling.
Inhibition of the NLRP3 inflammasome has been reported to reduce macrophage and neutrophil infiltration into the liver of an MCD-induced NASH model (35). The present study demonstrated that the expression of NLRP3 was inhibited by resveratrol in NASH model mice and by PXR in Hepa1-6 cells. These findings suggested that resveratrol inhibited NLRP3 expression and reduced its proinflammatory action by inducing PXR expression, and subsequently inhibiting NF-κB and IL-1β signalling, thereby relieving the inflammatory response in NASH. IL-6 is an important member in NF-κB signalling. In NASH mice, IL-6 expression was increased. Binding of IL-6 to its cellular receptor IL-6Rα (cd126) triggers the recruitment of gp130 (cd130), a transmembrane protein important for signal transduction, and results in Janus-activated kinase activation and phosphorylation of transcription factors of the STAT family, particularly STAT3 (39). IL-6 activates STAT3 and STAT3 upregulates IL-6 transcription in turn, forming an autocrine regulatory loop (40). Phosphorylated STAT3 has been reported to upregulate hepatic expression of cd14, a marker of activation of Kupffer cells, playing an important role in inflammation (41). Resveratrol administration can also inhibit activation of the STAT3 pathway, thereby resulting in suppression of Kupffer cell activation in the liver (42). In the present study, resveratrol and PXR reduced IL-6 and STAT3 expression in NASH model mice and Hepa1-6 cells, respectively. Therefore, resveratrol may improve liver inflammation through mmu-miR-599/PXR pathway-mediated inhibition of NF-κB/IL-6/STAT3 signalling in the liver. In conclusion, the present study identified that mmu-miR-599 expression was increased in NASH, and mmu-miR-599 inhibited PXR expression at the transcriptional level and then promoted the expression of inflammatory factors, thus indicating that the mmu-miR-599/PXR pathway may have an important role in the development of NASH. Resveratrol exhibited protective roles in ameliorating hepatic injury in NASH. This effect was mediated by the mmu-miR-599/PXR pathway. Therefore, resveratrol may serve as an effective therapeutic approach for steatohepatitis. However, liver fibrosis was not explored in this study. The effect of resveratrol on NASH-associated liver fibrosis should be verified in our further study.
2022-02-10T06:17:09.247Z
2022-02-08T00:00:00.000
{ "year": 2022, "sha1": "fcdd563767ac1744b84b8c41596377d236720a14", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/ijmm.2022.5102/download", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1dd18b7620c6cab1f2da96ecaa832f19b75c4f13", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
22468875
pes2o/s2orc
v3-fos-license
Partial rupture of the quadriceps muscle in a child Background Ruptures of the quadriceps femoris muscle usually occur in the middle-aged population. We present a 4-year-old patient with a partial rupture of the quadriceps femoris muscle. To our knowledge, this is the youngest patient reported with a quadriceps femoris muscle rupture. Case Presentation A 4-year-old girl was admitted to our clinic with left knee pain and limitation of knee movements. Her father reported that she felt pain while jumping on a sofa. There was no direct trauma to the thigh or knee. We located a palpable soft tissue swelling at the distal anterolateral side of the thigh. The history revealed that 10 days earlier the patient had been treated for an upper respiratory tract infection with intramuscular Clindamycin for 7 days. When we discussed the patient with her previous doctor and nurse, we learnt that the multiple daily injections might have been given to the same side of the left thigh. MRI showed a partial tear of the vastus lateralis muscle matching the injection sites. The patient was treated with a long-leg half cast for three weeks. Clinical examination and knee flexion showed good results with conservative treatment. Conclusions Multiple intramuscular injections may damage muscles and make them prone to tears during muscle contractions. Doctors and nurses must take care to inject into different parts of both thighs. Background Quadriceps muscle tears are usually seen in middle-aged and older people [1-3]. In particular, people with chronic diseases (e.g. diabetes mellitus, renal failure, and gout) are prone to quadriceps muscle ruptures [4,5]. Ruptures of the quadriceps muscle are rare in children, and only a limited number of cases have been reported in the literature [4,6-8]. We report a partial rupture of the quadriceps muscle (vastus lateralis part) in a 4-year-old girl after multiple intramuscular antibiotic injections. Case Report A 4-year-old girl was admitted to our clinic with left knee pain and limitation of knee flexion. She was holding her left leg in full extension. Her father said that she felt pain and fell down while she was jumping on a sofa. There was no history of trauma. Physical examination revealed a localized palpable soft tissue swelling at the anterolateral side of the distal left thigh. Knee flexion was restricted. On detailed history, we learnt that she had had a serious upper respiratory tract infection and had received parenteral antibiotics (intramuscular Clindamycin twice a day for 7 days). The intramuscular injections had been applied to both thighs and had ceased 10 days earlier. After consulting the nurses, we learnt that the multiple daily injections might have been given to the same area of the left thigh. Plain radiographs were unremarkable. MRI showed a partial tear of the vastus lateralis muscle matching the injection site (Figures 1 and 2). The patient was placed in a long-leg half cast for three weeks. After this period, the cast was removed. The patient was symptom-free with a full range of knee motion. Discussion Quadriceps muscle tears are not common in children. These injuries have mainly been described in middle-aged and older people. With aging and systemic illnesses (e.g. diabetes mellitus, renal diseases, obesity, gout, and rheumatoid arthritis), degeneration and weakness may occur in muscles and tendons. In adults, the weakest area of the muscle-tendon-bone structural unit with regard to tear mechanism is the myotendinous junction, but in children the weakest point is the physis. In a healthy child, an avulsion fracture is therefore more likely than a tendon rupture.
Possible complications of intramuscular injections include fibrosis and contracture [9]. Quadriceps muscle fibrosis and degeneration of muscle fibers can develop after multiple intramuscular injections. Disorganization of collagen fibers at these injection sites and weakening of muscle fibers can produce ruptures after muscle contractions and isotonic movements. The contents of the substance injected into the muscle can cause reactions and a tissue response. This reaction is associated with muscle degeneration and inflammation [9]. The number of injections is also important for muscle fiber damage. Multiple injections into the same area may increase the risk of complications. Diagnostic ultrasonography and MRI can be quite useful to confirm the diagnosis. In partial tears, MRI may also be useful to determine the extent of injury. Conclusions There are few reports in the literature of quadriceps rupture in children and adolescents, and our case is the youngest patient reported [4,6-8]. In addition, there was no relevant previous history other than the multiple intramuscular injections. When the same muscle area is injected multiple times, the muscle may weaken and become predisposed to tears during muscle contractions. We think that nurses and other health workers must be careful about intramuscular injection sites and avoid injecting repeatedly into the same areas. Furthermore, because the vastus lateralis plays an important role in the stabilization of the femoropatellar articulation, injections into this muscle should be kept to a minimum. MRI is a good choice for the exact diagnosis of such injuries in orthopedic and pediatric clinics.
2014-10-01T00:00:00.000Z
2010-09-19T00:00:00.000
{ "year": 2010, "sha1": "ef26460b039716e7b0231c1c129af436e6f5d727", "oa_license": "CCBY", "oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/1471-2474-11-214", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72e90515cabc3d121fb3aa7c02e3c3ac91c992bf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213515967
pes2o/s2orc
v3-fos-license
Effectiveness of a physical fitness model with a game approach in improving the physical fitness of students at Gajah Mada Elementary School in Medan. This study explores the effectiveness of a game-based physical fitness activity model in improving the physical fitness of students at Gajah Mada Elementary School in Medan. The game-based physical fitness activity model is offered as an alternative model for sports teachers to increase students' desire to perform various forms of physical activity as a stimulus for improving cardiorespiratory endurance, strength, endurance and flexibility. This study uses a pre-experimental methodology in the form of a one-group pre-test and post-test design; the enthusiasm of the elementary school students while following the model's movements supported this approach. The results of the pre-test and post-test were analysed using statistical methods (t-test) to determine the significance of the effect. The results of this study show that the average post-test value is greater than the average pre-test value. Thus, the physical fitness activity model for elementary school students at Gajah Mada Medan is effective for improving learning outcomes and increasing forms of physical activity, as well as the cardiorespiratory endurance, strength, endurance and flexibility of elementary school students. Introduction In general, physical fitness activities are divided into two aspects: health, where the body is in a condition free from disease; and fitness, where the body is able to carry out various daily activities optimally without excessive fatigue and still has an energy reserve. Health-related physical fitness encompasses various forms of movement exercise and activities for cardiorespiratory endurance, muscular strength and endurance, and flexibility. The researchers consider physical fitness to be very beneficial for students' daily activities. One related study, published in the International Journal of GEOMATE by Nopparak Kaesaman and Wichai Eungpinichpong, examined the acute effects of traditional Thai massage (TTM) on the recovery of basketball players, assessed through heart rate variability (HRV) and physical fitness. The results of that study showed that HRV and physical fitness increased significantly in both groups, indicating that TTM can improve the recovery of basketball players by increasing HRV [1]. It can be concluded that appropriate forms of physical fitness activity can help restore the physical condition of basketball players. The data in Table 1 show that 80% (40 of 50) of the physical education teachers stated that the physical fitness activities needed by students are conducted at school. Furthermore, 85% (42 of 50) of the teachers stated that they did not provide instruction in forms of physical fitness activity to their students. In addition, 86% (43 of 50) confirmed that they did not understand physical fitness activity models for elementary school students. Based on this preliminary data analysis, the researchers conducted the study by applying a game-based physical fitness activity model and examining its effectiveness at Gajah Mada Elementary School in Medan, Sumatra Utara.
Model of Physical Fitness Activities Based on Games for Elementary School Students The research and development models in this study are Fitness Learning Models (health related fitness models). The learning model developed believes that the success of physical education learning begins with the joy of students in physical activity. So a variety of debriefings such as skills, physical fitness, attitude, knowledge and daily prolaku are oriented towards individual pleasure and confidence. Then the description of the game-based physical fitness activity model for elementary school students used is as follows: The use of the learning development model as shown in Figure 1 is based on observational data and the results of the word review or literature which shows that there are still low physical fitness problems accompanied by the lack of development of learning models oriented to physical fitness activities related to health in Medan. Motor Learning. The terms and processes of motion learning have principles that are almost the same as the learning process and can not be separated from the understanding of learning in general. Amung Ma'mun and Yudha M. Saputra (2000: 6) argue that there are three stages in motor learning, namely: cognitive verbal stages and the process of making decisions more prominent; the stages of motion have meaning as a pattern of motion that is developed as well as possible so that students or skilled athletes; and the stage of automation means smoothing the movement so that the performance of students or athletes becomes more solid in carrying out their movements. [2] Richard A. Schimidt (1988: 346) which states that learning motion is a series of processes related to practice or experience that lead to relatively permanent changes in a person's ability to display skilled movements. [3] Based on the literacy of Amung and Schimidt, the authors conclude that elementary school students are in the middle childhood and enter the development of motor behavior. Characteristics of motor behavior, namely perfecting basic motion and awareness of motion. Looking at the characteristics of elementary school age children, in learning physical education will be able to develop the skills provided through several models of development of physical fitness activity learning combined with basic motion learning. Stage of Children's Motor Ability and Physical Characteristics Understanding the nature of growth and development in each phase of development, will provide the possibility for teachers to better treat their students. Glee Johnson (2006: 63) argues that locomotor motion is the motion of the entire body through a certain room or distance such as walking, running, jumping and so on. While non-locomotor motion is a movement where only parts of the body move such as pushing, pulling, leaning the body and so on. Manipulative motion is the movement of skills that use equipment such as throwing, catching, striking, kicking, memvoli, etc. [4] According to David L. Gallahue & John C. Ozmun (1998: 81) that the phase of children's motor development. Physical characteristics of children aged 6-8 years (class I-II) include: Slow reaction time; Active, energetic and happy with rhythmic sounds; Soft bones and easily deformed; The heart is easy in dangerous conditions; A sense of consideration and understanding develops; Coordination of eyes and hands develops, still not able to use smooth muscles properly; Erratic general health. 
The physical characteristics of the 9-10 year olds (classes III and IV) that they have include: Improved coordination in movement skills; Developing endurance; Fixed growth; Good eye and hand coordination; A bad attitude may be shown. Characteristics of physical body age 10-12 years (classes V and VI) that are owned include: Growth of the muscles of the arms and legs increases; There is awareness of the body; Boys master rough games; Height and weight growth do not vary much. [5] The researcher argues that the characteristics of elementary school children that need to be considered by physical education teachers. The basis of this understanding is needed as an understanding of the conditions in real elementary school children, then studying physical education is very well done. Students will develop very well from physical, motoric, psychological and sociological aspects if physical education learning is given appropriately according to its characteristics. and provide broadest freedom of movement for students to gain learning experiences and carry out exploration of movement. So that students will master the desired movement skills, then students will be able to improve their motion skills. Curriculum for Elementary School Student Physical education during elementary school, should prioritize the function of organ formation, thus physical education in elementary schools is obliged to develop the function of the body's movement of the body as a whole. Wilma S. Longstreet, and Harolg G. Shane (1993: 63) divided the curriculum design into four designs, namely curriculum design oriented to society, children, knowledge, and curriculum design that was electric in nature. [6] then Anthony A. Annarino et al (: 133) provide curriculum design advice which includes seven activities can be seen in Table 2 Based on both expert opinions, the implementation of learning in the physical education curriculum emphasizes aquatic curriculum design, game activity design, rhythmic activity design, self-test activity design, design of educational activities outside the classroom, design of development activities and design of recreational activities. This form of material administration of physical fitness model activities is adjusted to the learning plan in school, and can be seen in the following table: Research Methodology The objectives of this study are as follows: Test the effectiveness of game-based physical fitness activity models for Gajah Mada elementary school students in Medan. Furthermore, it is expected to be an alternative in increasing the desire of students to perform various forms of physical activity as an increase in physical fitness. This research was conducted at Gajah Mada Medan Elementary School Jln. Bunga Kenanga No. 2 Medan Sumatera Utara at the 5th grade and 6th grade elementary school as many as 40 people. This is conducted for 8 (eight) months, starting from April to November 2018. The approach and method used in this study are pre-experiment in the form of one group pretestposttest design. Analyzing the results of the pre-test and post-test using statistical methods (t-test) to find out whether there is a significant effect of the implementation of the physical fitness model. The instrument used in testing the effectiveness of this model is the ACSPFT physical fitness test (Asian Committee on Standardization of Physical Fitness Test) which is a physical fitness test in the field that has been internationally recognized and standardized in Asia. 
This test aims to determine the level of one's physical fitness. It is relatively inexpensive and easy to administer. The ACSPFT is a series of tests consisting of (1) a 50-meter run (dash/sprint) to measure speed; (2) a long jump without a run-up (standing broad jump) to measure explosive leg power; (3) a distance run of 600 meters for boys and girls under 12 years of age; (4) a body hang (boys) or flexed-elbow hang (girls) to measure static strength and endurance of the arms and shoulders; (5) a 4x10-meter shuttle run to measure agility; (6) 30 seconds of sit-ups to measure endurance of the abdominal muscles; and (7) forward flexion of the trunk to measure flexibility. Discussion and results Model implementation by testing the effectiveness of the model. The effectiveness test of the model was conducted using a pre-experimental research design in the form of a "one group pretest-posttest design". The students who were the subjects of the study were given a pre-test in the form of a physical fitness test using the Asian Committee on Standardization of Physical Fitness Test (ACSPFT) battery, then received the physical fitness activity model as the treatment, and finally took a post-test using the same test instrument. The results of the students' physical fitness pre-test and post-test are described in Table 4: the pre-test, conducted on 40 students, gave an average value of 333.20 with a standard deviation of 24.24; the range of values was 84, from a lowest score of 293 to a highest score of 376; and the total value obtained was 9995. The post-test, conducted on the same 40 students, gave an average value of 369.35 with a standard deviation of 25.04; the range was 93, from a lowest score of 321 to a highest score of 414; and the total value obtained was 11084. The students' physical fitness scores are also presented in a diagram showing the average and standard deviation of the pre-test and post-test results. Figure 3 shows the difference in average values between the two physical fitness tests: the average post-test value is higher than the average pre-test value. To establish the significance of the effect of the physical fitness activity model on the students' physical fitness, statistical testing with a t-test is necessary. Before the data were analysed, a normality test was conducted on the pre-test and post-test physical fitness data using the Lilliefors test at a significance level of α = 0.05. A summary of the calculation results is shown in Table 4. Based on the normality-test calculations shown in Table 4, the value of Lo for all of the students' physical fitness pre-test and post-test data is smaller than Lt at the significance level α = 0.05. Thus it can be concluded that all of the students' physical fitness pre-test and post-test data came from populations that were normally distributed. This conclusion implies that parametric statistical analysis can be used to test the hypothesis proposed in this study, so the first condition for testing the hypothesis has been fulfilled.
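A minimal sketch (ours; the original analysis used manual calculations with the Lilliefors test and a t-test) of how the one-group pre-test/post-test comparison described above could be reproduced with standard Python tooling; the score arrays below are simulated placeholders, not the study data:

```python
import numpy as np
from scipy import stats

# Simulated ACSPFT composite scores for n = 40 students (NOT the study data).
rng = np.random.default_rng(0)
pre = rng.normal(333.2, 24.2, size=40)
post = pre + rng.normal(36.2, 10.0, size=40)

# Normality of each set of scores (Shapiro-Wilk here as a stand-in for Lilliefors).
print(stats.shapiro(pre).pvalue, stats.shapiro(post).pvalue)

# One-group pretest-posttest design: paired (dependent-samples) t-test.
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.3g}, mean gain = {np.mean(post - pre):.1f}")
```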
More details can be described as follows: (1) the normality-test calculation using the students' physical fitness pre-test data, with a sample size of 40, gave Lh = 0.12 and Lt = 0.16 at a significance level of α = 0.05; thus, because Lh is less than Lt, it can be concluded that the students' physical fitness pre-test data as a whole come from a normally distributed population; and (2) the normality-test calculation using the students' physical fitness post-test data, with a sample size of 40, gave Lh = 0.09 and Lt = 0.16 at a significance level of α = 0.05; thus, because Lh is less than Lt, it can be concluded that the students' physical fitness post-test data as a whole also come from a normally distributed population. Having conducted the normality tests on the students' ACSPFT pre-test and post-test results, the effectiveness test was then carried out using the t-test. The steps for testing the effectiveness of applying the physical fitness activity learning model to the Gajah Mada elementary school students in Medan were completed using the t-test technique, and a summary of the calculation results can be seen in Table 5: the effectiveness test using the t-test on the difference between the pre-test and post-test ACSPFT results gave t0 = 15.55, which is greater than tt = 2.05 (at the 0.05 significance level), so the null hypothesis was rejected. It can therefore be concluded that there is a significant difference between the pre-test and post-test results of the physical fitness test. In addition, the average pre-test value (333.20) is smaller than the average post-test value (369.35). This indicates that the physical fitness activity learning model for elementary school students was effective in increasing the physical fitness of the students of the Gajah Mada elementary school in Medan. Conclusion The effectiveness of the physical fitness learning model for the Gajah Mada elementary school students in Medan, North Sumatra, can be summarized as follows. The effectiveness test of the physical fitness learning model, using the ACSPFT physical fitness test, shows that the average post-test value is greater than the average pre-test value. It was therefore concluded that the physical fitness activity model for students of the Gajah Mada elementary school, Medan, Sumatera Utara, is effective to use in improving learning outcomes and increasing forms of
2019-12-05T09:28:04.929Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "2cd4583bb375ac7254aceb2f1afae53e3d51cbca", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1387/1/012125", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1e389b51aea3bafdfdf2ae61ce48fc27c06aee28", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology", "Physics" ] }
18339455
pes2o/s2orc
v3-fos-license
Combination of a GnRH agonist with an antagonist prevents flare-up effects and protects primordial ovarian follicles in the rat ovary from cisplatin-induced toxicity: a controlled experimental animal study Background With the continuous improvement of surgery and chemotherapeutic treatments, many tumour patients increasingly achieve long-term survival and can even be completely cured. However, platinum-containing drugs, which are widely used to treat a variety of types of cancer, cause menstrual disorders and ovarian failure, which in turn lead to infertility. Thus far, gonadotropin-releasing hormone (GnRH) agonist (GnRHa) and antagonist (GnRHant) are reported to act as protective agents of the ovary in chemotherapy through the inhibition of the female gonadal axis. Nevertheless, they both have disadvantages that limit their use. GnRHa causes a flare-up effect during the first week after administration, and no long-acting GnRHant agent is available. GnRHa combined with GnRHant may prevent the flare-up effect of GnRHa and rapidly inhibit the female gonadal axis. Several clinical studies with small sample sizes have reported controversial conclusions. In this strictly controlled animal study, we investigated the advantages of combination treatment with GnRHa and GnRHant. Methods Rats aged 12 weeks were divided into six groups: Control, cisplatin (CDDP), GnRHa, GnRHant, and the short-term (sht) and long-term (lng) combinations of GnRHa and GnRHant. The last four groups received Triptorelin (1 mg/kg·d, for 14 days), Cetrorelix (0.5 mg/kg·d, for 10 days), or a combination of Triptorelin (1 mg/kg·d, for 10 days) and Cetrorelix (0.5 mg/kg·d), with Cetrorelix given for 10 days in the long-term group and for 3 days in the short-term group. The Control and CDDP groups received saline (1 ml/kg·d, for 10 days). Then, all groups apart from the Control group received cisplatin (1 mg/kg·d, for 10 days), and the Control group received another 10 days of saline as described above. Blood samples were collected to detect the serum levels of E2, LH and FSH. Observation of oestrous cyclicity was also performed after drug administration. Finally, bilateral ovaries were collected for histological study and follicle counting. Results We observed a flare-up effect in rats treated with GnRHa, but not in any of the combination groups. The percentage of normal cyclicity increased from 0% in the CDDP group to 25.0%, 33.3%, 66.7% and 41.7% in the GnRHa, GnRHant, combination (lng) and combination (sht) groups, respectively. Pretreatment with GnRHa, GnRHant, combination (lng) and combination (sht) significantly protected the primordial follicles from destruction, preserving 57.6%, 63.4%, 87.1% and 60.4% of the follicles, respectively. Conclusions The combination of a GnRH agonist with an antagonist completely prevented the flare-up effect and enhanced the protection of the ovary from cisplatin-induced gonadotoxicity in rats. Background With the continuous improvement of surgical and chemotherapeutic treatments, many tumour patients achieve long-term survival and can even be completely cured [1]. Platinum-containing anti-cancer drugs are widely used to treat a variety of cancers, including sarcomas, carcinomas, and lymphomas. Cis-diamminedichloroplatinum (cisplatin, CDDP) is the first and most representative drug in this class.
Cisplatin-based combination chemotherapy displays significant antitumor activity against cancers of the testis, ovary, head, neck and lung. The underlying mechanism is mainly chemical bonding to DNA, leading to crosslinking of the DNA, which induces cell apoptosis [2,3]. However, cisplatin may also harm the granulosa cells of the follicles, causing menstrual disorders and acute or chronic ovarian failure, resulting in infertility [4,5]. Gynaecological oncologists and researchers are faced with the challenge of protecting ovaries from damage caused by chemotherapy and improving the quality of life for patients [6,7]. Natural gonadotropin-releasing hormone (GnRH) is a short-acting decapeptide secreted by the hypothalamus, which induces the secretion of LH and FSH from the pituitary gland [8]. A modification of the 6th and 10th amino acids of GnRH results in the GnRH agonist (GnRHa), which has increased biological activity compared to natural GnRH. The inhibition of the female gonadal axis by GnRHa could reduce the damage caused to primordial follicles by chemotherapeutic agents [9]. Nevertheless, the initial flare-up effect that occurs during the first week of GnRHa treatment limits its use [10]. The GnRH antagonist (GnRHant) is another derivative of GnRH, and it has a significant protective effect on ovarian function through a stronger and more rapid inhibition of the female gonadal axis. However, the effect is short-lived, and no long-acting antagonist is available [9,11]. Several clinical studies with small sample sizes have suggested that GnRHant combined with GnRHa could prevent the flare-up effect of GnRHa and rapidly inhibit the female gonadal axis [12-15]. However, it is difficult to obtain a strict negative control in clinical studies. In the present study, using animal models, we investigated the advantages of combined treatment with GnRHa and GnRHant for weakening the GnRHa-induced flare-up effect and protecting the ovary from the damage induced by CDDP. Animals Sexually mature female Wistar rats (12 weeks old, body weight 280±20 g) were obtained from the Disease Prevention and Control Center of Hubei Province, China, and housed under specific pathogen-free conditions. The animals were kept in individual plastic cages with ad libitum access to standard feed and water in a temperature-controlled room (22±2°C) on a 12 h light, 12 h dark schedule. All of the experimental procedures were performed at the experimental animal centre of Tongji Medical College, Huazhong University of Science and Technology (HUST), China, according to international ethical guidelines and with the approval of the HUST Ethics Committee. Drug treatment A total of 72 rats with normal oestrous cycles (4-5 days) were randomly divided into six groups: Control, CDDP, GnRHa, GnRHant, combination (lng, long-term) and combination (sht, short-term). The control group received 1 ml/kg·d of saline for 20 days. The CDDP group received 1 ml/kg·d of saline for 10 days followed by 1 mg/kg·d of cisplatin (Qilu Pharmaceuticals Co., Ltd., China) for 10 days. The GnRHa group received a depot injection of 1 mg/kg of Triptorelin (Diphereline, Beaufour-Ipsen Pharmaceutical Co., Ltd., Tianjin) followed two weeks later by 1 mg/kg·d of cisplatin for 10 days. The GnRHant group received 0.5 mg/kg·d of Cetrorelix subcutaneously (Owto Biotech Inc., China) for 10 days followed by 1 mg/kg·d of cisplatin for 10 days.
The combination (lng) group received depot injection of 1 mg/kg of Triptorelin in combination with 0.5 mg/kg·d of Cetrorelix for 10days followed by cisplatin as mentioned above. The combination (sht) group was only co-treated with Cetrorelix for 3 days. All of the other agents were administered intraperitoneally. Pretreatment with the GnRH analogues was performed at 9 am every day, and cisplatin was injected at 11 am because previous studies have shown that the serum concentration of Cetrorelix reached a peak 1-2 h after injection. Estradiol (E2), Luteinising Hormone (LH) and Follicle-Stimulating Hormone (FSH) assays Blood specimens were collected on the 1st, 3rd, 5th and 8th day of the treatment schedule through the orbital vein. Serum levels of E2, LH and FSH were analysed by radioactive assay kits (Beijing North Institute of Biological Technology, China) following the manufacturer's instructions. Observation of oestrous cyclicity The oestrous cycle was monitored for 10 days after the treatment. Smears were obtained by carefully inserting a non-needle syringe into the vagina and flushing with approximately 0.2 ml saline. The washes were spotted on slides, and then the air-dried smears were stained with haematoxylin. The slides were examined under a microscope to determine the phase of the oestrous cycle. The different stages of the oestrous cycle were determined according to the predominant cell type present in the vaginal smears by light microscopy [16]. Normal oestrous cyclicity was defined as the occurrence of at least two consecutive normal oestrous cycles lasting for 4-5 days with 1-2 days of oestrus. The cycle length was defined as the number of consecutive days between the oestrus smears in the previous and the subsequent cycles. Histological study and follicle counting Bilateral rat ovaries were taken after observation of the oestrous cyclicity and fixed in 4% paraformaldehyde overnight. Then, the ovaries were embedded in paraffin and cut into 5-μm serial sections to be stained by haematoxylin and eosin (HE). Every tenth section (50 slices in average per ovary) was selected for observation under a light microscope (IX71, Olympus Optical Co. LTD, Japan). The scoring system offered by Sağsöz et al. [17], was used for histopathological evaluation of the ovarian tissues, with some modifications. The histological sections were examined for the presence of haemorrhage, cortical fibrosis, follicular atresia and blood vessel damage. The changes were scored from 0 to 3 according to their severity, where 0 represents no pathological finding, and 1, 2, and 3 represent pathological findings of < 33%, 33-66%, and > 66% of the ovary, respectively. The scores for each parameter were summed and the total scores were calculated. The follicles were classified into three stages as follows: primordial, growing (primary and secondary) and mature follicles. A primordial follicle contains a partial or complete layer of flattened granulosa cells encircling the oocyte. In the primary follicle, the oocyte is surrounded by a single layer of cuboidal granulosa cells. The secondary follicle contains multiple layers of cuboidal granulosa cells surrounding the oocyte with little or no antral space. A mature follicle contains a single large antral space adjacent to the oocyte. The summation of follicles in different stages was calculated for analysis. Statistical analysis SPSS 11.0 (SPSS Inc., Chicago, Illinois, USA) was used for statistical analysis. 
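The statistical procedures are described in the next passage; as a rough sketch (ours, using Python/scipy rather than the SPSS 11.0 routines actually used), the same comparisons could be run as follows. One-way ANOVA with pairwise follow-up serves the quantitative endpoints, and Kruskal-Wallis with Mann-Whitney U follow-up at an adjusted threshold serves the categorical ones; the counts below are simulated placeholders, not study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated primordial-follicle counts for six groups of 12 rats (NOT the study data).
groups = {name: rng.poisson(lam, size=12)
          for name, lam in [("Control", 120), ("CDDP", 45), ("GnRHa", 70),
                            ("GnRHant", 75), ("Comb-lng", 105), ("Comb-sht", 72)]}

# Quantitative data: one-way ANOVA, then pairwise follow-up tests
# (scipy has no LSD-t procedure; plain pairwise t-tests stand in for it here).
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.3g}")
for name, counts in groups.items():
    if name != "CDDP":
        print(name, "vs CDDP:", stats.ttest_ind(counts, groups["CDDP"]).pvalue)

# Categorical/ordinal data: Kruskal-Wallis, then Mann-Whitney U pairwise comparisons
# judged against an adjusted threshold (the study used alpha' = 0.0017).
h, p_kw = stats.kruskal(*groups.values())
u = stats.mannwhitneyu(groups["Comb-lng"], groups["CDDP"])
print(f"Kruskal-Wallis p = {p_kw:.3g}; Mann-Whitney p = {u.pvalue:.3g} (compare with 0.0017)")
```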
One-Way ANOVA was performed to analyse the quantitative data followed by the LSD-t test for multiple comparison. Categorical data were analysed by the Kruskal-Wallis test, and multiple comparisons were performed using the Mann-Whitney U test. All tests of significance were two-sided, and the significance value was set at <0.05, whereas α' was adjusted to 0.0017 in the Mann-Whitney U test. Results Combination pretreatment completely prevented gonadotropin flare-up Serum levels of E2 during pretreatment in the six groups are shown in Figure 1. GnRHant and both combination treatments significantly decreased the E2 level on day 3 compared to the control group (P<0.001), and GnRHa treatment slightly increased the E2 level. On day 5, the E2 level in the GnRHa group decreased. On day 8, all four treated groups had lower levels of E2, compared with the control group (P<0.001). Moreover, the combination (lng) group presented a significantly decreased level of E2 compared to that in either the GnRHa or GnRHant groups (P<0.001). No significant change was observed in FSH level. LH level significantly increased in the GnRHa group on day 3 compared with the other five groups (P<0.001). A detailed description of the FSH, LH and E2 levels in different groups at different time points is given in Table 1. Combination pretreatment partly restored oestrous cyclicity Figure 2 shows the percentage of animals with normal, prolonged or irregular cycles. There were significant differences in the distribution of the different types of cyclicity between the groups (P<0.001). Rats in the control group all showed normal (4-5 days) oestrous cycles, while treatment with CDDP significantly induced either irregular oestrus or a prolonged cycle length that was more than 7 days in the majority of the animals (58.3%) (P<0.001). This cyclic change could be partly reversed by pretreatment with GnRHa, GnRHant or a combination of both. The percentage of normal cyclicity increased from 0% to 25.0%, 33.3%, 66.7% and 41.7%, respectively. Compared to the CDDP group, the combination (lng) group showed a significantly altered distribution of cyclicity (P<0.001). No such significance was found in the GnRHa, GnRHant or the combination (sht) group, but there were more (50%) rats experiencing a slightly prolonged cycle (5-7 days) in the GnRHa group (P GnRHa =0.007, P GnRHant =0.029, P sht =0.010). Combination pretreatment preserved more primordial follicles The follicles at different stages were observed under a light microscope. In the control group, the number of follicles at various stages was present and the layers of surrounding granulosa cells were integral. In the CDDP group, the follicle structure was destroyed, resulting in a remarkable reduction in number. Obvious damage including cortical fibrosis, follicular atresia and blood vessel damage induced by cisplatin was significantly different from that in control group (P<0.001, See Additional file 1: Figure S1). None of the pretreatments could rescue the tissue damage. More primordial follicles were reserved in other groups despite the pathological changes mentioned above ( Figure 3A). The follicles in each stage were counted, as shown in Figure 3B, and the detailed numbers are listed in Table 2. The differences between groups in primordial, growth and mature follicles were significant (P<0.001). CDDP induced a significant reduction of primordial, growing, mature and total follicles compared with the control group (P<0.001), and 63.2% of the primordial follicle pool was lost. 
The GnRHa, GnRHant, combination (lng) and combination (sht) groups showed significant protection of the primordial follicles from destruction, with preservation of 57.6%, 63.4%, 87.1% and 60.4%, respectively (P<0.001).
Figure 1 Serum gonadotropin levels. The average E2 levels during pretreatment in the control, GnRHa, GnRHant, combination (lng) and combination (sht) groups. Blood samples were taken on the 1st, 3rd, 5th and 8th day. **P<0.01, compared with the control group; ##P<0.01, compared with the combination (lng) group.
In addition, the differences between the combination (lng) group and the single pretreatment groups or the combination (sht) group were also significant (P<0.001). The reduction of the growing and mature follicles was not reversed except in the GnRHant group, and the number of growing follicles in the GnRHant group differed significantly from that in the CDDP group. Discussion Chemotherapy decreases the mortality of cancer patients, but it is also associated with irreversible ovarian toxicity [18,19]. Administration of a GnRH agonist has been proposed as a non-invasive method to protect the ovarian reserve from chemotherapy, although the efficacy of this approach is still controversial [8,13,[19][20][21][22][23][24][25]. By suppressing the hypothalamic-pituitary-ovarian axis, the GnRH agonist preserves more primordial follicles, which are not vulnerable to chemotherapeutic agents [9]. The recommended pretreatment before chemotherapy requires approximately two weeks due to the flare-up of gonadotropin concentration in the first week. This flare-up effect is not acceptable in two types of patients: those suffering from rapidly progressive cancer who cannot wait for the initiation of chemotherapy, and patients for whom an elevated gonadotropin level may promote the progress of some hormone-susceptible cancers such as breast and ovarian cancer. GnRH antagonist, which is also reported to act as a protective agent, rapidly downregulates the gonadotropin level without a flare-up effect, but there are no long-acting GnRH antagonists available [9,11,[26][27][28][29]. The combination of GnRH agonist and antagonist treatment is assumed to have a rapid, long-acting suppression effect, avoiding the initial gonadotropin activation. Until now, four studies investigating the combination of these drugs have reported that the hormone changes are beyond those observed using controlled ovarian hyperstimulation protocols [13,14]. In our study, an increasing tendency was observed for E2, LH and FSH on day 3, but only the level of LH increased significantly. This outcome is possibly due to either the concentration peak not being observed at the right time or the animals not being at the same stage of oestrus at the beginning of treatment. Co-treatment with GnRHant in the long term or the short term significantly prevented the hormone flare-up, and the gonadotropin level rapidly fell to a low level within 5 days. This conflicting result could be attributed to differences between humans and rodents, the dosage and the administration strategy. Rats have a higher concentration of GnRH receptors in the ovary, and pituitary-ovarian desensitisation [19] can be completed quickly; thus, rats may show a different response to the treatment compared to humans. Another finding in our study was that the combination (lng) treatment significantly preserved more primordial follicles than observed in the GnRHa or GnRHant only groups. Danforth et al. 
reported that in a murine model, the antagonist did not protect ovaries from chemotherapyinduced toxicity, and it even depleted primordial follicles [30]. No such effect was observed in the present study, but the antagonist preserved the follicles similarly to the agonist. This result is consistent with other studies performed using the antagonist [9,11,[26][27][28][29]. The combination (lng) treatment enhanced the protective effect, which was confirmed by all of our results. The combination treatment decreased the E2 serum concentration to 96.6±5.8 pg/ml (mean±SD) on day 8, and in the GnRHa and GnRHant groups, the E2 levels were 159.6±8.5 pg/ml and 151.9 ±10.4 pg/ml, respectively. In addition, 80% of the rats in the combination group resumed a normal oestrous cycle within 10 days after chemotherapy, while in the GnRHa and GnRHant only groups, 20% and 40% resumed a normal cycle, respectively. These results indicated that the ovarian function was maximally suppressed and quickly resumed with the combination treatment. Although Johnson et al. report that the primordial follicle pool is renewable, this characteristic remains to be confirmed [31]. Therefore, quantitative measurement of follicles is the best way to evaluate the potential fertility. Consistently, the combination treatment maximally preserved the primordial follicles. The pregnancy rate was not examined because the animals were all sacrificed for histological analysis. Nevertheless, the effect of the combination (sht) treatment was not significantly different compared to that in the GnRHa or GnRHant groups. The enhanced protective effect could be attributed to the total dosage of the two drugs without any calibration, or it could be related to the synergistic reaction between them. Our work suggests two possible methods to rapidly and effectively protect the ovary from chemotherapy-induced damage: one is to add GnRHant in the short term; the other is to increase the dosage of GnRHa. The underlying mechanism requires further investigation.
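To make the comparison scheme in the Statistical analysis subsection concrete, the sketch below reproduces the same sequence of tests in Python with SciPy rather than SPSS: a one-way ANOVA with pairwise follow-up for quantitative data, and a Kruskal-Wallis test followed by pairwise Mann-Whitney U tests judged against the adjusted threshold α' = 0.0017. The follicle counts and histology scores used here are invented placeholders, not the study's data, and plain two-sample t tests stand in for the LSD-t follow-up.

# Hypothetical example data: primordial follicle counts per ovary in three of the six groups.
# These numbers are illustrative only; they are not the values reported in the study.
import numpy as np
from scipy import stats

groups = {
    "Control":         np.array([152, 148, 160, 155, 149, 158]),
    "CDDP":            np.array([ 55,  61,  49,  58,  52,  60]),
    "Combination_lng": np.array([131, 127, 138, 125, 134, 129]),
}

# Quantitative data: one-way ANOVA across groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")

# Pairwise follow-up (the paper uses the LSD-t test; plain two-sample t tests
# give the same unadjusted pairwise comparisons).
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        print(f"{names[i]} vs {names[j]}: t = {t:.2f}, P = {p:.4g}")

# Categorical/ordinal data (e.g., histology scores): Kruskal-Wallis,
# then pairwise Mann-Whitney U tests against the adjusted alpha' = 0.0017.
scores = {
    "Control": [0, 0, 1, 0, 1, 0],
    "CDDP":    [3, 2, 3, 3, 2, 3],
    "GnRHa":   [2, 2, 1, 2, 3, 2],
}
h_stat, p_kw = stats.kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_kw:.4g}")

alpha_adjusted = 0.0017
labels = list(scores)
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        a, b = labels[i], labels[j]
        u, p = stats.mannwhitneyu(scores[a], scores[b], alternative="two-sided")
        print(f"{a} vs {b}: U = {u}, P = {p:.4g}, "
              f"significant at alpha'={alpha_adjusted}: {p < alpha_adjusted}")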
Future Teachers Training to Use the Internet Network for Developing High School Students' Scientific Capabilities . Nowadays the Professional Standard of a Teacher is being implemented in the Russian higher education. Special importance is acquired to formation of scientific and methodological readiness of pedagogical specialties students to use the Internet network. Acquired skills will prove useful in a teacher’s career to develop high school students’ scientific capabilities when teaching at school. The development of indicators of scientific and methodological readiness of teachers for the above-mentioned activity and research culture of high school students as well as the design and implementation of pedagogical conditions of formation of scientific and methodological readiness of future teachers in the considered context is based on the culturological approach. The research methods include survey methods, assessment of the measure of coherence between the manifestations of the indicators of the investigated scientific and methodological readiness with the help of K. Pearson criterion. A survey of Russian teachers working in profile classes of high schools showed the following items: 53% of teachers have a reproductive or adaptive level of scientific and methodological readiness to use the Internet network for the development of scientific capabilities of their high school students which were evaluated according to three criteria: motivation, technological readiness and creative activity. The article presents materials revealing the essence of pedagogical conditions for the formation of scientific and methodological readiness of students to use the Internet for developing scientific capabilities of high school students. The authors emphasize the importance of implementing the results of pedagogical research in university practice and using them in the work of students of pedagogical specialties. The identified pedagogical conditions of formation of future teachers' scientific and methodological readiness to use the Internet network in the development of scientific capabilities of high school students serve as the basis for the emergence of new pedagogical technologies. Introduction The Internet network is actively used for high school students' education within the innovative society development. We consider the problem of the Internet network usage within high school students' scientific skills development. Such kind of development is caused by the objective processes and current requirements. In the informational environment it is observed witness decrease of knowledge value which can be gained by means of electronic networks and mail. At the same time it is sharply increasing value of abilities to select, process and use information for up-to-date problem solving as well as abilities to properly formulate informational and intellectual help inquiry for solving non-standard problems. We should pay attention to neurophysiologists' forecasts relative to the fact that the Internet stimulation, attacking sense organs, on the one hand, will adapt brain "conducting" for faster thinking, but at the same time, there can arise a threat of creative activity decrease in the result of comprehension and imagination processes reduction. Forecasts done by the English neurophysiologist S. Greenfield deserve special consideration along with the other forecasts [2]. 
She assumes that the sounding, bright and mobile screen world which changes rapidly in the result of clicking or touching the screen cannot help students to work out abstract concepts; there is the risk that ready-made technologies will turn education into passive entertainment, indistinguishable from other cyber life components, overfilled with perception. In her opinion, in the future visual perception will be more preferential than the facts. The commonplace training process may be subordinated to construction of free associations. Students will neither analyze the reasons of these or diverse events nor learn general concepts; they will fail to gain the knowledge system. By means of the Internet interaction students will acknowledge with some facts, but they can face shortage of time to consider and generate any creative ideas. At the same time international research shows that students who regularly go online have higher test scores in math and reading, learn easily and have fun (Neil Selwyn, Onno Husen) [5]. Those who use computer games are more capable in self-education and have developed spatial and global thinking. (J. Beck, М. Wade) [1]. The Professional Standard of a Teacher is being implemented in the Russian higher education. Special importance is acquired to formation of scientific and methodological readiness of pedagogical specialties students to use the Internet network. Acquired skills will prove useful in teacher's career to develop high school students' scientific capabilities when teaching at school. Methods and methodology The development of indicators of scientific and methodological readiness of teachers for the above-mentioned activity and research culture of high school students as well as design and implementation of pedagogical conditions for formation of scientific and methodological readiness of future teachers in the considered context is based on the culturological approach. The research methods include survey methods, assessment of the measure of coherence between the manifestations of the indicators of the investigated scientific and methodological readiness with the help of K. Pearson criterion. Results The high school students' scientific capabilities include three main components. They are: educational culture, research culture and future profession research orientation. Research culture of the student's personality is a basic culture component and its integrative characteristic defined by the combination of the following items: understanding of an integrated world image; abilities; scientific investigation skills and valuable attitude to the achieved outcomes, which ensure both self-determination and creative self-development of educational and research culture. Research culture expresses dominating properties of personality development, reflects universality of its connections with the environment, activates creative self-realization capabilities, determines informative activity effectiveness, promotes to application of scientific knowledge, abilities and skills in diverse fields of informative and practical activities. We defined four criteria of educational and research culture. They are: research motivation, scientific style of thinking, creative activity, technological readiness for investigation. 
Considering future profession research orientation as high school students' ability to justify research value in the course of exercising professional activity, we have determined its following components: an involvement degree in research activity; concern in exploratory research; concern in a high school science. The degree of enumerated criteria manifestation enables to judge each criteria value and then define the development level of high school students' scientific capabilities. Is the contemporary teacher ready to use the Internet network for high school students' scientific capabilities development? The teacher's scientific-methodological readiness is an integrative characteristic of the teacher's personality. It includes abilities and skills to use the Internet network with the purpose to develop students' scientific capabilities and creative activity manifestation. Teacher's task is to involve students in a productive informative activity in the Internet network and create valuable attitude to usage of the Internet technologies for their progress. A survey of 340 teachers working in high schools and assessment of the degree of coherence according to C. Pearson between the assumed indicators of the studied scientific and methodological readiness of teachers allowed us to substantiate its criteria and indicators (Table 1). 378 high school teachers from 42 regional institutions of general education took part in a computer-based testing to identify their status of scientific and methodological readiness to use the Internet network to develop scientific capabilities of high school students. The testing showed that only 5% of teachers reached the creative level. Heuristic level has 42% of testing teachers, reproductive level has 48% of teachers and adaptive level has about 5% of teachers. 
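The "measure of coherence" assessed with the K. Pearson criterion is, in practice, a chi-square test of association between indicator distributions. A minimal sketch of such a check in Python is given below; the contingency table is invented for illustration and is not the authors' survey data.

# Hypothetical contingency table: rows = readiness level (adaptive, reproductive,
# heuristic, creative), columns = two indicators being checked for coherence.
# The counts are placeholders, not the survey results reported in this article.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [12,  8],   # adaptive
    [90, 72],   # reproductive
    [70, 88],   # heuristic
    [ 6, 10],   # creative
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"Pearson chi-square = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
print("Expected frequencies under independence:")
print(expected.round(1))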
Findings of a public opinion poll of the Russian schoolteachers working in profile classes stated that they preferably emphasize the following Internet advantages for pedagogical techniques implementation: -bounds crossing due to cooperation and access to information without regard for the school location and status: access to the best libraries, museums, a possibility to listen to lectures and set problems to outstanding scientists by means of e-mail, a possibility of distant education and testing, virtual schooling, going on virtual trips and excursions; -a wide range of opportunities for choosing means and forms of education to advance in studying various subjects; -increase of students' motives in subjects under study due to visualization and interactivity which are used as updated material presentation form, enhancement of crosssubject links; -improvement of educational scientific activity; -increasing motivation to independent instruction and research work, development of critical thinking; -development of cross-instruction methods (discussion of under study subjects by means of Internet conferences, online help obtaining); -development of students' abilities and motives; -approval of students' initiative; -encouragement of students' and teachers' activity in learning new informational technologies; -improvement of social development, acquaintance with a wide range of up-to-date environment problems; -readiness for life in the informational society environment with a lifelong study orientation; -opportunities for making a dialogue with scientists, including foreign ones through computer mail, forums and chat; -information updating (scientific news, journal articles published prior to a printed version); -abundance of high quality scientific and educational information; -heuristic programs for students; -free educational resources. The level of use of the Internet in the productive cognitive activities of high school students At the same time, it is determined the facts confirming that while using the Internet networks teachers unevenly develop the high school students' scientific capabilities components. Therefore, for example, only 52% of teachers develop high school students' understanding how significant it is to carry out research work by means of the Internet network, 26 % of teachers provide their learners with solving exploratory problems technologies, 36 % of teachers train solving exploratory problems techniques. At the Pedagogical Institute appropriate pedagogical conditions were developed to form students' professional qualities within the framework of scientific and methodological readiness to use the Internet network. Among them are (1)preparation of special training manuals of how to use digital resources in the profile classes at schools; (2) intensification of students' activities on the use of Internet resources in the classroom and then in teaching practice; (3)involvement of students in research activities on the development of scientific skills of high school students using the Internet network; (4)creating situations of awareness of the Internet value; (5)using diagnostics of the future teachers' scientific and methodological readiness to use the Internet network to supply high school students' research skills. 
The pilot testing of the teaching manual for students "The Internet network in the Development of Scientific skills of Students", the teaching manual for high school students "The Internet network for the high school Learners" showed their special interest to teaching techniques of using the Internet network in organizing high school students' work with scientific texts to develop a research problem or question, formulating the topic of the study or using the network to solve research problems during the scientific and practical conference, etc. [3,4]. A number of methodological materials within the use of the Internet network in the development of scientific skills we placed on the University portal of distance courses such as "Innovative processes in education" and "Modern educational technologies". It allowed to provide prompt access to necessary information as well as to organize students' activities for writing course and diploma works, collect certain material to enrich the site on the results of teaching practice. We paid special attention to basic concepts when forming students' abilities and skills to use the Internet network. To fix the pedagogical tasks it was proposed to study the characteristics of the manifestation of the research capabilities of students, including using the codifier, in which the universal learning activities are correlated with each of its indicators. The student's activity in the classroom was to be continued in the extracurricular classes through research projects. Students' research work was multidimensional and involved the following items: analytical review of the literature and electronic sources on the problem, study of survey methods, statistical processing of experimental data on the problem under study, development of web-quests for work with high school students, reference tables, guidelines for students on using the Internet in cognitive activities. When fulfilling the research work students participated in the development of diagnostic material and software for processing the results of the survey of teachers and high school students on the use of the Internet network in cognitive activity. The materials of the students' research works were further used at lectures and practical classes by University teachers. We created situations of awareness of the value of the Internet for the development of high school students' scientific skills through reflective techniques such as "crossassociation", "mind map", "unfinished sentences", etc. We developed tasks demonstrating the use of the Web by analyzing teleconferences we created for teachers of experimental schools of the region held within the project of the Russian Gymnasium Union. To assess the formation of the components of the considered scientific and methodological readiness of students we used the method of expert assessment, content analysis of students' statements as well as computer self-diagnostics. In each of the student groups we observed an equal increase in the indicators of students' scientific and methodological readiness to use the Internet network. Such dynamics testified to the systematic implementation of the identified pedagogical conditions. 
Discussion The phenomenon of "scientific and methodological readiness of teachers to use the Internet network in the research learning of high school students" and diagnostic materials created allow us to assess the level of formation of scientific and methodological readiness to use the Internet network both in teachers and future teachers, to identify problems of its formation and further progress. Obtained within the experimental work pedagogical conditions allow designing pedagogical technologies of training future teachers to use the Internet network in the research learning of high school students and the formation of their research skills. The success of implementation of pedagogical conditions was ensured primarily by the implementation in the practice of teaching students the results of our pedagogical research, reflecting the problem of developing the research capabilities of students through active involvement of students in scientific work including the conditions during their pedagogical practice at school. Conclusions Theoretical and experimental research resulted in the following conclusions: 1.It is stated that scientific and methodological readiness of a teacher and a high school student to use the Internet network is a system formation which includes a number of teacher skills such as assess the capabilities of the Internet network to implement specific pedagogical conditions for the development of scientific skills of a high school student, to formulate specific, real and diagnostic goals of using the Internet network in developing the components of scientific skills of high school students, to determine the best ways to achieve them resulting in its creative self-realization. 2. Assessment of scientific and methodological readiness of teachers to use the Internet network for the development of scientific skills of high school students revealed that the number of teachers with a reproductive level is equal to 48%. As for adaptive level of readiness it is only 5%. 3. The development and successful testing of the system of pedagogical conditions within the experiment can be considered as the basis for the introduction of new pedagogical technologies.
Review of Batteryless Wireless Sensors Using Additively Manufactured Microwave Resonators The significant improvements observed in the field of bulk-production of printed microchip technologies in the past decade have allowed the fabrication of microchip printing on numerous materials including organic and flexible substrates. Printed sensors and electronics are of significant interest owing to the fast and low-cost fabrication techniques used in their fabrication. The increasing amount of research and deployment of specially printed electronic sensors in a number of applications demonstrates the immense attention paid by researchers to this topic in the pursuit of achieving wider-scale electronics on different dielectric materials. Although there are many traditional methods for fabricating radio frequency (RF) components, they are time-consuming, expensive, complicated, and require more power for operation than additive fabrication methods. This paper serves as a summary/review of improvements made to the additive printing technologies. The article focuses on three recently developed printing methods for the fabrication of wireless sensors operating at microwave frequencies. The fabrication methods discussed include inkjet printing, three-dimensional (3D) printing, and screen printing. Introduction Progress in the area of wireless communication is directed toward a nonstop improvement of device operations, ubiquity, and commercial viability of devices. Evolving research into millimeter wave (mm-wave) radio frequency (RF) communication operations at frequencies ranging 30-300 GHz is important for the advancement of such automation as gigabit wireless local area networks, automobile contact prevention, self-operational steering radar systems, and fine-quality beam-tuning image-processing equipment. The classic production methods for mm-wave structures comprise patterning, lithographic masking, and reproduction of materials that require the use of severe liquid materials, and are expensive. In order to enable the maintained assimilation and spread of rising mm-wave methodologies, attempts should be made to boost their adaptability and minimize the price of component manufacture. An additive electronic fabrication technique known as "inkjet printing" has been gathering immense attention for industrial uses as a greatly scalable, cost-effective, and above all, environment-friendly substitute to classical lamination-based fabrication methods [1]. Using thick/thin polymer-based and conductive nanoparticle-based ink materials, multiple layering structures for RF industry-e.g., fully printed transformers, inductors and capacitors [2][3][4]-have been realized, using inkjet-printing on stretchy materials. Through the refinement and characterization of these ink materials, multilayer RF structures operational in the millimeter-wave frequency range have been accomplished with the help of inkjet-printing manufacture on both solid and stretchy materials [5,6]. Conversely, numerous latest presentations of inkjet-printed millimeter-wave modules suffer from since the fabrication process is simple, affordable, quick, and adaptable. Screen-printed devices can be reproduced by repeating a few steps, and an optimum operating envelope can be developed quickly [29][30][31][32][33][34]. The feasibility of screen printing for flexible electronics has been demonstrated through the production of many printed sensors, electronic devices, and circuits. 
For example, all-screen-printed thin-film transistors (TFTs) have been demonstrated in [33,35,36]. Screen printing has been used to develop organic light-emitting diodes, following the investigation of the fabrication process and parameters of the screen printing solution i.e., viscosity of the solution and mesh count of the screen [37]. Multilayer high-density flexible electronic circuits, connected to embedded passive and optical devices through micro via holes, have been realized using advanced screen printing processes [32]. Screen printing is also used for patterning to develop shadow masks for the fabrication of organic TFTs. Screen-printed electrical interconnects for a temperature sensor on a polyethylene terephthalate (PET) substrate have been reported in [38]. However, in this review article, a summary/review of the developments in only printing technologies such as inkjet, 3-D and screen printing for the fabrication of batteryless wireless sensors operating at microwave frequencies is presented. Easier processing stages, minimized material waste, high speed and cost-effective substances, and simple patterning procedures render printing tools very attractive for accurate, multi-layered, and cost-effective development [39]. These features of printed electronics have allowed researchers to explore new avenues for material processing and to develop sensors and systems on non-planar surfaces that are otherwise difficult to realize with the conventional wafer-based fabrication techniques. In this paper, we compare the current developments in the three aforementioned additive printing fabrication techniques, with respect to their fabrication time, power consumption, and complexity. The design and analysis of inkjet-printed sensors are presented in Section 2. Section 3 focuses on the construction of a 3D RF sensor and its technological advantages. Section 4 comprises a review of the latest screen-printed sensors. Section 5 compares the technologies presented and summarizes this review. Inkjet Printed Sensors This section describes the advanced inkjet-printed batteryless RF sensor devices. Inkjet-printed wireless sensor systems for numerous future applications are introduced in terms of their environmental impact and performance as a sustainable technology. Inkjet-Printing on Paper Material Inkjet-printing methods have many benefits for RF sensor fabrication. Inkjet printing technology is cost-effective and environment-friendly because no hazardous chemicals are used to wash away the unwanted metals on the surface of a substrate. In this technique, nanoparticle ink is deposited at the desired position. Consequently, there are no by-products because inkjet printing is an additive fabrication method. The advantages of this technology, such as fast fabrication and ease of mass production, also reduce the cost of inkjet-printed electronics. The electrical properties of inkjet-printed silver nanoparticles were thoroughly studied in [1]. Many microwave applications utilizing silver nanoparticles have been proposed in [40][41][42]. The conductivity of the inkjet printing silver nanoparticle inks is approximately 1.12 × 10 7 S/m, which is sufficiently high for microwave or millimeter-wave applications. Inkjet printing technology can be used to fabricate various electronic devices on many kinds of materials; consequently, it is possible to utilize an environment-friendly alternative, such as paper, as a substrate. 
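To put the quoted ink conductivity of about 1.12 × 10 7 S/m in perspective, a quick skin-depth estimate shows how deeply fields penetrate a printed conductor at the frequencies discussed in this review. The calculation below is a generic textbook estimate in Python, not a result from the cited characterization studies.

# Skin depth delta = sqrt(2 / (omega * mu0 * sigma)) for a non-magnetic conductor.
import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m
sigma_ink = 1.12e7              # printed silver-nanoparticle ink, S/m (value quoted above)
sigma_cu = 5.8e7                # bulk copper, S/m, for comparison

def skin_depth(freq_hz: float, sigma: float) -> float:
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * sigma))

for f in (0.915e9, 2.45e9, 30e9):
    d_ink = skin_depth(f, sigma_ink) * 1e6   # micrometres
    d_cu = skin_depth(f, sigma_cu) * 1e6
    print(f"{f/1e9:5.2f} GHz: skin depth ink = {d_ink:5.2f} um, copper = {d_cu:5.2f} um")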
Paper is a very attractive substrate in inkjet-printed electronics for agricultural applications. Paper is a low-cost, renewable, and inkjet printable material. Apart from being one of the cheapest materials in the world, paper decomposes completely in agricultural environments. Moreover, there are many kinds of paper, such as hydrophobic, porous, and translucent paper. The hydrophilic property of normal paper is useful for implementing humidity or rainfall sensors. These sensors are widely adopted because water monitoring is crucial in agriculture. The properties of paper that are useful for inkjet printing have been reported in many studies based on measurements using different characterization techniques, such as ring resonator or T-resonator methods [1,40]. Paper has a reported dielectric constant (ε r ) of approximately 3.0 and a loss tangent (tan δ) of approximately 0.05-0.06. The relatively high loss of paper is not a critical issue for radio frequency identification (RFID) or planar structures, which have a low Q-factor, because paper is very thin. This high loss results in low interaction between the electric field (E-field) and paper substrate. RFID-Empowered Sensor RFID-empowered sensor devices have many benefits over state-of-the-art sensor components in terms of cost-effectiveness and ease of use. Typically, the price of an RFID tag is small and the structure is comparatively simple (reader and sensor tag). Thus, it is possible to realize an RFID-empowered sensor device over a large agricultural field at a low cost. RFID principles are also appropriate for current wireless sensor networks, which are easy to implement [43]. In this subsection [44], an inkjet-printed RFID-empowered sensor device for haptic and water-level recognition is proposed. The device contains two similar RFID tags for the ultra-high-frequency (UHF) band at approximately 915 MHz. The proposed sensor is combined with one of the RFID tags. The sensor is a meandering line with a self-resonant frequency of approximately 915 MHz. When a material with a dielectric constant and loss tangent different from those of air comes into contact with the meandering stripline, the capacitance of the component varies, which results in shifting of the resonating frequency of an RFID tag. Since the RFID tags have identical resonant frequencies, their unique IDs are returned at similar frequency values when they are activated by the tag reader. However, the resonant frequency of the antenna connected to the sensor device (Figure 1) is moved to a lower frequency when the detector is in contact with human skin or liquid (water in this case). Utilizing the tag without the sensor for a reference, the existence of an analyst can be easily determined. Similarly, the level of water can be identified because the variation of the capacitance of the sensor device influences the resonant frequency of the RFID-empowered structure. A metamaterial-empowered resonating element is attached to the tags for overturning crosstalk between the two tags. 
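The sensing mechanism just described, a downward shift of the tag resonance when contact with skin or water raises the capacitance of the meandered line, can be illustrated with a simple lumped LC model. The inductance and capacitance values in the sketch below are placeholders chosen only to resonate near 915 MHz; they are not the parameters of the tag in [44].

# Lumped-element view of the sensing principle: f0 = 1 / (2*pi*sqrt(L*C)).
# Touching the meander line with skin or water raises the effective capacitance,
# which pulls the resonance downward, as described in the text.
import math

def resonant_freq(l_henry: float, c_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L = 30e-9          # placeholder inductance, 30 nH
C0 = 1.009e-12     # placeholder capacitance tuned so f0 is close to 915 MHz

f0 = resonant_freq(L, C0)
print(f"Unloaded resonance: {f0/1e6:.1f} MHz")

# Illustrative loading: water or skin contact adds extra parallel capacitance.
for extra_c in (0.05e-12, 0.2e-12, 0.5e-12):
    f_loaded = resonant_freq(L, C0 + extra_c)
    shift = (f0 - f_loaded) / 1e6
    print(f"+{extra_c*1e12:.2f} pF -> {f_loaded/1e6:.1f} MHz "
          f"(downshift {shift:.1f} MHz)")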
Retro-Directive Transponder for Sensing A retro-directive antenna array can re-transmit an interrogation signal to its source without any complicated computations [45]. The Van Atta topology is a widely used retro-directive antenna array topology owing to its simple structure and passive implementation [46]. The integration of a microfluidic sensor with a retro-directive antenna array was proposed in [47,48]. The resulting device is purely passive and has a self-steering capability. The self-steering capability results in strong system performance owing to the wide readable angle of the passive transponder. This property is particularly critical in radar cross-section (RCS)-based backscattering communication applications, such as passive wireless sensor systems. For example, the back-scattered power of most passive RCS-based wireless sensors depends on the illumination angle. Retro-directive antenna arrays can be used to improve the performance of the sensor because these antenna arrays can reflect near-identical powers to the interrogation direction over a broad angle. The proposed inkjet-printed, dual-band, substrate-integrated waveguide retro-directive array, and microfluidic sensor are shown in Figure 2 [47]. 
The operation of this device suggests a potential application as a chipless RFID-enabled sensor tag operating at two different frequencies for temperature or water quality sensing. The variation of the RCS of the microfluidic sensor can be measured over a broad range owing to the retro-directive transponder. The dual-band property of the retro-directive transponder results in the ability to sense two targets at two different frequencies. Inkjet-Printed Sensor Platform A cost-efficient inkjet-printed sensor module for agricultural applications was recently proposed in [49]. The sensor platform has been improved to detect ambient moisture content, water content of the soil, and rainfall because moisture sensing is a crucial aspect of farming. The block diagram of the system, which contains a leaf sensor, soil humidity sensor, microcontroller unit, and antenna, is shown in Figure 3a. The capacitance of the leaf sensor and soil humidity sensor differ based on the water content and humidity of the soil or the environment surrounding the sensor module. The microcontroller detects variations in the capacitance of the leaf sensor and the soil moisture sensor. The microcontroller processes the data collected from the sensor and broadcasts this information through the antenna. The microcontroller and antenna can also be used to gather ambient power information to initialize the microcontroller or reduce battery consumption [50]. In contrast to traditional sensor modules, all passive elements are inkjet-printed on an eco-friendly paper substrate. Finally, dense monitoring of rainfall and soil humidity over large agricultural fields is possible owing to the advantages of inkjet printing technology, such as low fabrication cost and ease of mass production. An implementation of the sensor platform is shown in Figure 3b. The soil moisture sensor is buried in the ground to detect surface soil humidity. The leaf sensor, microcontroller, and antenna are visible. The uncovered constituents can be chemically layered with Parylene or silicone, if required, to extend the lifetime and protect the sensor platform. 
A Fully Inkjet-Printed Wireless and Chipless Sensor for Carbon Dioxide (CO2) and Temperature Detection This subsection describes a printed CO2 and temperature sensor, which utilizes different industrial inks. The sensitivity of a batteryless detector or sensor is the result of the removal of complex single-walled/polymer carbon-nanotube (SWCNT) ink material [51]. It was recently demonstrated that graphene sheets and carbon nanotubes (CNT) provide robust sensitivity to several vapors and gaseous elements [52][53][54][55], mitigating alternative substantial limitations. Prior to this study [56], a straightforward system verifying the sensitivity of the proposed device to smog, which impairs many material properties, was proposed. This section intends to evaluate the performance of the proposed sensor when subjected independently to temperature changes and CO2. Furthermore, a tangential approach regarding the multiple layers of the responsive substance has been adopted for the improvement of the sensing performance. Many specimens are tested to evaluate the reliability of reproduction. With regard to the selectiveness, the authors include the results of coating the first or topmost layer with a polymer-based ink for the sensitivity of the proposed sensor to CO2 and temperature. In the following subsection, we review the dimensions and design of a batteryless sensor device (see Figure 4), and explain its operating principles. 
The operational functionality of a batteryless RFID chip sensor device is comparable to the idea of a microchip-empowered RFID sensor without a unified analogue-to-digital converter (ADC). The observation of a dimensional criterion depends on the changes to the permittivity or conduction of a susceptible material. These variations result in the changes of the RCS of the RFID tag with respect to the frequency. Consequently, the magnitude shifts of some peaks and the resonant frequency can be sensed in the working range of the RFID tag. The electromagnetic device provides two different types of feedbacks on an equilateral basis. The electromagnetic (EM) results at one polarization are to be utilized for extracting the detected data, whereas the results at another polarization are to be utilized as a reference point for the identification of codes and calibration parameters. A similar idea was presented in [56] for smoke detection. Figure 5a,b shows currents in both scatterers when exposed to a vertically and horizontally polarized incident plane wave, respectively. From this illustration, it can be observed that only one scattering element is agitated at a particular polarization. Further, the EM response is isolated between each scattering element. Thus, variations in the degree of the detecting scatterer do not influence the degree of the corresponding referenced scatterer. The physical sizes of the scattering elements have been adjusted to operate in the 2.4-2.5 GHz band. The SRRs are square-shaped with a lateral length of approximately 18 mm. The two arms of the SRR are 6 mm apart; the value of 6 mm is selected as a compromise considering the bandwidth of resonance, size, and the highest magnitude of the e-field current distribution. The gap of the SRR is maintained at 2 mm, allowing a low strip resistance, which is favorable for the magnitude of the RCS. Undeniably, a slimmer strip dimension might result in a degradation of the performance in terms of the conductivity of a strip lithographed using silver nanoparticle ink, comparatively smaller conductivity than the printed-circuit-board etching of copper. Figure 2a,b demonstrates a distance of 9 mm between the two scattering elements, allowing sufficient decoupling of the EM response (−15 dB isolation between polarizations). A minor estrangement space might have demanded higher bandwidth for the resonating dips in one and the other polarizations, also an abridged isolation of cross-polarization. 
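The reference/sensing split across orthogonal polarizations described above suggests a simple differential readout, in which the trace measured at the sensing polarization is normalized against the reference polarization before the dip frequency and magnitude are read off. The sketch below uses synthetic resonance dips, not measured RCS data, purely to illustrate that processing step.

# Differential readout for a polarization-diverse chipless tag:
# the reference polarization calibrates out link/orientation effects,
# the sensing polarization carries the CNT-induced magnitude/frequency change.
import numpy as np

freq = np.linspace(2.3e9, 2.6e9, 301)                      # frequency sweep, Hz

def synthetic_dip(f, f0, depth_db, q=60):
    """Toy resonance dip in dB, standing in for a measured RCS trace."""
    return -depth_db / (1.0 + (2 * q * (f - f0) / f0) ** 2)

reference = synthetic_dip(freq, 2.45e9, depth_db=12)       # e.g., vertical polarization
sensing   = synthetic_dip(freq, 2.43e9, depth_db=7)        # e.g., horizontal, gas-loaded

differential = sensing - reference                          # dB-domain normalization

dip_idx = np.argmin(sensing)
print(f"Sensing dip at {freq[dip_idx]/1e9:.3f} GHz, "
      f"depth {sensing[dip_idx]:.1f} dB, "
      f"relative to reference {differential[dip_idx]:.1f} dB")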
CNT Loaded Scattering Element for Sensing A scattering element, which is denoted by "H" in Figure 4, will be used to detect information. An engraved patterning, which is composite SWCNT/poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) ink-based [57,58], is interleaved between the space/gap of an SRR, to sensitize it to changes in various electrical parameters [59]. According to earlier characterizing processes, it is known that the conductivity and resolution of the deposit are the most sensitive parameters with regard to a temperature or gas variation. Consequently, a variable resistor can model the deposit. The impedance is at a maximum in the gap. Consequently, a strong deviation in the sensing response may be observed because in the case of CNT ink, the bridging resistance of the deposit has a high value. To increase the sensitivity of the sensing device, the size of the sensitive area has to be maximized. 
Conversely, to avoid canceling the dominant resonant frequency mode of the scattering element, the authors could not deposit a large resisting pattern in the SRR space. For such a structure, the resisting lining can be modeled as a resistor in parallel with a circuit of resonance. Thus, if the bridge resistance is minimized, the quality factor (QF) of the resonance dip is reduced. Consequently, a resonant peak cannot be detected at low bridging resistances. The sensitive stripline is inserted into the SRR gap, as illustrated in Figure 4, to prevent covering a majority of space. The aim was to determine the longest path covering the large area inside the space of an SRR. Further, the delicate stripline is in the shape of a meander line. The ratio of the length and width of the route is selected in such a way that minimal bridge resistance is accomplished with respect to the following research of sensitivity. Hence, a stripline width of 0.75 mm with a route length of 54 mm is employed. Furthermore, to ensure a healthy electrical connection between the SRR and sensitive stripline, the SWCNT/PEDOT:PSS-based stripline overlays with the silver stripline toward and adjacent to the gap-space, with a surface area of 4.5 × 2 mm 2 ( Figure 4). In order to determine the minimum bridge resistance required to maximize the logarithmic and linear RCS changes, the authors performed a parametric simulation using CST Microwave Studio (CST MWS) by controlling the plate resistance of the susceptible stripline between 10 Ω/sq and 100,000 Ω/sq. They demonstrated a susceptible stripline area with zero thickness, which can be termed as an ohmic area in CST software. The simulation results for the RCS responses are illustrated in Figure 6a,b for several plate resistances, for both horizontal and vertical polarizations, respectively. Figure 5a shows the decoupling among both the polarizations, since by modifying the plate resistance of the susceptible deposit, no response was observed in the vertical polarization result. Instead, a noteworthy deviation was observed in the magnitude of the horizontal polarization result. 
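The statement that the sensing film acts as a resistor in parallel with the resonant circuit, so that a lower bridge resistance lowers the quality factor of the dip, can be reproduced with a lumped model. The element values below are placeholders chosen only to place the resonance near 2.45 GHz; they are not extracted from the CST simulations.

# Parallel RLC resonator shunted by the CNT bridge resistance R_bridge.
# Lower R_bridge -> lower loaded Q -> shallower, broader resonance dip,
# mirroring the parametric sheet-resistance sweep described in the text.
import math

L = 2.0e-9                         # placeholder inductance, H
C = 2.11e-12                       # placeholder capacitance, F (f0 near 2.45 GHz)
R_loss = 5000.0                    # placeholder parallel loss resistance of the bare SRR, ohm

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"Resonance ~ {f0/1e9:.2f} GHz")

for r_bridge in (1e1, 1e2, 1e3, 1e4, 1e5):
    r_parallel = 1.0 / (1.0 / R_loss + 1.0 / r_bridge)
    q_loaded = r_parallel * math.sqrt(C / L)   # Q of a parallel RLC = R * sqrt(C/L)
    print(f"R_bridge = {r_bridge:8.0f} ohm -> loaded Q = {q_loaded:7.1f}")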
Description of the Measurement Setup

The sensors shown in Figure 7 are inkjet-printed on a flexible 50-µm-thick polyimide film used as the substrate. Polyimide has a relative permittivity of 3.5 and a loss tangent of 0.0027. The authors used a Dimatix DMP-2831 inkjet printer for the deposition and Harima NanoPaste silver ink for the highly conductive striplines. Two layers were first printed at a resolution of approximately 635 dpi (dots per inch), followed by sintering for 70 min at 130 °C to achieve a thickness of 2 µm; the resulting sheet resistance was approximately 0.5 Ω/sq.
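As a quick arithmetic check on these printing figures, the DC resistance of a printed trace follows from its sheet resistance multiplied by the number of squares (length divided by width). The sketch below applies this to the 0.5 Ω/sq silver layer and, for comparison, to the 54 mm × 0.75 mm meander described above; the strip dimensions used for the silver line and the SWCNT/PEDOT:PSS sheet resistance are illustrative assumptions within the sweep range quoted earlier.

```python
def trace_resistance(sheet_ohm_per_sq, length_mm, width_mm):
    """DC resistance of a printed trace: sheet resistance times number of squares."""
    return sheet_ohm_per_sq * (length_mm / width_mm)

# Silver SRR strip: measured sheet resistance ~0.5 ohm/sq (strip dimensions illustrative).
print(f"silver strip (18 mm x 2 mm)   : {trace_resistance(0.5, 18, 2):.1f} ohm")

# SWCNT/PEDOT:PSS meander in the gap: 54 mm long, 0.75 mm wide (72 squares).
# The sheet resistance below is an assumed value within the 10-100,000 ohm/sq sweep.
assumed_cnt_sheet = 1_000.0
print(f"CNT meander (54 mm x 0.75 mm) : {trace_resistance(assumed_cnt_sheet, 54, 0.75):,.0f} ohm")
```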
For the sensitive conductive stripline, the authors used the composite SWCNT/PEDOT:PSS conducting ink [57,58]. The sensitive material is inkjet-printed at a resolution of 1694 dpi and cured at 30 °C for 30 min; sintering is not required, as the ink dries rapidly in the ambient atmosphere. The authors of [51] produced nine prototypes of the design illustrated in Figures 4 and 7. The dimensions of the sensitive striplines and the SRRs were kept identical, but the number of layers of the CNT-based stripline was varied between two and four.

The measurement setup in Figure 8 was used for the CO2 experiment. An airtight rectangular plastic box, large enough to enclose the sensing device as shown in Figure 8b, was used as the device-under-test chamber. The box had an inlet and an outlet fitted with check valves to prevent vapors from flowing backwards. Dry air (10% relative humidity) or a specific gas was injected into the box. According to a sensitivity study of CNTs in [52], numerous gases, e.g., NO2, NH3, and CO2, can be detected; this subsection focuses on the sensitivity of the SWCNT deposit to CO2 only. CO2 gas was injected using a quick hand-operated pump, each injection raising the CO2 concentration inside the box to approximately 20,000 ppm. A Delta Ohm HD37AB17D probe recorded the CO2 concentration and simultaneously measured the humidity and temperature. An ETS-Lindgren 3164-04 wideband dual-polarized ridged horn antenna, with a gain between 9 dBi and 12 dBi over the 3 GHz to 6 GHz range, was positioned 20 cm from the box, and its terminals were connected to an Agilent PNA E8358A vector network analyzer.
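For readers unfamiliar with RCS-based readout, the backscattered power received in such a short-range monostatic setup can be estimated with the standard radar equation. The sketch below is a generic link-budget illustration only; the transmit power, tag RCS, and exact antenna gain are assumed values, not figures from the cited experiment.

```python
import numpy as np

def backscatter_power_dbm(pt_dbm, gain_dbi, freq_hz, range_m, rcs_dbsm):
    """Monostatic radar equation: Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)."""
    lam = 3e8 / freq_hz
    pt = 10 ** (pt_dbm / 10) * 1e-3            # W
    g = 10 ** (gain_dbi / 10)
    sigma = 10 ** (rcs_dbsm / 10)              # m^2
    pr = pt * g ** 2 * lam ** 2 * sigma / ((4 * np.pi) ** 3 * range_m ** 4)
    return 10 * np.log10(pr / 1e-3)

# Assumed values: 0 dBm from the VNA port, 10 dBi horn gain, -30 dBsm tag RCS,
# 20 cm reading distance, a mid-band frequency of the horn.
print(f"{backscatter_power_dbm(0, 10, 4.5e9, 0.20, -30):.1f} dBm received")
```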
This subsection has described a flexible inkjet-printed batteryless sensing device and evaluated its sensitivity to CO2 gas and temperature [51]; an analysis of how to make the device responsive to temperature variation only was also presented. Wireless measurements of the device, exposed to a CO2 concentration of approximately 20,000 ppm, showed magnitude changes of 0.23 dB and 0.51 dB with and without a dielectric covering, respectively. The authors also observed magnitude variations of approximately 2 dB in the modified prototypes over a temperature range of 35 °C to 65 °C. The top covering film used in this work did not interfere with the temperature readings of the sensing device.
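The reported responses translate into rough sensitivities, as sketched below; the linearity assumed in these simple divisions is an illustration only, since the actual response of the CNT deposit need not be linear over the full range.

```python
# Reported magnitude changes from the CO2 and temperature experiments.
delta_db_uncovered = 0.51     # dB change at ~20,000 ppm CO2, no dielectric covering
delta_db_covered = 0.23       # dB change at ~20,000 ppm CO2, with covering
delta_db_temperature = 2.0    # dB change over 35-65 degC

print(f"CO2 (uncovered): {delta_db_uncovered / 20_000 * 1_000:.3f} dB per 1,000 ppm")
print(f"CO2 (covered)  : {delta_db_covered / 20_000 * 1_000:.3f} dB per 1,000 ppm")
print(f"temperature    : {delta_db_temperature / (65 - 35):.3f} dB per degC")
```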
3D Printed Sensors

3D printers have been in use for approximately 35 years. With 3D printing, objects are built layer by layer from detailed digital models [24], and the technique is increasingly used for device fabrication. Freely available software, low-cost 3D printers, and inexpensive printing supplies have made this approach immensely popular. The benefits of 3D printing include rapid fabrication and the ability to build complex structures from more than one material, which is a limitation of classic fabrication techniques, and it has become evident that there will be a proliferation of 3D-printed applications in the near future. A Hyrel [60] System 30 3D printer is illustrated in Figure 9. The printer runs Repetrel, a modified version of the Repetier controller software, which employs the common slicing computer-aided design software Slic3r [61,62].

Novel Strain Sensor Based on 3D Printing Technology and 3D Antenna Design

The first 3D-printed stretchable RF strain sensing device is discussed here [18]. The RF response of NinjaFlex, a popular 3D-printing material, was characterized, and a 3D antenna was designed and constructed using this material together with stretchable electrically conductive adhesives (ECAs). These materials hold considerable promise for future 3D-printed RF applications, e.g., wearable RF components and flexible 3D sensing devices. The NinjaFlex filament was introduced by Fenner Drives, Inc. in 2014 as one of the latest commercial 3D-printing materials [63]. NinjaFlex is a thermoplastic elastomer (TPE) composed of thermoplastic and rubber [64]. The properties of TPEs potentially allow 3D printing to spread to new domains, such as wearable antennas and RF electronics, owing to their elasticity and high flexibility, and NinjaFlex has been used in a variety of projects since its introduction [65,66]. This part of the review discusses a batteryless strain sensor based on 3D-printed NinjaFlex and stretchable ECAs.

3D Antenna Design

The dimensions of the dipole structure are shown in Figure 10. The dielectric body is a 30 mm × 30 mm × 30 mm hollow cube made of NinjaFlex (dielectric constant of 2.98 and loss tangent of 0.06 at 2.4 GHz). The dipole antenna, with two perpendicular arms, is formed using ECAs. As shown in Figure 10a, the dipole is positioned on the top face of the hollow cube and bends onto two of the side faces: the feeding point is at the midpoint of the top face, and the two arms extend to the edges of the top face and continue along the two vertical faces. A similar dipole array geometry was also presented recently [67]. This 3D structure facilitates simple quantitative analysis of the changes to the antenna topology caused by strain.
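To see why the dipole arms must fold over the side faces of the 30 mm cube, a rough half-wavelength estimate is useful. The sketch below assumes operation near 2.4 GHz (the frequency at which the NinjaFlex properties are quoted) and an effective permittivity between that of air and the bulk material; both assumptions are illustrative, since the excerpt does not state the exact operating frequency or dipole length.

```python
import numpy as np

c = 3e8
f0 = 2.4e9                      # assumed operating frequency (NinjaFlex characterized here)
eps_ninjaflex = 2.98

for eps_eff in (1.0, (1.0 + eps_ninjaflex) / 2, eps_ninjaflex):
    half_wave_mm = c / (2 * f0 * np.sqrt(eps_eff)) * 1e3
    print(f"eps_eff = {eps_eff:.2f} -> half-wave dipole length ~ {half_wave_mm:.1f} mm")

# Even the most heavily loaded estimate exceeds the 30 mm top face,
# which is consistent with the arms bending onto the side faces.
```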
The NinjaFlex structure contains a hollow cube at its center. This design improves the quality of the printed NinjaFlex and enables easy stretching of the part of the dipole antenna on the front surface, as shown in Figure 10. The directions in which strain is applied are indicated in Figure 10b.

3D-Printed Strain Sensor Prototype

First, the cube and the antenna traces were 3D-printed using a Hyrel System 30 3D printer and NinjaFlex filament. The antenna traces were then filled with the ECAs, following a design prepared in the ANSYS High Frequency Structure Simulator (HFSS). An impedance-matched balun was added between the two antenna traces to connect to a sub-miniature version A (SMA) connector for insertion loss tests, as shown in Figure 11.
Strain Experiment and Results

Strain was applied to the front and rear faces of the cube-shaped box (Figure 10b), and the change in the resonant frequency of the antenna was observed. Two different strain levels were applied to the box during the experiment. The measured results are illustrated by the solid lines in Figure 12: the center frequency shifted by 30 MHz and 50 MHz after applying strain-1 and strain-2, respectively. Owing to the 3D structure of the box, the most noticeable change in antenna length occurs at the front; when strain is applied, this part of the structure stretches owing to the NinjaFlex, and the resonant frequency of the antenna decreases as anticipated. In [18], a 3D antenna for use as a strain sensor was thus designed and constructed using stretchable ECAs and NinjaFlex. This technology offers considerable prospects for future 3D-printed RF equipment, e.g., wearable RF devices and 3D stretchable sensor modules.
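Because a dipole's resonant frequency scales inversely with its electrical length, the measured shifts can be converted into a rough elongation estimate, as sketched below. The nominal resonant frequency used here is an assumption (the excerpt reports only the shifts), and the first-order relation Δf/f ≈ −ΔL/L is a generic approximation, not the authors' calibration.

```python
f0 = 2.4e9                      # assumed unstrained resonant frequency, Hz
for label, shift_hz in (("strain-1", 30e6), ("strain-2", 50e6)):
    # f ~ 1/L  =>  dL/L ~ -df/f to first order
    elongation_pct = shift_hz / f0 * 100
    print(f"{label}: {shift_hz / 1e6:.0f} MHz downward shift ~ {elongation_pct:.1f}% elongation")
```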
Microfluidic Sensor Constructed on a Flexible Kapton Material for Measurement of the Complex Permittivity of Different Liquid Materials

A sensitive and low-cost microfluidic sensor operating in the 10-12 GHz range was proposed and validated in a previous study [68]. The sensor consists of several stubs coupled to a microstrip (MS) line and is used for measuring and characterizing liquid chemicals, with applications in chemical laboratories and biological fields. The device is built on a flexible Kapton substrate using printed-electronics technology. Using mathematical expressions that describe the resonance characteristics, the difference between the responses of a test sample and a reference sample is used to estimate the complex permittivity of sodium chloride (NaCl)-water solutions of different concentrations. The estimated real and imaginary parts of the complex permittivity vary continuously with the concentration of the NaCl-water solution. Two linear regions are observed for the real part, one for low concentrations (<0.5 M) and the other for higher concentrations (>0.5 M), whereas a single linear region is obtained for the imaginary part over the concentrations examined. The experimental results agree closely with the Cole-Cole model and demonstrate the sensitivity and practicality of the device for characterizing micro-volume liquid chemicals at microwave frequencies. Although the investigated frequencies were around 10 GHz, the same approach can be implemented at any microwave frequency, and the method can be used to characterize other liquid chemicals possessing a complex permittivity at the same frequencies using the same calibration. Although the proposed sensor is not a general-purpose instrument, it can be used favorably in a variety of practical experiments where the material under observation is a liquid chemical or a mixture.
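The Cole-Cole comparison mentioned above can be reproduced in outline with the standard relaxation model plus an ionic-conductivity term. The sketch below uses representative room-temperature parameters for water and an illustrative conductivity trend with NaCl concentration; these are typical literature-style values, not the parameters fitted in the cited study.

```python
import numpy as np

EPS0 = 8.854e-12

def cole_cole(freq_hz, eps_s=78.4, eps_inf=5.2, tau=8.27e-12, alpha=0.0, sigma_ionic=0.0):
    """Complex relative permittivity: Cole-Cole relaxation plus an ionic-conductivity term."""
    w = 2 * np.pi * freq_hz
    eps = eps_inf + (eps_s - eps_inf) / (1 + (1j * w * tau) ** (1 - alpha))
    eps -= 1j * sigma_ionic / (w * EPS0)
    return eps

f = 10e9  # evaluation frequency, in the sensor's band
for molarity in (0.0, 0.5, 1.0):
    # Illustrative assumptions: the static permittivity drops and the ionic conductivity
    # rises with NaCl concentration (coefficients are rough, not fitted values).
    eps = cole_cole(f, eps_s=78.4 - 11 * molarity, sigma_ionic=7.5 * molarity)
    print(f"{molarity:.1f} M NaCl at 10 GHz: eps' = {eps.real:5.1f}, eps'' = {-eps.imag:5.1f}")
```

The trend reproduced by the model, a decreasing real part and an increasing imaginary part with concentration, matches the qualitative behavior reported for the sensor.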
Design and Fabrication of the Liquid Chemical Sensing Device

Sensor Design

Figure 13 shows the design of the proposed RF chemical sensing device, built on a 0.13-mm-thick Kapton substrate and modeled in HFSS. As evident from the illustration, this batteryless sensing device is based on several stubs coupled to an MS line. The 25-mm-long MS line has a width of 0.3 mm, as shown in Figure 13c, corresponding to a characteristic impedance of 50 Ω. Each stub is 4 mm long, optimized for 50-Ω impedance matching. Two stubs are located symmetrically on either side of a central stub positioned at the middle of the MS line, and the spacing between the stubs is fixed at 1.5 mm. This arrangement provides a larger detection area in which the E-field at resonance is strongly concentrated, and the structure is designed to give an accurate response in the X-band (8-12 GHz). The dimensions of the bent area are shown in Figure 13d. The sensor contains two straight segments (5 mm each) and three bends: around the central stub, 9 mm of the substrate is bent through an angle of 180°, whereas the other two bends, matching the two straight segments, are formed over lengths of 3 mm at an angle of 90°.
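A rough consistency check on the 4-mm stub length is a quarter-wavelength estimate, treating each stub as an open-ended resonant line on Kapton. The effective permittivity used below is an assumed mid-range value for a narrow microstrip on Kapton, so this is only an order-of-magnitude sketch, not the design procedure of the cited work.

```python
import numpy as np

c = 3e8
stub_len_m = 4e-3
eps_eff = 2.6     # assumed effective permittivity for a narrow microstrip on Kapton (eps_r ~ 3.4)

f_quarter = c / (4 * stub_len_m * np.sqrt(eps_eff))
print(f"quarter-wave resonance of a 4 mm open stub ~ {f_quarter / 1e9:.1f} GHz")
# On the order of 11-12 GHz, i.e., in the X-band where the measured notch (10.61 GHz in air) lies.
```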
Sensor Fabrication Process

The device is fabricated on the 130-µm-thick Kapton substrate using inkjet technology; the Kapton was purchased from DuPont Teijin Films for this work and was chosen for its high thermal stability, which allows sintering at high temperatures. Before printing, the Kapton sheet was washed in acetone, rinsed with isopropyl alcohol, and dried under a flow of nitrogen. A commercially available nanoparticle ink (Sun Chemical Suntronic EMD5714) was used as the conducting material; it contains silver nanoparticles dispersed in a blend of ethanol, glycerol, and ethanediol at a concentration of 42% by weight. A Dimatix printhead (Spectra® SE-128AA) mounted in a Ceradrop Ceraprinter X-Series inkjet printer was used for the deposition. The nozzles were driven by a custom 57 V waveform at a jetting frequency of 2 kHz, the distance between the nozzles and the substrate was fixed at 800 µm, and the drop spacing was set to 38 µm. Sintering was performed for 45 min at 200 °C to obtain reliable conductivity of the silver tracks. The thickness of the final deposited layer was as small as 1 µm for the metal parts (ground and conductive line). These parameters were chosen to obtain an acceptable conductivity (σ ≈ 5 × 10^6 S/m) [69] while leaving the physical and chemical properties of the Kapton unaffected. Figure 14a,b shows photographs of the resonator prototype; notably, the resolution of the inkjet-printed conductive pattern is acceptable and no wrinkles were observed. The proposed microwave/microfluidic device is connected to SMA connectors, with an overall 50 Ω impedance match at both ends of the MS line, as shown in Figure 14c. A 3D-printed mold was fabricated from acrylonitrile butadiene styrene (ABS) and hardened in an oven at 125 °C, with dimensions identical to those in Figure 13. Silicone glue is applied to both sides of the bent device to prevent leakage of the tested chemicals, as shown in Figure 14d. The Kapton substrate used here is easily curved on the 3D-printed mold and conforms to tight curves without the help of mechanical equipment.

Simulation and Experiment Validity

As previously mentioned, deionized (DI) water mixed with various concentrations of NaCl was examined to evaluate the sensitivity of the microwave/microfluidic device. The S21 spectrum reported in this work was measured using a vector network analyzer (PNA-X N5242A, 10 MHz-26.5 GHz). Figure 15a presents the measured spectrum of the sensing device with and without the NaCl mixtures; the volume of the deposited solution was kept constant (0.3 mL) for all concentrations. Notably, the recorded S21 spectrum was highly repeatable, with less than 0.29% change in the position of the resonant dip and approximately 1.9% variation in the magnitude of the attenuation dip. If the error introduced by the dispenser is taken into account, these variations are expected to increase, although this has not yet been quantified.
When there is nothing to examine (i.e., air is present on the sensor), the sensor exhibits an insertion resonant dip of 76 dB at 10.61 GHz. When pure DI water (C0) is injected, the resonant dip shifts to 10.32 GHz with an insertion loss of 70 dB. Notably, as the NaCl concentration in the mixture increases, the resonant dip shifts toward lower frequencies and the bandwidth of the dip increases (Figure 15b). This outcome was expected because, as the NaCl concentration increases, the real component of the complex permittivity decreases, producing a change in the resonant dip. Furthermore, the decrease in the magnitude of the attenuation dip with increasing NaCl concentration is clearly related to the increased magnitude of the imaginary component of the complex permittivity [70-72].
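The qualitative link stated above (the real part of the permittivity shifts the dip frequency, while the imaginary part broadens and attenuates the dip) is often captured with a first-order resonator-perturbation model. The sketch below is such a generic linear model; the calibration matrix and the example readings are hypothetical and would, in practice, be fitted from reference liquids rather than taken from this review.

```python
import numpy as np

# Generic linear (perturbation-style) calibration: the vector of measured changes
# [delta_f/f, delta(1/Q)] is mapped to [delta eps', delta eps''] by a 2x2 matrix
# fitted from reference liquids. All numbers below are hypothetical.
A = np.array([[-2.0e-3, 1.0e-4],     # response of delta_f/f to (eps', eps'')
              [ 2.0e-4, 1.5e-3]])    # response of delta(1/Q) to (eps', eps'')
A_inv = np.linalg.inv(A)

f_ref, f_test = 10.32e9, 10.20e9     # Hz, reference liquid vs. test liquid (illustrative)
q_ref, q_test = 80.0, 55.0

measured = np.array([(f_test - f_ref) / f_ref, 1.0 / q_test - 1.0 / q_ref])
d_eps_real, d_eps_imag = A_inv @ measured
print(f"delta eps' = {d_eps_real:+.1f}, delta eps'' = {d_eps_imag:+.1f}")
```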
Simulations for several NaCl concentrations were carried out in the HFSS computer tool to identify the most sensitive area of the device and to examine the magnitude of the electric field distribution inside the bent region during the experiment. As shown in Figure 16, half-structures of the sensor are simulated by exploiting symmetry (the symmetry plane is illustrated in Figure 13), and the electric field distribution is evaluated at 10 GHz. For C0, the electric field is intensely concentrated at the central stub of the structure. However, as the concentration increases, the field concentrates toward the border of the stub rather than around the central stub near the MS line, as it does for C0. This result indicates that the central region close to the MS line is the most sensitive to increases in the concentration of the solution under test [68].

In summary, a cost-efficient microwave/microfluidic sensor that characterizes the complex permittivity of aqueous solutions efficiently and accurately was designed, fabricated, and validated in the X-band. The sensor is realized using inkjet technology and 3D printing on a Kapton substrate. Simulation and experimental investigations of the device were presented, and the results obtained are consistent with the Cole-Cole model [68].

Screen Printing

Screen printing is a proven manufacturing technology that enables high-volume production at low cost; the main advantage of screen-printed RF sensors is therefore the potential for low-cost, high-volume manufacturing. Practical implementation of screen printing for the fabrication of antennas began in the 1970s, when low-loss dielectrics arrived on the market. Screen printing offers the possibility of cost-optimized inline reel-to-reel manufacturing, so RF components can be thinner, lighter, more flexible, and cheaper than those made using conventional manufacturing processes [73]. Screen printing is appropriate for fabricating electronics owing to its ability to produce patterned, thick layers from paste-like materials. The technique can produce conducting lines from inorganic materials (e.g., for circuit boards and antennas) and passive insulating layers, where the thickness of the layer is more important than the resolution. The characteristic throughput (50 m²/h) and resolution (100 µm) are similar to those of inkjet printing. This versatile and comparatively simple method is used primarily for the fabrication of conductive and dielectric layers [74,75]; however, organic semiconductors, e.g., organic photovoltaic cells [76] and complete organic field-effect transistors [77], can also be printed.

Stretchable RF Strain Sensor Fabricated Using Screen Printing Technology

The design of a stretchable RF strain sensor fabricated using screen printing technology is proposed in [78].
The proposed sensing device is based on a half-wavelength patch with a resonant frequency of 3.7 GHz. Its resonant frequency is set by the size of the patch; therefore, whenever the structure is stretched, its resonant frequency changes. Polydimethylsiloxane (PDMS) was used as the substrate, since it is stretchable and screen-printable, and DuPont PE872 silver conductive ink was used to produce a stretchable conducting structure. The sensing operation is verified using full-wave simulations and experiments on the fabricated prototype. When the horizontal dimension is stretched by 7.8%, the resonant frequency decreases from 3.7 GHz to 3.43 GHz, demonstrating a sensitivity of 3.43 × 10^7 Hz per 1% strain; when the device is stretched in the vertical direction, the resonant frequency does not change.

Design of a Strain Sensor

The sensing device is based on a rectangular patch resonator, as shown in Figure 17. Rectangular patches are widely used in RF resonators and resonator-based modules owing to their simple structure and ease of fabrication. In this work, the conductive pattern was produced using screen printing. The surface resistance of these conductive patterns is determined by the width-to-length ratio: the measured resistances of the stretchable silver conductive ink and of the rectangular conductive patch are 0.64 Ω and 14.2 Ω, respectively, the difference being attributed to the width-to-length ratio of the patch. Figure 17 shows the dimensions of the strain sensor. A coaxial feed is used instead of the classical MS line feed, to ensure that the feed structure remains unchanged when the overall structure is stretched. PDMS serves as the dielectric substrate beneath the top conducting patch; its permittivity (εr) and loss tangent were characterized using the T-resonator method [79,80]. The resonant frequency (f0) of the rectangular conducting patch can be computed as [81,82]:

f0 = c / (2 (Lp + 2ΔL) √εeff),  where ΔL = 0.412 Hs (εeff + 0.3)(Wp/Hs + 0.264) / [(εeff − 0.258)(Wp/Hs + 0.8)]   (1)

where εeff is the effective permittivity accounting for the fringing fields, c is the speed of light in vacuum, Wp is the width of the rectangular patch, Lp is the length of the rectangular patch, ΔL is the effective length extension due to the fringing fields, and Hs is the thickness of the dielectric material (PDMS). The permittivity and loss tangent of the PDMS were determined to be 3.01 and 0.025, respectively. The length and width of the rectangular conductive patch were selected as 17.05 mm and 23.1 mm, respectively, so that the sensor resonates at 3.7 GHz. Figure 18a,b displays the real and imaginary parts of the input impedance for various values of D, the distance along the patch length from the coaxial feed to the edge of the rectangular patch. It can be observed in Figure 18 that the resonant frequency and the impedance decrease as D increases; thus, to realize a matched impedance of 50 Ω, D is set to 6.5 mm. The ANSYS HFSS simulator is used for the full-wave analysis, and the SMA connector was also included in the simulated structure, as evident from Figure 17.
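A quick numerical check of Equation (1) with the quoted material and geometry values is sketched below. The quoted 3.7 GHz is reproduced when the 23.1 mm side is treated as the resonant dimension and 17.05 mm as the width in the standard microstrip-patch (Hammerstad-style) formulas; that assignment is this sketch's assumption, made only so that the result matches the reported operating frequency.

```python
import numpy as np

c = 3e8

def patch_f0(L_res, W, h, eps_r):
    """Resonant frequency of a rectangular microstrip patch (Hammerstad-style formulas)."""
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / np.sqrt(1 + 12 * h / W)
    dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) / ((eps_eff - 0.258) * (W / h + 0.8))
    return c / (2 * (L_res + 2 * dL) * np.sqrt(eps_eff))

# Quoted values: PDMS eps_r = 3.01, thickness 1.01 mm, patch 17.05 mm x 23.1 mm.
h, eps_r = 1.01e-3, 3.01
f0 = patch_f0(L_res=23.1e-3, W=17.05e-3, h=h, eps_r=eps_r)
print(f"unstrained patch      : {f0 / 1e9:.2f} GHz")   # close to the reported 3.7 GHz

# Stretching the resonant side by ~7.8% lengthens it and lowers the frequency.
f_strained = patch_f0(L_res=23.1e-3 * 1.078, W=17.05e-3, h=h, eps_r=eps_r)
print(f"7.8% strain (approx.) : {f_strained / 1e9:.2f} GHz")
```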
Since the resonant frequency depends strongly on the length of the conducting patch, the operating frequency is expected to decrease when the device is stretched vertically. Figure 19a,b presents the simulated reflection coefficient for different values of Lp and Wp, respectively. Originally, the ideal sensor has a resonant frequency of 3.7 GHz with a reflection coefficient of −25 dB. The operating frequency does not vary when Wp is changed [83], as shown in Figure 19a; however, Figure 19b shows that the resonant frequency is reduced from 3.7 GHz to 3.43 GHz after the device is stretched by 7.8% in the vertical direction. Consequently, the proposed device can be used effectively as a strain-sensing resonator by tracking variations in the resonant frequency. To quantify the elastic deformation, the strain is defined as the relative change in patch length, expressed as a percentage.

Figure 20 presents the method used to fabricate the PDMS material in-house. In Figure 20a, the desired PDMS mold is built on a plastic sheet using a 3D-printing procedure (Ultimaker 2+ 3D printer, Ultimaker B.V., Geldermalsen, The Netherlands); 3D printing is widely used because it is faster and easier than classic fabrication methods [84]. The thickness (Hs), length (Ls), and width (Ws) of the flexible PDMS material were 1.01 mm, 50.1 mm, and 40 mm, respectively. After constructing the mold, a liquid mixture of PDMS base and curing agent was prepared at a ratio of 10:1.
A vacuum chamber was then used to remove the air bubbles created during mixing. The viscous liquid was cured at 30 °C for approximately 50 h, or at 110 °C for 40 min; the device also underwent a heat-curing step for 35 min at 75 °C on a hot plate. Subsequently, a PDC-32G plasma cleaner (Harrick Plasma, NY, USA) was used to plasma-treat the PDMS material for approximately 20 s at 19 W.

Screen Printing

The silver screen printing method is shown in Figure 21. As shown in Figure 21a,b, the conducting patterns for the upper rectangular patch and for the ground of the structure were screen-printed on the flexible PDMS material using the stretchable silver conductive ink (DuPont PE872). The screen printer used in this project was produced by Daeyoung Technology Co. (Bucheon, Korea); it has a printing speed in the range of 45-595 mm/s and a squeegee angle between 60° and 90°. A 400-wire-count stainless-steel mesh with a mesh tension of approximately 150 N was used. A mask of the pattern was created and placed on the PDMS material, onto which the silver nanoparticle conductive ink was screen-printed using a squeegee.
Figure 21c displays the fully fabricated sensor. The rectangular conductive patch resides on the top, and the ground on the bottom is screen-printed in the same way as the top patch. Curing is necessary to improve the conduction of the screen-printed areas; therefore, heat sintering was performed in a vacuum oven (ON-22GW) [85,86] for approximately 35 min at 90 °C. A hole was then pierced through the patch to the bottom, and an SMA connector pin was inserted and fixed using silver epoxy glue (the inner conductor of the SMA attached to the top patch, and the outer part of the SMA connected to the ground in the same manner).
Experimental Results

The prototype of the proposed RF sensor is shown in Figure 21c. An Anritsu MS2038C vector network analyzer (Anritsu, Kanagawa, Japan) was used to measure the reflection coefficient of the RF strain sensor. The measured reflection coefficients are compared with the corresponding simulation results in Figure 22a: the non-stretched device has a resonant frequency of 3.7 GHz with a reflection coefficient of −27 dB, and the measured and simulated results are consistent with each other. A repeatability test was carried out to ensure the reliability of the results and is shown in Figure 22b. The graph shows the recorded reflection coefficient after 1, 5, 10, 15, and 20 cycles, where one cycle corresponds to stretching the strain sensor and returning it to the relaxed state. It can be observed in Figure 22b that the resonant frequency does not change until 10 cycles of stretching and relaxation have been performed; it then shifts slightly to 3.67 GHz and 3.68 GHz after 15 and 20 repetitions, respectively.

Furthermore, the reflection coefficients were recorded for different strain scenarios, with the specimen stretched along the vertical and horizontal axes, as shown in Figure 23. Figure 23a illustrates the recorded reflection coefficient when the device is stretched along the vertical axis. As expected from the simulated reflection coefficients in Figure 19a, the resonant frequency does not vary with the applied stretch; nevertheless, the impedance varies, since the coaxial feed hole becomes larger during the stretch. Figure 23b shows that the recorded reflection coefficient changes when the stretch is applied in the horizontal direction: consistent with the simulation in Figure 19b, the resonant frequency varied from 3.7 GHz to 3.44 GHz after a stretch of 7.82% was applied to the sensor. The strain of 7.82% corresponds to a 1.85 mm increase in length and was chosen to remain within the mechanical limits of the PDMS material. The relationship between the resonant frequency and the strain along the vertical direction (width) and the horizontal direction (length) is shown graphically in Figure 24a,b, respectively.
As stated before, the frequency does not change when a strain is applied in the vertical direction; however, it varies linearly with the strain applied along the horizontal direction, as evident in Figure 24b. The sensor calibration (fitting) curve is y = −0.0343x + 3.7, where y is the resonant frequency in GHz and x is the strain in percent; consequently, the sensitivity of the proposed RF strain sensor is 3.443 × 10^7 Hz per 1% strain. Table 1 compares the proposed sensor with other recently developed strain sensors and confirms the significance of this work: the proposed RF strain sensor exhibits a wider flexibility and strain range owing to its stretchable conductive ink and flexible substrate.
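The fitted calibration curve can be inverted directly to read strain from a measured resonant frequency, as in the short sketch below (the example frequency readings are hypothetical).

```python
def strain_percent(f_resonant_ghz, slope=-0.0343, intercept=3.7):
    """Invert the fitted calibration y = slope*x + intercept (y in GHz, x in % strain)."""
    return (f_resonant_ghz - intercept) / slope

for f in (3.70, 3.56, 3.44):   # hypothetical readings, GHz
    print(f"{f:.2f} GHz -> strain ~ {strain_percent(f):.1f} %")
```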
Flexible Screen Printed Biosensor with High-Q Microwave Resonator for Rapid and Sensitive Detection of Glucose
This section describes a sensitive and fast mediator-free glucose biosensor based on an RF batteryless resonator realized with circular bent T-shaped identical impedance resonators screen-printed on a flexible polyethylene terephthalate (PET) sheet. Since the device has a high QF of 166, the proposed glucose biosensor has a sensitivity of 72 MHz/(mg·mL−1) and an ultralow detection limit of 0.0167 µM at a central frequency of 11.8 GHz, within the linear detection range of 1-5 mg/mL. Moreover, the dependence of the loaded QF (Q_L), propagation constant (γ), reflection coefficient (S11), and device impedance (Z) on the glucose level enables adequate multidimensional sensing by the glucose sensor.
Designing and Construction of the Sensor
The biosensor consists of two circular-folded T-shaped uniform impedance resonators (TSUIRs) joined with parallel-coupled feeding lines on a PET substrate, as illustrated in Figure 25a. The resonance condition of the TSUIR can be obtained as a modification of the resonance condition for the two sections of a stepped-impedance resonator, with Z_in = 0 and Z_1 = Z_2 [91]. Figure 25a depicts the two sections of the resonator with electrical lengths θ_1 and θ_2. The dimensions of the structure were optimized for resonance at the fundamental frequency of 11.8 GHz. This choice of frequency is relevant for glucose detection because the dielectric constant of a glucose-water solution varies much more strongly with concentration in this region than at lower frequencies [92]. Owing to the geometry of the biosensing resonator, the coupling gap (S) strongly affects the coupling coefficient and consequently the QF of the resonator [93]. Hence, the coupling gap was designated as the sensing region of the biosensor, as shown in Figure 25b. The size of the coupling gap was optimized to 0.2 mm in order to realize a high QF of 166.
Figure 25c shows a photograph of the sensor prototype and compares the measured and simulated S11 parameters of the resonator. The measured fundamental frequency was reduced by 50 MHz. The quality factor was also reduced, which is ascribed to a combination of bending loss, dielectric loss of the substrate, and limited accuracy of the physical dimensions. Figure 25d shows the equivalent circuit of the device loaded with a glucose sample, in which L_T and R_T denote the source inductance and the resistive loss of the load feed coupler, respectively, and L_R and C_R represent the inductance and capacitance of the resonator, respectively. L_c and C_c denote the inductance and capacitance arising from the magnetic and electric coupling of the resonators with the load and the source, and they depend on the glucose level of the sample under test. A Smart 3S screen printer was used to print the designs on a 0.245-mm-thick PET substrate using pg 007 silver ink (PURU, Seoul, Korea) diluted with ethylene glycol. The PET dielectric material has a permittivity of 3.1 and a loss tangent of 0.0324.
The deposited silver nanoparticle ink has a thickness of 1 µm. The printed patterns were placed in a heating chamber for 5 min at 150 °C to dry them and to increase their conductivity.
Preparation of Testing Samples and Measurements
Standard glucose solutions were prepared by combining DI water (Millipore) and D-glucose powder (Sigma Life Science, St. Louis, MO, USA) at concentrations of 1, 2, 3, 4, and 5 mg/mL. Subsequently, 2 µL of each liquid mixture was placed on the surface of the sensor using a micropipette. The reflection coefficients were recorded at frequencies ranging from 1 to 15 GHz using an Agilent 8510C vector network analyzer. The test samples were placed over the sensing area of the biosensor and readings were taken every 2 s.
Detection Using S-Parameters
The shifts in the central frequency, indicated by the peak value of S11 for the five glucose-DI water mixtures examined, are shown in Figure 26a. The fundamental frequencies of the sensor were 10.81 and 11.09 GHz for the glucose samples with the maximum and minimum concentrations of 5 and 1 mg/mL, respectively. Thus the fundamental frequency of the sensor decreases as the glucose content of the liquid increases; for the intermediate samples, the fundamental frequency falls monotonically towards 10.81 GHz as the concentration is increased. This behavior is caused by the interaction between the aqueous glucose and the electromagnetic coupling among the resonators and the feeding line; the interaction appears to depend on the increase in the permittivity of the glucose solution that accompanies a decrease in glucose concentration [95]. A regression analysis reveals a good linear correlation (r² = 0.9993) between the glucose concentration and the shift in central frequency, with y and x denoting the central frequency and the glucose concentration, respectively. The sensor therefore exhibited a sensitivity of 71 MHz/(mg·mL−1) for the glucose-water solution. According to the optimization study and the associated calibration plot (see Figure 26b), the detection limit of the assay for a signal-to-noise ratio (S/N) of 3 was calculated as 0.0167 µmol of glucose in a 2 µL sample, following [96]. The S-parameters of each sample were measured four times. Although the points deviate from the mean central frequency, as shown by the error bars, there is no overlap between the different concentrations, confirming that the experiment is repeatable. Figure 26c shows the changes in the loaded QF (Q_L) and reflection coefficient (S11) of the sensor for glucose samples of different concentrations. S11 reached −28.1 and −14.9 dB for glucose concentrations of 1 and 5 mg/mL, respectively. There is a negative correlation between Q_L and the glucose concentration, as expected from the positive correlation between the loss factor and the glucose concentration.
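The regression and detection-limit figures quoted above can be reproduced in outline with a least-squares fit followed by the usual 3σ/slope rule. The sketch below does exactly that on illustrative frequency values consistent with the reported endpoints (11.09 GHz at 1 mg/mL and 10.81 GHz at 5 mg/mL); the intermediate points and the readout-noise figure are invented placeholders, so only the procedure, not the exact published numbers, mirrors the original study.

```python
import numpy as np

# Outline of the analysis described in the text: a linear fit of central
# frequency versus glucose concentration, the sensitivity as the fitted
# slope, and a 3*sigma/slope detection limit.  The intermediate data points
# and the noise figure are illustrative placeholders, not published data.

conc_mg_per_ml = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
freq_ghz = np.array([11.09, 11.02, 10.95, 10.88, 10.81])  # endpoints taken from the text

slope, intercept = np.polyfit(conc_mg_per_ml, freq_ghz, deg=1)
pred = slope * conc_mg_per_ml + intercept
r_squared = 1.0 - np.sum((freq_ghz - pred) ** 2) / np.sum((freq_ghz - freq_ghz.mean()) ** 2)

sensitivity_mhz = abs(slope) * 1e3  # GHz/(mg/mL) -> MHz/(mg/mL)
print(f"Fitted line: f = {slope:.4f} * c + {intercept:.3f} (GHz), r^2 = {r_squared:.4f}")
print(f"Sensitivity: {sensitivity_mhz:.0f} MHz/(mg/mL)")

# Detection limit for S/N = 3: three times the frequency-readout noise divided
# by the slope.  The noise standard deviation below is an assumed placeholder.
noise_sigma_ghz = 0.002  # assumed 2 MHz readout standard deviation
lod_mg_per_ml = 3.0 * noise_sigma_ghz / abs(slope)
print(f"Detection limit (3*sigma/slope): {lod_mg_per_ml:.3f} mg/mL")
```

Only the workflow is reproduced here; the published values (r² = 0.9993, 71 MHz/(mg·mL−1), 0.0167 µmol) come from the measured spectra.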
Detecting through Derived Parameters
The propagation constant (γ) and impedance (Z) were obtained from the recorded reflection coefficients of the glucose samples using the approach described in [97] (a generic conversion from a measured reflection coefficient to an impedance is sketched at the end of this subsection). The resonance dip in the propagation constant shifted from approximately 11.51 GHz to 12.51 GHz with changing glucose concentration, as shown in Figure 27a, and the dip frequencies correlate positively with the glucose concentration. Dips in the resonant impedance are also observed. These dips occur at different frequencies for the different glucose concentrations, and their frequencies likewise correlate positively with the glucose concentration, as shown in Figure 27b.
This subsection presented a stretchable screen-printed biosensor operating as a high-QF RF batteryless resonator for mediator-free sensing of glucose levels. Two circular-folded T-shaped uniform impedance resonators were joined with parallel-coupled feeding lines on a PET sheet. Based on the central frequency shifts, the proposed sensor demonstrated a highly sensitive and fast glucose sensing mechanism with a significantly lower detection limit.
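For orientation only, the snippet below applies the textbook one-port relation Z = Z0(1 + S11)/(1 − S11) to turn a complex reflection coefficient into an input impedance. This is a generic conversion, not necessarily the derivation method of [97] used in the study, and the sample S11 value is a placeholder.

```python
import cmath

# Generic one-port conversion from a complex reflection coefficient to an
# input impedance, Z = Z0 * (1 + S11) / (1 - S11).  Offered only as background
# for the "derived parameters" discussion above; the published work follows
# its Ref. [97], which may differ in detail.

Z0_OHM = 50.0  # reference impedance of the measurement system

def impedance_from_s11(s11: complex, z0: float = Z0_OHM) -> complex:
    """Input impedance seen at the port for a measured reflection coefficient."""
    return z0 * (1 + s11) / (1 - s11)

if __name__ == "__main__":
    # Placeholder reflection coefficient: -20 dB magnitude at -60 degrees.
    s11 = 0.1 * cmath.exp(-1j * cmath.pi / 3)
    z = impedance_from_s11(s11)
    print(f"S11 = {abs(s11):.2f} at {cmath.phase(s11) * 180 / cmath.pi:.0f} deg")
    print(f"Z_in = {z.real:.1f} + {z.imag:.1f}j ohms")
```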
Summary
Printed RF microelectronics is an evolving area of research with large commercial expectations owing to its ability to sidestep conventional rigid and expensive silicon-based circuits and to fabricate components of different types and shapes on bendable materials using high-quality printing methods. Of the three additive techniques discussed in this review, inkjet printing is an attractive process for manufacturing electronic components owing to the negligible waste produced and its efficient handling of expensive materials. Inkjet printing of conductive precursor materials, typically conductive nanoparticles or metal-organic compounds, is employed as a comparatively fast method that can support roll-to-roll (R2R) manufacturing. However, the sintering step in this process, which is necessary to cure the patterns containing conductive inks, requires times longer than 20 min or temperatures above 200 °C. In particular, the longer sintering schedules are not scalable to R2R manufacturing: for instance, a web speed of 1 m/s with a sintering time of 35 min would require the production line to be at least 1.9 km long. Screen printing techniques, by contrast, are suitable for bulk production. Further, 3D-printed structures, such as origami, are gaining interest owing to their ease of fabrication, which was previously an issue with the technique because of the support structures required during fabrication. Some recent works combine inkjet and screen printing in the development of batteryless sensors [98][99][100][101]. In this review, we have discussed and compared the recent advances in the three popular printing fabrication techniques with respect to their fabrication time, power consumption, and complexity. The focus was on the additive manufacturing of batteryless RF sensors and the advantages of these fabrication techniques from a sensor perspective.
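The scalability remark about sintering can be checked with one line of arithmetic: the curing zone of a roll-to-roll line must be at least the web speed multiplied by the sintering time. The snippet below performs that calculation for the figures quoted in the text (1 m/s and 35 min), giving 2.1 km, consistent with the statement that the line would need to be at least 1.9 km long.

```python
# Required length of the in-line curing/sintering zone on a roll-to-roll line:
# the web must stay inside the oven for the full sintering time.

def required_oven_length_m(web_speed_m_per_s: float, sinter_time_min: float) -> float:
    """Minimum oven length (m) so material moving at web_speed spends sinter_time_min minutes inside."""
    return web_speed_m_per_s * sinter_time_min * 60.0

if __name__ == "__main__":
    length = required_oven_length_m(web_speed_m_per_s=1.0, sinter_time_min=35.0)
    print(f"Required oven length: {length:.0f} m (~{length / 1000:.1f} km)")  # 2100 m
```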
Fundamental quantum limit achieved by internal squeezing in cavity-enhanced interferometric measurements , Introduction Cavities increase the precision in various kinds of optical sensing devices: from biological [1] and medical [2] sensors, to accelerometers [3], ultra-precise magnetometers [4] and gravitational-wave detectors [5]. The purpose of the cavities is to resonantly enhance the power of the carrier light as well as the signal due to constructive interference over the many cavity round trips. In quantum noise limited devices, the power of the carrier light is ideally maximized up to a value at which either the measured sample gets disturbed or the optical hardware of the sensor loses quality. Further increase in sensitivity beyond this limit requires implementation of squeezed states of light [6][7][8], which suppress the fluctuations in the observable to which the signal is coupled. Squeezed states of light have been successfully implemented in various sensors: gravitational-wave detectors [9][10][11][12][13], optomechanical devices [14], dark matter sensors [15,16], biological [1,17] and magnetometers [18]. Usually, squeezed states are injected into the detector from an external source. However, recently an internal squeezing approach has also been investigated both theoretically [19][20][21][22][23] and experimentally [24]. In this approach quantum noise squeezing is generated directly inside the detector cavity. For this, a second-order nonlinear crystal is placed inside the cavity. When pumped with second harmonic light, it amplifies a quadrature of the optical field selected by the pump phase. The orthogonal quadrature is deamplified. If the signal corresponds to a displacement of the quadrature being squeezed, it gets deamplified. Despite this, the signal-to-noise ratio (SNR) is still improved [19,24]. Internal squeezing can be used in combination with externally injected squeezing. The well-known Quantum Cramer-Rao Bound (QCRB) defines the best possible sensitivity of a device at every angular signal frequency Ω in the absence of decoherence and for a given optical power [25][26][27]. In reality, decoherence always prevents the QCRB from being reached. It also affects the optimal strategy for interferometric sensing. For instance, in the absence of loss, the best phase sensitivity of a Michelson interferometer at a given energy per measuring time would be achieved with a N00N state, where N is the deterministic total photon number [28]. In practice, however, the fragility of this state towards decoherence quickly makes it suboptimal. Instead, almost always the best strategy for reaching high sensitivity is the use of monochromatic light with the relevant signal sideband spectrum in a squeezed vacuum state [29]. The effect of decoherence on sensitivity has been investigated in the general metrological context of open quantum systems [30][31][32], but every specific system requires its own consideration. For optical states, photon loss is arguably the most fundamental kind of decoherence. Its effect on the sensitivity limit was explored in [29,[33][34][35]. Ref. [35] analysed cavity-enhanced sensors with external and internal squeezing as well as intra-cavity loss and computed the fundamental sensitivity limit for sufficiently small losses as the sum of the QCRB and the noises added due to decoherence. Here we show that the ultimate value of the fundamental limit surpasses the one presented in [35]. 
This is achieved by compensating a significant part of the decoherence by optimizing the internal squeezing. Our limit is ultimately defined solely by the losses inside the detector cavity. We realize an experiment that supports the new fundamental limit by optimizing the internal squeeze factor for a given finite external squeeze factor. We achieve an SNR enhancement of 4 dB independent of the detection loss. This serves as the first demonstration of decoherence compensation for continuous signals, opening the path towards implementation of the approach in various metrological experiments. We claim that the new fundamental sensitivity limit is the most general one for cavity-enhanced sensing devices such as gravitational-wave detectors.

FIG. 1. Conceptual representation of a cavity-enhanced sensor with internal and external squeezing. The detector cavity is used to measure the displacement x of the movable end mirror. This displacement imprints a phase modulation on the light field, which is amplified by the cavity. Internal squeezing is generated by a nonlinear crystal inside the cavity by pumping it with a second-harmonic pump. Depending on the phase of the pump, the signal is either amplified or deamplified. External squeezing is generated independently and is injected into the detector cavity through a Faraday isolator (FI). The output signal with external and internal squeezing, S_x, is detected on a phase-sensitive detector. The impact of the detection loss ε_det can be reduced by an optimal choice of the internal gain q, defined by the pump strength. Optimal internal squeeze operation allows the fundamental limit, defined by the internal loss ε_int, to be reached.

Internal squeezing for enhanced sensitivity
The simplest cavity-enhanced sensor is an optical cavity with a movable end mirror. More complex devices can often be mapped into this model [36]. The external force displaces the movable mirror, which creates a phase modulation signal on the reflected light field; this signal is detected with a balanced homodyne detector. Our analysis considers both injected (quadrature) squeezed light and a squeeze operation inside the sensor cavity (Fig. 1). We focus on the shot-noise-limited sensitivity, ignoring quantum back-action effects such as photon radiation pressure, since they can be circumvented by quantum back-action evasion techniques [37]. Optical loss in the system influences the sensitivity in different ways, depending on whether the loss occurs (i) before the measurement interaction, i.e. on the meter, (ii) after the measurement, i.e. on the meter that carries the full information, or (iii) during the time when the meter is accumulating the signal. In the first case, the loss occurs between the squeeze operation and the injection into the cavity. This loss sets an upper bound on the external squeeze factor, which of course should be as high as possible while maintaining the high purity of the state. In the second case, the loss occurs after the information has been imprinted on the meter, e.g. due to an imperfect quantum efficiency of the photo-electric detection. In this case the signal-to-noise ratio is reduced because additional vacuum is mixed in. If the signal can be parametrically amplified before the loss occurs, the signal-to-noise ratio is maintained and made more robust against optical loss occurring after the amplification [6]. The parametric gain should be as high as possible, which results in anti-squeezed quantum noise. This was recently explored in other contexts [38][39][40][41][42].
Case (iii) is central to this work and more complex. The loss inside the cavity affects the measurement process itself, and the resulting signal-to-noise ratio is ultimately limited by this internal loss. No subsequent operation can improve on that; thus the internal loss defines the fundamental limit. Depending on the combination of the cavity internal loss, the detection loss on the out-coupled light and the injected squeeze factor, either internal noiseless parametric amplification or deamplification is beneficial. We use the input-output formalism [43][44][45] to derive the noise spectrum S_sn(Ω), the power of the signal transfer function T_x(Ω), and the noise-to-signal ratio S_x(Ω). In these expressions, T_c is the coupling mirror power transmission, ε_int is the internal power loss, q is the round-trip parametric power gain, ε_det is the detection power loss (see Fig. 1), β^{-1} = e^{-2 r_ext} is the external squeezing, with r_ext the corresponding squeeze parameter [46], c is the speed of light, λ is the central wavelength and P_c is the intra-cavity light power. In deriving these equations we used the single-mode approximation, where only one optical mode acquires the signal, and {T_c, ε_int, q} ≪ 1 [47]. We fixed the average intra-cavity power P_c and assumed no loss or depletion of the second-harmonic pump power. In order to simplify the conceptual explanation, we focus on the case of the peak sensitivity, which occurs at Ω = 0, and leave out the frequency dependence due to the cavity linewidth. From the input-output relation in the lossless case we obtain the QCRB, which turns to zero in the limit of infinite input squeezing β → ∞ or at the (lossless) parametric threshold for internal squeezing q = T_c, resulting in the well-known theoretical limit of infinite SNR in lossless systems. In [35], it was proposed that the loss-induced sensitivity limit is given by the sum of the QCRB for a lossless system and the additional noise added due to the loss. There it was suggested that this limit, given in Eq. (5), is achieved when the internal gain reaches its threshold (assuming that the internal loss is small, i.e., ε_int ≪ T_c). However, this expression does not take into account the possibility of optimizing the internal squeezing. Internal squeezing enhances the sensitivity in one of two ways. (a) When the detection loss or the external squeezing is small, it deamplifies and squeezes the signal quadrature. The deamplification factor of the signal is limited to 6 dB, while the squeeze factor of the quantum noise can in principle approach infinity [24,48,49], increasing the overall SNR. These numbers hold for zero internal loss and at a pump power that corresponds to the optical oscillation threshold of the χ^(2) process. The factors are different because the quantum noise enters the cavity through the coupling mirror while the signal is exclusively generated inside the cavity. In practice, there is an optimal parametric pump power below threshold, which depends on the nonzero optical loss. (b) When the detection loss or the external squeeze factor is high, the internal squeeze parameter has the opposite sign in order to amplify and anti-squeeze the signal quadrature. In this case, the impact of the detection loss is mitigated, as proposed in [6]. Here this amplification is realised inside the cavity, i.e. already during the time when the signal is accumulated. The optimal internal gain q depends on the quantities ε_int, T_c, ε_det, and β. It is computed by minimizing the value of the sensitivity S_x(0) in Eq.
(3), which results in the optimal sensitivity given in Eq. (7). The expression for S^{opt}_{1/β} is strictly lower than the limit proposed in [35], as given by Eq. (5), for non-zero detection loss. A highly squeezed beam carries significant photon power; therefore, the measurement bandwidth is assumed to be sufficiently small that the power within this bandwidth remains small. The fundamental sensitivity limit of our work is achieved for infinite external squeezing, β → ∞. In this case the optimal internal gain maximally amplifies the signal quadrature, q = −q_th, and the limit becomes defined solely by the internal loss, as given in Eq. (9). This equation formalizes the main statement of our paper: for a fixed power in the carrier field, if the external squeeze factor approaches infinity (over a measurement bandwidth approaching zero), the noise-to-signal ratio becomes independent of the detection loss and approaches zero when the cavity internal loss approaches zero. As we discussed before, injection loss degrades the purity of the external squeezing and thus limits the external squeeze parameter. In this case the fundamental limit in Eq. (9) cannot be achieved, but for small values of loss the optimal sensitivity in Eq. (7) is still attainable. We derive the full model including the injection loss in Ref. [47].
Experimental validation. We demonstrated the compensation of the detection loss and the existence of optimal internal squeezing in a table-top experimental setup, see Fig. 2. Our internal squeezing cavity (ISC) was a Fabry-Perot cavity with a nonlinear periodically poled KTP crystal inside acting as an optical parametric amplifier. The gain of the internal squeezing q was varied via the power of the second-harmonic pump. The phase of the pump was actively controlled to keep the amplification phase stable. The cavity output field was analyzed with a balanced homodyne detector (BHD). The phase of the BHD's local oscillator was actively controlled to keep the readout quadrature stable. We injected a weak field carrying a 5 MHz phase modulation signal from the back of the ISC, which emulated the measurement signal. Depending on the phase of the pump, we could observe amplification or deamplification of the signal, as well as anti-squeezing or squeezing of the noise.

FIG. 3. Experimental demonstration of optimal sensitivity with internal and external squeezing approaching the optimal limit in Eq. (7). Each plot shows the improvement in the SNR with respect to the case without internal and with external squeezing. This improvement was observed as a function of the internal gain in the detector cavity: negative gain means deamplification (squeezing), positive gain means amplification. Solid curves are the theoretical predictions based on the independently measured parameters. Plots (a-c) demonstrate the different regimes of internal squeezing for different levels of external squeezing: 6.5 dB, 12 dB and 20 dB (values inferred at production). For low values of external squeezing, peak SNR is achieved when squeezing is also generated internally. For high values of external squeezing it becomes optimal to amplify the signal quadrature. Plots (c-e) demonstrate the independence of the peak SNR enhancement from the detection loss. An enhancement of ∼4 dB is achieved for 10%, 20% and 30% detection loss, at different levels of internal gain.
In all data sets, for the case of high squeezing (internal gain close to −1), the effect of phase noise played a significant role: due to the jitter in the phase of the injected squeezing, part of the anti-squeezed noise coupled into the readout quadrature, which further degraded the sensitivity (seen as a significant nonlinearity of the curves close to gain −1). Error bars on the experimental data are not shown; see the discussion in the main text.

By taking the spectrum of the signal and the noise, we could compute the change in the SNR compared to the case when the pump was off. Further, we injected external squeezing from a second squeeze laser [7]. We kept the external squeeze field without any bright carrier field and periodically varied its phase. At the output of the homodyne detector, we recorded the spectrogram (spectrum as a function of time) of the observed signal. As a result, we consecutively measured two orthogonal quadratures of the injected squeeze field and used them to extract the optical parameters of the setup: the transmission of the incoupling mirror T_c = 11%, the internal loss ε_int = 1.2%, the injection loss of 8% and the detection loss ε_det = 10% [50]. The initially injected external squeezing was inferred to be 20 dB (before injection and detection loss). The phase noise was inferred from measurements with different pump strengths to be around 50 mrad. We observed less phase noise for smaller values of external squeezing: 40 mrad for 12 dB and 15 mrad for 6.5 dB. We conclude that the main contribution came from the phase noise of the external squeezing interacting with the internal squeezing process. We changed the internal squeezing gain and recorded squeezing spectra together with the amplified or de-amplified signal. In the first stage of the experiment, we gradually increased the injected squeezing from 6.5 dB to 20 dB. When both the detection loss and the injected squeezing were small, the optimal internal gain was a squeezing process, see Fig. 3(a). As we increased the injected squeezing, the optimal internal gain approached zero, see Fig. 3(b), and then it became optimal to amplify inside the detector cavity, Fig. 3(c). We further artificially increased the detection loss from 10% to 30% by dumping part of the squeezed light on a polarizing beam splitter. By scanning the full range from maximal deamplification to maximal amplification we could find the optimal point where the SNR was highest. We saw a maximal SNR improvement of 4 dB, independent of the detection loss, see Fig. 3(c-e). In this way, we were able to demonstrate for the first time the compensation of quantum decoherence by an optimal choice of internal gain. In the case of 10% detection loss, the optimal internal gain was close to zero. For higher loss, as expected from our theory in Eq. (8), it became optimal to amplify the signal inside the cavity. Our results show a good match with theory and demonstrate a significant enhancement over the case of maximal intra-cavity squeezing, which was considered in [35]. We could not deduce meaningful error bars for Fig. 3: most of the source data were averaged directly on the spectrum analyser, which did not allow us to extract variances. However, we used the theoretical description together with the independently measured parameters of our setup to calculate the theoretical curves in the graphs. All data points are statistically independent. Therefore, the good match between the data and the theory allows us to be confident in the significance of the observed results in Fig. 3.
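The SNR comparison described above (pump on versus pump off) amounts to reading a signal peak and a noise floor from each measured spectrum and differencing the two ratios in decibels. The sketch below shows that bookkeeping on synthetic spectra; the array contents, the noise floors and the 5 MHz signal bin are placeholders standing in for the recorded spectrum-analyzer traces, so it illustrates only the arithmetic, not the actual measurement chain.

```python
import numpy as np

# Bookkeeping for an SNR-improvement figure: compare signal-to-noise ratios
# extracted from two power spectra (internal pump on vs. off).  The spectra
# below are synthetic placeholders; in the experiment they would be the
# spectrum-analyzer traces around the 5 MHz phase-modulation signal.

rng = np.random.default_rng(0)
freqs_hz = np.linspace(4.9e6, 5.1e6, 2001)
signal_bin = int(np.argmin(np.abs(freqs_hz - 5.0e6)))

def synthetic_spectrum(noise_floor_dbm: float, signal_dbm: float) -> np.ndarray:
    spectrum = noise_floor_dbm + 0.5 * rng.standard_normal(freqs_hz.size)
    spectrum[signal_bin] = signal_dbm
    return spectrum

def snr_db(spectrum_dbm: np.ndarray) -> float:
    """Signal peak minus median noise floor, both in dB, excluding the signal bin."""
    noise = np.delete(spectrum_dbm, signal_bin)
    return float(spectrum_dbm[signal_bin] - np.median(noise))

pump_off = synthetic_spectrum(noise_floor_dbm=-90.0, signal_dbm=-60.0)
pump_on = synthetic_spectrum(noise_floor_dbm=-96.0, signal_dbm=-62.0)  # noise squeezed, signal deamplified

improvement = snr_db(pump_on) - snr_db(pump_off)
print(f"SNR improvement with pump on: {improvement:+.1f} dB")  # ~ +4 dB for these placeholder values
```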
Discussion and outlook. Our result can be placed in more general context of computing the impact of the purity on the QCRB [30][31][32], which has not been done for cavity-enhanced sensors. While we do not derive the QCRB for our setup from first principles, our argument follows the same spirit as Refs. [30][31][32]. There are two general conditions for achieving QCRB: (i) the detector should be in a pure state, and (ii) the back-action of the meter should be evaded [26,27]. If the state is not pure, it not only prevents achieving arbitrary low uncertainty in one quadrature, but also prohibits efficient back-action evasion, which relies on quantum correlations between the two quadratures. Therefore the main limitation on the achievable sensitivity is defined by the purity of the state upon interaction between the meter and the object. This is directly manifested in our fundamental limit in Eq. (9), via its proportionality to cavity internal loss. We also note that the output amplification proposed in Ref. [6] also evades the detection loss. Compared to the internal squeezing, it amplifies the signal after it has exited the detector cavity. Such an approach could also yield high sensitivity approaching the fundamental limit. However, in some applications, like gravitational-wave detection, the main source of the detection loss would occur between the interferometer and the output amplifier. In other cases, like chip-based sensors, propagation and coupling losses could be dominating. In these cases implementing output amplification might be in fact not beneficial or challenging, while internal squeezing provides a natural way to use the least lossy part of the system -the detector cavity itself. In the supplementary paper [47] we provide more details on the comparison of the two approaches. Our results are readily applicable to quantummetrological devices that are limited by quantum shot noise, and whose principle schemes can be mapped to a single cavity. It is especially promising for the cavities that naturally have nonlinear materials in them, such as on-chip devices [51,52], or whispering-gallery-mode sensors [53][54][55][56]. For these devices squeezing injection might be challenging, and the readout is often subject to significant losses. Then internal squeezing can become a useful tool for compensating these losses and achieving further quantum improvement to the sensitivity. Even in the systems with several cavities internal squeezing allows to enhance the sensitivity, by quantum-expanding the linewidth [22,23] or creating PT-symmetric configurations [57,58]. Our work contributes to the detailed understanding of the limits on quantum metrological experiments in such devices, and enables a wider range of acceptable losses in the system.
Locking compression plate fixation of a periprosthetic distal humeral fracture after intramedullary nailing of a humeral shaft fracture: A case report
a Guangzhou University of Traditional Chinese Medicine Second Affiliated Hospital, Guangdong Provincial Hospital of Traditional Chinese Medicine, Orthopedics Trauma Zhuhai Branch, Jingle Road Number 53, Xiangzhou District, Zhu Hai City, Guangdong Province 519015, China
b Guangzhou University of Traditional Chinese Medicine Second Affiliated Hospital, Guangdong Provincial Hospital of Traditional Chinese Medicine, Department of Orthopaedics, Dade Road Number 111, Yuexiu District, Guangzhou City, Guangdong Province 440100, China
Periprosthetic distal humeral fracture after intramedullary nailing of a humeral shaft fracture is relatively rare in clinical practice; only a few cases have been reported in the literature [1,2]. A fracture of the distal humerus around the prosthesis caused by reinjury only 3 weeks after intramedullary nailing is extremely rare, and the treatment of such a case is complex and challenging, because it is always difficult to achieve satisfactory fixation around an intramedullary device while minimizing soft tissue disruption. We hereby present a case of a periprosthetic distal humeral fracture (occurring below an intramedullary humeral nail inserted for fixation of a midshaft humeral fracture) that was successfully managed using a locking compression plate with cerclage cables.
Case presentation
A 47-year-old woman injured her left humerus in a fall. Initial radiographs and CT revealed a transverse, angulated, midshaft fracture of the left humerus (2018 OTA 12A3) (Fig. 1a-b), and the patient underwent closed reduction and internal fixation with an intramedullary nail four days after the injury. Excellent reduction was achieved according to the postoperative X-ray and CT (Fig. 1c-f). The patient was discharged from hospital one week after the operation; however, 3 weeks after the operation she reinjured her left humerus, with swelling and pain, in a fall from a 0.8-meter-high window platform at home. She was sent to the emergency department of our hospital immediately, and initial radiographs revealed a periprosthetic distal humeral fracture below the intramedullary humeral nail, while the humeral shaft fracture remained well aligned without loosening of the intramedullary nail (Fig. 1g-i). She was readmitted and reoperated on one week after the injury. The patient was placed in a supine position under general anesthesia. A posterolateral approach to the distal humerus was used. The distal locking screw was removed, and the radial nerve was exposed and protected along the posterior humerus. Reduction forceps were used to maintain the reduction of the distal humeral fracture temporarily. A six-hole distal humeral plate (LCP plating system, DePuy Synthes) was then chosen as a dynamic compression plate. Five locking screws were then inserted into the distal humerus, while the proximal fragment was fixed with two unicortical locking screws and two cerclage cables. Good reduction and stable fixation of the fracture were confirmed under direct vision. Postoperatively, the forearm was supported in a triangular sling for 6 weeks, with gradual restoration of elbow function and progressive shoulder and elbow movement. CT showed bony union 5 months after ORIF (Fig. 1o-p). At the final follow-up, 5 months postoperatively, the patient had no shoulder or elbow pain.
The active elbow range of motion was extension of 0°, flexion of 140° (Fig. 1r-s), pronation of 90°, and supination of 90°. The active shoulder range of motion was anterior elevation of 120°, external rotation at the side of 50°, and internal rotation to the 3rd lumbar vertebra. The numerical rating scale score for pain was zero. The American Shoulder and Elbow Surgeons shoulder score was 80, and she was able to perform all daily activities without assistance.

Fig. 1 (caption, continued): c-f. AP and lateral X-ray and CT of the left humerus on the first day after surgery, showing excellent reduction and good alignment with intramedullary nail fixation. g-j. AP and lateral X-ray and CT of the reinjured left humerus three weeks after surgery, showing the periprosthetic distal humeral fracture. k-n. AP and lateral X-ray and CT of the left humerus on the first day after reoperation, showing excellent reduction of the periprosthetic distal humeral fracture. o-p. CT of the left humerus five months after reoperation, showing union of the periprosthetic distal humeral fracture. q-r. External view of the patient five months after reoperation, showing good motion of the elbow joint.

Discussion
Periprosthetic distal humeral fracture after intramedullary nailing of a humeral shaft fracture is relatively rare in clinical practice, and only a few cases have been reported in the literature [1][2][3]. It is extremely rare for a distal humerus fracture to occur around the prosthesis because of re-injury only 3 weeks after intramedullary nailing; as far as we know, there has been no such report in the literature. As in our patient, the region around the distal locking screw of an intramedullary nail has an increased risk of fracture due to increased stress [3]. The treatment of such a case is complex and challenging. Firstly, fractures of the distal humerus around a prosthesis with intramedullary nailing are markedly displaced and difficult to reduce because of the mass effect of the intramedullary nail. Secondly, for extremely unstable fractures, maintaining reduction of this kind of short oblique fracture with a cast is also complex and challenging, and even when unavoidable, prolonged immobilization can lead to elbow stiffness and dysfunction. ORIF is a treatment option in the absence of evidence of intramedullary nail loosening, but it too is challenging in this situation. Plate fixation of both the distal humeral and the humeral shaft fractures after nail removal would inevitably require extended incisions and extensive soft tissue dissection; by destroying the blood supply both inside and outside the humeral medullary canal, this could lead to nonunion or delayed union of the humeral shaft fracture, or even failure of the operation. The mass effect of the nail and the thickness of the bone cortex also limit the options for screw fixation through the proximal part of the plate: bicortical locking screws cannot be placed, so fixation can only be achieved with cerclage cables or unicortical screws, which are inferior to bicortical locking screws with respect to torsional and axial compression loads [4,5]. That is why we added two cerclage cables after fixation with two unicortical locking screws, to avoid loosening and failure of the internal fixation. We removed the distal locking screw of the nail because this relieves the stress concentration at the distal humeral fracture and provides a better position for the plate. The primary humeral shaft fracture lines were confirmed under direct vision after removing the screw.
The periprosthetic distal humeral fracture was stable after fixation with the posterolateral locking compression plate.
Conclusion
Periprosthetic distal humeral fracture is rare in clinical practice but deserves particular attention because of its special fracture site. We demonstrated how to manage these difficult fractures successfully through a posterior incision, using a single locking plate combined with unicortical locking screws and cerclage cables in the region overlapping the humeral nail. Soft tissue management is paramount to maintain vascularity of the fracture region and maximize healing potential.
Informed consent
Consent was obtained from the patient for publication of this case report and accompanying images.
Robust control, multidimensional systems and multivariable Nevanlinna-Pick interpolation The connection between the standard $H^\infty$-problem in control theory and Nevanlinna-Pick interpolation in operator theory was established in the 1980s, and has led to a fruitful cross-pollination between the two fields since. In the meantime, research in $H^\infty$-control theory has moved on to the study of robust control for systems with structured uncertainties and to various types of multidimensional systems, while Nevanlinna-Pick interpolation theory has moved on independently to a variety of multivariable settings. Here we review these developments and indicate the precise connections which survive in the more general multidimensional/multivariable incarnations of the two theories. Introduction Starting in the early 1980s with the seminal paper [139] of George Zames, there occurred an active interaction between operator theorists and control engineers in the development of the early stages of the emerging theory of H ∞ -control. The cornerstone for this interaction was the early recognition by Francis-Helton-Zames [65] that the simplest case of the central problem of H ∞ -control (the sensitivity minimization problem) is one and the same as a Nevanlinna-Pick interpolation problem which had already been solved in the early part of the twentieth century (see [110,105]). For the standard problem of H ∞ -control it was known early on that it could be brought to the so-called Model-Matching form (see [53,64]). In the simplest cases, the Model-Matching problem converts easily to a Nevanlinna-Pick interpolation problem of classical type. Handling the more general problems of H ∞ -control required extensions of the theory of Nevanlinna-Pick interpolation to tangential (or directional) interpolation conditions for matrix-valued functions; such extensions of the interpolation theory were pursued by both engineers and mathematicians (see e.g. [26,58,90,86,87]). Alternatively, the Model-Matching problem can be viewed as a Sarason problem which is suitable for application of Commutant Lifting theory (see [125,62]). The approach of [64] used an additional conversion to a Nehari problem where existing results on the solution of the Nehari problem in state-space coordinates were applicable (see [69,33]). The book of the H ∞ -theory and the interpolation theory in these more general settings. As we shall see, some aspects which are taken for granted in the 1-D/single-variable case become much more subtle in the N -D/multivariable case. Along the way we shall encounter a variety of topics that have gained attention recently, and sometimes less recently, in the engineering literature. Besides the present Introduction, the paper consists of five sections which we now describe: (1) In Section 2 we lay out four specific results for the classical 1-D case; these serve as models for the type of results which we wish to generalize to the N -D/multivariable settings. (2) In Section 3 we survey the recent results of Quadrat [117,118,119,120,121,122] on internal stabilization and parametrization of stabilizing controllers in an abstract ring setting. The main point here is that it is possible to parametrize the set of all stabilizing controllers in terms of a given stabilizing controller even in settings where the given plant may not have a double coprime factorizationresolving some issues left open in the book of Vidyasagar [136]. 
In the case where a double-coprime factorization is available, the parametrization formula is more efficient. Our modest new contribution here is to extend the ideas to the setting of the standard problem of H ∞ -control (in the sense of the book of Francis [64]) where the given plant is assumed to have distinct disturbance and control inputs and distinct error and measurement outputs. (3) In Section 4 we look at the internal-stabilization/H ∞ -control problem for multidimensional systems. These problems have been studied in a purely frequencydomain framework (see [92,93]) as well as in a state-space framework (see [81,55,56]). In Subsection 4.1, we give the frequency-domain formulation of the problem. When one takes the stable plants to consist of the ring of structurally stable rational matrix functions, the general results of Quadrat apply. In particular, for this setting stabilizability of a given plant implies the existence of a double coprime factorization (see [119]). Application of the Youla-Kučera parametrization then leads to a Model-Matching form and, in the presence of some boundary rank conditions, the H ∞ -problem converts to a polydisk version of the Nevanlinna-Pick interpolation problem. Unlike the situation in the classical single-variable case, this interpolation problem has no practical necessary-and-sufficient solution criterion and in practice one is satisfied with necessary and sufficient conditions for the existence of a solution in the more restrictive Schur-Agler class (see [1,3,35]). In Subsection 4.2 we formulate the internal-stabilization/H ∞ -control problem in Givone-Roesser state-space coordinates. We indicate the various subtleties involved in implementing the state-space version [104,85] of the double-coprime factorization and associated Youla-Kučera parametrization of the set of stabilizing controllers. With regard to the H ∞ -control problem, unlike the situation in the classical 1-D case, there is no useable necessary and sufficient analysis for solution of the problem; instead what is done (see e.g. [55,56]) is the use of an LMI/Bounded-Real-Lemma analysis which provides a convenient set of sufficient conditions for solution of the problem. This sufficiency analysis in turn amounts to an N -D extension of the LMI solution [78,66] of the 1-D H ∞ -control problem and can be viewed as a necessary and sufficient analysis of a compromise problem (the "scaled" H ∞ -problem). While stabilization and H ∞ -control problems have been studied in the statespace setting [81,55,56] and in the frequency-domain setting [92,93] separately, there does not seem to have been much work on the precise connections between these two settings. The main point of Subsection 4.3 is to study this relationship; while solving the state-space problem implies a solution of the frequency-domain problem, the reverse direction is more subtle and it seems that only partial results are known. Here we introduce a notion of modal stabilizability and modal detectability (a modification of the notions of modal controllability and modal observability introduced by Kung-Levy-Morf-Kailath [88]) to obtain a partial result on relating a solution of the frequency-domain problem to a solution of the associated state-space problem. 
This result suffers from the same weakness as a corresponding result in [88]: just as the authors in [88] were unable to prove that minimal (i.e., simultaneously modally controllable and modally observable) realizations for a given transfer matrix exist, so also we are unable to prove that a simultaneously modally stabilizable and modally detectable realization exists. A basic difficulty in translating from frequency-domain to state-space coordinates is the failure of the State-Space-Similarity theorem and related Kalman state-space reduction for N -D systems. Nevertheless, the result is a natural analogue of the corresponding 1-D result. There is a parallel between the control-theory side and the interpolation-theory side in that in both cases one is forced to be satisfied with a compromise solution: the scaled-H ∞ problem on the control-theory side, and the Schur-Agler class (rather than the Schur class) on the interpolation-theory side. We include some discussion on the extent to which these compromises are equivalent. (4) In Section 5 we discuss several 1-D variations on the internal-stabilization and H ∞ -control problem which lead to versions of the N -D/multivariable problems discussed in Section 4. It was observed early on that an H ∞ -controller has good robustness properties, i.e., an H ∞ -controller not only provides stability of the closed-loop system associated with the given (or nominal) plant for which the control was designed, but also for a whole neighborhood of plants around the nominal plant. This idea was refined in a number of directions, e.g., robustness with respect to additive or multiplicative plant uncertainty, or with respect to uncertainty in a normalized coprime factorization of the plant (see [100]). Another model for an uncertainty structure is the Linear-Fractional-Transformation (LFT) model used by Doyle and coworkers (see [97,98]). Here a key concept is the notion of structured singular value µ(A) for a finite square matrix A introduced by Doyle and Safonov [52,124] which simultaneously generalizes the norm and the spectral radius depending on the choice of uncertainty structure (a C * -algebra of matrices with a prescribed block-diagonal structure); we refer to [107] for a comprehensive survey. If one assumes that the controller has on-line access to the uncertainty parameters one is led to a gain-scheduling problem which can be identified as the type of multidimensional control problem discussed in Section 4.2-see [106,18]; we survey this material in Subsection 5.1. In Subsection 5.2 we review the purely frequency-domain approach of Helton [73,74] toward gain-scheduling which leads to the frequency-domain internal-stabilization/H ∞ -control problem discussed in Section 4.1. Finally, in Section 5.3 we discuss a hybrid frequency-domain/state-space model for structured uncertainty which leads to a generalization of Nevanlinna-Pick interpolation for single-variable functions where the constraint that the norm be uniformly bounded by 1 is replaced by the constraint that the µ-singular value be uniformly bounded by 1; this approach has only been analyzed for very special cases of the control problem but does lead to interesting new results for operator theory and complex geometry in the work of Bercovici-Foias-Tannenbaum [38,39,40,41], Agler-Young [5,6,7,8,9,10,11,12,13], Huang-Marcantognini-Young [77], and Popescu [114]. (5) The final Section 6 discusses an enhancement of the LFT-model for structured uncertainty to allow dynamic time-varying uncertainties. 
If the controller is allowed to have on-line access to these more general uncertainties, then the solution of the internal-stabilization/H ∞ -control problem has a form completely analogous to the classical 1-D case. Roughly, this result corresponds to the fact that, with this noncommutative enhanced uncertainty structure, the a priori upper bound µ(A) for the structured singular value µ(A) is actually equal to µ(A), despite the fact that for non-enhanced structures, the gap between µ and µ can be arbitrarily large (see [133]). In this precise form, the result appears for the first time in the thesis of Paganini [108] but various versions of this type of result have also appeared elsewhere (see [37,42,60,99,129]). We discuss this enhanced noncommutative LFT-model in Subsection 6.1. In Subsection 6.2 we introduce a noncommutative frequency-domain control problem in the spirit of Chapter 4 of the thesis of Lu [96], where the underlying polydisk occurring in Section 4.1 is now replaced by the noncommutative polydisk consisting of all d-tuples of contraction operators on a fixed separable infinite-dimensional Hilbert space K and the space of H ∞ -functions is replaced by the space of scalar multiples of the noncommutative Schur-Agler class introduced in [28]. Via an adaptation of the Youla-Kučera parametrization of stabilizing controllers, the internal-stabilization/H ∞ -control problem can be reduced to a Model-Matching form which has the interpretation as a noncommutative Sarason interpolation problem. In the final Subsection 6.3, we show how the noncommutative state-space problem is exactly equivalent to the noncommutative frequencydomain problem and thereby obtain an analogue of the classical case which is much more complete than for the commutative-variable case given in Section 4.3. In particular, if the problem data are given in terms of state-space coordinates, the noncommutative Sarason problem can be solved as an application of the LMI solution of the H ∞ -problem. While there has been quite a bit of recent activity on this kind of noncommutative function theory (see e.g. [14,22,75,82,115,116]), the noncommutative Sarason problem has to this point escaped attention; in particular, it is not clear how the noncommutative Nevanlinna-Pick interpolation problem studied in [22] is connected with the noncommutative Sarason problem. Finally we mention that each section ends with a "Notes" subsection which discusses more specialized points and makes some additional connections with existing literature. Acknowledgement. The authors thank Quanlei Fang and Gilbert Groenewald for the useful discussions in an early stage of preparation of the present paper. We also thank the two anonymous reviewers for their thorough readings of the first version and constructive suggestions for the preparation of the final version of this paper. The 1-D systems/single-variable case Let C[z] be the space of polynomials with complex coefficients and C(z) the quotient field consisting of rational functions in the variable z. Let RH ∞ be the subring of stable elements of C(z) consisting of those rational functions which are analytic and bounded on the unit disk D, i.e., with no poles in the closed unit disk D. We assume to be given a plant G = G11 G12 G21 G22 : W ⊕ U → Z ⊕ Y which is given as a block matrix of appropriate size with entries from C(z). 
Here the spaces U, W, Z and Y have the interpretation of control-signal space, disturbance-signal space, error-signal space and measurement-signal space, respectively, and consist of column vectors of given sizes n U , n W , n Z and n Y , respectively, with entries from C(z). For this plant G we seek to design a controller K : Y → U, also given as a matrix over C(z), that stabilizes the feedback system Σ(G, K) obtained from the signal-flow diagram in Figure 1 in a sense to be defined precisely below. Note that the various matrix entries G ij of G are themselves matrices with entries from C(z) of compatible sizes (e.g., G 11 has size n Z × n W ) and K is a matrix over C(z) of size n U × n Y . The system equations associated with the signal-flow diagram of Figure 1 can be written as  Here v 1 and v 2 are tap signals used to detect stability properties of the internal signals u and y. We say that the system Σ(G, K) is well-posed if there is a well-defined map from w v1 v2 to z u y . It follows from a standard Schur complement computation that the system is well-posed if and only if det(I − G 22 K) ≠ 0, and that in that case the map from w v1 v2 to z u y is given by  We say that the system Σ(G, K) is internally stable if Σ(G, K) is well-posed and, in addition, if the map Θ(G, K) maps (RH ∞ ) nW ⊕ (RH ∞ ) nU ⊕ (RH ∞ ) nY into (RH ∞ ) nZ ⊕ (RH ∞ ) nU ⊕ (RH ∞ ) nY , i.e., stable inputs w, v 1 , v 2 are mapped to stable outputs z, u, y. Note that this is the same as the condition that the entries of Σ(G, K) be in RH ∞ . We say that the system Σ(G, K) has performance if Σ(G, K) is internally stable and in addition the transfer function T zw from w to z has supremum-norm over the unit disk bounded by some tolerance which we normalize to be equal to 1: ‖T zw ‖ ∞ := sup λ∈D ‖T zw (λ)‖ ≤ 1. Here ‖T zw (λ)‖ refers to the induced operator norm, i.e., the largest singular value for the matrix T zw (λ). We say that the system Σ(G, K) has strict performance if in addition ‖T zw ‖ ∞ < 1. The stabilization problem then is to describe all (if any exist) internally stabilizing controllers K for the given plant G, i.e., all K ∈ C(z) nU ×nY so that the associated closed-loop system Σ(G, K) is internally stable. The standard H ∞ -problem is to find all internally stabilizing controllers which in addition achieve performance ‖T zw ‖ ∞ ≤ 1. The strictly suboptimal H ∞ -problem is to describe all internally stabilizing controllers which also achieve strict performance ‖T zw ‖ ∞ < 1. 2.1. The model-matching problem. Let us now consider the special case where G 22 = 0, so that G has the form G = G11 G12 G21 0 . In this case well-posedness is automatic and Θ(G, K) simplifies accordingly; internal stability for the closed-loop system Σ(G, K) is then equivalent to stability of the four transfer matrices G 11 , G 12 , G 21 and K. Hence internal stabilizability of G is equivalent to stability of G 11 , G 12 and G 21 ; when the latter holds a given K internally stabilizes G if and only if K itself is stable. Now assume that G 11 , G 12 and G 21 are stable. Then the H ∞ -performance problem for G consists of finding stable K so that ‖G 11 + G 12 KG 21 ‖ ∞ ≤ 1. Following the terminology of [64], the problem is called the Model-Matching Problem. Due to the influence of the paper [125], this problem is usually referred to as the Sarason problem in the operator theory community; in [125] it is shown explicitly how the problem can be reduced to an interpolation problem. In general control problems the assumption that G 22 = 0 is an unnatural assumption.
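For a quick numerical feel for this performance measure, the supremum norm of a candidate closed-loop map G 11 + G 12 KG 21 can be estimated by sampling the largest singular value (here simply the absolute value, since the data are scalar) over the unit circle. The sketch below is written in Python with hypothetical rational data and is meant only to illustrate the definition, not as part of the theory developed here.

    # Sketch: estimate ||G11 + G12*K*G21||_inf by sampling the unit circle.
    # The scalar rational functions below are hypothetical illustrative data,
    # chosen so that all poles lie outside the closed unit disk.
    import numpy as np

    G11 = lambda z: 0.3 / (1.0 - 0.5 * z)    # pole at z = 2
    G12 = lambda z: 1.0 / (1.0 - 0.4 * z)    # pole at z = 2.5
    G21 = lambda z: 0.5
    K   = lambda z: 0.2 * z                  # a stable candidate controller

    def hinf_norm(F, n=2000):
        # sup of |F| over the circle; for matrix-valued F one would take the
        # largest singular value at each sample point instead
        angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        return max(abs(F(np.exp(1j * t))) for t in angles)

    Tzw = lambda z: G11(z) + G12(z) * K(z) * G21(z)
    print(hinf_norm(Tzw))    # performance holds if this value is <= 1

Such a direct check presupposes, of course, the model-matching structure G 22 = 0, so that the closed-loop map is affine in K.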
However, after making a change of coordinates using the Youla-Kučera parametrization or the Quadrat parametrization, discussed below, it turns out that the general H ∞ -problem can be reduced to a model-matching problem. 2.2. The frequency-domain stabilization and H ∞ problem. The following result on characterization of stabilizing controllers is well known (see e.g. [64] or [136,137] for a more general setting). Theorem 2.1. Let G = G11 G12 G21 G22 be a plant of size (n Z + n Y ) × (n W + n U ) with entries in C(z) as above. Assume that G is stabilizable, i.e., there exists a rational matrix function K of size n U × n Y so that the nine transfer functions in (2.2) are stable. Moreover, if we are given a double coprime factorization for G 22 , i.e., stable transfer matrices D, N , X, Y , D, N , X and Y so that the determinants of D, D, X and X are all nonzero (in RH ∞ ) and (such double coprime factorizations always exist since RH ∞ is a Principal Ideal Domain), then the set of all stabilizing controllers K is given by either of the formulas where Λ is a free stable parameter from RH ∞ L(U ,Y) such that det(X + N Λ) ≠ 0 or equivalently det( X + ΛN ) ≠ 0. Through the characterization of the stabilizing controllers, those controllers that, in addition, achieve performance can be obtained from the solutions of a Model-Matching/Sarason interpolation problem. Theorem 2.2. Assume that G ∈ C(z) (nZ +nY )×(nW +nU ) is stabilizable and that G 22 admits a double coprime factorization (3.9). Let K ∈ C(z) nU ×nY . Then K is a solution to the standard H ∞ problem for G if and only if where Λ ∈ RH ∞ L(U ,Y) so that det(X + N Λ) ≠ 0, or equivalently det( X + ΛN ) ≠ 0, is any solution to the Model-Matching/Sarason interpolation problem for G 11 , G 12 and G 21 defined by i.e., so that We note that in case G 12 is injective and G 21 is surjective on the unit circle, by absorbing outer factors into the free parameter Λ we may assume without loss of generality that G 12 is inner (i.e., G 12 (z) is isometric for z on the unit circle) and G 21 is co-inner (i.e., G 21 (z) is coisometric for z on the unit circle). This can then be analyzed via an associated operator Γ (see [26,Theorem 16.9.3]), giving a direct generalization of the connection between a model-matching/Sarason interpolation problem and Nevanlinna-Pick interpolation as given in [125,65] for the scalar case, but we will not go into the details of this here. 2.3. The state-space approach. We now restrict the classes of admissible plants and controllers to the transfer matrices whose entries are in C(z) 0 , the space of rational functions without a pole at 0 (i.e., analytic in a neighborhood of 0). In that case, a transfer matrix F : U → Y with entries in C(z) 0 admits a state-space realization: There exists a quadruple {A, B, C, D} consisting of matrices whose sizes are given by A : X → X , B : U → X , C : X → Y and D : U → Y, where the state-space X is finite dimensional, so that F (z) = D + zC(I − zA) −1 B for z in a neighborhood of 0. Sometimes we consider quadruples {A, B, C, D} of operators, of compatible size as above, without any explicit connection to a transfer matrix, in which case we just speak of a realization. Associated with the realization {A, B, C, D} is the linear discrete-time system of equations Σ : x(n + 1) = Ax(n) + Bu(n), y(n) = Cx(n) + Du(n) (n ∈ Z + ). The system Σ and function F are related through the fact that F is the transfer function of Σ. The two-by-two matrix (2.4) is called the system matrix of the system Σ. For the rest of this section we shall say that an operator A on a finite-dimensional state space X is stable if all its eigenvalues are in the open unit disk, or, equivalently, A n x → 0 as n → ∞ for each x ∈ X .
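To make these realization formulas concrete, the following Python sketch (with an arbitrary made-up quadruple {A, B, C, D}) checks stability of A through its eigenvalues and evaluates the transfer function F (z) = D + zC(I − zA) −1 B at a few points; it is only an illustration of the definitions just given.

    # Sketch: evaluate F(z) = D + z*C*(I - z*A)^{-1}*B and test stability of A
    # by its eigenvalues; the matrices are arbitrary illustrative data.
    import numpy as np

    A = np.array([[0.5, 0.1],
                  [0.0, 0.3]])
    B = np.array([[1.0],
                  [0.5]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.2]])

    def F(z):
        n = A.shape[0]
        return D + z * (C @ np.linalg.solve(np.eye(n) - z * A, B))

    print(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)   # A is stable
    print(F(0.0))                                        # equals D
    print(F(np.exp(1j * 0.7)))                           # value on the unit circle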
The following result deals with two key notions for the stabilizability problem on the state-space level. Theorem 2.3. (I) If {A, B} is an input pair, i.e., A, B are operators with A : X → X and B : U → X for a finite-dimensional state space X and a finite-dimensional input space U, then the following are equivalent: (1) {A, B} is operator-stabilizable, i.e., there exists a state-feedback operator F : X → U so that A + BF is stable. (2) {A, B} is Hautus-stabilizable, i.e., the matrix pencil [ I − zA B ] is surjective for all z in the closed disk D. (3) a certain LMI has a positive-definite solution X. Here Γ < 0 for a square matrix Γ means that −Γ is positive definite. (II) Dually, if {C, A} is an output pair, i.e., C, A are operators with A : X → X and C : X → Y for a finite-dimensional state space X and a finite-dimensional output space Y, then the following are equivalent: (1) {C, A} is operator-detectable, i.e., there exists an output-injection operator L : Y → X so that A + LC is stable. (2) {C, A} is Hautus-detectable, i.e., the matrix pencil I−zA C is injective for all z in the closed disk D. When the input pair {A, B} satisfies any one (and hence all) of the three equivalent conditions in part (I) of Theorem 2.3, we shall say simply that {A, B} is stabilizable. Similarly, if (C, A) satisfies any one of the three equivalent conditions in part (II), we shall say simply that {C, A} is detectable. Given a realization {A, B, C, D}, we shall say that {A, B, C, D} is stabilizable and detectable if {A, B} is stabilizable and {C, A} is detectable. In the state-space formulation of the internal stabilization/H ∞ -control problem, one assumes to be given a state-space realization (2.5) for the plant G, where the system matrix has the form (2.6). One then seeks a controller K which is also given in terms of a state-space realization which provides internal stability (in the state-space sense to be defined below) and/or H ∞ -performance for the closed-loop system. Well-posedness of the closed-loop system is equivalent to invertibility of I − D 22 D K . To keep various formulas affine in the design parameters A K , B K , C K , D K , it is natural to assume that D 22 = 0; this is considered not unduly restrictive since under the assumption of well-posedness this can always be arranged via a change of variables (see [78]). Then the closed loop system Θ(G, K) admits a state space realization {A cl , B cl , C cl , D cl } given by its system matrix (2.7). Theorem 2.4 (internal stability, [57]). Suppose that we are given a system matrix as in (2.6) with D 22 = 0 with associated transfer matrix G as in (2.5). Then there exists a K(z) = D K +zC K (I−zA K ) −1 B K which internally stabilizes G (in the state-space sense) if and only if {A, B 2 } is stabilizable and {C 2 , A} is detectable. In this case one such controller is given by the realization {A K , B K , C K , D K } with system matrix where F and L are state-feedback and output-injection operators chosen so that A + B 2 F and A + LC 2 are stable. In addition to the state-space version of the stabilizability problem we also consider a (strict) state-space H ∞ problem, namely to find a controller K given by a state-space realization {A K , B K , C K , D K } of compatible size so that the transfer function T zw of the closed loop system, given by the system matrix (2.7), is stable (in the state-space sense) and has a supremum norm ‖T zw ‖ ∞ of at most 1 (respectively, less than 1). For a time the definitive solution of the H ∞ -control problem in state-space coordinates was the coupled-Riccati-equation solution due to Doyle-Glover-Khargonekar-Francis [54]. This solution has now been superseded by the LMI solution of Gahinet-Apkarian [66] which can be stated as follows. Note that the problem can be solved directly without first processing the data to the Model-Matching form; the criterion involves a pair of LMIs, one of which is (2.9), and the coupling condition X I I Y ≥ 0. Here N c and N o are matrices chosen so that N c is injective and Im N c = Ker B * 2 D * 12 and N o is injective and Im N o = Ker C 2 D 21 .
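As a computational aside, the stabilizability and detectability hypotheses appearing in Theorem 2.4 and Theorem 2.5 are easy to test numerically, and operators F and L as in Theorem 2.4 can be produced with standard Riccati-based tools. The Python sketch below, with made-up matrices and SciPy's discrete-time Riccati solver, shows one conventional way of doing this; it is an illustration only and is not taken from the references cited above.

    # Sketch: check that {A, B2} is stabilizable and {C2, A} is detectable
    # (eigenvalue form of the Hautus/PBH test), and construct F and L with
    # A + B2 F and A + L C2 stable using discrete-time Riccati equations.
    # All matrices are made-up illustrative data.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A  = np.array([[1.2, 0.3],
                   [0.0, 0.4]])
    B2 = np.array([[1.0],
                   [0.0]])
    C2 = np.array([[1.0, 1.0]])

    def stabilizable(A, B):
        # rank [A - lam*I, B] = n for every eigenvalue lam with |lam| >= 1
        # (equivalent to the pencil test over the closed disk with z = 1/lam)
        n = A.shape[0]
        return all(np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B])) == n
                   for lam in np.linalg.eigvals(A) if abs(lam) >= 1.0)

    def detectable(C, A):
        return stabilizable(A.conj().T, C.conj().T)

    print(stabilizable(A, B2), detectable(C2, A))

    # State feedback via an LQR-type Riccati equation: A + B2 F is stable.
    P = solve_discrete_are(A, B2, np.eye(2), np.eye(1))
    F = -np.linalg.solve(B2.T @ P @ B2 + np.eye(1), B2.T @ P @ A)
    # Output injection via the dual Riccati equation: A + L C2 is stable.
    Q = solve_discrete_are(A.T, C2.T, np.eye(2), np.eye(1))
    L = -np.linalg.solve(C2 @ Q @ C2.T + np.eye(1), C2 @ Q @ A.T).T

    print(np.max(np.abs(np.linalg.eigvals(A + B2 @ F))) < 1.0)
    print(np.max(np.abs(np.linalg.eigvals(A + L @ C2))) < 1.0)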
We shall discuss the proof of Theorem 2.5 in Section 4.2 below in the context of a more general multidimensional-system H ∞ -control problem. The next result is the key to transferring from the frequency-domain version of the internal-stabilization/H ∞ -control problem to the state-space version. has all matrix entries in RH ∞ . Notes. In the context of the discussion immediately after the statement of Theorem 2.2, in case G 12 and/or G 21 drop rank at points on the unit circle, the Model-Matching problem in Theorem 2.2 may convert to a boundary Nevanlinna-Pick interpolation problem for which there is an elaborate specialized theory (see e.g. Chapter 21 of [26] and the more recent [43]). However, if one sticks with the strictly suboptimal version of the problem, one can solve the problem with the boundary interpolation conditions if and only if one can solve the problem without the boundary interpolation conditions, i.e., boundary interpolation conditions are irrelevant as far as existence criteria are concerned. This is the route taken in the LMI solution of the H ∞ -problem and provides one explanation for the disappearance of any rank conditions in the formulation of the solution of the problem. For a complete analysis of the relation between the coupled-Riccati-equation of [54] versus the LMI solution of [66], we refer to [127]. The fractional representation approach to stabilizability and performance In this section we work in the general framework of the fractional representation approach to stabilization of linear systems as introduced originally by Desoer, Vidyasagar and coauthors [50,137] in the 1980s and refined only recently in the work of Quadrat [118,121,122]. For an overview of the more recent developments we recommend the survey article [117] and for a completely elementary account of the generalized Youla-Kučera parametrization with all the algebro-geometric interpretations stripped out we recommend [120]. The set of stable single-input single-output (SISO) transfer functions is assumed to be given by a general ring A in place of the ring RH ∞ used for the classical case as discussed in Section 2; the only assumption which we shall impose on A is that it be a commutative integral domain. It therefore has a quotient field K := Q(A) = {n/d : d, n ∈ A, d = 0} which shall be considered as the set of all possible SISO transfer functions (or plants). Examples of A which come up include the ring R s (z) of real rational functions of the complex variable z with no poles in the closed right half plane, the Banach algebra RH ∞ (C + ) of all bounded analytic functions on the right half plane C + which are real on the positive real axis, and their discrete-time analogues: (1) real rational functions with no poles in the closed unit disk (or closed exterior of the unit disk depending on how one sets conventions), and (2) the Banach algebra RH ∞ (D) of all bounded holomorphic functions on the unit disk D with real values on the real interval (−1, 1). There are also Banach subalgebras of RH ∞ (C + ) or RH ∞ (D) (e.g., the Wiener algebra and its relatives such as the Callier-Desoer class-see [48]) which are of interest. In addition to these examples there are multivariable analogues, some of which we shall discuss in the next section. We now introduce some notation. 
We assume that the control-signal space U, the disturbance-signal space W, the error-signal space Z and the measurement signal space Y consist of column vectors of given sizes n U , n W , n Z and n Y , respectively, with entries from the quotient field K of A: We are given a plant G = G11 G12 G21 G22 : W ⊕U → Z ⊕Y and seek to design a controller K : Y → U that stabilizes the system Σ(G, K) of Figure 1 as given in Section 2. The various matrix entries G ij of G are now matrices with entries from K (rather than RH ∞ as in the classical case) of compatible sizes (e.g., G 11 has size n W × n U ) and K is a matrix over K of size n U × n Y . Again v 1 and v 2 are tap signals used to detect stability properties of the internal signals u and y. Just as was explained in Section 2 for the classical case, the system Σ(G, K) is well-posed if there is a well-defined map from w v1 v2 to z u y and this happens exactly when det(I − G 22 K) = 0 (where the determinant now is an element of A); when this is the case, the map from where Θ(G, K) is given by (2.2). We say that the system Σ(G, K) is internally stable if Σ(G, K) is well-posed and, in addition, if the map Θ(G, K) maps A nW ⊕A nU ⊕A nY into A nZ ⊕ A nU ⊕ A nY , i.e., stable inputs w, v 1 , v 2 are mapped to stable outputs z, u, y. Note that this is the same as the entries of Σ(G, K) being in A. To formulate the standard problem of H ∞ -control, we assume that A is equipped with a positive-definite inner product making A at least a pre-Hilbert space with norm · A ; in the classical case, one takes this norm to be the L 2 -norm over the unit circle. Then we say that the system Σ(G, K) has performance if Σ(G, K) is internally stable and in addition the transfer function T zw from w to z has induced operator norm bounded by some tolerance which we normalize to be equal to 1: We say that the system Σ(G, K) has strict performance if in fact T zw op < 1. The stabilization problem then is to describe all (if any exist) internally stabilizing controllers K for the given plant G, i.e., all K ∈ K nU ×nY so that the associated closed-loop system Σ(G, K) is internally stable. The standard H ∞ -problem is to find all internally stabilizing controllers which in addition achieve performance T zw op ≤ 1. The strictly suboptimal H ∞ -problem is to describe all internally stabilizing controllers which achieve strict performance T zw op < 1. The H ∞ -control problem for the special case where G 22 = 0 is the Model-Matching problem for this setup. With the same arguments as in Subsection 2.1 it follows that stabilizability forces G 11 , G 12 and G 21 all to be stable (i.e., to have all matrix entries in A) and then K stabilizes exactly when also K is stable. 3.1. Parametrization of stabilizing controllers in terms of a given stabilizing controller. We return to the general case i.e., G = G11 G12 G21 G22 : W ⊕U → Z ⊕Y. Now suppose we have a stabilizing controller K ∈ K nU ×nY . Set (3.1) Furthermore, Θ(G, K) can then be written as It is not hard to see that if U ∈ A nY ×nY and V ∈ A nU ×nY are such that det U = 0, U − G 22 V = I and (3.2) is stable, i.e., in A (nZ +nU +nY )×(nW +nU +nY ) , then K = V U −1 is a stabilizing controller. A dual result holds if we set while conversely, for any U ∈ A nU ×nU and V ∈ A nU ×nY with det U = 0 and U − V G 22 = I and such that (3.4) is stable, we have that K = U −1 V is a stabilizing controller. This leads to the following first-step more linear reformulation of the definition of internal stabilization. 
With this result in hand, we are able to get a parametrization for the set of all stabilizing controllers in terms of an assumed particular stabilizing controller. (1) Let K * ∈ K nU ×nY be a stabilizing controller for G ∈ K (nZ +nY )×(nW +nU ) . Then the set of all stabilizing controllers is given by where Q ∈ K nU ×nY is an element of the set (2) Let K * ∈ K nU ×nY be a stabilizing controller for G ∈ K (nZ +nY )×(nW +nU ) . Define Then the set of all controllers is given by where Q ∈ K nU ×nY is an element of the set Ω (3.6) such that in addition det( U * + QG 22 ) = 0. Proof. By Theorem 3.1, if K is a stabilizing controller for G, then K has the form (1) of Theorem 3.1 and then Θ(G, K) is as in (3.2). Similarly Θ(G, K * ) is given as Θ(G; U * , V * ) in (3.2) with U * , V * in place of U, V . As by assumption Θ(G; where Q is an element of Ω such that det(U * + G 22 Q) = 0. The drawback of the parametrization of the stabilizing controllers in Theorem 3.2 is that the set Ω is not really a free-parameter set. By definition, Q ∈ Ω if Q itself is stable (from the (1,3) entry in the defining matrix for the Ω in (3.6)), but, in addition, the eight additional transfer matrices should all be stable as well. The next lemma shows how the parameter set Ω can in turn be parametrized by a free stable parameter Λ of size (n U + n Y ) × (n U + n Y ). Lemma 3.3. Assume that G is stabilizable and that K * is a particular stabilizing controller for G. Let Q ∈ K nU ×nY . Then the following are equivalent: (iii) Q has the form Q = LΛL for a stable free-parameter Λ ∈ A (nU +nY )×(nU +nY ) , where L ∈ A nU ×(nU +nY ) and L ∈ A (nU +nY )×nY are given by Hence (ii) implies (iii). Finally assume Q = LΛL for a stable Λ. To show that Q ∈ Ω, as Λ is stable, it suffices to show that   L is stable, and L 2 := L G 21 G 22 I is stable. Spelling out L 1 , using the definition of L from (3.8), gives We note that each of the six matrix entries of L 1 are stable, since they all occur among the matrix entries of Θ(G, K * ) (see (2.2)) and K * stabilizes G by assumption. Similarly, each of the six matrix entries of L 2 given by is stable since K * stabilizes G. It therefore follows that Q ∈ Ω as wanted. We say that K stabilizes Figure 1 is stable, i.e., the usual stability holds with w = 0 and z ignored. This amounts to the stability of the lower right 2 × 2 block in Θ(G, K): The equivalence of (i) and (ii) in Lemma 3.3 implies the following result. Proof. Assume K * ∈ K nU ×nY stabilizes G. Then in particular the lower left 2 × 2 block in Θ(G, K * ) is stable. Thus K * stabilizes G 22 . Moreover, K stabilizes G 22 if and only if K stabilizes G when we impose G 11 = 0, G 12 = 0 and G 21 = 0, that is, K is of the form (3.5) with U * and V * as in Theorem 3.2 and Q ∈ K nU ×nY is such that I G22 Q [ G22 I ] is stable. But then it follows from the implication (ii) =⇒ (i) in Lemma 3.3 that Q is in Ω, and thus, by Theorem 3.2, K stabilizes G (without Combining Lemma 3.3 with Theorem 3.2 leads to the following generalization of Theorem 2.1 giving a parametrization of stabilizing controllers without the assumption of any coprime factorization. Then the set of all stabilizing controllers for G are given by where Q = LΛL where L and L are given by (3.8) and Λ is a free stable parameter 3.2. The Youla-Kučera parametrization. 
There are two drawbacks to the parametrization of the stabilizing controllers obtained in Theorem 3.5, namely, to find all stabilizing controllers one first has to find a particular stabilizing controller, and secondly, the map Λ → Q given in Part (iii) of Lemma 3.3 is in general not one-toone. We now show that, under the additional hypothesis that G 22 admits a double coprime factorization, both issues can be remedied, and we are thereby led to the well known Youla-Kučera parametrization for the stabilizing controllers. Recall that G 22 has a double coprime factorization in case there exist stable transfer matrices D, N , X, Y , D, N , X and Y so that the determinants of D, D, X and X are all nonzero (in A) and According to Corollary 3.4 it suffices to focus on describing the stabilizing controllers of G 22 . Note that K stabilizes G 22 means that is stable, or, by Theorem 3.2, that K is given by (3.5) or (3.7) for some Q ∈ K nU ×nY so that I G22 Q [ G22 I ] is stable. In case G 22 has a double coprime factorization Quadrat shows in [120, Proposition 4] that the equivalence of (ii) and (iii) in Lemma 3.3 has the following refinement. We provide a proof for completeness. Lemma 3.6. Suppose that G 22 has a double coprime factorization (3.9). Let Q ∈ K nU ×nY . Then Proof. Let Q = DΛD for some Λ ∈ A nU ×nY . Then Then with X, Y , X and Y the transfer matrices from the coprime factorization (3.9) we have Thus Λ is stable. In particular, Proof. Note that if K is a stabilizing controller for G 22 , then, in particular, is stable. The above identity makes sense, irrespectively of K being a stabilizing controller, as long as the left hand side is invertible. Let X, Y , X and Y be the transfer matrices from the double coprime factorization. are stable, it follows that the right-hand side of (3.10) is stable as well. We conclude that Now let K 0 be any stabilizing controller for G 22 . It follows from the first part of the proof that K = Y X −1 = X −1 Y is stabilizing for G 22 . Define V and U by (3.1) and V and U by (3.3). Then, using Theorem 3.2 and Lemma 3.6, there exists a Λ ∈ A nU ×nY so that where Q = DΛD. We compute that and Then certainly det X 0 = 0 and det X 0 = 0, and we have Since any stabilizing controller for G is also a stabilizing controller for G 22 , the following corollary is immediate. is a stabilizable and that G 22 admits a double coprime factorization. Then any stabilizing controller K of G admits a double coprime factorization. Lemma 3.9. Assume that G is stabilizable and that G 22 admits a double coprime factorization. Then there exists a double coprime factorization (3.9) for G 22 so that DG 21 and G 12 D are stable. Proof. Let K be a stabilizing controller for G. Then K is also a stabilizing controller for G 22 . Thus, according to Lemma 3.7, there exists a double coprime factorization (3.9) for G 22 = I. In particular, D Y = Y D and N X = XN . Moreover, from the computations (3.11) and (3.12) we see that Inserting these identities into the formula for Θ(G, K), and using that K stabilizes G, we find that In particular G 12 D X G 12 D Y is stable, and thus We now present an alternative proof of Corollary 3.4 for the case that G 22 admits a double coprime factorization. Proof. It was already noted that in case K stabilizes G, then K also stabilizes G 22 . Now assume that K stabilizes G 22 . Let Q ∈ K nU ×nY so that K is given by (3.5). It suffices to show that Q ∈ Ω, with Ω defined by (3.6). 
Since G is stabilizable, it follows from Lemma 3.9 that there exists a double coprime factorization (3.9) of G 22 so that DG 21 and G 12 D are stable. According to Lemma 3.6, Q = DΛD for some Λ ∈ A nU ×nY . It follows that  Combining the results from the Lemmas 3.6, 3.7 and 3.10 with Theorem 3.2 and the computations (3.13) and (3.14) from the proof of Lemma 3.7 we obtain the Youla-Kučera parametrization of all stabilizing controllers. Theorem 3.11. Assume that G ∈ K (nZ +nY )×(nW +nU ) is stabilizable and that G 22 admits a double coprime factorization (3.9). Then the set of all stabilizing controllers is given by where Λ is a free stable parameter from A nU ×nY such that det(X + N Λ) = 0 or equivalently det( X + ΛN ) = 0. 3.3. The standard H ∞ -problem reduced to model matching. We now consider the H ∞ -problem for a plant G = G11 G12 G21 G22 : W ⊕ U → Z ⊕ Y, i.e., we seek a controller K : Y → U so that not only Θ(G, K) in (2.2) is stable, but also Assume that the plant G is stabilizable, and that K * : Y → U stabilizes G. Define U * , V * , U * and V * as in Theorem 3.2. We then know that all stabilizing controllers of G are given by where Q ∈ K nU ×nY is any element of Ω in (3.6). We can then express the transfer matrices U and V in (3.1) in terms of Q as follows: Similar computations provide the formulas U = U * + QG 22 and V = V * + Q for the transfer matrices U and V in (3.3). Now recall that Θ(G, K) can be expressed in terms of U and V as in (3.2). It then follows that left upper block in Θ(G, K) is equal to The fact that K * stabilizes G implies that G 11 := G 11 + G 12 V * G 21 is stable, and thus G 12 QG 21 is stable as well. We are now close to a reformulation of the H ∞problem as a model matching problem. However, to really formulate it as a model matching problem, we need to apply the change of design parameter Q → Λ defined in Lemma 3.3, or Lemma 3.6 in case G 22 admits a double coprime factorization. The next two results extend the idea of Theorem 2.2 to this more general setting. with Q = LΛL, where L and L are defined by (3.8), so that det(U * + G 22 Q) = 0, or equivalently det( U * + QG 22 ) = 0, and Λ ∈ A (nU +nY )×(nU +nY ) is any solution to the model matching problem for G 11 , G 12 and G 21 defined by i.e., so that Proof. The statement essentially follows from Theorem 3.5 and the computation (3.15) except that we need to verify that the functions G 11 , G 12 and G 21 satisfy the conditions to be data for a model matching problem, that is, they should be stable. It was already observed that G 11 is stable. The fact that G 12 and G 21 are stable was shown in the proof of Lemma 3.3. We have a similar result in case G 22 admits a double coprime factorization. Theorem 3.13. Assume that G ∈ K (nZ +nY )×(nW +nU ) is stabilizable and that G 22 admits a double coprime factorization (3.9). Let K ∈ K nY ×nU . Then K is a solution to the standard H ∞ problem for G if and only if where Λ ∈ A nU ×nY so that det(X + N Λ) = 0, or equivalently det( X + ΛN ) = 0, is any solution to the model matching problem for G 11 , G 12 and G 21 defined by i.e., so that Proof. The same arguments apply as in the proof of Theorem 3.12, except that in this case Lemma 3.9 should be used to show that G 12 and G 21 are stable. 3.4. Notes. The development in Section 3.1 on the parametrization of stabilizing controllers without recourse to a double coprime factorization of G 22 is based on the exposition of Quadrat [120]. 
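It may help to record what this parametrization amounts to in the simplest classical instance, namely A = RH ∞ with all signals scalar. The sign conventions used below (for the Bezout identity and for the parametrization formula) are one common choice and need not coincide with those of the display (3.9) above; the example is only an illustration. Take G 22 (z) = 1/(1 − 2z), which is unstable because of its pole at z = 1/2. The stable functions N = Ñ = 1, D = D̃ = 1 − 2z, X = X̃ = 1, Y = Ỹ = −2z (with D, D̃, X, X̃ nonzero) satisfy the Bezout identity XD − Y N = (1 − 2z) + 2z = 1, so they form a double coprime factorization of G 22 = N D −1 = D̃ −1 Ñ. The parametrization K = (Y + DΛ)(X + N Λ) −1 = (−2z + (1 − 2z)Λ)/(1 + Λ), with Λ stable and 1 + Λ not identically zero, then produces stabilizing controllers: a direct computation gives (1 − G 22 K) −1 = (1 − 2z)(1 + Λ), while the remaining closed-loop transfer functions (1 − G 22 K) −1 G 22 = 1 + Λ and K(1 − G 22 K) −1 = (−2z + (1 − 2z)Λ)(1 − 2z) are likewise stable for every stable Λ. For Λ = 0 this is simply the controller K = −2z.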
It was already observed by Zames-Francis [140] that Q = K(I − G 22 K) −1 can be used as a free stable design parameter in case G 22 is itself already stable; in case G 22 is not stable, Q is subject to some additional interpolation conditions. The results of [120] is an adaptation of this observation to the general ring-theoretic setup. The more theoretical papers [118,122] give module-theoretic interpretations for the structure associated with internal stabilizability. In particular, it comes out that every matrix transfer function G 22 with entries in K has a double-coprime factorization if and only if A is a Bezout domain, i.e., every finitely generated ideal in A is principal; this recovers a result already appearing in the book of Vidyasagar [136]. A new result which came out of this module-theoretic interpretation was that internal stabilizability of a plant G 22 is equivalent to the existence of a double-coprime factorization for G 22 exactly when the ring A is projective-free, i.e., every submodule of a finitely generated free module over A must itself be free. This gives an explanation for the earlier result of Smith [130] that this phenomenon holds for the case where A is equal H ∞ over the unit disk or right-half plane. Earlier less complete results concerning parametrization of the set of stabilizing controllers without the assumption of a coprime factorization were obtained by Mori [102] and Sule [132]. Mori [103] also showed that the internal-stabilization problem can be reduced to model matching form for the general case where the plant has the full 2 × 2-block structure G = G11 G12 G21 G22 . Lemma 3.10 for the classical case is Theorem 2 on page 35 in [64]. The proof there relies in a careful analysis of signal-flow diagrams; we believe that our proof is more direct. Feedback control for linear time-invariant multidimensional systems 4.1. Multivariable frequency-domain formulation. The most obvious multivariable analogue of the classical single-variable setting considered in the book of Francis [64] is as follows. We take the underlying field to be the complex numbers C; in the engineering applications, one usually requires that the underlying field be the reals R, but this can often be incorporated at the end by using the characterization of real rational functions as being those complex rational functions which are invariant under the conjugation operator s(z) → s(z). We let be the unit polydisk in the d-dimensional complex space C d and we take our ring A of stable plants to be the ring C(z) s of all rational functions s(z) = p(z) q(z) in d variables (thus, p and q are polynomials in the d variables z 1 , . . . , z d where we set z = (z 1 , . . . , z d )) such that s(z) is bounded on the polydisk D d . The ring C[z] of polynomials in d variables is a unique factorization domain so we may assume that p and q have no common factor (i.e., that p and q are relatively coprime) in the fractional representation s = p q for any element of C(z 1 , . . . , z d ). Unlike in the single-variable case, for the case d > 1 it can happen that p and q have common zeros in C d even when they are coprime in C[z] (see [138] for an early analysis of the resulting distinct notions of coprimeness). 
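Two elementary two-variable examples may help to fix ideas before the more detailed discussion that follows. First, the polynomials p(z) = z 1 and q(z) = z 2 are coprime in C[z 1 , z 2 ], yet they share the common zero (0, 0) in the bidisk; for coprime polynomials in one variable this cannot happen. Second, a quotient of coprime polynomials can remain bounded on the polydisk even though the denominator vanishes at a boundary point: for s(z 1 , z 2 ) = (z 1 − z 2 ) 2 /(2 − z 1 − z 2 ) the denominator vanishes at (1, 1) ∈ ∂D 2 , but the elementary estimate |1 − z| 2 ≤ 2(1 − Re z) for |z| ≤ 1 gives |z 1 − z 2 | 2 ≤ 4 |2 − z 1 − z 2 | on the closed bidisk, so s is bounded there. (These particular examples are chosen only for illustration.)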
It turns out that for d ≥ 3, the ring C(z) s is difficult to work with since the denominator q for a stable ring element depends in a tricky way on the numerator p: if s ∈ C(z) s has coprime fractional representation s = p q , while it is the case that necessarily q has no zeros in the open polydisk D d , it can happen that the zero variety of q touches the boundary ∂D d as long as the zero variety of p also touches the same points on the boundary in such a way that the quotient s = p q remains bounded on D d . Note that at such a boundary point ζ, the quotient s = p/q has no well-defined value. In the engineering literature (see e.g. [45,131,84]), such a point is known as a nonessential singularity of the second kind. To avoid this difficulty, Lin [92,93] introduced the ring C(z) ss of structured stable rational functions, i.e., rational functions s ∈ C(z) so that the denominator q in any coprime fractional representation s = p q for s has no zeros in the closed polydisk D d . According to the result of Kharitonov-Torres-Muñoz [84], whenever s = p q ∈ C(z) s is stable in the first (non-structured) sense, an arbitrarily small perturbation of the coefficients of q may lead to the perturbed q having zeros in the open polydisk D d resulting in the perturbed version s = p q of s being unstable; this phenomenon does not occur for s ∈ C(z) ss , and thus structured stability can be viewed just as a robust version of stability (in the unstructured sense). Hence one can argue that structured stability is the more desirable property from an engineering perspective. In the application to delay systems using the systems-over-rings approach [46,85,83], on the other hand, it is the collection C(z) ss of structurally stable rational functions which comes up in the first place. As the ring A = C(z) ss is a commutative integral domain, we can apply the results of Section 3 to this particular setting. It was proved in connection with work on systems-over-rings rather than multidimensional systems (see [46,83]) that the ring C(z) ss is projective-free. As pointed out in the notes of Section 3 above, it follows that stabilizability of G 22 is equivalent to the existence of a double coprime factorization for the plant G 22 (see [119]), thereby settling a conjecture of Lin [92,93,94]. We summarize these results as follows. Theorem 4.1. Suppose that we are given a system G = G11 G12 G21 G22 over the quotient field Q(C(z) ss ) of the ring C(z) ss of structurally stable rational functions in d variables. If there exists a controller K = Y X −1 = X −1 Y which internally stabilizes G, then G 22 has a double coprime factorization and all internally stabilizing controllers K for G are given by the Youla-Kučera parametrization. Following Subsection 3.3, the Youla-Kučera parametrization can then be used to rewrite the H ∞ -problem in the form of a model-matching problem: Given T 1 , T 2 , T 3 equal to matrices over C(z) ss of respective sizes n Z × n W , n Z × n U and n Y × n W , find a matrix Λ over C(z) ss of size n U × n Y so that the affine expression S = T 1 + T 2 ΛT 3 (4.1) has supremum norm at most 1 over the polydisk D d . For mathematical convenience we shall now widen the class of admissible solutions and allow Λ 1 , . . . , Λ J to be in the algebra H ∞ (D d ) of bounded analytic functions on the polydisk. Just as in the classical one-variable case, it is possible to give the model-matching form (4.1) an interpolation interpretation, at least for special cases (see [73,74,32]). One such case is where n W = n Z = n Y = 1 while n U = J. Then T 1 and T 3 are scalar while T 2 = [ T2,1 ··· T2,J ] is a row. Assume in addition that T 3 = 1.
Then the model-matching form (4.1) collapses to where Λ 1 , . . . Λ J are J free stable scalar functions. Under the assumption that the intersection of the zero varieties of T 2,1 , . . . , T 2,J within the closed polydisk D d consists of finitely many (say N ) points and if we let w 1 , . . . , w N be the values of T 1 at these points then it is not hard to see that a function S ∈ C(z) ss has the form (4.2) if and only if it satisfies the interpolation conditions In this case the model-matching problem thus becomes the following finite-point Nevanlinna-Pick interpolation problem over D d : find S ∈ C(z) ss subject to |S(z)| ≤ 1 for all z ∈ D d which satisfies the interpolation conditions (4.3). Then the dvariable H ∞ -Model-Matching problem becomes: find S ∈ S d so that S(z 1 ) = w 1 for i = 1, . . . , N . A second case (see [32]) where the polydisk Model-Matching Problem can be reduced to an interpolation problem is the case where T 2 and T 3 are square (so n Z = n U and n Y = n W ) with invertible values on the distinguished boundary of the polydisk; under these assumptions it is shown in [32] (see Theorem 3.5 there) how the model-matching problem is equivalent to a bitangential Nevanlinna-Pick interpolation problem along a subvariety, i.e., bitangential interpolation conditions are specified along all points of a codimension-1 subvariety of D d (namely, the union of the zero sets of det T 2 and det T 3 intersected with D d ). For d = 1, codimension-1 subvarieties are isolated points in the unit disk; thus the codimension-1 interpolation problem is a direct generalization of the bitangential Nevanlinna-Pick interpolation problem studied in [26,58,62]. However for the case where the number of variables d is at least 3, there is no theory with results parallel to those of the classical case. Nevertheless, if we change the problem somewhat there is a theory parallel to the classical case. To formulate this adjustment, we define the d-variable Schur-Agler class SA d to consist of those functions S analytic on the polydisk for which the operator S(X 1 , . . . , X d ) has norm at most 1 for any collection X 1 , . . . , X d of d commuting strict contraction operators on a separable Hilbert space K; here S(X 1 , . . . , X d ) can be defined via the formal power series for S: where we use the standard multivariable notation For the cases d = 1, 2, it turns out, as a consequence of the von Neumann inequality or the Sz.-Nagy dilation theorem for d = 1 and of the Andô dilation theorem [17] for d = 2 (see [109,34] for a full discussion), that the Schur-Agler class SA d and the Schur class S d coincide, while, due to an explicit example of Varopoulos, the There is a result due originally to Agler [1] and developed and refined in a number of directions since (see [3,35] and [4] for an overview) which parallels the one-variable case; for the case of a simple set of interpolation conditions For the case d = 1, the Pick matrix P = is the unique solution of this equation, and we recover the classical criterion P ≥ 0 for the existence of solutions to the Nevanlinna-Pick problem. 
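The classical single-variable criterion just recalled is easy to test numerically: one forms the Pick matrix with (i, j) entry (1 − w i w j )/(1 − z i z j ) (conjugates on the second index) and checks positive semidefiniteness. The data in the Python sketch below are arbitrary illustrative points.

    # Sketch: the classical one-variable Nevanlinna-Pick solvability test.
    # Interpolation data z_i -> w_i below are arbitrary illustrative points.
    import numpy as np

    z = np.array([0.0, 0.5, -0.3 + 0.2j])   # interpolation nodes in D
    w = np.array([0.1, 0.4, -0.2])          # target values in D

    P = (1.0 - np.outer(w, w.conj())) / (1.0 - np.outer(z, z.conj()))
    eigs = np.linalg.eigvalsh((P + P.conj().T) / 2.0)   # P is Hermitian
    print(eigs)
    print(np.all(eigs >= -1e-12))   # solvable iff the Pick matrix is PSD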
There is a later realization result of Agler [2] (see also [3,35]): a given holomorphic function S is in the Schur-Agler class SA d (L(U, Y)) if and only if S has a contractive Givone-Roesser realization: Direct application of the Agler result to the bitangential Nevanlinna-Pick interpolation problem along a subvariety, however, gives a solution criterion involving an infinite Linear Matrix Inequality (where the unknown matrices have infinitely many rows and columns indexed by the points of the interpolation-node subvariety)-see [32,Theorem 4.1]. Alternatively, one can use the polydisk Commutant Lifting Theorem from [31] to get a solution criterion involving a Linear Operator Inequality [32,Theorem 5.2]. Without further massaging, either approach is computationally unattractive; this is in contrast with the state-space approach discussed below. In that setting there exists computable sufficient conditions, in terms of a pair of LMIs and a coupling condition, that in general are only sufficient, unless one works with a more conservative notion of stability and performance. X W⊕U → X Z⊕Y and a partitioning X = X 1 ⊕ · · · ⊕ X d of the space X . Associate with such a quadruple {A, B, C, D} is a linear state-space system Σ of Givone-Roesser type (see [67]) that evolves over Z d + and is given by the system of equations Here e k stands for the k-th . . . . We call X the state-space and A the state operator. Moreover, the block operator matrix [ A B C D ] is referred to as the system matrix. Following [81], the Givone-Roesser system (4.4) is said to be asymptotically stable in case, for zero input u(n) = 0 for n ∈ Z d + and initial conditions with the property sup t∈Z d where n → ∞ is to be interpreted as min{n 1 , . . . n d } → ∞ when n = (n 1 , . . . , n d ) ∈ Z d + . With the Givone-Roesser system (4.4) we associate the transfer function G(z) given by We then say that {A, B, C, D} is a (state-space) realization for the function G, or if G is not specified, just refer to {A, B, C, D} as a realization. The realization {A, B, C, D}, or just the operator A, is said to be Hautus-stable in case the pencil Here we only consider the case that X is finite dimensional; then the entries of the transfer function G are in the quotient field Q(C(z) ss ) of C(z) ss and are analytic at 0, and it is straightforward to see that G is structurally stable in case G admits a Hautus-stable realization. For the case d = 2 it has been asserted in the literature [81,Theorem 4.8] that asymptotic stability and Hautus stability are equivalent; presumably this assertion continues to hold for general d ≥ 1 but we do not go into details here. Given a realization {A, B, C, D} where the decomposition X = X 1 ⊕ · · · ⊕ X d is understood, our main interest will be in Hautus-stability; hence we shall say simply that A is stable rather than Hautus-stable. As before we consider controllers K in Q(C(z) ss ) of size n Y × n U that we also assume to be given by a state-space realization with system matrix AK BK CK DK : XK Y → XK U , a decomposition of the state-space X K = X 1,K ⊕ · · · ⊕ X d,K and Z K (z) defined analogous to Z(z) but with respect to the decomposition of X K . We now further specify the matrices B, C and D from the realization {A, B, C, D} as compatible with the decompositions Z ⊕ Y and W ⊕ U. We can then form the closed loop system G cl = Σ(G, K) of the two transfer functions. 
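Before turning to the closed-loop connection, a small numerical illustration of these formulas may be helpful. The Python sketch below sets up a made-up two-variable Givone-Roesser realization with state decomposition X = X 1 ⊕ X 2 , evaluates G(z) = D + C(I − Z(z)A) −1 Z(z)B at a point of the bidisk, and notes that for this particular A one has ‖A‖ < 1, so that I − Z(z)A is invertible on the closed bidisk for the trivial reason that ‖Z(z)A‖ ≤ ‖A‖ < 1.

    # Sketch: a 2-D Givone-Roesser realization with X = X1 (+) X2.
    # All matrices are arbitrary illustrative data.
    import numpy as np

    n1, n2 = 2, 1
    A = np.array([[0.3, 0.1, 0.2],
                  [0.0, 0.4, 0.1],
                  [0.1, 0.0, 0.5]])
    B = np.array([[1.0], [0.0], [0.5]])
    C = np.array([[1.0, 1.0, 0.0]])
    D = np.array([[0.0]])

    def Z(z1, z2):
        return np.diag([z1] * n1 + [z2] * n2)

    def G(z1, z2):
        Zz = Z(z1, z2)
        return D + C @ np.linalg.solve(np.eye(n1 + n2) - Zz @ A, Zz @ B)

    print(np.linalg.norm(A, 2) < 1.0)   # ||A|| < 1: I - Z(z)A invertible on the closed bidisk
    print(G(0.3, -0.5j))                # value at a point of the bidisk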
The closed loop system G cl = Σ(G, K) corresponds to the feedback connection  This feedback loop is well-posed exactly when I −D 22 D K is invertible. Since, under the assumption of well posedness, one can always arrange via a change of variables that D 22 = 0 (cf., [78]), we shall assume that D 22 = 0 for the remainder of the paper. In that case well-posedness is automatic and the closed loop system G cl admits a state-space realization with system matrix and The state-space (internal) stabilizability problem then is: Given the realization {A, B, C, D} find a compatible controller K with realization {A K , B K , C K , D K } so that the closed-loop realization {A cl , B cl , C cl , D cl } is stable, i.e., so that I − Z cl (z)A cl is invertible on the closed polydisk D d . We also consider the strict statespace H ∞ -problem: Given the realization {A, B, C, D}, find a compatible controller K with realization {A K , B K , C K , D K } so that the closed loop realization {A cl , B cl , C cl , D cl } is stable and the closed-loop system G cl satisfies G cl (z) < 1 for all z ∈ D d . State-space stabilizability. In the fractional representation setting of Section 3 it took quite some effort to derive the result: "If G is stabilizable, then K stabilizes G if and only if K stabilizes G 22 " (see is a state-feedback operator F : X → U so that A + B 2 F is stable. Notice that both Hautus-detectability and operator-detectability for the pair (C, A) reduce to stability of A in case C = 0. A similar remark applies to stabilizability for an input pair (A, B). We will introduce yet another notion of detectability and stabilizability shortly, but in order to do this we need a stronger notion of stability. We first define D to be the set which is also equal to the commutant of {Z(z) : z ∈ Z d } in the C * -algebra of bounded operators on X . We then say that the realization {A, B, C, D}, or just A, is scaled stable in case there exists an invertible operator Q ∈ D so that Q −1 AQ < 1, or, equivalently, if there exists a positive definite operator X (notation X > 0) in D so that AXA * − X < 0. To see that the two definitions coincide, take either X = QQ * ∈ D, or, when starting with X > 0, factor X as X = QQ * for some Q ∈ D. It is not hard to see that scaled stability implies stability. Indeed, assume there exists an invertible Q ∈ D so that Q −1 AQ < 1. Then Z(z)Q −1 AQ = Q −1 Z(z)AQ is a strict contraction for each z ∈ D d , and thus is invertible on D d . But then I − Z(z)A is invertible on D d as well, and A is stable. The converse direction, even though asserted in [111,95], turns out not to be true in general, as shown in [16] via a concrete example. The output pair {C 2 , A} is then said to be scaled-detectable if there exists an output-injection operator L : Y → X so that A+ LC 2 is scaled stable, and the input pair {A, B 2 } is called scaled-stabilizable if there exists a state-feedback operator F : X → U so that A+B 2 F is scaled stable. While a classical result for the 1-D case states that operator, Hautus and scaled detectability, as well as operator, Hautus and scaled stabilizability, are equivalent, in the multidimensional setting considered here only one direction is clear. ( Proof. Since scaled stability is a stronger notion than stability, the first implications of both (1) and (2) are obvious. Suppose that L : Y → X is such that A + LC 2 is stable. Then is invertible for all z ∈ D d from which it follows that {C 2 , A} is Hautus-detectable. 
The last assertion concerning stabilizability follows in a similar way by making use of the identity The combination of operator-detectability together with operator-stabilizability is strong enough for stabilizability of the realization {A, B, C, D} and we have the following weak analogue of Theorem 2.4 (Theorem 4.3), where L : Y → X and F : X → U are any operators chosen such that A + LC 2 and A + B 2 F are stable. Proof. It is possible to motivate these formulas with some observability theory (see [57]) but, once one has the formulas, it is a simple direct check that they do the job. It is now a straightforward exercise to check that the closed-loop state operator A cl can be put in the triangular form A+LC2 0 −LC2 A+B2F via a sequence of block-row/block-column similarity transformations, from which we conclude that A cl is stable as required. Remark 4.4. A result for the systems-over-rings setting that is analogous to that of Theorem 4.3 is given in [85]. There the result is given in terms of a Hautus-type stabilizable/detectable condition; in the systems-over-rings setting, Hautus-detectability/stabilizability is equivalent to operator-detectability/stabilizability (see Theorem 3.2 in [83]) rather than merely sufficient as in the present setting. While there are no tractable necessary and sufficient conditions for solving the state-space stabilizability problem available, the situation turns out quite differently when working with the more conservative notion of scaled stability. The following is a more complete analogue of Theorem 2.4 combined with Theorem 2.3 (Theorem 4.5). Here B 2,⊥ is any injective operator with range equal to Ker B 2 . (c) There exists Y ∈ D satisfying the LMIs (4.14) (2) The following conditions concerning the output pair are equivalent: There exists X ∈ D satisfying the LMIs: where C 2,⊥ is any injective operator with range equal to Ker C 2 . (c) There exists X ∈ D satisfying the LMIs. One of the results we shall use in the proof of Theorem 4.5 is known as Finsler's lemma [61], which also plays a key role in [98,78]. This result can be interpreted as a refinement of the Douglas lemma [51] which is well known in the operator theory community. Finsler's lemma can be seen as a special case of another important result, which we shall refer to as Finsler's lemma II. This is one of the main underlying tools in the proof of the solution to the H ∞ -problem obtained in [66,18]. where R ⊥ and S ⊥ are injective operators with ranges equal to ker R and ker S, respectively. The proof of Finsler's Lemma II given in [66] uses only basic linear algebra and is based on a careful administration of the kernels and ranges from the various matrices. In particular, the matrices J in statement (i) can actually be constructed from the data. We show here how Finsler's lemma follows from the extended version. Proof of Lemma 4.6 using Lemma 4.7. Apply Lemma 4.7 with R = S. Then (ii) reduces to R * ⊥ HR ⊥ < 0, which is equivalent to the existence of a matrix J so that K = −(J * + J) satisfies R * KR > H. Since for such a matrix K we have K * = K, it follows that R * KR > H holds for K = µI as long as µI > K. With these results in hand we can prove Theorem 4.5. Proof of Theorem 4.5. We shall first prove that scaled stabilizability of {A, B, C, D} is equivalent to the existence of solutions X and Y in D for the LMIs (4.15) and (4.13). Note that A cl can be written in the following affine way: Now let X cl : X ⊕ X K → X ⊕ X K be an invertible matrix in D cl , where D cl stands for the commutant of {Z cl (z) : z ∈ Z d }.
Let X be the compression of X cl to X and Y the compression of X −1 cl to X . Then X, Y ∈ D. Assume that X cl > 0. Thus, in particular, X > 0 and Y > 0. Then Note that H, R and S are determined by the problem data, while J amounts to the system matrix of the controller to be designed. Then Thus, by Finsler's lemma II, the inequality (4.18) holds for some J = AK BK CK DK if and only if R * ⊥ HR ⊥ < 0 and S * ⊥ HS ⊥ < 0, where without loss of generality we can take with C 2,⊥ and B 2,⊥ as described in part (b) of statements 1 and 2. Writing out R * ⊥ HR ⊥ we find that R * ⊥ HR ⊥ < 0 if and only if  which, after taking a Schur complement, turns out to be equivalent to A similar computation shows that S * ⊥ HS ⊥ < 0 is equivalent to B 2,⊥ (AY A * − Y )B * 2,⊥ < 0. This proves the first part of our claim. For the converse direction assume we have X and Y in D satisfying (4.15)-(4.13). Most of the implications in the above argumentation go both ways, and it suffices to prove that there exists an operator X cl on X ⊕X K in D cl , with X K an arbitrary finite dimensional Hilbert space with some partitioning X K = X K,1 ⊕ · · · ⊕ X K,d , so that X cl > 0 and X and Y are the compressions to X of X cl and X −1 cl , respectively. Since (4.15)-(4.13) hold with X and Y replaced by ρX and ρY for any positive number ρ, we may assume without loss of generality that [ X I I Y ] > 0. The existence of the required matrix X cl can then be derived from Lemma 7.9 in [57] (with n K = n). To enforce the fact that X cl be in D cl we decompose X = diag(X 1 , . . . , X d ) and Y = diag(Y 1 , . . . , Y d ) as in (4.11) and complete X i and Y i to positive definite matrices so that [ Xi * * * ] −1 = [ Yi * * * ]. To complete the proof it remains to show the equivalences of parts (a), (b) and (c) in both statements 1 and 2. The equivalences of the parts (b) and (c) follows immediately from Finsler's lemma with R = B 2 (respectively, R = C * 2 ) and H = AY A * − Y (respectively, H = A * XA − X), again using that X in (4.15) can be replaced with µX (respectively, Y in (4.13) can be replaced with µY ) for any positive number µ. We next show that (a) is equivalent to (b) for statement 1; for statement 2 the result follows with similar arguments. Let F : X → U, and let X ∈ D be positive definite. Taking a Schur complement it follows that if and only if . The latter inequality is the same as −X −1 < 0 and thus vacuous. The first inequality, after writing out R * ⊥ HR ⊥ , turns out to be Thus, applying Finsler's lemma II with which, after another Schur complement, is equivalent to Since scaled stability implies stability, it is clear that finding operators F and L wit A + B 2 F and A + LC 2 scaled-stable implies that A + B 2 F and A + LC 2 are also stable. In particular, having such operators F and L we find the coprime factorization of G 22 via the functions in Theorem 4.3. While there are no known tractable necessary and sufficient conditions for operator-detectability/stabilizability, the LMI criteria in parts (iii) and (iv) of Theorem 4.5 for the scaled versions are considered computationally tractable. Moreover, an inspection of the last part of the proof shows how operators F and L so that A+B 2 F and A+LC 2 are scaled stable can be constructed from the solutions X and Y from the LMIs in (4.13)-(4.16): Assume we have X, Y ∈ D satisfying (4.13)-(4.16). 
Define H, R and S as in (4.21), and determine a J so that H + [ R * S * ] 0 J * J 0 [ R S ] < 0; this is possible as the proof of Finsler's lemma II is essentially constructive. Then take F = J. In a similar way one can construct L using the LMI solution Y . Stability versus scaled stability, µ versus µ̂. We observed above that the notion of scaled stability is stronger, and more conservative, than the more intuitive notions of stability in the Hautus or asymptotic sense. This remains true in a more general setting that has proved useful in the study of robust control [98,57,107] and that we will encounter later in the paper. Let A be a bounded linear operator on a Hilbert space X . Assume that in addition we are given a unital C * -algebra ∆ which is realized concretely as a subalgebra of L(X ), the space of bounded linear operators on X . The complex structured singular value µ ∆ (A) of A (with respect to the structure ∆) is defined as µ ∆ (A) = (inf{σ(∆ 0 ) : ∆ 0 ∈ ∆ with I − ∆ 0 A not invertible}) −1 , where we set µ ∆ (A) = 0 if I − ∆ 0 A is invertible for every ∆ 0 ∈ ∆. Here σ(M ) stands for the largest singular value of the operator M . Note that this contains two standard measures for A: the operator norm ‖A‖ if we take ∆ = L(X ), and ρ(A), the spectral radius of A, if we take ∆ = {λI X : λ ∈ C}; it is not hard to see that for any unital C * -algebra ∆ we have ρ(A) ≤ µ ∆ (A) ≤ ‖A‖. See [107] for a tutorial introduction on the complex structured singular value and [60] for the generalization to algebras of operators on infinite dimensional spaces. The C * -algebra that comes up in the context of stability for the N -D systems studied in this section is ∆ = {Z(z) : z ∈ C d }. Indeed, note that for this choice of ∆ we have that A is stable if and only if µ ∆ (A) < 1. In order to introduce the more conservative measure for A in this context, we write D ∆ for the commutant of the C * -algebra ∆ in L(X ). We then define µ̂ ∆ (A) = inf{γ : ‖Q −1 AQ‖ < γ for some invertible Q ∈ D ∆ } = inf{γ : AXA * − γ 2 X < 0 for some X ∈ D ∆ , X > 0}. (4.23) The equivalence of the two definitions again goes through the relation between X and Q via X = QQ * . It is immediate that with ∆ = {Z(z) : z ∈ C d } we find D ∆ = D as in (4.11), and that A is scaled stable if and only if µ̂ ∆ (A) < 1. The state-space H ∞ -problem. The problems of finding tractable necessary and sufficient conditions for the strict state-space H ∞ -problem are similar to those for the state-space stabilizability problem. Here one also typically resorts to a more conservative 'scaled' version of the problem. We say that the realization {A, B, C, D} with decomposition (4.8) has scaled performance whenever there exists an invertible Q ∈ D so that (4.24) holds, or, equivalently, if there exists an X > 0 in D so that (4.25) holds. The equivalence of the two definitions goes, as for the scaled stability case, through the relation X = QQ * . Looking at the left upper entry in (4.25) it follows that scaled performance of {A, B, C, D} implies scaled stability. Moreover, if (4.24) holds for Q ∈ D, then it is not hard to see that the transfer function G(z) in (4.5) is also given by a realization whose system matrix is equal to a strict contraction. It then follows from a standard fact on feedback connections (see e.g. Corollary 1.3 page 434 of [62] for a very general formulation) that ‖G(z)‖ < 1 for z ∈ D d , i.e., G has strict performance. The scaled H ∞ -problem is then to find a controller K with realization {A K , B K , C K , D K } so that the closed loop system {A cl , B cl , C cl , D cl } has scaled performance. The above analysis shows that solving the scaled H ∞ -problem implies solving the state-space H ∞ -problem.
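As a small numerical aside on these two measures, take d = 2 with one-dimensional blocks, so that ∆ = {diag(z 1 , z 2 ) : z ∈ C 2 } and D ∆ consists of the invertible diagonal scalings (positive diagonal entries suffice). The Python sketch below, with a made-up matrix A, approximates µ̂ ∆ (A) by a crude one-parameter search over Q = diag(d, 1) (a common scalar factor cancels in Q −1 AQ) and compares the result with the spectral radius and the operator norm.

    # Sketch: scaled upper bound for the structure Delta = {diag(z1, z2)} on C^2.
    # The matrix A is arbitrary illustrative data.
    import numpy as np

    A = np.array([[0.5, 2.0],
                  [0.05, 0.4]])

    rho = max(abs(np.linalg.eigvals(A)))      # spectral radius
    opnorm = np.linalg.norm(A, 2)             # operator norm

    ds = np.logspace(-3, 3, 2001)             # search over Q = diag(d, 1)
    mu_hat = min(np.linalg.norm(np.diag([1.0 / d, 1.0]) @ A @ np.diag([d, 1.0]), 2)
                 for d in ds)

    print(rho, mu_hat, opnorm)   # rho <= mu_hat <= ||A||; here mu_hat < 1 < ||A||

For this particular A one finds µ̂ ∆ (A) < 1 < ‖A‖, so A is scaled stable even though its operator norm exceeds 1.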
The converse is again not true in general. Further elaboration of the same techniques as used in the proof of Theorem 4.5 yields the following result for the scaled H ∞ -problem; see [18,66]. For the connections between the Theorems 4.8 and 4.5, in the more general setting of LFT models with structured uncertainty, we refer to [25]. Note that the result collapses to Theorem 2.5 given in the Introduction when we specialize to the single-variable case d = 1. (4.27) and the coupling condition X I I Y ≥ 0. Note that Theorem 4.8 does not require that the problem be first brought into model-matching form; thus this solution bypasses the Nevanlinna-Pick-interpolation interpretation of the H ∞ -problem. 4.3. Equivalence of frequency-domain and state-space formulations. In this subsection we suppose that we are given a transfer matrix G of size (n Z + n Y ) × (n W + n U ) with coefficients in Q(C(z) ss ) as in Section 4.1 with a given state-space realization as in Subsection 4.2: where Z(z) is as in (4.6). We again consider the problem of finding stabilizing controllers K, also equipped with a state-space realization in either the state-space stability or in the frequency-domain stability sense. A natural question is whether the frequency-domain H ∞ -problem with formulation in state-space coordinates is the same as the state-space H ∞ -problem formulated in Section 4.2. For simplicity in the computations to follow, we shall always assume that the plant G has been normalized so that D 22 = 0. In one direction the result is clear. Suppose that K(z) = D K + C K (I − Z(z)A K ) −1 Z(z)B K is a stabilizing controller for G(z) in the state-space sense. It follows that the closed-loop state matrix is stable, i.e., I − Z cl (z)A cl is invertible for all z in the closed polydisk D d , with has realization As the resolvent expression (I − Z cl (z)A cl ) −1 has no singularities in the closed polydisk D d , it is clear that W (z) has matrix entries in C(z) ss , and it follows that K stabilizes G 22 in the frequency-domain sense. Under the assumption that G is internally stabilizable (frequency-domain sense), it follows from Corollary 3.4 that K also stabilizes G (frequency-domain sense). We show that the converse direction holds under an additional assumption. The early paper [88] We then have the following partial converse of the observation made above that state-space internal stabilization implies frequency-domain internal stabilization; this is an N -D version of Theorem 2.6 in the Introduction. Remark 4.10. As it is not clear that a given realization can be reduced to a modally observable and modally controllable realization for a given transfer function, it is equally not clear whether a given transfer function has a modally detectable and modally stabilizable realization. However, in the case that d = 1, such realizations always exists and Theorem 4.9 recovers the standard 1-D result (Theorem 2.6 in the Introduction). The proof of Theorem 4.9 will make frequent use of the following basic result from the theory of holomorphic functions in several complex variables. For the proof we refer to [128,Theorem 4 page 176]; note that if the number of variables d is 1, then the only analytic set of codimension at least 2 is the empty set and the theorem is vacuous; the theorem has content only when the number of variables is at least 2. We shall also need some preliminary lemmas. Proof. 
4.3. Equivalence of frequency-domain and state-space formulations. In this subsection we suppose that we are given a transfer matrix $G$ of size $(n_{\mathcal Z} + n_{\mathcal Y}) \times (n_{\mathcal W} + n_{\mathcal U})$ with coefficients in $\mathcal Q(C(z)_{ss})$ as in Section 4.1 with a given state-space realization as in Subsection 4.2:
$$G(z) = \begin{bmatrix} G_{11}(z) & G_{12}(z) \\ G_{21}(z) & G_{22}(z) \end{bmatrix} = D + C\big(I - Z(z)A\big)^{-1}Z(z)B,$$
where $Z(z)$ is as in (4.6). We again consider the problem of finding stabilizing controllers $K$, also equipped with a state-space realization, in either the state-space stability or the frequency-domain stability sense. A natural question is whether the frequency-domain $H^\infty$-problem with formulation in state-space coordinates is the same as the state-space $H^\infty$-problem formulated in Section 4.2. For simplicity in the computations to follow, we shall always assume that the plant $G$ has been normalized so that $D_{22} = 0$.

In one direction the result is clear. Suppose that $K(z) = D_K + C_K(I - Z_K(z)A_K)^{-1}Z_K(z)B_K$ is a stabilizing controller for $G(z)$ in the state-space sense. It follows that the closed-loop state matrix $A_{cl}$ is stable, i.e., $I - Z_{cl}(z)A_{cl}$ is invertible for all $z$ in the closed polydisk $\overline{\mathbb D}^d$, and the closed-loop transfer matrix $W(z)$ has a realization with state matrix $A_{cl}$. As the resolvent expression $(I - Z_{cl}(z)A_{cl})^{-1}$ has no singularities in the closed polydisk $\overline{\mathbb D}^d$, it is clear that $W(z)$ has matrix entries in $C(z)_{ss}$, and it follows that $K$ stabilizes $G_{22}$ in the frequency-domain sense. Under the assumption that $G$ is internally stabilizable (frequency-domain sense), it follows from Corollary 3.4 that $K$ also stabilizes $G$ (frequency-domain sense).

We show that the converse direction holds under an additional assumption, formulated in terms of realizations which are modally detectable and modally stabilizable (Hautus-type maximal-rank conditions on the pencils $\begin{bmatrix} I - Z(z)A \\ C\end{bmatrix}$ and $\begin{bmatrix} I - Z(z)A & Z(z)B\end{bmatrix}$; notions of this kind go back to the early paper [88]). We then have the following partial converse of the observation made above that state-space internal stabilization implies frequency-domain internal stabilization; this is an $N$-D version of Theorem 2.6 in the Introduction.

Theorem 4.9. Suppose that the realizations $\{A, B, C, D\}$ of $G$ and $\{A_K, B_K, C_K, D_K\}$ of $K$ are both modally detectable and modally stabilizable. If $K$ stabilizes $G$ in the frequency-domain sense, then $K$ stabilizes $G$ in the state-space sense.

Remark 4.10. As it is not clear that a given realization can be reduced to a modally observable and modally controllable realization for a given transfer function, it is equally not clear whether a given transfer function has a modally detectable and modally stabilizable realization. However, in the case that $d = 1$, such realizations always exist and Theorem 4.9 recovers the standard 1-D result (Theorem 2.6 in the Introduction).

The proof of Theorem 4.9 will make frequent use of the following basic result from the theory of holomorphic functions in several complex variables: a function holomorphic off an analytic set of codimension at least 2 extends holomorphically across that set. For the proof we refer to [128, Theorem 4 page 176]; note that if the number of variables $d$ is 1, then the only analytic set of codimension at least 2 is the empty set and the theorem is vacuous; the theorem has content only when the number of variables is at least 2. We shall also need some preliminary lemmas.

Lemma 4.12. For operators $L \colon \mathcal Y \to \mathcal X$ and $F \colon \mathcal X \to \mathcal U$ of appropriate sizes: $\{C, A\}$ is modally detectable if and only if $\{C, A + LC\}$ is modally detectable, and $\{A, B\}$ is modally stabilizable if and only if $\{A + BF, B\}$ is modally stabilizable.

Proof. To prove the first statement, note the identity
$$\begin{bmatrix} I & -Z(z)L \\ 0 & I \end{bmatrix}\begin{bmatrix} I - Z(z)A \\ C \end{bmatrix} = \begin{bmatrix} I - Z(z)(A + LC) \\ C \end{bmatrix}.$$
Since the factor $\begin{bmatrix} I & -Z(z)L \\ 0 & I\end{bmatrix}$ is invertible for all $z$, we conclude that, for each $z \in \mathbb C^d$, $\begin{bmatrix} I - Z(z)A \\ C\end{bmatrix}$ has maximal rank exactly when $\begin{bmatrix} I - Z(z)(A+LC) \\ C\end{bmatrix}$ has maximal rank, and hence, in particular, modal detectability for $\{C, A\}$ holds exactly when modal detectability for $\{C, A + LC\}$ holds. The second statement follows in a similar way from the identity
$$\begin{bmatrix} I - Z(z)A & Z(z)B \end{bmatrix}\begin{bmatrix} I & 0 \\ -F & I \end{bmatrix} = \begin{bmatrix} I - Z(z)(A + BF) & Z(z)B \end{bmatrix}.$$

Lemma 4.13. Suppose that the function $W(z)$ is stable (i.e., all matrix entries of $W$ are in $C(z)_{ss}$) and suppose that
$$W(z) = D + C\big(I - Z(z)A\big)^{-1}Z(z)B$$
is a realization for $W$ which is both modally detectable and modally stabilizable. Then the matrix $A$ is stable, i.e., $(I - Z(z)A)^{-1}$ exists for all $z$ in the closed polydisk $\overline{\mathbb D}^d$.

Proof. As $W$ is stable and $Z(z)B$ is trivially stable, one deduces, using the modal detectability and modal stabilizability assumptions together with the removable-singularity result quoted above, that the resolvent $(I - Z(z)A)^{-1}$ has no singularities in the closed polydisk; the argument parallels the one written out in detail for the noncommutative analogue, Theorem 6.9, below.

We are now ready for the proof of Theorem 4.9.

Proof of Theorem 4.9. By Lemma 4.12 we see that modal detectability of the output pair $\left\{\begin{bmatrix} D_KC_2 & C_K \\ C_2 & 0\end{bmatrix},\, A_{cl}\right\}$ follows from the modal detectability of $\{C_2, A\}$ and $\{C_K, A_K\}$. The modal stabilizability of the input pair $\left\{A_{cl},\, \begin{bmatrix} B_2 & B_2D_K \\ 0 & B_K\end{bmatrix}\right\}$ follows in a similar way by making use of the analogous identities and noting that $\begin{bmatrix} D_K & I \\ I & 0\end{bmatrix}$ is invertible. An application of Lemma 4.13 to the (stable) closed-loop transfer matrix then shows that $A_{cl}$ is stable, as required.

In both the frequency-domain setting of Section 4.1 and the state-space setting of Section 4.2, the true $H^\infty$-problem is intractable and we resorted to some compromise: the Schur-Agler-class reformulation in Section 4.1 and the scaled-$H^\infty$-problem reformulation in Section 4.2. We would now like to compare these compromises for the setting where they both apply, namely, where we are given both the transfer function $G$ and the state-space representation $\{A, B, C, D\}$ for the plant.

Theorem 4.14. Suppose that the controller $K$ solves the scaled $H^\infty$-problem for the realization $\{A, B, C, D\}$ of $G$. Then the closed-loop transfer matrix $W$ solves the frequency-domain Model-Matching problem in the Schur-Agler sense: $W = G_{11} + G_{12}\Lambda G_{21}$ with $\Lambda$ stable and $W$ in the strict Schur-Agler class.

Proof. Simply note that, under the assumptions of the theorem, $W(z)$ has a realization $W = D_{cl} + C_{cl}(I - Z_{cl}(z)A_{cl})^{-1}Z_{cl}(z)B_{cl}$ for which there is a state-space change of coordinates $Q \in \mathcal D$ transforming the realization to a contraction:
$$\begin{bmatrix} A' & B' \\ C' & D_{cl} \end{bmatrix} = \begin{bmatrix} Q^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} A_{cl} & B_{cl} \\ C_{cl} & D_{cl} \end{bmatrix}\begin{bmatrix} Q & 0 \\ 0 & I \end{bmatrix}, \qquad \left\|\begin{bmatrix} A' & B' \\ C' & D_{cl} \end{bmatrix}\right\| < 1.$$
Thus we also have $W(z) = D_{cl} + C'(I - Z_{cl}(z)A')^{-1}Z_{cl}(z)B'$, from which it follows that $W$ is in the strict Schur-Agler class, i.e., $\|W(X)\| < 1$ for any $d$-tuple $X = (X_1, \dots, X_d)$ of contraction operators $X_j$ on a separable Hilbert space $\mathcal X$. By construction $W$ necessarily has the model matching form $W = G_{11} + G_{12}\Lambda G_{21}$ with $\Lambda$ stable.

Remark 4.15. In general a Schur-Agler function $S(z)$ can be realized with a colligation matrix $\begin{bmatrix} A & B \\ C & D\end{bmatrix}$ which is not of the form
$$\begin{bmatrix} Q^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix}\begin{bmatrix} Q & 0 \\ 0 & I \end{bmatrix} \tag{4.36}$$
with $\begin{bmatrix} A' & B' \\ C' & D'\end{bmatrix}$ equal to a strict contraction and $Q \in \mathcal D$ invertible. As an example, let $A$ be a block $2 \times 2$ matrix (with $Z(z)$ and $\mathcal D$ compatible with the block decomposition of $A$) having the property that $I - Z(z)A$ is invertible for all $z \in \overline{\mathbb D}^2$, but such that there is no $Q \in \mathcal D$ with $\|Q^{-1}AQ\| < 1$. Then for $\gamma > 0$ sufficiently small the function $S(z) = \gamma(I - Z(z)A)^{-1}$ satisfies $\|S(z)\| \le \rho < 1$ for some $0 < \rho < 1$ and all $z \in \overline{\mathbb D}^2$. Hence $S$ is a strict Schur-class function. As mentioned in Section 4.1, a consequence of the Andô dilation theorem [17] is that the Schur class and the Schur-Agler class coincide for $d = 2$; it is not hard to see that this equality carries over to the strict versions, and hence $S$ is in the strict Schur-Agler class. As a consequence of the strict Bounded-Real-Lemma in [29], $S$ admits a strictly contractive state-space realization $\begin{bmatrix} A' & B' \\ C' & D'\end{bmatrix}$. However, the realization
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} A & A \\ \gamma I & \gamma I \end{bmatrix}$$
of $S$, obtained from the identity $(I - Z(z)A)^{-1} = I + (I - Z(z)A)^{-1}Z(z)A$, cannot be of the form (4.36), since that would imply the existence of an invertible $Q \in \mathcal D$ so that $Q^{-1}AQ = A'$ is a strict contraction.
Remark 4.16. Let us assume that the $G(z)$ in Theorem 4.14 is such that $G_{12}$ and $G_{21}$ are square and invertible on the distinguished boundary $\mathbb T^d$ of the polydisk $\mathbb D^d$, so that the Model-Matching problem can be converted to a polydisk bitangential Nevanlinna-Pick interpolation problem along a subvariety as in [32]. As we have seen, the solution criterion using the Agler interpolation theorem of [1,35] then involves an LOI (Linear Operator Inequality, or infinite LMI). On the other hand, if we assume that we are given a stable state-space realization $\{A, B, C, D\}$ for $G(z)$, we may instead solve the scaled $H^\infty$-problem associated with this realization data-set. The associated solution criterion in Theorem 4.8 remarkably involves only finite LMIs. A disadvantage of this state-space approach, however, is that in principle one would have to sweep all possible (similarity equivalence classes of) realizations of $G(z)$; while each non-equivalent realization gives a distinct scaled $H^\infty$-problem, the associated frequency-domain Model-Matching/bitangential variety-interpolation problem remains the same.

Notes. In [92] Lin conjectured the result stated in Theorem 4.1 that $G_{22}$-stabilizability is equivalent to the existence of a stable coprime factorization for $G_{22}$. This conjecture was settled by Quadrat (see [122,117,120]), who obtained the equivalence of this property with projective-freeness of the underlying ring and noticed the applicability of the results from [46,83] concerning the projective-freeness of $C(z)_{ss}$. For the general theory of the $N$-D systems, in particular for $N = 2$, considered in Subsection 4.2 we refer to [81,55]. The sufficiency of scaled stability for asymptotic/Hautus-stability goes back to [59]. Theorem 4.5 was proved in [98] for the more general LFT models in the context of robust control with structured uncertainty. The proof given here is based on the extended Finsler's lemma (Lemma 4.7), and basically follows the proof from [66] for the solution to the scaled $H^\infty$-problem (Theorem 4.8). As pointed out in [66], one of the advantages of the LMI-approach to the state-space $H^\infty$-problem, even in the classical setting, is that it allows one to seek controllers that solve the scaled $H^\infty$-problem with a given maximal order. Indeed, it is shown in [66,18] (see also [57]) that certain additional rank constraints on the solutions $X$ and $Y$ of the LMIs (4.26) and (4.27) enforce the existence of a solution with a prescribed maximal order. However, these additional constraints destroy the convexity of the solution criteria, and are therefore usually not considered a desirable addition.

An important point in the application of Finsler's lemma in the derivation of the LMI solution criteria in Theorems 4.5 and 4.8 is that the closed-loop system matrix $A_{cl}$ in (4.31) has an affine expression in terms of the unknown design parameters $\{A_K, B_K, C_K, D_K\}$. This is the key point where the assumption $D_{22} = 0$ is used. A parallel simplification occurs in the frequency-domain setting where the assumption $G_{22} = 0$ leads to the Model-Matching form. The distinction, however, is that the assumption $G_{22} = 0$ is considered unattractive from a physical point of view, while the parallel state-space assumption $D_{22} := G_{22}(0) = 0$ is considered innocuous. There is a whole array of lemmas of Finsler type; we have only mentioned the form most suitable for our application.
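For orientation, a commonly cited finite-dimensional form of the basic Finsler lemma (stated here from the general literature rather than from the present paper, whose Lemma 4.7 is an extended operator version) reads:

```latex
% Finsler's lemma (one classical finite-dimensional form):
% for a Hermitian matrix H and a matrix B of compatible size,
\[
  x^* H x < 0 \ \text{ for all } x \neq 0 \text{ with } Bx = 0
  \quad\Longleftrightarrow\quad
  \exists\, \tau \in \mathbb{R} \ \text{ such that } \ H - \tau B^* B < 0 .
\]
```

The forward direction is the nontrivial one; the backward direction is immediate, since $x^*(H - \tau B^*B)x = x^*Hx$ whenever $Bx = 0$.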
It turns out that these various Finsler lemmas are closely connected with the theory of plus operators and Pesonen operators on an indefinite inner product space (see [44]). An engaging historical survey of all the Finsler lemmas is the paper of Uhlig [135]. The notions of modally detectable and modally stabilizable introduced in Subsection 4.3, along with Theorem 4.9, seem new, though of somewhat limited use because it is not known if every realization can be reduced to a modally detectable and modally stabilizable realization. We included the result as an illustration of the difficulties with realization theory for $N$-D transfer functions. We note that the usual proof of Lemma 4.13 for the classical 1-D case uses the pole-shifting characterization of stabilizability/detectability (see [57, Exercise 2.19]). The proof here using the Hautus characterization of stabilizability/detectability provides a different proof for the 1-D case.

5. Robust control with structured uncertainty: the commutative case

In the analysis of 1-D control systems, an issue is the uncertainty in the plant parameters. As a control goal, one wants the control to achieve internal stability (and perhaps also performance) not only for the nominal plant $G$ but also for a whole prescribed family of plants containing the nominal plant $G$. A question then is whether the controller can or cannot have (online) access to the uncertainty parameters. In a state-space context it is possible to find sufficient conditions for the case that the controller cannot access the uncertainty parameters, with criteria that are similar to those found in Theorems 4.5 and 4.8, but additional rank constraints need to be imposed as well, which destroys the convex character of the solution criterion. The case where the controller can have access to the uncertainty parameters is usually given the interpretation of gain-scheduling, and fits better with the multidimensional system problems discussed in Section 4. In this section we discuss three formulations of 1-D control systems with uncertainty in the plant parameters, two of which can be given a gain-scheduling interpretation, i.e., the controller has access to the uncertainty parameters, and one where the controller is not allowed to use the uncertainty parameters.

5.1. Gain-scheduling in state-space coordinates. Following [106], we suppose that we are given a standard linear time-invariant input/state/output system
$$\Sigma \colon \begin{cases} x(n+1) = A(\delta_U)\,x(n) + B(\delta_U)\,u(n), \\ y(n) = C(\delta_U)\,x(n) + D(\delta_U)\,u(n), \end{cases} \tag{5.1}$$
whose system matrix is not known exactly but depends on some uncertainty parameters $\delta_U = (\delta_1, \dots, \delta_d)$ in $\mathbb C^d$. Here the quantities $\delta_i$ are viewed as uncertain parameters which the controller can measure and use in real time. The goal is to design a controller $\Sigma_K$ (independent of $\delta_U$) off-line so that the closed-loop system (with the controller accessing the current values of the varying parameters $\delta_1, \dots, \delta_d$ as well as the value of the measurement signal $y$ from the plant) has desirable properties for all admissible values of $\delta_U$, usually normalized to be $|\delta_k| \le 1$ for $k = 1, \dots, d$. It is not too much of a restriction to assume in addition that the functional dependence on $\delta_U$ is given by a linear fractional map (where the subscript $U$ suggests uncertainty and the subscript $S$ suggests shift)
$$\begin{bmatrix} A(\delta_U) & B(\delta_U) \\ C(\delta_U) & D(\delta_U) \end{bmatrix} = \begin{bmatrix} A_{SS} & B_S \\ C_S & D \end{bmatrix} + \begin{bmatrix} A_{SU} \\ C_U \end{bmatrix} Z(\delta_U)\big(I - A_{UU}Z(\delta_U)\big)^{-1}\begin{bmatrix} A_{US} & B_U \end{bmatrix}, \tag{5.2}$$
where $Z(\delta_U)$ is defined analogously to $Z(z)$ in (4.6) relative to a given decomposition of the "uncertainty" state-space $\mathcal X_U = \mathcal X_{U,1} \oplus \cdots \oplus \mathcal X_{U,d}$ on which the state operator $A_{UU}$ acts.
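Numerically, closing the uncertainty loop is a one-line linear-fractional evaluation. A small sketch (assuming numpy; the partitioning convention and sample data are invented for illustration):

```python
import numpy as np

def close_uncertainty_loop(M, dims_U, delta_U):
    """Evaluate the linear-fractional map: with M partitioned so that its first
    sum(dims_U) rows/columns form the uncertainty channel (the A_UU block etc.),
    return M22 + M21 Z (I - M11 Z)^{-1} M12 for Z = diag(delta_j * I_{dims_U[j]})."""
    nU = sum(dims_U)
    Z = np.diag(np.repeat(delta_U, dims_U))
    M11, M12 = M[:nU, :nU], M[:nU, nU:]
    M21, M22 = M[nU:, :nU], M[nU:, nU:]
    return M22 + M21 @ Z @ np.linalg.solve(np.eye(nU) - M11 @ Z, M12)

rng = np.random.default_rng(0)
M = 0.3 * rng.standard_normal((5, 5))   # invented data, small enough to be well-posed
print(close_uncertainty_loop(M, dims_U=[1, 2], delta_U=[0.7, -0.4]))
```

Well-posedness here is exactly the invertibility of $I - A_{UU}Z(\delta_U)$ at the chosen parameter values.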
In that case the transfer function $G(\delta)$, with $\delta = (\delta_U, \lambda)$, admits a state-space realization
$$G(\delta) = D + C\big(I - Z(\delta)A\big)^{-1}Z(\delta)B \tag{5.3}$$
with system matrix given by
$$M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} A_{UU} & A_{US} & B_U \\ A_{SU} & A_{SS} & B_S \\ C_U & C_S & D \end{bmatrix}. \tag{5.4}$$
Here $Z(\delta)$ is again defined analogously to (4.6) but now on the extended state-space $\mathcal X_{ext} = \mathcal X_U \oplus \mathcal X$. We can then consider this gain-scheduling problem as a problem for the constructed $N$-D system (with $N = d + 1$), and seek a controller $K$ with a state-space realization so that the closed loop system has desirable properties from a gain-scheduling perspective. Making a similar decomposition (5.5) of the system matrix for the controller $K$ as in (5.4), we note that $K(\delta)$ can also be written in a form in which $A_{M,K}(\delta_U)$, $B_{M,K}(\delta_U)$, $C_{M,K}(\delta_U)$ and $D_{M,K}(\delta_U)$ appear as the transfer functions of $N$-D systems (with $N = d$); that is, $K(\delta)$ can be seen as the transfer function of a linear time-invariant input/state/output system depending on the same uncertainty parameters $\delta_U = (\delta_1, \dots, \delta_d)$ as the system $\Sigma$. Similarly, the transfer function $G_{cl}(\delta)$ of the closed-loop system with system matrix $\begin{bmatrix} A_{cl} & B_{cl} \\ C_{cl} & D_{cl}\end{bmatrix}$ as defined in (4.10) can also be written as such a transfer matrix, and then appears as the closed-loop system of $\Sigma$ and $\Sigma_K$. It then turns out that stability of $A_{cl}$, that is, $I - Z_{cl}(\delta)A_{cl}$ invertible for all $\delta$ in $\overline{\mathbb D}^{d+1}$ (with $Z_{cl}$ as defined in Subsection 4.2), corresponds precisely to robust stability of $\Sigma_{cl}$, i.e., the spectral radius of $A_{M,cl}(\delta_U)$ is less than 1 for all $\delta_U = (\delta_1, \dots, \delta_d)$ with $|\delta_k| \le 1$ for $k = 1, \dots, d$; and that $K$ with realization (5.5) solves the state-space $H^\infty$-problem for $G$ with realization (5.3) means that the closed loop system $\Sigma_{cl}$ has robust performance, i.e., $\Sigma_{cl}$ is robustly stable and the transfer function $G_{cl}$ satisfies $\|G_{cl}(\delta_U, \lambda)\| < 1$ for all $|\delta_k| \le 1$ and $\lambda \in \overline{\mathbb D}$.

We may thus see the state-space formulation of the gain-scheduling problems considered in this subsection as a special case of the $N$-D system stabilization and $H^\infty$-problems of Subsection 4.2. In particular, the sufficiency analysis given there, and the results of Theorems 4.5 and 4.8, provide practical methods for obtaining solutions. As the conditions are only sufficient, the solutions obtained may in principle be conservative.

5.2. Gain-scheduling: a pure frequency-domain formulation. In the approach of Helton (see [73,74]), one eschews transfer functions and state-space coordinates completely and supposes that one is given a plant $G$ whose frequency response depends on a load with frequency function $\delta(z)$ at the discretion of the user; when the load $\delta$ is loaded onto $G$, the resulting frequency-response function has the form $G(z, \delta(z))$, where $G = G(\cdot, \cdot)$ is a function of two variables. The control problem (for the company selling this device $G$ to a user) is to design the controller $K = K(\cdot, \cdot)$ so that $K(\cdot, \delta(\cdot))$ solves the $H^\infty$-problem for the plant $G(\cdot, \delta(\cdot))$. The idea here is that once the user loads $\delta$ onto $G$ with known frequency-response function, he is also to load $\delta$ onto the controller $K$ (designed off-line); in this way the same controller works for many customers using many different $\delta$'s. When the dust settles, this problem reduces to the frequency-domain problem posed in Section 4.1 with $d = 2$; an application of the Youla-Kučera parametrization (or simply using the function $Q(z) = K(z)(I - G_{22}(z)K(z))^{-1}$ if the plant $G$ itself is stable) reduces the problem of designing the controller $K$ to a Nevanlinna-Pick-type interpolation problem on the bidisk.

5.3. Robust control with a hybrid frequency-domain/state-space formulation.
We now consider a hybrid frequency-domain/state-space formulation of the problem considered in Subsection 5.1; the main difference is that in this case the controller is not granted access to the uncertainty parameters. Assume we are given a 1-D plant $G(\lambda)$ that depends on uncertainty parameters $\delta_U = (\delta_1, \dots, \delta_d)$ via the linear fractional representation
$$G(\delta_U, \lambda) = G^{aug}_{22}(\lambda) + G^{aug}_{21}(\lambda)Z(\delta_U)\big(I - G^{aug}_{11}(\lambda)Z(\delta_U)\big)^{-1}G^{aug}_{12}(\lambda),$$
with $Z(\delta_U)$ as defined in Subsection 5.1, and where the coefficients
$$G^{aug}(\lambda) = \begin{bmatrix} G^{aug}_{11}(\lambda) & G^{aug}_{12}(\lambda) \\ G^{aug}_{21}(\lambda) & G^{aug}_{22}(\lambda) \end{bmatrix}$$
are 1-D plants independent of $\delta_U$. In case $G^{aug}(\lambda)$ is also given by a state-space realization, we can write $G(\delta_U, \lambda)$ as in (5.3) with $\delta = (\delta_U, \lambda)$ and $Z(\delta)$ acting on the extended state-space $\mathcal X_{ext} = \mathcal X_U \oplus \mathcal X$. For this variation of the gain-scheduling problem we seek to design a controller $K(\lambda)$ with matrix values representing operators from $\mathcal Y$ to $\mathcal U$ so that $K$ solves the $H^\infty$-problem for $G(\delta_U, \lambda)$ for every $\delta_U$ with $\|Z(\delta_U)\| \le 1$, i.e., $|\delta_j| \le 1$ for $j = 1, \dots, d$.

For the sequel it is convenient to assume that $\mathcal Z = \mathcal W$. In that case, using the Main Loop Theorem [141, Theorem 11.7 page 284], it is easy to see that this problem can be reformulated as: find a single-variable transfer matrix $K(\cdot)$ so that $\Theta(\widetilde G, K)$, given by (2.2) with the $G$ in (2.2) taken to be $\widetilde G = G^{aug}$, is stable and such that $\mu_\Delta(\Theta(\widetilde G, K)(\lambda)) < 1$ for all $\lambda \in \mathbb D$. Here $\mu_\Delta$ is as defined in (4.22), with $\Delta$ the $C^*$-algebra determined by the uncertainty structure $Z(\delta_U)$ augmented with a full block corresponding to the performance channel, as in the Main Loop Theorem.

Application of the Youla-Kučera parametrization of the controllers $K$ that stabilize $\Theta(\widetilde G, K)$ as in Subsection 3.3 converts the problem to the following: given stable 1-variable transfer functions $T_1(\lambda)$, $T_2(\lambda)$, and $T_3(\lambda)$ with matrix values representing operators between the appropriate spaces, find a stable 1-variable transfer function $\Lambda(\lambda)$ with matrix values representing operators in $\mathcal L(\mathcal X_U \oplus \mathcal Y, \mathcal X_U \oplus \mathcal U)$ so that the transfer function $S(\lambda)$ given by
$$S(\lambda) = T_1(\lambda) + T_2(\lambda)\Lambda(\lambda)T_3(\lambda) \tag{5.7}$$
has $\mu_\Delta(S(\lambda)) < 1$ for all $\lambda \in \mathbb D$. If $T_2(\zeta)$ and $T_3(\zeta)$ are square and invertible for $\zeta$ on the boundary $\mathbb T$ of the unit disk $\mathbb D$, the model-matching form (5.7) can be converted to bitangential interpolation conditions (see e.g. [26]); for simplicity, say that these interpolation conditions have the form
$$x_iS(\lambda_i) = y_i \quad (i = 1, \dots, k), \qquad S(\lambda'_j)u_j = v_j \quad (j = 1, \dots, l), \tag{5.8}$$
for given distinct points $\lambda_i, \lambda'_j$ in $\mathbb D$, row vectors $x_i, y_i$ and column vectors $u_j, v_j$. Then the robust $H^\infty$-problem ($H^\infty$ rather than rational version) can be converted to the $\mu$-Nevanlinna-Pick problem: find a holomorphic function $S$ on the unit disk with matrix values representing operators in $\mathcal L(\mathcal X_U \oplus \mathcal W, \mathcal X_U \oplus \mathcal Z)$ satisfying the interpolation conditions (5.8) such that also $\mu_\Delta(S(\lambda)) < 1$ for all $\lambda \in \mathbb D$. It is this $\mu$-version of the Nevanlinna-Pick interpolation problem which has been studied from various points of view (including novel variants of the Commutant Lifting Theorem) by Bercovici-Foias-Tannenbaum (see [38,39,40,41]) and by Agler-Young (see [5,7,9,11] and Huang-Marcantognini-Young [77]). These authors actually study only very special cases of the general control problem as formulated here; hence the results at this stage are not particularly practical for actual control applications. However, this work has led to interesting new mathematics in a number of directions: we mention in particular the work of Agler-Young on new types of dilation theory and operator-model theory (see [6,9]), new kinds of realization theorems [10], the complex geometry of new kinds of domains in $\mathbb C^d$ (see [8,12,13]), and a multivariable extension of the Bercovici-Foias-Tannenbaum spectral commutant lifting theorem due to Popescu [114].
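In the unstructured scalar case (no $\mathcal X_U$ component and $\Delta$ trivial), the $\mu$-Nevanlinna-Pick problem collapses to the classical Nevanlinna-Pick problem, whose solvability test is positivity of the Pick matrix. A sketch of that classical test (data points invented), for contrast with the structured case, where no such simple matrix criterion is known:

```python
import numpy as np

def pick_matrix(points, values):
    """P[i, j] = (1 - w_i conj(w_j)) / (1 - z_i conj(z_j)); the scalar
    Nevanlinna-Pick problem f(z_i) = w_i with f in the Schur class is
    solvable iff P is positive semidefinite."""
    z, w = np.asarray(points), np.asarray(values)
    return (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))

P = pick_matrix([0.0, 0.5, -0.3j], [0.1, 0.4, 0.2])
print("solvable:", np.all(np.linalg.eigvalsh(P) >= -1e-10))
```

Notes.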
In the usual formulation of $\mu$ (see [107,141]), in addition to the scalar blocks $\delta_iI_{n_i}$ in $Z(\delta)$, it is standard to also allow some of the blocks to be full matrix blocks $\Delta_i$. The resulting transfer functions then have domains equal to (reducible) Cartan domains which are more general than the unit polydisk. The theory of the Schur-Agler class has been extended to this setting in [15,20]. More generally, it is natural also to allow non-square blocks. A formalism for handling this is given in [29]; for this setting one must work with the intertwining space of $\Delta$ rather than the commutant of $\Delta$ in the definition of $\hat\mu$ in (4.23). With a formalism for such a non-square uncertainty structure available, one can avoid the awkward assumption in Subsection 5.3 and elsewhere that $\mathcal W = \mathcal Z$.

6. Robust control with dynamic time-varying structured uncertainty

6.1. The state-space LFT-model formulation. Following [97,98,96,108], we now introduce a variation on the gain-scheduling problem discussed in Section 5.1 where the uncertainty parameters $\delta_U = (\delta_1, \dots, \delta_d)$ become operators on $\ell^2$, the space of square-summable sequences of complex numbers indexed by the integers $\mathbb Z$, and are to be interpreted as dynamic, time-varying uncertainties. To make the ideas precise, we suppose that we are given a system matrix $M$ as in (5.4). We then tensor all operators with the identity operator $I_{\ell^2}$ on $\ell^2$ to obtain an enlarged system matrix, which we write as
$$\mathbf M = M \otimes I_{\ell^2}, \tag{6.1}$$
with block entries $\mathbf A_{UU} = A_{UU} \otimes I_{\ell^2}$ and so on. Given a decomposition $\mathcal X_U = \mathcal X_{U1} \oplus \cdots \oplus \mathcal X_{Ud}$ of the uncertainty state space $\mathcal X_U$, we define the matrix pencil $Z_U(\delta_U)$ with argument equal to a $d$-tuple $\delta_U = (\delta_1, \dots, \delta_d)$ of (not necessarily commuting) operators on $\ell^2$ by
$$Z_U(\delta_U) = \begin{bmatrix} I_{\mathcal X_{U1}} \otimes \delta_1 & & \\ & \ddots & \\ & & I_{\mathcal X_{Ud}} \otimes \delta_d \end{bmatrix}. \tag{6.2}$$
In addition we let $S$ denote the bilateral shift operator on $\ell^2$; we sometimes will also view $S$ as an operator on the space $\ell$ of all sequences of complex numbers, or on the subspace $\ell^2_{fin}$ of $\ell^2$ that consists of all sequences in $\ell^2$ with finite support. We obtain an uncertain linear system $\boldsymbol\Sigma$ of the form (6.3), analogous to (5.1), whose system matrix (6.4) arises from (6.1) by closing the uncertainty feedback loop through $Z_U(\delta_U)$, in analogy with the linear-fractional formula of Subsection 5.1. As this system is time-varying, due to the presence of the time-varying uncertainty parameters $\delta_U$, it is not convenient to work with a transfer-function acting on the frequency-domain; instead we stay in the time-domain and work with the input-output operator, which has the form (6.5) of a linear fractional map in $(\delta_U, S)$. Now write $\delta$ for the collection $(\delta_U, S)$ of $d + 1$ operators on $\ell^2$. Then the input-output operator $\mathbf G(\delta)$ given by (6.5) has the noncommutative transfer-function realization
$$\mathbf G(\delta) = \mathbf D + \mathbf C\big(I - Z(\delta)\mathbf A\big)^{-1}Z(\delta)\mathbf B, \tag{6.6}$$
with system matrix as in (6.1) and
$$Z(\delta) = \begin{bmatrix} Z_U(\delta_U) & 0 \\ 0 & I_{\mathcal X_S} \otimes S \end{bmatrix}.$$
In the formulas (6.4)-(6.6) the inverses may have to be interpreted as the algebraic inverses of the corresponding infinite block matrices; in that way, the formulas make sense at least for the nominal plant, i.e., with $\delta_U = (0, \dots, 0)$. More generally, the transfer-function $\mathbf G$ can be extended to a function of $d + 1$ variables in $\mathcal L(\ell^2)$ by replacing $S$ with another variable $\delta_{d+1} \in \mathcal L(\ell^2)$. In that case, the transfer-function can be viewed as an LFT-model with structured uncertainty, as studied in [98,57]. However, as a consequence of the Sz.-Nagy dilation theory, without loss of generality it is possible in this setting of LFT-models to fix one of the variables to be the shift operator $S$; in this way the LFT-model results developed for $d + 1$ free variable contractions apply equally well to the case of interest where one of the variables is fixed to be the shift operator.
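To fix ideas, here is a finite-section sketch of the pencil $Z(\delta)$ (assuming numpy/scipy; every operator on $\ell^2$ is truncated to an invented $N \times N$ corner, and the bilateral shift is modelled by its lower-shift truncation):

```python
import numpy as np
from scipy.linalg import block_diag

def Z_pencil(delta_U, dims_U, nS, N):
    """Finite-section model of Z(delta) = diag(Z_U(delta_U), I_{X_S} (x) S):
    delta_U is a tuple of N x N matrices standing in for operators on l^2,
    and S is replaced by the N x N lower shift."""
    S = np.eye(N, k=-1)
    blocks = [np.kron(np.eye(k), d) for k, d in zip(dims_U, delta_U)]
    return block_diag(*blocks, np.kron(np.eye(nS), S))

rng = np.random.default_rng(1)
deltas = tuple(0.4 * rng.standard_normal((4, 4)) for _ in range(2))
Z = Z_pencil(deltas, dims_U=[1, 2], nS=1, N=4)
print(Z.shape)   # ((1 + 2 + 1) * 4, (1 + 2 + 1) * 4)
```

The truncation is only a visualization aid: the statements in the text concern the genuine infinite-dimensional operators.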
Such an input/state/output system $\boldsymbol\Sigma$ with structured dynamic time-varying uncertainty $\delta_U$ is said to be robustly stable (with respect to the dynamic time-varying uncertainty structure $Z_U(\delta_U)$) if the state-matrix $\mathbf A_M(\delta_U)$ is stable for all choices of $\delta_U$ subject to $\|Z_U(\delta_U)\| \le 1$, that is, if $I_{\mathcal X_S \otimes \ell^2} - (I_{\mathcal X_S} \otimes S)\mathbf A_M(\delta_U)$ is invertible as an operator on $\mathcal X_S \otimes \ell^2$ for all $\delta_U$ with $\|Z_U(\delta_U)\| \le 1$. It follows from the Main Loop Theorem [141, Theorem 11.7 page 284] that this condition in turn reduces to:
$$I - Z(\delta)\mathbf A \text{ is invertible for all } \delta = (\delta_1, \dots, \delta_{d+1}),\ \delta_j \in \mathcal L(\ell^2), \text{ with } \|Z(\delta)\| \le 1. \tag{6.7}$$
Note that this condition amounts to a noncommutative version of the Hautus-stability criterion for the matrix $A$ (where $\mathbf A = A \otimes I_{\ell^2}$). We shall therefore call the state matrix $\mathbf A$ nc-Hautus-stable if (6.7) is satisfied (with nc indicating that we are in the noncommutative setting). The input/state/output system $\boldsymbol\Sigma$ is said to have nc-performance (with respect to the dynamic time-varying uncertainty structure $Z_U(\delta_U)$) if it is robustly stable (with respect to this dynamic time-varying uncertainty structure) and in addition the input-output operator $\mathbf G(\delta)$ has norm strictly less than 1 for all choices of $\delta = (\delta_U, S)$ with $\|Z(\delta)\| \le 1$.

One of the key results from the thesis of Paganini [108], which makes the noncommutative setting of this section more in line with the 1-D case, is that, contrary to what is the case in Subsection 4.2, for operators $\mathbf A = A \otimes I_{\ell^2}$ on $\mathcal X \otimes \ell^2$ we do have $\hat\mu_\Delta(\mathbf A) = \mu_\Delta(\mathbf A)$ when we take $\Delta$ to be the $C^*$-algebra
$$\Delta = \{Z(\delta) \colon \delta = (\delta_1, \dots, \delta_{d+1}),\ \delta_j \in \mathcal L(\ell^2)\}. \tag{6.8}$$
Then the main implication of the fact that $\hat\mu_\Delta(\mathbf A) = \mu_\Delta(\mathbf A)$ is that nc-Hautus-stability of $\mathbf A$ is now the same as the existence of an invertible operator $\mathbf Q \in \mathbf D$ (with $\mathbf D$ the commutant of $\Delta$) so that $\|\mathbf Q^{-1}\mathbf A\mathbf Q\| < 1$ or, equivalently, the existence of a solution $\mathbf X \in \mathbf D$ to the LMIs $\mathbf A^*\mathbf X\mathbf A - \mathbf X < 0$ and $\mathbf X > 0$. However, it is not hard to see that $\mathbf X$ is an element of $\mathbf D$ if and only if $\mathbf X = X \otimes I_{\ell^2}$ with $X$ an element of the $C^*$-algebra $\mathcal D$ in (4.11). Thus, in fact, we find that $\mathbf A = A \otimes I_{\ell^2}$ is nc-Hautus-stable precisely when $A$ is scaled stable, i.e., when there exists a solution $X \in \mathcal D$ to the LMIs $A^*XA - X < 0$ and $X > 0$.

These observations can also be seen as a special case (when $C_2 = 0$ and $B_2 = 0$) of the following complete analogue of Theorem 2.3 for this noncommutative setting, due to Paganini [108].

Proposition 6.1. Given a system matrix as in (6.1)-(6.2), then: (i) The output pair $\{\mathbf C_2, \mathbf A\}$ is nc-Hautus-detectable — that is, for every $\delta = (\delta_1, \dots, \delta_{d+1})$, with $\delta_j \in \mathcal L(\ell^2)$ for $j = 1, \dots, d+1$, so that $\|Z(\delta)\| \le 1$, the operator $\begin{bmatrix} I - Z(\delta)\mathbf A \\ \mathbf C_2\end{bmatrix}$ is left invertible — if and only if the corresponding LMI (6.9) has a solution $Y \in \mathcal D$. (ii) The input pair $\{\mathbf A, \mathbf B_2\}$ is nc-Hautus-stabilizable — that is, for every such $\delta$ the operator $\begin{bmatrix} I - Z(\delta)\mathbf A & \mathbf B_2\end{bmatrix}$ is right invertible — if and only if the corresponding LMI (6.10) has a solution $X \in \mathcal D$.
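The scaled-stability test just described is a convex feasibility problem. A minimal sketch (assuming cvxpy; the matrix $A$ and block dimensions are invented) of checking the structured Stein LMI $A^*XA - X < 0$, $X > 0$ with $X \in \mathcal D$:

```python
import cvxpy as cp
import numpy as np

def scaled_stable(A, dims, eps=1e-7):
    """Test the structured Stein LMI: does there exist X > 0 in the commutant D
    (block-diagonal w.r.t. dims) with A^T X A - X < 0?  Feasible iff A is scaled
    stable, i.e. (by the discussion above) iff A (x) I is nc-Hautus-stable."""
    blocks = [cp.Variable((k, k), symmetric=True) for k in dims]
    n = sum(dims)
    X = cp.bmat([[blocks[i] if i == j else np.zeros((dims[i], dims[j]))
                  for j in range(len(dims))] for i in range(len(dims))])
    prob = cp.Problem(cp.Minimize(0),
                      [X >> eps * np.eye(n),
                       A.T @ X @ A - X << -eps * np.eye(n)])
    prob.solve()
    return prob.status == cp.OPTIMAL

A = np.array([[0.5, 0.3],
              [0.0, 0.6]])
print(scaled_stable(A, dims=[1, 1]))   # True for this (real) example
```

The sketch uses a real matrix and transposes in place of adjoints; the complex case is handled analogously.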
In case the input/state/output system $\boldsymbol\Sigma$ is not stable and/or does not have performance, we want to remedy this by means of a feedback with a controller $\mathbf K$, which we assume has on-line access to the structured dynamic time-varying uncertainty operators $\delta_U$ in addition to being dynamic, i.e., $\mathbf K = \mathbf K(\delta) = \mathbf K(\delta_U, S)$. More specifically, we shall restrict to controllers given, as in (6.6), by a linear-fractional formula (6.11) with system matrix $\mathbf M_K$ of the form (6.12) decomposed as in (5.4), where $\mathcal X_{KU} = \mathcal X_{KU1} \oplus \cdots \oplus \mathcal X_{KUd}$, and where the matrix entries in turn have a tensor-factorization (6.13), i.e., $\mathbf A_K = A_K \otimes I_{\ell^2}$ and similarly for the other entries. If such a controller $\mathbf K(\delta)$ is put in feedback connection with $\mathbf G(\delta)$, where we impose the usual assumption $D_{22} = 0$ to guarantee well-posedness, the resulting closed-loop system input-output operator $\mathbf G_{cl}(\delta)$, as a function of the operator uncertainty parameters $\delta_U = (\delta_1, \dots, \delta_d)$ and the shift $S$, has a realization which is formally exactly as in (4.9); the closed-loop system matrix (6.14) is the same as the system matrix (4.10) tensored with $I_{\ell^2}$, and $Z_{cl}(\delta)$ is defined accordingly (see (6.15)).

The state-space nc-stabilization problem (with respect to the given dynamic time-varying uncertainty structure $\delta_U$) then is to design a controller $\mathbf K$ with state-space realization $\{\mathbf A_K, \mathbf B_K, \mathbf C_K, \mathbf D_K\}$ as above so that the closed-loop system $\boldsymbol\Sigma_{cl}$ defined by the system matrix (6.14) is robustly stable. The state-space nc-$H^\infty$-problem is to design a controller $\mathbf K$ with state-space realization $\{\mathbf A_K, \mathbf B_K, \mathbf C_K, \mathbf D_K\}$ as above so that the closed-loop system $\boldsymbol\Sigma_{cl}$ also has robust performance. Since the closed-loop state-operator $\mathbf A_{cl}$ is equal to $A_{cl} \otimes I_{\ell^2}$ with $A_{cl}$ defined by (4.10), it follows as another implication of the fact that $\hat\mu_\Delta$ is equal to $\mu_\Delta$ for operators that are tensored with $I_{\ell^2}$ (with respect to the appropriate $C^*$-algebra $\Delta$) that $\mathbf A_{cl}$ is nc-Hautus-stable precisely when $A_{cl}$ is scaled stable; i.e., we have the following result.

Proposition 6.2. Let $\boldsymbol\Sigma$ and $\Sigma$ be the systems given by (6.3) and (5.1), respectively, corresponding to a given system matrix (5.4). Then $\boldsymbol\Sigma$ is nc-Hautus-stabilizable if and only if $\Sigma$ is scaled-stabilizable.

Thus, remarkably, the solution criterion given in Section 4.2 for the scaled state-space stabilization problem turns out to be necessary and sufficient for the solution of the dynamic time-varying structured-uncertainty version of the problem.

Theorem 6.3. Let $\boldsymbol\Sigma$ be the system given by (6.3) corresponding to a given system matrix (6.1). Then $\boldsymbol\Sigma$ is nc-Hautus-stabilizable if and only if the output pair $\{\mathbf C_2, \mathbf A\}$ is nc-Hautus-detectable and the input pair $\{\mathbf A, \mathbf B_2\}$ is nc-Hautus-stabilizable, i.e., if there exist solutions $X, Y \in \mathcal D$, with $\mathcal D$ the $C^*$-algebra given in (4.11), to the LMIs (6.9) and (6.10). In this case $\mathbf K \sim \begin{bmatrix} A_K & B_K \\ C_K & D_K\end{bmatrix} \otimes I_{\ell^2}$, with $\begin{bmatrix} A_K & B_K \\ C_K & D_K\end{bmatrix}$ as in (4.12), is a controller solving the nc-Hautus stabilization problem for $\boldsymbol\Sigma$.

In a similar way, the state-space nc-$H^\infty$-problem corresponds to the scaled $H^\infty$-problem of Subsection 4.2.

Theorem 6.4. Let $\boldsymbol\Sigma$ be the system given by (6.3) for a given system matrix (6.1). Then there exists a solution $\mathbf K$, with realization (6.11), to the state-space nc-$H^\infty$-problem for the noncommutative system $\boldsymbol\Sigma$ if and only if there exist $X, Y \in \mathcal D$ that satisfy the LMIs (4.27) and (4.26) and the coupling condition (4.28).

Proof. Let $\boldsymbol\Sigma$ and $\Sigma$ be the systems given by (6.3) and (5.1), respectively, corresponding to a given system matrix (5.4). Using the strict bounded real lemma from [29] in combination with similar arguments as used above for the nc-stabilizability problem, it follows that a transfer-function $\mathbf K$ with realization (6.11)-(6.13) is a solution to the state-space nc-$H^\infty$-problem for $\boldsymbol\Sigma$ if and only if the transfer function $K$ with realization (4.7) is a solution to the scaled $H^\infty$-problem for the system $\Sigma$. The statement then follows from Theorem 4.8.

6.2. The noncommutative frequency-domain formulation. We need a few preliminary definitions. We define $\mathcal F_d$ to be the free semigroup consisting of all words $\alpha = i_N \cdots i_1$ in the letters $\{1, \dots, d\}$. When $\alpha = i_N \cdots i_1$ we write $N = |\alpha|$ for the number of letters in the word $\alpha$. The multiplication of two words is given by concatenation:
$$\alpha\beta = i_N \cdots i_1 j_M \cdots j_1 \quad \text{if } \alpha = i_N \cdots i_1,\ \beta = j_M \cdots j_1.$$
The unit element of $\mathcal F_d$ is the empty word, denoted by $\emptyset$, with $|\emptyset| = 0$. In addition, we let $z = (z_1, \dots, z_d)$ stand for a $d$-tuple of noncommuting indeterminates, and for any $\alpha = i_N \cdots i_1 \in \mathcal F_d - \{\emptyset\}$ we let $z^\alpha$ denote the noncommutative monomial $z^\alpha = z_{i_N} \cdots z_{i_1}$, while $z^\emptyset = 1$.
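The word calculus above is easy to mirror computationally. A toy sketch (assuming numpy; words are encoded as tuples of letters, with invented matrices standing in for the operators $\delta_j$):

```python
import numpy as np

def eval_word(delta, alpha):
    """delta^alpha = delta_{i_N} ... delta_{i_1} for alpha = (i_N, ..., i_1);
    the empty word gives the identity."""
    out = np.eye(delta[0].shape[0])
    for i in alpha:
        out = out @ delta[i]
    return out

def eval_series(coeffs, delta):
    """S(delta) = sum_alpha S_alpha (x) delta^alpha for a finitely supported
    formal power series; coeffs maps each word to its coefficient matrix S_alpha."""
    return sum(np.kron(S_a, eval_word(delta, a)) for a, S_a in coeffs.items())

# d = 2, with 2 x 2 matrices for delta_1, delta_2 (letters 0 and 1 here):
d1 = np.array([[0.0, 1.0], [0.0, 0.0]])
d2 = np.array([[0.0, 0.0], [1.0, 0.0]])
coeffs = {(): np.eye(1), (0, 1): 0.5 * np.eye(1), (1, 0): -0.5 * np.eye(1)}
print(eval_series(coeffs, (d1, d2)))   # the monomials z_1 z_2 and z_2 z_1 contribute differently
```

The output is not a multiple of the identity, which is exactly the noncommutativity that evaluation on scalar tuples would miss.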
If $\alpha$ and $\beta$ are two words in $\mathcal F_d$, we multiply the associated monomials $z^\alpha$ and $z^\beta$ in the natural way: $z^\alpha \cdot z^\beta = z^{\alpha\beta}$. Given two Hilbert spaces $\mathcal U$ and $\mathcal Y$, we let $\mathcal L(\mathcal U, \mathcal Y)\langle\langle z \rangle\rangle$ denote the collection of all noncommutative formal power series $S(z)$ of the form $S(z) = \sum_{\alpha \in \mathcal F_d} S_\alpha z^\alpha$, where the coefficients $S_\alpha$ are operators in $\mathcal L(\mathcal U, \mathcal Y)$ for each $\alpha \in \mathcal F_d$. Given a formal power series $S(z) = \sum_{\alpha \in \mathcal F_d} S_\alpha z^\alpha$ together with a $d$-tuple of linear operators $\delta = (\delta_1, \dots, \delta_d)$ acting on $\ell^2$, we define $S(\delta)$ by
$$S(\delta) = \sum_{\alpha \in \mathcal F_d} S_\alpha \otimes \delta^\alpha$$
whenever the limit of the partial sums exists in the operator-norm topology; here we use the notation $\delta^\alpha$ for the operator
$$\delta^\alpha = \delta_{i_N} \cdots \delta_{i_1} \quad \text{if } \alpha = i_N \cdots i_1, \qquad \delta^\emptyset = I_{\ell^2}.$$
We define the noncommutative Schur-Agler class $\mathcal{SA}_{nc,d}(\mathcal U, \mathcal Y)$ (respectively, the strict noncommutative Schur-Agler class $\mathcal{SA}^o_{nc,d}(\mathcal U, \mathcal Y)$) to consist of all formal power series $S$ in $\mathcal L(\mathcal U, \mathcal Y)\langle\langle z\rangle\rangle$ such that $\|S(\delta)\| \le 1$ for all $\delta$ in the noncommutative polydisk $\mathcal D_{nc,d}$ (respectively, $\sup_{\delta \in \mathcal D_{nc,d}}\|S(\delta)\| < 1$). We then define the strict noncommutative $H^\infty$-space $H^{\infty,o}_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ to consist of all functions $F$ from $\mathcal D_{nc,d}$ to $\mathcal L(\mathcal U \otimes \mathcal K, \mathcal Y \otimes \mathcal K)$ which can be expressed in the form $F(\delta) = S(\delta)$ for all $\delta \in \mathcal D_{nc,d}$, where $\rho^{-1}S$ is in the strict noncommutative Schur-Agler class $\mathcal{SA}^o_{nc,d}(\mathcal U, \mathcal Y)$ for some real number $\rho > 0$. We write $H^\infty_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ for the set of functions $G$ from $\mathcal D_{nc,d}$ to $\mathcal L(\mathcal U \otimes \mathcal K, \mathcal Y \otimes \mathcal K)$ that are also of the form $G(\delta) = S(\delta)$, but now for $\delta \in \mathcal D_{nc,d}$ and $\rho^{-1}S$ in $\mathcal{SA}_{nc,d}(\mathcal U, \mathcal Y)$ for some $\rho > 0$. Note that $\mathcal{SA}_{nc,d}(\mathcal U, \mathcal Y)$ amounts to $\mathcal{SA}_{nc,d}(\mathbb C, \mathbb C) \otimes \mathcal L(\mathcal U, \mathcal Y)$; in the sequel we abbreviate the notation $\mathcal{SA}_{nc,d}(\mathbb C, \mathbb C)$ for the scalar Schur-Agler class to $\mathcal{SA}_{nc,d}$, and similarly for the other classes.

The noncommutative Schur-Agler class admits a characterization in terms of a completely positive kernel (Agler) decomposition; for the definition of completely positive kernel and more complete details we refer to [30]. The formulation in Theorem 3.6(2) of [30] does not have exactly the form needed here, but one can use the techniques given there to convert it to such a form. One of the main results of [28] is that the noncommutative Schur-Agler class has a contractive Givone-Roesser realization:

Theorem 6.6. A function $F$ belongs to $\mathcal{SA}_{nc,d}(\mathcal U, \mathcal Y)$ if and only if there is a contractive colligation matrix
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} \colon \Big(\bigoplus_{k=1}^d \mathcal X_k\Big) \oplus \mathcal U \to \Big(\bigoplus_{k=1}^d \mathcal X_k\Big) \oplus \mathcal Y$$
for some Hilbert state space $\mathcal X = \mathcal X_1 \oplus \cdots \oplus \mathcal X_d$ so that the evaluation of $F$ at $\delta = (\delta_1, \dots, \delta_d) \in \mathcal D_{nc,d}$ is given by
$$F(\delta) = \mathbf D + \mathbf C\big(I - Z(\delta)\mathbf A\big)^{-1}Z(\delta)\mathbf B \tag{6.16}$$
(where, as before, $\mathbf A = A \otimes I_{\ell^2}$ and similarly for $\mathbf B$, $\mathbf C$, $\mathbf D$).

Hence a function $F$ is in the strict class $H^{\infty,o}_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ if and only if there is a bounded linear colligation matrix so that $F$ is given as in (6.16) up to rescaling. We shall also be interested in the rational subclass $RH^{\infty,o}_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ of functions admitting a realization (6.16) with finite-dimensional state spaces. As shown in [27], this rationality assumption on a given function $F$ in $H^{\infty,o}_{nc,d}$ can be expressed intrinsically in terms of the finiteness of rank for a finite collection of Hankel matrices formed from the power-series coefficients $F_\alpha$ of $F$, i.e., the operators $F_\alpha \in \mathcal L(\mathcal U, \mathcal Y)$ such that $F(\delta) = \sum_{\alpha \in \mathcal F_d} F_\alpha \otimes \delta^\alpha$.

In general, the embedding of a noncommutative integral domain into a skew field is difficult (see e.g. [75,82]). For the case of $RH^{\infty,o}_{nc,d}$, the embedding issue becomes tractable if we restrict to denominator functions $D(\delta) \in RH^{\infty,o}_{nc,d}(\mathcal L(\mathcal U))$ for which $D(0)$ is invertible. If $D$ is given in terms of a strictly contractive realization $D(\delta) = D + \mathbf C(I - Z(\delta)\mathbf A)^{-1}Z(\delta)\mathbf B$ (where $\mathbf A = A \otimes I_{\mathcal K}$ and similarly for $\mathbf B$, $\mathbf C$ and $\mathbf D$), then $D(\delta)^{-1}$ can be calculated, at least for $Z(\delta)$ small enough, via the familiar cross-realization formula for the inverse:
$$D(\delta)^{-1} = D^{-1} - D^{-1}\mathbf C\big(I - Z(\delta)(\mathbf A - \mathbf B D^{-1}\mathbf C)\big)^{-1}Z(\delta)\mathbf B D^{-1}.$$
We let $RO^0_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ denote the class of functions $F$ admitting a realization of the form $F(\delta) = \mathbf D + \mathbf C(I - Z(\delta)\mathbf A)^{-1}Z(\delta)\mathbf B$ for some finite-dimensional state-spaces $\mathcal X_1, \dots, \mathcal X_d$. Unlike the assumptions in the case of a realization for a Schur-Agler-class function in Theorem 6.6, there is no assumption that the system matrix $M$ be contractive or that $A$ be stable. It is easily seen that $\mathcal Q(RH^{\infty,o}_{nc,d}(\mathcal L(\mathcal U, \mathcal Y)))_0$ is a subset of $RO^0_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$; whether these two spaces are the same or not we leave as an open question. We also note that the class $RO^0_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ has an intrinsic characterization:
$F$ is in $RO^0_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ if and only if some rescaled version $F_r(\delta) = F(r\delta)$ (where $r\delta = (r\delta_1, \dots, r\delta_d)$ if $\delta = (\delta_1, \dots, \delta_d)$) is in the rational noncommutative $H^\infty$-class $RH^{\infty,o}_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$ for some $r > 0$, and hence has the intrinsic characterization in terms of a completely positive Agler decomposition and finite-rankness of a finite collection of Hankel matrices, as described above for the class $RH^{\infty,o}_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$.

We may then pose the following control problems:

Noncommutative polydisk internal-stabilization/$H^\infty$-control problem: We suppose that we are given finite-dimensional spaces $\mathcal W$, $\mathcal U$, $\mathcal Z$, $\mathcal Y$ and a block-matrix
$$G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \in RO^0_{nc,d}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y)).$$
We seek to find a controller $K$ in $RO^0_{nc,d}(\mathcal L(\mathcal Y, \mathcal U))$ which solves the (1) internal stabilization problem, i.e., so that the closed-loop system is internally stable in the sense that all matrix entries of the block matrix $\Theta(G, K)$ given by (2.2) are in $RH^{\infty,o}_{nc,d}$, and which possibly also solves the (2) $H^\infty$-problem, i.e., in addition to internal stability, the closed-loop system has performance in the sense that $T_{zw} = G_{11} + G_{12}K(I - G_{22}K)^{-1}G_{21}$ is in the rational strict noncommutative Schur-Agler class $R\mathcal{SA}^o_{nc,d}(\mathcal W, \mathcal Z)$.

Even though our algebra of scalar plants $RO^0_{nc,d}$ is noncommutative, the parameterization result Theorem 3.5 still goes through in the following form; we leave it to the reader to check that the same algebra as used for the commutative case leads to the following noncommutative analogue.

Theorem 6.7. Assume that $G \in RO^0_{nc,d}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y))$ is given and that $G$ has at least one stabilizing controller $K_*$. Then the set of all stabilizing controllers $K$ for $G$ is given by either of the two formulas of Theorem 3.5 (with the same algebraic expressions, now interpreted over $RO^0_{nc,d}$), where in addition $Q$ has the form $Q = \widetilde L\Lambda L$, where $L$ and $\widetilde L$ are given by (3.8), $\Lambda$ is a stable free parameter for which the relevant expression is invertible, and both formulas give rise to the same controller $K$.

Given a transfer matrix $G_{22} \in RO^0_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$, we say that $G_{22}$ has a stable double coprime factorization if there exist transfer matrices $D(\delta)$, $N(\delta)$, $X(\delta)$, $Y(\delta)$, $\widetilde D(\delta)$, $\widetilde N(\delta)$, $\widetilde X(\delta)$, and $\widetilde Y(\delta)$ of compatible sizes with stable matrix entries (i.e., with matrix entries in $RH^{\infty,o}_{nc,d}$), subject also to $D(0)$, $\widetilde D(0)$, $X(0)$, $\widetilde X(0)$ all invertible, so that the noncommutative version of condition (3.9) holds; we refer to this noncommutative version as (6.17). Then we leave it to the reader to check that the same algebra as used for the commutative case leads to the following noncommutative version of Theorem 3.11.

Theorem 6.8. Assume that $G \in RO^0_{nc,d}$ is stabilizable and that $G_{22}$ admits a double coprime factorization (6.17). Then the set of all stabilizing controllers is given by the associated Youla-Kučera formulas in terms of the factors in (6.17) and a free stable parameter.

Just as in the commutative case, consideration of the $H^\infty$-control problem for a given transfer matrix $G \in RO^0_{nc,d}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y))$ after the change of the design parameter from the controller $K$ to the free-stable parameter $\Lambda$, in either of the two parameterizations of Theorems 6.7 and 6.8, leads to the following noncommutative version of the Model-Matching problem; we view this problem as a noncommutative version of a Sarason interpolation problem.

Noncommutative-polydisk Sarason interpolation problem: Given matrices $T_1$, $T_2$, $T_3$ of compatible sizes over $RH^{\infty,o}_{nc,d}$, find a matrix $\Lambda$ (of appropriate size) over $RH^{\infty,o}_{nc,d}$ so that the matrix $S = T_1 + T_2\Lambda T_3$ is in the strict rational noncommutative Schur-Agler class $R\mathcal{SA}^o_{nc,d}(\mathcal W, \mathcal Z)$.

While there has been some work on left-tangential Nevanlinna-Pick-type interpolation for the noncommutative Schur-Agler class (see [22]), there does not seem to have been any work on a Commutant Lifting theorem for this setup or on how to convert a Sarason problem as above to an interpolation problem as formulated in [22]. We leave this area to future work.
6.3. Equivalence of state-space noncommutative LFT-model and noncommutative frequency-domain formulation. In order to make the connections between the results in the previous two subsections, we consider functions as in Subsection 6.2, but we normalize the infinite dimensional Hilbert space $\mathcal K$ to be $\ell^2$ and work with $d + 1$ variables $\delta = (\delta_1, \dots, \delta_{d+1})$ in $\mathcal L(\ell^2)$ instead of $d$. As pointed out in Subsection 6.1, we may without loss of generality assume that the last variable $\delta_{d+1}$ is fixed to be the shift operator $S$ on $\ell^2$. The following is an improved analogue of Lemma 4.13 for the noncommutative setting.

Theorem 6.9. Suppose that the matrix function $W \in RO^0_{nc,d+1}(\mathcal L(\mathcal U, \mathcal Y))$ is stable (i.e., has all matrix entries in $RH^{\infty,o}_{nc,d+1}$) and has a finite-dimensional realization $W(\delta) = \mathbf D + \mathbf C(I - Z(\delta)\mathbf A)^{-1}Z(\delta)\mathbf B$ which is both nc-Hautus-detectable and nc-Hautus-stabilizable. Then $\mathbf A$ is nc-Hautus-stable.

Proof. The first step is to observe the identity
$$S_1(\delta) := \begin{bmatrix} Z(\delta)\mathbf B \\ W(\delta) - \mathbf D \end{bmatrix} = \begin{bmatrix} I - Z(\delta)\mathbf A \\ \mathbf C \end{bmatrix}\big(I - Z(\delta)\mathbf A\big)^{-1}Z(\delta)\mathbf B.$$
Since $W(\delta) - \mathbf D$ is in $H^{\infty,o}_{nc,d+1}(\mathcal L(\mathcal U, \mathcal Y))$ by assumption and trivially $Z(\delta)\mathbf B$ is in $H^{\infty,o}_{nc,d+1}(\mathcal L(\mathcal U, \mathcal X))$, it follows that $S_1(\delta)$ is in $H^{\infty,o}_{nc,d+1}(\mathcal L(\mathcal U, \mathcal X \oplus \mathcal Y))$. By the detectability assumption and Proposition 6.1 it follows that there exists an operator $\mathbf L = L \otimes I_{\ell^2}$ with $L \colon \mathcal Y \to \mathcal X$ so that $\mathbf A + \mathbf L\mathbf C$ is nc-Hautus-stable. Thus
$$\big(I - Z(\delta)\mathbf A\big)^{-1}Z(\delta)\mathbf B = \big(I - Z(\delta)(\mathbf A + \mathbf L\mathbf C)\big)^{-1}\begin{bmatrix} I & -Z(\delta)\mathbf L \end{bmatrix}S_1(\delta)$$
is in $H^{\infty,o}_{nc,d+1}$ as well. Now the nc-Hautus-stabilizability assumption and the second part of Proposition 6.1 imply in a similar way that $S_3(\delta) = Z(\delta)(I - Z(\delta)\mathbf A)^{-1}$ is in $H^{\infty,o}_{nc,d+1}(\mathcal L(\mathcal X, \mathcal X))$. Note that $S_3$ in turn has the trivial realization with $(A', B', C', D') = (A, I, I, 0)$; thus $(A', B', C', D') = (A, I, I, 0)$ is trivially GR-controllable and GR-observable in the sense of [27]. On the other hand, by Theorem 6.6 there exists a strictly contractive matrix $\begin{bmatrix} A'' & B'' \\ C'' & 0\end{bmatrix}$ realizing $S_3$ as in (6.16). Moreover, by the Kalman decomposition for noncommutative GR-systems given in [27], we may assume without loss of generality that $(A'', B'', C'', 0)$ is GR-controllable and GR-observable. Then, by the main result of Alpay-Kaliuzhnyi-Verbovetskyi in [14], it is known that the function $S(\delta) = \sum_{\alpha \in \mathcal F_d} S_\alpha \otimes \delta^\alpha$ uniquely determines the formal power series $S(z) = \sum_{\alpha \in \mathcal F_d} S_\alpha z^\alpha$. It now follows from the State-Space Similarity Theorem for noncommutative GR-systems in [27] that there is an invertible block diagonal similarity transform $Q \in \mathcal L(\mathcal X', \mathcal X'')$ so that
$$A' = Q^{-1}A''Q, \qquad B' = Q^{-1}B'', \qquad C' = C''Q.$$
In particular, $A = Q^{-1}A''Q$ where $A''$ is a strict contraction and $Q$ is a structured similarity, from which it follows that $\mathbf A$ is also nc-Hautus-stable, as wanted.

We can now obtain the equivalence of the frequency-domain and state-space formulations of the internal stabilization problems for the case where the state-space internal stabilization problem is solvable.

Theorem 6.10. Suppose that we are given a state-space realization as in (6.1) for an element $G \in RO^0_{nc,d+1}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y))$ such that the state-space internal stabilization problem has a solution. Suppose also that we are given a controller $K \in RO^0_{nc,d+1}(\mathcal L(\mathcal Y, \mathcal U))$ with a state-space realization $K \sim \{A_K, B_K, C_K, D_K\}$ which is both nc-Hautus-detectable and nc-Hautus-stabilizable. Then $K$ solves the state-space internal stabilization problem for $\{A, B, C, D\}$ if and only if $K(\delta)$ solves the noncommutative frequency-domain internal stabilization problem associated with $G(\delta) = \begin{bmatrix} G_{11}(\delta) & G_{12}(\delta) \\ G_{21}(\delta) & G_{22}(\delta)\end{bmatrix}$.

Proof. By Theorem 6.3, the assumption that the state-space internal stabilization problem is solvable means that $\{\mathbf C_2, \mathbf A\}$ is nc-Hautus-detectable and $\{\mathbf A, \mathbf B_2\}$ is nc-Hautus-stabilizable. We shall use this form of the standing assumption. Moreover, in this case, a given controller $K \sim \{A_K, B_K, C_K, D_K\}$ solves the state-space internal stabilization problem if and only if $K$ stabilizes $G_{22}$. Suppose now that $K \sim \{A_K, B_K, C_K, D_K\}$ solves the state-space internal stabilization problem, i.e., the state operator $\mathbf A_{cl}$ in (6.14) is nc-Hautus-stable.
Note that the $3 \times 3$ noncommutative transfer matrix $\Theta(G, K)$ has a realization $\Theta(G, K) = \mathbf D_\Theta + \mathbf C_\Theta(I - Z_\Theta(\delta)\mathbf A_\Theta)^{-1}Z_\Theta(\delta)\mathbf B_\Theta$ with $Z_\Theta(\delta) = Z_{cl}(\delta)$ as in (6.15), where the system matrix $\begin{bmatrix} \mathbf A_\Theta & \mathbf B_\Theta \\ \mathbf C_\Theta & \mathbf D_\Theta\end{bmatrix}$ is built from the realizations of $G$ and $K$ as in (6.20). Now observe that $\mathbf A_\Theta$ is equal to $\mathbf A_{cl}$, so that all nine transfer matrices in $\Theta(G, K)$ have a realization with state operator $\mathbf A_\Theta = \mathbf A_{cl}$ nc-Hautus-stable. Hence all matrix entries of $\Theta(G, K)$ are in $H^{\infty,o}_{nc,d+1}$.

Conversely, suppose that $K(\delta)$ with realization $K \sim \{A_K, B_K, C_K, D_K\}$ internally stabilizes $G$ in the frequency-domain sense. This means that all nine transfer matrices in $\Theta(G, K)$ are stable. In particular, the $2 \times 2$ transfer matrix $W := \Theta(G_{22}, K) - \Theta(G_{22}, K)(0)$ is stable. From (6.20) we read off that $W$ has a realization with state operator $\mathbf A_{cl}$. By Theorem 6.9, to show that $\mathbf A_{cl} = \mathbf A_\Theta$ is nc-Hautus-stable, it suffices to show that $\left\{\begin{bmatrix} D_KC_2 & C_K \\ C_2 & 0\end{bmatrix}, A_{cl}\right\}$ is nc-Hautus-detectable and that $\left\{A_{cl}, \begin{bmatrix} B_2 & B_2D_K \\ 0 & B_K\end{bmatrix}\right\}$ is nc-Hautus-stabilizable. By using our assumption that $\{A_K, B_K, C_K, D_K\}$ is both nc-Hautus-detectable and nc-Hautus-stabilizable, one can now follow the argument in the proof of Theorem 4.9 to deduce these two properties, as needed.

We do not know as of this writing whether any given controller $K$ in the space $RO^0_{nc,d+1}(\mathcal L(\mathcal Y, \mathcal U))$ has an nc-Hautus-detectable/stabilizable realization (see the discussion in the Notes below). However, for the Model-Matching problem, internal stabilizability in the frequency-domain sense means that all transfer matrices $T_1$, $T_2$, $T_3$ are stable (i.e., have all matrix entries in $H^{\infty,o}_{nc,d+1}$), and hence the standard plant matrix $G = \begin{bmatrix} T_1 & T_2 \\ T_3 & 0\end{bmatrix}$ has a stable realization. A given controller $K$ solves the internal stabilization problem exactly when it is stable; thus we may work with realizations $K \sim \{A_K, B_K, C_K, D_K\}$ with $A_K$ nc-Hautus-stable, and hence a fortiori with both $\{C_K, A_K\}$ nc-Hautus-detectable and $\{A_K, B_K\}$ nc-Hautus-stabilizable. In this scenario Theorem 6.10 tells us that a controller $K(\delta)$ solves the frequency-domain internal stabilization problem exactly when any stable realization $K \sim \{A_K, B_K, C_K, D_K\}$ solves the state-space internal stabilization problem. Moreover, the frequency-domain performance measure matches the state-space performance measure, namely: that the closed-loop transfer matrix $T_{zw} = G_{11} + G_{12}(I - KG_{22})^{-1}KG_{21}$ be in the strict noncommutative Schur-Agler class $\mathcal{SA}^o_{nc,d+1}(\mathcal W, \mathcal Z)$. In this way we arrive at a solution of the noncommutative Sarason interpolation problem posed in Section 6.2.

Theorem 6.11. Suppose that we are given a transfer matrix of the form $G = \begin{bmatrix} T_1 & T_2 \\ T_3 & 0\end{bmatrix} \in H^{\infty,o}_{nc,d+1}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y))$ with a stable realization as usual. Then there exists a $K \in H^{\infty,o}_{nc,d+1}$ so that $T_1 + T_2KT_3$ is in the strict noncommutative Schur-Agler class $\mathcal{SA}^o_{nc,d+1}$ if and only if there exist $X, Y \in \mathcal D$, with $\mathcal D$ as in (4.11), satisfying the LMIs of Theorem 6.4 (i.e., (4.26) and (4.27) together with the coupling condition (4.28)) for this realization.

6.4. Notes. 1. The equality of $\hat\mu_\Delta(\mathbf A)$ with $\mu_\Delta(\mathbf A)$, where $\Delta$ is as in (6.8), appears in Paganini's thesis [108]; as mentioned in the Introduction, results of the same flavor have been given in [37,42,60,99,129]. Ball-Groenewald-Malakorn [29] show how this result is closely related to the realization theory for the noncommutative Schur-Agler class obtained in [28]. There it is shown that $\mu_\Delta(\mathbf A) \le \overline\mu_\Delta(\mathbf A) = \hat\mu_\Delta(\mathbf A)$, where $\overline\mu_\Delta(\mathbf A)$ is a uniform version of $\mu_\Delta(\mathbf A)$. The fact that $\mu_\Delta(\mathbf A) = \hat\mu_\Delta(\mathbf A)$ is the content of Theorem B.3 in [108].
Paganini's analysis is carried out in the more general form required to obtain the result of Proposition 6.1. The thesis of Paganini also includes some alternate versions of Proposition 6.1. Specifically, rather than letting each $\delta_j$ be an arbitrary operator on $\ell^2$, one may restrict to such operators which are causal (i.e., lower-triangular) and/or slowly time-varying in a precise quantitative sense. With any combination of these refined uncertainty structures in force, all the results developed in Section 6 continue to hold. With one or more of these modifications in force, it is more plausible to argue that the assumption made in Section 6.1 that the controller $\mathbf K$ has on-line access to the uncertainties $\delta_i$ is physically realistic.

The replacement of the condition $\mu_\Delta(A) < 1$ by $\hat\mu_\Delta(A) < 1$ can be considered as a relaxation of the problem: while one really wants $\mu_\Delta(A) < 1$, one is content to analyze $\hat\mu_\Delta(A) < 1$ since $\hat\mu_\Delta(A)$ is easier to compute. Necessary and sufficient conditions for $\hat\mu_\Delta(A) < 1$ then provide sufficient conditions for $\mu_\Delta(A) < 1$ (due to the general inequality $\mu_\Delta(A) \le \hat\mu_\Delta(A)$). In the setting of the enhanced uncertainty structure discussed in this section, by the discussion immediately preceding Proposition 6.1 we see in this case that the relaxation is exact, in the sense that $\hat\mu_\Delta(\mathbf A) < 1$ is necessary as well as sufficient for $\mu_\Delta(\mathbf A) < 1$.

In Remark 1.2 of the paper of Megretsky-Treil [99], it is shown how the $\mu$-singular-value approach can be put in the following general framework involving quadratic constraints (called the S-procedure, for obscure reasons). One is given quadratic functionals $\sigma_0, \sigma_1, \dots, \sigma_\ell$ defined on some set $L$ and one wants to know when it is the case that
$$\sigma_j(x) \ge 0 \text{ for } j = 1, \dots, \ell \implies \sigma_0(x) \le 0, \qquad x \in L. \tag{6.21}$$
A computable sufficient condition (the relaxation) is the existence of nonnegative real numbers $\tau_1, \dots, \tau_\ell$ ($\tau_j \ge 0$ for $j = 1, \dots, \ell$) so that
$$\sigma_0(x) + \sum_{j=1}^{\ell} \tau_j\sigma_j(x) \le 0 \quad \text{for all } x \in L. \tag{6.22}$$
The main result of [99] is that there is a particular case of this setting (where $L$ is a linear shift-invariant subspace of vector-valued $L^2(0, \infty)$ (or more generally $L^2_{loc}(0, \infty)$) and the quadratic constraints are shift-invariant) where the relaxation is again exact (i.e., where (6.21) and (6.22) are equivalent); this result is closely related to Proposition 6.1 and the work of [108]. A nice survey of the S-procedure and its applications to a variety of other problems is the paper of Pólik-Terlaky [112].

2. It is of interest to note that the type of noncommutative system theory developed in this section (in particular, nc-detectability/stabilizability and nc-coprime representation as in (6.17)) has been used in the work of Beck [36] and Li-Paganini [89] in connection with model reduction for linear systems with LFT-modelled structured uncertainty.

3. We note that Theorem 6.8 gives a Youla-Kučera-type parametrization for the set of stabilizing controllers for a given plant $G \in RO^0_{nc,d}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y))$ under the assumption that $G_{22}$ has a double coprime factorization. In connection with this result, we formulate a noncommutative analogue of the conjecture of Lin: if $G \in RO^0_{nc,d}(\mathcal L(\mathcal W \oplus \mathcal U, \mathcal Z \oplus \mathcal Y))$ is stabilizable, does it follow that $G_{22}$ has a double-coprime factorization? If $G_{22}$ has a realization $G_{22} \sim \begin{bmatrix} A & B \\ C & 0\end{bmatrix} \otimes I_{\ell^2}$ which is nc-Hautus stabilizable and nc-Hautus detectable, then one can adapt the state-space formulas for the classical case (see [104,85]) to arrive at state-space realization formulas for a double-coprime factorization of $G_{22}$.
If it is the case that one can always find an nc-Hautus stabilizable/detectable realization for $G_{22}$, it follows that $G_{22}$ in fact always has a double-coprime factorization, and hence the noncommutative Lin conjecture is answered in the affirmative. However, we do not know at this time whether nc-Hautus stabilizable/detectable realizations always exist for a given $G_{22} \in RO^0_{nc,d}(\mathcal L(\mathcal U, \mathcal Y))$. From the results of [27], it is known that minimal (i.e., controllable and observable) realizations exist for a given $G_{22}$. However, here controllable is in the sense that a certain finite collection of control operators be surjective, and observable is in the sense that a certain finite collection of observation operators be injective. It is not known if this type of controllability is equivalent to nc-Hautus controllability, i.e., to the operator pencil $\begin{bmatrix} I - Z(\delta)\mathbf A & \mathbf B\end{bmatrix}$ being surjective for all $\delta \in \mathcal L(\ell^2)^{d+1}$ (not just $\delta$ in the noncommutative polydisk $\mathcal D_{nc,d}$). Thus it is unknown if controllable implies nc-Hautus stabilizable in this context. Dually, we do not know if observable implies nc-Hautus detectable.

4. Theorem 6.9 can be viewed as saying that, under a stabilizability/detectability hypothesis, any singularity of the resolvent $(I - Z(\delta)\mathbf A)^{-1}$ of the state matrix $\mathbf A$ inside the stability region must show up as a singularity of the noncommutative function $W$ itself. A variant on this theme is the well known fact for the classical case that, under a controllability/observability assumption, any singularity (stable or not) of the resolvent $(I - \lambda A)^{-1}$ of the state matrix $A$ necessarily shows up as a singularity of the rational matrix function $W(\lambda) = D + \lambda C(I - \lambda A)^{-1}B$. A version of this result for the noncommutative case has now appeared in the paper of Kaliuzhnyi-Verbovetskyi-Vinnikov [82]; however, the notion of controllable and observable there is not quite the same as the notion of controllable and observable for noncommutative Givone-Roesser systems as given in [27].

5. Given a function $S(z) = \sum_{n \in \mathbb Z^d_+} S_nz^n$ in the (commutative) Schur-Agler class (where $z = (z_1, \dots, z_d)$ is the variable in the commutative polydisk $\mathbb D^d$ and we use the standard multivariable notation $z^n = z_1^{n_1} \cdots z_d^{n_d}$ if $n = (n_1, \dots, n_d) \in \mathbb Z^d_+$), we know from the results of [2,3,35] that $S$ has a contractive realization $S(z) = D + C(I - Z(z)A)^{-1}Z(z)B$. In light of the work of [28], we see that any such contractive system matrix $\begin{bmatrix} A & B \\ C & D\end{bmatrix} \colon (\oplus_{k=1}^d\mathcal X_k \oplus \mathcal U) \to (\oplus_{k=1}^d\mathcal X_k \oplus \mathcal Y)$ can also be used to define an element of the noncommutative Schur-Agler class via the same formula (6.16). While the realization for the commutative function is highly non-unique, the realization for the noncommutative function is unique up to state-space similarity if arranged to be minimal (i.e., controllable and observable as in [27]). Philosophically one can say that evaluation of the function on the commutative polydisk $\mathbb D^d$ does not give enough frequencies to detect the realization; enlarging the frequency domain (or points of evaluation) to the noncommutative polydisk $\mathcal D_{nc,d}$ does give enough frequencies to detect the realization in an essentially unique way.
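The "not enough frequencies" phenomenon in point 5 can be seen in a toy computation: the same colligation, evaluated at commuting scalar points versus at a noncommuting matrix tuple. A sketch (assuming numpy/scipy; the colligation and evaluation points are invented):

```python
import numpy as np
from scipy.linalg import block_diag

def evaluate(M, dims, delta):
    """D + C (I - Z(delta) A)^{-1} Z(delta) B for the colligation M = [[A,B],[C,D]]
    with state decomposition dims; delta is a tuple of N x N matrices (N = 1
    recovers the commutative polydisk evaluation)."""
    n, N = sum(dims), delta[0].shape[0]
    A, B, C, D = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
    Z = block_diag(*[np.kron(np.eye(k), d) for k, d in zip(dims, delta)])
    big = lambda T: np.kron(T, np.eye(N))     # T (x) I_N
    return big(D) + big(C) @ np.linalg.solve(np.eye(n * N) - Z @ big(A), Z @ big(B))

M = np.array([[0.0, 0.6, 0.8],
              [0.6, 0.0, 0.0],
              [0.8, 0.0, 0.0]])    # a contractive colligation with 2 state components

print(evaluate(M, (1, 1), (np.array([[0.3]]), np.array([[0.5]]))))       # scalar points
print(evaluate(M, (1, 1), (np.array([[0.0, 0.3], [0.0, 0.0]]),
                           np.array([[0.0, 0.0], [0.5, 0.0]]))))         # matrix points
```

Two inequivalent colligations can agree on all scalar (commuting) evaluations yet be separated by matrix evaluations, which is the uniqueness statement quoted from [27].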
YouTube as a source of information regarding the effect of vitamin C on coronavirus disease

Objectives With the expansion of the internet, social media platforms have become a major source of medical information. However, medical information on online multimedia platforms is often inaccurate. In the current study, we evaluated the reliability, quality, and accuracy of the most viewed YouTube videos featuring the effects of vitamin C on COVID-19.

Methods A search was conducted on YouTube on January 13, 2022, using the keywords ("ascorbic acid" OR "vitamin C" OR "sodium ascorbate" OR "L-ascorbic") AND ("coronavirus" OR "COVID 19" OR "COVID-19" OR "Corona" OR "COVID" OR "SARSCoV2"). We assessed the 50 most-viewed videos using a modified DISCERN scale (mDISCERN) and the Global Quality Scale (GQS). Additionally, the accuracy of the information in each video was evaluated.

Results Out of the 50 most-viewed videos featuring the effect of vitamin C on COVID-19, 54% were not reliable. Furthermore, 62% presented poor quality, and 74% were misleading or neither accurate nor misleading. The average mDISCERN and GQS scores of the 50 included videos were 2.2 ± 1.4 (≥ 3: highly reliable) and 2.2 ± 1.1 (2: generally poor), respectively. Although some of the videos were made by medical doctors, their reliability, quality, and accuracy were not significantly different from those of videos from other sources, including fitness channels, television or internet-based news or programs, consumers, company channels, product advertisements, or videos prepared by nurses.

Conclusions The reliability, quality, and accuracy of the 50 most-viewed videos on the effect of vitamin C on COVID-19 were not high. Video creators, especially medical doctors, should make an effort to produce videos with reliable, high-quality, and correct content so that accurate information is disseminated to people.

Introduction Since the first confirmed case of coronavirus disease was reported in December 2019, COVID-19 rapidly spread worldwide within 2-3 months, threatening public health. 1 As of 2022, despite the development and distribution of vaccines against COVID-19, it continues to spread due to the emergence of various mutations. 2 Patients with COVID-19 experience various symptoms, including fever, chills, cough, runny nose, dyspnea, confusion, dizziness, and chest pain. 3,4 Symptomatic treatment is used to manage COVID-19, and hospitalization is required if the symptoms are severe. 1 Pneumonia is a potential complication of COVID-19, affecting 10-20% of patients, as is acute respiratory distress syndrome. 1 In severe cases of pneumonia, intensive care is required to reduce the risk of mortality. The immunocompromised population is more likely to develop severe COVID-19. 5 Furthermore, if the pro-inflammatory cytokines in patients with severe COVID-19 are activated and the inflammation continues, the symptoms of COVID-19 may worsen and result in death. 6

Vitamin C increases immunity by enhancing immune cell function. 7-9 It exerts anti-inflammatory effects by inhibiting pro-inflammatory cytokine production, neutralizing reactive oxygen species, modulating nuclear transcription factor kappa B, and assisting immunomodulation as a cofactor in various biosynthetic pathways in the immune system. 7-9 Therefore, it has been supposed that vitamin C supplementation not only prevents severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and worsening of symptoms after infection, but may also help to treat severe COVID-19. 10
Several studies have evaluated the effect of vitamin C on COVID-19 [10][11][12][13][14][15][16][17][18]; however, the reported results varied, and recent meta-analyses reported no treatment effects. 19,20 Therefore, there is no strong evidence regarding the effect of vitamin C on COVID-19.

With the expansion of the internet, people can easily obtain medical information online and decide which medical services to receive. 21 People also seek advice from medical experts and listen to other patients' experiences through social media. However, the medical information available on online multimedia platforms is not always accurate, which can lead patients to make incorrect decisions. 21 YouTube, the most popular and largest media-sharing online platform, is considered the most important online platform for disseminating medical information. This study investigated the reliability, quality, and accuracy of the most frequently viewed YouTube videos on the effects of vitamin C on COVID-19.

Video selection This cross-sectional study conducted a search on https://www.youtube.com/ on January 13, 2022, using the keywords "ascorbic acid" OR "vitamin C" OR "Sodium Ascorbate" OR "L-ascorbic" AND "coronavirus" OR "COVID 19" OR "COVID-19" OR "Corona" OR "COVID" OR "SARSCoV2." The inclusion criteria for the videos were content related to the effect of vitamin C on COVID-19 and videos in English. The exclusion criteria were duplicated videos and absence of audio. The 50 most-viewed videos fulfilling these criteria were included in the review. Ethics committee approval was not required for this study, as it did not include any human participants and the videos were publicly accessible.

Data extraction We extracted data from each video. The data included title, production source, duration on YouTube, video length, and total number of views, likes, and subscribers. The video production source was categorized as nutrition, wellness, or fitness channels; television or internet-based news or programs; videos by consumers (clips uploaded by an individual without any professional affiliation); company channels or product advertisements (videos uploaded by a supplement-producing company or for sales/promotion of products); videos by medical professionals (medical doctors); or videos by nurses.

Assessment of reliability, quality, and accuracy The reliability of the video content was assessed using the modified DISCERN (mDISCERN) scale, which was adapted from the original DISCERN for the assessment of written health information by Charnock et al. 22 The mDISCERN scale includes the following five questions: (1) Are the aims clear and achieved? (2) Are reliable sources of information used? (3) Is the information presented balanced and unbiased? (4) Are additional sources of information listed for patient reference? (5) Are areas of uncertainty mentioned? A higher mDISCERN score indicates greater reliability. When the mDISCERN score is ≥ 3, the information is considered highly reliable.

The Global Quality Scale (GQS) was used to assess the quality of the video content. 23 This evaluation tool was originally developed to evaluate website resources and to assess the flow and ease of use of the available information.
The information can be classified as follows using the GQS: (1) poor quality, poor flow, and most information missing, and hence not helpful for people; (2) generally poor, with some information given but of limited use to people; (3) moderate quality, with some important information adequately discussed; (4) good quality, good flow, and most relevant information covered, making it useful for people; and (5) excellent quality and excellent flow, making it very useful for people. A higher GQS score indicates greater quality of information.

In addition, each video was classified as accurate, misleading, or neither accurate nor misleading. When the videos included at least one correct or one inaccurate scientific statement about the effect of vitamin C on COVID-19, they were classified as accurate videos or misleading videos, respectively. If the videos had no scientific information on the effect of vitamin C on COVID-19, they were considered neither accurate nor misleading. When a video contained both accurate and inaccurate statements, it was classified as misleading. Two reviewers (H.S.L. and M.C.C.) assessed the reliability, quality, and accuracy of the included videos, and any discrepancies in assessment were discussed until consensus was reached. The assessment was conducted based on previously published meta-analyses and review articles. 19,20

Statistical analysis Statistical Product and Service Solutions, version 22 (IBM, Armonk, NY, USA) was used for the statistical analysis. The Kruskal-Wallis test and chi-square test were used to evaluate statistically significant differences in the general features and assessment results of the videos among the groups categorized according to the production sources. The Mann-Whitney U-test was used for comparisons between videos with mDISCERN scores ≥ 3 and < 3, between videos with moderate to excellent quality (GQS ≥ 3) and poor quality (GQS < 3), and between accurate videos and misleading or neither accurate nor misleading videos. P-values < 0.05 were considered statistically significant.

Results The general features (production source, duration on YouTube, video length, and total number of views, likes, and subscribers) of the 50 most-viewed videos are presented in Table 1. The web addresses, titles of the videos on YouTube, and detailed data are presented in Supplementary 1. Of the 50 videos, 17 were produced by hospitals or physicians, 17 by television or internet-based news or programs, and 9 by nutrition, wellness, or fitness channels. Additionally, three videos were produced by consumers, three by company channels or product advertisements, and two by nurses.

The average mDISCERN score of the 50 included videos was 2.2 ± 1.4. Of these videos, 46% (n = 23) contained information with high reliability. The distribution of the videos according to the mDISCERN scores was as follows: 5 points, n = 3; 4 points, n = 5; 3 points, n = 15; 2 points, n = 7; 1 point, n = 15; and 0 points, n = 5. Regarding the assessment of information quality, the average GQS score of the included videos was 2.2 ± 1.1 (2 points, generally poor). Furthermore, 19 videos (38%) were of moderate (n = 13, 26%), good (n = 5, 10%), or excellent (n = 1, 2%) quality, whereas 18 (36%) and 13 (26%) videos were of poor and generally poor quality, respectively. Additionally, 26% (n = 13) were classified as accurate videos and 48% (n = 24) as misleading videos. The remaining 26% (n = 13) were classified as neither accurate nor misleading.
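For readers who wish to replicate this kind of analysis, the following is a schematic sketch of the statistical pipeline (using Python with scipy rather than the SPSS package named above, and using synthetic stand-in data, since the study data live in Table 1 and Supplementary 1):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for 50 videos: scores, production-source group, view counts.
mdiscern = rng.integers(0, 6, size=50)    # 0-5
gqs = rng.integers(1, 6, size=50)         # 1-5
group = rng.integers(0, 6, size=50)       # six production-source categories
views = rng.lognormal(10, 1, size=50)

# Cut-offs as defined in the Methods.
high_reliability = mdiscern >= 3
moderate_or_better = gqs >= 3

# Kruskal-Wallis across production sources (as for Table 2):
_, p_kw = stats.kruskal(*[mdiscern[group == g] for g in np.unique(group)])

# Mann-Whitney U: views of high- vs. low-reliability videos:
_, p_mw = stats.mannwhitneyu(views[high_reliability], views[~high_reliability])

print(f"Kruskal-Wallis p = {p_kw:.3f}, Mann-Whitney p = {p_mw:.3f}")
```

The same pattern, applied to the real scores and engagement metrics, yields the p-values reported below.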
The inter-rater reliabilities of the mDISCERN, GQS, and accuracy assessments were high (intra-class correlation coefficients: mDISCERN = 0.896, GQS = 0.862, and accuracy = 0.906). (Abbreviations: SD, standard deviation; mDISCERN, modified DISCERN; GQS, Global Quality Scale.) The videos did not differ significantly across production sources with respect to mDISCERN score, GQS score, or accuracy (p > 0.05 for all in the Kruskal-Wallis and chi-square tests) (Table 2). In addition, there was no significant difference in the other data, including duration on YouTube, video length, number of views, number of likes, and number of subscribers, across the different production sources (p > 0.05 for all in the Kruskal-Wallis test) (Table 2).

Discussion
This study showed that 54% of the 50 most-viewed videos on the effect of vitamin C on COVID-19 were not reliable. Moreover, 62% had poor quality, and 74% were either misleading or neither accurate nor misleading. Hence, there is concern regarding the reliability, quality, and accuracy of the 50 most-viewed videos on the internet on the effect of vitamin C on COVID-19. Several previous studies have evaluated the effect of vitamin C on COVID-19, but their results were inconsistent. [10][11][12][13][14][15][16][17][18] These inconsistent results have contributed to increased confusion regarding the use of vitamin C for managing COVID-19 patients. Some recent meta-analyses concluded that there is a lack of evidence supporting the therapeutic use of vitamin C in COVID-19 patients. 19,20 We cannot determine whether vitamin C is effective in controlling COVID-19 symptoms or in reducing the mortality rate, hospitalization rate, and length of hospital stay. In addition, no study has reported a beneficial effect of vitamin C in preventing COVID-19. Therefore, information claiming that vitamin C is effective in managing COVID-19 symptoms, that it results in good therapeutic outcomes, or that it prevents COVID-19 is inaccurate or misleading. Only 26% of the videos contained accurate information regarding the effect of vitamin C on COVID-19. Furthermore, more than half of the included videos were not reliable and had poor-quality content. Inaccurate or biased videos can result in misconceptions regarding the effect of vitamin C on COVID-19, which can lead to the application of unnecessary treatment in patients. Prior to conducting this study, we assumed that the videos made by doctors would have higher reliability, quality, and accuracy than those made by other sources. However, the reliability, quality, and accuracy of videos made by doctors did not differ significantly from those of the other videos, including those made by fitness channels, television or internet-based news or programs, consumers, company channels or product advertisements, or nurses. Our study showed that even doctors posted videos that were inaccurate and of low reliability and low quality. Doctors should review previous studies thoroughly prior to making videos and create their videos based on accurate and verified facts. Likewise, individuals, companies, or broadcast stations need to consult specialists with sufficient knowledge regarding the effectiveness of vitamin C treatment in COVID-19 patients. Videos with high reliability or quality and those containing accurate information did not have more likes or subscribers than those with poor reliability or poor quality and inaccurate information.
This suggests that the public has difficulty assessing whether the information provided in videos is correct. For the public to have accurate knowledge of the effect of vitamin C on COVID-19, medical professional societies should create videos with accurate information and share them on social media platforms, such as YouTube.

Table 2. Comparison of the general features and results of the assessment of the videos among the groups according to production sources.

Conclusions
In conclusion, we found that the reliability, quality, and accuracy of the 50 most-viewed YouTube videos on the effect of vitamin C on COVID-19 were low. With the growing importance of social media in the health field, video creators, especially medical professionals, should make an effort to post content that is reliable and of high quality to ensure that correct information is disseminated. Our study is the first to evaluate the reliability, quality, and accuracy of the information provided by YouTube videos on the effect of vitamin C on COVID-19. However, it included only the 50 most-viewed videos, which is a limitation. Although our statistical analyses did not reveal any significant intergroup differences, such differences may become apparent if a larger number of videos is included in future analyses. Future studies compensating for this limitation are warranted.

Declaration of Competing Interest
The authors report no declarations of interest.
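As a compact recap of the assessment rules described in the Methods (an mDISCERN total of ≥ 3 treated as highly reliable, a GQS of ≥ 3 as moderate-to-excellent quality, and any inaccurate statement rendering a video misleading), a minimal sketch is given below; the function and field names are illustrative and are not taken from the study's scoring sheets.

```python
# Minimal sketch of the scoring rules described in the Methods; names are
# illustrative only.
from typing import Sequence

def mdiscern_total(answers: Sequence[bool]) -> int:
    """Sum of the five yes/no mDISCERN items (clear aims, reliable sources,
    balanced presentation, additional sources, areas of uncertainty)."""
    assert len(answers) == 5
    return sum(bool(a) for a in answers)

def classify_video(mdiscern_answers: Sequence[bool], gqs: int,
                   has_accurate: bool, has_inaccurate: bool) -> dict:
    score = mdiscern_total(mdiscern_answers)
    if has_inaccurate:          # any inaccurate statement -> misleading
        accuracy = "misleading"
    elif has_accurate:
        accuracy = "accurate"
    else:
        accuracy = "neither accurate nor misleading"
    return {
        "mdiscern": score,
        "highly_reliable": score >= 3,
        "quality": "moderate-to-excellent" if gqs >= 3 else "poor/generally poor",
        "accuracy": accuracy,
    }

print(classify_video([True, True, False, False, True], gqs=4,
                     has_accurate=True, has_inaccurate=False))
```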
2022-04-01T13:11:41.540Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "821a7a1ba93c4f7c008e571812e9cbe5499bc1ec", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ctim.2022.102827", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e4a06aa6ed3ef22956c7d067095578d0598b83b5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119674780
pes2o/s2orc
v3-fos-license
An explicit solution for optimal investment problems with autoregressive prices and exponential utility We calculate explicitly the optimal strategy for an investor with exponential utility function when the stock price follows an autoregressive Gaussian process. We also calculate its performance and analyse it when the trading horizon tends to infinity. Dependence of asymptotic performance on the autoregression parameter is determined. This provides, to the best of our knowledge, the first instance of a theorem linking directly the memory of the asset price process to the attainable satisfaction level of investors trading in the given asset. Introduction Sequences of independent random variables have no memory at all, Markovian processes remember their past through their present value only. In the case of processes with longer memory the entire past may influence the current evolution of the given stochastic system, e.g. in the case of fractional Brownian motion and related processes. Econometric time series exhibit various degrees of influence of the past on the present, depending on the sampling frequency. High-frequency volatility has long-range dependence while asset prices may or may not have this property, [1]. The principal motivating question of our research is the following: how does the memory of an asset's price influence the satisfaction attainable from investing into this asset ? The present paper concentrates on a Markovian setting. It precisely characterizes the dependence of performance on memory in a concrete model class where the price follows a Gaussian autoregressive processes. In the case of investors with exponential utility we find the optimal trading strategy for each finite time horizon and analyse what happens when the horizon tends to infinity. We determine the exact dependence of the asymptotic performance on the autoregression parameter and hence make the first step towards general results linking investment performance to the memory length of the underlying security price. The present paper continues previous investigations of [3,4,5], where asymptotic arbitrage in the utility sense was considered, i.e. the speed of the expected utility growth when the time horizon tends to infinity. The first two references concentrated on continuous-time models, [5] treated a model where borrowing and short-selling were forbidden and utility functions were defined on the positive axis only. The possibly negative prices of the model we consider may be acceptable in certain contexts (e.g. futures trading). Its parameters may also be tuned such that negative prices practically never occur. We nonetheless stress that our purpose is to exhibit a theoretical model whose qualitative conclusions are hoped to extend to a broader class of processes in the future so we are not bothered by the eventual negativity of prices. We stress that it occurs very rarely that optimal strategies can be determined in closed form for discrete-time investment problems. As far as we know our paper is the first to have found the explicit solution for the case of autoregressive Gaussian processes. In the present section we explain our model and the optimisation problem in consideration. In Section 2 we present our results, Section 3 contains the proofs. 
We are working with a financial market in which two assets are traded: a riskless asset with price constant 1, and a single risky asset whose price X t is an R-valued stochastic process governed by the equation where α ∈ R, σ > 0 are parameters and ε t are i.i.d. standard Gaussian, independent of X 0 . Introducing β := α − 1, we may rewrite (1) as The information flow is given by We interpret α (or, equivalently, β) as a "memory parameter" indicating how previous values of the process Xinfluence its present value. Eventually, our purpose is to find the dependence of maximal achievable utility on this parameter. A trading strategy is described by the number of units in the risky asset at t, denoted by φ t for t ≥ 1. Trading strategies are assumed (F t ) t≥0 -predictable R-valued processes (i.e. φ t is F t−1measurable for all t), in particular, short-selling is allowed. The totality of trading strategies is denoted by Φ. The wealth process corresponding to a given trading strategy (φ t ) t≥1 is where L φ 0 := L 0 is the initial capital of the investor. In other words, the terminal wealth of the investor is given by where T ≥ 1 is a time horizon. We focus on a finite horizon utility maximization problem and look for the optimal strategy (φ * t ) 1≤t≤T which satisfies sup where U : R → R is the utility function U (x) = −e −x . Note that the expectations exist but may be −∞. We are going to give an explicit solution for this problem. After these preparations we are able to give an explicit solution for the optimal strategies of the wealth process in case of the price is an autoregressive process. In this Section we prove Theorem 2.1, first we focus on the case where the investor uses past information. We consider the case T = 1, so the wealth process according to (4) takes the form We have hence we get arg min because θ 1 1 = 1. So we proved the first part of Theorem 2.1 for T = 1. Now let's assume that (7) is true for T − 1, i.e. satisfies (6) for all φ ∈ Φ. We will prove that (7) also holds for T . By Lemma 3.1, for all ψ ∈ Φ, Hence, according to (22), it remains to find φ which minimizes If we prove that φ =φ T 1 (z) does the job then we will be able to conclude that the optimal strategy for time horizon T is indeed as given in (21) for T − 1. To compute the minimiser φ we will write Q T (φ, X 0 , ε) in a sum of a quadratic, a linear and a constant function of ε. . We compute each C n separately. According to these, we can write Q T (φ, X 0 , ε) as where A T = [a ik ] ∈ R T ×T is a symmetric matrix with and c : We need to compute the conditional expected utility given by In order to evaluate this integral we need some preparation. We know that for all b ∈ R and a > 0. Lemma 3.3. Let A ∈ R n×n be a symmetric, positive definite matrix , and b ∈ R n . Then Proof. Since A is symmetric, there is an S orthonormal, and a D diagonal matrix for which SDS −1 = SDS T = A and | det S| = 1. Using Lemma 3.2 and setting y : Now we can compute the expression in (26) using Lemma 3.3: We proceed to examining the determinant of A T to prove that A T is positive definite (as (30) holds only in this case) and we will need to compute one element of the inverse matrix, A −1 1,1 . First we present a lemma which will be very useful later. Proof. Proof. First we consider the case n = T , Then we consider the case n = T , We compute the sums separately. The other terms in (33) are Substituting these into (33): Definition 3.6. 
For a matrix A ∈ R n×n let A(i, j) ∈ R (n−1)×(n−1) denote the appropriate minor matrix of A, i.e. the matrix obtained by omitting the ith row and the jth column of A. Lemma 3.7. We have Proof. We denote the elements of A T (1, 1) and A T −1 by u i,k and v i,k , respectively. Proof. We construct a matrix B T in such a way that we subtract the rows of A T multiplied by β from the first row. Then, according to Lemma 3.5, in the first row of B T all elements expect the first one (b 1,1 ) are zero. Hence, using Lemma 3.7 We need to check that Indeed, We substitute this into (36): Lemma 3.9. A T is positive definite and its determinant is Proof. For T = 1, (37) gives 1/2. During the computation ofφ 1 1 we saw that the coefficient of the quadratic term was indeed 1/2. Let's assume that (37) holds for T − 1, namely Then Since det A T > 0 for all T ≥ 1, Lemma 3.7 applies and the determinants of the matrices [a ij ] i=n,...,T ;j=n,...,T are positive for all 1 ≤ n ≤ T , therefore A T is positive definite for all T ≥ 1. Obviously, we can express det A T with the well-known Γ function. Later on we will need the value of p := A −1 T 1,1 . Now we compute it using Lemmas 3.7 and 3.8: . We need to compute the minimiser of (30). Note that in (30) only the exponent depends on φ, so we can focus on this. Let Then, we need to solve From the definition a b(φ, z) and c(φ, z) Note that we can write b(φ, z) as where e 1 = (1, 0 . . . , 0) T , and A T (:, 1) is the first column of A T . Let's substitute (42), (43) and (44) into (41): We can see from the above calculation that this φ is a global minimiser of f for a given z. Hence the minimiser φ for (24) is and we have proved the first part Theorem 2.1 in the case of using past information. As we have found explicit optimal strategies for the expected utility problem, we can now turn to (8) and (11). First we compute the maximal conditional expected utility which is Hence the maximal achievable conditional expected utility is and we have proved (8). Now we prove (11). For stable processes, in case of var(X t ) = 1 for all N (0, 1), the maximal expected utility can be found using (27): so we have proved (11). Now we focus on the case where the strategies depend only on the initial value X 0 of the autoregressive process. In this case using the strategy η = (η 1 , . . . , η T ) we get Let c : R T → R, and b : Using the notation L η T = c(η) + b T (η)ε we get from Lemma 3.2 that We need to solve the system of equations ∇g(η) = 0. We denote these equations by (E k ), where 1 ≤ k ≤ T : The partial derivatives of b are: η j α j+k−2l−2 + σ 2 η k + σ 2 β T j=k+1 η j α j−k−1 . Therefore the equations E k take the form Let k ∈ {1, 2, . . . , T − 1}. Then for E k+1 we have We define equation F k for 1 ≤ k ≤ T − 1 by substract equation E k multiplied by α from equation E k+1 , so we get Lemma 3.11. For the solutions of the system F k , k = 1, . . . , T − 1, hold, for all k = 1, . . . , T − 1. Proof. First we consider the equation F t−1 , and it is well-known that this expression tends to 1 if T (and hence also y(T )) tend to infinity.
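To make the setting concrete, the following Monte-Carlo sketch simulates the autoregressive price model discussed above, assuming the AR(1) form X_t = αX_{t−1} + σε_t with i.i.d. standard Gaussian ε_t and X_0 = 0 for simplicity, and evaluates the exponential utility U(x) = −e^{−x} of terminal wealth L_T = L_0 + Σ φ_t(X_t − X_{t−1}). The trading rule φ_t = βX_{t−1}/σ² used here coincides with the one-period (T = 1) optimum but is only an illustrative placeholder for longer horizons; it is not the multi-period optimal strategy φ* derived above, and all parameter values are arbitrary.

```python
# Monte-Carlo sketch under the assumed AR(1) dynamics X_t = alpha*X_{t-1} + sigma*eps_t.
# The trading rule below is the one-step (myopic) rule, used purely as an illustration.
import numpy as np

rng = np.random.default_rng(42)
alpha, sigma = 0.8, 1.0
beta = alpha - 1.0
T, n_paths, L0 = 50, 5_000, 0.0

utilities = np.empty(n_paths)
for p in range(n_paths):
    x_prev, wealth = 0.0, L0
    for t in range(1, T + 1):
        # myopic rule: conditional expected price change divided by its variance,
        # computed from information available at time t-1 (phi_t is predictable)
        phi_t = beta * x_prev / sigma**2
        x_new = alpha * x_prev + sigma * rng.standard_normal()
        wealth += phi_t * (x_new - x_prev)
        x_prev = x_new
    utilities[p] = -np.exp(-wealth)   # exponential utility U(x) = -exp(-x)

print("Monte-Carlo estimate of E[U(L_T)] under the myopic placeholder rule:",
      utilities.mean())
```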
2015-01-07T14:37:56.000Z
2015-01-07T00:00:00.000
{ "year": 2015, "sha1": "e9385d8b40eaffe253c9b5ec34b8b1563e5766ba", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1501.01506", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e9385d8b40eaffe253c9b5ec34b8b1563e5766ba", "s2fieldsofstudy": [ "Mathematics", "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
209387248
pes2o/s2orc
v3-fos-license
Inflammation and Progression of Cholangiocarcinoma: Role of Angiogenic and Lymphangiogenic Mechanisms Cholangiocarcinoma (CCA), or cancer of the biliary epithelium is a relatively rare but aggressive form of biliary duct cancer which has a 5-year survival rate post metastasis of 2%. Although a number of risk factors are established for CCA growth and progression, a careful evaluation of the existing literature on CCA reveals that an inflammatory environment near the biliary tree is the most common causal link between the risk factors and the development of CCA. The fact that inflammation predisposes affected individuals to CCA is further bolstered by multiple observations where the presence and maintenance of an inflammatory microenvironment at the site of the primary tumor plays a significant role in the development and metastasis of CCA. In addition, mechanisms activating the tumor vasculature and enhancing angiogenesis and lymphangiogenesis significantly contribute to CCA aggressiveness and metastasis. This review aims to address the role of an inflammatory microenvironment-CCA crosstalk and will present the basic concepts, observations, and current perspectives from recent research studies in the field of tumor stroma of CCA. INTRODUCTION Cholangiocarcinoma (CCA) is a term used to define a group of different biliary epithelial cancers and is the second most common type of liver cancer. This group of primary biliary malignancy represents three different classically recognized kinds of biliary tree cancers, classified on the basis of anatomical point of origin in the bile duct, intrahepatic CCA (iCCA), perihilar CCA (pCCA), and distal CCA (dCCA) (1). Among these three types, iCCA originates in the intrahepatic ducts and represents the second most prevalent type of primary liver malignancy (about 10% of primary liver malignancies are iCCA). Duration of survival post-resection in intrahepatic CCA is 12.4 months (2). The most common type of CCA, is the pCCA, constituting ∼50-60% of all recorded cases. pCCA comprises tumor arising from the emergence of left/right hepatic ducts at liver hilum to the confluence of cystic duct with common hepatic duct (choledocus formation) while distal CCA representing 20-30% of CCA occurs in the epithelial cells of the extra hepatic bile ducts (3). Although iCCA represents only about 5-10% of all CCA cases there is an increase in the number of iCCA among the three CCA types being observed recently (4). Internationally, CCA cases have increased since the past decade, in United States ∼5,000 new cases are diagnosed each year (5). The incidence of CCA is highest among Hispanics and Asians (2.8-3.3/100,000) and lowest (2.1/100,000) among non-hispanics and African Americans (6). With a 5 year mortality rate post metastasis of 2%, CCA, originally described as a rare form of cancer is receiving more attention compared to the past decades due to its high mortality rate (1). ETIOLOGY The number of people afflicted with CCA differs geographically primarily because of the difference in the presence of the risk factors that predispose an individual toward CCA. The number of CCA cases is higher in Asian countries (7). CCA also shows a slight bias toward the male gender (7). The number of risk factors and their extent of influence on CCA predisposition, is not as high as in other malignancies. This could be partially due to the limited number of studies focused on identifying risk factors of CCA. 
Presence of bile duct cysts, primary sclerosing cholangitis, liver cirrhosis, hepatobiliary parasitic infections such as with liver fluke, hepatolithiasis, and thorotrast exposure, are the most common risk factors. Further complicating this scenario is the fact that a majority of CCA cases develop without the presence of any of the above-mentioned risk factors (8). Hence, there is a need to look at new prognostic factors that will aid in predicting the surgical eligibility, outcome and survival of CCA patients. This has opened up new avenues in research and has identified the critical role of different inflammatory cytokines, increased lymphangiogenesis, relatively low angiogenesis, cancer associated fibroblasts (CAFs), mesenchymal stem cells (MSCs), and other factors, in the growth and progression of the different types of CCA. In the next section we will review some of these risk factors. Primary Sclerosing Cholangitis (PSC) Inflammation and inflammatory mediators form a key underlying basis for several risk factors significantly associated with CCA (9). An inflammatory and obstructive autoimmune disease of the bile ducts, PSC is one of the most important risk factors of CCA. Patients with PSC have a 400-fold higher chance of developing CCA than those without PSC. Interestingly, majority of PSC patients are between the ages of 30 and 40 (in general CCA has a reported age specificity of 60-70 years in age) at the time of diagnosis (9). Up to 50% of these cases are recorded in the first year of diagnosis of PSC (10). The presence of chronic inflammation of bile ducts typically associated with PSC is thought to be one of the reasons for this heightened risk (11)(12)(13). Other factors that serve as the link between PSC and CCA include increased proliferation of the epithelial cells of the biliary tree, cholestasis in the ducts leading to the liver and presence of mutagens produced in the bile (10,11). The role of inflammation in the growth and rapid development of CCA is also underscored by studies that identify inflammatory bowel disease (IBD) as one of the risk factors of CCA (12,13). Cirrhosis Liver cirrhosis develops as a consequence of liver diseases and/or conditions such as alcoholism and hepatitis. As a result, the liver parenchyma is dominated by fibrosis/scarring of liver tissue resulting in disruption and eventual loss of normal liver function. Cirrhosis is an important risk factor for iCCA (14) and shows high degree of association, especially, in Asian populations (15). Similar to PSC, this too has an inflammatory stimulus and a sudden rise in epithelial proliferation, presence of proinflammatory cytokines and chemokines and the generation of fibrotic nodules in liver, mediates a link between cirrhosis and CCA (3). Liver Fluke Infections Liver fluke (Clonorchis sinensis, Opisthorchis viverrini) infections have been identified as critical risk factors for CCA especially in eastern Asian countries where these infections are deemed to be endemic (16). In fact, the recognition of O. viverrini as a cancerinducing parasite by IARC (International Agency for research on Cancer) is due to its role in the development of CCA in affected individuals. These infections are associated with a rise in inflammation, generation of fibrotic nodules, obstruction of bile ducts and/or cholestasis. Chronic inflammation in the biliary tree in O. viverrini infected patients (especially in the background of gene polymorphisms and exposure to other environmental factors) leads to CCA development (17,18). 
Viral Infections Viral infections such as hepatitis B and C (HBV and HCV, respectively), serve as important risk factors for CCA (19). While HBV infection is endemic to Asian countries and thus serves as the stronger risk factor for iCCA (20), HCV is the primary causative agent for iCCA in western countries (14). Cirrhosis is a common manifestation of hepatitis and leads to the development of the chronic inflammatory background that predisposes to CCA. However, the role of hepatitis viruses in causing proliferation of the hepatic epithelium is also considered to be a reason for CCA incidence in hepatitis patients (21). Choledocholithiasis and Hepatolithiasis Choledolithiasis and cholelithiasis are conditions that involve the presence of stones in the gall bladder and common bile ducts. The presence of these gall stones causes biliary obstruction resulting in cholestasis and serve especially as a risk factor for extrahepatic CCA (22). The presence of stones or calcium deposition inside the intrahepatic bile ducts also leads to cholestasis and chronic inflammation, ultimately serving as a risk factor for CCA. In the Asian population, 5-13% of patients with hepatolithiasis develop iCCA (15,23). Other Inflammatory Conditions Chronic pancreatitis is a strong risk factor for extrahepatic CCA with an odds ratio of 6.61 (95% CI 5. 21-8.40) in comparison to the 2.66 odds ratio of iCCA (95% CI, 1.72-4.10). In chronic pancreatitis too, cholestasis and inflammation may arise leading to CCA (24). Also, the presence of cysts in the bile ducts (intrahepatic and extrahepatic) when left untreated leads to the development of iCCA and eCCA tumors, because of biliary duct obstruction and dilatation leading to cholestasis and inflammation (24,25). Thus, it is evident that inflammation forms an underlying theme in the predisposition and development of CCA. Role of Cell of Origin As CCAs originate from cholangiocytes from different anatomical locations of the biliary tree, they also exhibit considerable tumor heterogeneity that points to the possibility of diverse cellular origins (1,26). In general, CCA originates from the peribiliary gland (PBG) lining epithelium of intraand extra-hepatic ducts (IH and EH, respectively) of the biliary tree (27). Additionally, cholangiocytes and hepatocytes originating from canals of Herring can undergo mutation to give rise to tumors having varying phenotypes (28). Based on the wide range of these phenotypes, pCCA and dCCA have been characterized as adenocarcinomas mucinous in nature, while iCCA has two subtypes: iCCA arising from small bile ducts, mixed in histological phenotype and those arising directly from large intrahepatic bile duct, mucinous in histology (29,30). While bile ductular type iCCA has been recognized to be associated with solid tumor formation not having preneoplastic lesions, iCCA arising from large intrahepatic bile duct is one which is distinctly preceded by preneoplastic lesions (biliary intraepithelial/intraductal papillary neoplasm). Additionally, bile ductular type iCCA has been correlated with chronic liver disease cases such as cirrhosis in contrast to bile duct type iCCA which is mostly correlated with PSC. These differences in histology point to the role of different cells of origin of CCA (29,30). 
Stem Cell/Progenitor Niches for CCA Development The undeniable role of stem cells in CCA development and origin is proven by the fact that human hepatic stem cells (hHPSCs) are the progenitor cells giving rise to cholangiocytes and hepatocytes that mutate to give rise to CCA (28). The PBG niche starts at septal-segmental bile ducts and ends near duodenal area at hepatic pancreatic common duct. PBG niche thus distributed all across the biliary tree has a significant role in harboring a multipotent stem cell niche which forms the source of the endodermal hepatic mucinous cells that ultimately give rise to the mucinous CCA subtypes of dCCA, pCCA and bile duct type iCCA (27,(30)(31)(32). Cancer stem cells (CSCs) are more generally characterized as cellular subset that maintains tumor growth, such CSCs are recognized by the expression of extracellular markers like CD 24, CD44, CD133, epithelial cell adhesion molecule (EpCAM) etc. in liver malignancies (33). In CCA, more of these studies identifying the specific roles of CSCs are needed. As such two distinct stem cell niches are recognized for CCA development: BTSCs (biliary tree stem cell niche within PBG) and hHPSCs within canals of Herring (26,27). These findings suggest that CCA has more than one type of cell-of-origin and the differences can be looked at to develop a treatment strategy(s) that is personified from an anatomical point of view (34). FACTORS INFLUENCING THE INFLAMMATORY TUMOR MICROENVIRONMENT OF CCA CCA is one of the most desmoplastic tumors and the tumor microenvironment of CCA is characterized by a dense bed of connective tissue intertwining the tumor cells. This dense stroma is composed of a contiguously activated subset of fibroblasts called CAFs that play key roles in modulating several aspects of CCA progression (35). Further during tumor development and progression and resulting increase in cellular and metabolic demands there is often restricted access to nutrients and oxygen supply. This results in regions of the solid tumor having permanent or transient hypoxia, due to alterations in the tumor associated vasculature (36). The expanding vascular network is unable to meet up with the growing demands of the tumor and hypoxic regions persist and induce cellular pathways that promote more malignant phenotypes. In addition, there are immune cells, blood vessels, and lymphatic vessels that contribute to tumor progression which will be discussed in the following sections. Role of Cancer Associated Fibroblasts CAFs release a number of molecules functioning as extracellular matrix proteins (ECM) such as collagen I and fibronectin (35). In CCA, CAFs typically infiltrate the tumor stroma, and are differentially stimulated by a variety of molecular factors released by CCA tumor cells as well as hypoxia. The CAFs population in CCA thus is heterogenous in origin (37). Two of the main sources of these CAFs are liver (hepatic stellate cells, HSCs) and portal vein (portal fibroblasts), while bone marrow derived MSCs also serve as a source of CAFs to a minor extent (37). CCA tumor cells and other immune cells such as macrophages secrete inflammatory chemokines, cytokines and growth factors that not only signal fibroblasts from liver and portal vein to infiltrate the tumor microenvironment but also result in constitutive activation of fibroblasts (35). Platelet derived growth factor (PDGF-DD) overexpressed by CCA cells under hypoxic condition has been shown to be an important CAF infiltrating factor. 
Binding of PDGF-DD to its receptor PDGFRβ activates Cdc42, Rac1, and Rho GTPases and JNK pathways (38). PDGF-DD binding Cdc42 induces the formation of filopodia and Rac1 induces the formation of lamellipoda, thus ensuring the migration of CAFs to CCA tumor stroma. In addition to PDGF-DD, a number of other growth factors such as FGF (fibroblast growth factor), numerous factors belonging to PDGF family and TGF-β also aid CAF infiltration (39). Alpha-smooth muscle actin-positive (α-SMA) fibroblasts promote biliary cell proliferation and correlate with poor survival in CCA. CCA fibroblasts have proliferative effects that enhance tumor promotion and progression of CCA (40). CCA patients with a high population of CAFs have poorer prognosis than patients with low number of CAFs (41). Consequently, CAFspecific α-SMA is a prognostic factor of CCA patient survival (42). The tumor boosting ability of stromal CAFs was also shown using a 3D collagen matrix-based co-culturing system, in which CCA cells and CAFs isolated from a syngeneic orthotopic rat model of CCA showed a corresponding increase in the formation of structures resembling ducts from CCA cells with the increase in CAF plating density (43). Interestingly, hepatic stellate cells (HSCs) under the influence of CCA cells can also transform into CAFs and support CCA growth (44,45). These findings were further corroborated by studies in a syngeneic rat CCA model with selective stromal CAF depletion that exhibited improved host survival and decreased tumor growth (46). Factors Supporting CAF-CCA Cross-Talk CAFs in CCA show unique characteristics and gene signatures (47). Gene expression studies with human CCA sample derived CAFs showed significant differences between normal liver fibroblasts and CAFs. Most of the genes that were induced in CAFs were involved in controlling cellular metabolism, a prerequisite for the active production of cellular proteins to support the tumor microenvironment and promote tumorigenesis (47). In addition, exosomes also serve as important vessels for transporting regulatory molecular factors (between CAFs and CCA cells) thus supporting cross-talk between CCA cells and CAFs. While studies characterizing the exosomal cargo involved in CAF-CCA crosstalk has been relatively limited (48,49), it has been shown that exosomes shuttle miR-195 between CAFs and CCA (50). Stimulation of MSCs to CCA cell-derived exosomes lead to increased migration and production of inflammatory tumor promoting cytokines as CXCL1, CCL2, and IL-6 (51). In addition, several growth factors contribute to the inflammatory microenvironment. EGFA/EGFR binding has been shown to promote tumorigenesis and metastasis in CCA, another important EGFR ligand, HB-EGF was found to be highly expressed in myofibroblasts. HB-EGF activated EGF signaling promotes proliferation of CCA cells and also induces epithelial-mesenchymal (EMT) changes as well as invasion. HB-EGF secretion from fibroblasts is also activated by the pro-tumorigenic growth factor TGF-β secreted by tumor cells that in turn favors CCA growth (52). Stromal cell derived factor 1 or SDF-1 has previously been reported to be involved in promoting cancer growth as a ligand for CXCR4/CXCR7 (53). In CCA, SDF-1 expression is only produced by the stromal CAF, possibly as a result of the HSC infiltration under stimulatory signals derived from angiotensin-II secreted by cancer cells (54). 
In vitro studies indicate that when SDF-1 is expressed by HSCs, a number of protumorigenic responses are induced such Bcl-2, and activation of PI3K/Akt pathway. These responses initiate increased CCA cell invasion and prolonged survival in addition to inducing epithelial-mesenchymal transition (45,55). Tumor associated macrophages were shown to produce TNF-α that induces CXCR4 expression, thus promoting SDF-1 mediated pro-tumorigenic effects (54). CAFs are also shown to release high levels of HGF (hepatocyte growth factor) that might mediate high expression of CXCR4 (43). Role of Mesenchymal Stem Cells (MSCs) One of the most important cellular components of CCA stroma are MSCs. MSCs may activate a series of tumor signaling pathways through the release of cytokines and that may either promote or inhibit tumor development and progression (56). The function of MSCs in tissue repair is similar to the homing of MSCs to sites of tissue damage and to sites of tumor microenvironment (51). Injured tissues secrete a wide variety of inflammatory chemokines that sends signals to MSCs for repair. It has been seen in a number of studies that tumor cells too, while modulating several other factors in their microenvironment that foster a metastatic condition, secrete inflammatory chemokines that result in MSC infiltration (51). CCA cells also secrete exosomal vesicles that are shown to enhance expression of IL-6, CXCL-1, and CCL2 by MSCs. Further, conditioned medium from MSCs exposed to tumor cell-derived extracellular vesicles (EVs) caused an upregulation in STAT3 phosphorylation and proliferation of CCA cells, possibly by secretion of CCL2/MCP1, CXCL1/GRO-α, CXC3CL1/Fractalkine, IL-6, and PDGF-AA (51). Conditioned media from MSCs also has been found to upregulate the Wnt signaling pathway in CCA cells and increased nuclear translocation of ß-catenin (57). Further, coculture studies of CCA and MSCs have shown that increased CCR5 expression by tumor cells upregulates metalloproteinases MMP-2 and MMP-9 in CCA cells and thereby promoted angiogenesis and CCA metastasis (58). Role of Macrophages The CCA stroma is densely populated by different infiltrating immune cells among which tumor associated macrophages (TAMs) play an important role by regulating angiogenesis, lymphangiogenesis, tumor proliferation and also modulating matrix related changes (59,60). In a study by Wongkham et al. more than half of CCA tumor samples showed high macrophage infiltration in CCA (61). It has also been seen that CD14 + /CD16 + monocyte cells which are precursors of tissue resident macrophages are present in an increased number in CCA patients. It is significant that these circulating CD14 + /CD16 + monocytes have high VEGF and CXCL3 expression that promote tumor angiogenesis (62). In a correlation study it was seen that CD163 + M2 macrophages were associated with FOXP3 + regulatory T cell-related infiltration. Additionally, this study also showed that CCA conditioned media treatment of macrophages led to polarization bias toward M2 macrophages along with secretion of TGFβ, IL10, and VEGF-A (63). A high density of the M2-TAMs in patients is significantly associated with increased extrahepatic metastases possibly due to the effects on EMT pathways (41). ROLE OF INFLAMMATORY CYTOKINES The association between chronic inflammation and the development and progression of malignancy is significantly pronounced in onset and development of CCA (64). 
Inflammation in the tumor microenvironment of CCA is promoted by a number of cytokines and chemokines that further enhance tumor progression and aid pathways involved in distant metastasis (47). Below, we discuss several inflammatory cytokines that contribute to an inflammatory tumor microenvironment and enhance CCA progression. Tumor Necrosis Factor-Alpha (TNF-α) TNF-α is one of the most well-known mediators of inflammatory stimuli in the tumor microenvironment (65). Although TNF-α is involved in cancer progression, its more prominent pro-tumoral effects have been seen in angiogenesis and invasion of cancer cells (66,67). During pathogenesis, TNF-α elicits an immune response at tissue injury locations. TNF-α also induces hepatic stellate cells (HSCs) so that they secrete oxidative radicals such as hydroxyl radical, nitic oxide (NO), and superoxide anion and is associated with aggressive development of CCA (68). Suksawat et al. showed that CCA cells express very high levels of eNOS and phosphorylated eNOS that correlate with poor prognosis in CCA patients. This phosphorylation mediated activation of eNOS by VEGF-C is through activation of PI3K/AKT pathway. The downstream effects of eNOS/peNOS/iNOS is thought to originate from VEGF-C pathway activation (69). TNF-α has been shown to promote migration of CCA cells by upregulating expression of S100A4, vimentin and ZEB2, molecules involved in EMT transition. In neoplastic bile ducts, these molecules have been seen to be associated with upregulation of TGF-β and downregulation of E-cadherin expression, an observation that has been correlated to poor prognosis in CCA patients (70). Interleukin 1β (IL-1β) Classified as one of the most important pro-inflammatory cytokines, IL-1β has been shown to be highly expressed from HSCs. The autocrine signaling mediated by CCA cells also becomes prominent in this regard as CCA cells have been shown to produce high levels of IL-1β that further enhances the CXCL5/CXCR2 pathway that in turn activates AKT/PI3K or ERK1/2 pathways. In fact, heightened CXCL5 expression has been seen to indicate poor rates of survival in CCA patients (71,72). Interleukin 6 (IL-6) Bone marrow derived MSCs (BM-MSC) when exposed to tumor conditioned medium can transform into CAFs and stimulate tumor growth via secretion of inflammatory cytokine IL-6 in the tumor stroma. In CCA, this IL-6 overexpression was found to decrease the methylation of the EGFR promoter and enhance EGFR expression that in turn is associated with poor prognosis and overall survival (64,73). IL-6 also mediates its tumorigenic effects by causing hypermethylation based silencing of tumor suppressor genes (74). In CCA, IL-6 has been shown to activate the p38 pathway and consequently downregulate p21 WAF/CIP1 a cyclin dependent kinase inhibitor, involved in cell cycle regulation (75). IL-6 also induces upregulation of STAT3 and Mcl-1 (myeloid cell leukemia-1) genes that mediate an antiapoptotic response in neoplastic cholangiocytes (76). In addition, IL-6 also induces EMT by increasing expression of Snail and JAK/STAT and a resulting downregulation of E-cadherin and promotes CCA progression (77). Transforming Growth Factor (TGF-β) TGF-β plays dual roles in cancer progression and inhibits cell proliferation, regulates anti-inflammatory, and pro-apoptotic effects in cells under normal physiological conditions (78). It also actively promotes tumor progression and most cancer cells are resistant to its anti-proliferative effects. 
TGF-β activates the expression of its downstream genes (such as Bim) through differential phosphorylation and nuclear translocation of SMAD transcription factors (79). Mutational changes in the TGFβ receptor resulting in changes in Smad4 phosphorylation, increased cyclin D1 levels activate pathways that make CCA cells resistant to the tumor suppressive effects of TGF-β (80). Mouse model-based studies have shown that loss of expression of PTEN and SMAD4 gives rise to CCA (81). Correlation studies have shown that high levels of TGFβ is related to CCA metastasis to lymph nodes and distant sites as well as CCA recurrence (82). Consequently, inhibition of TGF-β resulted in significant reduction of CCA cell invasion (83). Further, altered TGF-β signaling in CCA cells also causes EMT-driven changes in cytoskeletal structure and CCA cell motility thus influencing cancer cell invasion through upregulation of EMT genes (84). Overall, inflammatory cytokines set the stage for CCA growth by enhancing proliferation, activation of tumor promoting mechanisms such as EMT, activation of signaling pathways that promote tumor growth and loss of cell cycle checkpoints. However, the major cause for the high mortality associated with these cancers is its ability to metastasize, that is aided by the activation of lymphangiogenic (growth of new lymphatic vessels) and angiogenic (growth of new blood vessels). The various growth factors secreted by CCA cells into their stroma and other components of the tumor microenvironment foster the development of new lymphatic and blood vessels that in turn promote tumor growth and dissemination to distant organs. LYMPHANGIOGENESIS AND ANGIOGENIC MECHANISMS IN CCA PROGRESSION AND METASTASIS Tumor cells employ several mechanisms to establish a functional and integrated vascular system comprised of both blood and lymphatic vessels to promote cellular growth and metabolism. Expansion of these vascular networks is key to migration of the tumor cells to distant sites where they establish tumor niches. A surge of recent data has implicated the roles of both lymphatic and the blood vascular in promoting CCA metastasis. Lymphangiogenesis and Lymph Node Remodeling Tumor-associated lymphangiogenesis, or the sprouting of new lymphatic vessels in the tumor microenvironment is a form of tumor-associated neovascularization that has been the focus of studies concerning the metastatic spread of highly aggressive form of cancers (85). Lymphatic involvement has emerged as a hallmark of CAA with significant lymphatic invasion or lymph node metastasis implicated with poor disease prognosis (86,87). Early metastatic CCA is characterized by a striking expansion of the intratumoral and peritumoral lymphatic vessels, which represents a key determinant of the early metastasis to the regional lymph nodes in patients rendering patients unable to opt for surgical resection. Post-surgical resection period is characterized by an increase in lymphangiogenesis and lymphatic vessel remodeling that correlates with poor post-surgical survival (86). Hence, it is critical to look at the elements in the tumor microenvironment of CCA that cause lymphangiogenesis and lymph node remodeling. As discussed above, the tumor microenvironment of CCA is enriched with abundant cytokines and chemokines necessary for paracrine signaling that promotes development of a lymphatic bed dedicated to sustaining the growth of tumor. 
CAFs actively crosstalk with CCA cells in driving the development of a rich lymphatic vasculature within a pro-lymphangiogenic tumor stroma (35). High expression of VEGF-C and VEGFR-3 has been observed in the tumor microenvironment of intrahepatic CCA patients (iCCA), that also correlated with poor prognosis in patients (88)(89)(90). VEGF-C is required for the growth of small (or intial) lymphatic vessels whereas angiopoietin 1 & 2 are need by VEGF-C to form terminal lymphatic vessels in the adult body (91,92). The dense network of lymphatic vessels and a reduced number of blood vessels, in the CCA tumor stroma also creates a hypoxic microenvironment (93). Hypoxia inducible factor-1 (HIF-1α) is known to induce lymphangiogenesis in several cancers (94). In CCA, high expression of HIF-1α promotes tumor progression and metastasis and is associated with poor patient survival (95). Interestingly, HIF-1α has also been shown to support cancer related lymphangiogenesis by upregulating the expression and subsequent secretion of Ang1/2, VEGF-C/D and PDGF-B from neoplastic cells into the tumor stroma, in several cancers as breast cancer, esophageal cancer, and oral squamous carcinoma (96)(97)(98). PDGF-D secreted from neoplastic CCA cells binds PDGFRβ on CAFs resulting in activation of ERK/NF-kB and JNK signaling networks that in turn secretes VEGF-C and promotes expansion of the lymphatic vasculature and tumor cell intravasation. Pharmacological depletion of CAFs in a CCA in vivo however, significantly reduced lymphatic vascularization and reduced lymph node metastases (99). VEGF-C expression in CCA is also mediated by M2 macrophages (63). Further, overexpression of Nerve Growth Factor Beta (NGF-B) overexpression correlated with VEGF-C overexpression, lymphatic vessel density and lymph node metastasis along with nerve cell invasion in patients of hilar CCA (100). Different correlation studies have established lymphatic vessel density (LVD) and expression of several lymphatic specific markers such as podoplanin and VEGFR-3 as prognostic biomarkers of CCA (101,102). Podoplanin is highly expressed on the surface of CAFs as well as LECs and emerged to be a prognostic biomarker in human perihilar CCA (101). Lymph node metastasis has also been correlated with a high podoplanin expression on activated CAFs in intrahepatic CCA (90). Further studies are needed to determine the role of podoplanin in tumor lymphangiogenesis in CCA. However, podoplanin mediated regulation of small GTPases as Cdc42 induces capillary morphogenesis, polarized migration, and invasiveness of LECs (103,104). Thelen et al. have demonstrated that a high lymphatic vessel density or existence of lymphangiogenesis significantly correlates with poor prognosis in patients with hilar CCA. This observation adds to the role of lymphatic vessel remodeling in cancer progression, specifically the migration of cancer cells via lymphatic vessels (104,105). In CCA, a "high" LVD is associated with increased nodal spread, and "high" LVD tumors more frequently develop recurrence (105). Indeed recent studies have shown that both peritumoral as well as intratumoral lymphatic bed is composed of capillaries that lack organization and/or drainage function thus favoring neoplastic cell infiltration because of the differential permeability or leaky nature of these vessels (106). In this regard, LECs lining these vessels also interact with tumor cells to transport them through endothelium, an event mediated by the CCL1-CCR8 chemokine axis. 
CC-type chemokine ligand 1 (CCL1) is expressed on the surface of LECs which bind CC-type chemokine receptor 8 (CCR8) on the surface of tumor cells and thus help in their trans-endothelial migration (107). Tumor lymphangiogenesis, which results in proliferation of LECs also functions in immune-evasion of the cancer cells. LECs in draining lymph nodes express on their surface the wellknown antigen PD-L1 which binds to PD-1 on the surface of cancer specific CD 8 + cells and induces their apoptosis (108). However, there is a need to study the mechanisms of lymph node remodeling and lymphatic metastasis in CCA, that would further establish the link between lymphatic vessel remodeling, tumor stroma, tumor lymphangiogenesis and CCA metastasis. Angiogenesis Tumor related angiogenesis, or the sprouting of new blood vessels is one of the key mechanisms for tumor metastasis that is promoted by angiogenic factors actively secreted by tumor cells (109). Tumor angiogenesis is pronounced in CCA, one of the most aggressive and metastatic cancers. Cholangiocytes promote neo-vascularization by enhanced expression of pro-angiogenic growth factors both at the site of primary tumor as well as in the tumor stroma of distant sites where these cholangiocytes have metastasized. Thus, a sprawling network of blood vessels created by secreted factors from cholangiocytes supports the growth and spread of cholangiocytes (110). Critical mediators and activators of angiogenesis include the growth hormones VEGF, EGF, and NGF, FGF, placental growth factor, the angiopoietins and their receptors, Tie1 and Tie2. Further, neuropilin, ephrin, and leptin are being recognized as key mediators of angiogenesis and tumor growth (111). These pro-angiogenic factors play important roles both in maintenance and growth of the primary tumor as well as neo-vascularization during CCA metastasis (111). In normal tissues, following induction of angiogenesis by pro-angiogenic factors such as the VEGF factor family proteins (VEGF-A, VEGF-B, VEGF-C), remodeling of the newly formed vessel wall takes place where intercellular tight junctions and adherens junctions are created between vascular endothelial cells (BECs), that brings about permeability and elasticity in the vessel (110). After vessel remodeling is completed in normal tissues the ensuing blood flow and establishment of normoxia (normal/physiological O 2 concentration) results in the inhibition of angiogenesis inhibition. In tumor cells however, hypoxia or a low oxygen environment in the region of the tumor induces expression of VEGF hormones. Cholangiocytes, accordingly, have been found to secrete high levels of both VEGF-A in the tumor stroma and VEGFR-2 during cholangiocyte hyperplasia (112). This suggests an autocrine mechanism by which cholangiocytes regulate their own growth. Similar to the studies in CCA lymphangiogenesis where high VEGFR-3 expression is enhanced under the influence of CAFs and tumor cells on the surface of LECs, it has been shown that a similar paracrine signaling mechanism exists in BECs where high levels of VEGFR-2 are expressed on its surface (112). Enhanced expression of VEGF-A and other members of the VEGF family such as VEGF-C cause BECs to secrete MMP-9 and MMP-7 which help in remodeling of the basement membrane and surrounding ECM and promotes tumor metastasis. 
Interestingly, it has been shown that TGF-β and VEGF are co-expressed in human CCA and that overexpression and functional interaction of TGF-β and VEGF could potentially contribute to the "angiogenic switch" and the malignant phenotype in human CCC (113). In addition to hypoxia stimulating production of VEGF, additional factors such as estrogen along with IGF1 (insulin like growth factor 1) and IGFR (IGF1 receptor) synergistically increases the expression of VEGFs such as VEGF-A, VEGF-C and their corresponding receptors in cultured CCA cells (114). In addition, metastasisassociated in colon cancer-1 (MACC1) protein upregulates VEGF-A thus favoring the growth of CCA (115). Overexpression of histidine decarboxylase (HDC) enzyme correlated with that of VEGF-A/C expression. HDC knockdown/inhibition significantly reduced tumor growth by reducing tumor cell proliferation and VEGF expression (109). microRNA Regulation of Lymphangiogenesis and Angiogenesis in CCA Growing evidence from literature suggests that microRNAs (miRNA), endogenous small non-coding RNAs (19-24 nucleotides) regulate various aspects of cholangiopathies including CCA and has been extensively reviewed elsewhere (116,117). However, studies evaluating their role in regulation of lymphangiogenesis associated with CCA is very limited. The miRNAs involved in CCA associated angiogenesis have been more extensively investigated and miR-92a, miR-126, miR-132, and miR-296 regulate several key pathways that enhance CCA associated angiogenesis (118). Overexpression of miR 16 and miR-424 has been shown to regulate the VEGF-A/FGF signaling cascades and reduce tumor cell proliferation and migration (119). miR-101, an miRNA highly expressed in liver was found to inhibit the growth of CCA by inhibiting VEGF expression (120). Understanding how miRNA regulate different molecular players involved at different levels of CCA progression also will help design better therapeutic interventions for arresting tumor progression. Further these miRNAs have the potential of being diagnostic biomarkers for CCA metastasis. CONCLUSION AND FUTURE DIRECTIONS The epidemiology of CCA varies across different regions owing to the differences in the number and intensity of the risk factors present in each place, the malignancy also varies in terms of the epidemiology of its types (iCCA, pCCA, dCCA), however based on the data above it can be postulated that inflammation of the tumor microenvironment and its associated players have a crucial role in shaping the response of the CCA cells to therapeutic strategies, their growth and progression. To this end, the early metastatic events of CCA is an area that can be pursued in the future to look for new therapeutic targets as well as to unravel the intricacies of the inflammatory tumor microenvironment-CCA crosstalk. While therapies targeting specific molecules and signaling pathways have shown promise, combinatorial therapies as a whole have come up to be effective in different cancer types. Hence, a better understanding of the different components of the tumor stroma that the CCA cells modulate and exploit in order to give rise to a pro-inflammatory and pro-tumorigenic environment can lead to a holistic understanding and approach toward treating CCA. Some of these key mechanisms that interact and promote the onset and progression and subsequent metastasis of CCA is shown in Figure 1. 
It is also evident that the aggressiveness of this cancer is directly related to its ability to metastasize and hence understanding key events that promote lymphatic metastasis in the early stages of the cancer will be critical for development of targeted therapies. Specific traits of CCA such as the high rate of lymphangiogenesis vs. the low rate of angiogenesis, deserve special research focus to unravel some of the underlying molecular pathways that mediate disease progression. AUTHOR CONTRIBUTIONS SR, SG, and SC planned and wrote the manuscript and approved the final submitted version. SR made the figure. FUNDING This work was supported by Auf-X-Grant Award from Texas A&M University Health Science Center and Department of Medical Physiology, to SC, faculty start-up funds from Texas A&M University Health Science Center College of Medicine to SC and NIH grant DK110035 to SG, and American Heart Association Scientist Development grant 17SDG33670306 to SC.
2019-12-18T14:06:31.415Z
2019-12-18T00:00:00.000
{ "year": 2019, "sha1": "30ba82fcfe32dcb2c002a38b83396b874efe9398", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fmed.2019.00293", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "30ba82fcfe32dcb2c002a38b83396b874efe9398", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
40108038
pes2o/s2orc
v3-fos-license
Dynamical analysis of quantum linear systems driven by multi-channel multi-photon states In this paper, we investigate the dynamics of quantum linear systems where the input signals are multi-channel multi-photon states, namely states determined by a definite number of photons superposed in multiple input channels. In contrast to most existing studies on separable input states in the literature, we allow the existence of quantum correlation (for example quantum entanglement) in these multi-channel multi-photon input states. Due to the prevalence of quantum correlations in the quantum regime, the results presented in this paper are very general. Moreover, the multi-channel multi-photon states studied here are reasonably mathematically tractable. Three types of multi-photon states are considered: 1) $m$ photons superposed among $m$ channels, 2) $N$ photons superposed among $m$ channels where $N\geq m$, and 3) $N$ photons superposed among $m$ channels where $N$ is an arbitrary positive integer. Formulae for intensities and states of output fields are derived. Examples are used to demonstrate the effectiveness of the results. In the quantum control community, the responses of quantum systems to single-photon states and multi-photon states have been studied in the past few years. The phenomenon of cross phase shift on a coherent signal induced by a single photon pulse was investigated in [31]. Gough driven by single-photon states or Schrodinger's cat states, [21], [22]. This theory has been applied to the study of phase modulation in [13]. Quantum master equations for an arbitrary quantum system driven by multi-photon states were derived in [4]. Quantum filters (stochastic master equations) for multi-photon states have been derived in [44], for both homodyne detection and photodetection. Numerical simulations carried out in [44] for a two-level system driven by a 2-photon state revealed interesting and complicated nonlinear behavior in this photon-atom interaction. When a two-level atom, initialized in the ground state, is driven by a single photon, the exact form of the output field state was derived in [37]. More discussions can be found in, e.g., [16], [28], [35] and references therein. In [57], the analytic expression of the output field state of a quantum linear system driven by a single-photon state was derived. Specifically, a class of m-channel m-photon states was given in [57,Eq. (44)]. For such states, each input channel has exactly one photon whose pulse shape is determined by a single-variable function ν k , (k = 1, . . . , m). Moreover, there exists no statistical correlation among photons in different channels; that is, these m-photon states are separable states. A more general class of m-channel m-photon states was given in [57,Eq. (95)]. For this class of states, quantum correlations are allowed to exist among different channels. Unfortunately, because photon pulses are functions of two variables, ξ jk , the extent of quantum correlation is severely limited. The research initialized in [57] has been continued in [53], where more general forms of multi-photon input states were considered. In the study of [53], different channels may have different numbers of photons. To be specific, as shown in [53,Eq. (22)], the jth channel may have ℓ j photons. The m-channel multi-photon states defined in [53,Eq. (22)] belong to the class of m-channel multi-photon states defined in [53,Eq. (34)]. This class of states also contain those defined in [57,Eq. 
(95)] as special cases (when ℓ j = 1 for all j = 1, . . . , m). However, because photon pulses are functions of three variables, ξ ijk , the extent of quantum correlation among the input channels is still limited. An m-channel multiphoton state is then proposed in [53,Eq. (41)], where the jth channel is an ℓ j -photon state whose pulse shape is given by a function of ℓ j variables, Ψ j (t 1 , . . . , t ℓj ). Therefore, the states defined in [53,Eq. (41)] somehow is more general than those in [53,Eq. (34)]. It is worth noting that the states defined in [53,Eq. (41)] are separable states, that is, there exists no correlation among different channels. The m-channel multi-photon states defined in [53,Eq. (41)] is subsequently extended to a broader class of states as given in [53,Eq. (43)], which allow quantum correlations among different input channels. Unfortunately, The states defined in [53,Eq. (34)] and [53,Eq. (43)] appear rather abstract and mathematically intractable. In fact, all the examples studied in [53] focused on separable input states. In other words, none of these examples is for the general multi-channel multi-photon states defined in [53,Eq. (34)] or [53,Eq. (43)]. Actually, in the existing literature it appears hard to find multi-channel multi-photon states that are described by [53,Eq. (34)] or [53,Eq. (43)]. It is fair to say that the multi-photon states studied in [53] are either too simple (separable states) or too complicated (such as those given in [53,Eq. (43)]). The purpose of this paper is to provide a direct study of the dynamical response of quantum linear systems to initially entangled m-channel multi-photon states. Unlike those separable states defined in [57,Eq. (44)] and [53,Eq. (41)], and those states defined in [57,Eq. (95)] and [53,Eq. (34)] whose pulse shapes are functions of two or three variables, the pulse shapes of the states defined in this paper are characterized by functions of m (or N ≥ m) variables, more detail is given in Eqs. (41) and (119). Examples presented in this paper demonstrate that these types of m-channel multi-photon states can be easily processed by quantum linear systems. The states in Eqs. (41) and (119) are subsequently extended to more general classes of states Eqs. (99) and (143), respectively. These classes of states are very general as they contain many forms of multi-channel multi-photon states as special cases, see. e.g. [29,Chapter 6], [41], [11]. Moreover, these states are mathematically more tractable than those in [53,Eq. (43)]. Therefore, the study carried out in this paper is more relevant to quantum linear feedback networks and control. Three types of multi-channel multi-photon states are studied in this paper. Case 1): m photons are superposed among m channels. Specifically, the m-channel m-photon states are defined in Subsection 3.1. When the underlying quantum linear system is passive, the analytic expression of the output intensity is presented in Subsection 3.2, see Theorem 1. Moreover, the steady-state output field state is investigated in Subsections 3.3 and 3.4, see Theorems 2 and 3. When the underlying quantum linear system is non-passive, the steady-state output state is no longer an m-channel m-photon state, the explicit form of the output state is presented in Subsection 3.5, see Theorem 4. Case 2): N photons are superposed among m channels where N ≥ m. For this case, we assume the underlying quantum linear system is passive. 
Then the analytical expressions of the output state are derived, see Theorems 5 and 6. Case 3): N photons are superposed among m channels where N is an arbitrary positive integer. The m-channel N -photon states are presented in Subsection 5.1. And in Subsection 5.2, the steady-state output state of a quantum linear passive system driven by an m-channel N -photon input state is derived, see Theorem 7. Notation. The complex unit √ −1 is denoted by i. Given a column vector of complex numbers or operators x = [x 1 · · · x k ] T , denote x # = [x * 1 · · · x * k ] T , where the superscript " * " stands for complex conjugation or Hilbert space Let I k be an identity matrix and 0 k a zero square matrix, both of dimension k. Denote J k = diag(I k , −I k ). (The subscript "k" may be omitted when it causes no confusion.) Given a matrix X ∈ C 2j×2k , define X ♭ J k X † J j . Given a matrix A, let A jk denote the entry on the jth row and kth column. Let m be the number of input channels. Let n be the number of degrees of freedom of a given quantum linear system, namely the number of quantum harmonic oscillators. The ket |φ denotes the initial state of the system, and |0 stands for the vacuum state of free fields. The convolution of two functions f and g is f ⊛g. Given a function f (t) in the time domain, define its two-sided Laplace transform [43,Eq. (13)] to be We set = 1 throughout this paper. Quantum linear systems A quantum linear system G is shown schematically in Fig. 1. In this model, the quantum linear system G consists of a collection of n interacting quantum harmonic oscillators a = [a 1 , . . . , a n ] T . Here, a j (j = 1, . . . , n), defined on a Hilbert space H G , is the annihilation operator of the jth quantum harmonic oscillator. The adjoint operator of a j , denoted by a * j , is called a creation operator. These operators satisfy the canonical commutation relations [a j , a * k ] = δ jk . The input light fields are represented by a vector of annihilation operators b in (t) = [b in,1 (t), . . . , b in,m (t)] T ; the entry b in,j (t) (j = 1, . . . , m), defined on a Fock space F , is the annihilation operator for input channel j. The adjoint operator of b in,j (t), denoted by b * in,j (t), is also called a creation operator. However, unlike a j and a * k , these annihilation and creation operators satisfy the following singular commutation relations, [17], [20,Eq. (20)], Notice the presence of the Dirac delta function δ(t − r) in Eq. (2). Mathematically, it is often more convenient to work with integrated annihilation and creation operators, which are defined respectively to be B in (t) t t0 b in (r)dr and B # in (t) t t0 b # in (r)dr, where the lower limit t 0 of the integral is the initial time, namely the time when the system and the fields start interaction. The gauge process (also called number process) is defined by the following m-by-m matrix function, [17,Chapter 11], [20, Section III.A], [57,Eq. (11)], In this paper, we deal with canonical quantum input fields, that is, the only non-zero Ito products are, [17,Chapter 11], [19], [20], [57,Eq. (12)], dB in,j (t)dB * in,k (t) = δ jk dt, dΛ jk dB * in,l (t) = δ kl dB * in,j (t), dB in,j (t)dΛ kl (t) = δ jk dB in,l (t), dΛ jk (t)dΛ lr (t) = δ kl dΛ jr (t), j, k, l, r = 1, . . . , m. (4) The dynamics of the open quantum linear system G can be described conveniently in the (S − , L, H) formalism [19], [56]. 
Here, S − is a constant unitary matrix of dimension m, which can be used to model static devices such as phase shifters and beamsplitters. The operator L describes how the system is coupled to the fields, and is of the form L = C − a + C + a # with C − , C + ∈ C m×n . For example, when an optical cavity is driven by a light field, L can be of the form L = √ κa, where a is the annihilation operator of the quantum harmonic operator for the cavity (the cavity mode) and κ > 0 is the coupling strength. The operator H stands for the initial system Hamiltonian, which can be written as H = 1 2ȃ † ∆ (Ω − , Ω + )ȃ with constant matrices Ω − , Ω + ∈ C n×n satisfying Ω − = Ω † − and Ω + = Ω T + . For example, for the cavity just mentioned, upon a constant shift 1 2 where ω d is the detuning frequency between the cavity mode and the carrier frequency of the input light field. Then, in Ito form the Schrodinger's equation for the temporal evolution of the open quantum linear system in Fig. 1 is, [23], [19,Eq. (30)], [20,Eq. (22)], [57,Eq. (13)], with U (t, t 0 ) = I (identity operator) for all t ≤ t 0 . In the Heisenberg picture, system operators evolve according toȃ(t) = U (t, t 0 ) * ȃ U (t, t 0 ) (component-wise on the components ofȃ). The output fieldb out (t) carries away information of the system after interaction, and is defined byb where the constant system matrices are The gauge process Λ out (t) of the output fields, satisfies the following quantum stochastic differential equation (QSDE), [19], [56,Eq. (16)], The diagonal elements of Λ out (t) are operators for the total number of photons in each of the m output channels, counted from time t 0 to t. The intensity of the output field, namely the rate of change of the number process Λ out (t), is defined to be, [57,Eq. (45)],n In Eq. (11), |φ is the initial system state and |Ψ is the initial input field state, respectively. Therefore, the ket vector |φΨ is the joint system-field state. The bra vector φΨ| is the Hilbert space conjugate of the ket vector |φΨ . In this paper, |φ is assumed to be the vacuum state, whereas the specific form of |Ψ will be given in due course. The quantum linear system G is said to be asymptotically stable if the matrix A is Hurwitz stable [55, Sec. III-A]. In analogy to classical (namely non-quantum) control theory, the impulse response function of the system G is, [57, Eq. which enjoys the following block form Solving Eqs. (6)- (7) we haveb Remark 1 If the interaction starts in the remote past, namely t 0 → −∞, and if the system is asymptotically stable, Eq. (16) indicates that the initial system information has no influence on the output field. This is also true in classical control theory, see, e.g., [26]. Define a matrix function is the inverse function of the impulse response function g G (t). According to Eqs. (16) and (18) A class of quantum linear passive systems is obtained when C + = 0 and Ω + = 0 in Eq. (8). In this context, it is sufficient to work in the annihilation-operator representation. To be specific, it suffices to studẏ where Finally, we cite the following result for the Gaussian transfer of quantum linear systems. Lemma 1 [57, Theorem 2] Let the quantum linear system G be initialized in the vacuum state |φ and let the input field be in the vacuum state |0 . Then, the steady-state output field state is a Gaussian state with the spectral density where G[iω] is the two-sided Laplace transform of g G (t), as introduced in the Notation part. 
In particular, if the system is passive, then the output is also in a vacuum state, that is, Tensors The concept of tensors and their associated operations are essential mathematical machinery for the study carried out in this paper [39], [25], [53]. In this subsection, we briefly discuss several tensors. Given an m×m matrix function A(t) and an m-way m-dimensional tensor function Eq. (24) may be re-written in a more compact form where the subscript "t" indicates the time domain, while the superscript "m" implies the m-fold convolution. Applying the m-dimensional Fourier transform (1) to Eq. (24), we get where In analogy to Eq. (25), we may also write Eq. (26) in the following compact form where the subscript "ω" indicates the frequency domain. Given an m-way m-dimensional tensor function ϕ(iω 1 , . . . , iω m ), its norm is defined to be We end this subsection by citing the following result. More tensors and related operations will be discussed in due course. 3 m photons superposed among m channels In this section, we investigate how a quantum linear system responds to m-photon input states. We first define n-photon states in Subsection 3.1, then derive the output intensity in Subsection 3.2, after that, we present the analytic form of the output field state when the underlying quantum linear system is passive in Subsections 3.3 and 3.4, finally we turn to the non-passive case in Subsection 3.5. m-photon states In this subsection we introduce m-photon states. We begin with the single-channel single-photon state case. In this case, m = 1. A single-channel single-photon state can be defined by Here, ξ is a square-integral function, i.e., ξ ∈ L 2 (t, C). The Euclidian norm of ξ, ξ where Λ(t) is the gauge operator defined in Eq. (3). Eq. (31) shows that there is only one photon in the field. On the other hand, it can be readily shown that That is, the average field amplitude is zero. It is worth noting that |1 ξ is not a single-photon coherent state that can be defined by where α = e iθ is a complex number. Actually, for |α ξ Eq. (31) still holds, but Eq. (32) does not. Now, let us look at two-channel two-photon states, which can be defined to be Again, the ordinary function ξ(t 1 , t 2 ) is required to normalize the state, namely Ψ in,2 |Ψ in,2 = 1. This is guaranteed by (Notice that in this case, the symmetry condition ξ(t 1 , t 2 ) = ξ(t 2 , t 1 ) is not necessary.) It can be readily shown that where Λ(t) is the gauge process defined in Eq. (3). Eq. (38) means that each channel contains one photon. Moreover, if we use the single-photon state ∞ −∞ γ(t)b * in,2 dt |0 2 to measure the second channel, we will get a single-photon state for the first channel, which is given by In general, Eq. (36) defines a state for which the two photons are entangled. However, for the special case that ξ(t 1 , t 2 ) = ξ 1 (t 1 )ξ 2 (t 2 ), we end up with a product state That is, there exists no statistical correlation between these two photons. We are ready to introduce general m-channel m-photon states. Such states can be of the form That is, m photons are superposed among m input channels. By analogy with Eq. (37), it can be readily shown that the normalization condition for |Ψ in is The bra vector Ψ in |, namely the conjugate of the ket vector |Ψ in , is For the m-photon state |Ψ in , it is clear that That is, the average field amplitude of the input light field in the m-photon state |Ψ in is 0. Next, we look at two-time correlations. For each k = 1, . . . 
, m, define a function of two variables ζ k (t, r) to be Also, define a diagonal matrix function Clearly, Λ (t, r) # = Λ (r, t) and dt Λ (t, t) = I m . Furthermore, it can be shown that Remark 2 If all the input fields are in the vacuum state, it is well-known that In this case, the field is Markovian. The second term on the right-hand side of Eq. (47) reveals the non-Markovian nature of the m-photon input fields. Moreover, due to the presence of the pulse shape ψ in in all the diagonal entries of Λ (t, r), the inputs can be regarded as correlated non-Markovian noise inputs. The passive case: output intensity In this subsection, for the quantum linear passive system (20) driven by the m-photon state |Ψ in defined in Eq. (41), we derive a formula for the output intensityn out (t) defined in Eq. (11). Recall that in the passive case the matrix C + = 0. Substitution of L(t) = C − a(t) into Eq. (10) yields Define a matrix function of dimension n to be Define an n-by-m matrix function The following theorem is the main result of this subsection, which gives an explicit procedure for computing the output intensityn out (t). Theorem 1 Assume the underlying quantum linear passive system G is asymptotically stable. The matrix function f (t) defined in Eq. (51) is the solution to a system of ordinary differential equations (ODEs) with the initial condition f (t 0 ) = 0, where The output intensity is given bȳ in which the covariance function Σ(t) solves the following matrix equatioṅ with the initial condition Σ(t 0 ) = I n . Proof. We prove this theorem in three steps. The passive case: state transfer In this subsection, we derive the analytical form of the output state of a quantum linear passive system driven by the m-photon state |Ψ in defined in Eq. (41). The following is the main result of this subsection. Theorem 2 Let G be an asymptotically stable quantum linear passive system which is initialized in the vacuum state and is driven by the m-photon input |Ψ in defined in Eq. (41). The steady state (t 0 → −∞) of the output field is an m-photon state of the form where the output pulse is given by the m-fold convolution with the transfer function g G − (t) given in Eq. (21). Remark 3 In particular, when the input pulse is of a product form ψ in (t 1 , . . . , t m ) = ξ 1 (t 1 ) · · · ξ m (t m ), where the notation has been used. In this case, Eq. (72) reduces to ψ out,j1,...,jm (r 1 , . . . , Define ξ out,jk (r) Then, by Theorem 2, where Interestingly, |Ψ out in Eq. (78) can also be derived by means of [57,Theorem 5]. Therefore, Theorem 2 generalizes the main result in [57]. Finally, it is worth pointing out that, in general, even if the input state |Ψ in is a separable state (74), the output state |Ψ out in Eq. (78) is not a separable state any more, as illustrated by the following two examples. Example 1 (beamsplitter) A beamsplitter is a static passive device widely used in optical laboratories, [27], [3], [34], see Fig. 2. In terms of the (S − , L, H) formalism, a beamsplitter can be modeled by L = 0, H = 0, and Let the 2-photon input state be If the input pulse shape ψ in (t 1 , t 2 ) is not factorizable, then the two input channels are initially entangled. By (68), the output state is which is exactly [29, Eq. (6.8.7)]. Example 2 (optical cavity) An optical cavity is a system composed of totally reflecting and/or partially transmitting mirrors [3,Chapter 5.3], [46,Chapter 7], [34]. A widely used type of optical cavities is the so-called Fabry-Perot cavity. 
A single-mode Fabry-Perot cavity with two input channels, as shown in Fig. 3, can be modeled by parameters Here, κ 1 and κ 2 are coupling strengths between the cavity and the external fields, and ω d is the detuning between the resonant frequency of the cavity and the external fields. By Eq. (20) we have the following QSDEṡ Let the input state be that given in Eq. (81). In what follows we calculate the steady state of the output field. Define functions and By Theorem 2, the output state is In particular, if that is, the input is a tensor product state of two single-photon states, one for each channel, then Eqs. (85)-(87) reduce to Φ 1 (r 1 , r 2 ) = ξ 2 (r 2 )η 1 (r 1 ), (90) and where As a result, Eq. (88) becomes This means that the influence of the system on the first channel is negligible and the output fields are almost in a product state. This is quite reasonable: when the coupling strength κ 1 → 0, the first channel has no interaction with the system, so the state of channel one does not change. On the other hand, in the limit κ 1 → 0, the non-separable state in Eq. (88) becomes Notice that where That is, the output state is still an entangled state. And the system does have influence on the first channel. This cannot happen when the two input channels are separable, as shown in Eq. (95) above. The passive case: the invariant set Define a class of m-channel m-photon states of the form |the m way m dimensional tensor function ψ normalizes |Ψ .} . Here, b * k (t) (k = 1, . . . , m) is a creation operator. In the definition of the class F 1 , we don't specify whether b * k (t) is for input or for output. In fact, in this subsection we show that F 1 is invariant under the linear action of a quantum linear passive system. That is, both input and output states are elements of F 1 . Clearly, |Ψ in = |ψ ↑ in ∈ F 1 . On the other hand, by Theorem 2, the output state |ψ out ∈ F 1 too. This motivates us to study more general pulse shape transfer than that in Theorem 2. Actually, we have the following result. Theorem 3 Let the input state for an asymptotically stable quantum linear passive system G be an element |Ψ in ∈ F 1 with pulse shape parametrized by an m-way m-dimensional tensor function ψ in . Then the steady-state output state |Ψ out is also an element in F 1 with pulse shape given by Alternatively, in the frequency domain, Proof. By analogy with the proof of Theorem 2, we know that for all k = 1, . . . , m and j k = 1, . . . , m, Consequently, where ψ out,i1,...,im (r 1 , . . . , r m ) m j1,...,jm=1 The non-passive case A quantum linear system is said to be non-passive if C + = 0 and (or) Ω + = 0 in Eq. (8). Non-passive elements, like optical parametric oscillators (OPOs), are key ingredients of quantum optical systems. [27], [3], [34]. In this subsection, we study the output state of a non-passive linear system driven by an m-photon input state. The following result shows how a quantum non-passive linear system processes multi-photon states. Theorem 4 Let G be an asymptotically stable quantum linear system which is initialized in the vacuum state and is driven by the m-photon input |Ψ in defined in Eq. (41). The steady state (t 0 → −∞) of the output field is where b, ψ is that given in Eq. (111), and ρ ∞ is a zero-mean Gaussian state for the joint system whose covariance function is given by Eq. (22) in Lemma 1. Proof. 
In steady state, the joint system-field state is The steady-state output field state is obtained by taking the partial trace of ρ ∞ with respect to the system, that is, According to Eq. (19),b This, together with Eq. (17), yields Substituting it into Eq. (114) we obtain which is exactly Eq. (112). N (N ≥ m) photons superposed among m channels In this section, we study how quantum linear passive systems respond to N photons that are superposed over m channels, thus generalizing the results in Section 3. The case when N ≥ m Let the input field be in a state where N photons are superposed among m input channels share. Specifically, the input state is defined as In Eq. (119), the positive integers k i satisfy m i=1 k i = N , and N N > 0 is the normalization coefficient. Clearly, the ith input channel has k i photons. In what follows, we derive the output field state. Theorem 5 The steady-state output field state of an asymptotically stable quantum linear passive system G driven by the N -photon input state |Ψ in is |Ψ out (120) where, for all l Proof. We prove this result by means of an approach different from that for Theorem 2. In the Schrodinger picture, the output field state can be obtained by tracing out the system. That is, whereb By Eq. (19),b Alternatively, Substituting Eq. (125) iuto Eq. (123) yieldsb Consequently, Substituting Eq. (127) into Eq. (122) gives where, in the last step, the definition of ψ Example 3 Given a beamsplitter and a 3-photon input state of the form Clearly, in this case, there are two input channels (m = 2), the total number of photons is N = 3, while there is one photon in the first channel (k 1 = 1) and two photons in the second channel (k 2 = 2). Moreover, the element of the impulse response g G − (t) ≡ S − . Simple calculation yields So the normalization constant N 3 is Finally, if that is, the input is a product state where each channel contains exactly one photon. In this case, Eq. (136) reduce to That is, all the states become product states. If we ignore pulse shapes, we may identify |Π 30 with |3 1 , |Π 03 with |3 2 , |Π 21 with |2 1 ⊗ |1 2 , and |Π 12 with |1 1 ⊗ |2 2 . Accordingly, the state reduces to The invariant set In this subsection, we define a set of N -photon states and show that this set is invariant under the steady-state linear action of a quantum linear passive system G. The discussions in the subsection generalizes those in subsection 3.4. Then the output pulse shape defined in Eq. (121) can be expressed in the following tensor form Motivated by this, define a class of N -photon state The following result shows that the set F 2 is invariant under the steady-state action of a quantum linear passive system G. Theorem 6 The steady-state output state of the quantum linear passive system G driven by an N -photon input |Ψ in ∈ F 2 with pulse information encoded by an m-way m-dimensional tensor matrix ψ in is another element |Ψ out ∈ F 2 , whose pulse information is encoded by an m-way m-dimensional tensor matrix ψ out with elements, ψ l 1 1 ,...,l 1 k 1 ,...,l m 1 ,...,l m km out (r 1 1 , . . . , r 1 k1 , . . . , r m 1 , . . . , r m km ) In compact form, Eq. (144) can be written as This result can be established in a similar way as Theorem 3. So the proof is omitted. An arbitrary number of photons superposed in m input channels In all the previous discussions, we have implicitly assumed that the total number of photons is no less than the number of input channels. In this section, we remove this constraint. 
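Before moving on to an arbitrary number of photons, the beamsplitter examples above admit a simple numerical illustration. The sketch below (Python/NumPy; the Gaussian pulse shapes and the 50/50 splitting ratio are illustrative assumptions, not taken from this paper) treats the special case of Example 1 in which the two-photon input is the product state ξ1(t1)ξ2(t2), one photon per channel. Because a beamsplitter is static, g_{G−}(t) = S_− δ(t), and the probability of detecting one photon in each output channel reduces to the familiar Hong–Ou–Mandel expression (1 − |⟨ξ1, ξ2⟩|²)/2: identical pulses never produce a coincidence, one concrete manifestation of the fact, noted after Remark 3, that a separable input generally yields a non-separable output.

```python
import numpy as np

# Illustrative sketch (assumptions stated above): Hong-Ou-Mandel coincidence for a
# 50/50 beamsplitter driven by a product two-photon input psi_in(t1,t2) = xi1(t1)*xi2(t2).
# For a static beamsplitter g_{G-}(t) = S_- delta(t), and the probability of one photon
# in each output channel is (1 - |<xi1, xi2>|^2) / 2.

t = np.linspace(-10.0, 10.0, 4001)
dt = t[1] - t[0]

def gaussian_pulse(t, t0, sigma=1.0):
    """Numerically normalized Gaussian pulse shape centred at t0 (illustrative choice)."""
    xi = np.exp(-(t - t0) ** 2 / (4 * sigma ** 2))
    return xi / np.sqrt(np.sum(np.abs(xi) ** 2) * dt)

def coincidence_probability(xi1, xi2):
    """P(one photon in each output) for a 50/50 beamsplitter, via the pulse overlap."""
    overlap = np.sum(np.conj(xi1) * xi2) * dt
    return 0.5 * (1.0 - np.abs(overlap) ** 2)

for delay in [0.0, 1.0, 3.0, 10.0]:
    xi1 = gaussian_pulse(t, 0.0)
    xi2 = gaussian_pulse(t, delay)
    print(f"delay = {delay:5.1f}  ->  P_coincidence = {coincidence_probability(xi1, xi2):.4f}")
# Identical pulses give P = 0 (the photons bunch); fully distinguishable pulses give P = 1/2.
```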
m-channel N -photon input states In this subsection, we first present a class of m-channel N -photon input states where N is an arbitrary integer. Some illustrative examples are also given. Let an N -photon input state be where N is an arbitrary positive integer and N N is the corresponding normalization coefficient. The input state |Ψ in is parametrized by the pulse shapes ξ jk (t) (k = 1, . . . m and j = 1, . . . , N ). Clearly, different combinations of ξ jk (t) give rise to different multi-photon multi-channel states. By the notation in Eq. (75), the N -photon input state in Eq. (146) is Remark 4 A class of photon-Gaussian states has been defined in [57,Eq. (95)]. If ρ R used there is of the form ρ R = |φ 0 ⊗m , then the resulting states are m-channel m-photon states. They are in fact the special case of the m-channel N -photon above-defined (the case when N = m). We first study two examples of this type of multi-photon states. Example 4 When N = 1 and m = 2, by Eq. (147), the input state is That is, a single photon is superposed over two input channels. If ξ 11 = ξ 12 ≡ ξ and ξ = 1, then the normalization condition requires that mathcalN 1 = 1. In this case, Eq. (148) becomes For the state in Eq. (149) it can be readily shown that That is, the photon is not localized in one of the two channels. Instead, it is indeed shared by two channels. This reveals the particle property of photons. On the other hand, if ξ 11 ≡ 0, then In this case, the first channel is in the vacuum state and the second channel is in a single-photon state. Example 5 When N = 2 and m = 3, by Eq. (147), the input state is That is, three channels share two photons. If in particular ξ 11 ≡ ξ 22 ≡ 0, then the input state becomes if further ξ 21 ≡ ξ 13 ≡ 0, then Eq. (153) reduces to which is a separable state. Finally, if ξ 23 (t) ≡ 0, then Eq. (154) reduces to That is, the second channel has two photons while the first and third channels are in the vacuum states. As demonstrated by the above two examples, Eq. (147) provides flexibility for specifying various types of multiphoton states. The following is the main result of this subsection, which shows the linear transfer of the pulse shapes from the input channels to the output channels. Multi-photon output states In this subsection, we derive the analytic form of the steady-state output state of a quantum linear passive system driven by an m-channel N -photon state defined in Eq. (146). The following is the main result of this section. Theorem 7 Let G be an asymptotically stable quantum linear passive system which is initialized in the vacuum state and is driven by the N -photon input |Ψ in defined in Eq. (147). The steady state (t 0 → −∞) of the output field is another N -photon state of the form where the output pulses are given by η lj (t) m k=1 ∞ −∞ g lk G − (t − r)ξ jk (r)dr, l = 1, . . . , m, j = 1, . . . , N. (157) Proof. The proof is similar to that for Theorem 2. In the limit t 0 → −∞, by Eqs. (17) and (19) where the output pulse functions η lj (t) are those given in Eq. (157). Conclusion In this paper, we have studied the dynamics of quantum linear systems in response to multi-channel multi-photon states. We have derived the intensity of the output field which can be used to investigate the influence of quantum linear systems on quantum correlations of light fields. We have also presented the explicit formula of the steady-state output field state for several classes of multi-channel multi-photon input states. 
The results presented here are very general and hold promise for applications in photon-state-based quantum coherent feedback networks.
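As a closing numerical illustration of the pulse-shape transfer formulae (Theorem 2 in the single-channel case, or Eq. (157) of Theorem 7 with m = N = 1), consider a one-input passive cavity with S_− = 1, L = √κ a and H = ω_d a*a — a single-channel variant of the Fabry–Perot cavity of Example 2, adopted here purely for simplicity. Its impulse response is g_{G−}(t) = δ(t) − κ e^{−(κ/2+iω_d)t} for t ≥ 0, so a single-photon input pulse ξ is mapped to η(t) = ξ(t) − κ∫_{−∞}^{t} e^{−(κ/2+iω_d)(t−r)} ξ(r) dr. The sketch below (Python/NumPy; the exponential input pulse and parameter values are illustrative choices) evaluates this convolution on a grid and checks that the output pulse stays normalized, as it must for a passive (all-pass) system.

```python
import numpy as np

# Illustrative sketch (assumptions noted above): single-photon pulse-shape transfer
# through a one-input passive cavity with coupling kappa and detuning w_d.
# Output pulse: eta(t) = xi(t) - kappa * int_{-inf}^{t} exp(-(kappa/2 + i*w_d)(t-r)) xi(r) dr.

kappa, w_d = 2.0, 1.0
t = np.linspace(0.0, 40.0, 8001)
dt = t[1] - t[0]

# Illustrative input pulse: a normalized decaying exponential starting at t = 0.
xi = np.exp(-0.5 * t)
xi = xi / np.sqrt(np.sum(np.abs(xi) ** 2) * dt)

# Discretized causal convolution with the cavity kernel k(t) = exp(-(kappa/2 + i*w_d) t).
kernel = np.exp(-(kappa / 2 + 1j * w_d) * t)
eta = xi.astype(complex) - kappa * np.convolve(kernel, xi)[: len(t)] * dt

print("input  norm^2:", np.sum(np.abs(xi) ** 2) * dt)   # = 1 by construction
print("output norm^2:", np.sum(np.abs(eta) ** 2) * dt)  # ~ 1: a passive system preserves the norm
```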
2017-05-10T04:44:16.000Z
2016-09-29T00:00:00.000
{ "year": 2017, "sha1": "e47bc3ee84582b9e603e975fbdd4e6633f3e6836", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "e47bc3ee84582b9e603e975fbdd4e6633f3e6836", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics", "Computer Science" ] }
81958163
pes2o/s2orc
v3-fos-license
Mobile phone use among children and its impact on hearing: Our experience at a tertiary care teaching hospital Introduction: The wide use of mobile phones in the world has raised the possibility of exposure to radiofrequency waves causing many side effects to the health of users, and even more in children. Aim of the study: To study the impact of radiofrequency waves of mobile phones on the hearing of the children in a tertiary care teaching hospital. Material and methods: We studied two groups of children of age less than 16 years. One group comprised 52 mobile phone users for more than one hour per day for more than one year, and the second group comprised 52 children who were non-users or used a mobile phone for less than one hour per day for less than one year. Results: The children using mobile phones for more than one hour (2-3 hours) per day for more than one year had 5 dB loss in 7.2%, 10 dB loss in 4.5%, and 15 decibel loss in 2.5% of cases. There was a 5 dB loss in 7.9 and 10 dB in 5.5% of cases in those using mobiles 3-4 hours per day. There was sensorineural hearing loss in 28.6% of the children and 3.6% in the control group. Conclusions: This study did not show any significant hearing loss in children using mobile phones. INTRODUCTION The most effective and advanced communication device in this century is the mobile phone. Mobile phones are not only used by adults and qualified persons but also by children for playing games and many other forms of entertainment in. The mobile phone is nowadays a widely used electronic item in the world, affecting all ages of society. Nowadays children are a more noticeable group using mobile phones, and they are capable of using any advanced type of mobile phone. The increased use of mobile phones has now focused attention on the biological effects and health hazards due to radiofrequency exposure from mobile phones. Approximately 85% of Americans, 60% of the British, and more than 45% of Indians are using mobile phones [1]. The adverse effect due to mobile phone use is a global concern and affects people all over the world. The emission of radiofrequency radiation from mobile phones can affect the user over longer periods. Mobile phones receive and emit signals by using electromagnetic fields in the radiofrequency band. The Global System for Mobile communications (GSM) is presently the most widely used digital phone service operating at 900-1800 MHz frequency bands [2]. The inner ear is close to the mobile phone during any phone call, thus making it the most vulnerable organ. There are no regenerative properties of the hair cells in the cochlea, and so permanent damage may occur after prolonged exposure to radiofrequency waves by mobile phones. The hair cells of the cochlea are very sensitive to prolonged exposure to loud sound, and so the ear is at risk from mobile phones as well as the electromagnetic radiation emitted from the mobile phones. Keeping in the mind the hazards of the mobile phone, this study was conducted to investigate the association of mobile phone use and hearing loss in children. MATERIAL AND METHODS This study was conducted in a tertiary care teaching hospital of eastern India during a period of 3 years from January 2015 August to July 2017. There were 104 children who participated in this study, with 52 mobile phone users and 52 were mobile phone non-users, with ages between 6 and 16 years. 
Mobile phone users (n = 52) were those who used a mobile phone for more than one year with minimum usage of more than one hour per day were included in our study. All the children underwent a questionnaire including the average duration of mobile use and different symptoms felt during and after the mobile phone use. All parents of the children gave informed consent prior to being included in the study. This study was approved by the Medical Ethics Committee of our Institute. Children with chronic suppurative otitis media, history of head injury, history of hearing loss, and those exposed to noisy environments were excluded from this study. All the children underwent detailed history taking with special emphasis on duration of usage, type of mobile, and hearing loss. Detailed general and systemic examinations along with thorough examination of the ear with an otoscope were done. At the outpatient department, all the children were assessed with tuning fork tests such as Rinne's, Weber's, and absolute bone conduction (ABC) tests. All the patients had undergone hearing assessment with pure tone audiometry. Pure tone audiograms were assessed for the type and degree of hearing loss. RESULTS This study was divided into two groups: 1. Those children using mobile phones more than one hour per day for more than one year; 2. No usage of mobile phones or using mobile only occasionally, i.e. less than one hour for less than one year. Chi-square and Student's t-tests were used for statistical analysis, and a p value of less than 0.05 was considered as significant. There are three main symptoms found among patients: block sensation in the ear of 12 children (23%), hearing loss in 4 children (7.69%), and tinnitus in 11 children (21.15%). Block sensation in the ear and tinnitus were statistically significant with p value < 0.05, whereas hearing loss was statistically insignificant (p > 0.05) ( Table 1). In this study, 18 (34.61%) children had 1-2 hours of exposure to mobile phone per day, 23 (44.23%) had 2-3 hours, and 11 (21.15%) had 3-4 hours exposure to mobile phones per day. The c 2 value for hours of exposure to mobile phone was 6.29, and the p value was 0.043067, which was statistically significant ( Table 2). In our study, those using mobile phones for 2-3 hours per day, 5 dB loss was seen in 12%, 10 dB loss in 4.4%, and 15 dB loss in 3.5%. There was 5 dB loss in 7.5%, 10 dB in 4.5% and 15 dB in 2.3% noted among those using mobile phones for 3-4 hours per day (Table 3). In group 1, the mean length of exposure was 2.34 years, whereas in group II it was 0.11 years with a p value < 0.05, which was statistically significant (Table 4). In this study, 24% of children using mobile phones for 2 years had sensorineural hearing loss, and 29% of those using mobile phones for 3 years had sensorineural hearing loss (Table 5). Continuous exposure was associated with minimal sensorineural DISCUSSION Mobile phone use is very popular and almost indispensable in modern daily life. This is one of the fastest growing technological advancements in present times. Non-ionising electromagnetic radiofrequency radiation is commonly used in telecommunications like mobile phones, radio, TV, Wi-Fi, and radar. The exposure to this radiation is rapidly increasing in this decade, which has created interest in the possible harmful effects to health [3]. Sensitive individuals sometimes present dizziness, fatigue, headache, memory impairment, sleep disturbances, myalgia, anxiety, hearing loss, and tinnitus [4]. 
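Returning to the statistical analysis reported in the Results, the symptom comparisons in Table 1 rest on a chi-square test of independence between users and non-users. A minimal sketch of such a comparison is given below (Python with SciPy); note that only the user-group frequencies are reported in the text, so the non-user counts in this snippet are assumed purely for illustration.

```python
from scipy.stats import chi2_contingency

# Minimal sketch of a 2x2 chi-square comparison of symptom frequency.
# User-group counts follow the text (12 of 52 with ear block sensation);
# the non-user counts below are ASSUMED purely for illustration.
users     = [12, 52 - 12]   # [with symptom, without symptom]
non_users = [3, 52 - 3]     # hypothetical control counts

chi2, p, dof, expected = chi2_contingency([users, non_users])
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would be read as a significant difference in symptom frequency between groups.
```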
In our study, aural block sensation (23%) followed by tinnitus (21.15%) and decreased hearing (7.69%) were the presentations among mobile users. Mobile phones have been used since 1983, and their use was estimated to include around 6 billion users in 2010 [5]. The mobile phone is one of the fastest-growing technological advancements in modern times. However, there is public concern about the possibility of health hazards of electromagnetic field (EMF) exposure from mobile phones. The prolonged use of mobile phones can be hazardous to the health of human life. The inner ear, particularly the cochlea, is the first important organ that usually receives the impact of the electromagnetic radiation due to its close locality and the delicate outer hair cells of the cochlea, which are highly vulnerable to acoustic injury in comparison to the other structures of the body. There are many studies comparing users and non-user of mobile phones, which showed some differences, even though the thresholds were within normal limits [6]. A study of mobile phone users concluded that hearing loss is associated with prolonged exposure to the electromagnetic fields that are generated from mobile phones [7]. Another study concluded that a 10-minute exposure to a radiofrequency field from a mobile phone had no effect on hearing loss [8]. Radiofrequency signals are emitted and received from the antenna of the mobile phone during phone calls. This may cause a high specific absorption rate (SAR) in the region of the ear in comparison to the other parts of the body. It can enter the tissue and is absorbed and converted into the heat [8]. The rapid use of wireless communications, particularly mobile phones, has created controversy regarding whether or not they pose a risk to human life. The mobile phone use among children is rapidly increasing nowadays. It is a great attraction for children due to various games and videos. The radiofrequency waves from the mobile phones affect the health in two ways: thermal and non-thermal. The thermal or heating effect is due to prolonged holding of the mobile phone close the ear or body, and the non-thermal effect is due to radiation coming from the mobile phone. Mobile phones usually emit pulsed high frequency electromagnetic waves that can penetrate the skull and affect the brain and inner ear [9]. The electromagnetic waves may alter the electrical response due to acoustic stimuli. The prolonged and extensive exposure to microwaves radiating from mobile phones affects certain brain functions like electrochemistry, electrical activity, blood-brain barrier permeability, and the immune system [10]. The mobile phone radiofrequency waves are usually concentrated on the tissue nearer to the handset, which includes the auditory nerve [11]. The low-level radiofrequency radiation from the mobile phones sometimes gives rise to symptoms like headache, an unpleasant burning feeling, or dull ache at the temporal, occipital, or auricular area [12]. The biological effect due to mobile phone use depends on several factors like duration of irradiation, individualised nervous system, immune system, rate of absorption, and distribution of electromagnetic field energy by body tissue [13]. There have been reports of sensorineural hearing loss due to GSM mobile phone use [14]. Although little is known about mobile phones and their biological effect, this study shows that a higher degree of hearing loss is seen with long-term use of cellular phones. 
Therefore, it is advisable to avoid excessive use of mobile phones, particularly among children. Mobile phones should be used only for short periods and only for important purposes. Brainstem-evoked response audiometry (BERA) evaluates hearing by using a signal-averaging process and by measuring bioelectric events in response to auditory stimuli. The responses in BERA can be recorded from the cochlea to the midbrain. These electric potentials are important tools for measuring hearing thresholds and detecting neurological lesions. BERA is also an important tool for evaluating retro-cochlear lesions. BERA and pure tone audiometry are usually used together to differentiate cochlear lesions from retro-cochlear pathologies. There are also studies regarding other health hazards from mobile phones, and there is some controversy regarding brain lesions and tumours associated with mobile phone use [15]. Mobile phones generate electromagnetic radiation that is below the guidelines of the International Commission on Non-Ionising Radiation Protection (ICNIRP) [16]. The radiofrequencies emitted from mobile phones are not energetic enough to destabilise the electron configuration within DNA, so there is no established direct link between radiofrequency exposure and genotoxic side effects such as DNA mutations [16]. This study suggests several recommendations for children using mobile phones: set the lowest comfortable volume while playing mobile games and making phone calls, keep conversations short, use a hands-free device, and choose mobile phones with low electromagnetic field emissions. CONCLUSIONS This study does not show any significant correlation between mobile phone use and hearing loss among children. The children using a mobile phone for more than two hours per day showed mild hearing loss of around 10-15 dB, but 0-25 dB is considered within the normal hearing range. We suggest that a long-term follow-up study is required among children who use mobile phones for prolonged periods. The authors suggest that mobile phones should be used only for short periods and for essential purposes. DISCLOSURE The authors declare no conflict of interest.
2019-03-18T14:04:10.931Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "e38f9ef4137ddaacf1c5728ff33800f331974d7a", "oa_license": null, "oa_url": "https://www.termedia.pl/Journal/-127/pdf-32525-10?filename=Mobile%20phone%20use%20among.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c5a067ccbe9768c49a4bcfe6407ce831265bb04c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
46783462
pes2o/s2orc
v3-fos-license
Prognosis of men with penile metastasis and malignant priapism: a systematic review Introduction: Metastases to the penis are rare, but can have severe consequences. The aim of this study was to systematically review the literature in order to gain more information on the presentation and prognosis of this metastatic disease. We reviewed the literature relating to all case reports, series and reviews about penile metastasis, from 2003 to 2013, through a Medline search. We identified 63 articles and 69 patients. Metastases were located on the root (38.8%), the shaft (38.8%) or the glans (22.2%) of the penis. The diagnosis of penile metastasis was made after the primary cancer had been diagnosed. The most common presentation was a single small penile nodule. Ten patients reported priapism. The median survival time after diagnosis of penile metastasis was 10 months (range 6-18 months). A Kaplan-Meier analysis has shown that the patients presenting with priapism and those with metastases from non-urologic tumors have a significantly worse prognosis (age adjusted Log Rank: p=0.037 for priapism vs. no priapism and p=0.045 for urologic vs. non urologic). There are prognostic differences based on the presentation of penile metastases. Survival is substantial and treatment should therefore take into account symptoms improvement and quality of life. INTRODUCTION Although penile metastasis is relatively rare, its management presents a challenging problem. The first description of a penile metastasis was published in 1870 [1] and the first extensive review of the problem in 1961 by Abeshouse et al. [2]. Since then, about 460 additional cases have been reported in literature. Priapism can be defined as a prolonged penile erection in absence of sexual stimulation, usually caused by hematological diseases such as sickle cell anemia, leukemia or polycythemia, pelvic thrombosis or thrombophlebitis, by neurological diseases or by extensive pelvic tumors with or without penile metastasis [2,4,5,8]. Various mechanisms for priapism secondary to cancer have been suggested, including tumor infiltration of the corpora cavernosa or blockage of the cavernous venous drainage system [1]. Metastases to the penis mimicking priapism are extremely rare, especially in the absence of a disseminated disease. Most cancers leading to penile metastasis are from pelvic organs: prostate and bladder followed by colon of the recto-sigmoid region. Priapism resulting from such metastases will often be clinically overlooked and therefore difficult to treat. Whether or not the occurrence of priapism secondary to penile metastases can be one of the major prognostic factors remains unknown and the aim of this systematic review is to clarify this question. Evidence acquisition We conducted a search of the English language literature, ranging from 2003 to January 2013, using the Medline database of the US National Library of Medicine (http://www.ncbi.nlm.nih.gov/pubmed) and the Google Scholar database. The Medline search was carried out by using the following Medical Subject Headings (MESH) and free text terms: penile, and metastasis were combined with the terms 'treatment, clinical manifestations, therapy' and then limited to 'humans, male and young adult, 19-24 years'. Abstracts were excluded if subsequently followed by extended articles. Overlapping reports were not considered because of redundant information. From the initial literature search yielding 843 unique citations, a total of 63 papers were selected to review. 
Out of these 63 papers, 69 patients and their data were used for the analysis . The Prisma Statement was used to perform an accurate research check-list and report ( Figure 1). Statistical analysis For the null hypothesis, we assumed that there was no difference in terms of outcome between all clinical, pathological and instrumental parameters in patients affected by secondary penile malignancy. Fisher's exact test and chi-square test were used to assess the significance of differences between parameters, with p<0.05 considered the cut-off for significance. Categorical variables were presented as percentages and were compared using χ 2 analysis. Continuous variables were presented as the mean ± standard deviation and were compared using Student's t-test or the Mann-Whitney U-test. Relative Risks and 95% confidence intervals were estimated by applying log-binomial regression and Cox regression analysis with a constant in the time variable. Moreover, difference in survival were assessed by Kaplan Meyer survival curves (and age adjusted log rank). All reported p values are 2-sided. Statistical analyses were performed using SPSS 11.0 for Apple-Macintosh (SPSS, Chicago, Illinois). RESULTS The population for analysis consisted of patients with an age range of 57 to 92 years and a mean follow-up of 15.6 months (range 5-30). The clinical characteristics of the patients are given in Table 1. Clinical presentation and treatment Penile metastases were located at the root (38.8%), the shaft (38.8%) or the glans (22.2%) of the penis. Five patients had multiple penile metastatic lesions. In four patients the diagnosis of penile metastasis was synchronous with the diagnosis of the primary tumor, but metachronous in the majority. The most common form of presentation was a single small painless nodule of 1-2 cm in diameter. Ten patients presented with priapism secondary to penile metastasis. Survival analysis The median cancer-specific survival time for men with penile metastasis was 14.5 months (range 5-30). The Kaplan-Meier curve analysis showed that patients with metastases from non-urological tumors generally seemed to have a poorer prognosis than those with tumours of urological origin ( Figure 2) and that patients presenting with malignant priapism had a worse prognosis than those without priapism (Figure 2). Thus, patients with priapism as the presenting symptom from a metastasis originating from a non-urological malignancy had a worse prognosis compared to those with metastases from urological malignancies and without priapism (age adjusted log rank p=0.045 for urological vs. non-urological and p=0.037 for priapism vs. no priapism) (Figure 3). 30 patients with urological metastases (43%) had a median cancer specific survival time of 18 months compared to 30 patients with non urological metastases (57%) who had a median cancer specific survival time of 11 months. 10 patients presented with priapism as the first symptom (5 from urological and 5 from non-urological cancers). Patients with priapism from urological cancer had a median cancer specific survival time of 30 months, patients with priapism from non-urological cancer had a median cancer specific survival time of 15 months ( Figure 4). DISCUSSION Penile metastases are relatively rare and usually occur in the context of more widespread disseminated disease. Therefore, the prognosis is significantly poor. However, little more than this is known about the problem. 
As penile metastases are only reported in case reports or small case series there will never be reliable evidence from larger trials from prospective series. Thus, the only evidence that can be used to gather information about penile metastases is from case reports. Therefore, we analysed the available literature on case reports of penile metastases in order to gain more information. Our analysis is the most comprehensive systematic review of the topic with a clinically relevant number of cases. Moreover, we performed our analysis on all cases of penile metastasis without excluding patients based on the site of the primary malignancy or the presentation. This approach has, of course, several limitations as its nature is retrospective, making it therefore impossible to assess data other than those reported by the primary authors. However, we excluded patients with incomplete clinical or pathological data, i.e. when survival time or pathology of the primary tumor were not given. Another limitation is that it was impossible to draw any conclusion on the impact of the treatment from the case reports we analysed. Thus, our limited analysis shows that penile metastases presenting with priapism have a very poor prognosis, especially if the primary tumor is not of prostatic or bladder origin. However, based on the published cases, the mean survival time is 14 months and this data has to be considered in view of a possible disseminated disease, which could in turn cause other symptoms. Our finding that malignant priapism is associated with a poorer prognosis in patients with penile metastases is consistent with other previous reports. Whilst penile metastases most commonly appear as an infiltrative lesion or nodule, up to 40% of cases reported intermittent or continuous malignant priapism. This was first described in 1938 by Peacock. A literature review by Lin YH et al based on reports from 2006 to 2011 suggested that the true incidence of penile metastasis may be higher given that 12% of penile metastasis may be asymptomatic and discovered only at autopsy. They also suggested that most cases of malignant priapism would be low-flow priapism due to neoplastic invasion of cavernous sinuses and venous system [2]. While this theory seems to be highly plausible, some authors have suggested that malignant priapism may also be due to high flow in some cases. Dubocq et al used doppler ultrasound and have found evidence for this theory in their cases. Differences in the pathophysiology of malignant priapism may also be related to the mode of the metastatic spread. Whilst urological cancers may also invade the penile cavernous bodies directly, non-urological metastases will occur from lymphatic or hematogenous spread. We calculated a mean survival time of the reported cases of 14 months. Lin YH et al. reported an average cancer-specific survival time of 9 months with an overall survival time of under 18 months. According to our analysis, survival time is over one year and therefore is a substantial data for planning a treatment. Whilst the treatment approach is commonly palliative, this may be questioned in view of the survival time. According to our review, in patients with better prognostic indicators (urological cancer, no priapism), the efforts of the treatment should be on prolonging the survival time and enhancing the quality of life. 
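Before the conclusions, it is worth recalling that the survival comparisons above (urological versus non-urological primaries, priapism versus no priapism) are standard Kaplan-Meier and log-rank analyses. A minimal sketch of the Kaplan-Meier estimator itself is given below (Python/NumPy); the survival times and censoring indicators are invented for illustration and are not the reviewed cases.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up in months; events: 1 = death observed, 0 = censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)           # patients still under observation at t
        deaths = np.sum((times == t) & (events == 1))
        surv *= 1.0 - deaths / at_risk         # multiply by the conditional survival at t
        curve.append((t, surv))
    return curve

# Hypothetical, illustrative data only (months): priapism vs. no priapism.
priapism_times, priapism_events = [3, 5, 6, 8, 10, 12], [1, 1, 1, 1, 1, 0]
no_priapism_times, no_priapism_events = [8, 11, 14, 18, 24, 30], [1, 1, 0, 1, 0, 1]

for label, T, E in [("priapism", priapism_times, priapism_events),
                    ("no priapism", no_priapism_times, no_priapism_events)]:
    print(label, kaplan_meier(T, E))
```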
CONCLUSIONS Whilst penile metastasis is rare and is only one of the manifestations of a disseminating cancer, there are differences in prognosis between patients presenting with or without priapism and between those with urological or non-urological primary cancers. Cancer-specific survival time is, on average, substantial at over one year, and treatment should take these data into consideration.
2018-04-03T01:35:51.362Z
2017-12-18T00:00:00.000
{ "year": 2017, "sha1": "57aca156ae607dc8011081ecc25324e15474e9e5", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=23366&path[]=73624", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "57aca156ae607dc8011081ecc25324e15474e9e5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
43228688
pes2o/s2orc
v3-fos-license
Radiofrequency ablation for early oesophageal squamous neoplasia: Outcomes form United Kingdom registry AIM: To report outcomes on patients undergoing radiofrequency ablation (RFA) for early oesophageal squamous neoplasia from a National Registry. METHODS: A Prospective cohort study from 8 tertiary referral centres in the United Kingdom. Patients with squamous high grade dysplasia (HGD) and early squamous cell carcinoma (ESCC) confined to the mucosa were treated. Visible lesions were removed by endoscopic mucosal resection (EMR) before RFA. Following initial RFA treatment, patients were followed up 3 monthly. Residual flat dysplasia was treated with RFA until complete reversal dysplasia (CR-D) was achieved or progression to invasive Squamous cell cancer defined as infiltration into the submucosa layer or beyond. The main outcome measures were CR-D at 12 mo from start of treatment, long term durability, progression to cancer and adverse events. RESULTS: Twenty patients with squamous HGD/ESCC completed treatment protocol. Five patients (25%) had EMR before starting RFA treatment. CR-D was 50% at 12 mo with a median of 1 RFA treatment, mean 1.5 (range 1-3). Two further patients achieved CR-D with repeat RFA after this time. Eighty per cent with CR-D mains unclear. In our series 50% patients responded at 12 mo. These figures are lower than limited published data. study may better inform clinicians about minimally invasive endoscopic therapy in these patients where perhaps more radical treatments are not an option for patients. The authors present a good study to evaluate the role of radiofrequency ablation in early squamous cell cancer and high grade squamous dysplasia. They provide a new finding of early response to RFA is prognostically important. This could be clinically relevant for physicians to perform RFA in the management of squamous high-grade dysplasia. INTRODUCTION Squamous cell cancer (SCC) comprises nearly 90% of all esophageal cancers worldwide [1] . The incidence of esophageal SCC has fallen in the western world in the past 3 decades but still remains between 4 and 16 per 100000 population. This is strongly dependent on geographical location worldwide with figures far higher in Asia [2] . In the western world factors such as alcohol and tobacco play an important role in the development of oesophageal SCC [3] . This condition caries a poor prognosis with an overall five year survival rate of 10%-15% [4] . Those treated with surgery following neo-adjuvant therapy still carry a poor prognosis with a 5 year overall survival of about 33%. Surgery carries a significant mortality of 5% and postoperative morbidity of up to 47% [5] . The precursor lesion to SCC is known as squamous dysplasia. The World Health Organization (WHO) refers to squamous dysplasia as squamous intra-epithelial neoplasia and has further categorized the condition depending on the grade of dysplasia as low grade intra-epithelial neoplasia (LGIN) through to high grade intra-epithelial neoplasia (HGIN) [6] . Non invasive SCC is often referred to as ESCC. Squamous high-grade dysplasia and ESCC carry a risk of progression to invasive SCC of up to 65% at 3.5 years and as high as 74% at 13.5 years [7] . The chance of lymph node metastasis is dependent upon the penetration and depth of the lesion. 
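The depth-stratified lymph-node metastasis figures quoted above are what drive the choice between endoscopic therapy and surgical or oncological treatment. Purely as an illustration of that decision logic, and not as clinical guidance, the fragment below (Python) encodes the reported risk bands as a simple lookup; the risk figures are those cited in this section, while the pathway strings are illustrative paraphrases.

```python
# Illustrative lookup only, not clinical guidance: reported lymph-node metastasis
# risk bands for early oesophageal squamous neoplasia by depth of invasion,
# as cited in the text above.
RISK_BY_DEPTH = {
    "m1": "< 5%",  "m2": "< 5%",      # epithelium / lamina propria
    "m3": "5-15%", "sm1": "5-15%",    # muscularis mucosae / superficial submucosa
    "sm2": "~24%", "sm3": "~24%",     # deeper submucosal invasion
}

def suggested_pathway(depth: str) -> str:
    """Map invasion depth to the management pathway discussed in the text (illustrative)."""
    risk = RISK_BY_DEPTH.get(depth.lower())
    if risk is None:
        return f"unknown depth '{depth}': refer to the specialist MDT"
    if depth.lower() in ("sm2", "sm3"):
        return f"LN risk {risk}: surgical or oncological treatment preferred"
    return f"LN risk {risk}: endoscopic therapy (EMR/RFA) may be considered after MDT review"

for d in ("m1", "m3", "sm2"):
    print(d, "->", suggested_pathway(d))
```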
Lesions restricted to the epithelial layer (mL) or the lamina propria (m 2 ) have a low rate of lymph node metastases (< 5%); lesions that penetrate into the muscularis mucosae (m3) or the first third of the submucosa (sm1) have a higher risk (5%-15%) [8][9][10][11][12] . Once there is deeper submucosal (sm2 and sm3) involvement the risk of lymph node spread can be in the region of 24% [13,14] and surgical or oncological interventions are the treatments of choice. Traditional treatment for squamous neoplasia has been surgery or chemo-radiotherapy. However with disease limited to the mucosa the risk of lymph node involvement is low and minimally invasive endoscopic therapy is an alternative. With the advances in minimally invasive esophageal endotherapy over the past decade there are now additional treatment options for patients with squamous HGD and ESCC confined to the lamina propria. Centers in Asia and the Western countries have shown that the use of EMR is effective and curative for these patients [15,16] . However this form of endotherapy is associated with significant oesophageal stenosis [17] and therefore alternative treatments have been desirable. The use of photodynamic therapy in this cohort of patients has again shown promising results [18] but is not widely available and again is associated with significant stenosis post treatment. Endoscopic submucosal dissection (ESD) has been developed in Asia as one of the standard endoscopic resection techniques for early squamous neoplasia of the oesophagus. ESD enables oesophageal lesions, regardless of their size, to be removed en bloc and thus has a lower local recurrence rate than EMR. The en bloc resection rate is greater than 90% (90.6%-100%) [19][20][21][22][23][24][25] . En bloc resection, meaning resection in a single piece, facilitates an accurate histological assessment and reduces the risk of recurrence. In fact, the local recurrence rate after oesophageal ESD is extremely low (0%-3.1%) [21][22][23][24][25] . ESD is yet to become established in the united kingdom as this techniques is not widely available and is confined to specialist center's only. RFA using the HALO system (BÂRRX Medical, Sunnyvale, California, United States) is a novel minimally invasive field ablation technique which has established efficacy for treating HGD and early adenocarcinoma arising in Barrett's esophagus (BE) [26] . The HALO System uses ultra short pulsed radiofrequency that ablates the mucosa whilst preserving the submucosa. Emerging evidence suggests that RFA is safe and efficacious in the management of squamous HGD [26][27][28][29] . We report the United Kingdom HALO registry experience of the first 20 patients with squamous HGD and ESCC who have completed treatment protocol. The UK HALO RFA registry was created to audit outcomes of patients undergoing RFA for HGD/early neoplasia in BE and patients with squamous esophageal HGD/ESCC. It is a prospective multicenter registry which holds patient data from 20 centers nationwide. Ethical approval Ethical approval was granted by the Joint UCL/UCLH Committee on the ethics of Human research (REC REF 08/H0714/27). The HALO ablation system has already been approved by the US food and Drug Administration (FDA). In addition it is a European Cleared (CE) device as well as having been approved by the National Institute of Clinical Excellence (NICE) in the United Kingdom for treatment of HGD in BE. 
Inclusion criteria All patients referred for consideration for endotherapy for squamous HGD and early SCC were invited to enter at collaborating centers. Patients had an endoscopic and histological diagnosis of squamous HGD or ESCC confirmed by two independent expert gastrointestinal histopathologist prior to embarking on endotherapy. Enrolment Between January 2008 and March 2013 a total of 670 patients were enrolled to the registry from a total of 20 centers nationwide in the United Kingdom. Amongst these, 27 patients from 8 centers had squamous HGD or ESCC. Twenty of these have completed treatment protocol. Pre-enrolment staging All patients were referred by the cancer center specialist multidisciplinary team (sMDT). All had endoscopic assessment with multiple biopsies to exclude invasive disease. All investigators used the Paris Classification to classify macroscopic lesions [30] . Enhanced endoscopic techniques including chromoendoscopy with Lugol's iodine, narrow band imaging (Olympus, Hamburg, Germany) and I-scan (Pentax, Hoya Corporation, Japan) were used to target areas of suspected squamous dysplasia (see Figure 1) depending on which technology was present in the respective hospitals. Endoscopic ultrasound (EUS) and CT scanning was performed according to sMDT requirements. Endoscopic ultrasound was used if there were visible lesions seen to ensure that the neoplasia was confined to the mucosa. Endoscopic mucosal resection (EMR) was performed for any raised visible lesions and assessed by two expert gastrointestinal histopathologists at each center. Four quadrant biopsies were then obtained very 2 cm through the oesophagus to map for any further neoplasia. If invasive cancer was detected, the patient was referred back to the sMDT for alternative therapy. Invasive cancer was defined as neoplasia invading into the submucosa such that it was no longer amenable to endoscopic therapy with EMR or RFA. Registry endoscopy protocol Once consented patients had their first ablation as described below. At 3 mo all patients returned for follow up endoscopy where mapping biopsies were taken using chromoendoscopy with Lugol's iodine and enhanced endoscopic imaging. As well as targeted biopsies of USLs seen after Lugol's staining, systematic 4 quadrant biopsies every 2 cm were taken through the squamous esophagus to map any residual dysplasia. Any new raised visible lesions were treated with EMR. A decision to repeat RFA treatment was based on histology rather than visual clearance of disease. At 12 mo after the first ablation, biopsies were again taken to assess for eradication of dysplasia (CR-D) and this was defined as the primary end point. Patients with residual disease at this point were considered for ongoing endotherapy at the clinicians' discretion after consultation with the patient and discussion at the sMDT. Development of invasive cancer at any time was defined as treatment failure and data were censored at this point. An overview of the study protocol is shown in Figure 2. Radiofrequency ablation procedures and follow up procedures RFA was delivered circumferentially (HALO 360) or focally (HALO 90) at 12 J/cm 2 . As opposed to the 2 ablations delivered for RFA in BE at each treatment session, only a single ablation was delivered in the registry protocol for squamous HGD. In patients with multifocal dysplasia circumferential RFA was applied and focal RFA administered in patients with unifocal well defined areas of dysplasia. 
These areas were reported in centimeters from the incisors so that follow up procedures could use this reference to interrogate treated segments of esophagus for success or failure. Chromoendoscopy with magnifying endoscopy and enhanced imaging were used at follow up procedures to examine the previously treated areas referenced at the previous endoscopy. Again, all histology was reviewed by two expert gastrointestinal histopathologists. Post procedure care/follow up All patients were maintained on a twice-daily regimen of a proton pump inhibitor. Soluble co-codamol was prescribed for discomfort post procedure. All patients were discharged home the same day after review by the endoscopist. Follow up endoscopies were carried out at 3 monthly intervals as per protocol with enhanced endoscopic imaging and Lugol's chromoendoscopy (Figure 2). Primary and secondary end points and long term outcomes The primary end points were complete reversal of squamous dysplasia (CR-D) at 12 mo and development of invasive cancer at any stage. Secondary end points included long-term durability, number of RFA procedures and adverse events. We also followed up all patients who progressed to cancer so that we could determine their long-term outcomes. Statistical analysis Endpoints such as CR-D at end of protocol were compared to the patient's baseline status using the log-rank test, and long term outcomes were predicted with Kaplan-Meier survival analysis. RESULTS Twenty seven patients were treated at 8 different centers nationwide. Twenty of these have completed the treatment protocol and we report the outcomes of these patients. Pre-treatment parameters are given in Table 1. Four patients (20%) had disease limited to 1 cm (focal disease) and the remaining 16 patients had definable lengths of dysplasia (multi-focal disease) with a median length of 5 cm (IQR 1-10). Five patients (25%) had EMR before starting RFA treatment. All patients gave informed written consent. Median follow up for those who had successful ablation and are still in follow up (n = 10) is 24 mo (IQR 17-54). Patients had a median of 1 RFA session and a mean of 1.5 RFA treatments during the protocol period. A total of 6 rescue EMRs have been carried out in 6 patients after their first RFA. Two of these patients had already undergone EMR prior to initiating RFA treatment. Reversal of dysplasia at end of protocol Ten patients (50%) had reversal of dysplasia/CIS (CR-D) at end of protocol after a median of 1 treatment. Eight of these patients (80%) remain free of dysplasia on their latest follow up (median follow up from first treatment 24 mo, range 19-54 mo, see Figure 3). Of the 2 patients who had a recurrence after initial successful RFA, one progressed to invasive disease. The other had multifocal low grade dysplasia (LGD) 4 mo after completing treatment and at latest follow up (41 mo after protocol end) has had 3 further circumferential ablations and one focal ablation. This patient still has LGD at latest follow up. The clinician and patient have, nonetheless, agreed to perform a further RFA treatment. Four patients (20%) had residual dysplasia at 12 mo. One patient was referred for surgery to remove the dysplasia. One patient left the country and has been lost to follow up. The other 2 have opted to have further RFA and after a mean of 2 RFA treatments over 10 mo are free of disease at present (Figure 3).
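As a concrete illustration of the survival analysis described under "Statistical analysis" above (Kaplan-Meier estimation and a log-rank comparison), a minimal Python sketch is given below. The lifelines package and the numbers are illustrative assumptions only, not registry data.

```python
# Hedged sketch of the Kaplan-Meier / log-rank workflow described in the Methods.
# Durations and events below are hypothetical placeholders, not registry data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# months to CR-D (or censoring) for two hypothetical baseline-histology groups
time_hgd = [3, 3, 6, 9, 12, 12, 18]     # e.g. baseline HGD
event_hgd = [1, 1, 1, 1, 0, 1, 0]       # 1 = CR-D reached, 0 = censored
time_escc = [6, 9, 12, 12, 15, 24]      # e.g. baseline ESCC
event_escc = [1, 0, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(time_hgd, event_observed=event_hgd, label="HGD")
print(kmf.survival_function_)           # probability of not yet having reached CR-D over time

result = logrank_test(time_hgd, time_escc,
                      event_observed_A=event_hgd, event_observed_B=event_escc)
print(result.p_value)                   # analogous to the log-rank p-values reported in the Results
```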
Figure 4 shows the predicted rate of dysplasia reversal in all 20 patients who have undergone treatment and completed protocol using Kaplan Meier outcome statistics. Previous endoscopic resection and baseline histology Although our numbers are too small to draw firm conclusions, there is a trend towards baseline histological grade (HGD or ESCC) having an influence on eventual outcome with HGD having a better outcome (66% vs 33%, HR = 0.535, P = 0.0308, 95%CI: 0.141-1.505, Log rank test). Larger numbers are however needed to confirm this trend. Our data do not suggest that EMR before commencing RFA influences dysplasia reversal rates and long term outcomes (33% vs 66%, HR = 0.229, P = 0.778, 95%CI: 0.3264-3.642, P = 0.8882, Log Rank test). Dysplasia reversal was also the same whether EMR was required during the RFA protocol (3/6 -50%) or whether patients underwent RFA alone (7/14 -50%). Early cancer progression and overall cumulative risk of progression Four of the 20 (20%) patients had progressed to invasive SCC at their first follow up and therefore no further RFA was performed. Two further patients who were treated with an initial circumferential RFA followed by an EMR at follow up endoscopy progressed to invasive cancer at protocol end. All progressors were referred for consideration of chemoradiotherapy. Using Kaplan Meier analysis, the risk of progression to invasive disease in all 20 patients who completed the protocol is 26% at 18 mo ( Figure 5). Adverse events One patient suffered a superficial esophageal tear following sizing prior to attempted circumferential ablation. The procedure was discontinued and focal ablation was used at the same procedure. The patient was discharged home without any complications. Four patients (20%) required dilatations for moderate esophageal structuring after their first circumferential treatment. Only one had been treated by EMR prior to RFA. Two of these patients have required serial dilatations thereafter for symptomatic dysphagia with a median of 4 dilatations per patient. The other 2 patients required only a single dilatation to achieve an adequate symptomatic response. Three serious adverse events have been reported to date. Two patients had bleeding at their follow up endoscopy after biopsies and required adrenaline injection. Both occurred following Lugol's iodine application. Although they were admitted overnight in hospital for observation they were discharged the following day without blood transfusion. DISCUSSION The role for endotherapy such as RFA as a first line intervention for patients with squamous HGD and ESCC in situ is yet to be established as standard practice. Squamous dysplasia is a very aggressive pathology and early diagnosis and intervention are paramount as disease progression often precludes curative therapy. However the high surgical mortality rate of up to 2%-5% and subsequent morbidity of 20%-50% means that alternative minimally invasive and novel techniques must be investigated [5,11,12,31] . Early literature into the use of RFA in squamous dysplasia emerged in 2008, following on from its recognized potential in early Barrett's neoplasia. Pouw et al [32] described the case of a 66-year-old patient with a unifocal lesion within the esophagus. This lesion had arisen after the patient had previously undergone chemoradiotherapy for a T2N1M0 squamous cell cancer of the hypopharynx. 
Following pre-treatment staging to ensure the lesion was confined to the mucosal surface only, the patient received a single balloon based ablation and had no recurrent disease at 4 mo follow up. Subsequent to this report, data regarding the efficacy of RFA in squamous dysplasia have been limited. One of the largest series to date examined the success of RFA in 13 patients within two tertiary centers [28] . Nine of the study cohort (69%) required EMR at baseline for visible nodules prior to commencing RFA. Dysplasia reversal was excellent in these patients with 100% achieving CR-D with a median of 2 treatments and remaining disease free with a follow up period of 17 mo from first treatment. In this study patients received 2 ablations at each treatment endoscopy with 12 J/cm 2 for the circumferential ablation and 15 J/cm 2 for focal ablation. Stenosis and stricturing was confined to just 2 of the 13 patients in this series. This same group has recently gone on investigate the efficacy of RFA in a larger cohort of 29 patients in a prospective study [33] . This study was conducted in a single Chinese center. All patients underwent an index circumferential ablation of all unstained lesions (USLs) with Lugol's chromoendoscopy. All patients were followed up at 3 monthly intervals with chromoendoscopy with biopsies followed by focal ablation of any USLs. Using the Chinese classification of squamous neoplasia [34] , 18 patients had moderate intraepithelial neoplasia (MGIN), 10 had high grade intraepithelial neoplasia (HGIN), and a single patient had early ESCC. At 12 mo 97% (28/29) had a complete reversal of neoplasia and furthermore there was no progression to invasive cancer within the treated group. The single patient with residual disease at 12 mo had EMR for unifocal disease with clear resection margins. In our study examining outcomes from the UK RFA registry of patients with squamous dysplasia undergoing RFA, CR-D was 50% at protocol completion, although dysplasia was later reversed in two further patients following more RFA sessions. These figures are lower than the limited published data from other centers worldwide [28] . We designed our study with a protocol end at 12 mo so that this study could be directly compared with those previously published. It may be that future studies should consider alternative end points to allow for a longer duration of treatment. In our series 20% of patients progressed to invasive disease after only one session of RFA and were then offered chemo-radiotherapy. These patients represent most of those who eventually developed cancer. This suggests that a single RFA treatment might even be considered as a staging procedure. Early failure would identify patients who should be treated with more conventional modalities. Indeed, 80% of those who achieved successful reversal of dysplasia at the end of the RFA treatment protocol remain in remission at most recent follow up. The 20% rate of progression after a single treatment may also point to the fact that these patients may have been under staged and may in some cases harbor more aggressive neoplasia at baseline that was not sampled. Whereas with other dysplastic conditions of the esophagus such as BE where there are often distinct areas of abnormality within the macroscopically visible columnar lined mucosa, with squamous dysplasia these areas are subtle. 
Detection relies on adjuncts such as chromoendoscopy with Lugol's iodine solution, optical endoscopic enhancements and the experience and expertise of the endoscopists to spot anomalies. Even with Lugol's solution the accuracy of detecting lesions varies greatly. In a recently published series the positive predictive value for Lugol's detecting squamous neoplasia in unstained lesions after RFA was only 14% [33] . It appears that the use of EMR in our cohort of patients is somewhat limited compared to similar published studies [15,35,36] . Only 5 patients (25%) underwent EMR before starting HALO RFA and there were a total of only 6 resections in 4 patients after their index treatment. This may account for the lower rates of dysplasia clearance and progression after the index treatment. Visible nodular lesions before or after RFA treatment may harbor submucosal disease and unless resected early may represent recurrence and progressive invasive disease. The published data on the success of RFA in BE is robust and plentiful whilst there is limited data on its use in squamous HGD. The AIM dysplasia trial [26] demon- LGD: Low grade dysplasia. strated impressive outcomes with reversal of HGD in patients with BE as high as 81% with a structured ablation protocol over 12 mo. In this protocol all patients were ablated twice at each treatment with 12 J/cm 2 for both circumferential and focal ablation. Mean treatments per patient was 3.5 ablations and stricturing occurred in 5% requiring dilatations. Subsequent published literature and our own outcomes from the RFA United Kingdom national registry have reproduced similar outcomes in patients with BE [37] . In 335 patients with Barrett's related neoplasia, HGD was cleared from 86% of patients, all dysplasia from 81%, and BE from 62%, at the 12-mo time point, following a mean 2.5 RFA procedures. There is still debate about the number of ablations patients with squamous dysplasia should undergo. It is also not clear whether these should be performed at successive treatment encounters or whether these patients should undergo staging with chromoendoscopy after successive treatments. In our series, the median number of ablations was only one (range 1-3) for the 10 patients who achieved CR-D at 12 mo. These were all circumferential balloon ablations. This compares to a median of 3 ablations administered in patients with BE during a similar treatment protocol in published series. This may account for the lower eradication rates but the protocol was designed to provide an extremely cautious approach to restaging these patients after every treatment in view of the aggressive nature of the disease. The rate of stricturing in our series was 20%. All 4 patients who developed strictures had undergone circumferential balloon ablation with the HALO 360 device. This rate is higher that noted in trials of ablation for Barrett's oesophagus [27,28] . Nonetheless they were all overcome with straightforward dilatations. In a recent series of patients treated with RFA for squamous dysplasia the rate of stricturing was 14% after circumferential ablation [33] . Other than a self-limiting bleed from biopsies and not from a RFA procedure our study confirms that RFA is a safe procedure with few other reported adverse events. A criticism of the protocol in the United Kingdom registry is that there may be too long an interval between treatments due to the requirement of a mapping endoscopy every 3 mo after RFA. 
With the aggressive nature of this disease it is very important to restage these patients early after treatment and perhaps shortening the intervals between treatment and follow up may help improve outcomes. Current United Kingdom practice is to only ablate once at each treatment session for patients with squamous neoplasia compared to the double ablation carried out in BE with mucosal cleaning of the coagulum between treatments. Recent publications have used 2 ablations per session safely in patients with squamous dysplasia and our practice may have to change to improve outcomes. Delivering 2 ablations at a single treatment session is standard practice for patients undergoing RFA for BE. Bergman and colleagues [33] explored various ablation energy settings in patients with squamous dysplasia. These included 2 ablations at a single treatment session with and without coagulum clearance between ablations. In 16 cases where they employed a 12 J/cm 2 -clean -12 J/cm 2 protocol CR-D at 12 mo was 100% with a stricture rate of 19% compared to a CR-D of 86% and stricture rate of 14% with a 12 J/cm 2 -no clean -12 J/cm 2 ablation protocol. By using lower energy settings of 10 J/cm 2 for the second ablation or using 10 J/cm 2 for delivering both ablations they did not compromise CR-D (100%) but interestingly had no strictures. The numbers in each of these treatment groups were however low (4 and 2 cases respectively). Another short coming of our study is the small size of the cohort. These lesions are rarely diagnosed at an early stage. With the availability of high definition en- dosocopy and growing experience of minimally invasive endotherapy as an alternative to surgical and oncological interventions, these numbers will undoubtedly grow in the coming years. This study examines data from 8 different centers nationwide. Despite standardized protocols, the expertise and experience of each endoscopist will be very different and individual preference and clinical practice will differ from one center to another. These patients have all undergone endotherapy within the confines of demanding endoscopy service provision. These data represent real life outcomes of integrating novel and ground breaking endotherapy to existing practice. ACKNOWLEDGMENTS This work was undertaken at UCLH/UCL who received a proportion of funding from the Department of Health's NIHR Biomedical Research Centres funding scheme. The views expressed in this publication are those of the authors and not necessarily those of the Department of Health. Background Oesophageal cancer carries a poor prognosis as it is often diagnosed at a stage where curative therapy is no longer possible. Squamous cell cancer (SCC) of the oesophagus carries a 5-year survival of 10%-15%. The precursor lesion to SCC is squamous dysplasia. Treatment of these early lesions with endoluminal therapy may help to improve outcomes in these high risk patients. Research frontiers Radiofrequency ablation (RFA) is a novel and minimally invasive field ablation technique that has shown good safety and efficacy for treating patients with Barrett's related neoplasia in the oesophagus. By combining endoscopic mucosal resection for visible lesions and RFA, patients with early squamous neoplasia of the oesophagus may be treated at an early stage of the disease. Innovations and breakthroughs There are only limited series reporting the use of RFA in patients with early squamous neoplasia to date. 
These data better inform us on patients that are likely to succeed with endoscopic therapy but also the importance of careful staging in these patients before treatment. Applications This prospective study may better inform clinicians about minimally invasive endoscopic therapy in these patients where perhaps more radical treatments are not an option for patients. Peer review The authors present a good study to evaluate the role of radiofrequency ablation in early squamous cell cancer and high grade squamous dysplasia. They provide a new finding of early response to RFA is prognostically important. This could be clinically relevant for physicians to perform RFA in the management of squamous high-grade dysplasia.
Study of anatomical variations in premolars by cone beam computerized tomography in a radiologic clinic in Piauí Introduction: Root canal cleaning is the main objective of endodontic treatment and requires knowledge of the internal anatomy. The premolars are evidenced in the literature with great anatomical variations. In view of this, studies indicate that the use of Cone Beam Computed Tomography helps in the visualization of highly complex anatomy. Objective: to describe the anatomical variations in maxillary and mandibular premolars using cone beam computed tomography in a radiologic clinic in Piaui. Methods: 54 cone beam computed tomography scans with 160 premolars were used, produced using the Orthopantomograph OP300 equipment and analyzed by multiplanar reconstructions: axial, coronal and sagittal. Data regarding sex, number of roots and canals were recorded to compare and classify according to Vertucci. Results: the maxillary first pre-molars had 63.5% two roots,83.7% with one root and the mandibular pre-molars mostly with one root. Regarding the number of channels, 92.3% of the first premolars had two channels, most of them maxillary second premolars and mandibular premolars only one channel. Vertucci variations of types I, II, III and IV were verified in single-rooted elements, observing a great variation in superior elements. As for the prevalence of sex, only the first superiors showed greater variation in males. Conclusions: the upper first premolars prevailed with a great anatomical variation in relation to the other premolars with prevalence of Vertucci Type I and in males. INTRODUCTION The objective of endodontic treatment is to obtain complete removal of pulp tissue or debris to achieve adequate disinfection of the root canal system and to perform a three-dimensional obturation within the root canal using a biocompatible material 1,2 . Ignorance of morphological and anatomical variations may result in failure to identify all root canals or may result in inadequate instrumentation, leading to endodontic treatment failure 3,4 . Among the group of dental elements, the premolars present a diverse anatomical variation, mainly in relation to the number of roots and canals. Lower premolars may have one to three roots and a diverse anatomical configuration of canals 5 . According to Vertucci et al. 6 (1984), the lower second premolar has only a single root canal at the apex in 97.5% of the teeth studied, two canals in only 2.5%, and three canals with a lower prevalence 6 . The anatomical structure of maxillary premolars is also complex, including bifurcated roots, large variations Rev. Ciênc. Méd. Biol., Salvador, v. 22, n. 1, p. 24-29, jan./abr. 2023 in root morphology, and multiple canals, thus increasing the difficulties faced by the endodontist 7 . As for channel morphology, Vertucci et al. 6 (1984) reported in their studies that 75% of the maxillary second premolars studied had a single foramen, 24% had two foramina and 1% had three foramina 6 . Vertucci in 1984 observed in his studies an anatomical complexity in general in the dental elements, so to facilitate the study and understanding of these anatomical systems he developed a classification that took into account the relationship of the number of canals in relation to a root. This classification helped in the mastery of these canals, positively impacting endodontic treatments. 
Using this scientific support, this classification continues to be the most used until today for studies of anatomical variations, being able to classify this relationship into 8 types: root and canal 6,7 . For a long time, two-dimensional radiographs were used to evaluate internal anatomical variations, but with the limitation of image superposition. With this, dental imaging developed Cone Beam Computed Tomography (CBCT) as an evolution of conventional computed tomography focused on the maxillofacial region. Its images are presented in three-dimensional shapes, making it useful in endodontic practice to assess the internal composition of the dental element 1 . In this way, CBCT favors the study of anatomy in vivo by providing details of teeth and adjacent structures with a three-dimensional view. This technology uses isotopic voxels from three planes of space with precise linear geometry and measurements of the images obtained. Thus, it details the root morphology without distortion or overlap, allowing a faithful reproduction of the anatomy and morphology of the dental elements and facilitating the diagnosis in endodontics [8][9][10] . Thus, the present study aims to describe the anatomical variations in maxillary and mandibular premolars using cone beam computed tomography in a radiologic clinic in Piauí. METHODOLOGY This study followed the ethical protocols that ensure compliance with Res. 466 For data analysis, the SPSS statistical package, version 26, was used to tabulate the data and perform descriptive statistics with frequencies and percentages in order to describe the collected data. In addition to chi-square tests to verify the distribution of cases, accompanied by the level of significance (p < 0.05). RESULTS Of the 54 CBCTs, 25 were male and 29 were female. During the analysis, 164 premolars were observed, consisting of 101 maxillary and 63 mandibular premolars. The frequency, followed by the percentage represented in the premolar teeth can be seen in Table 2 regarding the number of roots, canals and the Vertucci Classification. It is noteworthy that all these results were accompanied by the level of statistical significance of the chi-square test, safeguarding the character of a non-random distribution. In order to complement the analyses, frequencies of distribution of the numbers of roots were carried out in terms of sex and premolar arches, highlighting the specificity of each tooth (14, 15, 24, 25, 44, 45, 34, and 35), the results are visualized in Table 3, below. DISCUSSION There are several methods for the knowledge of dental anatomy such as root staining, insertion of plastic resins in the canals, scanning microscopy, radiographs use of tomography and microtomography. Given these options, the use of tomography has obtained satisfactory results due to high resolution volumetric records, better manipulation, easy access and lower radiation doses 9 . The results obtained in this investigation provide fundamental anatomical knowledge for the success of endodontic therapies, because the more complex the internal configuration of the roots of these dental elements, the greater the probability of errors during the performance of these therapies 11,12 . The results are in agreement with the study conducted by Burklein et al. 13 (2017) with 62.4% of 644 maxillary first premolars with 2 roots and the second with 82.6% with 1 root13. As in the anatomical evaluation of maxillary premolars by Li et al. 
(2018) 87.5% of the first with 2 channels and the second with a variation of 50.3% with one channel 12 . On the other hand, the lower premolars had almost a totality with only one root and single canals, and these conclusions can be observed in the study by Corbella et al. 1 (2019) with 90% of mandibular premolars with 1 root3 and Alfawaz et al. 14 (2019) with a large single root rate 14 . Although in this research only 01 lower premolar showed variation in the number of roots, the literature shows a significant percentage of two roots due to flattening in the root formation process 1,9,13 . In addition to the number of roots and channels, studies involving this anatomical analysis approach classify the relationship of channels present in a root according to Vertucci 6 (1984 flattening, resulting in connections between the canals 11,12 . The results obtained in this study on the prevalence of the Type I configuration are given by the large proportion relationship between roots and canals showing agreement with individual studies with these elements 12,14 . In this investigation, we considered the critical analysis in relation to the classification method by Vertucci 6 (1984) as it considers only the main channels. Although the complexity of the canal system is already proven in terms of anatomical findings, it is necessary to create more subtypes considering the particularities of these systems. Since the CBCT images from in vivo studies present in the literature provide the possibility to visualize these anatomical details in greater detail 13,16 . Finally, maxillary first premolars obtained statistically significant results with two roots in males. There is an agreement with two studies that show high rates of anatomical variations in male patients but without a clear relationship with sex 13,17 . Although most studies state that there is no statistical correlation and a consensus in the literature because this information depends more on the number of patients involved in each sex 15,18 . Obtaining that the other premolars in this study presented similar results regarding the variation by sex. CONCLUSION It is concluded that in the year 2020 in the radiological clinic data, only the maxillary first premolars were observed mostly with two roots or two canals, the others with only one root or canal. Evidencing a great anatomical variation in the upper arch. Regarding the Vertucci Classification, most roots presented a Type I configuration, followed by Type II, Type III, and IV. It is evident that the last three are more common in single-rooted maxillary premolars. In addition, only the maxillary first premolars showed greater variations between one and two roots per male. And the other elements, most with only one root between both sexes.
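For readers wishing to reproduce the kind of frequency comparison described in the Methods (the study itself used SPSS, version 26), a minimal Python equivalent is sketched below. The contingency counts are hypothetical placeholders, not the study data, and only illustrate the form of the chi-square analysis.

```python
# Hedged sketch of a chi-square test on a root-number by sex contingency table,
# analogous to the SPSS analysis described in the Methods. Counts are hypothetical.
from scipy.stats import chi2_contingency

#            one root   two roots
table = [[9,       17],   # male
         [14,      11]]   # female

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # distribution considered non-random if p < 0.05
```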
Distribution with a simple Laplace transform and its applications to non-Poissonian stochastic processes In this paper, we propose a novel probability distribution that asymptotically represents a power-law, ψ(t) ∼ t−α−1, with 0 < α < 2. The main feature of the distribution is that it has a simple expression in the Laplace transform representation, making it suitable for performing calculations in stochastic processes, particularly non-Poissonian processes. Introduction Stochastic processes are extensively used to model processes across all scientific disciplines. In addition to physical sciences, stochastic processes are applied to a wide range of fields, from biology and chemistry to material science and social sciences. The main study objective of a stochastic process is the probability density distribution P(x, t), where x = (x 1 , x 2 , . . . , x n ) are the stochastic variables and t represents the time. For Markovian processes, i.e., processes with no memory, or more general for renewal processes, the associated equations frequently change into convoluted forms in space and time. A primary role is played by the continuous time random walk (CTRW), whereby an indefinite number of processes can be modelled (see references [1,2] and reference [3] for an extensive review). An excellent example is provided by the probability density of the random walker position, initially located at x = 0 in the CTRW scheme. The expression of the probability density is [3] P ( where ψ n (τ ) is defined as ψ n (t) = t 0 ψ n−1 (τ )ψ(t − τ )dτ , ψ 0 (t) = δ(t), ψ 1 (t) ≡ ψ(t), (2) and ψ n (τ )dτ is the probability that the n'th step occurred at some time between τ and τ + dτ , moving the random walker toward x. In several processes ψ(τ ) is also called the waiting-time distribution. The function, is the survival probability that ensures that no additional step is taken between the times τ and t > τ. Setting ψ 0 (t) = δ(t) and because at t = 0 the initial condition is P 0 (x) = δ(x), the Laplace transform of P(x, t) is found directly as Setting conditions on P n (x), a further elaboration of equation (4) can be performed; however, we refer the reader to reference [3] for further elaboration of equation (4). For our purposes, we limit ourselves to stress the key role in the resulting expression of the Laplace transform for the waiting-time distribution. The random processes leading to anomalous diffusion, using CTRW or the fractional Fokker-Planck equation (see references [4][5][6] and references therein), have been inten-sively studied using a power-law as the waiting-time distributions, ψ(t) ∼ t −α−1 , with 0 < α < 1. In this regard, the Mittag-Leffler function plays an important role in modeling a power-law distribution. An exhaustive study on this topic can be found in reference [7], where the authors presented, in detail, the key role of the Mittag-Leffler function in renewal processes that are relevant to the theories of anomalous diffusion, particularly in the CTRW approach and fractional Fokker-Planck equation. The Mittag-Leffler function is defined as [8,9], where the associated distribution, the Mittag-Leffler waiting-time density, is, The asymptotic behavior of the Mittag-Leffler waiting-time density is a power-law ψ(t) ∼ t −α−1 with 0 < α < 1, i.e., the first moment of the distribution does not exist. Due to its simple Laplace transform expression,ψ(s) = 1/(s α + 1), the Mittag-Leffler waiting-time density has been extensively used (see reference [7] and references therein). 
Moreover, when α = 1 the function describes a Poissonian process. Indeed, in this case, Another relevant quantity associated with several stochastic processes is the rate event function, which can be defined as with ψ n (t) defined as in equation (2).This function plays a central role in several stochastic processes [10][11][12][13][14][15][16][17] and it is studied in detail in section 3 (for a detailed explanation see reference [18]). The expressions containing R(t) as part of a more complicated expression can be found in subordination processes, such as in the Montroll-Weiss formalism [1,[19][20][21] or statistics of rare events in renewal theory [22]. From an analytical point of view, the range 1 < α < 2, i.e., a distribution with a finite first moment, is less known. As we may infer from the above discussion, having a simple expression for the Laplace transform of ψ(t),ψ(s) is crucial to be able to invert the Laplace transform of the final expression in the Laplace representation. Particularly, in this paper, a novel probability distribution representing asymptotically a power-law is presented and we focus on non-Poissonian processes where ψ(t) is a power-law asymptotically behaving as t −α−1 with 0 < α < 2. As largely assumed, what matters is the asymptotic expression of the waiting-time distribution rather than its exact expression. However, if the stochastic process can be reduced to a closed expression in the Laplace representation, as usually is the case, the knowledge of the Laplace transform is crucial. Thus, we need to focus on the expression of the Laplace transformψ(s) and not on the details of the waiting-time distribution in the time representation, ψ(t). Laplace transform of a power-law distribution The inversion of the Laplace transform is, in general, a challenging task where exact results are difficult to find. Frequently, we turn to Tauberian theorems to find approximated expressions, in particular, asymptotic expressions. When we have a Laplace transform, let us sayf (s) and we are willing to go back to the t-representation, f(t), we may utilize the Tauberian theorem to obtain f(t) when t → ∞. This implies a Taylor expansion for s → 0 of the transformed function, i.e.,f (s) = a 0 + a 1 s α 1 . . . .. However, the conditions ensuring that the Taylor expansion has a correct correspondence with the asymptotic expression of f(t) are quite strict. Frequently, one inverts the expression forf (s) without ensuring the conditions and checks for a posteriori if the result is correct. When the conditions of the Tauberian theorem do not hold, an important question is where to stop the Taylor expansion. Whether more terms are added or not can produce different results (see the example in reference [23]). In stochastic processes, we frequently deal with the waiting-time distributions. For non-Poissonian processes, the waiting-time distribution is a power-law that admits a complicated expression for its Laplace transform. For example, in reference [24,25] the authors considered a waitingtime distribution given by ψ(t) = α(1 + t) −α−1 . The expression in time representation is simple, whereas the Laplace transform,ψ(s), has a complicated expression, making it difficult to perform the Laplace inversion for the related quantities. Typically, we are interested in the asymptotic behavior of the distributions in time representation. Therefore, the exact form of ψ(t) is not important, whereas having a manageable Laplace transform ofψ(s) is crucial. 
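To illustrate this point numerically, the waiting-time density can be recovered from a simple Laplace-space expression by numerical inversion. The sketch below assumes the mpmath package, uses the Mittag-Leffler transform ψ̂(s) = 1/(s^α + 1) quoted in the introduction, and checks the t^(−α−1) tail; it is a minimal numerical check, not part of the paper's derivation.

```python
# Minimal numerical check: recover psi(t) from its Laplace transform
# psi_hat(s) = 1/(s^alpha + 1) and inspect the power-law tail (assumes mpmath).
import mpmath as mp

mp.mp.dps = 30            # working precision for the numerical inversion
alpha = mp.mpf('0.7')     # any 0 < alpha < 1 for the Mittag-Leffler density

psi_hat = lambda s: 1 / (s**alpha + 1)   # simple closed form in Laplace space

for t in [1, 10, 100, 1000]:
    psi_t = mp.invertlaplace(psi_hat, t, method='talbot')  # numerical inversion at time t
    # If psi(t) ~ t^(-alpha-1), the product below should approach a constant.
    print(t, psi_t, psi_t * mp.mpf(t)**(alpha + 1))
```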
As we have already seen, a notable example of a power-law distribution with a simple expression for its Laplace transform is the derivative of the Mittag-Leffler function, whose power series expression provides a useful expression, Alternatively, it can be expressed through an integral representation [8]. In the interval t ∈ (0, ∞), the negative derivative of E α (−t α ) for 0 < α < 1 is positive definite and integrable. In other words, properly normalized, it represents a power-law distribution. Its asymptotic behavior is, where, for the sake of simplicity, we set the time scale parameter equal to the unit. Note that for α = 1, the function ceases to be a power-law distribution since Starting from its definition, equation (8), finding the Laplace transform of E α (−t α ) and its negative derivative are not difficult, both are written as a closed expression, i.e., Due to its simple functional form in the Laplace representation, the above distribution has been used to evaluate the inverse Laplace transform of expressions such as the one in reference [26], which will be thoroughly examined in section 4. As mentioned above, the validity region of the Mittag-Leffler distribution as a power-law distribution is limited to the power parameter ranging in the interval 0 < α < 1. To go beyond the above distribution and cross the critical value α = 1, we require functions that have a simple Laplace transform and contain an asymptotic power-law behavior, for which good candidates are defined as follows [27,28], where D α t is the Riemann-Liouville fractional derivative. The asymptotic behavior is, Its Laplace transform is, In principle, E t α has a Laplace transform defined for α < 1; however, as we will see, this constraint can be bypassed. Note that the derivative of the Mittag-Leffler function and function E t α are related to each other. In the case of a rational index, it is straightforward to express the relationship explicitly. For example for α = 1/2 we have, which implies, To determine a new distribution with similar characteristics of the distribution given in equation (10), we shall consider the associated functions with the imaginary argument, or in terms of series . Note that cos α t and sin α t are divergent at t = 0 as t −α , such that their Laplace transform is defined for α < 1. We may overcome this difficulty by selecting an appropriate combination of those functions. We build the required distribution as ψ(t) = sin πα 2 cos t + cos πα 2 sin t cos πα 2 − sin πα 2 cos α t + cos πα 2 sin α t cos πα The cosine function in the denominator is a normalization factor, whereas the trigonometric coefficients in the numerator are selected to erase the first divergent terms of (17). The above expression can be rewritten as Now, we need to show that the function defined in equation (19) is positive and integrable. Let us first show the positivity of ψ(t). We use the following result (see references [27,28]), As the integral in the left-hand side of equation (20) is a positive quantity for α ∈ (−1, ∞), the right-hand side of equation (20) should also be positive. Particularly, for α ∈ (0, 2) the factor sin πα 2 is a positive quantity and consequently ψ(t) has to be positive, which concludes the demonstration. Once we have ensured the positivity of the function, we study the integrability and the asymptotic behavior of ψ(t). At the origin ψ(t) behaves as showing that ψ(t) is integrable at the origin for 0 < α < 2 and α = 1. 
For t → ∞ we have, Equation (22) shows that ψ(t) is integrable for t → ∞ for 0 < α < 2 and it has the correct asymptotic behavior, namely ψ(t) ∼ t −α−1 . Two plots of ψ(t) are shown in figures 1 and 2. Despite its complicated structure in time representation, its Laplace transform is, The case α = 1 has to be evaluated in equation (23) We have achieved our goal, i.e., to express the Laplace transform of ψ(t) in an expression easy to manipulate, namely through a fraction of powers of the Laplace variable s. Rate event function As an example, let us consider the following quantity, called rate event function, with, and whose expression in the Laplace representation is, The function R(t) describes the number of events for a unit of time, and it is a relevant quantity present in several stochastic processes, including aging processes. Similar expressions can be found in subordination processes such as in the Montroll-Weiss formalism [1,[19][20][21] through the typical kernel in the Laplace representation, Going back to R(t) and usingψ(s) given by equation (23), we obtain its Laplace transform, i.e., It can be observed that for rational values of α, an exact analytical expression can be found for R(t). For example, α = 1/2 gives [29], where erf (z) is an error function. Considering α = 3/2, we have the exact result as in [29], In general, we have that for 0 < α < 1 it holds, while for 1 < α < 2, we obtain, Master equation for a stochastic dichotomous process In this section, we use the distribution previously introduced to study the analytical expression of the distributions associated with a stochastic process. Particularly, we focus on the Lévy walks presented in reference [24] and also studied in references [30,31]. We are also able to confirm that the trajectory approach, using walkers, and the density approach, using the Liouville equation, lead to the same result. The authors of reference [24] generated a stochastic trajectory, x(t), with a walker that starts moving spending time τ 1 in a uniform motion with speed W, and then the walker tosses a coin to decide whether to keep moving in the same direction or reverse it. Subsequently, the walker moves for a time τ 2 with speed W and continues repeating the process. Without loss of generality, we may set W = 1. The adopted time distribution for τ i is, The authors demonstrated numerically that the corresponding probability distribution P(x, t) for t → ∞ is a Lévy distribution. However, to the best of our knowledge, this has not been shown analytically. In reference [26], the above process has been studied adopting the density point of view as a starting point (Liouville equation). The authors of reference [26] found an analytical expression in the double Laplace transform of the distribution P(x, t) given bŷ where s and u correspond to t+x 2 and t−x 2 , respectively, in the space-time representation. The inverse Laplace transform ofP (s, u) has been studied in detail for power-law distribution ψ(t) ∝ t −α−1 with 0 < α < 1 (reference [26]). The inverse Laplace transform of equation (35), for 1 < α < 2, has not been analytically studied due to the difficulty of inverting the double Laplace transform. Using the distribution introduced in equation (18), we may perform a series of exact calculations and, for rational power, the corresponding expression can be reduced to a sum of polynomial-like terms. 
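Before following the time-scale matching below, a minimal worked case of the rate-event machinery of section 3 may be helpful. It assumes the standard renewal identity R̂(s) = ψ̂(s)/(1 − ψ̂(s)) (it does not reproduce the paper's equation for R̂(s), which is not shown in this extract) applied to the Mittag-Leffler waiting time, and it recovers the t^(α−1) scaling quoted above for 0 < α < 1.

```latex
% Hedged worked example: generic renewal identity applied to the Mittag-Leffler density.
\hat{R}(s) \;=\; \sum_{n=1}^{\infty}\hat{\psi}(s)^{\,n}
          \;=\; \frac{\hat{\psi}(s)}{1-\hat{\psi}(s)},
\qquad
\hat{\psi}(s)=\frac{1}{1+s^{\alpha}}
\;\Longrightarrow\;
\hat{R}(s)=s^{-\alpha},
\qquad
R(t)=\frac{t^{\alpha-1}}{\Gamma(\alpha)} .
```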
First, we need to find the relationship between the time scale T of distribution (18) and the time scale T M of ψ M (t) of reference [24], i.e., equation (34). Using equation (22) explicitly written with respect to the time scale T and equating it to the asymptotic expression of ψ M (τ ) given by equation (34), i.e., we may write the connection between the time scale T M and the time scale associated with distribution (18), T, as We set T = 1 and focus on α = 3/2. Due to the evident symmetry of P(s, u), it is enough to consider the term, while the last term in the square bracket on the right side of equation (35) represents the two ballistic peaks multiplied by the survival probability. Performing the calculation, we have,P 1 (s, u) =ψ We may rewrite the equation aŝ where and λ * (u) is the conjugate. Note that at this stage the inversion of Laplace transform with respect to the parameter s can be performed exactly. Using the result of equation (15) and the following equality [28], we may rewrite equation (40), in the limit s → 0, Taking now the limit u → 0 we have, Thus,P with A k (z) given by and u k are the roots of the equation √ 2(1 − i)y 3 − y 2 + z = 0. For the sake of compactness, we omitted the z-dependence of A k and u k . Therefore, asymptotically we have, with a, b and c complex constants. Using inverse Laplace transform, with P(x, t) given by equation (49) and P L (x, t) given by equation (50). The maximum percent error is of the order of 2% Conclusions In this paper, a new distribution, representing a power-law, ψ(t) ∼ t −α−1 asymptotically with 0 < α < 2, has been presented. The main feature of the distribution is having a simple expression of the Laplace transform. Its application in several examples has been shown to produce exact results. We applied the new distribution, rigorously showing the conjecture that the Lévy walks generate a Lévy distribution, as numerically showed in reference [24] and analytically, via the decoupling approximation, in reference [31]. Simultaneously, we have also demonstrated that the trajectory and the density approach lead to the same result. Processes based on a power-law distribution in the region 0 < α < 1 can be analytically studied using the Mittag-Leffler distribution. The main contribution of this paper is that processes based on a power-law distribution in the range 1 < α < 2 can be analytically studied using the presented distribution.
Transplantation of Human Pluripotent Stem Cell-Derived Cardiomyocytes for Cardiac Regenerative Therapy Cardiovascular disease is the leading cause of death worldwide and bears an immense economic burden. Late-stage heart failure often requires total heart transplantation; however, due to donor shortages and lifelong immunosuppression, alternative cardiac regenerative therapies are in high demand. Human pluripotent stem cells (hPSCs), including human embryonic and induced pluripotent stem cells, have emerged as a viable source of human cardiomyocytes for transplantation. Recent developments in several mammalian models of cardiac injury have provided strong evidence of the therapeutic potential of hPSC-derived cardiomyocytes (hPSC-CM), showing their ability to electromechanically integrate with host cardiac tissue and promote functional recovery. In this review, we will discuss recent developments in hPSC-CM differentiation and transplantation strategies for delivery to the heart. We will highlight the mechanisms through which hPSC-CMs contribute to heart repair, review major challenges in successful transplantation of hPSC-CMs, and present solutions that are being explored to address these limitations. We end with a discussion of the clinical use of hPSC-CMs, including hurdles to clinical translation, current clinical trials, and future perspectives on hPSC-CM transplantation. INTRODUCTION Cardiovascular disease (CVD) is the leading cause of death worldwide (1). In the United States alone, CVD is responsible for ∼655,000 deaths and contributes to $200 billion in spending each year (2). CVD can lead to myocardial infarction (MI), also known as a "heart attack, " which results in restricted blood flow and extensive cell death within the infarct zone. Due to the limited regenerative capacity of the human heart, infarcted myocardium is replaced by fibrotic scar tissue with inferior contractile performance. Over time, pathological remodeling leads to ventricular wall thinning, which can progress to heart failure (3). There is currently no treatment available that can restore lost cardiomyocytes after MI, and conventional therapies typically only manage the symptoms (3,4). Heart transplantation is the only therapy capable of replacing a failing heart, but the shortage of viable donor organs and need for lifelong immunosuppression presents its own set of challenges for heart transplantation as a therapy (5). Therefore, alternative approaches that can restore the function of the patient's heart and replace infarcted myocardium would be a transformative development in cardiovascular medicine. Stem cell therapy for cardiac regenerative medicine has drawn major interest due to the promising capacity of stem cells to differentiate into functional tissue. Several sources have been investigated for stem cell-mediated cardiac regenerative therapy, including both human adult stem cells and human pluripotent stem cells (hPSCs) (6). Unlike adult stem cells, hPSCs have a proven capacity to derive functional cardiomyocytes, and their scalable production in vitro has made hPSCs a favorable cell source for cardiac regenerative medicine (7,8). This review will discuss the origins and characteristics of human pluripotent stem cell-derived cardiomyocytes (hPSC-CMs) and how they are implemented in transplantation techniques (Figure 1). 
Additionally, we will discuss the potential mechanisms through which these transplantation strategies improve cardiac function and what challenges limit effective hPSC-CM transplantation. Finally, we will end with a discussion of challenges facing clinical translation of these transplantation strategies (Figure 1), current clinical trials involving hPSC-CMs, and future considerations in the field of transplantation of hPSC-CM for cardiac regenerative therapies. HUMAN PLURIPOTENT STEM CELL SOURCES AND DIFFERENTIATION INTO CARDIOMYOCYTES Human embryonic stem cells (hESCs) are a form of hPSCs isolated from human blastocysts cultured for in vitro fertilization. hESCs are capable of unlimited self-renewal and can differentiate into derivatives of all three germ layers (9). The differentiation potential of hESCs has been harnessed to reproducibly generate cardiomyocytes (hESC-CMs) (10). As the production of hESCs involves the destruction of human embryos, there are many ethical controversies that accompany the use of hESCs (11). To overcome these ethical concerns, human induced pluripotent stem cells (hiPSCs) have been explored as a cardiomyocyte source. hiPSCs are reprogrammed somatic cells with the capacity to differentiate into cells of all three embryonic germ layers. The concept behind the development of hiPSCs was that the genes that allow a cell to maintain its pluripotency could be overexpressed in a somatic cell and reprogram it to an ESC-like state (12). Viral vectors (12,13) as well as recombinant proteins (14) and micro RNAs (15,16) have been used to reprogram adult human cells to a pluripotent state. The major methods to derive CMs from hPSCs are embryoid body differentiation, monolayer differentiation, and inductive differentiation (17). Common among all of these methods is the principle of mimicking endogenous embryonic cardiovascular development, including modulation of Wnt, Activin/Nodal, TGF-β, and BMP signaling pathways (18)(19)(20)(21). Currently, hPSC-CM purity following differentiation can reach over 90% (18,19,21). The phenotype of hPSC-CMs resembles that of fetal CMs. For instance, they are morphologically small, spontaneously beat, lack T-tubules, and have underdeveloped and inefficient calcium handling (22). Developments in methods for differentiation and culture are working toward the goal of producing hPSC-CMs with a more mature phenotype, as will be discussed later in this review. TRANSPLANTATION STRATEGIES Delivery routes for cardiac cell therapies have included intravenous injection, intramyocardial injection, intracoronary injection, intrapericardial transplantation, and epicardial patches. Each of these methods have their own strengths and weaknesses regarding cell retention and functional outcomes (23). For hPSC-CM transplantation, intramyocardial injection and epicardial patches have been the most popular delivery routes in pre-clinical studies and first-in-human clinical trials. Therefore, we will focus on these two transplantation strategies in this review. Intramyocardial Injection Early studies in the transplantation of hPSC-CM involved intramyocardial injection of single cell suspensions in mouse (24), rat (25), guinea pig (26), and swine models (27). Although hPSC-CMs demonstrated the ability of to partially remuscularize the animal hearts, cell retention and survival rates were low, and there was insufficient evidence of functional integration. To improve hPSC-CM survival post-transplantation, Murry et al. 
developed a pro-survival cocktail that led to enhanced cell survival after transplantation, robust cardiac remuscularization, and functional improvement in both small (28)(29)(30)(31)(32) and large (33) animal models of ischemic injury. Murry's group later showed that hPSC-CM injection in a non-human primate model of MI results in extensive remuscularization and electromechanical coupling of grafted cells to host myocardium (33,34). They further confirmed the ability of the engrafted hPSC-CMs to restore function in the non-human primate heart by demonstrating improved left ventricular ejection fraction. However, they also observed transient graft-associated ventricular arrhythmias, which was attributed to the ectopic pacemaker activity of the engrafted hPSC-CMs (33,34). To aid in cell retention following engraftment of hPSC-CM, recent studies have explored injectable three-dimensional hPSC-CM microtissues to provide critical cell-cell interactions and reduce anoikis. For example, Moon et al. demonstrated reduced fibrosis, improved fractional shortening, and prolonged survival of 5-10 cell hPSC-CM aggregates injected into infarcted rat hearts (35). Larger scale hPSC-CM spheroids containing 200,000 cells each have also been implemented to promote improvement in fractional shortening and engraftment rates following infarction in a murine model (36). Spheroids consisting of hPSC-CMs have also been implanted into a porcine model of heart failure, leading to functional improvement (37). However, graft-associated arrhythmias were observed in the swine transplanted with hiPSC-CM spheroids. Epicardial Patches Epicardial patches refer to engineered heart tissues that are attached to the outer surface of the heart, usually adjacent to the infarct region. In addition to providing mechanical support, epicardial patches function as a scaffold to provide cell-ECM interactions that promote hPSC-CM survival and engraftment post-transplantation as well as secretion of cardioprotective paracrine factors (38,39). For example, rodent models of chronic ischemia have been treated with epicardial patches FIGURE 1 | hPSC-CMs are differentiated from hiPSCs and hESCs and transplanted into the infarcted heart through intramyocardial injection or epicardial patches as a cardiac regenerative therapy. Following transplantation, regeneration is driven by paracrine effects of and remuscularization of myocardial tissue by engrafted hPSC-CMs. However, challenges persist that limit successful transplantation of hPSC-CMs and will need to be addressed to achieve effective clinical translation. and demonstrated long-term retention of grafts (40). Despite this progress, patches are not immediately perfused posttransplantation and can be isolated from host myocardium by a layer of fibrotic tissue, limiting nutrient diffusion to cells within the construct post-transplantation (40). To address this, porous patches seeded with hPSC-CMs have been investigated to examine whether the porous nature of the patch would allow sufficient nutrient and oxygen exchange to engrafted cardiomyocytes (41). Munarin et al. have also recently demonstrated that the incorporation of alginate microspheres containing angiogenic factors in hPSC-CM scaffolds could lead to enhanced host vasculature infiltration into the scaffolds and improved cell survival when implanted in a rodent model of acute MI (42). 
To improve vascular integration with host myocardium, vascular cells (i.e., endothelial cells) have been incorporated into epicardial patches with hPSC-CMs. Biodegradable scaffolds seeded with a triculture of hPSC-CMs, human umbilical vein endothelial cells (HUVECs), and embryonic fibroblasts promoted graft vascularization and anastomosis with host coronary vasculature in rodent hearts (43). Ye et al. combined the use of biomaterials and multiple cell types to investigate a 3D fibrin patch loaded with the pro-survival factor insulinlike growth factor-1 (IGF-1)-encapsulated microspheres seeded with hPSC-CMs, endothelial cells (ECs), and smooth muscle cells (SMCs). When implanted in a porcine model of acute MI, all three cell types integrated with the host, and physiological improvements were observed in terms of improved left ventricle function, myocardial metabolism, and ventricular wall stress (44). Advances in engineered heart tissue have led to the fabrication of clinical scale human cardiac muscle patches (hCMP) consisting of 3D fibrin scaffolds seeded with hPSC-CMs, -ECs, and -SMCs (45,46). The hCMPs exhibited 10% engraftment at 4 weeks post-implantation and promoted significant improvement in cardiac function and reduction in wall stress and infarct size (46). Scaffold-free approaches have also been used to create epicardial patches. Cell sheet technology, developed by Okano et al., involves coating culture dishes with PNIPAAm, a thermo-responsive polymer, to release cells and produce cell sheets upon changing temperature (47). This technique was recently used to fabricate cardiac tissue sheets from hPSCs, which were then implanted into small and large animal injury models to demonstrate their therapeutic potential (48)(49)(50)(51). In addition, Murry et al. developed pre-vascularized cell sheets with enhanced survival and anastomosis with host vasculature upon transplantation in healthy rodent hearts (52). MECHANISMS OF IMPROVING CARDIAC FUNCTION Remuscularization A major goal of cardiac regenerative medicine is to remuscularize the infarcted myocardium, restoring the muscle that was lost to ischemic injury (53). Intramyocardial injection of hPSC-CMs allows the engrafted hPSC-CMs to integrate with host myocardium and directly contribute to contractile function. Functional integration has been evidenced by the formation of gap junctions between host and engrafted cardiomyocytes in various small (28,29,39) and large (33,34) animal models. Epicardial patches can improve hPSC-CM engraftment, provide partial remuscularization to infarcted myocardium, and augment left ventricular function in a dose-dependent manner (54). However, the fibrotic tissue between the patch and myocardium can reduce long-term survival of the hPSC-CMs and prohibit the formation of electromechanical junctions between the engrafted hPSC-CMs and host myocardium, leading to unsynchronized contractions (29). Paracrine Effects In many instances of intramyocardial hPSC-CM transplantation, functional recovery has occurred even without significant hPSC-CM engraftment, leading researchers to hypothesize that paracrine factors (e.g., cytokines, extracellular vesicles, etc.) released by the transplanted cells are partially responsible for improvements in damaged myocardium. This concept was explored through single-cell profiling of hPSC-CMs following their transplantation in a murine acute MI model. 
Left ventricular function was improved despite limited engraftment, and hPSC-CMs were found to release high levels of proangiogenic and anti-apoptotic factors, suggesting functional benefits came from paracrine activity (55). This is further supported by similar functional recovery obtained by injection of hPSC-cardiac cells and hPSC-cardiac cell-secreted exosomes into infarcted porcine hearts (56). Cardioprotective microRNAs have been identified in hiPSC-CM-derived extracellular vesicles, and extended delivery via a hydrogel patch improved cardiac recovery (57). Due to subnormal formation of electromechanical junctions with host myocardium, epicardial patches typically repair injured hearts through mechanical support and paracrine effects. Given the fibrotic separation, vascular integration between epicardial patches and host myocardium may play a critical role in transporting patch-derived paracrine factors into myocardium (39,41). CHALLENGES TO IMPROVE hPSC-CM TRANSPLANTATION STRATEGIES Although progress has been made in the field of hPSC-CM transplantation, challenges still face transplantation and clinical translation of hPSC-CM therapies. In the next two chapters, we will first discuss the challenges that face the development of successful hPSC-CM transplantation techniques and then the outstanding challenges that limit safe and effective clinical translation of these techniques. Immune Rejection Transplantation of allogenic cells or tissues can elicit an immune response that ultimately leads to graft rejection and can have harmful consequences for the transplant recipient. Solutions include major histocompatibility (MHC)-matching and the production of hPSC banks (58). Shiba et al. performed an MHC matching study in which they transplanted allogenic non-human primate PSC-CMs 14 days after injury in a cynomolgus monkey model of MI. They observed improved cardiac function, along with electrical coupling with the host myocardium and no evidence of immune rejection in the MHCmatched PSC-CM group, suggesting the safety of transplanting MHC-matched, donor-derived hPSC-CMs in humans (59). Eventually, autologous transplantation of hPSC-CMs would be ideal and hiPSC-CMs, in particular, offer a promising source of patient-derived cells. However, manufacturing challenges must be overcome to make autologous hPSC-CM transplantation practical for clinical use. Cell Survival and Retention Low cell survival and retention after transplantation is a central obstacle in the development of effective hPSC-CM-based cardiac regenerative therapy (60,61). To improve survival of intramyocardial injected hPSC-CM single cells, a pro-survival cocktail for injection was developed to address common causes of graft death (62). A recent study found that co-transplantation of hiPSC-CMs with ready-made microvessels from adipose tissue resulted in a six-fold improvement in hiPSC-CM cell survival (63). To promote cell survival in epicardial patches, pre-vascularization strategies have been explored to promote anastomosis of the patches with host vasculature (64,65). Going forward, novel bioengineering approaches (e.g., biomaterials and cellular engineering) could improve hPSC-CM retention (23). Electromechanical Integration of the Graft Due to the wound healing response following MI and intramyocardial injections, fibrosis develops around transplanted hPSC-CMs. This affects signal propagation and proper electromechanical integration of the graft, leading to arrhythmias (66). 
In studies of hPSC-CM transplantation, intramyocardial engraftment into non-human primates (33,59) and porcine models (67) was associated with transient ventricular arrhythmias (68). To solve these issues, conductive scaffolds can be used to aid in signal propagation (66). Furthermore, engrafted hPSC-CMs have an immature phenotype associated with spontaneous beating, which will affect the electrical signaling in the heart (68). To decrease the presence of arrhythmias, hPSC-CM maturation and ventricular subtype-specific differentiation protocols would be useful to eliminate pacemaker-like activity from engrafted cells (22,34). Epicardial transplantation of hPSC-CM patches has not been shown to elicit arrhythmias in guinea pig (69) and porcine (46) hearts. However, this could be due to fibrotic isolation of the graft and lack of electromechanical coupling with host myocardium (70).

hPSC-CM Maturation

As mentioned, hPSC-CMs have an immature phenotype. Maturation of hPSC-CMs involves physiological hypertrophy associated with organization of sarcomeric structure, along with the presence of T-tubules (71). hPSC-CM maturation also involves more efficient calcium handling, improved electrophysiological properties, and higher contractile force (72). Therefore, transplanted CMs with properties that more closely resemble adult myocardium would reduce the risk of arrhythmias and have improved contractile properties (73). Several methods have been investigated for maturation of hPSC-CMs, including long-term culture, changes in the culture substrate stiffness, electrical stimulation, and biochemical cues (73). Mechanical loading has also been used to stimulate maturation in iPSC-derived cardiac tissue (74,75). Additionally, tissue engineering methods have been employed to promote maturation. Engineered heart tissue made from a co-culture of hESC-CMs and hESC-derived epicardium promoted hESC-CM maturation in terms of enhanced contractility, myofibril structure, and calcium handling (76). Electrical training of hPSC-CMs in a three-dimensional culture system has also contributed to advanced morphological maturation of hiPSC-CMs (77). Three-dimensional culture containing multiple cell types has also been shown to promote a more mature phenotype of hiPSC-CMs (78,79).

Challenges in Clinical Translation of hPSC-CMs

There are several safety concerns in the clinical use of hPSC-CM treatments. In addition to potential tumorigenicity and immune rejection, a major roadblock for intramyocardial injection is hPSC-CM graft-associated arrhythmias. Recent evidence has demonstrated the feasibility of pharmacological therapy for hPSC-CM-induced arrhythmias after intramyocardial injection (80). Arrhythmia risk may increase with graft size and, therefore, thorough cell dose-response studies are needed. While studies with hPSC-CM epicardial patches have mostly indicated no arrhythmic burden, the long-term effects of their subnormal electromechanical integration are unclear (81). Most cardiac injury models to date can be classified as acute or subacute MI, with transplantation occurring within minutes to days after infarction. In a clinical setting, hPSC-CM therapy would often be performed months to years after MI as a last resort in patients with chronic heart failure (68). While hPSC-CM transplantation at 2 weeks post-MI can improve cardiac function in rats (82), transplantation at 1 month post-MI showed no functional benefit in rats (83) or guinea pigs (84). Sawa et al.
showed that hPSC-CM cell sheet transplantation 1 month post-MI can improve cardiac function in swine, but there was no evidence of graft-host electromechanical integration and very few cells survived long-term (50), which could be attributed to the established fibrotic environment in chronic MI. These discrepancies necessitate further evaluation of animal models of chronic heart failure to determine the potential of hPSC-CM transplantation in a more clinically applicable setting. Scalable manufacturing of clinical-grade hPSC-CMs is also a serious challenge for clinical use and, therefore, several recent studies have focused on large-scale production of clinical-grade hPSC-CMs. Master iPSC cell banks have been developed for clinically compliant sourcing of PSC-derived cells under current good manufacturing practice (cGMP) (85). To increase cell production, PSC aggregate culture and differentiation systems that produce 10⁹ hPSC-CMs in a 1 L flask have been developed (86). Serum-free (87) and human serum-based (54) construction protocols for engineered heart tissue (EHT) patches have also been developed to adapt to cGMP for clinical applications. Finally, there is a lack of consensus on the characterization and assessment of hPSC-CM differentiation and maturity (i.e., cell surface markers). Consistency in the assessment of hPSC-CM products is necessary to ensure their quality, reproducibility, and safety for use in humans. To this end, an unbiased integrative proteomics approach could offer comprehensive assessment of hPSC-cardiomyocyte maturation (88).

First In-Human Clinical Trials With hPSC-CMs

Despite the outstanding challenges in the field, first-in-human clinical trials have recently begun involving the transplantation of hPSC-CMs (Table 1). The first use of hPSC-CMs in humans took place in 2019 in Nanjing, China, and involved intramyocardial injection of hiPSC-CMs in patients with chronic ischemic cardiomyopathy (89). However, cell injection occurred alongside coronary artery bypass grafting, limiting the ability to delineate the therapeutic benefits of hiPSC-CM transplantation. In Japan, a trial at Osaka University is exploring transplantation of an allogeneic hiPSC-CM cell sheet as a sole therapy for ischemic cardiomyopathy (90). Heartseed Inc., a Japan-based biotechnology company led by Prof. Keiichi Fukuda, recently gained approval for a Phase I/II clinical trial of intramyocardial injection of three-dimensional hiPSC-CM spheres to treat heart failure. The largest trial to date has been registered in Germany at University Medical Center Goettingen, investigating the remuscularization capacity of engineered heart tissue containing hiPSC-CMs and stromal cells in patients with heart failure with reduced ejection fraction (HFrEF).

CONCLUSIONS AND FUTURE CONSIDERATIONS

Transplantation of hPSC-CMs has proven to be a viable strategy for cardiac regenerative therapies. Single-cell injection and tissue-level engineered constructs have served as the basis for promoting functional improvements in injured myocardium. Future research needs to focus on addressing the limitations currently facing the field, as discussed in this review. In particular, the development of a viable strategy to prevent graft-associated arrhythmia will have immediate clinical impact for intramyocardial injection of hPSC-CMs.
In addition, paracrine factors play a central role in hPSC-CM-mediated functional recoveries; therefore, developing methods to enhance the hPSC-CM cardioprotective secretome would have a significant impact on the field. Lastly, optimal doses of PSC-CMs for heart repair need to be determined for safe and effective application in humans. In summary, although the clinical translation of hPSC-CM transplantation faces several significant limitations, immense progress has been made in recent years in the development of potential strategies for hPSC-CM regenerative therapies. It has been proven that engrafted hPSC-CMs can make meaningful connections with host cardiomyocytes and provide paracrine factors that stimulate functional recovery of host myocardium. Furthermore, strategies for producing cells at a clinical scale have been explored, as well as methods to mitigate immune rejection, reduce the incidence of cardiac arrhythmias, and mature hPSC-CMs.

AUTHOR CONTRIBUTIONS

SS was responsible for manuscript development and primary authorship under the guidance of YM. RB contributed to organization and conceptualization of the work and was lead author on the section First In-Human Clinical Trials With hPSC-CMs. YM contributed to article organization and conceptualization. All authors contributed to the article and approved the submitted version.
A monotone scheme for G-equations with application to the convergence rate of robust central limit theorem

We propose a monotone approximation scheme for a class of fully nonlinear PDEs called G-equations. Such equations arise often in the characterization of G-distributed random variables in a sublinear expectation space. The proposed scheme is constructed recursively based on a piecewise constant approximation of the viscosity solution to the G-equation. We establish the convergence of the scheme and determine the convergence rate, using the comparison principles for both the scheme and the equation together with a mollification procedure. One of the main applications is to obtain the convergence rate of Peng's robust central limit theorem for the general situation.

Introduction

The theory of G-expectations (see [25,26,27,28]) is a natural generalization of classical probability theory in the presence of Knightian uncertainty. That is, random outcomes are evaluated, not using a single probability measure, but using the supremum over a range of possibly mutually singular probability measures. One of the fundamental results in the theory is the celebrated central limit theorem, dubbed the robust central limit theorem by Peng in [27]. It provides a theoretical foundation for the widely used G-distributed random variables in nonlinear probability and statistics. The theorem was first proved in [25] by applying the regularity theory of fully nonlinear PDEs (see [18] and [31]) to G-equations, the latter of which characterize G-distributed random variables. However, no convergence rate was derived in [25]. The corresponding convergence rate was subsequently obtained in [29] and [13] using Stein's method and more recently in [21] using a stochastic control method under various model assumptions. In this paper, we build a monotone approximation scheme for the G-equation, and determine its convergence rate by obtaining the error bounds between the approximate solution and the viscosity solution of the G-equation. This in turn provides the convergence rate for Peng's robust central limit theorem for the general situation. The new convergence rate improves all the existing ones obtained under different model assumptions in the literature. Moreover, different from [13], [21] and [29], our method is analytical and is developed under the framework of the monotone approximation schemes for viscosity solutions. Thus, it unveils an intrinsic connection between the convergence analysis of numerical schemes in PDEs and the central limit theorem in probability. It also introduces new tools from the numerical analysis for viscosity solutions to the study of G-expectations and especially its robust central limit theorem. Let (Ω, H, Ê) be a sublinear expectation space, supporting two d-dimensional random vectors X and Y. Recall that Ê is a sublinear expectation if it satisfies monotonicity, constant preserving, sub-additivity and positive homogeneity properties (see Chapter 1 in [27] or (3.1)-(3.4) in section 3 for further details). With the random vectors (X, Y) and the sublinear expectation Ê, we introduce the nonlinear function G: for (p, A) ∈ R d × S(d), with S(d) the set of symmetric d × d matrices,

G(p, A) := Ê[⟨p, Y⟩ + (1/2)⟨AX, X⟩], (1.1)

and consider the associated fully nonlinear parabolic PDE

∂ t u − G(D x u, D 2 x u) = 0, (t, x) ∈ (0, T] × R d , (1.2)

with initial condition

u(0, x) = φ(x), x ∈ R d . (1.3)

In [25,26,27,28], the PDE (1.2) is referred to as the G-equation, which is used to characterize G-distribution. More specifically, let (ξ, ζ) be a pair of G-distributed d-dimensional random vectors characterized by (1.2) under another sublinear expectation E (possibly different from Ê).
That is, for a, b ∈ R and (ξ̄, ζ̄) an independent copy of (ξ, ζ), the following identity holds in the sense of distribution:

(aξ + bξ̄, a²ζ + b²ζ̄) =d (√(a² + b²) ξ, (a² + b²) ζ).

Note that the existence of (ξ, ζ) is guaranteed by Proposition 4.2 of [25]. Then, it has been proved in Proposition 4.8 of [25] that (1.2)-(1.3) admits a unique viscosity solution u which admits the representation u(t, x) = E[φ(x + √ tξ + tζ)], (1.4) provided that the initial data φ satisfies some regularity condition. However, it is not clear how to explicitly solve (1.2)-(1.3) in order to characterize the G-distributed random vectors (ξ, ζ) except for some special cases, so a numerical scheme for (1.2)-(1.3) is needed. In this paper, we propose a numerical scheme to approximate the viscosity solution u of (1.2)-(1.3) by merely using the random vectors (X, Y) under Ê as input. Note that (X, Y) could follow arbitrary distributions. Our numerical scheme is inspired mainly by Krylov [21]. For ∆ ∈ (0, 1), we introduce u ∆ : [0, T] × R d → R recursively as

u ∆ (t, x) = φ(x) for t ∈ [0, ∆), and u ∆ (t, x) = Ê[u ∆ (t − ∆, x + √ ∆X + ∆Y)] for t ∈ [∆, T]. (1.5)

The above recursive approximation implies that, for any n ∈ N such that n∆ ≤ T and t ∈ [n∆, ((n+1)∆) ∧ T), u ∆ (t, ·) is constant in t, equal to u ∆ (n∆, ·), and at each grid time n∆ the approximation has a jump. The main result of the paper is proving the convergence of u ∆ to u and determining its convergence rate. For this, we impose throughout the paper a set of standing conditions (Assumption 1.1) requiring, in brief: (i) Hölder regularity and boundedness from below of the initial data φ; (ii) moment conditions on X and Y together with the absence of mean uncertainty for X; and (iii) independence between X and Y. We make some comments on the above assumptions. Remark 1.2 Assumptions (i) and (ii) are standard in the (robust) central limit theorem literature. The regularity of the initial condition φ implies the regularity of the viscosity solution u (see Lemma 2.1). The bounded-from-below property of φ guarantees the Fatou property of Ê (see (3.5) or Lemma 2.6 in [9]), which will in turn be used to establish an upper bound for the approximation error (see (4.8) in section 4.3). On the other hand, the moment conditions on X and Y are commonly used in the classical central limit theorem. In our setting, they are used to derive the consistency error estimates in section 3. Assumption (iii) is crucial for establishing the regularity of the approximate solution u ∆ (see Lemma 2.2). The regularity of u ∆ will be used in the convergence of the monotone scheme in section 3.3 and the estimates for the mollifiers in sections 4.2 and 4.3. Nevertheless, Assumption (iii) could be relaxed if the viscosity solution u turns out to be a classical solution, in which case the regularity of u ∆ is not needed (see section 4.1). Under the above assumptions, we prove the following results about the convergence of u ∆ to u and the corresponding convergence rate. Theorem 1.3 Suppose that Assumption 1.1(i)-(iii) are satisfied. Then, the following assertions hold: (i) u ∆ converges to u locally uniformly as ∆ → 0; (ii) the approximation error u ∆ − u is bounded above and below by a constant multiple of ∆ β/6 , where β is the Hölder exponent of φ. Assertion (i) is proved in section 3.3 and assertion (ii) is proved in sections 4.2 and 4.3. We prove them under the framework of monotone approximation schemes for viscosity solutions. The first step is to rewrite the recursive approximation (1.5) as a monotone scheme, and then derive the key properties for the monotone scheme in section 3. It is precisely where the four axioms of the sublinear expectation Ê are used in an essential way. Using the consistency error estimates derived in section 3.1 and the comparison principle for the approximation scheme established in section 3.2, we obtain a lower bound for the approximation error by a mollification procedure.
The upper bound for the approximation error is further obtained by interchanging the roles of the monotone approximation scheme and the original G-equation. This depends crucially on the regularity property of the approximation solution established in section 2. Monotone approximation schemes for viscosity solutions were first studied by Barles and Souganidis [4], who showed that any monotone, stable and consistent approximation scheme converges to the correct solution, provided that there exists a comparison principle for the limiting equation. The corresponding convergence rate had been an open problem for a long time until the late 1990s when Krylov introduced the shaking coefficients technique to construct a sequence of smooth subsolutions/supersolutions in [19] and [20]. This technique was further developed by Barles and Jakobsen in a sequence of papers (see [3] and [17] and more references therein), and has recently been applied to solve various problems (see, among others, [5], [7], [12], [14] and [16]). Krylov's technique depends crucially on the convexity/concavity of the underlying equation with respect to its terms. As a result, unless the approximate solution has enough regularity (so one can interchange the roles of the approximation scheme and the original equation), the shaking coefficients technique only gives either an upper or a lower bound for the approximation error, but not both. A further breakthrough was made by Barles and Jakobsen in [1] and [2], who combined the ideas of optimal switching approximation of Hamilton-Jacobi-Bellman equations with the shaking coefficients technique. They obtained both upper and lower bounds of the error estimate, but with a lower convergence rate due to the introduction of another approximation layer. See also [8] for its recent development in a bounded domain without any convexity/concavity assumptions. In the setup of G-equations, since there are no variable coefficients to shake in order to apply the mollification procedure to construct the smooth subsolutions/supersolutions, the corresponding convergence rate for the approximation solution to the viscosity solution turns out to be faster than the ones in the PDE literature (in our case it is β/6). On the other hand, by establishing almost the same regularity property for the approximation solution u ∆ as for the viscosity solution u, we are able to interchange the roles of the G-equation and its approximation scheme, and thus obtain a symmetric upper bound and lower bound for the approximation error. One of the main applications of Theorem 1.3 is the derivation of the convergence rate for the robust central limit theorem, which is discussed in section 5. To illustrate how it works, we provide some preliminary informal arguments to highlight the main ideas and build intuition. Consider d = 1 for simplicity. If we replace the sublinear expectation Ê with the linear expectation E and let {(X i , Y i )} i≥1 be a sequence of i.i.d. copies of (X, Y) such that E[X] = 0, then the recursive approximation (1.5) reduces to u ∆ (n∆, x) = E[φ(x + √ ∆(X 1 + · · · + X n ) + ∆(Y 1 + · · · + Y n ))]. On the other hand, the nonlinear function G defined in (1.1) becomes linear, and the Feynman-Kac formula then implies that u(t, x) = E[φ(x + √ tξ + tζ)], where ξ is normal with mean zero and variance E[X²] and ζ = E[Y] is deterministic. Taking ∆ = 1/n and using Theorem 1.3, we obtain the convergence of E[φ((X 1 + · · · + X n )/√ n + (Y 1 + · · · + Y n )/n)] to E[φ(ξ + ζ)], which is precisely the classical central limit theorem (for ξ) and law of large numbers (for ζ). Theorem 1.3 may have potential applications to other problems in G-expectations. To name a few, it could be applied to derive the convergence rates for generalized robust central limit theorems as considered in [6], [22] and [32].
One of the key steps is to construct appropriate monotone approximation schemes corresponding to the sequence of involved random variables. Another application of Theorem 1.3 is to approximate G-expectations as in [10] [11] and [23], which needs the notion of G-Brownian motion developed in [26]. Finally, the convergence analysis of the monotone approximation schemes may also offer new insight for the numerical solutions of (backward) stochastic differential equations driven by G-Brownian motion (see [15] and [24]). The rest of the paper is organized as follows. Section 2 establishes the regularity properties of the viscosity solution and the approximation solution. Sections 3 and 4 are devoted to the monotone approximation scheme and its convergence rate. Section 5 then provides the convergence rate for the robust central limit theorem. Notation. Let δ ∈ (0, 1]. The (semi)norms of a function g : R d → R are defined as Let C lb (R d ) be the space of lower bounded continuous functions g on R d such that [g] 0 < ∞, C δ lb (R d ) be the space of lower bounded continuous functions g on R d such that Similarly, for a function f : Q T → R, we introduce its (semi)norms Furthermore, Finally, for S = R d or Q T , we denote by C ∞ lb (S) be the spaces of lower bounded continuous functions on S with bounded derivatives of any order. Regularity estimates We establish the space and time regularity properties of both u and u ∆ , which are crucial for proving the convergence of u ∆ to u and determining its convergence rate. In particular, the regularity of u ∆ will play a vital role in mollification procedures (see (4.3) in section 4.2 and (4.6) in section 4.3). Lemma 2.1 Suppose that Assumption 1.1(i) is satisfied. Then, for any Proof. Assertion (i) is a direct consequence of the representation formula (1.4), the subadditivity of E and the Hölder continuity of φ. To prove (ii), we may assume t ≤ s. Note that the semigroup property of u implies that In turn, the sub-additivity of E and (i) yield where we also used the positive homogeneity of E in the last equality. Lemma 2.2 Suppose that Assumption 1.1(i)-(iii) are satisfied. Then, for any Proof. We first establish the estimate (i) using induction. It is clear that the estimate holds for t ∈ [0, ∆). In general, suppose the estimate holds for t ∈ [(n − 1)∆, n∆) with n∆ ≤ T . Then, for t ∈ [n∆, ((n + 1)∆) ∧ T ), the sub-additivity ofÊ yields where we also used the constant preserving property in the last inequality. To establish the time regularity for u ∆ in (ii), we divide its proof into four steps. Step 1. We lift the Hölder exponent β to 2 in the estimate (i). Note that the Young's inequality implies that In turn, for α ≥ 0 and ε > 0, let x = α β and y = 1 ε , and we have Hence, it follows from (i) that Step 2. Define T ∆ := {k∆ : k ∈ N}. Then, for τ ∈ [0, T ) ∩ T ∆ and k ∈ N such that τ + k∆ ≤ T , we aim to show that with a and b given in (2.2). Indeed, it is clear that (2.3) holds for k = 0. Suppose (2.3) holds for k ∈ N, then, For the first sublinear expectation on the RHS of (2.4), we havê Since X has no mean uncertainty (cf. Assumption 1.1(ii)), it follows thatÊ[ x − y, For the second sublinear expectation on the RHS of (2.4), using the sub-additivity and positive homogeneity of G, we obtainÊ Since Y is also independent of X (cf. Assumption 1.1(iii)), it follows that On the other hand, the sub-additivity and constant preserving property ofÊ imply that Combining (2.4)-(2.6), we have shown that (2.3) also holds for (k + 1). Step 3. 
We show that the estimate (ii) holds on τ ∈ T ∆ . Indeed, taking x = y in (2.3) and using G(0, 0) = 0, we obtain for any ε > 0. Minimizing the RHS of the above inequality over ε then yields Step 4. In general, for Similarly, we also have from which we then conclude. Remark 2.3 Note that u ∆ is a piecewise constant approximation of u, so it is not continuous in time (with jumps at the partition points τ ∈ T ∆ ). The discontinuity leads to the additional term ∆ β/2 in the time regularity of u ∆ . Such type of time regularity property also appears in Lemma 2.2 of [21] in a stochastic control setting. Our regularity result could be regarded as a generalization of [21] to the sublinear expectation setting. A monotone approximation scheme for the G-equation The proof of Theorem 1.3 is based on the monotone schemes for viscosity solutions, the framework of which was first introduced by Barles and Souganidis [4]. Hence, we first rewrite the recursive approximation (1.5) as a monotone scheme, and then derive its consistency error estimates. Recall that C lb (R d ) is the space of lower bounded continuous functions on R d . We define a forward operator on C lb (R d ) as Then, from the properties of the sublinear expectationÊ, we immediately deduce that the forward operator S(∆) satisfies Note that (iii) and (iv) imply that S(∆)ψ is convex in ψ. On the other hand, the lower boundedness of ψ guarantee the Fatou's property (see Lemma 2.6 in [9]): Let ψ n ∈ C lb (R d ) converges uniformly to ψ, then The following error estimates play a vital rule to derive the consistency error estimates for the monotone approximation scheme introduced in section 3.1 (see Proposition 3.2(iii)). where the constant C = M 2+α Proof. We only consider the case d = 1, since the general case follows along similar albeit more complicated arguments. Note that for any x ∈ R, In case (i), In case (ii), |D 2 ψ(u) − D 2 ψ(x)| ≤ |D 3 ψ| 0 |u − x|, thus, Regarding term (II), for both cases (i) and (ii), we have Combine the two estimates for terms (I) and (II), we obtain, for any x ∈ R, that Similarly, we obtain lower bounds of E(∆, ψ), and this completes the proof. The monotone approximation scheme For ∆ ∈ (0, 1), we let Q ∆ T := (∆, T ] × R d . Then, based on (1.5) and S(∆), we introduce the approximation scheme as where S : From the properties of the forward operator S(∆) and Proposition 3.1, we obtain the following key properties of the approximation scheme (3.7). Proposition 3.2 Suppose that Assumption 1.1(ii) is satisfied. Then, the following properties hold for the approximation scheme S(∆, x, p, v) given in (3.7). (i) (Monotonicity) For any c 1 , c 2 ∈ R, and any function u ∈ C lb (R n ) with u ≤ v, Proof. Parts (i)-(ii) are immediate, so we only prove (iii). To this end, we split the consistency error into three parts. Specifically, for (t, x) ∈ Q ∆ T , where E is defined in (3.6). Here we only consider the case (b); the case (a) only requires minor modification that is similar to the proof of Proposition 3.1(i). For term (I), Proposition 3.1 (ii) yields Finally, for term (III), we have Combining estimates (3.11)-(3.13), we easily conclude. Remark 3.3 Due to the monotonicity property (i) in Proposition 3.2, the approximation scheme (3.7) is also referred to as the monotone (approximation) scheme in the sequel. 
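To make the forward operator and the resulting recursion concrete, the following sketch implements the scheme in dimension d = 1 on a spatial grid, approximating the sublinear expectation Ê by a maximum over a small finite family of laws for (X, Y): a centered normal X with volatility uncertainty (so there is no mean uncertainty, consistent with Assumption 1.1(ii)) and a deterministic Y with drift uncertainty. The scenario set, the grid, the horizon and the test function phi are illustrative choices only and are not taken from the paper.

import numpy as np

# Illustrative scenario set for the sublinear expectation (assumed, not from the paper):
# X ~ N(0, sigma^2) with sigma in SIGMAS, Y = mu deterministic with mu in MUS.
SIGMAS = (0.5, 1.0)
MUS = (-0.2, 0.2)

def phi(x):
    # lower bounded, Lipschitz (beta = 1) initial condition; an arbitrary example
    return np.abs(x)

def monotone_scheme(T=1.0, n_steps=100, x_grid=np.linspace(-10.0, 10.0, 2001), n_quad=15):
    """Piecewise constant approximation u_Delta(T, .) from the recursion
    u_Delta(t, x) = sup_(sigma, mu) E[ u_Delta(t - Delta, x + sqrt(Delta)*sigma*Z + Delta*mu) ],
    with Z ~ N(0, 1), evaluated by Gauss-Hermite quadrature and linear interpolation in x."""
    dt = T / n_steps
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)   # nodes/weights for the N(0, 1) expectation
    w = w / w.sum()
    u = phi(x_grid)                                      # u_Delta on [0, Delta)
    for _ in range(n_steps):
        layers = []
        for sigma in SIGMAS:
            for mu in MUS:
                shifted = x_grid[:, None] + np.sqrt(dt) * sigma * z[None, :] + dt * mu
                # linear interpolation of the previous layer (constant extrapolation at the boundary)
                layers.append(np.interp(shifted, x_grid, u) @ w)
        u = np.max(np.stack(layers), axis=0)             # the sublinear expectation = max over scenarios
    return x_grid, u

x, uT = monotone_scheme()
print("u_Delta(T, 0) approx", uT[np.argmin(np.abs(x))])

The monotonicity of such a discrete operator is inherited directly from the monotonicity of the expectation and of the pointwise maximum, which is exactly the property exploited in Proposition 3.2(i) and in the comparison argument below.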
Comparison principle for the monotone approximation scheme The monotonicity property (i) in Proposition 3.2 also implies the following comparison principle for the monotone scheme (3.7), which will be used throughout this paper. Most of the arguments follow from Proposition 2.9 of [16] (and Lemma 3.2 of [2]), but we highlight some key steps for the reader's convenience. Proposition 3.4 Suppose that Assumption 1.1(ii) is satisfied, and that (3.14) Proof. Without loss of generality, we assume that since, otherwise, the function w :=v T and by the monotonicity property (i) in Proposition 3.2, Thus, it suffices to prove v ≤v inQ T when ( T , we must have t n > ∆ for sufficiently large n. Then for such n, we use the monotonicity property (i) in Proposition 3.2 again to obtain T , we then must have b − δ n ∆ −1 ≤ 0. Thus, we deduce b ≤ 0 by letting n → ∞, which is a contradiction. Convergence of the monotone approximation scheme We prove Theorem 1.3(i) by showing the convergence of the approximate solution u ∆ to the viscosity solution u. It is based on the monotone schemes for viscosity solutions introduced by Barles-Souganidis in [4], where they show that any monotone, stable and consistent numerical scheme converges, provided that there exists a comparison principle for the limiting equation. To this end, define the semi-relaxed limits of u ∆ by We show that u is a viscosity subsolution of (1.2)-(1.3). A symmetric argument will imply that u is a viscosity supersolution of (1.2), which proves that u = u = u, so u ∆ converges to u locally uniformly. Let φ ∈ C ∞ (Q T ) and (t 0 , x 0 ) ∈ Q T be such that By the definition of u, there exists a sequence {(t n , x n , ∆ n )} n≥1 such that (t n , x n , ∆ n ) → (t 0 , x 0 , 0), and u ∆n (t n , x n ) → u(t 0 , x 0 ). Moreover, by extracting a subsequence if necessary, (t n , x n ) is also the maximum point of u ∆n − φ: Since t 0 > 0 and ∆ → 0, we have t n > ∆ n for large enough n. The monotonicity property (i) in Proposition 3.2 further implies that In turn, using the consistency property (iii) in Proposition 3.2 and letting (t n , x n , ∆) x φ(t 0 , x 0 )) ≤ 0. Next, we show that u(0, x) = φ(x) for x ∈ R d . Let {(t n , x n , ∆ n )} n≥1 be a sequence such that (t n , x n , ∆ n ) → (0, x, 0), and u ∆n (t n , x n ) → u(0, x). Convergence rate of the monotone approximation scheme In this section, we prove Theorem 1.3(ii) by establishing the (uniform) convergence rate of the approximate solution u ∆ to the viscosity solution u. We start with the approximation error in the first time intervalQ T \Q ∆ T , where u ∆ = φ = u| t=0 except at t = ∆. Therefore, the bound for the approximation error in this interval can be easily obtained by the regularity property of u in Lemmas 2.1. This is demonstrated in the following lemma. When t = ∆, we further obtain The conclusion then follows from Lemma 2.1(ii). A special case when the solution u is classical Before we derive a bound for the approximation error in the whole domainQ T and prove Theorem 1.3(ii), we first consider a special case when the solution u of (1.2)-(1.3) is a classical solution with enough regularity. In this case, the regularity of the approximate solution u ∆ is not required so the independence between X and Y in Assumption 1.1(iii) can be relaxed. Instead, a non-degeneracy assumption and more regularity on the initial data φ are imposed. Moreover, the initial data Proof. First, the monotonicity property ofÊ, the boundedness of φ and (1.4) yield that u is bounded. 
Lemma 2.1 further implies that u ∈ C 1/2,1 b (Q T ). In turn, the regularity theory of fully nonlinear PDEs implies the Hölder continuity of the derivatives of u, i.e. there exists a constant α ∈ (0, 1) depending only on d, σ 2 and M 2 X such that u ∈ C 1+ α 2 ,2+α b (Q ε T ) for any ε > 0 (see Theorem 4.5 in Appendix C of [27], or [18] and [31] for more details). The consistency error estimate (3.9) then yields On the other hand, since the comparison principle in Proposition 3.4 implies Since Assumption 4.2 clearly implies Assumption 1.1(i) (with β = 1), it follows from Lemma 4.1 that sup and the conclusion follows. Remark 4.4 If the solution u has more regularity, say u ∈ C ∞ b (R d ), then we can replace the consistency error estimate (3.9) in the above proof by (3.10), and obtain the convergence rate ∆ 1/2 . In general, since (1.2)-(1.3) only admits a unique viscosity solution u ∈ C β 2 ,β lb (Q T ) (cf. Lemma 2.1) due to the possible degeneracies (σ 2 = 0) of the equation, the above result does not hold. A natural idea is then to approximate the viscosity solution u by a sequence of smooth sub-and supersolutions u ε and, in turn, compare them with u ∆ using the comparison principles for the monotone scheme and the G-equation to obtain a lower and upper bound for the approximation error separately. We carry out this mollification procedure next. Lower bound for the approximation error For u ∈ C β 2 ,β lb (Q T ), we aim to derive a lower bound for the approximation error u − u ∆ within the whole domainQ T . To this end, for ε ∈ (0, 1), we extend the domain of the G-equation (1.2) from Q T to Q T +ε 2 := (0, T + ε 2 ] × R d and still denote the solution as u. Next, we regularize u by a standard mollification procedure: let ρ(t, x) be a nonnegative smooth function with support in (−1, 0) × B(0, 1) and mass 1, and introduce the sequence of mollifiers ρ ε for ε ∈ (0, 1), For (t, x) ∈Q T , we then define Since u ∈ C β 2 ,β lb (Q T +ε 2 ) (c.f. Lemma 2.1), standard properties of mollifiers imply that u ε ∈ C ∞ lb (Q T ), and, moreover, for positive integer i and multiindex j, We observe that the function u(t − τ, x − e) is still a viscosity solution of the G-equation (1.2) in Q T for any (τ, e) ∈ (−ε 2 , 0) × B(0, ε). On the other hand, a Riemann sum approximation shows that there exists a sequence {I n } n≥1 ∈ C lb (Q T ) such that each I n is a convex combination of the functions u(· − τ, · − e) for different (τ, e) ∈ (−ε 2 , 0) × B(0, ε) and that I n converges uniformly to u ε . Since the nonlinear term G(p, X) is convex in p and X, each I n becomes a supersolution of (1.2) in Q T . Using the stability of viscosity solutions, we deduce that u ε (t, x) is still a supersolution of (1.2) in Q T , namely, We are now in a position to establish a lower bound for the approximation error. Proof. Since u ε ∈ C ∞ lb (Q T ) is smooth with bounded derivatives of any order, we substitute u ε into the consistency error estimate (3.10) and use (4.5) and (4.4) to obtain for (t, x) ∈ Q ∆ T . The comparison principle in Proposition 3.4 then implies Next, using (4.3), we further obtain By choosing ε = ∆ 1/6 , we conclude that where the last inequality follows from the estimate (4.1) in Lemma 4.1. Upper bound for the approximation error To obtain an upper bound for the approximation error, we are not able to construct approximate smooth subsolutions of (1.2) due to the convexity of the function G. Instead, we interchange the roles of the G-equation (1.2) and the monotone scheme (3.7) (as in [14] and [17]). 
Since I ∆ n is lower bounded, we use Fatou's property of the sublinear expectationÊ (see (3.5)) to deduce that, for (t, x) ∈Q ∆ T , We are now in a position to establish an upper bound for the approximation error. Proof. We only need to show the above error estimate holds in Q ∆ T due to Lemma 4.1. Since u ∆ ε ∈ C ∞ lb (Q T ) is smooth with bounded derivatives of any order, we substitute u ∆ ε into the consistency error estimate (3.10) and use (4.8) and (4.7) to obtain . On the other hand, from (4.6) and the fact that u(∆, ·) − u ∆ (∆, ·) ≤ C∆ β/2 , we know that there exists a constant C such that v := u − C(ε β + ∆ β/2 ) is a (viscosity) solution of the G-equation (1.2) with v(∆, ·) ≤v(∆, ·). Thus, the comparison principle for the G-equation (see Theorem 6.3 in [25] Finally, using the estimates (4.6), we conclude that by choosing ε = ∆ 1/6 . Application to robust central limit theorem In this section, we apply Theorems 1.3 and 4.3 to derive the convergence rate of the celebrated robust central limit theorem, first introduced by Peng in [25]. For this, let {(X i , Y i )} i≥1 be a sequence of R d × R d -valued random vectors defined on (Ω, H,Ê) such that (X 1 , Y 1 ) = (X, Y ), Furthermore, assume that X and Y satisfy some moment conditions and there is no mean uncertainty for X, i.e.Ê[X] =Ê[−X] = 0. Then, Peng proved that the sequence {S n } n≥1 defined by for any test function satisfying linear growth condition. See Theorem 5.1 in [25] for its proof. Following Peng's seminal work, a lot of efforts have been made to further obtain the various convergence rates of (5.2) with additional model assumptions (see, for example, [13] [21] and [29]). However, the existing literature on the convergence rates of (5.2) assumes that either X i = 0 or Y i = 0 and, to be best of our knowledge, the convergence rate of (5.2) for the general situation (i.e. X i = 0 and Y i = 0) is still lacking. Our aim is therefore to obtain a general result about the convergence rate of (5.2) using Theorems 1.3 and 4.3. Theorem 5.1 Let {S n } n≥1 be given as in (5.1). Then, the following assertions hold. (i) If Assumption 1.1(i)-(iii) hold, then there exists a constant C depending only on T , C φ , β, Assumption 1.1(ii) and Assumption 4.2 hold, then there exists a constant α ∈ (0, 1) depending on d, σ 2 and M 2 X and a constant C depending only on T , C φ , α, M 2+α Proof. We claim that, for all n ∈ N such that n∆ ≤ T and x ∈ R d , If the representation formula (5.5) holds, then by letting ∆ = 1/n and x = 0, we obtain On the other hand, the representation formula (1.4) implies that Hence, the assertions (i) and (ii) follow from Theorems 1.3 and 4.3, respectively. We are left to show (5.5). We prove by induction on n. Note that the case n = 1 follows directly from (1.5). Next, we claim that for all n ∈ N and g ∈ C lb (R d ), and suppose (5.5) holds for n ∈ N such that n∆ ≤ T . Then, if (n + 1)∆ ≤ T , we use (5.6) to obtain In other words, (5.5) also holds for n + 1. Finally, to show (5.6), we prove again by induction on n. The case n = 1 follows from (X 2 , Y 2 ) d = (X 1 , Y 1 ). Suppose (5.6) holds for n ∈ N, then Y 1 ), ..., (X n , Y n )}, The RHS of the above equality further equals toÊ  where f (x) :=Ê g x + √ ∆X n+2 + ∆Y k+2 and the first equality follows from (X n+2 , Y n+2 ) d = (X n+1 , Y n+1 ). In turn, since (5.6) holds for n, we further havê (xi,yi)=(Xi,Yi),i=2,...,n+1 which completes the proof. 
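To summarize how the scheme's error bound translates into a rate for the robust central limit theorem, the following display is a sketch of the chain behind Theorem 5.1(i), written for horizon T = 1, step size ∆ = 1/n and x = 0; here S n denotes the normalized partial sums from (5.1), Ê and E are the two sublinear expectations of the paper, and the exponent β/6 is the rate discussed in the introduction, with C the generic constant from Theorem 1.3(ii).

% sketch of the chain behind Theorem 5.1(i)
\begin{align*}
\Big| \hat{\mathbb{E}}\big[\varphi(S_n)\big] - \mathbb{E}\big[\varphi(\xi+\zeta)\big] \Big|
 &= \Big| \hat{\mathbb{E}}\Big[\varphi\Big(\tfrac{1}{\sqrt{n}}\sum_{i=1}^{n}X_i
        + \tfrac{1}{n}\sum_{i=1}^{n}Y_i\Big)\Big]
        - \mathbb{E}\big[\varphi(\xi+\zeta)\big] \Big| \\
 &= \big| u^{\Delta}(1,0) - u(1,0) \big|
  \;\le\; C\,\Delta^{\beta/6} \;=\; C\,n^{-\beta/6},
\end{align*}
where the first equality spells out $S_n$, the second uses the representation formula (5.5) together with (1.4), and the final inequality is the error bound of Theorem 1.3(ii).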
In the following, we refine the convergence rates in Theorem 5.1 by imposing further model assumptions, and compare our results with the existing literature. For the latter use, we state the following property (see Proposition 4.1 in [25]) of the nonlinear function G(p, A) given by (1.1). Law of large numbers: Comparison with [13] When X = 0, ξ disappears in (5.2). Choosing ∆ = 1 n , by the representation formula (5.5), we have is independent of (Y 1 , ..., Y i ) for each i = 1, ..., n − 1. If we further let φ(y) = d Θ (y) = inf{|x − y| : x ∈ Θ}, where the subset Θ ⊂ R d is given in G(p, 0) in Proposition 5.2, then it follows that β = 1. In turn, Example 4.3 in [25] implies that Under Assumption 1.1, Theorem 5.1(i) then yields the law of large numbers with its convergence rate n −1/6 (nothing that d Θ (y) ≥ 0): Furthermore, using the above extra assumptions, we can obtain a better convergence rate by refining the consistency error estimates in Proposition 3.1 and Proposition 3.2 (iii). We sketch its proof in the appendix. Note that the above convergence rate is better than the convergence rate n −2/5 in Fang et al [13] for the law of large numbers under sublinear expectations (See Remark 2.3 in [13]). Proposition 5.5 Let {X n } n≥1 be such that X 1 = X ∈ R, X i+1 d = X i and X i+1 is independent of (X 1 , .. We sketch its proof in the appendix. Note that we assume the dimension d = 1 for simplicity; otherwise it is complicated to define X 3 . Finally, under the non-degeneracy Assumption 4.2, we obtain directly from Theorem 5.1(ii) that Proposition 5.6 Let {X n } n≥1 be such that X 1 = X, X i+1 d = X i and X i+1 is independent of (X 1 , ..., X i ) for each i ∈ N. IfÊ[X] =Ê[−X] = 0, σ 2 := −Ê[−|X| 2 ] > 0, and M 2 X =Ê[|X| 2 ] < ∞, then, for any test function φ ∈ C 1 b (R d ), there exists a constant α ∈ (0, 1) depending on d, σ 2 and M 2 X and a constant C depending only on T , C φ , α and M 2+α X such that This convergence rate is consistent with the convergence rate in Theorem 4.5 of Song [29], where the author considers a one-dimensional setting. and thus E(∆, ψ) ≤ C∆|D 4 ψ| 0 .
When it counts -- Econometric identification of the basic factor model based on GLT structures

Despite the popularity of factor models with sparse loading matrices, little attention has been given to formally address identifiability of these models beyond standard rotation-based identification such as the positive lower triangular (PLT) constraint. To fill this gap, we review the advantages of variance identification in sparse factor analysis and introduce the generalized lower triangular (GLT) structures. We show that the GLT assumption is an improvement over PLT without compromise: GLT is also unique but, unlike PLT, a non-restrictive assumption. Furthermore, we provide a simple counting rule for variance identification under GLT structures, and we demonstrate that within this model class the unknown number of common factors can be recovered in an exploratory factor analysis. Our methodology is illustrated for simulated data in the context of post-processing posterior draws in Bayesian sparse factor analysis.

Introduction

Ever since the pioneering work of Thurstone (1935, 1947), factor analysis has been a popular method to model the covariance matrix Ω of correlated, multivariate observations y t of dimension m, see e.g. Anderson (2003) for a comprehensive review. Assuming r uncorrelated factors, the basic factor model yields the representation Ω = ΛΛ ⊤ + Σ 0 , with an m × r factor loading matrix Λ and a diagonal matrix Σ 0 . The considerable reduction of the number of parameters compared to the m(m + 1)/2 elements of an unconstrained covariance matrix Ω is the main motivation for applying factor models to covariance estimation, especially if m is large; see, among many others, Fan et al. (2008) in finance and Forni et al. (2009) in economics. In addition, shrinkage estimation has been shown to lead to very efficient covariance estimation, see, for example, Kastner (2019) in Bayesian factor analysis and Ledoit and Wolf (2020) in a non-Bayesian context. In numerous applications, factor analysis reaches beyond covariance modelling. From the very beginning, the goal of factor analysis has been to extract the underlying loading matrix Λ to understand the driving forces behind the observed correlation between the features, see e.g. Owen and Wang (2016) for a recent review. However, also in this setting, the only source of information is the observed covariance of the data, making the decomposition of the covariance matrix Ω into the cross-covariance matrix ΛΛ ⊤ and the variance Σ 0 of the idiosyncratic errors more challenging than estimating only Ω itself. A huge literature, dating back to Koopmans and Reiersøl (1950) and Reiersøl (1950), has addressed this problem of identification which can be resolved only by imposing additional structure on the factor model. Anderson and Rubin (1956) considered identification as a two-step procedure, namely identification of Σ 0 from Ω (variance identification) and subsequent identification of Λ from ΛΛ ⊤ (solving rotational invariance). The most popular constraint in econometrics, statistics and machine learning for solving rotational invariance is to consider positive lower triangular loading matrices, see e.g. Geweke and Zhou (1996); West (2003); Lopes and West (2004), albeit other strategies have been put forward, see e.g. Neudecker (1990), Bai and Ng (2013), Aßmann et al. (2016), Chan et al.
(2018), and Williams (2020). Only a few papers have addressed variance identification (e.g. Bekker, 1989) and to the best of our knowledge so far no structure has been put forward that simultaneously addresses both identification problems. In this work, we discuss a new identification strategy based on generalized lower triangular (GLT) structures, see Figure 1 for illustration. This concept was originally introduced as part of an MCMC sampler for sparse Bayesian factor analysis where the number of factors is unknown in the (unpublished) work of Frühwirth-Schnatter and Lopes (2018). In the present paper, GLT structures are given a full and comprehensive mathematical treatment and are applied in Frühwirth-Schnatter et al. (2022) to develop an efficient reversible jump MCMC (RJMCMC) sampler for sparse Bayesian factor analysis under very general shrinkage priors. It will be proven that GLT structures simultaneously address rotational invariance and variance identification in factor models. Variance identification relies on a counting rule for the number of non-zero elements in the loading matrix Λ, which is a sufficient condition that extends previous work by Sato (1992).

Figure 1: Left: ordered sparse GLT matrix with six factors. Center: one of the 2⁶ · 6! corresponding unordered sparse GLT matrices. Right: a corresponding sparse PLT matrix, i.e. enforced non-zeros on the main diagonal. The pivot rows (l 1 , . . ., l 6 ) = (1, 3, 10, 11, 14, 17) are marked by triangles. Non-zero loadings are marked by circles, zero loadings are left blank.

In addition, we will show that GLT structures are useful in exploratory factor analysis where the factor dimension r is unknown. Identification of the number of factors in applied factor analysis is a notoriously difficult problem, with considerable ambiguity which method works best, be it BIC-type criteria (Bai and Ng, 2002), marginal likelihoods (Lopes and West, 2004), techniques from Bayesian nonparametrics involving infinite-dimensional factor models (Bhattacharya and Dunson, 2011; Ročková and George, 2017; Legramanti et al., 2020) or more heuristic procedures (Kaufmann and Schuhmacher, 2019). Imposing an unordered GLT structure in exploratory factor analysis allows to identify the true loading matrix Λ and the matrix Σ 0 and to easily spot all spurious columns in a possibly overfitting model. This strategy underlies the RJMCMC sampler of Frühwirth-Schnatter et al. (2022) to estimate the number of factors.

The paper is structured as follows. Section 2 reviews the role of identification in factor analysis using illustrative examples. Section 3 introduces GLT structures, proves identification for sparse GLT structures and shows that any unconstrained loading matrix has a unique representation as a GLT matrix. Section 4 addresses variance identification under GLT structures. Section 5 discusses exploratory factor analysis under unordered GLT structures, while Section 6 presents an illustrative application. Section 7 concludes.

The role of identification in factor analysis

Let y t = (y 1t , . . ., y mt ) ⊤ be an observation vector of m measurements, which is assumed to arise from a multivariate normal distribution, y t ∼ N m (0, Ω), with zero mean and covariance matrix Ω. In factor analysis, the correlation among the observations is assumed to be driven by a latent r-variate random variable
f t = (f 1t , . . ., f rt ) ⊤ , the so-called common factors, through the following observation equation:

y t = Λf t + ǫ t , (1)

where the m × r matrix Λ containing the factor loadings Λ ij is of full column rank, rk(Λ) = r, equal to the factor dimension r. In the present paper, we focus on the so-called basic factor model where the vector ǫ t = (ǫ 1t , . . ., ǫ mt ) ⊤ accounts for independent, idiosyncratic variation of each measurement and is distributed as ǫ t ∼ N m (0, Σ 0 ), with Σ 0 = Diag(σ 2 1 , . . ., σ 2 m ) being a positive definite diagonal matrix. The common factors are orthogonal, meaning that f t ∼ N r (0, I r ), and independent of ǫ t . In this case, the observation equation (1) implies the following covariance matrix Ω, when we integrate w.r.t. the latent common factors f t :

Ω = ΛΛ ⊤ + Σ 0 . (2)

Hence, all dependence among the measurements in y t is explained through the latent common factors and the off-diagonal elements of ΛΛ ⊤ define the marginal covariance between any two measurements y i 1 ,t and y i 2 ,t :

Cov(y i 1 ,t , y i 2 ,t ) = Λ i 1 ,• Λ ⊤ i 2 ,• , i 1 ≠ i 2 , (3)

where Λ i,• is the ith row of Λ. Consequently, we will refer to ΛΛ ⊤ as the cross-covariance matrix. Since the number of factors, r, is often considerably smaller than the number of measurements, m, (2) can be seen as a parsimonious representation of the dependence between the measurements, often with considerably fewer parameters in Λ than the m(m − 1)/2 off-diagonal elements in an unconstrained covariance matrix Ω. Since the factors f t are unobserved, the only information available to estimate Λ and Σ 0 is the covariance matrix Ω. A rigorous approach toward identification of factor models was first offered by Reiersøl (1950) and Anderson and Rubin (1956). Identification in the context of a basic factor model means the following. For any pair (β, Σ), where β is an m × r matrix and Σ is a positive definite diagonal matrix, that satisfies (2), i.e.:

ββ ⊤ + Σ = ΛΛ ⊤ + Σ 0 = Ω, (4)

it follows that β = Λ and Σ = Σ 0 . Note that both parameter pairs imply the same Gaussian distribution y t ∼ N m (0, Ω) for every possible realisation y t . Anderson and Rubin (1956) considered identification as a two-step procedure. The first step is identification of the variance decomposition, i.e. identification of Σ 0 from (2), which implies identification of ΛΛ ⊤ . The second step is subsequent identification of Λ from ΛΛ ⊤ , also known as solving the rotational invariance problem. The literature on factor analysis often reduces identification of factor models to the second problem, however as we will argue in the present paper, variance identification is equally important.
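As a concrete illustration of the observation equation (1) and the implied covariance (2), the following sketch simulates a small basic factor model and checks that the sample covariance of y t approaches ΛΛ ⊤ + Σ 0 ; the particular loading matrix, idiosyncratic variances and sample size are arbitrary choices for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# illustrative basic factor model with m = 6 measurements and r = 2 dedicated factors
Lambda = np.array([[0.9, 0.0],
                   [0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.6],
                   [0.0, 0.5],
                   [0.0, 0.4]])
sigma2 = np.array([0.3, 0.4, 0.5, 0.3, 0.4, 0.5])    # idiosyncratic variances on the diagonal of Sigma_0
Sigma0 = np.diag(sigma2)

T = 100_000
f = rng.standard_normal((T, 2))                      # orthogonal common factors, f_t ~ N(0, I_r)
eps = rng.standard_normal((T, 6)) * np.sqrt(sigma2)  # idiosyncratic errors, eps_t ~ N(0, Sigma_0)
y = f @ Lambda.T + eps                               # observation equation (1): y_t = Lambda f_t + eps_t

Omega_model = Lambda @ Lambda.T + Sigma0             # implied covariance (2)
Omega_sample = np.cov(y, rowvar=False)
print(np.max(np.abs(Omega_sample - Omega_model)))    # small for large T

Only Ω itself is estimable from the data in this way; recovering Λ and Σ 0 from Ω is exactly the identification problem discussed next.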
Rotational invariance. Let us assume for the moment that ΛΛ ⊤ is identified. Consider, for further illustration, the following factor loading matrix Λ and a loading matrix β = ΛP αb defined as a rotation of Λ: For any α ∈ [0, 2π) and b ∈ {0, 1}, the factor loading matrix β yields the same cross-covariance matrix for y t as Λ, as is easily verified:

ββ ⊤ = ΛP αb P ⊤ αb Λ ⊤ = ΛΛ ⊤ . (6)

The rotational invariance apparent in (6) holds more generally for any basic factor model (1). Take any arbitrary r × r rotation matrix P (i.e. PP ⊤ = I r ) and define the basic factor model y t = βf ⋆ t + ǫ t , where β = ΛP and f ⋆ t = P ⊤ f t . Then both models imply the same covariance Ω, given by (2). Hence, without imposing further constraints, Λ is in general not identified from the cross-covariance matrix ΛΛ ⊤ . If interest lies in interpreting the factors through the factor loading matrix Λ, rotational invariance has to be resolved. The usual way of dealing with rotational invariance is to constrain Λ in such a way that the only possible rotation is the identity P = I r . For orthogonal factors at least r(r − 1)/2 restrictions on the elements of Λ are needed to eliminate rotational indeterminacy (Anderson and Rubin, 1956). The most popular constraints are positive lower triangular (PLT) loading matrices, where the upper triangular part is constrained to be zero and the main diagonal elements Λ 11 , . . ., Λ rr of Λ are strictly positive, see Figure 1 for illustration. Despite its popularity, the PLT structure is restrictive, as outlined already by Jöreskog (1969). Let ββ ⊤ be an arbitrary cross-covariance matrix with factor loading matrix β. A PLT representation of ββ ⊤ is possible iff a rotation matrix P exists such that β can be rotated into a PLT matrix Λ = βP. However, as example (5) illustrates, this is not necessarily the case. Obviously, Λ is not a PLT matrix, since Λ 22 = 0. All of the possible rotations β = ΛP αb have non-zero elements above the main diagonal and are not PLT matrices either. This example demonstrates that the PLT representation is restrictive. To circumvent this problem in example (5), one could reorder the measurements in an appropriate manner. However, in applied factor analysis, such an appropriate ordering is typically not known in advance and the choice of the first r measurements is an important modeling decision under PLT constraints, see e.g. Lopes and West (2004) and Carvalho et al. (2008). We discuss in Section 3 a new identification strategy to resolve rotational invariance in factor models based on the concept of generalized lower triangular (GLT) structures. Loosely speaking, GLT structures generalize PLT structures by freeing the position of the first non-zero factor loading in each column, see the loading matrix Λ in (5) and Figure 1 for an example. We show in Section 3.1 that a unique GLT structure Λ can be identified for any cross-covariance matrix ββ ⊤ , provided that variance identification holds and, consequently, ββ ⊤ itself is identified. Even if ββ ⊤ is obtained from a loading matrix β that does not take the form of a GLT structure, such as the matrix β in (5), we show in Section 3.3 that a unique orthogonal matrix G exists which represents β as a rotation β = ΛG of a unique GLT structure Λ, which we call rotation into GLT. Hence, the GLT representation is unrestrictive in the sense of Jöreskog (1969) and is, indeed, a new and generic way to resolve rotational invariance for any factor loading matrix.
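The rotation into GLT can be made constructive. The sketch below computes, for a full column rank loading matrix β, an orthogonal matrix G and a matrix Λ = βG of GLT form by sweeping through the rows and applying Householder reflections from the right. This is one possible implementation written for illustration, and it is not claimed to coincide with the construction given in Section 3.3 of the paper; the example matrix β is arbitrary.

import numpy as np

def rotate_into_glt(beta, tol=1e-10):
    """Return (Lam, G) with G orthogonal and Lam = beta @ G of GLT form:
    in every column the first non-zero entry (the pivot) is positive, and the
    pivot rows are strictly increasing across the columns."""
    B = np.array(beta, dtype=float, copy=True)
    m, r = B.shape
    G = np.eye(r)
    k = 0                                    # number of pivot rows found so far
    for i in range(m):
        if k == r:
            break
        v = B[i, k:]
        nv = np.linalg.norm(v)
        if nv <= tol:
            continue                         # row i adds no new direction, hence no pivot here
        # Householder reflection H (orthogonal, symmetric) with v @ H = (||v||, 0, ..., 0)
        u = v.copy()
        u[0] -= nv
        H = np.eye(r - k)
        if np.linalg.norm(u) > tol:
            H -= 2.0 * np.outer(u, u) / (u @ u)
        step = np.eye(r)
        step[k:, k:] = H
        B = B @ step
        G = G @ step
        k += 1
    return B, G

# quick check on an arbitrary full-rank loading matrix (illustrative values only)
rng = np.random.default_rng(1)
beta = rng.standard_normal((8, 3))
Lam, G = rotate_into_glt(beta)
print(np.allclose(G @ G.T, np.eye(3)))          # G is orthogonal
print(np.allclose(Lam @ Lam.T, beta @ beta.T))  # the cross-covariance is unchanged

Because G is orthogonal, ββ ⊤ = ΛΛ ⊤ , so the GLT representative carries the same cross-covariance while pinning down the rotation.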
Sparse factor loading matrices.The factor loading matrix Λ given in (5) is an example of a sparse loading matrix.While only a single zero loading would be needed to resolve rotational invariance, six zeros are present and each factor loads only on dedicated measurements.Such sparse loading matrices are generated by a binary indicator matrix δ of 0s and 1s of the same dimension as Λ, where Λ ij = 0 iff δ ij = 0, and Λ ij ∈ R is unconstrained otherwise.The binary matrix δ = I(Λ = 0), where the indicator function is applied element-wise, is called the sparsity matrix corresponding to Λ.The sparsity matrix δ contains a lot of information about the structure of Λ, see Figure 1 for illustration.The indicator matrix on the right hand side tells us that Λ obeys the PLT constraint.The fifth row of the left and center matrices contains only zeros, which tells us that observation y 5t is uncorrelated with the remaining observations, since Cov(y it , y 5t ) = 0 for all i = 5. Variance identification.Constraints that resolve rotational invariance typically take variance identification, i.e. identification of ΛΛ ⊤ , for granted, see e.g.Geweke and Zhou (1996).Variance identification refers to the problem that the idiosyncratic variances σ 2 1 , . . ., σ 2 m in Σ 0 are identified only from the diag-onal elements of Ω, as all other elements are independent of the σ 2 i s; see again (3).To achieve variance identification of σ 2 i from Ω ii = Λ i,• Λ ⊤ i,• + σ 2 i , all factor loadings have to be identified solely from the off-diagonal elements of Ω. Variance identification, however, is easily violated, as the following considerations illustrate. While the three factor loadings (λ 11 , λ 21 , λ 31 ) are still identified from the off-diagonal elements of Ω as before, variance identification of σ 2 4 and σ 2 5 fails.Since Cov(y 4t , y 5t ) = λ 42 λ 52 is the only non-zero element that depends on the loadings λ 42 and λ 52 , infinitely many different parameters (λ 42 , λ 52 , σ 2 4 , σ 2 5 ) imply the same covariance matrix Ω.From these considerations it is evident that a minimum of three non-zero loadings is necessary in each column to achieve variance identification, a condition which has been noted as early as Anderson and Rubin (1956).At the same time, this condition is not sufficient, as it is satisfied by the loading matrix β in (10), although variance identification does not hold.In general, variance identification is not straightforward to verify.We will introduce in Section 4.1 a new and convenient way to verify variance identification for GLT structures. The row deletion property.As explained above, we need to verify uniqueness of the variance decomposition, i.e. the identification of the idiosyncratic variances σ 2 1 , . . ., σ 2 m in Σ 0 from the covariance matrix Ω given in (2).The identification of Σ 0 guarantees that ΛΛ ⊤ is identified.The second step of identification is then to ensure uniqueness of the factor loadings, i.e. unique identification of Λ from ΛΛ ⊤ .To verify variance identification, we rely in the present paper on a condition known as row-deletion property. 
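Before turning to the row deletion property, the failure of variance identification discussed above can be made concrete numerically. The sketch below uses a hypothetical 5 × 2 loading matrix (not the paper's example) whose second column loads only on measurements 4 and 5; rescaling those two loadings and adjusting the two idiosyncratic variances produces a different parameter pair with exactly the same covariance matrix Ω.

```python
import numpy as np

# Second column loads only on rows 4 and 5, so variance identification fails there.
lam42, lam52 = 0.8, 0.5
Lam = np.array([[0.9, 0.0],
                [0.7, 0.0],
                [0.6, 0.0],
                [0.0, lam42],
                [0.0, lam52]])
Sig = np.diag([0.4, 0.4, 0.4, 0.5, 0.5])
Omega = Lam @ Lam.T + Sig

# Alternative solution: rescale the second column by c and compensate in the
# idiosyncratic variances of rows 4 and 5.
c = 1.3
Lam2 = Lam.copy()
Lam2[3, 1] = lam42 * c
Lam2[4, 1] = lam52 / c
Sig2 = Sig.copy()
Sig2[3, 3] += lam42**2 - (lam42 * c) ** 2
Sig2[4, 4] += lam52**2 - (lam52 / c) ** 2
Omega2 = Lam2 @ Lam2.T + Sig2

print(np.allclose(Omega, Omega2))  # True: two different parameter pairs, same Omega
```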
Definition 1 (Row deletion property AR (Anderson and Rubin, 1956)).An m × r factor loading matrix Λ satisfies the row-deletion property if the following condition is satisfied: whenever an arbitrary row is deleted from Λ, two disjoint submatrices of rank r remain.Anderson and Rubin (1956, Theorem 5.1) prove that the row-deletion property is a sufficient condition for the identification of ΛΛ ⊤ and Σ 0 from the marginal covariance matrix Ω given in (2).For any (not necessarily GLT) factor loading matrix Λ, the row deletion property AR can be trivially tested by a stepby-step analysis, where each single row of Λ is sequentially deleted and the two distinct submatrices are determined from examining the remaining matrix, as suggested e.g. by Hayashi and Marcoulides (2006).However, this procedure is inefficient and challenging in higher dimensions. Hence, it is helpful to have more structural conditions for verifying variance identification under the row deletion property AR.The literature provides several necessary conditions for the row deletion property AR that are based on counting the number of non-zero factor loadings in Λ. Anderson and Rubin (1956), for instance, prove the following necessary conditions for AR: for every nonsingular r-dimensional square matrix G, the matrix β = ΛG contains in each column at least 3 and in each pair of columns at least 5 nonzero factor loadings.Sato (1992, Theorem 3.3) extends these necessary conditions in the following way: every subset of 1 ≤ q ≤ r columns of β = ΛG contains at least 2q + 1 nonzero factor loadings for every nonsingular matrix G.We call this the 3579 counting rule for obvious reasons. For illustration, let us return to the examples in ( 5) and ( 10).First, apply the 3579 counting rule to the unrestricted matrix β in (10).Although the variance decomposition Ω = ββ ⊤ + Σ = ΛΛ ⊤ + Σ 0 , is not unique, the counting rules are not violated, since β has five non-zero rows except for the cases Only for these eight specific cases, which correspond to the trivial rotations , we find immediately that the counting rules are violated, since one of the two columns has only two non-zero elements.This example shows the need to check such counting rules not only for a single loading matrix β, but also for all rotations βP admissible under the chosen strategy toward rotational invariance.On the other hand, if we apply the 3579 counting rule to the unrestricted matrix β in (5), we find that the necessary counting rules are satisfied for all rotations βP αb .For this specific example, we have already verified explicitly that variance identification holds and one might wonder if, in general, the 3579 counting rule can lead to a sufficient criterion for variance identification under AR. Sufficient conditions for variance identification are hardly investigated in the literature.One exception is the popular factor analysis model where Λ takes the form of a dense PLT matrix, where all factor loadings on and below the main diagonal are left unrestricted and can take any in value in R. For this model, condition AR and hence variance identification holds, except for a set of measure 0, if the condition m ≥ 2r + 1 is satisfied.Conti et al. (2014) investigate identification of a dedicated factor model, where equation ( 1) is combined with correlated (oblique) factors, f t ∼ N r (0, R), and the factor loading matrix Λ has a perfect simple structure, i.e. 
each observation loads on at most one factor, as in ( 5) and (10); however, the exact position of the non-zero elements is unknown.They prove necessary and sufficient conditions that imply uniqueness of the variance decomposition as well as uniqueness of the factor loading matrix, namely: the correlation matrix R is of full rank (rk (R) = r) and each column of Λ contains at least three nonzero loadings. In the present paper, we build on and extend this previous work.We provide sufficient conditions for variance identification of a GLT structure Λ.These conditions are formulated as counting rules for the m × r sparsity matrix δ = I(β = 0) of β and are equivalent to the 3579 counting rules of Sato (1992, Theorem 3.3).More specifically, if the 3579 counting rule holds for the sparsity matrix δ of a GLT matrix Λ, then this is a sufficient condition for the row deletion property AR and consequently for variance identification, except for a set of measure 0. Identification of the number of factors.Identification of the number of factors is a notoriously difficult problem and analysing this problem from the view point of variance identification is helpful in understanding some fundamental difficulties.Assume that Ω has a representation as in (2) with r factors which is variance identified.Then, on the one hand, no equivalent representation exists with r ′ < r number of factors.On the other hand, as shown in Reiersøl (1950, Theorem 3.3), any such structure (Λ, Σ 0 ) creates solutions (β k , Σ k ) with m × k loading matrices β k of dimension k = r + 1, r + 2, . . ., m bigger than r and Σ k being a positive definite matrix different from Σ 0 which imply the same covariance matrix Ω as (Λ, Σ 0 ), i.e.: Furthermore, for any fixed k > r, infinitely many such solutions (β k , Σ k ) can be created that satisfy the decomposition (12) which, consequently, no longer is variance identified.This problem is prevalent regardless of the chosen strategy toward rotational invariance.For illustration, we return to example ( 5) and construct an equivalent solution for k = 3.While the first two columns of β 3 are equal to Λ, the third column is a so-called spurious factor with a single non-zero loading and Σ 3 is defined as follows: We can place the spurious factor loading β i3 in any row i and β i3 can take any value satisfying 0 < β 2 i3 < σ 2 i .It is easy to verify that any such pair (β 3 , Σ 3 ) indeed implies the same covariance matrix Ω as in (9).This ambiguity in an overfitting model renders the estimation of true number of factors r a challenging problem and leads to considerable uncertainty how to choose the number of factors in applied factor analysis.In Section 5, we follow up on this problem in more detail.An important necessary condition for k to be the true number of factors is that variance identification of Σ k in (12) holds.Therefore, the counting rules that we introduce in this paper will also be useful in cases where the true number of factors r is unknown. 
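The construction of an equivalent overfitting solution with a spurious column, as described around (13), is easy to reproduce numerically. The sketch below uses a hypothetical dedicated loading matrix with r = 2 factors (not the matrix in (5)), appends a spurious third column with a single non-zero loading, reduces the corresponding idiosyncratic variance, and verifies that the covariance matrix Ω is unchanged.

```python
import numpy as np

# Hypothetical true model with r = 2 dedicated factors (illustrative values only).
Lam = np.array([[0.9, 0.0],
                [0.8, 0.0],
                [0.7, 0.0],
                [0.0, 0.6],
                [0.0, 0.5],
                [0.0, 0.4]])
m, r = Lam.shape
Sig0 = np.diag(np.full(m, 0.8))
Omega = Lam @ Lam.T + Sig0

# Overfitting solution with k = r + 1: spurious column with a single non-zero
# loading beta_i3 in row i, 0 < beta_i3**2 < sigma_i**2, and a reduced variance there.
i = 2
beta_i3 = 0.5                                 # satisfies beta_i3**2 < Sig0[i, i]
beta3 = np.hstack([Lam, np.zeros((m, 1))])
beta3[i, r] = beta_i3
Sig3 = Sig0.copy()
Sig3[i, i] -= beta_i3**2

print(np.allclose(beta3 @ beta3.T + Sig3, Omega))  # True: same Omega with k = 3 "factors"
```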
Overfitting GLT structures. Finally, we investigate in Section 5 the class of potentially overfitting GLT structures where the matrix β k in (12) is constrained to be an unordered GLT structure. We apply results by Tumura and Sato (1980) to this class and show how easily spurious factors and the underlying true factor loading matrix Λ are identified under GLT structures, even if the model is overfitting. Our strategy relies on the concept of extended variance identification and the extended row deletion property introduced by Tumura and Sato (1980), where more than one row is deleted from the loading matrix. An extended counting rule will be introduced for the sparsity matrix of a GLT loading matrix β k in Section 4 which is useful in this context.

3 Solving rotational invariance through GLT structures

Ordered and unordered GLT structures

In this work, we introduce a new identification strategy to resolve rotational invariance based on the concept of generalized lower triangular (GLT) structures. First, we introduce the notion of pivot rows of a factor loading matrix Λ.

Definition 2 (Pivot rows). Consider an m × r factor loading matrix Λ with r non-zero columns. For each column j = 1, . . ., r of Λ, the pivot row l j is defined as the row index of the first non-zero factor loading in column j, i.e. Λ ij = 0, ∀ i < l j , and Λ l j ,j ≠ 0. The factor loading Λ l j ,j is called the leading factor loading of column j.

For PLT factor loading matrices the pivot rows lie on the main diagonal, i.e. (l 1 , . . ., l r ) = (1, . . ., r), and the leading factor loadings Λ jj > 0 are positive for all columns j = 1, . . ., r. GLT structures generalize the PLT constraint by freeing the pivot rows of a factor loading matrix Λ and allowing them to take arbitrary positions (l 1 , . . ., l r ), the only constraint being that the pivot rows are pairwise distinct. GLT structures contain PLT matrices as the special case where l j = j for j = 1, . . ., r. Our generalization is particularly useful if the ordering of the measurements y it is in conflict with the PLT assumption. Since Λ jj is allowed to be 0, measurements different from the first r ones may lead the factors. For each factor j, the leading variable is the response variable y l j ,t corresponding to the pivot row l j .

We will distinguish between two types of GLT structures, namely ordered and unordered GLT structures. The following definition introduces ordered GLT matrices. Unordered GLT structures will be motivated and defined below. Examples of ordered and unordered GLT matrices are displayed in Figure 1 for a model with r = 6 factors.

Definition 3 (Ordered GLT structures). An m × r factor loading matrix Λ with full column rank r has an ordered GLT structure if the pivot rows l 1 , . . ., l r of Λ are ordered, i.e. l 1 < . . . < l r , and the leading factor loadings are positive, i.e. Λ l j ,j > 0 for j = 1, . . ., r.

Evidently, imposing an ordered GLT structure resolves rotational invariance if the pivot rows are known. For any two ordered GLT matrices β and Λ with identical pivot rows l 1 , . . ., l r , the identity β = ΛP evidently holds iff P = I r . In practice, the pivot rows l 1 , . . ., l r of a GLT structure are unknown and need to be identified from the marginal covariance matrix Ω for a given number of factors r. Given variance identification, i.e. assuming that the cross-covariance matrix ΛΛ ⊤ is identified, a particularly important issue for the identification of a GLT factor model is whether Λ is uniquely identified from ΛΛ ⊤ if the pivot rows l 1 , . . ., l r are unknown.
Non-trivial rotations β = ΛP of a loading matrix Λ with pivot rows l 1 , . . ., l r might exist such that ββ ⊤ = ΛΛ ⊤ , while the pivot rows of β are different from the pivot rows of Λ. Very reassuringly, Theorem 1 shows that this is not the case: not only the pivot rows, but the entire loading matrices Λ and β are identical, if ΛΛ ⊤ = ββ ⊤ (see Appendix A for a proof).

Definition 4 introduces, as an extension of Definition 3, unordered GLT structures under which Λ is identified from ΛΛ ⊤ only up to signed permutations. A signed permutation permutes the columns of the factor loading matrix Λ and switches the sign of all factor loadings in any specific column. This leads to a trivial case of rotational invariance. For r = 2, for instance, the eight signed permutations of the loading matrix Λ defined in (10) are depicted in (11). More formally, β is a signed permutation of Λ, iff

β = ΛP ± P ρ , (14)

where the permutation matrix P ρ corresponds to one of the r! permutations of the r columns of Λ and the reflection matrix P ± = Diag(±1, . . ., ±1) corresponds to one of the 2 r ways to switch the signs of the r columns of Λ. Often, it is convenient to employ identification rules that guarantee identification of Λ only up to such column and sign switching, see e.g. Conti et al. (2014). Any structure Λ obeying such an identification rule represents a whole equivalence class of matrices given by all 2 r r! signed permutations β = ΛP ± P ρ of Λ. This trivial form of rotational invariance does not impose any additional mathematical challenges and is often convenient from a computational viewpoint, in particular for Bayesian inference, see e.g. Conti et al. (2014) and Frühwirth-Schnatter et al. (2022).

It is easy to verify how identification up to trivial rotational invariance can be achieved for GLT structures, and this motivates the following definition of unordered GLT structures as loading matrices β where the pivot rows l 1 , . . ., l r simply occupy r different rows. In Definition 4, no order constraint is imposed on the pivot rows and no sign constraint is imposed on the leading factor loadings. This very general structure allows one to design highly efficient sampling schemes for sparse Bayesian factor analysis under GLT structures, see Frühwirth-Schnatter et al. (2022).

Definition 4 (Unordered GLT structures). An m × r factor loading matrix β with full column rank r has an unordered GLT structure if the pivot rows l 1 , . . ., l r of β are pairwise distinct.

Theorem 1 is easily extended to unordered GLT structures. Any signed permutation β = ΛP ρ P ± of Λ is uniquely identified from ββ ⊤ = ΛΛ ⊤ , provided that ΛΛ ⊤ is identified. Hence, under unordered GLT structures the factor loading matrix Λ is uniquely identified up to signed permutations. Full identification can easily be obtained from unordered GLT structures β. Any unordered GLT structure β has unordered pivot rows l 1 , . . ., l r , occupying different rows. The corresponding ordered GLT structure Λ is recovered from β by sorting the columns in ascending order according to the pivot rows. In other words, the pivot rows of Λ are equal to the order statistics l (1) , . . ., l (r) of the pivot rows l 1 , . . ., l r of β, see again Figure 1. This procedure resolves rotational invariance, since the pivot rows l 1 , . . ., l r in the unordered GLT structure are distinct. Furthermore, imposing the condition Λ l j ,j > 0 in each column j resolves sign switching: if Λ l j ,j < 0, then the sign of all factor loadings Λ ij in column j is switched.
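The sorting-and-sign-switching step just described is straightforward to implement. The following sketch, with a hypothetical unordered GLT matrix as input, sorts the columns by their pivot rows and flips the sign of any column whose leading loading is negative, leaving the cross-covariance matrix unchanged.

```python
import numpy as np

def pivot_rows(B, tol=1e-10):
    """Row index of the first non-negligible loading in each column."""
    return np.array([np.argmax(np.abs(B[:, j]) > tol) for j in range(B.shape[1])])

def to_ordered_glt(B):
    """Sort columns by pivot row (P_rho) and make each leading loading positive (P_+/-)."""
    piv = pivot_rows(B)
    order = np.argsort(piv)
    Lam = B[:, order].copy()
    piv = piv[order]
    for j in range(Lam.shape[1]):
        if Lam[piv[j], j] < 0:
            Lam[:, j] *= -1
    return Lam

# Hypothetical unordered GLT matrix with pivot rows (3, 1) and a negative leading loading.
B = np.array([[ 0.0,  0.7],
              [ 0.0,  0.5],
              [-0.8,  0.4],
              [ 0.6,  0.0],
              [ 0.5,  0.3]])
Lam = to_ordered_glt(B)
print(pivot_rows(Lam))                      # [0 2]: ordered pivot rows
print(np.allclose(Lam @ Lam.T, B @ B.T))    # True: same cross-covariance matrix
```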
Sparse GLT structures

In Definitions 3 and 4, "structural" zeros are introduced for a GLT structure for all factor loadings above the pivot row l j , while the factor loading Λ l j ,j in the pivot row is non-zero by definition. We call Λ a dense GLT structure if all loadings below the pivot row are unconstrained and can take any value in R. A sparse GLT structure results if factor loadings at unspecified places below the pivot rows are zero and only the remaining loadings are unconstrained. A sparse loading matrix Λ can be characterized by the so-called sparsity matrix, defined as a binary indicator matrix δ of 0/1s of the same size as Λ, where δ ij = I(Λ ij ≠ 0). Let δ Λ be the sparsity matrix of a GLT matrix Λ. The sparsity matrix δ corresponding to the signed permutation β = ΛP ρ P ± is equal to δ = δ Λ P ρ and is invariant to sign switching. Hence, for any sparse unordered GLT matrix β, the corresponding sparsity matrix δ obeys an unordered GLT structure with the same pivot rows as β, see Figure 1 for illustration.

In sparse factor analysis, single factor loadings take zero-values with positive probability and the corresponding sparsity matrix δ is a binary matrix that has to be identified from the data. Identification in sparse factor analysis has to provide conditions under which the entire 0/1 pattern in δ can be identified from the covariance matrix Ω if δ is unknown. Whether this is possible hinges on variance identification, i.e. whether the decomposition of Ω into ΛΛ ⊤ and Σ 0 is unique. How variance identification can be verified for (sparse) GLT structures is investigated in detail in Section 4. Let us assume at this point that variance identification holds, i.e. the cross-covariance matrix ΛΛ ⊤ is identified. Then an important step toward the identification of a sparse factor model is to verify whether the 0/1 pattern of Λ, characterized by δ, is uniquely identified from ΛΛ ⊤ . Very importantly, if Λ is assumed to be a GLT structure, then the entire GLT structure Λ and hence the indicator matrix δ is uniquely identified from ΛΛ ⊤ , as follows immediately from Theorem 1, since δ ij = 0 iff Λ ij = 0 for all i, j. By identifying the 0/1 pattern in δ we can uniquely identify the pivot rows of Λ and the sparsity pattern below.

We would like to emphasize that in sparse factor analysis with unconstrained loading matrices Λ this is not necessarily the case. The indicator matrix δ is in general not uniquely identified from ΛΛ ⊤ , because (non-trivial) rotations P change the zero pattern in β = ΛP, while ββ ⊤ = ΛΛ ⊤ . For illustration, let us return to the example in (5) where we showed that ΛΛ ⊤ is uniquely identified if the true sparsity matrix δ Λ is known. Now assume that δ Λ is unknown and allow the loading matrix β = ΛP to be any rotation of Λ. It is then evident that the corresponding sparsity matrix δ is not unique and two solutions exist. For all rotations β = ΛP αb with (α, b) ∈ {0, π/2, π, 3π/2} × {0, 1}, β corresponds to one of the eight signed permutations of Λ given in (11) and the sparsity matrix δ is equal to δ Λ up to this signed permutation. For all other rotations, all elements of β are different from zero and δ is simply a matrix of ones.
Rotation into GLT

As discussed above, GLT structures generalize the PLT constraint, but one might wonder how restrictive this structure still is. We will show in this section that for a basic factor model with unconstrained loading matrix β there exists an equivalent representation involving a unique GLT structure Λ which is related to β by an orthogonal transformation, provided that uniqueness of the variance decomposition holds.

The proof of this result uses a relationship between a matrix with GLT structure and the so-called reduced row echelon form in linear algebra that results from the Gauss-Jordan elimination for solving linear systems, see e.g. Anton and Rorres (2013). Any transposed GLT loading matrix Λ ⊤ has a row echelon form which can be turned into a reduced row echelon form (RREF) B = A ⊤ Λ ⊤ with the help of an r × r matrix A which is constructed from the pivot rows l 1 , . . ., l r of Λ and is invertible by definition. Since the RREF of any matrix is unique, see e.g. Yuster (1984), we find that the pivot columns of B coincide with the pivot rows l 1 , . . ., l r of Λ. Hence, for a basic factor model with an arbitrary, unstructured loading matrix β with full column rank r, we prove in Theorem 2 that the RREF of β ⊤ can be used to represent β as a unique GLT structure Λ, where the pivot rows l 1 , . . ., l r of Λ coincide with the pivot columns of the RREF of β ⊤ (see Appendix A for a proof).

Theorem 2 (Rotation into GLT). Let β be an arbitrary loading matrix with full column rank r. Then the following holds: (a) There exists an equivalent representation β = ΛG ⊤ of β involving a unique GLT structure Λ = βG, where G is a unique orthogonal matrix. Λ is called the GLT representation of β. (b) Let l 1 < . . . < l r be the pivot columns of the RREF B of β ⊤ and let β 1 be the r × r submatrix of β containing the corresponding rows l 1 , . . ., l r . The GLT representation Λ = βG of β has pivot rows l 1 , . . ., l r and is obtained through rotation into GLT with the rotation matrix G = Q which results from the QR decomposition β ⊤ 1 = QR, where Q is orthogonal and R is upper triangular with positive diagonal.

Would it be possible to obtain a similar result with the factor loading matrix Λ being constrained to be a PLT structure? The answer is definitely no, as has already been established in Section 2 for example (5). As mentioned above, GLT structures encompass PLT structures as a special case. Hence, if a PLT representation Λ exists for a loading matrix β = ΛP, then the GLT representation in (16) automatically reduces to the PLT structure Λ, since R = β ⊤ 1 is obtained from the first r rows of β and the "rotation into GLT" is equal to the identity, Q = I r . On the other hand, if the GLT representation Λ differs from a PLT structure, then no equivalent PLT representation exists. Hence, forcing a PLT structure in the representation (1) may introduce a systematic bias in estimating the marginal covariance matrix Ω.
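The construction in Theorem 2(b) can be sketched computationally as follows: detect the pivot rows (here via rank increments of the top submatrix, a stand-in for locating the pivot columns of the RREF of β ⊤ ), QR-decompose the square submatrix of pivot rows and rotate. This is an illustrative sketch under the stated assumptions, not the authors' reference implementation.

```python
import numpy as np

def rotate_into_glt(beta, tol=1e-10):
    """Rotate a full-column-rank loading matrix into a GLT representation."""
    m, r = beta.shape
    piv, rank = [], 0
    for i in range(m):                         # pivot rows = rows where the rank increases
        if np.linalg.matrix_rank(beta[:i + 1, :], tol=tol) > rank:
            piv.append(i)
            rank += 1
        if rank == r:
            break
    beta1 = beta[piv, :]                       # r x r submatrix of the pivot rows
    Q, R = np.linalg.qr(beta1.T)               # beta1^T = Q R
    Q = Q * np.sign(np.diag(R))                # enforce a positive diagonal of R
    return beta @ Q, piv                       # Lambda = beta G with G = Q

# Hypothetical loading matrix: a GLT matrix disguised by a rotation.
Lam0 = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.4], [0.0, 0.6], [0.0, 0.5]])
theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
beta = Lam0 @ P                                # no zero pattern visible any more

Lam, piv = rotate_into_glt(beta)
print(piv)                                      # [0, 2]: recovered pivot rows
print(np.allclose(Lam @ Lam.T, beta @ beta.T))  # True: same cross-covariance matrix
print(np.round(Lam, 3))                         # zeros above the pivots, positive leading loadings
```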
Variance identification and GLT structures As mentioned in the previous sections, constraints imposed on the structure of a factor loading matrix Λ will resolve rotational invariance only if uniqueness of the variance decomposition holds and the crosscovariance matrix ΛΛ ⊤ is identified.However, rotational constraints alone do not necessarily guarantee uniqueness of the variance decomposition.Consider, for instance, a sparse PLT loading matrix where in some column j in addition to the diagonal element Λ jj (which is nonzero by definition) only a single further factor loading Λ n j ,j in some row n j > j is nonzero.Such a loading matrix obviously violates the necessary condition for variance identification that each column contains at least three nonzero elements.Similarly, while GLT structures resolve rotational invariance, they do not guarantee uniqueness of the variance decomposition either. In Section 4.1, we derive sufficient conditions for variance identification of GLT structures based on the 3579 counting rule of Sato (1992, Theorem 3.3).In Section 4.2, we discuss how to verify variance identification for sparse GLT structures in practice. Counting rules for variance identification We will show how to verify from the 0/1 pattern δ of an unordered GLT structure β, whether the row deletion property AR holds for β and all its signed permutations.Our condition is a structural counting rule expressed solely in terms of the sparsity matrix δ underlying β and does not involve the values of the unconstrained factor loadings in β, which can take any value in R. For any factor model, variance identification is invariant to signed permutations.If we can verify variance identification for a single signed permutation β = ΛP ± P ρ of Λ, as defined in ( 14), then variance identification of Λ holds, since β and Λ imply the same cross-covariance matrix ΛΛ ⊤ .Hence, we focus in this section on variance identification of unordered GLT structures. In Definition 5, we recall the so-called extended row deletion property, introduced by Tumura and Sato (1980). Definition 5 (Extended row deletion property RD(r, s)).A m × r factor loading matrix β satisfies the row-deletion property RD(r, s), if the following condition is satisfied: whenever s ∈ N 0 rows are deleted from β, then two disjoint submatrices of rank r remain. The row-deletion property of Anderson and Rubin (1956) results as a special case where s = 1.As will be shown in Section 5, the extended row deletion properties RD(r, s) for s > 1 are useful in exploratory factor analysis, when the factor dimension r is unknown.In Definition 6, we introduce a counting rule for binary matrices. Note that the counting rule CR(r, s), like the extended row deletion property RD(r, s), is invariant to signed permutations.Lemma 8 in Appendix A summarizes further useful properties of CR(r, s). For a given binary matrix δ of dimension m × r, let Θ δ be the space generated by the non-zero elements of all unordered GLT structure β with sparsity matrix δ and all their 2 r r! − 1 trivial rotations βP ± P ρ .We prove in Theorem 3 that for GLT structures the counting rule CR(r, s) and the extended row deletion property RD(r, s) are equivalent conditions for all loading matrices in Θ δ , except for a set of measure 0. Theorem 3. Let δ be a binary m × r matrix with unordered GLT structure.Then the following holds: (a) If δ violates the counting rule CR(r, s), then the extended row deletion property RD(r, s) is violated for all β ∈ Θ δ generated by δ. 
(b) If δ satisfies the counting rule CR(r, s), then the extended row deletion property RD(r, s) holds for all β ∈ Θ δ except for a set of measure 0. See Appendix A for a proof.The special case s = 1 is relevant for verifying the row deletion property AR.It proves that for unordered GLT structures the 3579 counting rule of Sato (1992) is not only a necessary, but also a sufficient condition for AR to hold.In addition, this means that the counting rule needs to be verified only for the sparsity matrix δ of a single trivial rotation β = ΛP ± P ρ rather than for every nonsingular matrix G.This result is summarized in Corollary 4. Corollary 4 (Variance identification rule for GLT structures).For any unordered m × r GLT structure β, the following holds: (a) If δ satisfies the 3579 counting rule, i.e. every column of δ has at least 3 non-zero elements, every pair of columns at least 5 and, more generally, every possible combination of q = 3, . . ., r columns has at least 2q + 1 non-zero elements, then variance identification is given for all β ∈ Θ δ except for a set of measure 0; i.e. for any other factor decomposition of the marginal covariance matrix (c) For r = 1, r = 2, and r = 3, condition CR(r, 1) is both sufficient and necessary for variance identification. A few comments are in order.If δ satisfies CR(r, 1), then AR holds for all β ∈ Θ δ and a sufficient condition for variance identification is satisfied.As shown by Anderson and Rubin (1956), AR is a necessary condition for variance identification only for r = 1 and r = 2. Tumura and Sato (1980, Theorem 3) show the same for r = 3, provided that m ≥ 7. It follows that CR(r, 1) is a necessary and sufficient condition for variance identification for the models summarized in (c).In all other cases, variance identification may hold for loading matrices β ∈ Θ δ , even if δ violates CR(r, 1). The definition of unordered GLT structures given in Section 3 imposes no constraint on the pivot rows l 1 , . . ., l r beyond the assumption that they are distinct.This flexibility can lead to GLT structures that can never satisfy the 3579 rule, even if all elements below the pivot rows are non-zero.Consider, for instance, a GLT matrix with the pivot row in column r being equal to l r = m − 1.The loading matrix has at most two nonzero elements in column r and violates the necessary condition for variance identification.This example shows that there is an upper bound for the pivot elements beyond which the 3579 rule can never hold.This insight is formalized in Definition 7. Definition 7.An unordered GLT structure β fulfills condition GLT-AR if the following constraint on the pivot rows l 1 , . . ., l r of β is satisfied, where z j is the rank of l j in the ordered sequence l (1) < . . .< l (r) : Evidently, an ordered GLT structure Λ fulfills condition GLT-AR if the pivot rows l 1 , . . ., l r of Λ satisfy the constraint l j ≤ m − 2(r − j + 1).For the special case of a PLT structure where l j = j, this constraint reduces to m ≥ 2r + 1 which is equivalent to a well-known upper bound for the number of factors.For dense unordered GLT structures with m (non-zero) rows, condition GLT-AR is a sufficient condition for AR.For sparse GLT structures GLT-AR is only a necessary condition for AR and the 3579 rule has to be verified explicitly, as shown by the example discussed above.Very conveniently for verifying variance identification in sparse factor analysis based on GLT structures, Theorem 3 and Corollary 4 operate solely on the sparsity matrix δ corresponding to β. 
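The counting rules above operate only on the sparsity matrix δ, so they are easy to check by brute force for moderate r, as sketched below. The sketch assumes the reading that every subset of q columns must contain at least 2q + s rows with a non-zero entry (the 3579 rule for s = 1); for large r, see the more scalable strategy discussed in the next subsection.

```python
import numpy as np
from itertools import combinations

def satisfies_counting_rule(delta, s=1):
    """Brute-force check of CR(r, s): every subset of q columns of the binary
    sparsity matrix delta must contain at least 2q + s non-zero rows."""
    delta = np.asarray(delta, dtype=bool)
    r = delta.shape[1]
    for q in range(1, r + 1):
        for cols in combinations(range(r), q):
            nonzero_rows = np.count_nonzero(delta[:, cols].any(axis=1))
            if nonzero_rows < 2 * q + s:
                return False
    return True

# Hypothetical sparsity patterns with m = 7 and r = 2.
delta_ok  = np.array([[1,0],[1,0],[1,0],[0,1],[0,1],[0,1],[0,1]])
delta_bad = np.array([[1,0],[1,0],[1,0],[1,0],[1,0],[0,1],[0,1]])  # column 2: only 2 non-zeros
print(satisfies_counting_rule(delta_ok))   # True
print(satisfies_counting_rule(delta_bad))  # False
```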
Variance identification in practice To verify CR(r, s) in practice, all submatrices of q columns have to be extracted from the sparsity matrix δ to verify if at least 2q + 1 rows of this submatrix are non-zero.For q = 1, 2, r − 1, r, this condition is easily verified from simple functionals of δ, see Corollary 5 which follows immediately from Theorem 3 (see Appendix A for details). Corollary 5 (Simple counting rules for CR(r, s)).Let δ be a m × r unordered GLT sparsity matrix. The following conditions on δ are necessary for CR(r, s) to hold: where the indicator function I(δ ⋆ > 0) is applied element-wise and 1 n×k denotes a n × k matrix of ones.For r ≤ 4, these conditions are also sufficient for CR(r, s) to hold for δ. Using Corollary 5 for s = 1, one can efficiently verify, if the 3579 counting rule and hence the row deletion property AR holds for unordered GLT factor models with up to r ≤ 4 factors.For models with more than four factors (r > 4), a more elaborated strategy is needed.After checking the conditions of Corollary 5, CR(r, s) could be verified for a given binary matrix δ by iterating over all remaining r!/(q!(r − q)!) subsets of q = 3, . . ., r − 2 columns of δ.While this is a finite task, such a naïve approach may need to visit 2 r − 1 matrices in order to make a decision and the combinatorial explosion quickly becomes an issue in practice as r increases.Recent work by Hosszejni and Frühwirth-Schnatter (2022) establishes the applicability of this framework for large models. Identification in exploratory factor analysis In this section, we discuss how the concept of GLT structures is helpful for addressing identification problems in exploratory factor analysis (EFA).Consider data {y 1 , . . ., y T } from a multivariate Gaussian distribution, y t ∼ N m (0, Ω), where an investigator wants to perform factor analysis since she expects that the covariances of the measurements y it are driven by common factors.In practice, the number of factors is typically unknown and often it is not obvious, whether all m measurements in y t are actually correlated.It is then common to employ EFA by fitting a basic factor model to the entire collection of measurements in y t , i.e. assuming the model with an assumed number of factors k, a m × k loading matrix β k with elements β ij and a diagonal matrix Σ k with strictly positive entries.The EFA model ( 21) is potentially overfitting in two ways.First, the true number of factors r is possibly smaller than k, i.e. β k has too many columns.Second, some measurements in y t are possibly irrelevant, which means that β k allows for too many non-zero rows. The goal is then to determine the true number of factors and to identify irrelevant measurements from the EFA model ( 21). We will address identification under the assumption that the data are generated by a basic factor model with loading matrix β 0 with r factors which implies the following covariance matrix Ω: Instead of ( 22), for a given k, the EFA model ( 21) yields the alternative representation of Ω: The question is then under which conditions can the true loading matrix β 0 be recovered from (23).Let us assume for the moment that no constraint that resolves rotational invariance is imposed on β 0 or β k . "Revealing the truth" in an overfitting EFA model.A fundamental problem in factor analysis is the following.If the EFA model is overfitting, i.e. 
k > r, could we nevertheless recover the true loading matrix β 0 directly from β k ? We will show how this can be achieved mathematically by combining the important work by Tumura and Sato (1980) with the framework of GLT structures. We have demonstrated in Section 2 using example (13) that solutions in an overfitting model can be constructed by adding spurious columns (Reiersøl, 1950; Geweke and Singleton, 1980). Additional solutions are obtained as rotations of such solutions. For instance, one of the following two solutions may result, both with the same Σ 3 as in (13). The first case is a signed permutation of β 3 , while the second case combines a signed permutation of β 3 with a rotation of the spurious column and Λ's first column involving P αb . In the first case, despite the rotation, both the spurious column and the columns of Λ are clearly visible, while in the second case the presence of a spurious column is by no means obvious and the columns of Λ are disguised.

In general, for an EFA model that is overfitting by a single column, i.e. k = r + 1, and β k is left unconstrained, infinitely many representations (β k , Σ k ) with covariance matrix Ω = β k β ⊤ k + Σ k can be constructed in the following way. Let the first r columns of β k be equal to β 0 and append an extra column to its right. In this extra column, which will be called a spurious column, add a single non-zero loading β l k ,k in any row 1 ≤ l k ≤ m taking any value that satisfies 0 < β 2 l k ,k < σ 2 l k ; then reduce the idiosyncratic variance in row l k to σ 2 l k − β 2 l k ,k ; and finally apply an arbitrary rotation P, which yields the representation (24). Interesting questions are then the following: under which conditions is (24) an exhaustive representation of all possible solutions β k in an EFA model where the degree of overfitting defined as s = k − r is equal to one? How can all solutions β k be represented if s > 1?

Such identifiability problems in overfitting EFA models have been analyzed in depth by Tumura and Sato (1980). They show that a stronger condition than RD(r, 1) is needed for β 0 in the underlying variance decomposition (22) to ensure that only spurious and no additional common factors are added in the overfitting representation (23). In addition, Tumura and Sato (1980) provide a general representation of the factor loading matrix β k in the overfitting representation (23) with k > r.

Theorem 6. (Tumura and Sato, 1980, Theorem 1) Suppose that Ω has a decomposition as in (22) with r factors and that for some S ∈ N with m ≥ 2r + S + 1 the extended row deletion property RD(r, 1 + S) holds for β 0 . If Ω has another decomposition such that Ω = β k β ⊤ k + Σ k where β k is an m × (r + s) matrix of rank k = r + s with 1 ≤ s ≤ S, then there exists an orthogonal matrix T k of rank k such that

β k T k = (β 0 M s ), (25)

where the off-diagonal elements of M s M ⊤ s are zero.
The m × s matrix M s is a so-called spurious factor loading matrix that does not contribute to explaining the covariance in y t , since M s M ⊤ s is diagonal and only adds to the idiosyncratic variances. While this theorem is an important result, without imposing further structure on the factor loading matrix β k in the EFA model it cannot be applied immediately to "recover the truth", as the separation of β k into the true factor loading matrix β 0 and the spurious factor loading matrix M s is possible only up to a rotation T k of β k . However, the "truth" in an overfitting EFA model can be recovered, if Tumura and Sato (1980, Theorem 1) is applied within the class of unordered GLT structures introduced in this paper. If we assume that Λ is a GLT structure which satisfies the extended row deletion property RD(r, 1 + S), we prove in Theorem 7 the following result. If β k in an overfitting EFA model is an unordered GLT structure, then β k has a representation of the form (25), where the rotation in (25) is a signed permutation T k = P ± P ρ . Hence, spurious factors in β k are easily spotted and Λ can be recovered immediately from β k .

Definition 8 (Unordered spurious GLT structure). An m × s unordered GLT factor loading matrix M Λ s with pivot rows {n 1 , . . ., n s } is an unordered spurious GLT structure if all columns are spurious columns with a single nonzero loading in the corresponding pivot row.

Theorem 7. Let Λ be an m × r GLT factor loading matrix with pivot rows l 1 < . . . < l r which obeys the extended row deletion property RD(r, 1 + S) for some S ∈ N. Assume that the m × k matrix β k in the EFA variance decomposition (23) is an unordered GLT matrix; then (25) reduces to a signed permutation of (Λ M Λ s ), where M Λ s is a spurious ordered GLT structure with pivot rows n 1 < . . . < n s which are distinct from the r pivot rows in Λ. Hence, r columns of β k are a signed permutation of the true loading matrix Λ, while the remaining s columns of β k are an unordered spurious GLT structure with pivots n 1 , . . ., n s .

See Appendix A for a proof.

Identifying irrelevant variables. In applied factor analysis, the assumption that each measurement y it is correlated with at least one other measurement is too restrictive, because irrelevant measurements might be present that are uncorrelated with all the other measurements. As argued by Boivin and Ng (2006), it is useful to identify such variables. Within the framework of sparse factor analysis, irrelevant variables are identified in Kaufmann and Schuhmacher (2017) by exploring the sparsity matrix δ of a factor loading matrix β 0 with respect to zero rows. Since Cov(y it , y lt ) = 0 for all l ≠ i, if the entire ith row of β 0 is zero (see also (3)), the presence of m 0 irrelevant measurements causes the corresponding m 0 rows of β 0 and δ to be zero. As before, we assume that the variance decomposition (22) of the underlying basic factor model is variance identified.

Let us first investigate identification of the zero rows in β 0 and the corresponding sparsity matrix δ for the case that the assumed and the true number of factors in the EFA model (21) are identical, i.e. k = r. Since variance identification of (22) in the underlying model holds, we obtain that Σ 0 = Σ r , β 0 β ⊤ 0 = β r β ⊤ r and β r = β 0 P is a rotation of β 0 . Therefore, the positions of the zero rows both in β 0 and β r are identical and all irrelevant variables can be identified from β r or the corresponding sparsity matrix δ, regardless of the strategy toward rotational invariance.
What makes this task challenging in applied factor analysis is that in practice only the total number m of measurements is known, whereas the investigator is ignorant both about the number of factors r and the number of irrelevant measurements m 0 . In such a situation, variance identification of Σ k for an EFA model with k assumed factors is easily lost if too many irrelevant variables are included in relation to k. These considerations have important implications for exploratory factor analysis. While the investigator can choose k, she is ignorant about the number of irrelevant variables and the recovered model might not be variance identified. For this reason, it is relevant to verify in any case that the solution β k obtained from any EFA model satisfies variance identification.

Under AR this means that the loading matrix of the correlated measurements, i.e. the non-zero rows of β 0 , satisfies RD(r, 1). If variance identification relies on AR, then a minimum requirement for β k to satisfy RD(k, 1) is that 2k + 1 ≤ m − m 0 . If no irrelevant measurements are present, then the well-known upper bound k ≤ (m − 1)/2 results. However, if irrelevant measurements are present, then there is a trade-off between m 0 and k: the more irrelevant measurements are included, the smaller the maximum number of assumed factors k has to be. Hence, the presence of m 0 zero rows in β 0 , while β k in the EFA model is allowed to have m potentially non-zero rows, requires stronger conditions for variance identification than for an EFA model where the underlying loading matrix β 0 contains only non-zero rows. More specifically, for a given number m 0 ∈ N of irrelevant measurements, variance identification necessitates the more stringent upper bound k ≤ (m − m 0 − 1)/2, where m − m 0 is the number of non-zero rows. On the other hand, for a given number of factors k in an EFA model, the maximum number of irrelevant measurements that can be included is given by m 0 ≤ m − (2k + 1).

Identifying the number of factors through an EFA model. Let us assume that the variance decomposition (22) of the unknown underlying basic factor model is identified. As shown by Reiersøl (1950), the true number of factors r is equal to the smallest value k that satisfies (23). However, in practice, it is not obvious how to solve this "minimization" problem. As the following considerations show, verifying variance identification for β k in an EFA model can be helpful in this regard.

If r is unknown, then we need to find a decomposition of Ω as in (23) where Σ k is variance identified. Since the true underlying decomposition (22) is variance identified, any solution where Σ k is not variance identified can be rejected. As has been discussed above, any overfitting EFA model, where k > r, has infinitely many decompositions of Ω and therefore is never variance identified. Hence, if any solution Σ k of an EFA model with k assumed factors is not variance identified, then we can deduce that k is bigger than r. On the other hand, if variance identification holds for Σ k , then the decompositions (22) and (23) are equivalent and we can conclude that r = k, Σ 0 = Σ k and therefore β 0 β ⊤ 0 = β k β ⊤ k . As a consequence, we can identify the true loading matrix β 0 = β k P from β k mathematically up to a rotation P (Anderson and Rubin, 1956, Lemma 5.1).
This insight shows that verifying variance identification is relevant beyond resolving rotational invariance and is essential for recovering the true number of factors. This has important implications for applied factor analysis. Most importantly, the rank or the number of non-zero columns of a factor loading matrix β k recovered from an EFA model with assumed number k of factors might overfit the true number of factors r, if variance identification for Σ k is not satisfied and the variance decomposition is not unique. Hence, extracting the number of factors from an EFA model only makes sense in connection with ensuring that variance identification holds.

Sparse Bayesian factor analysis

A common goal of Bayesian factor analysis is to identify the unknown factor dimension r of a factor loading matrix from the overfitting factor model (21) with potentially k > r factors, see, among many others, Ročková and George (2017), Frühwirth-Schnatter and Lopes (2018), and Ohn and Kim (2022). Often, spike-and-slab priors are employed, where the elements β ij of the loading matrix β k are allowed a priori to be exactly zero with positive probability. This is achieved through a prior on the corresponding m × k sparsity matrix δ k . In each column j, the indicators δ ij are active a priori with a column-specific probability τ j , i.e. Pr(δ ij = 1|τ j ) = τ j for i = 1, . . ., m, where the slab probabilities τ 1 , . . ., τ k arise from an exchangeable shrinkage prior (26). If γ is unknown, then (26) is called a two-parameter-beta (2PB) prior. If γ = 1, then (26) is called a one-parameter-beta (1PB) prior and takes the form given in (27). Prior (27) converges to the Indian buffet process prior (Teh et al., 2007) for k → ∞. As recently shown by Frühwirth-Schnatter (2022), prior (27) has a representation as a cumulative shrinkage process (CUSP) prior (Legramanti et al., 2020).

This specification leads to a Dirac-spike-and-slab prior for the factor loadings, where the columns of the loading matrix are increasingly pulled toward 0 as the column index increases. In (28), a Gaussian slab distribution is assumed with a random global shrinkage parameter κ, although other slab distributions are possible, see e.g. Zhao et al. (2016) and Frühwirth-Schnatter et al. (2022).

The hyperparameters α and γ are instrumental in controlling prior sparsity. Choosing α = k and γ = 1 leads to a uniform distribution for τ j , with the smallest slab probability τ (1) = min j=1,...,k τ j also being uniform, while the largest slab probability τ (k) = max j=1,...,k τ j ∼ B (k, 1), see Frühwirth-Schnatter (2022). Such a prior is likely to overfit the number of factors, regardless of all other assumptions. A prior with α < k and γ = 1 induces sparsity, since the largest slab probability τ (k) ∼ B (α, 1), while the smallest slab probability τ (1) ∼ B (α/k, 1). To control the small probabilities, which are important in identifying the true number of factors, α is assumed to be a random parameter and learnt from the data under the prior α ∼ G (a α , b α ). γ controls the prior information in (26). Priors with γ > 1 and γ < 1, respectively, decrease and increase the difference between τ (1) and τ (k) . Typically, γ is unknown and is estimated from the data using the prior γ ∼ G (a γ , b γ ).

MCMC estimation. For a given choice of hyperparameters, Markov chain Monte Carlo (MCMC) methods are applied to sample from the posterior distribution p(β k , Σ k , δ k |y), given T multivariate observations y = (y 1 , . . ., y T ),
see e.g. Kaufmann and Schuhmacher (2019) among many others. In Frühwirth-Schnatter et al. (2022), such a sampler is developed for GLT factor models. To move between factor models of different factor dimension, Frühwirth-Schnatter et al. (2022) exploit Theorem 7 to add and delete spurious columns through a reversible jump MCMC (RJMCMC) sampler. For each posterior draw β k , the active columns β r (i.e. all columns with at least 2 non-zero elements) and the corresponding sparsity matrix δ r are determined. If δ r satisfies the counting rule CR(r, 1), then β r is a signed permutation of Λ with the corresponding covariance matrix Σ r = Σ k + M Λ s (M Λ s ) ⊤ , where M Λ s contains the spurious columns of β k . These variance identified draws are kept for further inference and the number of columns of β r is considered a posterior draw of the unknown factor dimension r. This algorithm is easily extended to EFA models without any constraints.

An illustrative simulation study

For illustration, we perform a simulation study and consider three different data scenarios with m = 30 and T = 150. In all three scenarios, r true = 5 factors are assumed; however, the zero/non-zero pattern is quite different. The first setting is a dedicated factor model, where the first 6 variables load on factor 1, the next 6 variables load on factor 2, and so forth, and the final 6 variables load on factor 5. A dedicated factor model has a GLT structure by definition. The second scenario is a block factor model, where the first 15 observations load only on factors 1 and 2, while the remaining 15 observations load only on factors 3, 4 and 5, and the covariance matrix has a block-diagonal structure. All loadings within a block are non-zero. The third scenario is a dense factor loading matrix without any zero loadings and the corresponding GLT representation has a PLT structure. For all three scenarios, non-zero factor loadings are drawn as λ ij = (−1) b ij (1 + 0.1N (0, 1)), where the exponent b ij is a binary variable with Pr(b ij = 1) = 0.2. In all three scenarios, Σ 0 = I. 21 data sets are sampled under these three scenarios from the Gaussian factor model (1).

Table 1: Sparse Bayesian factor analysis under GLT and unconstrained structures (EFA) under a 1PB prior (α ∼ G (6, 2)) and a 2PB prior (α ∼ G (6, 2), γ ∼ G (6, 6)). GLT and EFA-V use only the variance identified draws (M V is the percentage of variance identified draws), EFA uses all posterior draws. Med is the median and QR are the 5% and the 95% quantile of the various statistics over the 21 simulated data sets.

A sparse overfitting factor model is fitted to each simulated data set with the maximum number of factors k = 14 being equal to the upper bound. Regarding the structure, we compare a model where the non-zero columns of β k are left unconstrained with a model where a GLT structure is imposed. Inference is based on the Bayesian approach described in Section 6.1 with two different shrinkage priors on the sparsity matrix δ k : the 1PB prior (27) with random hyperparameter α ∼ G (6, 2) and the 2PB prior (26) with random hyperparameters α ∼ G (6, 2) and γ ∼ G (6, 6). MCMC estimation is run for 3000 iterations after a burn-in of 2000 using the RJMCMC algorithm of Frühwirth-Schnatter et al. (2022).
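The post-processing of posterior draws described above (extracting active columns, checking CR(r, 1), absorbing spurious columns into the idiosyncratic covariance) can be sketched as follows. The draw format (lists of m × k loading matrices and m × m diagonal matrices) is an assumption for illustration, and the helper satisfies_counting_rule from the earlier sketch is reused; this is not the authors' implementation.

```python
import numpy as np
from collections import Counter

def postprocess_draws(beta_draws, sigma_draws, tol=1e-10):
    """Keep variance-identified draws and record the implied factor dimension r."""
    kept, r_draws = [], []
    for beta, Sigma in zip(beta_draws, sigma_draws):
        active = (np.abs(beta) > tol).sum(axis=0) >= 2   # spurious columns have one non-zero loading
        beta_r = beta[:, active]
        spurious = beta[:, ~active]
        delta_r = (np.abs(beta_r) > tol).astype(int)
        if satisfies_counting_rule(delta_r, s=1):        # helper from the earlier sketch
            Sigma_r = Sigma + spurious @ spurious.T      # absorb spurious columns (diagonal addition)
            kept.append((beta_r, Sigma_r))
            r_draws.append(beta_r.shape[1])
    r_mode = Counter(r_draws).most_common(1)[0][0] if r_draws else None
    return kept, r_draws, r_mode
```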
For each of the 21 simulated data sets, we evaluate all 12 combinations of data scenarios, structural constraints (GLT versus unconstrained) and priors on the sparsity matrix (1PB versus 2PB) through Monte Carlo estimates of the following statistics: to assess the performance in estimating the true number r true of factors, we consider the mode of the posterior distribution p(r|y) and the magnitude of the posterior ordinate p(r = r true |y). To assess the accuracy in estimating the covariance matrix Ω of the data, we consider the mean squared error (MSE) defined by MSE Ω = ( Σ i Σ ℓ≤i E[(Ω r,iℓ − Ω iℓ ) 2 |y] ) / (m(m + 1)/2), which accounts both for posterior variance and bias of the estimated covariance matrix Ω r = β r β ⊤ r + Σ r in comparison to the true matrix. Table 1 reports, for all 12 combinations, the median, the 5% and the 95% quantile of these statistics across all simulated data sets. For inference under GLT structures, posterior draws which are not variance identified have been removed. The fraction of variance identified draws is also reported in the table and is in general pretty high. As is common for sparse Bayesian factor analysis with unstructured loading matrices, the posterior draws are not screened for variance identification and inference is based on all draws. Some interesting conclusions can be drawn from Table 1. First of all, sparse Bayesian factor analysis under the GLT constraint successfully recovers the true number of factors in all three scenarios. For most of the simulated data sets, the posterior ordinate p(r = r true |y) is larger than 0.9. Sparse Bayesian factor analysis with unstructured loading matrices is also quite successful in recovering r true , but with less confidence. Both over- and underfitting can be observed and the posterior ordinate p(r = r true |y) is much smaller than under a GLT structure. For both structures, the 2PB prior yields higher posterior ordinates than the 1PB prior.

Recently, Hosszejni and Frühwirth-Schnatter (2022) proved that the counting rule CR(r, 1) can also be applied to verify variance identification for unconstrained loading matrices. As is evident from Table 1, the fraction of variance identified draws is, however, much smaller than under GLT structures. Nevertheless, inference w.r.t. the number of factors can be improved also for an unconstrained EFA model by rejecting all draws that do not obey the counting rule CR(r, 1).

It should be emphasized that the ability of Bayesian factor analysis to recover the number of factors from an overfitting model is closely tied to choosing a suitable shrinkage prior on the sparsity matrix δ k . For illustration, we also consider a uniform prior for τ j and report the corresponding statistics in Table 2. As expected from the considerations in Section 6.1, considerable overfitting is observed for all simulated data sets, regardless of the chosen structure.

Concluding remarks

We have given a full and comprehensive mathematical treatment of generalized lower triangular (GLT) structures, a new identification strategy that improves on the popular positive lower triangular (PLT) assumption for factor loading matrices. We have proven that GLT retains PLT's good properties: uniqueness and rotational invariance. At the same time and unlike PLT, GLT exists for any factor loading matrix; i.e.
it is not a restrictive assumption.Furthermore, we have shown that verifying variance identification under GLT structures is simple and is based purely on the zero-nonzero pattern of the factor loadings matrix.Additionally, we have embedded the GLT model class into exploratory factor analysis with unknown factor dimension and discussed how easily spurious factors and irrelevant variables are recognized in that setup.At the end, we demonstrated the power of the framework in a simulation study. Table 2 : Bayesian factor analysis under GLT and unconstrained structures (EFA) under a uniform prior on τ j .GLT and EFA-V use only the variance identified draws (M V is the percentage of variance identified draws), EFA uses all posterior draws.Med is the median and QR are the 5% and the 95% quantile of the various statistics over the 21 simulated data sets.
Generalized Lennard-Jones Potentials, SUSYQM and Differential Galois Theory In this paper we start with proving that the Schr\"odinger equation (SE) with the classical $12-6$ Lennard-Jones (L-J) potential is nonintegrable in the sense of the differential Galois theory (DGT), for any value of energy; i.e., there are no solutions in closed form for such differential equation. We study the $10-6$ potential through DGT and SUSYQM; being it one of the two partner potentials built with a superpotential of the form $w(r)\propto 1/r^5$. We also find that it is integrable in the sense of DGT for zero energy. A first analysis of the applicability and physical consequences of the model is carried out in terms of the so called De Boer principle of corresponding states. A comparison of the second virial coefficient $B(T)$ for both potentials shows a good agreement for low temperatures. As a consequence of these results we propose the $10-6$ potential as an integrable alternative to be applied in further studies instead of the original $12-6$ L-J potential. Finally we study through DGT and SUSYQM the integrability of the SE with a generalized $(2\nu-2)-\nu$ L-J potential. This analysis do not include the study of square integrable wave functions, excited states and energies different than zero for the generalization of L-J potentials. Introduction The Lennard-Jones potential (L-J) was proposed in 1931 in order to model the concurrence between the long-range attraction and the short-range repulsion in radial interatomic interactions [34]. In a later work, the description of such potential was employed in order to describe the equation of state of a gas in terms of its interatomic forces [35], thus concluding and enhancing an investigation started by Mie in 1903 [38]. The L-J potential is usually used, at the level of classical statistical mechanics, to study the behavior of fluid materials, ranging from simple molecules to polymers and proteins [24,31,37]. In theoretical quantum chemistry, among many applications, we point out: its implementation in the theory of molecular orbitals, allowing to compute the tendency of two electrons in the same space orbital to keep each other apart because of the repulsive field between them [26]; the numerical implementations in order to compute the transferable inter-molecular potential functions (TIPS) in alcohols, ethers and water, that have given an understanding of the interactions of these chemical compounds in solvents [27]. Also a mathematical model that has been proposed for calculating the isosteric heat of adsorption of simple fluids onto flat surfaces. On this respect, theoretical and experimental results were compared in order to study the influence of the choice of the intermolecular potential parameters [41]. Finally, a experimental methodology and theoretical calculations applying the Lennard-Jones potential, for determining micropore-size distributions, obtained from physical adsorption isotherm data, have provided valuable microstructural information, which is still widely used today [25,42,48]. With the increase of numerical techniques, calculations with explicit solutions in physical models don't have in the present the same importance as in past decades. Nevertheless exact solutions when available, have always served as elucidating tools for finding general properties of the system, which otherwise could remain hidden. 
The main motivation of this paper is the application of supersymmetric quantum mechanics (SUSYQM) and differential Galois theory (DGT) to obtain explicit solutions of the Schrödinger equation (SE) with variants of the Lennard-Jones potential, as well as the set of eigenvalues associated to each solution. SUSYQM, introduced by E. Witten in 1981, is the simplest example where supersymmetry can be dynamically broken [51]. In spite of its initial character of a toy model; SUSYQM has earned importance in the recent decades, because it served as a starting point to the development of attractive theoretical features and concepts like shape invariance, isospectrality and factorization, that give new perspectives to old problems in quantum mechanics, like the integrability of the SE, see for example [1,21,17] and the path integral formulation of classical mechanics [22]. On the other hand, there is plenty of papers in mathematical physics wherein DGT has been applied; see for example [2,4,5,6,7,8] for applications to study the non-integrability of Hamiltonian systems. For applications in the integrability of the SE, see [1,3,9,11,12,13,14]. For applications of differential Galois theory to other quantum integrable systems see [15,46]. The main Galoisian tools used in some of these papers are the Hamiltonian algebrization and the Kovacic's algorithm. These tools have led and still lead, to deduce exact solutions in several areas of mathematical physics. The structure of this paper is as follows. Section 2 is devoted to the theoretical background necessary to understand the rest of the paper. It summarizes topics such as the Schrödinger equation for central potentials, Lennard-Jones potentials 12−6, 10−6 and (2ν−2)−ν, SUSYQM, the De Boer principle of corresponding states, the virial equation and DGT. In Section 3 we study the integrability of the SE with the usual 12 − 6 Lennard-Jones potential, as well as the alternative versions 10 − 6 and (2ν − 2) − ν. Our contributions consist in the deduction of algebraic and physical conditions over the parameters of such SE's to get their integrability in the sense of DGT and the superpotentials in the integrable cases. A first study of physical consequences will also be detailed in this section. In Section 4 some remarks concerning future works are established. Preliminaries The Schrödinger equation for a central potential We are interested in studying a physical model for a many-body system where the main contribution of the interaction of its constituents is pairwise and radial in nature. In addition, the physical conditions of the system (temperature, density, etc) are such, that its quantum behavior is non-negligible. In this section we set shortly the theoretical background, in order to establish our physical model with a central potential, and also the notation to be applied in the rest of the paper [16]. The Hamiltonian for a system of two spinless particles with masses m 1 and m 2 interacting via a radial potential V (| r 1 − r 2 |) is given by It is an usual subject of textbooks in classical mechanics to show that (2.1) can be separated into two parts, one related to the motion of the center of mass R of the system and the other related to the relative motion of the particles. The new coordinate system is given by the following transformation rules where M is the total mass of the system, µ is called the reduced mass. 
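The displayed transformation rules referred to in the preceding sentence did not survive text extraction. For reference, the standard textbook relations they presumably correspond to (an assumption, since only the prose survives) are:

```latex
% Standard two-body reduction (assumed convention; the paper's displayed
% equations were lost in extraction):
\begin{align*}
  \vec{r} &= \vec{r}_1 - \vec{r}_2, &
  \vec{R} &= \frac{m_1 \vec{r}_1 + m_2 \vec{r}_2}{m_1 + m_2}, \\
  M &= m_1 + m_2, &
  \mu &= \frac{m_1 m_2}{m_1 + m_2}.
\end{align*}
```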
The Hamiltonian in the new coordinates takes the form where p r and p R are the canonical momenta conjugated respectively to the coordinates r = | r 1 − r 2 | and R = | R|. Since we are not dealing with external forces, the motion of the center of mass is uniform rectilinear. For several analysis it is suitable to work in a frame at rest with the center of mass, which is still an inertial reference frame, in that case the Hamiltonian (2.2) is reduced to The Hamiltonian in (2.3) represents the energy of the relative motion of the two particles; it describes the motion of a fictitious particle, the relative particle with a mass given by the reduced mass µ and a position and momentum given by the relative coordinates r and p r . The quantum mechanical model of our interest is based on this Hamiltonian. The usual rules of quantization in the position representation lead to the time-independent Schrödinger equation for our two-particles system Since V (r) is a rational central potential, the eigenfunctions Ψ( r) are separable into radial and angular parts, the last one given by the spherical harmonics The differential equation of our interest corresponds to the radial part of (2.4) as follows where l and m are the usual quantum numbers for angular momentum; k represents the different values of energy for fixed l, and it can be either discrete or continuous. Defining the effective radial potential as V eff (r) ≡ l(l + 1)/ 2µr 2 + V (r) and leaving the second derivative in r on one side, we have 2µ 2 V eff (r) − E k,l u k,l (r) = d 2 dr 2 u k,l (r) (2.5) at this point we define a rescaled potential v eff (r) ≡ 2µ 2 V eff (r) and a similarly rescaled energy 2µ 2 E k,l ≡ ε k,l ; in this case equation (2.5) turns out to be v eff (r) − ε k,l u k,l (r) = d 2 dr 2 u k,l (r). (2.6) In this way it is natural to define a rescaled Hamiltonian as H ≡ − d 2 dr 2 + v(r) in order to recover (2.6): The case for l = 0 defines the non-effective potential, and is also of great interest for our study where we have simplified u k,l=0 and ε k,l=0 to u k and ε k , respectively. We observe that (2.8) is a rescaled version of equations (2.7), (2.8) and (2.9) are the subject of our mathematical and physical analysis in Section 3. The 12 − 6 Lennard-Jones potential and its generalizations The 12-6 Lennard-Jones potential is usually presented in terms of two constants A and B where the negative term −A/r 6 leads to van der Waals attractive fields and comes from the second-order correction in perturbation theory to the dipole-dipole interaction between two atoms [16]. The positive term B/r 12 models the short range electronic repulsion between atoms and has no theoretical justification; it was empirically chosen because it fits reasonably good data coming from experiments with diatomic gases [34]. An alternative version is given by where is the atomic depth of the potential well, σ is the finite distance at which the inter-particle potential is zero and r is the distance between the particles (see Fig. 1). In mathematical terms, σ > 0 and > 0 satisfy that V (σ) = 0 and V 6 √ 2σ = − ; this means that σ is a zero potential length and the point 6 √ 2σ, − is the local minimum of the potential in the interval (0, ∞). It can easily be shown that there is no other critical point in such interval. In physical terms the well depth and the zero potential length σ are parameters that describe the cohesive and repulsive forces that take place in a gas or liquid at the molecular level. 
measures the strength of the attraction between pairs of molecules and σ is the radius of the repulsive core when two molecules collide. In order to explore with the differential Galois theory the integrability of the Schrödinger equation with the Lennard-Jones potential (2.10) and other related cases, we introduce the generalized effective version with arbitrary powers ν and δ given in [34] where 0 < ν < δ, A > 0, B > 0, C ≥ 0. Its rescaled version is given by The special case for δ = 2ν − 2 and some of its analytic advantages has been studied by J. Pade in [43] In the mentioned article, a special attention has been drawn to the ν = 6 case, and its ability to fit experimental data: (2.14) In Section 3 we will explore the interesting features of (2.14) in the realm of SUSYQM. The second virial coefficient and its dependence on the potential The virial equation of state for a gas expresses the deviation from the ideal behavior as a power series in the density ρ: The coefficients B n (T ) are called the virial coefficients and they are unique real functions of the temperature. The second virial coefficient B 2 (T ) represents the most significant deviation from the ideal behavior, since it is the prefactor in the term of order ρ 2 in the series. It is a customary result from equilibrium statistical mechanics (see [36]) that B 2 (T ) is a radial integral of the pairpotential v(r) given by A thorough study by Keller and Zumino of the properties of (2.15) has shown that a unique potential function can only be obtained from B 2 (T ) if the potential behaves monotonically [28]. This is clearly not the case for the Lennard-Jones potential and all its variants. As a result, there exists an ambiguity in the choice of the microscopic potential, leading to the same thermodynamic function B 2 (T ). In addition to this analytic inexactness there is also the limited range of measurements of B 2 (T ) for low temperatures. The aforementioned limitations lead to several possibilities of choice for v(r), at least from measurements of B 2 , specially for the power of the repulsive term B/r δ . The possibilities range from n = 9 to n = 14 since the early works of Lennard-Jones (see [32,33]) and De Boer (see [19]). We come back to this point in the next section, giving some hints about the applicability of the 10 − 6 potential for low temperatures. The dimensionless Schrödinger equation and the De Boer principle of corresponding states In 1948 J. De Boer introduced a dimensionless representation of the Schrödinger equation employing σ and in order to construct dimensionless lengths and energies [18] r As a result, the radial Schrödinger equation (2.9) for l = 0 can be transformed into the dimensionless form given by provided that the potential V (r) can be expressed in the generic form V (r) = f (r/σ), where f (r) is a well-defined dimensionless interaction function and Λ ≡ /(σ √ µ ) [18]. From (2.17) we see that Λ, the so-called De Boer parameter, is the only parameter in the equation that gives information about the particular microscopic characteristics of the system. From this fact, De Boer was able to formulate his principle of corresponding states, which is a "quantum" generalization of the van der Waals law of corresponding states for classical gases and liquids [23,44,50]. The De Boer principle of corresponding states tells us that two different systems with equal value of Λ have identical thermodynamic properties [18]. 
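The displayed forms of the virial expansion and of equation (2.15) are likewise lost in this extraction. The following minimal sketch (not the authors' code) uses the standard classical expression B2(T) = -2*pi * integral over r of (exp(-V(r)/k_B T) - 1) r^2 dr in reduced units r* = r/sigma, T* = k_B T/epsilon, with the n-m normalisation constant chosen so that the well depth equals epsilon; under those assumptions it can be used to reproduce the low-temperature comparison of the 12-6 and 10-6 curves discussed in Section 3.

```python
# Minimal sketch (not the authors' code): classical reduced second virial
# coefficient B2/sigma^3 for an n-m Lennard-Jones-type potential.
import numpy as np
from scipy.integrate import quad

def v_reduced(x, n=12, m=6):
    """Reduced n-m potential, x = r/sigma, energies in units of epsilon.
    C is fixed so that the well depth is exactly 1 (i.e. epsilon) and v(1) = 0;
    for n = 12, m = 6 this gives the familiar C = 4."""
    C = (n / (n - m)) * (n / m) ** (m / (n - m))
    return C * (x ** (-n) - x ** (-m))

def B2_reduced(T_star, n=12, m=6):
    """B2(T)/sigma^3 = -2*pi * int_0^inf (exp(-v/T*) - 1) x^2 dx (classical)."""
    f = lambda x: (np.exp(-v_reduced(x, n, m) / T_star) - 1.0) * x * x
    # start slightly above zero to avoid evaluating x**(-n) at x = 0
    val, _ = quad(f, 1e-6, 50.0, limit=200)
    return -2.0 * np.pi * val

if __name__ == "__main__":
    for T_star in (0.6, 0.8, 1.0, 2.0):
        print(T_star, B2_reduced(T_star, 12, 6), B2_reduced(T_star, 10, 6))
```

Both curves become large and negative as T* decreases; the paper reports that in that low-temperature regime the 12-6 and 10-6 results lie close to each other.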
In Section 3 we exploit this principle in order to give an interpretation to the supersymmetric integrable model for zero energy, we propose with the 10 − 6 Lennard-Jones potential. Supersymmetric quantum mechanics We implement in this work the simplest realisation of SUSYQM for one-dimensional quantum systems [21], which includes besides the Hamiltonian operator H, two fermionic operators Q ± or supercharges such that they commute with H and satisfy the algebra The second relation means that Q ± are nilpotent operators. A usual representation of the algebra, given in equations (2.18) 20) and A ± are defined in terms of the derivative d dx and an arbitrary complex function w(r), called the superpotential then, from (2.20) it results natural to identify which leads directly to a definition of the so-called partner potentials v ± given by such that Each of the two equations in (2.22) define a Riccati differential equation for the superpotential w, if v ± are known. Let's recall that the superpotential can also be found from the zero-energy base state ψ 0 , by computing w = −ψ 0 /ψ 0 , where ψ 0 is a solution of the Schrödinger equation with the v − potential (see Witten [51]). Riccati equations play a fundamental role in the study of integrability in SUSYQM. For a systematic study of this subject see references [1,3]. Differential Galois theory Exact solutions of differential equation is a hard but important task in different disciplines. Sometimes numerical methods cannot be implemented in general, if the equation has free generic parameters. Differential Galois theory, also known as Picard-Vessiot theory, is a powerful theory to solve explicitly, in the case when it is possible, linear differential equations. Analogous to the concept of field in classical Galois theory, there exists the concept differential field in differential Galois theory, which is a field satisfying the differential Leibniz rules. Similarly, a differential extension L of the differential field K means that K is a subfield of L preserving the differential Leibniz rules. In particular for a given linear differential with coefficients in K, if C L = C K (the field of constants of L is the same field of constants of K) and L is generated over K by a fundamental set of solutions of such differential equation, then L is called the Picard-Vessiot extension of K. Recall that the field of constants of K is defined as In the same way as we are interested in finding the roots of the polynomials over a base field, usually Q, using arithmetical and algebraic conditions, we would like to have explicit solutions of differential equations over a differential base field K = C(x), with field of constants C K = C, using elementary functions and quadratures. The differential Galois theory considers more general differential fields, but for our purpose is enough to consider C(x). Thus, the differential Galois group (DGal(L/K)), as analogically as in the polynomial case, is the group of all differential automorphisms that restricted to the base field coincide with the identity. Moreover if y 1 , y 2 , . . . , y n is a basis of solutions of d n y dx n + a n−1 then for each differential automorphism σ ∈ DGal(L/K) there exists a matrix A σ ∈ GL(n, C) (i.e., a ij ∈ C, 1 ≤ i, j ≤ n and det(A σ ) = 0) such that In this terminology, we say that a linear differential equation is integrable in the sense of differential Galois theory whether the connected identity component of its differential Galois is a solvable group. 
Moreover, this definition of integrability leads to the obtaining of solutions in closed form if and only if G 0 is solvable, see [49] for full explanation and details. From now on, integrable in this paper means integrable in terms of differential Galois theory, see [47]. To accomplish our purposes, we are interested in second-order differential equations of the form see [10]. Jerald Kovacic developed in 1986 an algorithm to solve explicitly second-order differential equations with rational coefficients given in the form of equation (2.25), see [30]. In [20] another version of Kovacic's algorithm is presented, and it is applied to solve several second-order differential equations with special functions as solutions. The version of Kovacic's algorithm presented here corresponds to [6], see also [1,10,11,14]. As mentioned, Kovacic's algorithm cannot be applied when the coefficients of the secondorder differential equations are not rational functions. Therefore we need to transform such differential equations to apply Kovacic's algorithm. A possible solution to this problem was developed in [1,3,11], the so-called Hamiltonian algebrization. However, we are interested in transformations that preserve the differential Galois group (at least their connected identity component), in other words, the transformation must be either isogaloisian, virtually isogaloisian or strongly isogaloisian, see [1,11]. One important differential equation in this work is the Whittaker's differential equation, which is given by The Galoisian structure of this equation has been deeply studied in [45], see also [20]. The following theorem provides the conditions of the integrability in the sense of differential Galois theory of equation (2.26). The Bessel's equation is a particular case of the confluent hypergeometric equation and is given by for some V ∈ K. Furthermore, the algebraic form of the equation Next, we follow the references [1, 6, 11] to describe Kovacic's algorithm. Thus, to solve second-order differential equations with rational coefficients we use should Kovacic's algorithm, which is presented in Appendix A. Main results In this section we present the main contributions of this paper. First we will show that for the usual ν = 6, δ = 12 Lennard-Jones potential, the Schrödinger equation is non-integrable in the sense of differential Galois theory for any value of energy. In contrast for δ = 10 and ν = 6 we show the integrability, in the sense of differential Galois theory, as a special case of a general theorem for δ = 2ν − 2 with δ, ν ∈ N (see Theorem 3.3 and its subsequent remark). From the physical point of view, the 10 − 6 case is of the most remarkable importance. Since we preserve the physically grounded −1/r 6 term coming from dipole-dipole interactions and responsible of the van der Waals forces; but we replace never the less, the rather arbitrary 1/r 12 term responsible for the repulsion of the particles in the many body system, and leading to a non-integrable differential equation; with an equally arbitrary 1/r 10 term, but leading to an integrable one. We will dedicate the subsequent sections to show the advantages and physical interest of this special choice (see Fig. 2 for a graphic comparison of both potentials). We start by considering the radial Schrödinger equation (2.7) with the generalized effective Lennard-Jones potential (2.13) Setting C(r) as the differential field of equation (3.1) with the derivative d dr , we set alsō A,B,C ∈ C. Theorem 3.1. 
Schrödinger equation with original 12 − 6 Lennard-Jones effective potential is not integrable in the sense of differential Galois theory for any value of the energy and for all A, B ∈ C * , C ∈ C. Proof . Considering ν = 6 and δ = 12 in equation (3.1) we arrive to the Schrödinger equation with effective original 12 − 6 Lennard-Jones potential. Now, applying the Hamiltonian change of variable z = r 2 over such Schrödinger equation we arrive to the differential equation Now, the change of dependent variable After applying Kovacic's algorithm, see Appendix A, we observe that equation ( Supersymmetric quantum mechanics and the Lennard-Jones superpotential The implementation of Hamiltonian algebrization and Kovacic's algorithm reaches a considerable power in the realm of supersymmetric quantum mechanics. In fact the integrability of secondorder linear equations like the radial Schrödinger equation (3.1) subject of our study, via the Kovacic's algorithm is deeply related with the properties of the solutions of the associated Riccati equation in the supersymmetric extension of the theory [1]. Taking this as a motivation, we go further in this section and propose a superpotential leading to the non-effective part (2.12) of (2.13) (C = 0) as one of two partner potentials. If we denote the superpotential in one dimension as w(r) the corresponding partner potentials are given by equation ( as a consequence we have A simple choice for w(r) is given by where the following identities should hold As a result we identify v δ−ν (r) in (2.12) with v − and we have from (3.3), the following expressions for the partner potentials According to equation (2.23), the corresponding wave function for the zero energy level using v − is given by An example of wave function for this potential is given in Fig. 3. Summarizing we conclude that expressions in (3.5) set conditions for the existence of a superpotential in the form of (3.4), and a supersymmetric extension to equation (3.1) with partner potentials in (3.6). The case for δ = 2(ν − 1) thus appears in a natural way, as a simple condition for defining a supersymmetric model. The property of integrability for zero energy of this case to be proven in Theorem 3.3, makes it an appealing model to further explore the relation between supersymmetry and integrability already studied in [1]. The 10 − 6 Lennard-Jones superpotential and the De Boer parameter As mentioned before the case for ν = 6, δ = 10 is of particular relevance from physical grounds. One of the aims of this work is to explore the analytical advantages of v 10−6 in contrast to v 12−6 . The 10 − 6 potential in terms of the molecular parameters σ and is given by where α is chosen so that is the minimum energy (the well depth) and σ, as mentioned before, is the value where V 10−6 vanishes. As a result we have for this case 1 α ≡ (25/6) 5/3. Thus the rescaled 10 − 6 Lennard-Jones potential reads where we have applied definitions in (3.10) and α ≡ (25/6) 5/3. We will call it from now on the supersymmetric condition (for short SUSY condition) for the 10 − 6 Lennard-Jones potential. In terms of the so-called De Boer parameter Λ ≡ /(σ √ µ ), which gives a degree of the quantum character of the system [18], we have Λ 2 ≈ 0.4303 or similarly Λ ≈ 0.6559. An important remark at this point is that the SUSY condition given in the formĀ = 5 √B will appear again in Theorems 3.2 and 3.3, in the context of the Martinet-Ramis theorem, that is, Theorem 2.1. 
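Since several displayed formulas in this subsection are garbled in extraction, the following sketch (assuming the standard SUSYQM convention v_± = w^2 ± w' and the ansatz w(r) proportional to 1/r^(ν-1) quoted in the abstract) checks the key identities symbolically and numerically: the superpotential reproduces a (2ν-2)-ν potential with A = (ν-1)√B, i.e. A = 5√B for ν = 6, and the normalisation α = (25/6)√(5/3) together with Λ² = (1/3)√(5/3) ≈ 0.4303 follows from fixing the 10-6 well depth at ε.

```python
# Minimal check (not the authors' code) of the 10-6 / (2nu-2)-nu SUSY construction.
import sympy as sp
import numpy as np
from scipy.optimize import minimize_scalar

# --- symbolic part: partner potential generated by w(r) = sqrt(B)/r^(nu-1) ---
r, B, nu = sp.symbols('r B nu', positive=True)
w = sp.sqrt(B) / r**(nu - 1)

v = sp.simplify(w**2 + sp.diff(w, r))     # one of the two partners w^2 +/- w'
print(v)   # B/r^(2*nu-2) - (nu-1)*sqrt(B)/r^nu, i.e. A = (nu-1)*sqrt(B); A = 5*sqrt(B) for nu = 6

# zero-energy solution of u'' = v u built from the superpotential
psi0 = sp.exp(-sp.sqrt(B) * r**(2 - nu) / (nu - 2))
print(sp.simplify(sp.diff(psi0, r, 2) - v * psi0))   # should simplify to 0

# --- numeric part: normalisation and De Boer parameter of the 10-6 well ---
alpha = (25 / 6) * np.sqrt(5 / 3)
V106 = lambda x: alpha * (x ** -10 - x ** -6)   # x = r/sigma, energies in units of eps
res = minimize_scalar(V106, bounds=(0.8, 3.0), method='bounded')
print(res.x, (5 / 3) ** 0.25)    # minimum at r/sigma = (5/3)^(1/4), roughly 1.136
print(res.fun)                   # well depth close to -1, i.e. -eps
print((1 / 3) * np.sqrt(5 / 3))  # Lambda^2 under the SUSY condition, roughly 0.4303
```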
As a summary of this section, we conclude that the fulfillment of condition (3.12), guarantees not only the solvability of the Schrödinger equation (2.8) with the potential v 10−6 in (3.9) (through the Martinet-Ramis theorem, as we will see) but also the existence of a superpotential given by expression (3.11), which correspondingly leads to v 10−6 as one of the partner potentials v ± (r) defined through the Riccati equations in (3.3). The supersymmetric model thus formulated considers a specific version of the radial Schrödinger equation (2.9) or equivalently the rescaled form (2.8), where we set in both equations l = 0 for the angular momentum, and V 10−6 and v 10−6 are given in (3.8) and (3.9). We will start in the next section, a physical analysis of the model, in the light of the De Boer principle of corresponding states. The low temperature behavior of the 10 − 6 Lennard-Jones gas As mentioned in Section 2, the De Boer principle of corresponding states tells us that two different systems with equal value of Λ have identical thermodynamical properties [18]. In this sense the SUSY condition Λ 2 = 2 / (σ) 2 µ = (1/3) 5/3 ≈ 0.4303 given in (3.12); and representing a definite set of combinations of values of the parameters σ, , and µ; that accounts for Λ 2 = (1/3) 5/3; is defining through the principle of corresponding states, a specific set of physical systems with equivalent thermodynamical properties. These systems have the special feature of being described by a Supersymmetric potential of the form (3.11) leading to (3.14) with the potential (3.13) as the partner potential V − (r) ≡ 2 2µ v − (r) in (3.6) with ν = 6. We have found after a brief review of the literature, a significant coincidence between the specific value for Λ 2 ≈ 0.4303 and the value of Λ 2 = 0.456 reported by Miller, Nosanow and Parish [39] for a second-order liquid to gas phase transition of a Bose-Einstein condensate at zero temperature. Since their calculation is an approximate one, made in the framework of the variational method; it is a worthy task (to be done elsewhere) to investigate the advantages of our exact approach to the calculation of properties of such many-body systems at low temperatures in the context of the quantum extension to the principle of corresponding states. In Fig. 4 we see a plot of the second virial coefficient calculated numerically for both the 12−6 and 10 − 6 potentials from the integral definition in (2.15). Relying on B 2 (r) as a quantity that gives information of the microscopic pair-potential (with the previously mentioned limitations) we see an asymptotic closeness of both functions for low temperatures, that hints for the reliability of our supersymetric model with the 10−6 Lennard-Jones potential, near the absolute zero. Integrability of the 10 − 6 Lennard-Jones potential and its generalization The following result is valid for any potential v(r) belonging to a differential field. • The differential Galois groups of the radial equation and Schrödinger equation with effective potential are subgroups of SL(2, C). • The transformation ϕ is strongly isogaloisian. Figure 4. B 2 (T ) vs. T / plot for the 12 − 6 (black) and 10 − 6 (grey) Lennard-Jones potentials for the same molecular parameters σ and . The closeness of both functions for low temperatures near to absolute zero is a hint of the reliability of the 10 − 6 potential in that region. Proof . We proceed according to each item. 
• Applying the transformation given in equation (2.24) and equation (2.25) we obtain it because 2/r is the coefficient of the first derivative of the radial equation after separation of variables. Thus, applying the change of variable ϕ : u → ϕ(u) = ru we arrive to the Schrödinger equation with effective potential. • The Wronskian of two independent solutions of the Schrödinger equation with effective potential is constant and constants are in the base field. Similarly, the Wronskian of two independent solutions of the radial equation belongs to the base field. Therefore, automorphisms over such solutions acts by multiplication of matrices belonging to SL(2, C), that is, σ(U ) = A σ U , σ(ϕ(U )) = A σ ϕ(U ) and det(A σ ) = 1. Thus, A σ ∈ SL(2, C). • Applying the differential automorphism σ over ϕ(u) we observe that σ(ϕ(u)) = σ(r)σ(u) = rσ(u), which implies that differential Galois group only depends on the solutions u because r is in the base field and the differential Galois group will be the same with the same base field. Thus, the transformation ϕ is strongly isogaloisian. Thus we conclude the proof. The following result corresponds to the integrability conditions for the 10 − 6 L-J potential and its generalization. Proof . The Schrödinger equation given in equation (3.1), with zero energy, is transformed into the Whittaker's differential equation (2.26) through the change of variables Applying Martinet-Ramis theorem we have Assuming m ∈ Z we obtain which is the integrability condition for the Schrödinger equation with (2ν − 2) − ν L-J potential and its wave function corresponds to equation (3.7) with δ = 2ν − 2. Remark 3.4. We observe that Theorem 3.3 refers to the integrability in the sense of differential Galois theory, which is not related with square integrable wave functions. Another key point is that we are not considering energies different than zero and excited states, this is an open problem for this generalized potential. In particular, the theorem includes the 10 − 6 L-J potential, for C = 0 and ν = 6. Therefore the Schrödinger equation with L − J 10-6 is integrable for zero energy when A = ± √ B(8m + 4 ± 1), while energies different than zero and excited states were not considered in this paper. Moreover, for zero energy and m = −1 we recover the integrability condition obtained through SUSYQM for this potential, i.e., the Schrödinger equation with 10−6 L-J potential is also integrable in the sense of differential Galois theory for A ∈ ±3 √ B, ±5 √ B . Final remarks and open questions In this paper we have shown that there exist no explicit solutions of the radial Schrödinger equation with the usual 12 − 6 Lennard-Jones potential for any value of the energy. We have proposed an alternative supersymmetric model with a 10 − 6, v − partner potential, that preserves the −1/r 6 van der Waals attraction. We have found through the De Boer principle of corresponding states, initial hints that this model could represent a low temperature system determined by a Λ 2 ≈ 0.4303 value of the 2nd power of the De Boer parameter. We have studied possible generalizations of the Lennard-Jones potential, where the Schrödinger equation is integrable in the sense of differential Galois theory. Further work can be developed looking for similar theorems of integrability in the sense of differential Galois theory for E = 0 and excited states, for the 10 − 6 potential and other generalizations. 
Relations between square integrable wave functions and solutions in closed form of SE for generalizations of L-J potentials should be explored in further works too. We hope that this paper can be the starting point of further works involving SUSYQM, DGT and statistical mechanics, which are not easy topics. Although we tried to write a readable preliminaries about these topics, we know that it was not enough and the reader should complement with references suggested by us, otherwise this paper could be a large paper, which was not the target. A Kovacic algorithm The version of Kovacic's algorithm presented in this appendix is based in the improved version given in [6]. There are four cases in Kovacic's algorithm. Only for cases 1, 2 and 3 we can solve the differential equation, but for the case 4 the differential equation is not integrable. It is possible that Kovacic's algorithm can provide us only one solution (ζ 1 ), so that we can obtain the second solution (ζ 2 ) through Step 2. Find D = ∅ defined by D = n ∈ Z + : n = α ε(∞) ∞ − c∈Γ α ε(c) c , ∀ (ε(p)) p∈Γ . If D = ∅, then we should start with the case 2. Now, if Card(D) > 0, then for each n ∈ D we search ω ∈ C(x) such that Step 3. For each n ∈ D, search for a monic polynomial P n of degree n with ∂ 2 x P n + 2ω∂ x P n + ∂ x ω + ω 2 − r P n = 0. If success is achieved then ζ 1 = P n e ω is a solution of the differential equation. Else, case 1 cannot hold. Case 2. Search for each c ∈ Γ and for ∞ the corresponding situation as follows: Step 1. Search for each c ∈ Γ and ∞ the sets E c = ∅ and E ∞ = ∅. For each c ∈ Γ and for ∞ we define E c ⊂ Z and E ∞ ⊂ Z as follows: (c 1 ) If •(r c ) = 1, then E c = {4}. If D = ∅, then we should start the case 3. Now, if Card(D) > 0, then for each n ∈ D we search a rational function θ defined by θ = 1 2 c∈Γ e c x − c . If P n does not exist, then case 2 cannot hold. If such a polynomial is found, set φ = θ + ∂ x P n /P n and let ω be a solution of Then ζ 1 = e ω is a solution of the differential equation. Case 3. Search for each c ∈ Γ and for ∞ the corresponding situation as follows: Step 1. Search for each c ∈ Γ and ∞ the sets E c = ∅ and E ∞ = ∅. For each c ∈ Γ and for ∞ we define E c ⊂ Z and E ∞ ⊂ Z as follows: (c 1 ) If •(r c ) = 1, then E c = {12}. Step 2. Step 3. Search for each n ∈ D, with its respective m, a monic polynomial P n = P of degree n, such that its coefficients can be determined recursively by P −1 = 0, P m = −P, where i ∈ {0, 1, . . . , m−1, m}. If P does not exist, then the differential equation is not integrable because it falls in case 4. Now, if P exists search ω such that m i=0 S i P (m − i)! ω i = 0, then a solution of the differential equation is given by where ω is solution of the previous polynomial of degree m.
Embracing AI in English Composition: Insights and Innovations in Hybrid Pedagogical Practices : In the rapidly evolving landscape of English composition education, the integration of AI writing tools like ChatGPT and Claude 2.0 has marked a significant shift in pedagogical practices. A mixed-method study conducted in Fall 2023 across three sections, including one English Composition I and two English Composition II courses, provides insightful revelations. The study, comprising 28 student respondents, delved into the impact of AI tools through surveys, analysis of writing artifacts, and a best practices guide developed by an honors student. Initially, the study observed a notable anxiety and mistrust among students regarding the use of AI in writing. However, this apprehension gradually subsided as students increasingly integrated these tools into their writing processes, indicating a shift from skepticism to practical application. The analysis of writing artifacts, particularly early drafts, revealed distinct patterns of AI tool usage, differentiating between students utilizing the tools effectively and those attempting to shortcut the writing process. The final papers, while not overtly indicating AI usage, demonstrated nuanced integration of AI in iterative and recursive tasks like refining arguments and developing ideas at the paragraph level. This suggests a trend toward a hybrid model of writing instruction, where traditional methods are complemented by strategic use of emergent technologies. The study underscores the importance of revised instructional strategies that blend conventional writing techniques with guidance on effective and ethical AI tool usage. It highlights the potential of AI tools in supporting the writing process while also cautioning against over-reliance. The findings of this study offer valuable insights for educators and institutions aiming to develop a balanced and effective hybrid writing instruction model, catering to the needs of contemporary English composition classrooms while maintaining academic integrity. Introduction The impact of generative AI on the writing process, particularly in educational settings, has been profound and multifaceted.At the outset of January 2023, these AI tools, including the likes of ChatGPT, were met with skepticism and even outright banning in many colleges and school districts (Yu, 2023).Educators and administrators were wary of the potential for misuse, such as plagiarism and the erosion of fundamental writing skills (Kishore et al., 2023).However, as the year has progressed, the same institutions began to recognize the potential benefits of these tools and shifted toward embracing AI literacy and integration into the classroom (Famaye et al., 2023).The evolution in perspective underscores a significant shift in attitudes toward technology in education, reflecting a broader trend of digital transformation across various sectors (Moraes et al., 2023). 
More specifically, the field of English composition is undergoing a radical transformation, mirroring changes brought about by previous technological advancements.The advent of generative AI tools has prompted educators to rethink traditional strategies for teaching writing (Fitria, 2023;Imran & Almusharraf, 2023).The future of sentence-level writing instruction, as well as the stages of the writing processfrom idea generation to drafting and organizationis being reevaluated in light of these new tools (AlAfnan et al., 2023).Not surprisingly, this disruption is particularly evident in English composition classrooms, where the use of AI writing tools continues to be viewed with suspicion by some, while others have outright banned their use (Tlili et al., 2023).Despite this, the continued integration of these tools into various writing platforms, including mainstream applications like Microsoft Office, suggests that their presence at a foundational level in word processors is inevitable (Jo, 2023). The potential for a fundamental pedagogical shift in the teaching of writing is already reflected in literature on using the large language model (LLM) ChatGPT in writing pedagogy.Studies have shown that ChatGPT generally motivates learners to develop reading and writing skills, indicating its potential positive impact in educational settings (Ali et al., 2023).However, there are concerns about the authenticity and reliability of content generated by ChatGPT.For instance, a study found that while the generative aspects of the LLM can generate content that appears believable, human-written articles score better in terms of completeness, credibility, and scientific content (Haq et al., 2023).Furthermore, the use of the tool in various domains including education and training has been documented, but its effectiveness and appropriateness in these contexts remain subjects of ongoing research and debate (Arif et al., 2023).Another study highlighted that while ChatGPT can assist in creative and essay writing, its use did not significantly improve essay quality compared to control groups (Bašić et al., 2023).Overall, these studies suggest a complex and nuanced view of ChatGPT's role in writing pedagogy, highlighting both its potential benefits and challenges. As we navigate this new generative world in the Digital Age, this study seeks to uncover effective strategies for incorporating AI into writing instruction, balancing the benefits of technological advancement with the need to maintain core writing skills and academic integrity.Given the growing body of research on the use of generative AI tools like ChatGPT in writing pedagogy, it presents a compelling case for further exploration into their pedagogical applications (Hutson & Plate, 2023;Shidiq, 2023).This study aims to elucidate the specific use cases of integrating such generative writing tools, particularly focusing on the English composition classroom.By examining the integration of these tools in educational settings, the study seeks to uncover the nuances of how AI can complement and potentially transform traditional writing instruction methods. 
The methods of the study involved a detailed examination of student interactions with AI writing tools, their impact on writing processes, and the subsequent effects on student learning outcomes.Through surveys, analysis of writing artifacts, and pedagogical evaluations, the study provides a comprehensive view of the role of AI in writing pedagogy.The results indicate a complex interplay between AI tools and traditional writing instruction, revealing both the potential advantages and limitations of AI in enhancing writing skills.These findings are crucial for educators and curriculum designers in understanding how to effectively integrate AI tools in writing courses, balancing technological innovation with essential writing competencies.The study's insights contribute to the evolving landscape of writing pedagogy, offering guidance on leveraging AI tools to enrich the English composition classroom experience while maintaining academic integrity and fostering critical writing skills. Literature Review The current state of scholarship on the impact of AI on writing pedagogy highlights the growing influence of digital technologies on educational practices.Research indicates that digital writing syllabi often align with pedagogical scholarship in practices like direct instruction and critical analysis, although some divergence exists in reflective pedagogy (Hamilton, 2019).This suggests that while digital tools are increasingly integrated into writing instruction, they also necessitate a reevaluation of traditional teaching methods.At the same time, AI technologies in writing instruction and assessment are reshaping material-discursive relations of difference, impacting teaching and learning with racializing assemblages (Dixon-Román et al., 2020).The use of these new tools underscores the need for educators to consider the broader social and ethical implications of implementing AI in educational settings.Additionally, AI in education has been noted to improve administrative functions, curriculum personalization, and overall learning quality, signifying a shift toward more individualized and efficient educational practices (Chen et al., 2020). 
Studies specifically on using AI to teach writing, however, have been limited to areas outside of the composition classroom.For instance, research by Tang (2021) has investigated the growing popularity of AI-based writing algorithms in business writing practices.The study explores the integration and impact of these tools, emphasizing their strengths and potential drawbacks.It provides an empirical basis for understanding how AI is reshaping professional writing practices and what this means for the future of writing education.Previous applications of AI in higher education, on the other hand, including profiling, prediction, assessment, and adaptive systems, are gaining traction, but the involvement of educators in these developments remains crucial (Hutson et al., 2022;Zawacki-Richter et al., 2019).The potential impact of AI on learning, teaching, and education, Tuomi (2022) previously noted, necessitates policy-oriented work, research, and forward-looking activities to address both opportunities and challenges.Moreover, the impact on writing pedagogy is influenced by learning management systems (LMS) and academic analytics, affecting how computers and composition scholars consider writing instruction and assessment (Duin & Tham, 2020).These developments highlight the intersection of technology and pedagogy, calling for a nuanced understanding of how AI tools can be effectively integrated into writing instruction while considering their broader educational implications. These previous studies were, however, pre-generative AI.The research landscape since 2023 regarding the use of ChatGPT and other AI writing tools in English composition classrooms has been dynamic and insightful.Whereas previous uses of AI were generally confined to a select group of staff within organizations that operated student information systems and LMS, the use of these new generative tools has democratized their access and has seen a proliferation across all areas of academia from faculty, staff, and, of course, students.As such, a notable trend is the growing effectiveness of these tools in enhancing student capabilities in reading, writing, and critical thinking.Alharbi (2023) highlights the categorization of AI-powered writing assistance tools into four main groups: automated writing evaluation tools, corrective feedback tools, machine translators, and GPT-3 automatic text generators.The research suggests that these tools can significantly improve students' writing skills by providing varied types of support tailored to learner needs.The study points toward the increasing sophistication of AI in providing personalized and effective writing assistance in foreign language classrooms.These findings are echoed in the study by Nazari et al. (2021), whose work delves into the application of AI-powered digital writing assistants in higher education, revealing improvements in student engagement, self-efficacy for writing, and emotional responses compared to non-equipped AI environments.This study points toward the multifaceted benefits of AI in fostering a more engaging and effective learning experience.Ali et al. (2023) and Purnama et al. 
(2023) highlight that ChatGPT motivates learners to develop their writing skills and improves student engagement in online writing courses.In a broader educational context, Zhao and Nazir (2022) discuss how AI-based applications are enhancing the educational system by promoting English language learning and inclusivity.Their work underscores the role of these new tools in creating multimode production and usage, which sustains the effectiveness of learning experiences, especially in the post-COVID-19 era where digital learning tools have become increasingly vital.The study supports previous research on the topic.For instance, a study by Mohamed Haggag (2021) indicates International Journal of Changes in Education Vol. 1 Iss. 1 2024 that AI-powered tools can significantly improve reading and writing skills for TOEFL-ITP test-takers, suggesting their broader applicability in language learning and test preparation.This research underscores the potential of AI in enhancing students' language proficiency, providing a compelling case for the integration of such tools in academic environments. However, this burgeoning field of research also surfaces critical concerns and nuances.While ChatGPT and other generative writing tools have been praised for their accurate and reliable inputs in areas like creative writing, essay writing, and prompt generation (Taecharungroj, 2023), there is a pressing need to address the authenticity of the content it generates.Studies by Haq et al. (2023) and Perkins (2023) indicate that while the content produced by ChatGPT is often believable, it may lack the completeness, credibility, and scientific rigor found in humanwritten compositions.Moreover, academic integrity concerns loom large, as the adoption of these tools in educational settings, especially in digital writing and composition, needs transparent guidelines and ethical considerations (Perkins & Roe, 2023).While ChatGPT shows potential in boosting student performance and aiding in various aspects of writing, educators have found themselves navigating a changing landscape to ensure that the use of AI complements rather than compromises academic integrity. 
These studies reveal that while the use of ChatGPT and other AI writing tools in English composition classrooms has been explored in various studies since 2023, these investigations remain somewhat limited in scope and depth.The existing research provides valuable insights into the potential benefits and challenges of integrating AI into writing pedagogy, highlighting the need for further exploration in this area.Consequently, this study aims to address the gaps identified in the current literature by conducting a more comprehensive investigation into the use of AI writing tools in English composition teaching and offer actionable steps to integrate new strategies into writing pedagogy furthering work by McKnight (2021).In her pre-generative AI study, "Electric Sheep?Humans, Robots, Artificial Intelligence, and the Future of Writing," thought-provoking questions were raised about the future of writing education in an era where AI and humans coexist.Her research espoused a shift toward a posthuman perspective, with humans needing less input in the writing process due to the advancement of AI language models.The hypothetical future is upon us and educators can no longer ignore these new tools in writing instruction.Therefore, by delving deeper into the practical applications, ethical considerations, and pedagogical outcomes of these tools, this study seeks to contribute a more nuanced understanding of their role and efficacy in enhancing writing instruction in educational settings. Methodology This study utilizes a mixed-methods research design to investigate the use of AI writing tools in English composition classrooms comprehensively.The qualitative component includes detailed analysis of student writing artifacts and interviews, while the quantitative component comprises surveys designed to capture a broad spectrum of student perceptions and experiences.The combination of these methods aims to provide a holistic view of the impact of AI tools on writing pedagogy. The study involved 28 students from three different English composition courses, one English Composition I and two English Composition II courses.The participants were selected based on their enrollment in these courses during the study period, ensuring a varied range of experiences and perspectives on the use of AI in their writing process. Surveys were administered at the beginning and end of the courses to gauge students' initial perceptions and track changes over time.The survey consisted of both closed-ended questions for quantitative analysis and open-ended questions for qualitative insights.Questions were designed to capture students' attitudes toward AI writing tools, their experiences using these tools, and their perceptions of the impact on their writing skills and processes. Before delving into the specific questions about the use of AI in the writing process, the survey gathered essential demographic information from the participants.This included age, gender, ethnicity, major, and first-generation status, among other factors. Collecting such data provides a context for understanding the diverse backgrounds and perspectives that might influence student experience and attitude toward AI in their coursework. The following questions were then posed to gather insights specifically about student interactions with AI in their writing process: 1. Did you like the AI generator essay exercises being part of the writing process in class? 2. How willing were you to experiment with AI during your writing process? 3. 
How challenging did you find working with AI to be in the writing process? 4. At what stage of the writing process do you naturally think to use AI for?(Select all that apply) 5. How do you perceive AI in the writing process: as a collaborative tool or do you feel alienated from the writing process?6.What types of interaction do you desire from an AI writing assistant?7. Do you prefer AI to be self-reflective or to offer advice during the writing process?In other words, do you want it to help you reflect on your own writing or suggest ways to make it better?8.For the final paper, did you choose to utilize AI in the writing of it?9. Why or why not? 10.Please provide any further insight into your experience and the usefulness of AI essay generators for college composition classes. These questions were designed to explore various aspects of AI use in the writing process, from general attitudes and willingness to experiment with the technology to perceptions of its challenges and benefits.By addressing specific stages of the writing process and the desired types of interaction with AI tools, the survey aimed to capture a comprehensive view of how students integrate AI into their academic work and their overall satisfaction with the experience.The responses provide valuable insights into the effectiveness and impact of AI writing tools in enhancing the educational experience in composition classes. Writing artifacts, including both rough drafts and final drafts of paper assignments, were systematically collected throughout the course.The collection process was standardized to ensure consistency.The analysis involved both qualitative and quantitative methods.Qualitatively, the study examined changes in writing style, coherence, and originality.Quantitatively, it assessed aspects such as the frequency and type of AI tool usage, improvements in grades or writing quality, and other measurable changes. An honors student developed a best practices guide focusing on the pedagogical integration of AI tools in writing instruction.The development process involved a literature review, consultation with educators experienced in AI, and trial and error in applying various strategies in real classroom settings.The content of the guide includes strategies for effectively combining traditional writing instruction with AI tools, ethical considerations, and tips for maintaining academic integrity. In-depth interviews with selected participants were conducted to gain deeper insights into their experiences and opinions about using AI in writing.The interview questions were semi-structured, allowing for flexibility in responses while ensuring that all relevant topics were covered.The interviews provided valuable context to the quantitative data collected and helped identify themes and patterns not apparent in the survey or artifact analysis. Quantitative data from the surveys were analyzed using statistical methods appropriate for the type of data collected.Qualitative data from open-ended survey responses, writing artifacts, and interviews were analyzed using thematic analysis to identify common themes and patterns.The results from both qualitative and quantitative analyses were then triangulated to provide a comprehensive understanding of the impact of AI on writing pedagogy. 
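As an illustration only, the quantitative and first-pass sentiment steps described above could be scripted roughly as follows; the file name and column labels are hypothetical (the study reports its instrument in prose, not as a dataset), and automated sentiment scores are intended to support rather than replace human thematic coding.

```python
# A minimal, hypothetical sketch of the analysis pipeline described above.
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

survey = pd.read_csv("composition_ai_survey.csv")   # hypothetical export

# Quantitative side: frequency tables for the closed-ended (Likert-style) items.
for item in ["liked_ai_exercises", "willingness_to_experiment", "perceived_difficulty"]:
    print(item)
    print((survey[item].value_counts(normalize=True) * 100).round(1))  # percentages

# Qualitative side: a first-pass sentiment score on open-ended answers,
# used only to flag responses for closer thematic (human) coding.
sia = SentimentIntensityAnalyzer()
survey["reason_sentiment"] = survey["final_paper_reason"].fillna("").map(
    lambda text: sia.polarity_scores(text)["compound"]
)
print(survey["reason_sentiment"].describe())
```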
By employing these detailed methods, the study aims to offer a well-rounded perspective on the use of AI writing tools in English composition classrooms, contributing valuable insights and strategies for the effective integration of technology in education.This approach enhances the replicability of the study and provides a solid foundation for future research in this area. Demographics The demographic analysis of the participants revealed a diverse group in terms of academic standing, age, gender identity, ethnicity, and educational background.The cohort predominantly consisted of First-Year students, comprising 63.33% (n = 17) of the respondents, followed by Sophomores at 23.33% (n = 7), and Juniors at 13.33% (n = 4).There were no participants from Senior year or other categories (Figure 1).In terms of age demographics, the vast majority of participants fell into the 18-24 age group, accounting for 96.43% (n = 27) of the sample.Only one participant (3.57%) was in the 35-44 age bracket, with no participants in the other age categories.Gender identity among the participants was predominantly Male (60.71%, n = 17), with Female participants making up 35.71% (n = 10) of the cohort.Only one participant (3.57%) preferred not to specify their gender identity. Perceptions of use of AI The survey responses revealed significant insights into students' perceptions and experiences with AI in the writing process within the classroom.A substantial majority of the students, 79% (n = 22), expressed that they liked having AI generator essay exercises as part of the writing process in class (Figure 2).This positive response suggests a general receptiveness and interest among students in integrating AI tools into their learning experience.The remaining 21% (n = 6) were uncertain, marked by their response "Maybe," indicating a degree of ambivalence or curiosity about the role of AI in their writing practice. When asked about their willingness to experiment with AI during their writing process, a significant portion, 62.07% (n = 17), indicated they were "Very Willing" to engage with AI tools (Figure 3).An additional 27.59% (n = 8) reported being "Somewhat Willing," suggesting a general openness among the majority of students to incorporate AI into their writing routines.Only a small fraction, 6.9% (n = 2), were "Somewhat Unwilling." However, in terms of the challenges faced while working with AI in the writing process, the responses were more varied (Figure 4).A group of students, 41% (n = 11), found it "Somewhat easy," while 34% (n = 10) considered it "Neither easy nor difficult," reflecting a range of experiences in adapting to AI tools.Interestingly, 14% (n = 4) found it "Extremely easy," suggesting that for some, AI tools were intuitive and user-friendly.However, 11% (n = 3) did find it "Somewhat difficult," indicating that challenges and learning curves were present for a minority of the cohort.In all, these results indicate a predominantly positive reception of AI tools in the English composition classroom, with a majority of students showing willingness and interest in integrating AI into their writing process.However, the variation in the perceived challenge of using AI highlights the need for tailored approaches to support all students effectively. 
Perceptions of AI in the writing process
The survey responses provided further insights into how students perceive and utilize AI during various stages of the writing process. Students reported using AI at different stages of the writing process. The most common stage was Drafting, with 28.57% (n = 14) of students utilizing AI for this purpose (Figure 5). This was closely followed by Brainstorming and Outlining, each cited by 24.49% (n = 11) of respondents. Editing was mentioned by 16.33% (n = 7), and a smaller number, 6.12% (n = 3), used AI for Finalizing their work. This distribution suggests that students find AI tools particularly useful in the initial and middle phases of writing, such as developing ideas and structuring their work. When asked about their perception of AI in the writing process, a significant number of students viewed AI as a collaborative tool (Figure 6). Specifically, 41% (n = 12) felt that AI was "Somewhat Integrated" into their writing process, while 21% (n = 5) described it as "Highly Integrated." However, 31% (n = 9) remained Neutral, and a small fraction felt alienated by AI, with 7% (n = 1) each for "Somewhat Alienated" and "Highly Alienated." This indicates a generally positive perception of AI as a helpful tool in the writing process, though not without some reservations. Students showed diverse preferences for the types of interactions they desire from AI writing assistants. The majority, 50% (n = 14), desired Creative Input from AI, indicating a preference for AI assistance in generating ideas or content (Figure 7). This was followed by Fact-Checking (17.86%, n = 5), other types of interaction (14.29%, n = 4), Syntax Suggestions (10.71%, n = 3), and Grammar Correction (7.14%, n = 2). These responses highlight a desire for tools that offer more than just mechanical corrections, leaning toward creative and content-related assistance. Likewise, in terms of the role played by these generative tools, a large majority, 78.57% (n = 22), preferred a mix of both self-reflective capabilities and advice-giving during the writing process. This indicates a desire for AI tools that not only offer practical suggestions for improvement but also help students reflect on their writing. Only 14.29% (n = 4) preferred AI to exclusively offer advice, and a minority of 7.14% (n = 2) preferred AI to be self-reflective.
Regarding the final paper, a significant majority of the students, 71.43% (n = 20), chose to utilize AI in writing it, while 28.57% (n = 8) did not. This usage rate suggests a high level of acceptance and integration of AI tools in the completion of significant academic tasks. In order to determine why they did or did not use said tools for the final paper, an open-ended question followed. The sentiment analysis of these responses reveals a diverse range of opinions and experiences. A notable number of students expressed positive sentiments, highlighting the utility of the tools in facilitating various aspects of the writing process. For instance, one student appreciated the initial structure it provided: "I like using AI as a way for me to see an outline and then I am able to build on that outline myself and fine tune it to suit my ideas." The sentiment was echoed by others who found AI particularly helpful in overcoming challenges in starting and organizing their papers. Use was also credited for enhancing enjoyment and interest in writing, with one student stating, "I was interested in what it was going to be like being given the option to use AI and I thoroughly enjoyed it, it actually made me enjoy English as I have never been a big fan but this changed that."

Conversely, some students expressed neutral to mixed feelings about using AI. While they acknowledged the benefits of AI in assisting with brainstorming and structuring, they still preferred a more personal touch in their writing. A student captured this sentiment by saying, "I think overall I still like doing the paper by myself more." This perspective suggests a preference for individual effort and a desire to maintain a personal connection with their work.

Negative sentiments were also present, primarily centered around concerns regarding authenticity, personal relevance, and the depth of AI-generated content. Some students felt that relying on AI might reduce the effort and authenticity of their work, as one mentioned, "Because it feels like I put no effort into it." Others were wary of AI's ability to convey personal experiences or emotional depth, with a student expressing, "I did not because I felt that AI could not portray the emotional message I was trying to make." Many of the same sentiments resurfaced in the last open-ended question.

The final question of the survey asked students to "provide any further insight into your experience and the usefulness of AI essay generators for college composition classes." The sentiment analysis of these responses once again reveals a range of attitudes and insights. For instance, several students expressed a positive outlook on the use of AI, appreciating its ability to enhance the writing process. One student highlighted the tool's capacity for idea generation: "It greatly sped up the writing process ... it comes up with ideas my mind just hadn't gotten to yet." This sentiment of AI as a facilitator in the creative process was echoed by others, with one noting the AI's utility in helping transition from just passing grades to potentially achieving A's. The flexibility and variety of AI tools were also appreciated, as one student mentioned liking "being able to test and experiment with various AI models."
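The article does not name the software used for the sentiment analysis of these open-ended responses. As a purely illustrative sketch, responses could be scored with NLTK's VADER analyzer; the tool choice, the example responses, and the score thresholds below are assumptions, not the authors' procedure.

```python
# Hypothetical sketch: scoring open-ended survey responses with VADER (NLTK).
# Requires: pip install nltk, plus a one-time download of the lexicon.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

responses = [
    "I like using AI as a way for me to see an outline and then build on it myself.",
    "Because it feels like I put no effort into it.",
]

analyzer = SentimentIntensityAnalyzer()
for text in responses:
    compound = analyzer.polarity_scores(text)["compound"]
    # Common convention: >= 0.05 positive, <= -0.05 negative, otherwise neutral.
    label = "positive" if compound >= 0.05 else "negative" if compound <= -0.05 else "neutral"
    print(f"{label:8s} ({compound:+.2f})  {text}")
```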
However, the responses were not uniformly positive.Some students offered more nuanced views, acknowledging both the advantages and disadvantages of AI.For example, one student candidly stated, "I didn't hate it.It has its advantages and disadvantages," suggesting a balanced perspective.Another student reflected on their journey of integrating AI into their work, "I later got more used to integrating it rather than exploiting it so it felt like it was actually my own work and thoughts," indicating a shift from mere reliance to a more thoughtful application of AI.There were also insights into how students perceive the role of AI in their learning process.Some saw AI as a valuable tool for struggling students or for learning basic writing principles, while others viewed it as a temporary aid rather than a permanent solution.One student remarked, "I think it's a fine tool to learn how to use : : : I don't think it should be required." Overall, the responses from students about using AI in their writing process are varied, showing that while many see its value, a significant number approach it with caution.This diversity in viewpoints highlights the importance of adopting a nuanced approach to incorporating AI in educational contexts.Such an approach should acknowledge the advantages of AI, particularly in providing creative and structural support in writing, while also addressing concerns related to authenticity and personal input in student work.The results reflect a general inclination among students to value AI for more than just grammatical assistance, seeing it as a tool that can enhance the overall quality and creativity of their writing.The widespread use of AI in crafting final papers underscores its perceived effectiveness and utility in academic writing, suggesting its growing significance in the educational landscape. Analysis The survey results from the study reveal several key themes and implications regarding student engagement with AI tools.The overwhelming majority of students reported positive engagement with AI essay generators, as indicated by their willingness to experiment with these tools and incorporate them into various stages of the writing process.The high percentage of students who found AI useful for drafting and brainstorming suggests that AI tools are particularly valued for their ability to facilitate the initiation and organization of ideas.This observation aligns with current pedagogical trends that emphasize the importance of scaffolding in writing instruction, where AI can play a supportive role.Along the same lines, students predominantly perceived AI as a collaborative tool in the writing process, viewing it as an integrated part of their writing strategy.This perception reflects a growing trend in educational technology where digital tools are seen as partners in learning rather than mere aids.The ability of AI to provide creative input and assist in brainstorming and drafting resonates with the students' need for dynamic support in their writing journey. 
Despite the overall positive reception, the study also uncovered concerns about the authenticity and personal relevance of AIgenerated content.A small yet significant number of students expressed reservations about over-reliance on AI, fearing it might diminish their personal effort and creative input.This concern highlights the need for a balanced approach in integrating AI into writing pedagogy, one that leverages its capabilities while fostering students' individual writing skills and creativity.The diverse ways in which students utilized AI tools, ranging from brainstorming to editing and finalizing, indicate the versatility of these tools in catering to different aspects of the writing process.This versatility suggests that AI tools can be adapted to a wide range of writing tasks and styles, accommodating various learning preferences and needs.Finally, several students reported improvements in their writing skills and overall performance in English composition classes, attributed to their use of AI tools.This finding suggests that AI can act as a catalyst in enhancing students' writing abilities, particularly for those who traditionally struggled with writing tasks.However, the extent to which these improvements can be solely attributed to AI use remains an area for further investigation. One element that is crucial to discuss and had an impact on student perceptions of AI use in the writing process involves ethics.Given the predominantly negative academic narrative surrounding AI tools, there were several students who refused to participate and dropped the class due to the requirements.A review of the ethical considerations of academic integrity need to be considered here briefly before continuing with the instructor and student observations, especially considering the changes in content generation made possible by generative technologies.Several models have previously been proposed, such as Ashford (2021), who introduced the Academic Integrity Model (AIM), combining behavioral ethics and hybrid app-human agency to foster socio-techno responsibility among app-centric students.This model emphasizes the importance of developing ethical responsibility in students as they interact with AI and other digital tools in academic settings (Ashford, 2021).Likewise, Wong et al. (2018) also explored the use of mobile augmented reality trails on International Journal of Changes in Education Vol. 1 Iss. 1 2024 university campuses to engage students in learning about academic integrity and ethics.Their findings suggest that such innovative approaches can effectively change student perspectives on ethical dilemmas and promote a deeper understanding of academic integrity. The volume "Ethics and integrity in education and research" coordinated by Sanud and Popoveniuc (2019), as reviewed by Amanoloae (2020), serves as a comprehensive resource for stimulating rigorous debates on research ethics in academia.This work highlights the continuous need for critical discussion and ethical guidance in educational research and practice.On the other hand, Zawacki-Richter et al. (2019) call for more critical reflection on the challenges, risks, and ethical approaches to AI in higher education.Their systematic review underscores the need for a deeper understanding of AI applications' ethical implications in the educational landscape.Bozkurt et al. 
(2021) reflect on the revolutionary changes AI has brought to education, including personalization and online learning.However, they emphasize that ethics remains an understudied area, urging for more research and guidelines to ensure ethical use of AI in educational settings.Holmes et al. (2022) advocate for a community-wide framework for ethics in AI in education, combining multidisciplinary approaches and robust guidelines.This framework aims to address the complex ethical issues arising from AI use in education and ensure responsible student learning. The ethical considerations surrounding the use of AI in education, particularly in writing instruction, have become increasingly prominent with the advent of new generative technologies.While previous models and studies, such as the AIM by Ashford (2021) and the augmented reality trails researched by Wong et al. (2018), provide a foundation for understanding the ethical implications of AI, the emergence of new generative forms of AI necessitates a reevaluation and expansion of these ethical frameworks.These advanced AI tools offer unprecedented capabilities in content generation, making it imperative to address the unique ethical challenges they present. It is crucial to clarify that in this study, students were not using AI tools unsupervised or attempting to pass off AI-generated content as their own work.Instead, the study emphasized the importance of students' active role in the writing process, using AI as a tool to aid and enhance their writing rather than replace it.This distinction is essential in mitigating ethical concerns, as the responsible use of AI involves recognizing and crediting AI's contribution to the creative process.However, ethical violations can arise when the use of AI is not properly addressed or is outright banned.Such prohibitive measures may become increasingly untenable given the pervasive integration of AI into various forms of content creation and word processing tools.To navigate these ethical waters effectively, educators and institutions must develop clear guidelines that articulate the student's role in using AI, emphasizing collaboration, supervision, and transparency.By fostering an environment of ethical awareness and responsibility, students can learn to harness the power of AI in their writing while upholding the values of academic integrity and intellectual honesty.As AI continues to evolve, so too must our ethical frameworks and educational practices to ensure that they remain relevant and effective in guiding students through the complex landscape of digital learning and content creation. Instructor and student observations The instructor of record, along with a collaborating honors student, provided valuable insights into the use of AI in writing assignments.They observed that while all students initially experimented with AI, most of them gradually increased their reliance on it, with some papers being roughly 90% AI-generated.The honors student, in developing a best practices document, focused on guiding students on what not to do with AI, addressing concerns about over-reliance and loss of originality.A noteworthy observation was about a student who initially struggled with lower grades.After consultation and guidance on effectively using AI, this student's performance improved significantly, suggesting the potential of AI as a tool for academic enhancement when used appropriately. 
A common sentiment among students was discomfort with the idea of AI taking over their writing, leading to feelings of lack of ownership and agency over their work.This concern about control was evident in their reluctance to fully embrace AI initially.Despite this initial discomfort, the survey results indicated a shift in perception, with most students expressing a liking for AI by the end of the course.Regarding the tools used, all students utilized ChatGPT 3.5, with some experimenting with Claude.However, some students faced limitations with Claude, such as running out of responses.The financial aspect also played a role, with students showing reluctance to pay for AI services given their initial apprehensions. Over the course of the term, students' use of AI became more frequent and serious.Initially, there was anxiety about the effectiveness of AI, especially among Creative Writing students who remained anxious throughout.In contrast, Composition students were more practical in their approach.Most students preferred using AI for drafting, citing its efficiency in generating language and helping overcome the mundane aspects of writing an essay.However, they found it less effective for targeted editing and finalizing, as AI tended to rewrite rather than make precise edits.On the other hand, integrating AI into the writing process was a nuanced experience for students.Some felt disconnected, as though the words were not entirely their own.Others used AI as a tool for editing and developing ideas, maintaining a sense of ownership by being actively involved in the ideation process.This approach was likened to group work, where students felt it was essential to contribute to the idea generation to feel a sense of ownership over the final output. These observations underscore the complexity of integrating AI into writing pedagogy.While AI can be a powerful tool for enhancing writing skills and efficiency, its use needs to be balanced with maintaining students' sense of ownership and originality in their work.The evolution of students' attitudes toward AIfrom initial discomfort to eventual acceptancereflects the potential for AI to become an integral part of the writing process, provided it is used as a collaborative tool rather than a replacement for student effort and creativity.The instructor and honors student's insights highlight the importance of guiding students in responsible and effective AI use, ensuring that AI serves as an aid to their writing process rather than undermining their development as independent and critical thinkers. Adopting a hybrid instructional approach The study underscores the importance of a hybrid instructional model in English composition courses, blending traditional writing methods with AI tools.This approach should aim to develop the creative and analytical skills of students in tandem.Instructors may consider guiding students in using AI as an adjunct to their writing, especially during brainstorming and drafting phases.This International Journal of Changes in Education Vol. 1 Iss. 
1 2024 integration is crucial for leveraging the strengths of the tool in idea generation and language production while maintaining the rigor of traditional writing techniques.At the same time, it is essential for educators to instill a culture of critical engagement with AI-generated content among students.This involves training students not just to edit AI contributions for grammatical accuracy and stylistic coherence but also to evaluate the relevance and validity of the information provided by AI.Such critical engagement is vital for students to retain ownership of their work and to hone their evaluative and analytical skills. Given the diverse levels of familiarity and proficiency with AI tools among students, comprehensive training on effective AI usage is vital.This training should encompass the technical aspects of navigating AI tools and the ethical implications, including avoiding plagiarism and upholding academic honesty.Instructors should motivate students to explore the creative potential of AI in their writing.This could involve assignments where students are tasked with rewriting their work in various rhetorical styles or employing AI for imaginative idea development.These activities not only make writing assignments more engaging but also demonstrate the enriching capacity of AI in creative endeavors. In order to assist others in adopting AI tools for writing instruction, it is useful to provide examples of assignments that have been used successfully in first-year English composition classes.These assignments are designed to integrate AI into different stages of the writing process, helping students to understand and leverage the capabilities of AI tools while developing their writing skills.Here are some examples of standard assignments that can be adapted to include AI integration: AI-Assisted topic selection and brainstorming An initial assignment might involve students using AI tools like ChatGPT to brainstorm and select topics for their essays.Students can input their interests or general ideas into the AI tool, which then generates a list of potential essay topics or questions.Students can then refine these suggestions into a specific topic for their essay.This assignment helps students leverage AI for creative brainstorming and topic selection, a crucial first step in the essay writing process. Developing outlines with AI Once a topic is selected, students can use AI to help develop an outline for their essay.They can ask the AI tool to provide a basic structure for their selected topic, including potential thesis statements, main arguments, and supporting points.Students can then expand on this outline, adding their own ideas and research.This assignment helps students understand how to use AI to create a structured approach to their writing, ensuring that all necessary elements are included. AI-Generated drafts and student revision In this assignment, students can input their outlines into an AI tool, which then generates a rough draft of the essay.Students are then tasked with revising and improving this draft, adding their own analysis, examples, and personal voice.This process allows students to see how AI can aid in generating content but also emphasizes the importance of their own critical thinking and writing skills in producing a final, polished essay. 
Peer review with AI insights Students can use AI tools to provide initial feedback on their peers' drafts.By inputting their classmate's essay into the AI tool, they can receive suggestions on grammar, style, and content. Students can then use this AI-generated feedback as a starting point for their own peer reviews, adding their own insights and suggestions.This assignment helps students understand how to critically evaluate writing and provides an opportunity for them to learn from both AI and human feedback. Reflective essay on AI use As a meta-cognitive activity, students can write a reflective essay on their experience using AI in their writing process.They can discuss how they used the AI tool, what they learned from the experience, and how they see AI tools influencing their future writing.This assignment encourages students to critically reflect on the role of technology in writing and education more broadly. These assignments provide a structured way for students to interact with and learn from AI tools throughout the writing process.By integrating AI into different stages of essay writing, from brainstorming to revision, students can gain a deeper understanding of both the capabilities and limitations of AI, and how to best leverage this technology to improve their writing skills. As AI technologies continue to advance, the curriculum should evolve accordingly.Educators must stay abreast of the latest AI developments and adapt their teaching strategies to include updated best practices and guidelines for AI use in writing assignments.Finally, educators must be conscious of the digital divide and strive to ensure equitable access to AI tools for all students.This might entail offering alternative resources or support for students who lack access to premium AI services, ensuring that no student is disadvantaged in their learning experience due to technological or financial constraints. Best practices resource As part of the study, an honors student developed a Best Practices document (Table 1), emphasizing the importance of editing AI-generated content.This practice helps students feel a sense of ownership over their work.The document also highlighted the preference for AI tools that provide creative input, serving as a catalyst for idea generation rather than just focusing on syntax and grammar corrections.These practices, developed by and for students, can be integrated into the curriculum to guide students in effectively and ethically using AI in their writing process. An exceptionally effective application of ChatGPT is in crafting structured essay outlines.These outlines, tailored to specific essay genres such as persuasive or argumentative, provide students with a robust framework for their writing.This structured approach is instrumental in ensuring that all essential elements of the essay are methodically addressed.By leveraging the generative writing tool to lay out a clear structural blueprint, students can navigate the complexities of their essays with greater ease and precision.This methodical planning stage is critical in shaping an essay that is not only well-organized but also comprehensive in covering the necessary points. 
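Instructors who want to demonstrate this outlining step programmatically could use something like the sketch below. It relies on the OpenAI Python client rather than the ChatGPT web interface the students used, and the model name, prompt wording, and essay topic are illustrative assumptions only.

```python
# Hypothetical sketch: asking a chat model for a persuasive-essay outline.
# Requires: pip install openai, and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
topic = "Should first-year composition courses allow AI writing assistants?"
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a writing tutor for first-year composition."},
        {"role": "user", "content": f"Draft a persuasive essay outline on: {topic}. "
                                    "Include a working thesis, three main arguments with "
                                    "supporting points, and a counterargument section."},
    ],
)
print(response.choices[0].message.content)  # students then expand and personalize the outline
```

In class, the printed outline would serve as the starting point that students refine, reorganize, and elaborate in their own words, mirroring the assignment described above.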
Beginning the writing process with concrete sources marks another strategic use of ChatGPT.By feeding the AI tool with substantiated materials like scholarly articles or literary excerpts, students anchor their essays in factual and credible content.This tactic significantly diminishes the AI's tendency to generate fictitious quotations, thus upholding the essay's academic integrity.The infusion of factual sources into the AI-generated content ensures that the essay remains grounded in reality, enhancing both its credibility and scholarly value. The utility of the tool in producing detailed and content-rich paragraphs, especially for the body of the essay, cannot be overstated.Instructing the AI to analyze specific media or sources results in paragraphs that are not only well-crafted but also rich in analysis.This feature becomes particularly beneficial when constructing the core sections of the essay, where depth of content and thoroughness of analysis are paramount.By utilizing the LLM for this purpose, students can add substantial substance and complexity to their essays, elevating the quality of their arguments and discussions. Addressing the sense of alienation and lack of personal agency in writing is crucial in the effective use of ChatGPT.To mitigate these concerns, personalizing the AI-generated content is paramount.This involves adapting the AI's output to align with the student's distinctive writing style, be it through tonal adjustments or stylistic alterations.Editing out elements that are not characteristic of the student's writing or adjusting the AI's tone to mirror personal preferences are key strategies in this personalization process.Such customization not only ensures that the final essay resonates with the student's unique voice but also maintains a sense of ownership and individuality in their work. However, it is critical for students to avoid excessive reliance on LLMs for their essay writing.General instructions to the AI often lead to generic and shallow responses, and there is a potential risk of AI-generated fictitious quotations, which raises ethical and academic concerns.Therefore, students must engage critically with the responses of the tool, rigorously editing and refining them to ensure the final essay transcends mere regurgitation of common thoughts and truly reflects their insights and creativity.The outline method, which involves creating a thesis, drafting an outline with the help of ChatGPT, and then elaborating on the essay with additional AI tools, stands out as a recommended approach.This method not only facilitates the development of well-structured and cohesive essays but also allows students to imbue their personal insights and creativity into their work. The integration of AI tools like ChatGPT into English composition courses presents a range of practical implications for education and English language teaching, necessitating expanded discussion and more detailed recommendations for educators and policymakers. 
One of the primary implications is the potential transformation of traditional pedagogical approaches. AI tools offer innovative ways to engage students in the writing process, from initial brainstorming to final editing. Educators should consider how to systematically incorporate these tools into their curriculum, such as through specific assignments or modules focused on AI literacy. However, this integration must be done thoughtfully to ensure that AI supports and enhances learning outcomes rather than undermining the development of critical thinking and writing skills.

The ethical use of AI in education is another critical area of concern. As AI becomes more prevalent, educators need to instill a sense of ethical responsibility in students. This includes understanding the limitations of AI, recognizing the importance of original thought and effort in academic work, and ensuring academic integrity. Developing clear guidelines and ethical frameworks for using AI in academic settings is essential. Educators should also encourage students to critically engage with AI-generated content, teaching them to discern the quality and relevance of the information provided by AI tools. Professional development for educators is crucial in realizing the benefits of AI in education. Training programs should be provided to help educators become proficient in using AI tools, understand their pedagogical applications, and stay informed about the latest developments and ethical considerations in AI. Encouraging a community of practice among educators can also facilitate the sharing of experiences, strategies, and best practices for integrating AI into teaching.

From a policy and infrastructure standpoint, ensuring equitable access to AI tools is paramount. Policymakers need to ensure that all students and educators have access to the necessary technology and training, regardless of their socioeconomic background. This might involve investing in digital infrastructure, providing subsidies or grants for purchasing AI tools, and ensuring that training programs are accessible to all educators. Moreover, as AI continues to evolve, ongoing research and monitoring are necessary to understand its long-term impacts on education and to continually update policies and teaching strategies accordingly. This includes examining how AI affects student learning outcomes, teacher-student interactions, and the overall educational experience. Policymakers should consider these factors when developing regulations and guidelines for AI use in education.

These recommendations offer a comprehensive framework for leveraging ChatGPT as an invaluable ally in the essay writing process. By engaging constructively with the AI tool, structuring essays through well-crafted outlines, initiating the writing process with factual sources, and personalizing the AI-generated content, students can harness the capabilities of ChatGPT while ensuring the authenticity and academic integrity of their work. Balancing AI assistance with creative input is imperative, enabling students to use ChatGPT not as a substitute for their efforts but as a powerful tool augmenting their academic endeavors.
The role of LLMs such as ChatGPT in writing pedagogy has been underscored by this research, demonstrating their capacity to effectively engage and motivate learners in developing reading and writing skills. Yet, this study does not shy away from the contentious nature of AI-generated content, addressing the critical debates surrounding the authenticity and reliability of AI in educational contexts. It is precisely this contentious nature that makes the balanced and ethical integration of AI into educational settings both a challenge and a necessity. The need for nuanced strategies that harmoniously blend AI's innovative capabilities with the preservation of core writing skills and academic integrity is more pressing than ever.

This study makes a substantial contribution to the ongoing discourse in the field of English language education and AI use. It provides a comprehensive analysis of student interactions with AI writing tools and their impact on the writing process and learning outcomes. The insights gleaned from this research offer a valuable roadmap for educators and curriculum developers, suggesting methods for effectively integrating AI into writing courses to enhance learning experiences while maintaining academic rigor.

As we look to the future, the imperative for continued research and validation in this field remains. There is a wealth of specific use cases for AI tools like ChatGPT in educational settings that remain unexplored. Further examination of their impact on student writing proficiency and broader pedagogical applications is necessary. Additionally, staying attuned to the rapidly evolving AI technology landscape and its educational implications is crucial for ensuring that writing pedagogy remains relevant and responsive to the needs of students in the Digital Age.

As this study emphatically asserts, the transformative potential of generative AI in writing instruction is demonstrable, and it calls for educators to embrace the challenges and opportunities presented by AI, using it as a powerful tool to enrich writing instruction and foster critical thinking and effective communication skills in students. By adopting a balanced approach that emphasizes both the innovative aspects of AI and the importance of maintaining academic integrity, educators can ensure the continued value and integrity of academic writing in an increasingly digital world. This study thus serves as a clarion call for proactive adaptation and thoughtful integration of AI in English language education, setting a precedent for future research and practice in the field.

Figure 1: Cohort-level status. Figure 2: Perceptions of AI in class assignments. Figure 5: Use of AI during writing process. Table 1: Best practices for using ChatGPT for English composition.
2024-01-24T16:17:01.753Z
2024-01-22T00:00:00.000
{ "year": 2024, "sha1": "ffe7ec5b081363d92d530addaccc24267ce4319a", "oa_license": "CCBY", "oa_url": "https://ojs.bonviewpress.com/index.php/IJCE/article/download/2290/779", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9bf6491e2bef97b8b0b3697ff76cade554a13954", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [] }
231592914
pes2o/s2orc
v3-fos-license
Discourse-Aware Unsupervised Summarization for Long Scientific Documents We propose an unsupervised graph-based ranking model for extractive summarization of long scientific documents. Our method assumes a two-level hierarchical graph representation of the source document, and exploits asymmetrical positional cues to determine sentence importance. Results on the PubMed and arXiv datasets show that our approach outperforms strong unsupervised baselines by wide margins in automatic metrics and human evaluation. In addition, it achieves performance comparable to many state-of-the-art supervised approaches which are trained on hundreds of thousands of examples. These results suggest that patterns in the discourse structure are a strong signal for determining importance in scientific articles. Introduction Single document summarization aims at shortening a text and preserving the most important ideas of the source document. While abstractive strategies generate summaries with novel words, extractive strategies select sentences from the source to form a summary (Nenkova et al., 2011). Despite recent advances in abstractive summarization, extractive models are still attractive in cases where faithfully preserving the original text is the priority. For example, legal arguments can hinge on the exact wording of a contract (Farzindar and Lapalme, 2004), and ensuring the factual correctness of a summary can be critical in the health or scientific domains, which is a known weakness of current abstractive methods (Kryściński et al., 2019). Supervised neural-based models have been the dominant paradigm in recent extractive systems, at least for short news summarization (Nallapati et al., Introduction although anxiety and depression are often related and coexist in pd patients, recent research suggests that anxiety rather than depression is the most prominent and prevalent mood disorder in pd. Related Work furthermore, since previous work, albeit limited, has focused on the influence of symptom laterality on anxiety and cognition, we also explored this relationship . Methodology this study is the first to directly compare cognition between pd patients with and without anxiety. Result the findings confirmed our hypothesis that anxiety negatively influences attentional setshifting and working memory in pd. Result moreover, anxiety has been suggested to play a key role in freezing of gait (fog), which is also related to attentional set-shifting. Future work s. future research should examine the link between anxiety, set-shifting, and fog, in order to determine whether treating anxiety might be a potential therapy for improving fog. Table 1: Example of a PubMed article's summary produced by our model HIPORANK. The hierarchical and directed graph combined with discourse-aware edge weighting allow HIPORANK to generate summaries that cover topics from different sections of the scientific article. 2017; Dong et al., 2018;Zhou et al., 2018;Liu and Lapata, 2019;Narayan et al., 2018b;Zhang et al., 2019b). These models usually employ the encoderdecoder structure and have achieved promising performance on news datasets such as CNN/DailyMail (Hermann et al., 2015), and NYT (Sandhaus, 2008). However, these models cannot easily be adapted to out-of-domain data that have greater length and fewer training examples such as scientific article summarization (Xiao and Carenini, 2019) due to two significant limitations. 
First, they require large domain-specific training pairs of source documents and gold-standard summaries, which are often not available or feasible to create (Zheng and Lapata, 2019). Second, the typical setup of using a tokenlevel encoder-decoder with an attention mechanism does not scale well to longer documents (Shao et al., 2017), as the number of attention computations is quadratic with respect to the number of tokens in the input document. We instead explore unsupervised approaches to address these challenges on long document summarization. We show that a simple unsupervised graph-based ranking model combined with proper sophisticated modelling of discourse information as an inductive bias can achieve unreasonable effectiveness in selecting important sentences from long scientific documents. For the choice of unsupervised graph-based ranking model, we follow the paradigm of LexRank (Erkan and Radev, 2004) and PACSUM (Zheng and Lapata, 2019). In these methods, sentences are nodes and weighted edges represent the degree of similarity between sentences. Summary generation is formulated as a node selection problem, in which nodes (i.e., sentences) that are semantically similar to other nodes are chosen to be included in the final summary. In other words, they determine node importance by defining a notion of centrality in the graph. In addition, we augment the document graph with directionality and hierarchy to reflect the rich discourse structure of long scientific documents. In particular, our method relies on two insights about the discourse structure of long scientific documents. The first is that important information typically occurs at the start and end of sections; i.e., they tend to appear near section boundaries (Baxendale, 1958;Lin and Hovy, 1997;Teufel, 1997). We implement this using an asymmetric edge weighting function in a directed graph which considers the distance of a sentence to a boundary. The second is that most sentences across section boundaries are unlikely to interact significantly with each other (Xiao and Carenini, 2019). We implement this insight by injecting hierarchies into our model, introducing section-level representations as graph nodes in addition to sentence nodes. By doing so, we convert a flat graph into a hierarchical non-fully-connected graph, which has two advantages: 1) reduced computational cost and 2) pruning of distracting weak connections between sentences across different sections. We call our approach Hierarchical and Positional Ranking model (HIPORANK) and evaluate it on summarizing long scientific articles from PubMed and arXiv (Cohan et al., 2018). Empirical results show that our method significantly improves performance over previous unsupervised models (Zheng and Lapata, 2019;Erkan and Radev, 2004) in both automatic and human evaluation. In addition, our simple unsupervised approach achieves performance comparable to many expensive state-of-the-art supervised neural models that are trained on hundreds of thousands of examples of long document pairs (Xiao and Carenini, 2019;Subramanian et al., 2019). This suggests that patterns in the discourse structure are highly useful for determining sentence importance in long scientific articles, and that explicitly building in biases inspired by this structure is a viable strategy for building summarization systems. 
Extractive Summarization of Long Scientific Papers

Despite the success of deep neural-based models on news summarization, these approaches typically face challenges when applied to long documents such as scientific articles. Furthermore, these approaches are often blind to the topical information resulting from the structured sections in scientific articles (Xiao and Carenini, 2019). Two recent neural supervised models address these issues. Subramanian et al. (2019) used the introduction section as a proxy for the whole document, while Xiao and Carenini (2019) divided articles into sections and used non-auto-regressive approaches to model global and local information. Besides neural approaches, most previous scientific article summarization systems employ traditional supervised machine learning algorithms with surface features as input (Xiao and Carenini, 2019). Surface features such as sentence position, sentence and document length, keyphrase score, and fine-grain rhetorical categories are often combined with Naive Bayes (Teufel and Moens, 2002), CRFs and SVMs (Liakata et al., 2013), and LSTM and MLP (Collins et al., 2017) for extractive summarization over long scientific articles. To the best of our knowledge, the only unsupervised extractive summarization model for long scientific documents relies on citation networks (Qazvinian and Radev, 2008; Cohan and Goharian, 2015), by extracting citation-contexts from citing articles and ranking these sentences to form the final summary. Our proposed method is different from their settings, where we perform single document summarization based on the long source article.

Figure 1: Example of a hierarchical document graph constructed by our approach on a toy document that contains two sections {T_1, T_2}, each containing three sentences for a total of six sentences {s_1, ..., s_6}. Each double-headed arrow represents two edges with opposite directions. The solid and dashed arrows indicate intra-section and inter-section connections respectively. When compared to the flat fully-connected graph of traditional methods, our use of hierarchy effectively reduces the number of edges from 60 to 24 in this example.

Method

Our proposed method combines simple graph-based ranking algorithms with a two-level hierarchical model of the rich discourse structures of long scientific documents (Teufel, 1997; Xiao and Carenini, 2019). We incorporate this discourse information into the graph as inductive biases through the construction of a directed hierarchical graph for document representation (Figure 1 and Section 3.2) and through the asymmetric edge weighting of edges with boundary functions (Section 3.3).

Graph-based Ranking Algorithm

Graph-based ranking algorithms for summarization represent a document as a graph G = (V, E), where V is the set of vertices that represent sentences or other textual units in the document, and E is the set of edges that represent interactions between sentences. The directed edge e_ij from node v_i to node v_j is typically weighted by w_ij = f(sim(v_i, v_j)), where sim is a measure of similarity between two nodes (e.g. cosine distance between their distributed representations), and f can be an additional weighting function. These algorithms select the most salient sentences from V based on the assumption that sentences that are similar to a greater number of other sentences capture more important content and therefore are more informative.
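To make this ranking principle concrete, here is a minimal, self-contained sketch of flat similarity-based centrality, i.e., the LexRank/PACSUM-style baseline described above rather than the full hierarchical model; the toy sentence vectors and the use of plain cosine similarity are assumptions for illustration.

```python
# Minimal sketch of flat, undirected centrality ranking: a sentence's score is the
# sum of its cosine similarities to every other sentence in the document.
import numpy as np

def centrality_scores(sent_vecs: np.ndarray) -> np.ndarray:
    # Normalize rows so that dot products are cosine similarities.
    norms = np.linalg.norm(sent_vecs, axis=1, keepdims=True)
    unit = sent_vecs / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T            # pairwise cosine similarity matrix
    np.fill_diagonal(sim, 0.0)     # ignore self-similarity
    return sim.sum(axis=1)         # centrality = total similarity to all other sentences

# Toy example: four "sentence embeddings" in three dimensions.
vecs = np.array([[1.0, 0.2, 0.0],
                 [0.9, 0.3, 0.1],
                 [0.0, 1.0, 0.9],
                 [0.1, 0.9, 1.0]])
print(np.argsort(-centrality_scores(vecs)))  # sentence indices, most central first
```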
Hierarchical Document Graph Creation

To create a hierarchical document graph, we first split a document into its sections, then into sentences. To create the hierarchy, we allow two levels of connections in our hierarchical graph: intra-sectional connections and inter-sectional connections, as shown in Figure 1. Intra-sectional connections aim to model the local importance of a sentence within its section. It implements the idea that a sentence that is similar to a greater number of other sentences in the same topic/section should be more important. This is realized in our fully-connected subgraph for an arbitrary section I, where we allow sentence-sentence edges for all sentence nodes within the same section. Inter-sectional connections aim to model the global importance of a sentence with respect to other topics/sections in the document, as a sentence that is similar to a greater number of other topics is deemed more important. However, calculating sentence-sentence connections across different sections is computationally expensive and may also suffer from performance degradation due to weak edges between sentences that are unrelated as a result of being from different sections (Mihalcea and Tarau, 2004). To address these issues, we introduce section nodes on top of sentence nodes to form a hierarchical graph. For inter-section connections, we only allow section-sentence edges for modeling the global information. This choice makes our approach more computationally efficient while greatly limiting the number of irrelevant inter-section edges that arise from the fact that sections in scientific documents typically have independent topics (Xiao and Carenini, 2019). In contrast, traditional graph-based ranking algorithms have a flat fully-connected document graph with no sections.

Asymmetric Edge Weighting by Boundary Functions

To calculate the weight of an edge, we first measure similarity between a sentence-sentence pair sim(v_j^I, v_i^I) and a section-sentence pair sim(v^J, v_i^I). While our method is agnostic to the measure of similarity, we use cosine similarity with different vector representations in our experiments, averaging a section's sentence representations to obtain its own. While the similarities of two graph nodes are symmetric, one may be more salient than the other when considering their discourse structures (Baxendale, 1958; Teufel, 1997). Based on these discourse hypotheses of long scientific documents, we capture this asymmetry by making our hierarchical graph directed and injecting asymmetric edge weighting over intra-section and inter-section connections.

Asymmetric edge weighting over sentences

Our asymmetric edge weighting is based on the hypothesis that important sentences are near the boundaries (start or end) of a text (Baxendale, 1958). We reflect this hypothesis by defining a sentence boundary function d_b over sentences v_i^I in section I such that sentences closer to the section's boundaries are more important:

d_b(v_i^I) = min(x_i^I, α (n_I + 1 − x_i^I)),   (1)

where n_I is the number of sentences in section I and x_i^I represents sentence i's position in the section I. α ∈ R+ is a hyper-parameter that controls the relative importance of the start or end of a section or document. The sentence boundary function allows us to incorporate directionality in our edges, and to weight edges differently depending on whether they are incident to a more important or less important sentence in the same section.
Concretely, we define the weight w_ji^I for intra-section edges (incoming edges for i) as:

w_ji^I = λ_1 sim(v_j^I, v_i^I) if d_b(v_i^I) > d_b(v_j^I), and w_ji^I = λ_2 sim(v_j^I, v_i^I) otherwise,   (2)

where λ_1 < λ_2 such that an edge e_ji incident to i is weighted more if i is closer to the text boundary than j. Edges with a weight below a certain threshold β can be pruned (i.e., set to 0).

Asymmetric edge weighting over sections

Similarly, to reflect the hierarchy hypothesis over long scientific documents proposed by Teufel (1997), we also define a section boundary function d_b to reflect that sections near a document's boundaries are more important:

d_b(v^I) = min(x^I, α (N + 1 − x^I)),   (3)

where N is the number of sections in the document and x^I represents section I's position in the document. This section boundary function allows us to inject asymmetric edge weighting w_i^{JI} into inter-section edges:

w_i^{JI} = λ_1 sim(v^J, v_i^I) if d_b(v^I) > d_b(v^J), and w_i^{JI} = λ_2 sim(v^J, v_i^I) otherwise,   (4)

where λ_1 < λ_2 such that an edge e_i^{JI} incident to i ∈ I is weighted more if section I is closer to the text boundary than section J.

Importance Calculation

We compute the overall importance of sentence v_i^I as the weighted sum of its inter-section and intra-section centrality scores:

c(v_i^I) = μ_1 Σ_{v^J ∈ D} w_i^{JI} + Σ_{v_j^I ∈ I} w_ji^I,   (5)

where I is the set of sentences neighbouring v_i^I and D is the set of neighbouring sections in the hierarchical document graph; μ_1 is a weighting factor for inter-section centrality.

Summary Generation

Lastly, we generate a summary by greedily extracting sentences with the highest importance scores until a predefined word-limit L is passed. Most graph-based ranking algorithms recompute importance after each sentence is extracted in order to prevent content overlap. However, we find that the asymmetric edge scoring functions in (2) and (4) naturally prevent redundancy, because similar sentences have different boundary positional scores. Our method thus successfully extracts diverse sentences without recomputing importance.

Experimental Setup

This section describes the datasets, the hyperparameter choices, the baseline models, and the evaluation metrics used in the experiments.

Datasets

Our experiments are conducted on PubMed and arXiv (Cohan et al., 2018), two large-scale datasets of long and structured scientific articles with abstracts as summaries. The average source article length is four to seven times longer than popular news benchmarks (Table 2), making them ideal candidates to test our method.

Implementation Details

Our model's hyperparameters for testing are chosen from the ablation studies on the validation sets. The test results are reported with the following hyperparameter settings: λ_1 = 0.0, λ_2 = 1.0, α = 1.0, with μ_1 = 0.5 for PubMed and μ_1 = 1.0 for arXiv. We fix λ_2 to 1; the choices of λ_1 ∈ {−0.2, 0, 0.5} represent whether the edge between a less boundary-important sentence and a more boundary-important sentence is 1) negatively weighted, 2) pruned, or 3) down-weighted, with λ_1 < λ_2 such that an edge e_ji incident to i is weighted more if i is closer to the text boundary than j. α ∈ {0, 0.5, 0.8, 1.0, 1.2} controls the relative importance of the start or end of a section or document. μ_1 ∈ {0.5, 1.0, 1.5} controls how much we weigh intra-section sentence importance vs. inter-section sectional importance. For each dataset, we experimented with different pretrained distributional sentence representation models. The dimension of sentence representations is model-dependent (details in Section 6.2). We used the publicly released BERT model 3 (Devlin et al., 2019), the PACSUM BERT model 4 (Zheng and Lapata, 2019), SentBERT and SentRoBERTa 5 (Reimers and Gurevych, 2019), and BioMed word2vec representations 6 (Moen and Ananiadou, 2013).
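Given sentence embeddings from any of these models, the scoring and selection just described can be sketched in a few lines of code. Because the display equations above were reconstructed from the surrounding prose, the piecewise weighting, the min-distance boundary function, and the toy inputs below are assumptions consistent with that prose, not the authors' released implementation.

```python
# Hypothetical sketch of boundary-weighted centrality and greedy selection.
import numpy as np

def boundary_dist(pos: int, n: int, alpha: float = 1.0) -> float:
    # Distance of a 1-indexed position from the nearest boundary of a unit of length n;
    # a smaller value means closer to a boundary, hence "more important".
    return min(pos, alpha * (n + 1 - pos))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def hiporank_style_scores(sections, lam1=0.0, lam2=1.0, alpha=1.0, mu1=0.5):
    """sections: list of 2-D arrays, one row per sentence embedding in that section."""
    sec_vecs = [sec.mean(axis=0) for sec in sections]   # section vector = mean of its sentences
    N = len(sections)
    sec_d = [boundary_dist(I + 1, N, alpha) for I in range(N)]
    scores = []
    for I, sec in enumerate(sections):
        n = len(sec)
        sent_d = [boundary_dist(i + 1, n, alpha) for i in range(n)]
        for i in range(n):
            intra = sum(
                (lam2 if sent_d[i] <= sent_d[j] else lam1) * cosine(sec[j], sec[i])
                for j in range(n) if j != i)
            inter = sum(
                (lam2 if sec_d[I] <= sec_d[J] else lam1) * cosine(sec_vecs[J], sec[i])
                for J in range(N) if J != I)
            scores.append(((I, i), mu1 * inter + intra))
    return sorted(scores, key=lambda kv: -kv[1])         # highest importance first

# A summary is then formed greedily: take (section, sentence) pairs in score order
# until the word budget L is exhausted (not shown here).
```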
A section's representation is calculated as the average of its sentences' representations. The similarity between sentences or sections is defined to be the cosine similarity between the distributed representations. Baselines We compare our approach with previous unsupervised and supervised models in extractive summarization. In addition, we also compare it with recent neural abstractive approaches for completeness. Evaluation Methods We evaluate our method with automatic evaluation metrics -ROUGE F1 scores (Lin, 2004). ROUGE-1 and ROUGE-2 compute unigram and bigram overlaps between system summaries and reference summaries, while ROUGE-L computes the longest common sub-sequence of the two. In addition, we design a human evaluation experiment (details in Section 5.2) to compare our model with the best unsupervised summarization model -PACSUM (Zheng and Lapata, 2019). As far as we know, we are the first to perform human evaluation 3 https://github.com/huggingface/transformers 4 https://github.com/mswellhao/PACSUM 5 https://github.com/UKPLab/sentence-transformers 6 http://bio.nlplab.org/word-vectors on the 2018 PubMed and arXiv datasets (Cohan et al., 2018). Human evaluation over long scientific articles require annotators to comprehend a full domain-specific long article and compare multiple summaries for quality evaluation. Due to the challenging nature of the task, previous papers choose to skip it and purely rely on automatic evaluations to judge the system performance. Automatic Evaluation Results Tables 3 and 4 summarize our automatic evaluation results on the PubMed and arXiv test sets with the best hyperparameters, as described in Section 4.2. The first blocks in Table 3,4 include the lead and the oracle baselines. The second and the third blocks in the tables present the results of supervised abstractive models, and of supervised extractive models. ROUGE-2 oracle summaries are used as gold standard summaries for training supervised extractive models, which likely contributes to their better ROUGE-2 scores. The last blocks compare previous unsupervised models with our approach. Our model outperforms all other unsupervised approaches by wide margins in terms of ROUGE-1,2,L F1 scores on both PubMed and arXiv datasets. We also show that PACSUM is biased towards selecting sentences that We also see a similar trend on arXiv (the plots with more details can be found in the appendix). appear at the beginning of a document while our method selects sentences in every section and near the article boundaries, similar to the oracle (Figure 2). This overlap with gold standard summaries suggests our use of discourse structure and hierarchy plays a significant role in our method's performance. Interestingly, despite limited access to only the validation set for hyperparameter tuning, our method achieves performance scores that are competitive with supervised models that require hundreds of thousands of training examples, outperforming almost all abstractive and extractive models on ROUGE-L. This suggests that our discourseaware unsupervised model is surprisingly effective at selecting salient sentences in long scientific document and perhaps should be used as a strong baseline to accessing the merits of supervised approaches for learning content beyond discourse. Human Evaluation We asked the human judges 7 to read the reference summary 8 (abstract) and present extracted sentences from different summarization systems in a random and anonymized order. 
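Before turning to the results, readers reproducing the automatic evaluation may find a minimal sketch of the ROUGE computation useful; the rouge-score package and the toy summary pair below are illustrative assumptions, not necessarily the toolkit used by the authors.

```python
# Hypothetical sketch: ROUGE-1/2/L F1 between a system summary and a reference abstract.
# Requires: pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "anxiety negatively influences attentional set-shifting and working memory in pd."
system = "the findings confirmed that anxiety negatively influences working memory in pd."
scores = scorer.score(reference, system)
for name, triple in scores.items():
    print(f"{name}: F1 = {triple.fmeasure:.3f}")
```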
The judges are asked to evaluate the system summary sentence according to two criteria: 1) content coverage (whether the presented sentence contains content from the abstract); and 2) importance (whether the presented sentence is important for a goal-oriented reader even if it isn't in the abstract (Lin and Hovy, 1997)). Table 5 presents the human evaluation results. HIPORANK is shown to be significantly better than PACSUM in both content coverage and importance (p = 0.002 and p = 0.007 with Mann-Whitney U tests, respectively). We also measure inter-rater reliability using Fleiss' κ (46.56 for content-coverage and 41.37 for importance). These results help sup- port that our method's use of hierarchy and discourse structure improves summarization quality. 6 Ablation Studies 6.1 Component-wise Analysis Table 6 presents the ablation study to assess the relative contributions of the boundary function and the hierarchical information. We keep all the hyperparameters unchanged with respect to the best setting in Section 4.2 and either vary the positional function or the hierarchical structures. We also found that the improvement of each components are stable across all the hyperparameters we tested (more details in the appendix). The first block of Table 6 reports the ablation results with different positional functions: no positional function (Erkan and Radev, 2004;Mihalcea and Tarau, 2004), lead bias function (Zheng and Lapata, 2019), and our proposed boundary function. We can see that using the wrong positional function hurts the model's performance when comparing no positional function with lead bias function. Our boundary positional function outperforms the lead or no positional functions significantly. The second block of Table 6 reports the results with or without the hierarchical structure. We observe that adding the hierarchical information results in a huge performance improvement. Effect of Embeddings To disentangle the effect of sentence representation, we show PubMed test set results of our best model with different sentence embeddings in Table 7. While pretrained transformer models finetuned on sentence similarity improve performance, HIPORANK still consistently outperforms previous state-of-the-art unsupervised models (Table 3) even with random embeddings. These results once again suggest that our method's improvement can indeed be attributed to the use of hierarchy and discourse structure, rather than to the the choice of representations. To further inspect our model's stability across different hyperparameter choices, we conducted fine-grained analysis across all different hyperparameter settings as below. Stability of Hyperparameters Stability w.r.t. Discourse Structure To evaluate the impact and the stability of discourse structure informed edge weighting (Section 3.3), we first compared our boundary positional function (Eqn. 1,3) to PACSUM's lead positional function, as well as the standard undirected approach over different hyperparameter settings. Figure 3 (a) shows that our method consistently performed better on the PubMed validation set, across different hyperparameters and embedding models outlined in Section 4.2. Stability w.r.t. Hierarchy We then evaluated the effect of adding hierarchy (Section 3.2) on top of our boundary positional function. In addition to decreasing the computational cost, Figure 3 (b) shows that incorporating hierarchy further improved ROUGE-L consistently across different hyperparameters and embedding models we tested. 
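The significance tests reported for the human evaluation above can be reproduced with standard tooling; the following sketch uses SciPy's Mann-Whitney U test on made-up ratings purely for illustration of the procedure, not the study's actual data.

```python
# Hypothetical sketch: comparing per-item human ratings of two systems.
from scipy.stats import mannwhitneyu

hiporank_ratings = [3, 4, 4, 5, 3, 4, 5, 4]   # illustrative content-coverage ratings
pacsum_ratings = [2, 3, 3, 4, 2, 3, 3, 2]
stat, p_value = mannwhitneyu(hiporank_ratings, pacsum_ratings, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```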
Application to other genres While our work here is focused on long scientific document summarization, we believe that our approach is promising for other genres of text, provided that the right discourse-aware biases are given to the model. Indeed, one version of our model with our proposed boundary function can be seen as a generalization of PACSUM, which achieves state-of-the-art performance on unsupervised summarization of news by exploiting the well known lead bias of news text (Zheng and Lapata, 2019;Grenander et al., 2019). We leave such explorations of adapting HIPORANK to other genres to future work. Conclusion We presented an unsupervised graph-based model for long scientific document summarization. The proposed approach augments the measure of sentence centrality by inserting directionality and hierarchy in the graph with boundary positional functions and hierarchical topic information grounded in discourse structure. Our simple unsupervised approach with rich discourse modelling outperforms previous unsupervised graph-based summarization models by wide margins and achieves comparable performance to state-of-the-art supervised neural models. This makes our model a lightweight but strong baseline for assessing the performance of expensive supervised approaches for long scientific document summarization. A.1 Different Hierarchical Structure Besides our proposed hierarchical model (Figure 4 (c), hierarchy-add) in the paper, we also proposed and experimented with another novel hierarchical graph by introducing section-section connections (Figure 4 (b), hierarchy-multiply). In this hierarchical setting, we multiply a sentence's sectional importance with its sentence importance (Eqn. (2)) to form the final centrality score: Our empirical results indicate the hierarchymultiply model always outperforms no-hierarchy models ( (Figure 4 (a)) but under performs hierarchy-add. Nevertheless, Table 8 shows that adding any hierarchical structure results in performance improvement by wide margins when compared to the no-hierarchy model. Figure 5 shows the sentence positions in source document for extractive summaries generated by different models on the arXiv validation set. We can again see that PACSUM is biased towards selecting sentences that appear at the beginning of a document while our method selects sentences in every section and near the article boundaries, similar to the oracle.
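As a small illustration of the two ways of injecting sectional importance discussed in this appendix, the following sketch contrasts an additive and a multiplicative combination; the toy scores and the weight mu are placeholders, and the exact form of the paper's Eqn. (2) is not reproduced.

```python
import numpy as np

def combine_centrality(sentence_scores, section_scores, mode="add", mu=1.0):
    # sentence_scores[k][i] is the centrality of sentence i inside section k;
    # section_scores[k] is the centrality of section k in the section-level graph.
    combined = []
    for k, sents in enumerate(sentence_scores):
        for s in sents:
            if mode == "add":         # hierarchy-add: sectional importance added in
                combined.append(s + mu * section_scores[k])
            else:                     # hierarchy-multiply: sectional importance as a factor
                combined.append(s * section_scores[k])
    return np.array(combined)

sent = [np.array([0.9, 0.4, 0.7]), np.array([0.8, 0.3])]   # toy per-section sentence scores
sect = np.array([1.2, 0.6])                                 # toy section scores
print(combine_centrality(sent, sect, "add"))
print(combine_centrality(sent, sect, "multiply"))
```

Either variant rewards sentences that sit in globally central sections, which is consistent with the observation that adding any hierarchical structure already improves over the no-hierarchy model.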
Uncovering hidden flows in physical networks Understanding the interactions among nodes in a complex network is of great importance, since they disclose how these nodes are cooperatively supporting the functioning of the network. Scientists have developed numerous methods to uncover the underlying adjacent physical connectivity based on measurements of functional quantities of the nodes states. Often, the physical connectivity, the adjacency matrix, is available. Yet, little is known about how this adjacent connectivity impacts on the"hidden"flows being exchanged between any two arbitrary nodes, after travelling longer non-adjacent paths. In this Letter, we show that hidden physical flows in conservative flow networks, a quantity that is usually inaccessible to measurements, can be determined by the interchange of physical flows between any pair of adjacent nodes. Our approach applies to steady or dynamic state of either linear or non-linear complex networks that can be modelled by conservative flow networks, such as gas supply networks, water supply networks and power grids. In this Letter, we avail from the flow tracing method, known in electrical engineering [19][20][21][22][23][24][25][26][27][28], to calculate the hidden flow between any two nodes, by only requiring information about the adjacent flows between any two connected nodes. This work provides a rigorous way to calculate hidden flows, which in turn enables one to gauge the non-adjacent interactions among nodes in a network, for networks whose non-adjacent nodes are far apart. The applicability of the method is enormous since flow networks can be used as simple models of flow behaviour to many complex networks, such as transportation networks, water supply networks and power grids. We extend the method to provide an immediate picture of how nodes interact non-adjacently in non-linear networks by constructing linear equivalent models to these networks. Flow networks describe a system that exchanges physical flows. Physical flows are usually recognised as the transference of a physical entity (such as the electric charge, a liquid, a solid, a gas volume, cars, airplanes, air, etc) from one node to another in a giving unit of time. But they can also be, in a more general sense, probabilities or the information rate (in bits/s). In a flow network, there are source nodes that input physical flows (a generator in a power-grid, for example) and sink nodes from which the physical flows leave the network (a consumer in a power-grid, for example). Flow networks can have several configurations, and for each configuration there are several scientific challenges. This work deals with flow networks that are conservative (i.e., total inflow arriving in a node is equal to total outflow leaving it) and whose rule of flow exchange is linear, such as is the case of a direct current electric network. Moreover, the edges carrying the flows are uncapacitated, allowing any arbitrary flow intensity. A remarkable challenge in the area of flow networks is to trace the flow between two non-adjacent nodes (or edges). In lieu of studying flows provided by adjacent connections, tracing methods enable one to calculate the amount of flow exchanged from one node (or edge) to another node (or edge), after travelling through several different paths in the network, a quantity being referred in this work as the "hidden" flow. This computationally doable complex task in small flow networks becomes impractical in larger complex flow networks. 
The present work reduces this complicated tracing mathematp-1 ical process into a trivial manipulation of the so called extended incidence matrix K that can be easily calculated from information on the flows along the edges. We then demonstrate that the hidden flows between any arbitrary pair of nodes can be calculated by our result condensed in Eq. (14). This result, rigorously derived for directed flow networks (preferential direction of flows) and to networks without closed looping flows (where flows circle around a closed path loop) was also extended to the treatment of networks whose flows are undirected and networks that present closed loops. Finally, we also show how to extend this result to understand the non-adjacent interactions between any pair of nodes in more general dynamical networks, such as phase oscillator networks, whose behaviour can be well represented by a conservative flow network. Flow Networks. -A flow network is a digraph, G(V, E), where V and E are the sets of nodes and edges, respectively. A flow network normally contains three types of nodes: (i) the source node [e.g., node 1 or 2 in Fig. 1 (a)], which has a source injecting flow into the network; (ii) the sink node [e.g., node 3 or 4 in Fig. 1 (a)], which has a sink taking flow away from the network; (iii) the junction node [e.g., node 5 in Fig. 1 (a)], which distributes the flow. We define f ij to be the adjacent flow, or simply the flow which is the measurable flow coming from nodes i to j through edge {i, j} ∈ E. f ij = 0 if nodes i and j are not physically connected. We begin our analysis with the conservative flow networks [29] satisfying: (i) f ij = −f ji ; (ii) j∈V f ij = 0, where node i is a junction node; (iii) there is no loop flow representing a closed path in a flow network, where a loop flow is shown in Fig. 1 (b); (iv) every node must be connected to at least one other node in the network. A path in a digraph G from node i to node j, j} j, is an alternating sequence of distinct nodes and edges in which the directions of all edges must coincide with their original directions in G. The hidden flow, f i→j , is defined to be the summation of the flows going from node i to j through all possible paths from node i to j. Normally, we can measure or calculate the adjacent flows in a flow network, but it is not easy to obtain the hidden flows, a quantity typically not accessible through measurements. We find the calculation of hidden flows based on the information of adjacent flows, in a conservative flow network, by the "flow tracing" method. Define the node-net exchanging flow at node i by If node i is a source node, we have f i > 0; we denote f i by f s i as the amount of the source flow being injected into the network from a source at node i. We set f s i = 0 if node i is a sink node or a junction node. If node i is a sink node we have f i < 0; we denote f t i = −f i > 0 to indicate the amount of the sink flow leaving the network from the sink at node i. We set f t i = 0 if node i is a source node or a junction node. Assume there is a positive flow from node i to node j, denoted by f ij > 0. We use f out ij to indicate f ij as an outflow from node i arriving at node j, and f in ij to represent f ij as an inflow at node j coming from node i. f ij can be positive, negative or zero in a flow network. However, we restrict any outflow or inflow at a node to be a non-negative number. This means that, if f ij < 0, we force f out ij and f in ij to be zeros. 
Analogously, f ij < 0 means f ji > 0, we have f out ji > 0 to denote the outflow from node j to node i and f in ji > 0 to be the inflow at node i from node j. Define the total inflow at node i by and the total outflow at node i by In a conservative flow network, the total inflow of a node is equal to its total outflow, i.e., f out i = f in i . We assume f out i = f in i > 0, ∀i, meaning that each node in a flow network must exchange flow with other nodes, i.e., no node is isolated. Flow tracing by proportional sharing principle. -The proportional sharing principle (PSP) [24,30] states that for an arbitrary node, a, with m inflows and n outflows (Fig. 2) in a conservative flow network, (i) the outflow on each outflow edge is proportionally fed by all inflows, and (ii) by assuming that node i injects a flow f in ia to node a, and node j takes a flow f out aj out of node a, we have that the node-to-node hidden flow from node i to node j via node a is calculated by p-2 Title or by Equations (4) and (5) result in the same value of f i→j , since f out a = f in a . Equation (4) represents the downstream flow tracing method, where we start tracing the hidden flow from a source node i to a sink node j, by using the percentage, f out aj /f out a , to indicate the percentage of f in ia that goes to j. Equation (5) denotes the upstream flow tracing method, where we trace the flow from a sink node j to a source node i, by knowing the proportion of f out aj is provided by f in ia . The percentage f out aj /f out a in Eq. (4) and f in ia /f in a in Eq. (5) are related to the flows on edges. They are similar to the probability of jumping from a node to one of its neighbours in a biased random walk process [31][32][33], where a similar percentage is related to the weight of edges. We only deal with the downstream flow tracing in the Letter and explain the upstream flow tracing in the Supplementary Material [34]. Define the downstream coefficient at node a for the outflow f out aj by to indicate the proportion of the outflow at edge {a, j} to the total outflow at node a. Define the upstream coefficient at node a for the inflow f in ia by denoting the proportion of the inflow at edge {i, a} to the total inflow at node a. Then the calculation of f i→j can be simply expressed by f i→j = f in ia κ d aj or f i→j = f out aj κ u ai . Define the sink proportion and source proportion at node a by respectively, where the sink proportion, ι t a , indicates the proportion of the sink flow to the total outflow at node a, and the source proportion, ι s a , indicates the proportion of the source flow to the total inflow at node a. By defining the sink proportion and source proportion, we are now able to calculate the source-to-sink hidden flow from a source at node i to a sink at node j denoted by f si→tj . From Eq. (2), we know that f s i is a part of f in i , where f s i is the source flow at node i. From Eq. (8), we know the proportion of f s i to f in i . According to the PSP, we can then calculate the source-to-sink hidden flow by It is possible to trace (calculate) the hidden flows from any arbitrary pair of nodes in a flow network using either the downstream or the upstream approach. However, all the paths connecting a pair of nodes must be considered. In particular, the hidden flow from two adjacent nodes will include the flow exchanged along the adjacent connection and all the flows travelling along other longer paths connecting these two adjacent nodes. 
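A minimal sketch of the proportional sharing principle for a single intermediate node, following the downstream form of Eq. (4); the node labels and flow values are arbitrary.

```python
def hidden_flow_through(a_inflows, a_outflows, i, j):
    # Hidden flow from node i to node j that passes through an intermediate node a.
    # a_inflows[i]  : inflow f_in_{ia} arriving at a from node i
    # a_outflows[j] : outflow f_out_{aj} leaving a towards node j
    # Downstream form of Eq. (4): f_{i->j} = f_in_{ia} * f_out_{aj} / f_out_a.
    f_out_a = sum(a_outflows.values())   # total outflow at a (= total inflow, by conservation)
    return a_inflows[i] * a_outflows[j] / f_out_a

# Toy junction node a with two inflows and two outflows (conservative: 6 + 4 = 7 + 3).
inflows = {"i1": 6.0, "i2": 4.0}
outflows = {"j1": 7.0, "j2": 3.0}
print(hidden_flow_through(inflows, outflows, "i1", "j1"))   # 6 * 7 / 10 = 4.2
print(sum(hidden_flow_through(inflows, outflows, i, j)      # shares add back up to the
          for i in inflows for j in outflows))              # total flow through a: 10.0
```

By construction, the shares over all inflow-outflow pairs add back up to the total flow through the node, which is what keeps the principle consistent with conservation.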
Suppose one wants to calculate the hidden flow f i→j from two nonadjacent nodes i and j, and there are two possible paths, P 1 (i, j) = i{i, k}{k, j}j and P 2 (i, j) = i{i, l}{l, g}{g, j}j, P 1 with length 2 and P 2 with length 3. Each path produces a hidden flow, f (1) i→j and f (2) i→j , respectively. The total hidden flow from i to j is thus calculated using This process is feasible when dealing with small flow networks, as illustrated in the Supplementary Material [34], where we show how to trace hidden electric current flows in a direct current (DC) electric network. But it becomes impractical when dealing with large networks, for which the number of paths carrying flows can grow exponentially fast with the size of the network. To circumvent this challenging calculation, the use of the extended incidence matrix, K, proposed in Refs. [25][26][27], is taken forward. Flow tracing by extended incidence matrix. -The downstream extended incidence matrix, K, in a flow network with N nodes is an N × N dimensional matrix, defined by From Eqs. (9) and (10), we have where K is an invertible matrix [25,27,28], thus, F out = K −1 F s , implying that, K −1 ij being an entry (i th row, j th column) of the matrix K −1 . Equation (12) indicates that the outflow of node i, f out i , is fed by every source f s j . More specifically, K −1 ij represents the proportion of the source inflow in the source node j that goes to node i. Knowing that the source-to-node hidden flow from source node j to node i is given by f sj→i = ι s j f j→i , Eq. (13) thus implies that for a source node j with ι s j = 0, C ij f in j represents the node-to-node hidden flow from node j to node i, i.e., f j→i = C ij f in j . The tracing of flows from source to nodes, previously known in the literature, only applied to source nodes. To extend it to any other general situation, including the tracing of flows from and to edges, sinks and junction nodes, we introduce an equivalence principle. We treat any sink or junction node as a hypothetical source node, without altering the original network topology and flows. If node j is a sink or junction node with a total inflow f in j > 0 and ι s j = 0, we treat node j as a hypothetical source node with f s j = f in j > 0, where the hypothetical source takes the place of all the edges injecting flows into j. By this treatment, we can hypothetically treat node j as a source node with ι s j = f s j /f in j = 1, in Eq. (13), such that the node-to-node hidden flow from node j to node i can also be calculated by Thus, from our analysis, C ij = K −1 ij is a donwstream contribution factor indicating how much hidden flow goes from node j to i, i.e., f j→i = C ij f in j for any pair of nodes. Now, we show how non-adjacent hidden flows can be traced in conservative flow networks. Notice for networks whose non-adjacency nodes are far apart from each other, the hidden flows can gauge how non-adjacent interactions emerge in the studied system. Let i, j, m, n, p, q be different nodes in a conservative flow network, where node i has a source, node j has a sink, nodes m, n are connected by edge {m, n} with f mn > 0, and nodes p, q are connected by edge {p, q} with f pq > 0. 
The non-adjacent interaction includes: (i) the node-to-node hidden flow from node i to j is f i→j = C ji f in i ; (ii) the source-to-node hidden flow from source node i to node j is f si→j = ι s i f i→j ; (iii) the node-to-sink hidden flow from source node i to sink node j is f i→tj = f i→j ι t j ; (iv) the source-to-sink hidden flow from node i to j is f si→tj = ι s i f i→j ι t j ; (v) the node-to-edge hidden flow from node i to edge {m, n} is f i→{m,n} = f i→m · κ d mn ; (vi) the edge-to-node hidden flow from edge {m, n} to node j is f {m,n}→j = κ u nm · f n→j ; and (vii) the edge-to-edge hidden flow from edge {p, q} to {m, n} is f {p,q}→{m,n} = κ u qp · f q→m · κ d mn . To illustrate the calculation of these hidden flows, as well as the calculation of the matrices involved in it, in the Supplementary Material [34] we trace the flows in an electric network using our downstream extended incidence matrix approach. Extension to flow networks with closed loops and with undirected flows. -Loops: If the closed loop (or loops) is inside a larger network, one needs first to identify the existence of a loop. A closed loop at the node i with a length P exists in a network if [A P ] ii > 0, where [A P ] ii represents the term ii in the power to P of the adjacency matrix of the network. The source node of the loop is any node receiving input flow, and the sink node is the one containing an edge with an outflow, and whose path length connecting it to the source node is the longest. We consider a network with 4 nodes, with a loop flow as in Fig. 1 Fig. 1(b), the loop is formed by 1{1, 4}{4, 3}{3, 2}{2, 1}1. To break-up the loop, one firstly choose a source and a sink node, where flows enter and leave the closed loop, respectively. Node 1 is the only source node. The sink node to be chosen must be the one whose length of a direct path connecting it to the source node is the longest one. We choose node 3 as the sink node. Then, one needs to determine all the directed paths connecting the source node (node 1) and to the sink node (node 3), and all the directed paths connecting the sink to the source nodes. Among all paths, one takes only the paths that have the same flow directions as the original network N . These directed paths form the subnetworks whose net flow represents the original network flow and from which the hidden flows are calculated. We show, in Fig. 3, the subnetworks of the network in Fig. 1(b). Panel (a1) represents a directed path and its flows from node 1 to node 3. Panels (a2) and (a3), with the same directed path subnetwork, show the directed paths connecting nodes 3 to 1 . Notice that a negative source and sink, in nodes 1 and 3, respectively, in panel (a2), is equivalent to a positive sink and source nodes, respectively, as represented in panel (a3). In panels (b1)-(b3), we show another practical way to determine the break up of the network with a closed loop. Once a loop, and a source and a sink nodes, are identified, we remove it from the network. Panel (b1) is the subnetwork after the loop removal. The closed loop is formed by merging the flows represented in panels (b2) and (b3), and it has a constant flow of 1 unit. One restores the original network by adding the subnetworks in panel (a1) and (a3), or by adding the subnetworks in panels (b1), (b2), and (b3). Calculating hidden flows of the original network needs to take into consideration of hidden flows in all subnetworks. One subnetp-4 Title work [panel (a1)], let us call it N 1, is formed by the nodes 1, 3, and 4. 
Node 2 is absent and, therefore, to preserve edge flows one is required to make f s 1 (N 1) = f s 1 + f 21 (N ) and f t 3 (N 1) = f t 3 + f 32 (N ). From this network, f 1→4 = 5, f s1→t4 = 3, f s1→t3 = 2. The other network [panel (a2)], let us call it N 2, is formed by the nodes 1, 2, and 3, so node 4 is now absent and therefore, to preserve edge flows we are required to make f s . These equations lead to f s 1 (N 2) < 0 and f t 3 (N 2) < 0, whose flows are indicated in panel (a2). The hidden flow from node 2 and 4 is zero, since no subnetworks contribute to a hidden flow from node 2 to 4. Undirected flow networks: Similarly, our method can also be applied to an undirected flow network if the network can be split into two independent unidirectional networks. For example, under the assumption that all traffic roads are bidirectional, we can separate the transportation network of a city into two networks. One network includes all the left-hand roads and the other one contains all the right-hand roads. Thus, both separated networks become unidirectional networks. Non-adjacent interaction in non-linear networks. -Next, we extend our tracing hidden flow approach to study non-linear systems by constructing linear model analogous to the non-linear networks. Let the equatioṅ indicate a dynamic scheme describing the behaviour of N coupled nodes, where x i is the dynamical variable of each node, S(x i ) is the isolated dynamic function, L ij is the element of the Laplacian matrix, and H(x i , x j ) is an arbitrary coupled dynamic function. We treat the system as a flow network by interpreting f i (t) = S(x i ) −ẋ i as the node-net exchanging flow at node i. The value and sign of f i (t) may change over time. If f i (t) > 0 (or f i (t) < 0), we treat node i as a source (or sink) node at time t and the source (or sink) flow is f s ). If f i (t) = 0, we treat node i as a junction node at time t. Let f ij (t) = L ij H(x i , x j ) be the adjacent flow from node i to node j. If f ij (t) > 0, we have f out ij (t) > 0 as the outflow from node i and f in ij (t) > 0 as the inflow at node j at time t. If f ij (t) < 0, we have f out ji (t) > 0 as the outflow from node j and f in ji (t) > 0 as the inflow at node i at time t. By doing this interpretation, we are constructing an equivalent linear conservative flow network that behaves in the same way as the non-linear network described by Eq. (15). This enables us to calculate the non-adjacent interactions in the equivalent linear flow network which informs us about the non-adjacent interactions in the original non-linear network. We consider a revised Kuramoto model [35][36][37] as an example, which is given bẏ where K is the coupling strength, L ij is the entry of the Laplacian matrix, θ i and ω i indicate the phase angle and natural frequency in a rotating frame, respectively. In this rotating frame,θ i =θ j = 0, ∀i = j, when the oscillators emerge into frequency synchronisation (FS) for a large enough K [38]. In the FS state, all the node-net exchanging flows f i = ω i −θ i = ω i and all the adjacent flows f ij = KL ij sin(θ i − θ j ) are constants, since sin(θ i − θ j ) are constants. Let α ij = |f ij |/ max{|f ij | : ∀i, j} be a normalised variable in [0,1] indicating the adjacent interaction strength between oscillator i and j, where max{|f ij | : ∀i, j} is the maximum of all absolute values of adjacent flows. Since f ij = −f ji , we have α ji = α ij . Every hidden flow is traced by considering that flows are directed. 
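As an aside, the reinterpretation of the oscillator dynamics as a conservative flow network can be checked numerically. The sketch below uses the adjacency (rather than Laplacian) form of the coupling so that the sign conventions are explicit; the network, natural frequencies and phases are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 6, 2.0
A = (rng.random((N, N)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # undirected coupling graph
omega = rng.normal(size=N)
theta = rng.uniform(0, 2 * np.pi, size=N)

# One evaluation of the (adjacency-form) Kuramoto dynamics and its flow-network reading.
theta_dot = omega + K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
f_adj = K * A * np.sin(theta[:, None] - theta[None, :])   # adjacent flows f_ij
f_node = omega - theta_dot                                # node-net exchanging flows f_i

print(np.allclose(f_adj.sum(axis=1), f_node))   # True: each node's adjacent flows sum to f_i
print(np.allclose(f_adj, -f_adj.T))             # True: antisymmetry f_ij = -f_ji
```

Both checks print True: each node's adjacent flows sum to its node-net exchanging flow, and the flow matrix is antisymmetric. Every hidden flow is then traced treating these flows as directed.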
This implies that all the calculated hidden flows are non-negative and at least one of f i→j and f j→i is 0. We let β ij = β ji = max{f i→j , f j→i }/ max{f i→j : ∀i, j} be the nonadjacent interaction strength between oscillator i and j, where max{f i→j , f j→i } is the non-zero one between f i→j and f j→i , and max{f i→j : ∀i, j} is the maximum of all hidden flows. This definition of the non-adjacent interaction strength allows us to compare α ij and β ij for the same pair of nodes in a network. We construct three types of networks with 25 nodes, namely the Erdös-Rényi (ER) [1,39], Watts-Strogatz (WS) [40] and Barabási-Albert (BA) models [41]. The dynamic behaviour of the nodes in these networks follows Eq. (16). Figure 4 shows the comparison of the adjacent interactions and the non-adjacent interactions when the oscillators emerge into FS with a large enough K. Figures 4 (a), (b) and (c) show the adjacent interaction strengths, α ij , for ER, WS and BA networks, respectively. Figures 4 (d), (e) and (f) demonstrate the non-adjacent interaction strengths, β ij , for ER, WS and BA networks, respectively. Figure 4 (d) exposes some hidden interactions that Fig. 4 p-5 (a) does not show to exist in an ER network. By comparing Figs. 4 (b) and (e), we see that a randomly rewired edge in a WS network not only produces interaction between the two adjacent nodes connected by this edge, but also creates functional clusters among nodes close to the two adjacent nodes. So, complex systems can in fact be better connected than previously thought. We constructed the BA network by assigning smaller labels to nodes with larger degrees. Both Figs. 4 (c) and (f) illustrate the strong interactions among the nodes with large degrees (small labels). Figure 4 (c) shows that the interactions between unconnected nodes with small degrees (large labels) are weak or inexistent, though, such interactions are revealed in Fig. 4 (f). Through this comparison, we understand that two nodes in a network may strongly interact with each other even if they are not connected by an edge. Figure 5 shows the simulations results of the adjacent interaction strength and non-adjacent interaction strength for these networks when FS is not present. Final results are taken by averaging the results of 100 timepoints that are uniformly chosen in the time scale [10,20], i.e., α ij = 100 k α ij (t k )/100 and β ij = 100 k β ij (t k )/100, where α ij (t k ) and β ij (t k ) are the values of α ij and β ij at the k th time-point. The dynamic behaviour of the oscillators in these networks is described by the Kuramoto model by assigning a small coupling strength, such that the oscillators are in an incoherent state. Comparing the results in Fig. 5 with that when FS is present, we find that those pairs of nodes which are not interacting through hidden flows when FS is not present, also present no evident non-adjacency interactions when FS is present. This suggests that the existence of nonadjacent interaction between a pair of nodes strongly depends on the network topological features of the network rather than the coupling strength. Conclusion. -In this Letter, we introduced the proportional sharing principle and the extended incidence matrix to calculate the hidden flows in flow networks, and further extended this approach to trace the non-adjacent hidden flows in non-linear complex systems which can analogously be represented by linear flow networks. 
This allows us to understand the non-adjacency interactions among nodes either under a steady state (e.g., when FS is present in the Kuramoto model) or a dynamic state (e.g., when FS is not present in the Kuramoto model) in such a complex system. Our study illustrated that the nodes in a network not only interacts with their neighbours, but can also strongly influence those who are not directly connected to them. By comparing the results of the non-adjacent study for the Kuramoto model when FS is present and that when FS is not present for different topological networks, we concluded that the emergence of non-adjacent interaction between a pair of nodes strongly depends on the topological features of the networks rather than the coupling strength between nodes. We have extended our analysis to flow networks that present closed loops and for those that present undirected flows. The solution for these challenging problems is to break the network into subnetworks that only contain directed flows. The method can also be applied to weighted networks, as long as the weighted network can be modelled as a conservative flow network. This work opens up a new area of research into nonadjacent interactions in complex networks, facilitating and enabling research that aims at unravelling complex behaviour as a function of the network topology. There is also great potential to link this work to other works in the area of complex networks, such as the link prediction problem [42], and to the study of information and energy transmission in complex networks [43][44][45]. These potenp-6 Title tial extentions will further widen the applicability of the method in the real world. It is worth mentioning that our work assumed at the outset that the adjacency matrix of the system as well as the adjacency physical flows is known a priori. Therefore, works such as those in Ref. [42] predicting the existence of a physical link should be used prior to our method. * * * Chengwei Wang is supported by a studentship funded by the College of Physical Sciences, University of Aberdeen. Uncovering hidden flows in physical networks Example of Flow Tracing in a DC Network. -We build up a MATLAB model to simulate a direct current (DC) network shown in Fig. 6 to illustrate the flow tracing process. The flow quantity f is given by the electric current I in this model. Nodes 1 and 2 are two nodes with current sources where I s 1 = 3A and I s 2 = 5A, respectively. The resistances of resistors are randomly chosen within the set of integer numbers [1,10], shown in Tab. 1. The sink flow leaving from the sink nodes 9 and 10 are measured by the current scopes as I t 9 = 4.51A and I t 10 = 3.49A. The current directions are shown in Fig. 7. Next, we show how to calculate the source-to-sink hidden currents from the current source I s 1 and I s 2 to the sink I t 9 and I t 10 by different methods. Resistor Using the Downstream Flow Tracing Method. As shown in Fig. 7, there are two paths from node 1 to node 9, which are P 1 (1, 9) = 1 {1, 3} 3 {3, 6} 6 {6, 9} 9, and P 2 (1, 9) = 1 {1, 4} 4 {4, 7} 7 {7, 9} 9.
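The DC example above is developed in detail in the Supplementary Material. As a self-contained illustration of the matrix formulation, the sketch below assembles, for a four-node toy network, a matrix that plays the role of the downstream extended incidence matrix; the exact entries of K in Eqs. (9)-(11) are not reproduced in this excerpt, so taking K = I - D^T, with D holding the downstream coefficients, is an assumption of this sketch.

```python
import numpy as np

# Toy conservative flow network: node 0 injects 10 units; they split 6/4 towards
# nodes 1 and 2 and recombine at the sink node 3.  Edge flows f[i, j] >= 0.
f = np.zeros((4, 4))
f[0, 1], f[0, 2], f[1, 3], f[2, 3] = 6.0, 4.0, 6.0, 4.0
f_source = np.array([10.0, 0.0, 0.0, 0.0])

f_out = f.sum(axis=1) + np.array([0.0, 0.0, 0.0, 10.0])   # total outflow (sink flow counted as outflow)
D = f / f_out[:, None]                                    # downstream coefficients kappa^d_{ij}
K = np.eye(4) - D.T                                       # matrix playing the role of the paper's K
C = np.linalg.inv(K)                                      # contribution factors C_ij = K^{-1}_ij

f_in = f_out                                              # conservation: total inflow = total outflow
print(np.allclose(C @ f_source, f_out))                   # F_out = K^{-1} F_s holds
print(C[3, 0] * f_in[0])                                  # hidden flow f_{0 -> 3} = 10.0
print(C[1, 0] * f_in[0], C[2, 0] * f_in[0])               # f_{0 -> 1} = 6.0, f_{0 -> 2} = 4.0
```

In this toy network the contribution factors recover the expected shares: all 10 units injected at node 0 reach the sink, 6 of them via node 1 and 4 via node 2.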
Two novel feature selection algorithms based on crowding distance

In this paper, two novel algorithms for feature selection are proposed. The first is a filter method, while the second is a wrapper method. Both proposed algorithms use the crowding distance from multiobjective optimization as a metric to sort the features: the less crowded features have a greater effect on the target attribute (class). The experimental results show the effectiveness and robustness of the proposed algorithms.

Introduction. The feature selection problem is a well-known problem in the data mining field. It is used in classification methods to reduce the number of features in datasets. Formally, a feature selection procedure selects a subset of P significant features from a whole set of N input features, with P < N, preserving a similar or better accuracy compared to the entire set of N features [1]. It should be noted that feature selection is different from dimensionality reduction: dimensionality reduction methods create new features from combinations of attributes, whereas feature selection methods use a subset of the existing attributes without changing them. Several methods have been developed to solve the feature selection problem; they can be grouped into three general classes: filter methods, wrapper methods and embedded methods. In filter feature selection methods, each candidate feature is weighted and ranked according to a defined feature selection measure, and the selection procedure consists of taking the best k features. Filter methods select features independently of the classifier used. Examples of filter methods include the Pearson Correlation Coefficient [2] and Relief feature selection [3]. Unlike filter methods, wrapper methods treat feature selection as an optimization problem, where an objective function based on a predictive model is used to assess the accuracy of each selected subset of features. In this class, we find complete search methods such as branch and bound [4], and incomplete search methods such as local search, greedy search, or metaheuristics [5]. Generally, wrapper methods give better results than filter methods. However, two main drawbacks limit their use: the first is the runtime complexity required for the selection; the second is that their performance depends on the classifier algorithm used as the objective function. Finally, embedded methods integrate feature selection into the construction of the prediction model. In wrapper-type selection methods, the classification process is divided into two parts: a learning stage and a validation stage to validate the selected subset of features. Embedded methods, on the other hand, can use all the learning examples to build the system, which is an advantage that can improve the results. Another advantage of these methods is their speed compared to wrapper approaches, because they avoid restarting the classifier for each subset of features. Examples of embedded algorithms are LASSO, Elastic Net and Ridge Regression [6]. As mentioned, filter methods depend greatly on the metric used to assess each feature. In this work, we propose a new metric for ordering the features based on the well-known crowding distance used in multiobjective optimization [7]. The features are handled as points in a multiobjective space where the objectives are the samples.
The crowding distances of all the features are sorted in descending order; consequently, the selected features appear at the top of the feature ranking. The proposed algorithms are assessed on well-known datasets and compared against well-known algorithms, and the experimental results demonstrate their effectiveness. In the following, we explain the proposed algorithms and the experimental study in more detail.

The proposed feature selection methods based on crowding distance. In this work, two feature selection algorithms based on crowding distance are proposed. The first algorithm is a filter method, while the second is a wrapper method. Both algorithms use the crowding distance to sort the features. The use of the descending crowding distance is motivated by the assumption that the most isolated features have a great impact on the target feature (class), while two close (or crowded) features A and B have a similar impact on the target feature. Therefore, it is preferable to select the most isolated features before the most crowded ones. The crowding distance is adapted as follows. First, the features are sorted according to each sample Sm, where a sample Sm plays the role of an objective function in multiobjective optimization problems, and the vectors of sorted indices Im are found. The crowding distance CD of each feature is then computed from these sorted values, where Im(i) is the i-th index from the m-th vector of indices, and Sm,max and Sm,min are the maximal and minimal values of the m-th sample data, respectively. The value CDm for the two extreme features is set to infinity. Geometrically, the crowding distance is the average side length of the cuboid defined by the features surrounding a particular feature (see Fig. 1). The less crowded features, with large values of CD, are the preferred features.

Wrapper algorithm based on crowding distance. The wrapper algorithm is based on a greedy method where, at each step, one feature is added to the selected features; if the accuracy of the classifier is improved, the feature is kept, otherwise it is discarded. The choice of the feature to add is given by the ordering of the features computed by the crowding distance. At each step, the fitness of the current solution is computed by a given classifier. Further termination criteria, such as an accuracy threshold, can be added to stop the algorithm early. The outline of the proposed algorithm is given in Figure 3.

Implementation and results. The proposed filter and wrapper feature selection algorithms based on crowding distance are implemented in the MATLAB R2016a environment, and all experiments were carried out on a Windows 10 64-bit computer with an Intel i3 (2.3 GHz) processor and 4 GB of RAM. To evaluate the proposed algorithms, six popular datasets were used; their details are described in Table 1. Due to the randomness of k-fold cross-validation, each algorithm is executed 30 times per dataset, and the best, mean, standard deviation, and worst results are reported. For all the algorithms, the multiclass SVM classifier is used with k-fold = 5. For the filter algorithms, the number of selected features is fixed to 10 for the Ionosphere, Breast, Heart, and Sonar datasets, while for large datasets such as the Ovarian and Colon datasets, the number of selected features is fixed to 150. Table 2 shows the experimental results found by our filter algorithm, called Filter Crowded Features.
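A minimal sketch of the filter criterion just described: since the paper's exact equations are not reproduced above, this follows the standard NSGA-II crowding-distance formula, with each sample acting as one objective and the two extreme features per sample set to infinite distance.

```python
import numpy as np

def crowding_distance(X):
    # X has one row per sample and one column per feature; each sample plays the
    # role of an objective, so each feature is a point in "sample space".
    n_samples, n_features = X.shape
    cd = np.zeros(n_features)
    for m in range(n_samples):
        order = np.argsort(X[m])                 # features sorted by their value on sample m
        span = X[m, order[-1]] - X[m, order[0]]
        cd[order[0]] = cd[order[-1]] = np.inf    # the two extreme features get infinite distance
        if span == 0:
            continue
        for pos in range(1, n_features - 1):
            cd[order[pos]] += (X[m, order[pos + 1]] - X[m, order[pos - 1]]) / span
    return cd

rng = np.random.default_rng(3)
X = rng.random((4, 12))                          # 4 samples (objectives) x 12 features
ranking = np.argsort(-crowding_distance(X))      # least crowded (most isolated) features first
print(ranking[:5])                               # indices of the top-5 ranked features
```

Features that take extreme or isolated values across the samples therefore rise to the top of the ranking, which is the behaviour the filter method relies on.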
Moreover, our algorithm is compared to the most popular filter algorithms: the Pearson Correlation Coefficient [2], Relief feature selection [3] and Variance Feature Selection [8].

Conclusion. In this paper, two algorithms for feature selection are presented. The main feature of the proposed algorithms is the use of the crowding distance to order the features from the most important to the least important. The first algorithm is a filter method, whereas the second is a wrapper method.
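To make the wrapper procedure described above concrete, here is a minimal sketch of the greedy loop; the nearest-centroid classifier, the identity ranking and the synthetic data are stand-ins (the paper uses a multiclass SVM with 5-fold cross-validation and the crowding-distance ordering).

```python
import numpy as np

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te, cols):
    # Tiny stand-in classifier so the sketch stays dependency-free.
    cols = list(cols)
    centroids = {c: X_tr[y_tr == c][:, cols].mean(axis=0) for c in np.unique(y_tr)}
    preds = [min(centroids, key=lambda c: np.linalg.norm(x[cols] - centroids[c])) for x in X_te]
    return float(np.mean(np.array(preds) == y_te))

def greedy_wrapper(X_tr, y_tr, X_te, y_te, ranking):
    # Greedy wrapper: walk the features in ranked order, keep one only if accuracy improves.
    selected, best = [], 0.0
    for feat in ranking:
        acc = nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te, selected + [feat])
        if acc > best:
            selected.append(feat)
            best = acc
    return selected, best

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=120)
X = rng.normal(size=(120, 10))
X[:, 0] += 2.0 * y                                # feature 0 carries the class signal
tr, te = slice(0, 80), slice(80, 120)
ranking = list(range(10))                         # stand-in for the crowding-distance ordering
print(greedy_wrapper(X[tr], y[tr], X[te], y[te], ranking))
```

On this synthetic data the informative feature 0 is typically kept, and the remaining features are retained only if they happen to raise the held-out accuracy.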
Pain workshop ESMO: Africa (response) The COVID-19 pandemic is shining a light onto a tragic paradox. Pain, and the suffering people experience as a result of pain, is one of the greatest neglected global health challenges we face while the means to alleviate pain are some of the least costly and most cost effective global health interventions.1 We live in a world where the strategies to manage pain have become more significant than the pain itself, and a world where privilege and postcode is more likely to determine when and how pain is alleviated. The opioid epidemic in the USA has put off kilter a decade of work to establish essential medicines for palliative care.2 European Society for Medical Oncology (ESMO) pain guidelines, developed by an expert working group for non-experts, are relevant in each African country.3 However, these guidelines are only as effective as the capacity within each country for implementation. We will review the barriers and strategies for addressing challenges which are even more pertinent as we reflect on the impact on the COVID-19 pandemic on already stretched health systems. Much is already known and widely acknowledged about the barriers. Myths, misconceptions and fears about opioids such as morphine remain prevalent among policy makers, healthcare workers and the community. This opiophobia includes beliefs that all morphine use is addictive, that side … The COVID-19 pandemic is shining a light onto a tragic paradox. Pain, and the suffering people experience as a result of pain, is one of the greatest neglected global health challenges we face while the means to alleviate pain are some of the least costly and most cost effective global health interventions. 1 We live in a world where the strategies to manage pain have become more significant than the pain itself, and a world where privilege and postcode is more likely to determine when and how pain is alleviated. The opioid epidemic in the USA has put off kilter a decade of work to establish essential medicines for palliative care. 2 European Society for Medical Oncology (ESMO) pain guidelines, developed by an expert working group for non-experts, are relevant in each African country. 3 However, these guidelines are only as effective as the capacity within each country for implementation. We will review the barriers and strategies for addressing challenges which are even more pertinent as we reflect on the impact on the COVID-19 pandemic on already stretched health systems. Much is already known and widely acknowledged about the barriers. Myths, misconceptions and fears about opioids such as morphine remain prevalent among policy makers, healthcare workers and the community. This opiophobia includes beliefs that all morphine use is addictive, that side effects will be dangerous, or that it should only be used for dying patients with cancer. Most healthcare workers are not trained in the safe use of opioids in their basic or in-service training. Regulatory frameworks and policies for protection and procurement of medicines while offering important systems and protections have created a number of unintended consequences which restrict access to those most in need of pain relief. Limitations on who can prescribe morphine, how many cosignatures are required even when death is close and time-limited, add to lack of robust procurement, insufficient stock, suboptimal storage and absence of any buffer. 
In many African countries, morphine is only available in major city centre hospitals, or national cancer centres, and not in health centres in rural regions. Patients in severe pain due to illnesses such as cancer may struggle to afford their medications alongside other out of pocket health and travel costs, thus pushing them further into a spiral of debt and poverty. STRATEGIES The paper by Krause et al 4 puts forward a number of important strategies, including: developing Afrocentric education tools around pain and palliative care; improving communication skills among health care providers, so that essential pain relief can be dispensed accurately and adequately; tackling the misconceptions and misinformation about opioids, through a concerted campaign for improvement in health literacy and essential health information for all. We suggest that there are additional practical strategies that could be developed. Economies of scale through regional procurement, as has been developed for other medications, could create a pan-Africa platform for investing in palliative care and pain relief. This may allow a range of affordable formulations that allow for cost-effective procurement and distribution, as well as flexibility, for patient use. Rigorous, protective yet flexible systems for monitoring opioid distribution, such as using mobile phone technology, could create linkages which can support procurement and match supply and prescriber availability to patients. Although there remain many challenges, much has been accomplished across the African continent by palliative care and hospice services supported by many pioneering national palliative care associations. The works of the pan-African Palliative Care Association (APCA), the International Association for Hospice and Palliative Care (IAHPC) and the International Children's Palliative Care Network (ICPCN), have drawn together excellent resources to support services and build adequate pain relief at national levels. The WHO Planning and Implementing Palliative Care services published research conducted by the University of Edinburgh and Makerere University in their Integrate project (funded by a Tropical Health Education Trust and the UK's Department of International Development), which set out templates to show how these implementation strategies can be integrated into health systems. 5 6 Renee and colleagues note that ESMO is committed to assisting in achieving better care for patients with cancer in Africa, through the ESMO Designated Centre of Integrated Oncology and Palliative Care, and they ask for high-level advocacy to enforce regulations that monitor and evaluate morphine availability and palliative care training. With this vital 'pan-African public healthcare initiative', we would also suggest that a monitoring consortium to track and analyse policy, health system, socioeconomic and political responses to access and delivery of the guidelines through the journal is established. Palliative care and pain management is being redefined in the current era taking account of global developments. 
7 Coming back to the current challenges of COVID-19, it is even more poignant to note the role opioids, such as morphine, have in the management of refractory breathlessness and the implications of the current challenges in access, 'In this most challenging time, health responders can take advantage of palliative care know-how to focus on compassionate care and dignity, provide rational access to essential opioid medicines, and mitigate social isolation at the end of life and caregiver distress'. 8 However, the global supply chains are being disrupted further, and it is conceivable that the existing procurement processes for opioids may become unaffordable or even, unavailable. If ever there was a time for global solidarity and for ensuring palliative care is integrated into health and education systems alongside community empowerment and compassionate, holistic, dignified care for those in pain and at the end of life, that time has come. Suffering and our response to suffering is a fundamental part of our humanity. We lose a part of ourselves when we fail to respond to suffering, and even more so when relatively cheap but highly effective solutions exist. We need to understand the granular detail of the barriers in each African country, which prevent access to opioids. Using evidence along with local knowledge and cooperation, we should together build a system which will not fail those who are suffering from pain. Contributors All authors participated in writing this document. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. Competing interests None declared. Patient consent for publication Not required. Provenance and peer review Commissioned; internally peer reviewed. Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, any changes made are indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/.
Pediatric oncology services in Colombia Background: In low-income countries, a child diagnosed with cancer has an 80% chance of dying, while in high-income countries more than 80% survive the disease. In Colombia, a middle-income country, the government issued new legislation that promotes the generation of comprehensive care units; nevertheless, seven years after its expedition, no institution has been recognized as such by the Ministry of Health. The objective of this study was to characterize the current offer of oncological services for cancer care in children and to identify the institutions that can be constituted in Units of Comprehensive Care of Childhood Cancer in Colombia. Methods: descriptive study of secondary source, the Special Register of Health Providers of the Ministry of Health and Social Protection was consulted, in order to identify the institutions that had enabled hospitalization services of medium or high complexity, chemotherapy, specialized consultation, emergencies, oncological surgery, and radiotherapy or nuclear medicine. The information is reported in absolute frequencies. Results: Seventy one institutions have hematology-oncology consultation, 39 institutions have chemotherapy and hospitalization services of medium or high complexity, and 18 have radiotherapy enabled. Only nine of the institutions include all the services that are necessary for comprehensive care. Conclusion: Colombia has a sufficient supply of services for the care of children with cancer. Only a minority are in institutions that have the capacity to guarantee the integrality of the attention. Introduction Cancer in children aged 0 to 14 is considered a rare disease, it represents between 0.5% to 2% of all cancer cases in the world. It is estimated that 200,000 new cases of cancer in children aged under 15 are diagnosed annually worldwide 1 . 84% of children dying from cancer live in countries with low or intermediate income, where there is limited access to health care and cancer care 2 . Clinical results differ substantially; while in low-income countries a child diagnosed with cancer has an 80% chance of dying, in high-income countries over 80% survive the disease 3,4 . In Colombia, cancer is the second cause of death after deaths for external causes in the group of 0-14 years of age 5 . Each year, 1,322 new cases of cancer are diagnosed 6 . In the five-year period 1990-95 the survival of children with Acute Lymphoblastic Leukemia in Colombia was 40.9%; and for the five-year period 2005-9, it was 53.8%. Although it shows an improvement, the results are much lower compared to other countries in America 7 . The differences in the results between the countries have a multifactorial origin that involves a series of factors, such as the social and economic determinants of each region 6 Taking into account that it is a pathology of low frequency and high complexity, health systems need to organize their offer of services to guarantee that access to the diagnosis and treatment of children with cancer is concentrated in institutions that have specialized human talent, biomedical technologies and the necessary infrastructure for the complexity of care. In this sense, twinning programs between hospitals located in countries with great experience and others located in low-income countries have allowed to improve survival 8,9 . 
In Colombia there have been multiple barriers to access timely treatments for children with cancer: the delay by insurers in the delivery of authorizations for care, the delay in the delivery of medicines, and the fragmentation of services and interinstitutional transfers to achieve comprehensive care 10,11 . Due to these problems, the national government issued a new legislation as legal support to reduce cases of death due to cancer in children and persons aged under 18 12,13 . Colombian laws promote comprehensive treatment and they have delegated the Ministry of Social Protection to sector the services taking into account the demand needs and geographical location. They also created the National Advisory Council for Childhood Cancer to followup and monitor the implementation of these laws, as well as the national policies and plans that derive from it. Within the framework of this regulation, the creation of Child Cancer Care Units (CCCU) was defined as units "located in hospitals or clinics of level III and IV of pediatric complexity, or with pediatric services of level III or IV" 12 . The definition of the concept of UACAI transformed the model of habilitation of the pediatric oncological services towards a model in which the provider institutions must guarantee the services related to the care of children with cancer, in order to guarantee the integrality in the attention and the optimization of resources 13 . However, seven years after its dissemination, the country does not have any UACAI recognized under that name by the Ministry of Health. In this sense, this article makes a descriptive analysis of the offer of institutions providing health services in Colombia that could be constituted as Units of Comprehensive Care CCCU with the purpose of promoting its implementation. Materials and Methods This is a descriptive operational study that uses the Special Registry of Health Providers (SRHP) of the Ministry of Health and Social Protection as a secondary source, consulted on August 29, 2016: This registry is permanently updated by the territorial health entities, which makes it changeable in time. To identify the institutions that provide health services (hereafter referred as IPS) that can be constituted in CCCU, the technical annex "Manual of Habilitation" was taken from the Ministry of Health and Social Protection, which considers three central standards: organization or structure, management and health outcomes 13 . The study defined in its scope of the first standard "Organization of the CCCU". The Habilitation manual describes the services with which CCCU meets the requirement of "it may provide" and have available for its conformation: medium or high complexity hospitalization service and chemotherapy service. In the same way and as a criterion in the identification made in this study, the provider institutions that were enabled in the REPS services were taken into account such as: Specialized external consultation, emergencies, oncological surgery, radiotherapy or nuclear medicine. The search was parameterized taking into account the following variables: group of services, service code, name of the provider, level of complexity, legal nature of the provider and province. 
The search was oriented to IPSs and not to independent providers; the search profile used was guest, the codes of the services consulted were: 391 oncology and pediatric hematology consultation, 374 pediatric surgery consultations, 227 pediatric oncological surgeries, 709 chemotherapy, 711 radiotherapy, 715 nuclear medicine, 102 pediatric general hospitalizations, 501 emergency services. The search strategy in the REPS focused on the following route: 1. Identification of the initial universe of providers that prescribe treatments in pediatric oncology: providers who had one of the following services enabled: pediatric hematology and oncology consultation and pediatric surgery consultation. 2. Identification of qualified chemotherapy services in any form of ambulatory or hospital care and pediatric hospitalization of medium or high complexity. 3. Identification of support services and therapeutic complementation of radiotherapy or nuclear medicine. 4. Identification of emergency services and pediatric oncological surgery. This last result was converted for the study into the final input that shows the potential number of institutions providing health services that can structure their services under a care strategy as CCCU (Fig. 1). For the analysis there were simple frequencies obtained by province and by service. In a progressive manner, there were institutions excluded that did not have all the services that from a theoretical rather than a regulatory point of view should constitute an UACAI, such as pediatric hospitalization, outpatient oncology and pediatric hematology consultation and pediatric surgery consultation, chemotherapy in any form of ambulatory or hospital care and hospitalization, radiotherapy or nuclear medicine, emergencies and pediatric oncological surgery. Results According to the SRHP as of the cutoff date of August 29, 2016, there were 71 Provider Institutions of Health Services identified, "Universe" of the country that has specialized consultation of oncology and hematology or consultation of oncological surgery for pediatric cancer care; they are distributed in 19 of the 32 provinces of the country. 69 IPS have a pediatric oncology and hematology consultation, and 11 also include oncological surgery ( Table 1). The Province of Atlántico registers 13 institutions, a greater number than other Provinces that have capital cities with similar characteristics, such as Antioquia or Valle del Cauca, which register a lower number of IPS, six and seven respectively; that way Atlántico reports just the same number of IPS than the City of Bogotá D.C. (Bogotá represents the entire Province of Cundinamarca). Among the 71 Healthcare Provider Institutions (IPS), 39 institutions distributed in 15 Provinces "have/may provide" chemotherapy and hospitalization services of medium or high complexity ( Table 1). The Provinces that did not meet the search criteria for hospitalization services of medium and high complexity and chemotherapy were Cesar, La Guajira, Magdalena and Meta. Of the 39 registered IPS with qualified hospitalization services of medium or high complexity and chemotherapy, the ones that had support services and therapeutic complementation of radiotherapy or nuclear medicine were verified, being identified a total of 21 IPS distributed in 11 Provinces ( Table 2). As it can be seen, 18 IPS have enabled the radiotherapy service, 12 of them have enabled the nuclear medicine service and nine have the two services described above. 
The number of institutions that met the requirements to establish themselves as CCCU, that is, those with the services needed to guarantee comprehensive diagnosis and treatment of children with cancer, was then reviewed. Table 3 shows that nine IPS (located in the provinces of Atlántico, Santander and Valle del Cauca, and in Bogotá) met the criterion of concentrating the greatest number of services in the same physical space. Even among these, however, only four of the nine IPS offer pediatric oncological surgery, eight offer radiotherapy and seven offer nuclear medicine. Discussion This study analyses the supply of pediatric cancer services in Colombia that could guarantee comprehensive diagnosis and treatment for patients with cancer. It finds that of the 71 qualified institutions, that is, those authorized to offer oncological services for children with cancer, only 21 have hospitalization, a chemotherapy room, a hematology-oncology clinic and a pediatric oncological surgery clinic, and only 9 (12%) of the institutions are able to guarantee comprehensive care in Colombia. High-income countries have defined the criteria that cancer centers must fulfill in order to care for inpatients and outpatients diagnosed with childhood cancer. Emphasis has been placed on the need for facilities to ensure timely and accurate diagnosis, the administration of intensive chemotherapy, 24-hour emergency management of serious complications, intensive care services, and timely and complete transfusion support (blood bank), among others, and to belong to a network of hospitals that provide treatments as part of shared care [14][15][16] . This shared network is important because the radiotherapy service is not always located within the hospital itself; this does not prevent care from being comprehensive, as long as the service is guaranteed when required, particularly for central nervous system tumors. This is the case of the hospital network in Chile under the PINDA program 16 . Human talent is an essential requirement, and institutions must have a multidisciplinary team led by pediatric hematologist/oncologists with the support of pediatricians, pediatric subspecialists, pediatric surgeons, pediatric intensivists, rehabilitation specialists, nurses and other professionals 13,14 . Since the number of pediatric cancer cases is relatively low, the quality of treatment is better guaranteed when the same institution receives a significant volume of children with cancer. Likewise, there must be educational programs for patients and family members, school programs (including contact with teachers who teach students at home or in hospital, and support with reincorporation to school), and social support programs to help families cope with economic difficulties and with the treatment and the expenses to be incurred 17 . Without compliance with these minimum conditions, it is very difficult for children, adolescents and young adults to benefit from the progress made in high-income countries, because accurate diagnosis, adequate treatment, and medical and social support depend on a multidisciplinary team and on an institutional infrastructure enabled to treat cancer.
According to the present study, 19 of the 32 provinces of Colombia have a pediatric oncology service enabled, and these are concentrated in six provinces (Bogotá, Atlántico, Valle del Cauca, Antioquia, Santander and Risaralda), which is appropriate given that cancer in children is a rare pathology. It is striking that the Province of Atlántico, with a population approximately four times smaller than that of the city of Bogotá, has the same number of institutions with oncology services enabled. A possible explanation is that most of the institutions offering these services are private institutions with a broad service portfolio, regardless of their ability to guarantee comprehensive conditions for the care of children with cancer. In Colombia, health services such as outpatient care, chemotherapy or hospitalization of children with cancer may be authorized without the need for them to be integrated within the same institution, which hardly guarantees comprehensive and continuous care 18 . Measured against the objectives set in Colombia since 2010 11 , the goal of implementing comprehensive care for children with cancer has not been achieved. In the first place, the resolution defining the qualification of CCCU was only published in July 2016 12 . Secondly, the authorization is voluntary, so institutions have little motivation to pursue it, since bringing together all the required services demands considerable effort. In addition, the regulation allows a CCCU to be located in centers of "medium complexity" and to rely on services outside the institution itself, which runs against the objective of having integrated treatment centers; the exception is radiotherapy, which can reasonably be shared by several institutions. The regulation states, for example, that the hospitalization service may be available outside the (health) institution if it only has ambulatory surgery enabled 12 . A critical element that negatively affects the care of children and adolescents with cancer is that potential CCCU are not required to have 24-hour emergency services in their pediatric hematology-oncology units as a condition for habilitation, although such services are fundamental for the care of children. In this regard, Dang-Tan et al. 19 , reporting on delays in the diagnosis of pediatric solid tumors, found that in general the diagnosis was more timely when patients with suspected cancer were first seen in an emergency service. More importantly, an immediate service is needed for complications caused by the disease or its treatment that may endanger the lives of cancer patients. Since the only source of information for this study was the REPS, there are some limitations: only information related to infrastructure could be included, and the REPS does not allow certain required elements, such as a sterile compounding unit ("central de mezclas") or a pain and palliative care program, to be identified; in the first case the REPS does not record these physical environments, and in the second it does not record programs. Relying on a single source of information is the study's main weakness, and it would be desirable to supplement it with other primary sources. It is also possible, although unlikely, that many providers have registered authorized services that they are not offering or that are inactive.
Table 2. Distribution by province of IPS with oncology consultation services, medium- and high-complexity pediatric hospitalization, chemotherapy, and radiotherapy and/or nuclear medicine.
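To illustrate the four-step route described in the Methods, the sketch below screens a registry extract for candidate CCCU. It assumes a hypothetical flat export of the REPS with one row per enabled service and columns such as provider_id, province, service_code and complexity; the actual REPS export layout is not described in this article, so the loading step and column names are illustrative only.

```python
import pandas as pd

# Hypothetical REPS extract: one row per (provider, enabled service code).
# Column names are illustrative; the real export layout may differ.
reps = pd.read_csv("reps_extract.csv")  # columns: provider_id, province, service_code, complexity

def providers_with(df, codes):
    """Return the set of provider_ids with at least one of the given service codes enabled."""
    return set(df.loc[df["service_code"].isin(codes), "provider_id"])

# Service codes as listed in the Methods.
CONSULT = {391, 374}          # pediatric oncology/hematology and pediatric surgery consultation
CHEMO = {709}                 # chemotherapy (any ambulatory or hospital modality)
PED_HOSP = {102}              # general pediatric hospitalization
RADIO_NUCLEAR = {711, 715}    # radiotherapy, nuclear medicine

# Step 1: initial universe - providers prescribing pediatric oncology treatments.
step1 = providers_with(reps, CONSULT)

# Step 2: chemotherapy plus medium/high-complexity pediatric hospitalization.
hosp = reps[(reps["service_code"].isin(PED_HOSP)) & (reps["complexity"].isin(["medium", "high"]))]
step2 = step1 & providers_with(reps, CHEMO) & set(hosp["provider_id"])

# Step 3: supporting services of radiotherapy or nuclear medicine.
step3 = step2 & providers_with(reps, RADIO_NUCLEAR)

# Step 4: emergency services and pediatric oncological surgery.
step4 = step3 & providers_with(reps, {501}) & providers_with(reps, {227})

# Simple frequencies by province for the candidate CCCU.
candidates = reps[reps["provider_id"].isin(step4)].drop_duplicates("provider_id")
print(candidates.groupby("province")["provider_id"].count())
```

Each step intersects the previous set of providers, mirroring the progressive exclusion used in the analysis.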
The Shrinking Thyroid: How Does Thyroid Size Change Following Radiation Therapy for Laryngeal Cancer? BACKGROUND AND PURPOSE: External beam radiation therapy (XRT) for head and neck cancer is known to induce hypothyroidism and cause morphologic changes in the thyroid gland. This retrospective study investigates the change in thyroid gland size detectable by CT after XRT for laryngeal cancer. MATERIALS AND METHODS: The measured width of the thyroid lobes on posttherapy CT was compared with that on pretherapy CT in 61 patients treated nonsurgically with XRT for laryngeal cancer between 2000 and 2003. Absolute and percentage changes in measured thyroid width following XRT were analyzed according to chemotherapy administration and posttherapy thyroid function. RESULTS: Eighty-five percent (52/61) of patients had a decrease in the width of the thyroid gland. The average change in width, measuring −4.7 mm and −13.8% (SD, 5.7 mm and 19.9%), occurred at an average of 758 days following completion of XRT (range, 402-1534 days) and was significant (P = .002). The average change in width between hypothyroid patients (n = 19, −6.1 mm and −20.0% change) and euthyroid patients (n = 42, −4.1 mm and −11.1% change) was not significant (P = .20 for absolute change and P = .11 for percentage change). The average change in width between patients receiving chemotherapy (n = 31, −5.5 mm and −16.1% change) and patients not receiving chemotherapy (n = 30, −3.9 mm and −11.5% change) was not significant (P = .26 for absolute change and P = .37 for percentage change). CONCLUSIONS: Most nonsurgical patients receiving XRT for laryngeal cancer have a significant decrease in the width of their thyroid glands detected on CT. The average change in thyroid gland size does not differ when development of hypothyroidism or chemotherapy administration is considered. External beam radiation therapy (XRT) is an integral part of treatment for head and neck cancer. Clinicians providing posttherapy care for these patients recognize that changes can occur in the thyroid gland after XRT to the head and neck, leading to hypothyroidism. Hypothyroidism is documented in approximately 20%-67% of patients receiving XRT for Hodgkin or non-Hodgkin lymphoma and in approximately 20%-31% of patients receiving XRT for head and neck cancers in the first 5 years after treatment. [1][2][3][4][5][6][7][8][9][10][11][12] Hypothyroidism typically occurs within 5 years after completing XRT. Morphologic changes to the thyroid gland after XRT, such as the development of solid or cystic masses, are well documented by sonography and histology. [13][14][15][16][17][18] The thyroid gland is also known to decrease in size after XRT, as demonstrated by sonography. 18 Patients with laryngeal cancer are followed after treatment with CT of the neck to evaluate treatment response and monitor for recurrence; however, radiologists rarely mention changes in the size of the thyroid gland on follow-up CT reports following XRT for head and neck cancer. We have occasionally reported a decrease in the size of the thyroid gland in these posttherapy patients. We asked the following questions: How does the thyroid gland change in size following XRT as seen on CT? Does the administration of neoadjuvant or adjuvant chemotherapy affect the change in the size of the thyroid gland following XRT? And does the change in the size of the thyroid gland differ between patients who develop hypothyroidism and those who remain euthyroid during the follow-up period?
Materials and Methods Two hundred forty-five patients receiving XRT for laryngeal cancer between June 2000 and January 2003 at our cancer center were considered for this study. Institutional review board approval was obtained, and patient information was protected according to the Health Insurance Portability and Accountability Act standards. Operative notes, clinical notes, laboratory data, and neck CT studies were collected from the electronic medical records. Patients undergoing total laryngectomy, partial laryngectomy, cervical lymph node dissection, or tracheostomy as part of the initial treatment for their laryngeal cancer were excluded. Patients who did not complete their full prescribed dose of radiation therapy were excluded. Patients who did not have pre-XRT or post-XRT CT scans of the neck available for review or who had extension of their primary tumor into the thyroid gland were excluded. Patients with a diagnosis of a thyroid disorder before therapy for laryngeal cancer were excluded. Patients who did not have specific laboratory documentation of thyroid function or a clinical note addressing thyroid function at follow-up visits were excluded. The age of each patient, the date of completion of the prescribed dose of XRT, and the use of neoadjuvant or adjuvant chemotherapy were collected from the electronic medical records. Contrast-enhanced CT of the neck was performed on a 4-detector LightSpeed Plus CT scanner (GE Healthcare, Milwaukee, Wis). Scanning was performed at 5-mm section thickness from the top of the orbits to the hard palate with the gantry angled in the plane of the skull base and 2.5-mm section thickness through the neck from the midramus of the mandible through the thoracic inlet with the gantry angled in the plane of the larynx. Patients received 65-mL iohexol (Omnipaque; GE Healthcare) at 1.5 mL/s with a 50-second delay in scanning followed immediately by 35-mL iohexol at 1 mL/s. Scanning began at the thoracic inlet and proceeded cranially. Images were reconstructed to 1.25-mm section thickness for viewing in head and neck soft-tissue window settings (width 300 HU and level 70 HU) on PACS workstations. A diagnostic radiology resident (M.M.M.-T.) and a senior neuroradiologist (A.J.K.) performed the thyroid measurements by using the electronic measurement calipers on a PACS workstation. They were not blinded to the clinical data collected from the patients' charts. A single axial CT section through the widest part of the thyroid gland was chosen, and the greatest width of each lobe of the thyroid gland excluding the isthmus was measured to the nearest tenth of a millimeter on the pre-XRT neck CT and compared with the same axial section on each post-XRT CT. The time after therapy when each post-XRT CT was acquired was recorded. If the patient underwent delayed surgery for persistent or recurrent disease, data were not collected from subsequent CTs. The thyroid stimulating hormone (TSH) levels were retrieved from the electronic medical records system when available. The normal range for TSH at the cancer center laboratory is 0.5-5.5 mIU/L. TSH values >5.5 mIU/L were recorded as elevated and a sign of thyroid gland failure. Free T4 levels were not routinely obtained in all patients and were, therefore, not recorded for the purpose of this study. The follow-up clinic notes were reviewed, and note was made of a clinician placing the patient on thyroid hormone replacement therapy as a sign of thyroid gland failure.
Statements based on physical examination findings and patient symptoms in the follow-up clinic notes indicating normal thyroid gland function were recorded as positive indicators of a euthyroid state. The absolute change and the percentage change in the summed width of the 2 lobes of the thyroid gland compared with the pre-XRT CT image were calculated for each post-XRT CT measurement. The measurements were grouped into quarter years on the basis of the interval the scanning was performed after XRT completion, and the average and SD for the absolute and percentage change in thyroid gland width were calculated. Linear regression was applied to the average absolute change and the average percentage change for measurements acquired in each quarter year to detect a trend in the change of thyroid gland size with time. Patients were then subdivided within each quarter-year group into those who developed hypothyroidism by the end of the study and those who remained euthyroid throughout the study. A 2-tailed t test was used to detect differences in the absolute change in size and the percentage change in size of the thyroid gland within each quarter year. A 2-tailed paired t test was applied to the pre-XRT and the latest post-XRT measurements of thyroid gland width for each patient to determine if there was a significant change in thyroid gland size after XRT. A 2-tailed t test was used to detect differences in the absolute change in size and the percentage change in size of the thyroid gland in patients who received chemotherapy compared with those who did not and in patients who developed hypothyroidism during the follow-up period compared with those who did not. Finally, patients were placed into 4 categories on the basis of administration of chemotherapy and development of hypothyroidism during the follow-up period, and the percentage change and absolute change in thyroid gland width calculated from the latest post-XRT CT of each group were compared by using the Kruskal-Wallis test. Results Two hundred forty-five patients received XRT for laryngeal cancer during the study period. Eighty-six patients were excluded because they did not have pre-XRT and at least 1 year of post-XRT CT of the neck performed at our institution. Seventy-five patients were excluded because they had surgical intervention, either a total or partial laryngectomy, neck dissection, or tracheostomy before completing at least 1 year of post-XRT CT follow-up imaging. Three patients were excluded because they did not complete their prescribed dose of radiation therapy. Three patients were excluded because they had a pretreatment diagnosis of hypothyroidism, and 17 patients were excluded because there was no laboratory data or clinical note documenting thyroid function at follow-up visits. Sixty-one patients fulfilled the criteria for inclusion in the study. There were 56 cases of squamous cell carcinoma, 1 case of carcinoid, 1 case of rhabdomyosarcoma, 1 case of synovial sarcoma, and 2 cases of small cell cancers of the larynx. The average patient age was 57.3 years with a range from 20 to 80 years. Thirty-one patients received chemotherapy as part of their treatment, and 30 did not. The average total time for CT follow-up of these patients was 758 days after completing XRT, with a range from 402 to 1534 days. Figure 1 shows the graphs with best-fit lines for average absolute and percentage change in size of the thyroid gland on follow-up CTs of all patients grouped by quarter year since completion of XRT. 
Eighty-five percent (52/61) of patients had a measured decrease in size of their thyroid gland, and 15% (9/61) of patients had a measured increase in size of their thyroid gland on follow-up CT. Figure 2 is an example of a patient who had a decrease in size of the thyroid gland after radiation therapy. The average change in thyroid gland size measured on the last follow-up CT for each patient was −4.7 mm and −13.8% (SD, 5.7 mm and 19.9%). The change in width of the thyroid gland after XRT as measured on the last follow-up CT was statistically significant according to the 2-tailed paired t test (P = .002). The average change in the size of the thyroid gland measured on the final CT study was −6.1 mm and −20.0% (SD, 6.8 mm and 24.3%) for patients developing hypothyroidism during the follow-up period and −4.1 mm and −11.1% (SD, 5.1 mm and 17.2%) for patients remaining euthyroid during the follow-up period. The differences in absolute change and percentage change in the size of the gland between the hypothyroid and euthyroid groups were not statistically significant using a 2-tailed t test (P = .20 for absolute change in size and P = .11 for percentage change in size). The average change in size of the thyroid gland measured on the final CT study was −5.5 mm and −16.1% (SD, 6.4 mm and 22.3%) for patients receiving chemotherapy and −3.9 mm and −11.5% (SD, 4.8 mm and 17.2%) for patients not receiving chemotherapy. The differences in absolute change and percentage change in the size of the gland between the chemotherapy and no chemotherapy groups were not statistically significant using a 2-tailed t test (P = .26 for absolute change in size and P = .37 for percentage change in size). We compared the absolute and percentage change in size of the thyroid gland following XRT, taking into consideration the functional status of the thyroid gland and the use of chemotherapy, with the Kruskal-Wallis test, which did not show a statistically significant difference in absolute or percentage change in size of the thyroid gland (H = 4.68 with 3 df yielding a P value of .20 for difference in absolute change in size and H = 7.10 with 3 df yielding a P value of .07 for percentage change in size). Discussion The biochemical changes to the thyroid gland function caused by XRT to the head and neck are well documented in the clinical literature. [1][2][3][4][5][6][7][8][9][10][11][12] XRT causes microvascular and parenchymal damage to the gland, which results in decreased function several years after completing XRT. Eighty-five percent (52/61) of the patients included in this study demonstrated a measured decrease in the width of the thyroid gland on follow-up CT, with measurable changes occurring in several patients within the first quarter year following completion of XRT. The wide range in the absolute and percentage change in the width of the gland suggests that the causes are likely multifactorial, including radiation-induced vascular damage, parenchymal cell damage, autoimmune-mediated damage, and fibrosis of the capsule preventing compensatory hypertrophy of the gland. 2 Fifteen percent (9/61) of patients included in the study had a measured increase in the size of the thyroid gland on follow-up CT. Again, this is likely multifactorial, including enlargement resulting from vascular, parenchymal, and autoimmune-mediated damage as well as compensatory hypertrophy of the gland as hormonal feedback systems stimulate the functionally damaged gland.
The data presented in Fig 1 show a progression in average thyroid gland size decrease as time from XRT therapy increases. These averages by quarter year include all patients regardless of whether they had an increase or a decrease in the size of the gland, showing that the phenomena that cause a measurable decrease in the size of the gland predominate over those that cause enlargement of the gland during the study period in this population of patients. This study demonstrates that the average size of the thyroid gland decreases after XRT for laryngeal cancer on follow-up CT, and the result is statistically significant, but it does not demonstrate a statistically significant difference in thyroid gland size change after XRT for patients who developed hypothyroidism or patients who received chemotherapy compared with those who did not. The difference in percentage change in the size of the thyroid gland in euthyroid-versus-hypothyroid patients approaches statistical significance, with a 2-tailed t test yielding a P value of .11. Similarly, the difference in percentage change in the size of the thyroid gland when both thyroid functional status and administration of chemotherapy were considered approaches statistical significance, with a Kruskal-Wallis test yielding a P value of .07. Thirty-one percent (19/61) had biochemical failure of the thyroid gland within the study period, which is at the upper limit of hypothyroidism rates following XRT for head and neck cancer published in the clinical literature, despite the fact that the literature states that it takes up to 5 years to develop hypothyroidism after XRT to the neck. [1][2][3][4][5][6][7][8][9][10][11][12] The percentage of patients developing hypothyroidism during the study period is likely exaggerated because lack of clinical data on thyroid function was an exclusion criterion for the study. A disproportionate number of euthyroid patients were probably excluded because it may be easier to document positive signs and symptoms of a disorder rather than negative signs and symptoms. Thyroid function was likely not tested or reported for all patients because of the subspecialty referral and follow-up patterns in our cancer center, with many patients receiving follow-up care outside the center. In addition, the average follow-up time of patients in this study was 758 days, just over 2 years. Previous studies have shown that biochemical failure of the gland after XRT can take up to 5 years to develop, and more patients may have gone on to develop thyroid gland failure if followed for a longer time period. Imaging follow-up and thyroid function testing intervals were not standardized in this retrospective study. There was a bias toward patients with higher stages of disease at presentation because these patients with a higher stage were more likely to be followed by CT for a year or more after XRT than patients with stage I laryngeal cancer, who were more likely to be followed by clinical examination and laryngoscopy. These patients with a higher stage may have received a higher radiation dose delivered to the thyroid gland because of the inclusion of nodal chains in the low neck in the radiation port. This study did not estimate radiation dose to the thyroid gland. Although anecdotal evidence of a visually detected change in the size of the thyroid gland in post-XRT patients motivated this study, we did not rigorously test the measurement thresholds for visual detection of thyroid gland size change by axial CT.
We do not intend for radiologists to report the thyroid gland width on every post-XRT follow-up neck CT; rather, we seek to make the imaging community aware that posttherapy CT can demonstrate a change in the size of the thyroid gland. Conclusions In summary, the thyroid gland changes in size after XRT for laryngeal cancer, with most patients demonstrating a decrease in the size of the gland and a minority of patients demonstrating an increase in the size of the gland. This study does not demonstrate a statistically significant difference in the average change in the size of the thyroid gland on follow-up CT scans after XRT between patients who develop hypothyroidism and patients who remain euthyroid, or between patients who receive chemotherapy and those who do not, during the follow-up period of this study.
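As an illustration of the comparisons reported in this study (absolute and percentage change in summed lobe width, a paired two-tailed t test for the overall change, an unpaired t test between groups, and the Kruskal-Wallis test across the four chemotherapy-by-thyroid-status groups), the following sketch shows how such an analysis could be run in Python. The arrays contain hypothetical measurements, not the study data, and the group labels are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical summed thyroid lobe widths (mm) per patient; not the study data.
pre_xrt  = np.array([42.0, 38.5, 45.2, 40.1, 39.8, 44.0, 41.3, 37.9])
post_xrt = np.array([37.5, 36.0, 41.0, 38.2, 40.5, 39.1, 36.8, 35.0])

abs_change = post_xrt - pre_xrt                 # absolute change in width (mm)
pct_change = 100.0 * abs_change / pre_xrt       # percentage change in width

# Did width change significantly after XRT? (2-tailed paired t test)
t_paired, p_paired = stats.ttest_rel(post_xrt, pre_xrt)

# Compare change between two groups, e.g., hypothyroid vs euthyroid (2-tailed unpaired t test).
hypothyroid = np.array([True, False, True, False, False, True, False, False])
t_group, p_group = stats.ttest_ind(pct_change[hypothyroid], pct_change[~hypothyroid])

# Four groups defined by chemotherapy and thyroid status, compared with Kruskal-Wallis.
chemo = np.array([True, True, False, False, True, False, True, False])
groups = [pct_change[c & h] for c in (chemo, ~chemo) for h in (hypothyroid, ~hypothyroid)]
groups = [g for g in groups if len(g) > 0]      # drop empty cells in this toy example
h_stat, p_kw = stats.kruskal(*groups)

print(f"mean change: {abs_change.mean():.1f} mm, {pct_change.mean():.1f}%")
print(f"paired t test P = {p_paired:.3f}; group t test P = {p_group:.3f}; Kruskal-Wallis P = {p_kw:.3f}")
```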
Effects of COVID‐19 lockdown on lifestyle behaviors in children with obesity: Longitudinal study update Abstract Objective A previous report from our group identified directionally unfavorable dietary and lifestyle behavior trends in longitudinally monitored children and adolescents with obesity early in the COVID‐19 pandemic lockdown. The current study aimed at extending these previous observations in youths with obesity on the dietary and lifestyle behavioral consequences of the extended COVID‐19 lockdown in Verona, Italy. Methods The sample included 32 children and adolescents with obesity participating in the longitudinal OBELIX study. Diet and lifestyle information was collected pre‐pandemic, 3 weeks into the national lockdown, and 9 months later when home confinement continued to be mandatory. Changes in outcomes over the study time points were evaluated for significance using repeated‐measures ANOVA and post‐hoc pairwise t‐tests with Bonferroni corrections. Results As previously reported, meals/day, fried potato intake, and red meat ingestion increased significantly (p < 0.001) during the initial lockdown. Sleep time and screen time increased and sports participation decreased significantly (p < 0.001) during the initial lockdown. These changes in health behaviors remained significantly different from baseline at the second lockdown assessment, with the exception of sleep time, which returned to baseline levels. Conclusions Unfavorable diet and lifestyle behavioral changes in response to the initial COVID‐19 lockdown in children and adolescents with obesity have largely been sustained over the course of the pandemic. There is an urgent need to intervene on these behaviors to prevent further deleterious effects on long‐term child health; access to weight management care is critically important for these children. In addition to intervening on these behaviors, our findings should help to inform ongoing lockdown policies. | INTRODUCTION The coronavirus disease 2019 (COVID-19) pandemic has continued to have profound social, health, and economic consequences.
Among these untoward effects is the abrupt closure of in-class school programs for children and adolescents, who by necessity must remain in their homes during the lockdown as a means of limiting COVID-19 transmission. In an earlier report from a longitudinal cohort study, we found that house-bound children and adolescents with obesity living in Verona, Italy displayed worsening of their diet and lifestyle behaviors including sleep time and activity levels. 1 The recent retrospective cohort study of Woolford and colleagues, 2 using Kaiser Permanente Southern California electronic health record data, supports and extends these observations. | BEHAVIORAL QUESTIONNAIRE The data collection instrument consisted of 12 questions about dietary patterns (e.g., portions of red meat, potato chips, etc.) and sleep, sports participation, and screen watching. A meal was defined as a structured, nonliquid ingestive event, including breakfast, lunch, afternoon snacks, and dinner. The investigator conducted 10-min telephone interviews with the parents of each participant. Time spent in sports prior to and during the lockdown was defined as any physical activity (e.g., jogging, playing in the backyard, etc.), as it was not possible to participate in organized sports such as soccer, swimming, volleyball, and basketball. Some educational programs were broadcast during the lockdown, although the screen time question related specifically to nonschool activities. The current study was approved by the Hospital Institutional Review Board (Protocol: 5384, 01/29/2019). All parents and children provided informed consent. | Questionnaire observations The health behavior questionnaire findings are presented in Table 2. | DISCUSSION The current longitudinal study of children and adolescents with obesity was fortuitously started as part of an unrelated project taking place several months before the COVID-19 pandemic unfolded in Europe. Once the lockdown was in place, we were able to again query the parents of our participants using the same dietary and behavioral instruments that were used in the baseline study. Our findings, derived from the third evaluation of the cohort as they remained confined at home, revealed that the unfavorable diet and lifestyle behavioral changes in response to the initial COVID-19 lockdown have largely been sustained over the course of the pandemic. 1 Specifically, three dietary measures, meals/day, fried potato intake, and red meat ingestion, increased significantly during the initial lockdown. Woolford and colleagues 2 reported that overweight and obesity increased during the pandemic. The absolute increase in percent overweight and obesity among 5- through 11-year-olds was 8.7%, greater than among 12- to 15- and 16- to 17-year-old adolescents (5.2% and 3.1%, respectively). In another recent study, Lange et al. 3 also reported an increase in overweight and obesity during the pandemic. The current study has several limitations, including that data were acquired in a small sample from the parents of children and adolescents with obesity. Additionally, we did not have quantitative measures including weight, height, and activity levels at either of the lockdown time points, so our inference is limited to the self-reported behavioral changes. In conclusion, our study again affirms the worsening of dietary and behavioral patterns in children and adolescents with obesity who were confined to their homes during the COVID-19 pandemic. These observations are concordant with the striking increase in overweight and obesity among persons in the 2- to 19-year-old age group now reported in several studies. 2,3 School closures, more non-educational screen time, poor diet, and fewer opportunities for physical activity all likely contribute to this adverse health trajectory. There is an urgent need to intervene on these behaviors to prevent further deleterious effects on child health; access to weight management care is critically important for these children. In addition to intervening on these behaviors, our findings should help to inform ongoing lockdown policies.
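A minimal sketch of the analysis named in the Methods, a repeated-measures ANOVA across the three time points followed by Bonferroni-corrected pairwise t-tests, is given below. The data frame is filled with simulated values rather than the OBELIX questionnaire data, and the outcome name and subject identifiers are illustrative.

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_subjects, timepoints = 32, ["baseline", "lockdown_3wk", "lockdown_9mo"]

# Hypothetical long-format data: one outcome (e.g., daily screen time in hours) per subject per time point.
records = [
    {"subject": s, "time": t, "screen_time": 3.0 + i * 1.2 + rng.normal(0, 0.8)}
    for s in range(n_subjects)
    for i, t in enumerate(timepoints)
]
df = pd.DataFrame(records)

# Repeated-measures ANOVA across the three time points.
anova = AnovaRM(df, depvar="screen_time", subject="subject", within=["time"]).fit()
print(anova.anova_table)

# Post-hoc pairwise paired t-tests with Bonferroni correction.
pairs = list(combinations(timepoints, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    xa = df.loc[df["time"] == a].sort_values("subject")["screen_time"].to_numpy()
    xb = df.loc[df["time"] == b].sort_values("subject")["screen_time"].to_numpy()
    t, p = stats.ttest_rel(xa, xb)
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, significant at Bonferroni alpha {alpha_corrected:.4f}: {p < alpha_corrected}")
```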
A Comparison of Promethee and TOPSIS Techniques based on Bipolar soft Covering based rough sets The uncertainty in the data is an obstacle in decision-making problems. In order to solve problems with a variety of uncertainties a number of useful mathematical approaches together with fuzzy sets, rough sets, soft sets, bipolar soft sets have been developed. The rough set theory is an effective technique to study the uncertainty in data, while bipolar soft sets have the ability to handle the vagueness, as well as bipolarity of the data in a variety of situations. This study develops a new methodology, which we call the theory of Bipolar soft covering-based rough sets (BSCB-RSs), which will be used to propose a new technique to solve decision-making problems. The idea introduced in this study has never been discussed earlier. Furthermore, this concept has been explored by means of a detailed study of the structural properties. By combining the BSCB-RSs model with two traditional decision-making methods (the PROMETHE-II method and the TOPSIS method), we introduce a novel method for addressing multi-criteria group decision-making (MCGDM) problems. We give an application in multi-criteria group decision making (MCGDM) to show that the proposed technique can be successfully applied to some real world problems including uncertainty, namely, the selection of site for renewable energy pro ject ( Earth Dam ). The effectiveness of the proposed method is validated by comparing it with existing methods. The showed techniques exhibit the practicability, feasibility and sustainability of Site selection. Both MCGDM methods give one Site as conclusion. I. INTRODUCTION M ANY complicated problems in business, social sciences, engineering, management sciences, military, medical sciences, economics and many other fields involve uncertain data. These problems cannot be solved using classical mathematical methods. The classical mathematical model is rational model of decision making which is based on the assumption that managers have access to complete information and are capable to an optimal decision by weighting every alternatives. Because of that, the mathematical model is too complex, the exact solution cannot be found. To overcome this difficulty, a number of researchers are attempting to determine some appropriate approaches and a number of mathematical theories to cope with uncertainty in data, such as Fuzzy Set Theory, Rough Set Theory, Interval Mathematical Theory, Vague Set Theory, Graph Theory, Automata Theory, Decision-Making Theory etc., are formulated to solve such problems, and have been found only partially successful. These theories reduced the distance between the classical mathematical designs and the vague real-world data. In 1965, fuzzy set theory [85] was suggested to model fuzzy data by Zadeh. However, in this theory, determining of membership function is rather difficult sometimes. Therefore, in 1999, Molodtsov [46] proposed the notion of soft set as a completely new approach for modelling uncertainty, free from this difficulty. Unlike classical mathematics, where exact solution of a mathematical model is required, soft set theory instead requires an approximate description of an object as its initial point. The choice of adequate parameterization tools such as words, real number, functions etc., make soft set theory very convenient and easy to apply in practice. Many interesting applications of soft set theory can be seen in [5], [8], [17], [51]. 
The rough set theory [59], [60], is another successful mathematical tool for dealing uncertainties. In this theory, uncertainty is represented by a boundary region of a set. Pawlak used the upper and lower approximations of a collection of objects to investigate how close the objects are to the information attached to them. Feng et al. [27], [28], proposed the relationships among soft sets, rough sets and fuzzy sets, obtaining three types of hybrid models: rough soft sets, soft rough sets, and soft-rough fuzzy sets. Shabir et al. [63] redefined a version of soft rough set known as modified soft rough set (MRS-set). Soft set theory [46] and Rough set theory [59] are regarded as effective mathematical approaches to address uncertainty. In 2011, Feng et al. [28] established a relationship among these two theories and introduced the concept of a new hybrid version of the soft rough sets (SRSs), that can give better approximations over Pawlak's RS theory in some cases ( [28], Example 4.7). This approach can be viewed as a generalization of RS theory. The idea of covering based rough sets was proposed by Zakowski [86]. Then Pomykala [61] introduced several additional approximation operators by using coverings, inclusive of two pairs of dual approximations. Some researchers researched the covering based rough sets and the general covering based rough sets in [78], [104], [107]. Yao [82] in particular examined the two pairs of dual operators by using coverings induced by binary relations. Couso and Dubois [15] studied the two pairs in the framework of incomplete information. In particular in 1998, Bonikowski et al. [11] put forth a covering based rough set model based on the notion of minimal description. Likewise Zhu [106] proposed several covering based rough set models and discussed their relationships. Tsang et al. [67] and Xu and Zhang [78] proposed additional covering based rough set models. Liu and Sai [40] compared Zhu's covering based rough set models and Xu and Zhang's covering based rough set models. Some recent important properties of covering based rough set models have appeared in [43], [83], [108]. An expanded overview of the advances about covering based rough sets appeared in some recently published articles like Yao and Yao [83] and D'eer et al. [19], [20]. In numerous sorts of data analysis, the bipolarity of the data is a key component to be taken into consideration while developing mathematical models for some issues. Bipolarity discusses the positive and negative aspects of the data. The positive data addresses what is assumed to be possible, while the negative data addresses what is not possible or certainly false. The concept that lies behind the presence of bipolar information is that a wide assortment of human decisionmaking depends on bipolar judgemental thinking. For example, sweetness and sourness of food, participation and rivalry, friendship and hostility, effects and side effects of drugs are the two different aspects of information in decision-making and coordination. The soft sets, the fuzzy sets, and the rough sets are not appropriate tools to handle this bipolarity. Based on the need of presenting both positive and negative sides of data, notion of bipolar soft set and its operations such as union, intersection and complement were first defined by Shabir and Naz [66]. After this research, BSSs have become increasingly popular with researchers. 
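To make the structures just described concrete, the short sketch below encodes a soft set as a mapping from parameters to subsets of the universe, and a bipolar soft set as a pair of such mappings subject to the consistency constraint F(ε) ∩ H(¬ε) = ∅, together with the "full" (covering) condition used later in the paper. The universe, parameters and memberships are made up for illustration and are not taken from the paper's case study.

```python
# Illustrative encoding of a soft set and a bipolar soft set over a small universe.
# The universe, parameters, and memberships below are invented for illustration.
universe = {"t1", "t2", "t3", "t4", "t5"}
parameters = {"e1", "e2", "e3"}

# Soft set (F, C): each parameter maps to its set of approximate elements.
F = {"e1": {"t1", "t2"}, "e2": {"t2", "t3", "t5"}, "e3": {"t4"}}

# Bipolar soft set (F, H, C): H is indexed here by the same keys, but each entry
# stands for H(not e), the elements that definitely do NOT satisfy parameter e.
H = {"e1": {"t4", "t5"}, "e2": {"t1"}, "e3": {"t1", "t2", "t3"}}

def is_consistent(F, H):
    """Check the bipolar soft set constraint F(e) and H(not e) are disjoint for every parameter."""
    return all(F[e].isdisjoint(H[e]) for e in F)

def is_full(F, universe):
    """A full (covering) soft set: the union of all F(e) equals the universe."""
    return set().union(*F.values()) == universe

print("consistent bipolar soft set:", is_consistent(F, H))
print("F is a full (covering) soft set:", is_full(F, universe))
```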
Karaaslan and Karatas [32] redefined bipolar soft sets with a new approximation providing opportunity to study on topological structures of bipolar soft sets. Also Naz and Shabir [54] proposed the concept of fuzzy bipolar soft sets and investigated their algebraic structures. Bipolar soft rough sets were firstly introduced by Karaaslan and Cagman [31] to handle roughness of bipolar soft set, which is a combination of RS theory and BSSs. They also provide applications of BSRSs in decision making. Malik and Shabir [53] introduced the idea of rough fuzzy bipolar soft sets in 2019. Multi-criteria decision-making (MCDM) method is referred as a method used for scoring or ranking a finite number of alternatives by considering multiple criteria attached to the alternatives. MCDM concerns with evaluating and selecting alternatives that fit with the goals and necessity. The preference ranking organization method for enrichment evaluation (PROMETHEE) and the techniques for the order of preference by similarity to positive ideal solution (TOPSIS) are the two most known techniques developed to handle multi-criteria decision making problems. The biggest difference between PROMETHEE and other MCDM methods is the inner relationship of PROMETHEE during the decisionmaking process [50]. It is well adapted to the decision problems where a finite set of alternatives is to be outranked subjected to multiple conflicting criteria [6], [10], [68]. The PROMETHEE method is based on pairwise comparisons of alternatives with respect to each criterion. According to [71], the PROMETHEE has at least three advantages. The first advantage is its user-friendly outranking method. The second advantage is the success of PROMETHEE in applications to real-life planning problems. Another advantage of PROMETHEE lies on completeness of ranking. The PROMETHEE I and PROMETHEE II allow partial and complete ranking of alternatives, respectively. The PROMETHEE I is used to obtain partial ranking while PROMETHEE II is used for complete ranking. These two methods were developed in [9], [12]. On the other hand, the main TOPSIS concept measures the distance between each alternative and ideal solution. Hwang and Yoon [26] implemented the multiple criteria decision making method and applications. Such methods were based on crisp knowledge and could not accommodate information that was imprecise. In 2000, fuzzy version of TOPSIS method was suggested in Chen's [14] research work. Several TOPSIS related approaches have since been suggested and applied to various multiple criteria decision making problems. Chen and Tsao [16] suggested interval valued fuzzy set based method of TOPSIS. VOLUME 4, 2016 A. MOTIVATION Based on the above descriptions for MCDM and the basic principle of bipolar soft rough sets, this paper attempts to propose a novel approach to multi-criteria group decision making (MCGDM) problems by combining the bipolar soft covering based rough set with two traditional decision-making methods. In particular, a summary of motivations of this paper is provided as follows: (1) If we recap all of the preceding arguments, we can see that bipolar soft sets can deal with the bipolarity of information about specific objects using two mappings. The positivity of the information is handled by one mapping, while the negative is measured by the other. Given the link between rough sets and bipolar soft sets, one attempts to investigate the roughness of bipolar soft sets has been made: by Karaaslan and Caman [31]. 
This is the primary motivation for us to present and investigate a novel approach to bipolar soft set roughness using Bipolar soft covering-based rough sets (BSCB-RSs), as well as to discuss their application in the decision-making and able to describe the best and worst side in decision making. (2) To address the issue of data processing in decisionmaking, the superior performance of rough set theory has been demonstrated. Liang et al. [37] investigated a decisionmaking approach that combines the TOPSIS method with a decision-theoretic rough set. Zhang et al. [102] recently applied covering based fuzzy rough set models to solve the issue of company recruitment decision-making. In addition, several MCDM approaches focused on covering based fuzzy rough set theory have been investigated in the literature [100], [103]. This paper extends the PROMETHEE and TOPSIS approaches using covering based bipolar soft rough sets and applies it to the optimal earth dam power plant site selection problem in order to widen the application ranges of covering based fuzzy rough set theory in MCDM. (3) The proposed method not only considers the opinions of key decision-makers but also incorporates the past experiences by CB-BSR-approximations in actual scenarios. B. AIM OF THE PROPOSED STUDY The main goal of this study is to present another interesting and novel version of bipolar soft rough sets by utilizing BSCB-RSs. We highlight the article by the following pioneering work: • A novel concept known as BSCB-RSs is proposed. • Some important structural properties of BSCB-RSs are investigated in detail. • Two comprehensive MCGDM methods in the framework of BSCB-RSs is introduced and the validity of these approaches is also verified by a practical example. • The effectiveness of the proposed method is validated by comparing it with existing methods. C. OUTLINE OF THE PAPER The article has been organized in the following manner. Section 2 gives an overview of some basic ideas, which are required for the understanding of our research work. Section 3 starts by characterizing some bipolar soft covering-based bipolar soft operators. Further, we discuss the relationship between these operators and their properties. Moreover, based upon these operators, we proposed the notion of BSCB-RSs. The notion is further investigated by considering its important structural properties in detail. Section 4 proposes a new decision-making method to MCGDM problems based on the PROMETHEE method and the TOPSIS method. After that, we give an illustrative example of the proposed decision making technique to show that the technique can be effectively applied to some real-life problems in section 5. In section 6, a comparison analysis is made between the proposed model and some other well-known decision making techniques. At the last, section 7 concludes with a summary of the present work and a suggestion for further research. II. PERLIMINARIES In this section, we recall some essential notions related to rough set, soft set, soft rough set, bipolar soft set, bipolar soft rough sets and Soft covering based soft rough sets that would be accommodating in the upcoming discussion. Throughout this paper, we will use ℑ for an initial universe, E for set of parameters, C for a non-empty subset of the parameters set E and P(ℑ) for the power set of ℑ, unless stated otherwise. Definition 1: [59] Let ℑ be a non-empty finite universe, and R be an equivalence relation over ℑ. Then the pair (ℑ; R) is said to be Pawlak approximation space. 
If A ⊆ ℑ, then A may or may not be written as a union of some equivalence classes of ℑ. If A is written as a union of some equivalence classes, then A is called R−definable; otherwise it is called R−undefinable. If A is R−undefinable then, it can be approximated with the help of the following two definable subsets: (2) Equations (1) and (2) are called lower and upper approximations of A with respect to the equivalence relation R, respectively, where the equivalence class [a] R of an element a ∈ ℑ is the set consists of all objects b ∈ ℑ such that (a, b) ∈ R, that is, Moreover, the boundary region (area of uncertainty) of rough set is defined as: Bnd R (A) = R(A) − R(A). Definition 2: [46] Let ℑ be a set of objects called the universe, C be a non-empty subset of parameters (attributes). Then a pair ( F,C) is said to be a soft set over ℑ, where F is a mapping given by F : C −→ P(ℑ). Thus, a soft set over ℑ gives a parameterized family of subsets of the universe ℑ. For ε ∈ C, F( ε) is considered to be a set of ε-approximate elements of ℑ by the soft set ( F,C). Feng et al. established a link between soft set and rough set and introduced the idea of a new hybrid model of "soft rough set" based on a different granulation structure known as "soft approximation space". Definition 3: [28] Let P = ( F,C) be a soft set over ℑ. Then the pair P * = (ℑ, P) is called a soft approximation space. The lower and upper soft rough approximations of any set A ⊆ ℑ is defined as follows,respectively: A is said to be soft P * −definable; otherwise A is called a soft P * −rough set. Definition 4: [47] Let C be a set of parameters. Then, NOT set of C, denoted by ⌉C, is defined by ⌉C = {⌉ ε : ε ∈ C} where ⌉ ε = not ε for ε ∈ C. Definition 5: [66] The triplet ψ = ( F, H, C) is called a bipolar soft set over a universe ℑ, in which F, H are mappings given by F : 0. Thus, a bipolar soft set over ℑ gives two parameterized families of subsets of the universe ℑ and the condition F( ε) ∩ H(⌉ ε) = / 0, for all ε ∈ C, ⌉ ε ∈⌉C, is imposed as a consistency constraint. From now onward, set of all bipolar soft sets over the universe ℑ will be referred to by BS ℑ Definition 6: Karaaslan and Çagman [31] presented the concept of bipolar soft rough set, which is a combination of rough set and bipolar soft set. Definition 7: [31] Let ( F, H, C) ∈ BS ℑ . Then ϕ = (ℑ, ( F, H,C)) is said to be a bipolar soft approximation space. Based on ρ, the following four operators are defined for any A ⊆ ℑ : Which are called soft ρ-lower positive approximation, soft ρ-lower negative approximation, soft ρ-upper positive approximation and soft ρ-upper negative approximation of A, respectively. Definition 8: [84] A soft set ( F,C) over ℑ is called a full soft set if ε∈C F( ε) = ℑ. Definition 9: A full soft set P = ( F,C) over ℑ is called a covering soft set if F( ε) ̸ = / 0, for all ε ∈ C. Yüksel et al. [84] proposed soft covering based rough sets, which is a fusion of soft set and covering based rough set. Definition 10: [84] Let K P = ( F,C) be a covering soft set over ℑ. Then the pair (ℑ, K P ) is called a soft covering approximation space. Definition 11: [84] Let (ℑ, K P ) be a soft covering approximation space and t ∈ ℑ. Then the soft minimal description of t is defined as follows: Md We only need the basic properties of an object to describe it, not all of them. The goal of the minimal description notion is to achieve this. Definition 12: [84] Let ρ = (ℑ, K P ) be a soft covering approximation space. 
For a set A ⊆ ℑ, soft covering lower and upper approximations are, respectively, defined as: called the soft covering positive, negative, and boundary regions of A, respectively. In addition, if S ρ (A) ̸ = S ρ (A), then A is said to be soft covering based rough set; otherwise A is called soft covering based definable. The properties satisfied by soft covering lower and upper approximations can be found in [84]. III. BIPOLAR SOFT COVERING BASED SOFT ROUGH SETS From the concept of bipolar soft set, we know that a bipolar soft set is determined by the two set-valued mappings, one from a set of parameters to the power set of the universe and the other from a not set of parameters to the power set of the universe. In this section, we use a special kind of bipolar soft covering with rough set and establish a bipolar soft covering approximation space and present its basic properties. Remark 1: By using Definition 5 of the bipolar soft set, we The mappings F and H are given as below: Now, according to Definition 14, we can easily see that bipolar full soft set ℘ is a bipolar soft covering over ℑ. Definition 15: Let K ℘ = ( F, H, C) be a bipolar soft covering over ℑ. Then the pair (ℑ, K ℘ ) is called a bipolar soft covering approximation space. Definition 16: Let ρ = (ℑ, K P ) be a bipolar soft covering approximation space and t ∈ ℑ. Then the bipolar soft minimal description of t is defined as follows: We only need the basic properties of an object to describe it, not all of them. So we use the minimal description concept for this purpose. Definition 17: Let ρ = (ℑ, K ℘ ) be a bipolar soft covering approximation space. For a set A ⊆ ℑ, based on ρ, bipolar soft covering lower and upper approximations are, respectively, defined as: Which are called bipolar soft covering ρ-lower positive approximation, bipolar soft covering ρ-lower negative approximation, bipolar soft covering ρ-upper positive approximation and bipolar soft covering ρ-upper negative approximation of A, respectively. Generely, the two pairs given as: are called bipolar soft covering rough approximations of A ⊆ ℑ with respect to ρ. Moreover, if BS ρ (A) ̸ = BS ρ (A), then A is called bipolar soft covering based bipolar soft rough set, otherwise A is called bipolar soft ρ−definable. In addition, bipolar soft covering positive region and negative region of A is defined as, respectively: BS . Further, the boundary region (or area of uncertainty) of bipolar soft covering based rough set is defined as: As an illustration, according to Definition 17, For A 1 = {t 1 , t 3 , t 5 } ⊆ ℑ, bipolar soft covering ρ-lower positive approximation, bipolar soft covering ρ-lower negative approximation, bipolar soft covering ρ-upper positive approximation and bipolar soft covering ρ-upper negative approximation of A 1 , respectively, can be calculated as: So, the lower and upper approximations of A 1 given as: So, the lower and upper approximations of A 2 given as: , A 2 is a bipolar soft covering based definable set. Now, we investigate some properties of the bipolar soft covering lower and upper approximations. Theorem 1: Let ℘ = ( F, H, C) be a bipolar soft covering over ℑ, ρ = (ℑ, K ℘ ) be a bipolar soft covering approximation space and A, B ⊆ ℑ. Then the bipolar soft covering lower and upper approximations have the following properties: , Proof 1: From Definition 17, we can easily prove the properties 1, 2 and 3. Theorem 2: Let ℘ = ( F, H, C) be a bipolar soft covering over ℑ, ρ = (ℑ, K ℘ ) be a bipolar soft covering approximation space and A, B ⊆ ℑ. 
Then the bipolar soft covering lower and upper approximations have the following properties: , by using Definition of bipolar soft covering ρ-lower positive approxima- , by using Definition of bipolar soft covering ρ-lower negative approximation, we have and then give a counter example for reverse inclusion. Let u ∈ BS ρ + (A) ∪ BS ρ + (B), by using Definition of bipolar soft covering ρ-lower positive approximation, we have , by using Definition of bipolar soft covering ρ-lower negative approximation, we have 3) The proof of this assertion is similar to the proof of (1). 4) The proof of this assertion is similar to the proof of (2). A. MULTI-ATTRIBUTE GROUP DECISION MAKING BASED ON BSCB-RSS USING PROMETHEE TECHNIQUE. In this section, a multi-criteria group decision-analysis (MCGDA) approach, based on the promethee method combined with bipolar soft covering based rough set is presented to solve multi-criteria decision-making problems. Promethee is a rapid, flexible and progressive method for pair-wise comparison in MCDM. This method considers the outranking flows for evaluating alternatives. The concept is built on pairwise comparison between alternatives and calculates two outranking flows for each alternative, namely positive and negative outranking flows. The positive outranking flow gives a measure of how the alternative outranks all the other, while the negative outranking flow gives a measure of how the alternative is outranked by all the others. The higher φ + (a) is the better alternative when φ + (a) represents the power of a. On the other hand, the smaller φ − (a) is the better alternative when φ − (a) represents the weakness of a. In the following, we present an algorithm over the bipolar soft covering based rough set hybrid with Promethee. We apply this algorithm for selection of most optimal site for earth dam. Let ℑ = {t 1 ,t 2 , ...,t n } be the finite universe of objects, C = { ε 1 , ε 2 , ..., ε m } be the set of all possible parameters and ℘ = ( F, H, C) be a bipolar soft set over ℑ. Suppose that G = {p 1 , p 2 , ..., p k } is a set of expert persons, Y 1 ,Y 2 , ...,Y k are nonempty subsets of ℑ, represent results of primary evaluations of expert persons p 1 , p 2 , ..., p k , respectively and bipolar soft set D 1 , D 2 , ..., D r are the actual result that previously obtained for problems in different places or different times. ..., r). are said to be bipolar soft lower approximation matrix and bipolar soft upper approximation matrix, respectively. Here Definition 19: Let [BS] ρ + ,ρ − and [BS] ρ + ,ρ − be bipolar soft lower approximation matrix and bipolar soft upper approximation matrix, respectively. Then is called weighted covering based parameter matrix, where each entry is of the form .., r. Definition 21: Let [S] ρ + ,ρ − be the standardized covering based decision matrix. Then the corresponding normalized covering based decision matrix is defined as: Definition 22: Let [S] ρ + ,ρ − be the normalized covering based decision matrix. Then the corresponding average weighted normalized covering based decision matrix is defined as: where each entry u i j = η Definition 23: Let [U] ρ + ,ρ − = [u i j ] r×k be the average weighted normalized covering based decision matrix. Then determine the deviation by pairwise comparison by using the following equation and d j (a, b) denotes the difference between the evaluations of a and b on each criterion. 
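To make the outranking computation concrete, the sketch below implements a plain PROMETHEE-II step: pairwise deviations on each criterion, a simple "usual" preference function, the weighted multicriteria preference index, and the positive, negative and net outranking flows used to rank alternatives. The evaluation matrix, weights and preference function are invented for illustration; in the method proposed in this paper these inputs would come from the bipolar soft covering based approximation matrices constructed in the preceding steps.

```python
import numpy as np

# Hypothetical evaluation matrix: rows = alternatives (candidate sites), columns = criteria.
# All criteria are treated as benefit criteria here for simplicity.
X = np.array([
    [0.62, 0.40, 0.55, 0.70],
    [0.48, 0.66, 0.43, 0.52],
    [0.71, 0.35, 0.60, 0.44],
    [0.55, 0.58, 0.49, 0.61],
])
weights = np.array([0.3, 0.2, 0.25, 0.25])   # illustrative criterion weights summing to 1
n, m = X.shape

def usual_preference(d):
    """Usual criterion: full preference whenever the deviation is positive."""
    return 1.0 if d > 0 else 0.0

# Multicriteria preference index pi(a, b) = sum_j w_j * P_j(d_j(a, b)).
pi = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        deviations = X[a] - X[b]                      # d_j(a, b) on each criterion
        pi[a, b] = sum(w * usual_preference(d) for w, d in zip(weights, deviations))

phi_plus = pi.sum(axis=1) / (n - 1)    # positive outranking flow: how a outranks the others
phi_minus = pi.sum(axis=0) / (n - 1)   # negative outranking flow: how a is outranked
phi_net = phi_plus - phi_minus         # PROMETHEE-II net flow

ranking = np.argsort(-phi_net)
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: alternative t{idx + 1}, net flow = {phi_net[idx]:+.3f}")
```

The alternative with the largest net flow is ranked first, matching the complete ranking produced by PROMETHEE-II as described above.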
1) Proposed Algorithm In this section, we present the algorithm for the established method of considered multi criteria group decision making problem in section 4.1. Step 2: Construct D 1 , D 2 , ..., D r bipolar soft covering based Soft sets using the real results. Step 9: Determination of deviation by pairwise comparison. Step 10: Determine the multi-criteria preference index. Step 11: Calculate the net flow values and rank accordingly. 2) Case Study In this subsection, the bipolar soft covering based rough set model for selection of appropriate Dam site is applied in a numerical example. It is shown in fig 1. A decision maker group formed for this reason, consisting of a geographer, an energy engineer and a map engineer. Let the sites t 1 , t 2 , t 3 , t 4 , t 5 , t 6 be selected as the alternative for Earth dam site location. The decision maker group evaluate these alternatives and for selection of a suitable alternative we will use selection criterion. In order to determine effective factors in selecting an appropriate site, extensive studies were conducted and the most effective attributes (criteria and subcriteria) were selected. These attributes are shown in Fig 1. A brief explanation about the attributes is presented: ε 1 ) Topographical conditions: It is critical to have a secondary valley or stone abutments with proper topography around the main river while building a dam spillway. In addition, because the main river is U or S shaped, the length of tunnels, channels, and other water transfer systems to divert or transfer water from upstream to downstream during dam building and afterward is limited. In general, the best location for a dam reservoir and its body is where a vast valley with high walls connects to a narrow canyon with tenacious walls. ε 2 ) Hydrological: This criteria consists of four subcriterion, which is presented below. SC 1 ) River flow regime: At the dam location, the river's permanent or seasonal flow regime is critical. Seasonal rivers convey more silt and have poorer water quality, making water resource management more difficult owing to inaccurate water delivery into reservoirs. As a result, it is suggested that the flow be maintained indefinitely. SC 2 ) Annual yield: The yearly yield is the annual volume of water that passes through the cross section of the river in the dam site, and it plays a vital part in determining where the dam should be built. SC 3 ) Volume of reservoir: When the reservoir generated after dam building has larger volume, the surface area of the reservoir water increases, which has a greater impact on the climate, but it also increases the possibility for evaporation and water pollution. On the other hand, if the dam is built in a VOLUME 4, 2016 location where the surface area of the reservoir water will not be considerably affected by raising the volume of the reservoir water, (Because of the valley's steep slope), the height and hydrostatic pressure of the water will rise, which will benefit energy generation and downstream water transfer. However, the dam's body would be subjected to more force, and the structure would need to be raised and strengthened. As a result, the dam should be built in such a way that the reservoir capacity is determined by the aforementioned factors, such as leakage and other losses are optimal. SC 4 ) Probable maximum flood: The largest volume of water produced by thawing snow and ice or other atmospheric precipitation that is likely to occur in rivers is known as the probable maximum flood. 
ε 3 ) Lateral impacts: There are three subcriterion in this criteria, which are listed below. SC 1 ) Environmental impacts: Other factors that play a part in determining the dam site include changing weather conditions, vegetation, and wildlife. SC 2 ) Social impacts: The social consequences of population centre displacement and integration of different ethnic cultures as a result of the demolition of residential areas for dam construction, reservoir dewatering, and downstream dam water use should all be considered. SC 3 ) Political impacts: Dam construction purposes for decreasing political tensions, such as water supply for a community, preventing grievances, and immigration of people of a border city, should all be taken into account. ε 4 ) Damage: This criteria has two sub-criteria, which are listed below. SC 1 ) Dam body and reservoir: Environmental damages, such as the destruction of mines, historical monuments, agricultural fields, and residential areas; road, railway, and power line displacement; and changes in the path of oil and gas pipelines, telecommunication facilities, among other things, should be addressed. SC 2 ) Probable dam break: Material and moral damages caused by a possible dam collapse are essential factors to consider when choosing a dam site, and the dam should be built in an area where the amount of harm caused by a possible dam break is minimal. ε 5 ) Health dam site: The dam location must be in an area with few seams and tracks, as well as a low risk of tectonic activity such as earthquakes, landslides, and subsidence. Furthermore, greater results will be realised in the dam location with reduced permeability and liquefaction properties of soil and natural materials. Furthermore, the region's soil mechanical qualities (compaction, consolidation, and so on) as well as the type of geological layers in the region have an impact on reservoir water quality. Then NOT set of parameters of C is ⌉C, ⌉ ε ∈⌉C. Step 1: Primary evaluations of experts persons (geographer, energy engineer and map engineer) p 1 , p 2 and p 3 are: Step 2: Real results in five different periods are represented as bipolar soft covering over ℑ, The real result in D 1 period choose the set of parameters as: if ε =⌉ ε 5 The real result in D 2 period choose the set of parameters as: The real result in D 3 period choose the set of parameters as: The real result in D 4 period choose the set of parameters as: if ε =⌉ ε 2 The real result in D 5 period choose the set of parameters as: Step 3: Using the Definition 17, to calculate the operators BS D q (Y j ), BS D q (Y j ), for j = 1, 2, 3 and q = 1, 2, ...., 5. Step 5: Now we construct weighted covering based parameter matrix [W ] ρ + ,ρ − by using Equation (11), which is given as: Step 6: Compute standardized covering based decision matrix [S] ρ + ,ρ − by using Equation (12), we have VOLUME 4, 2016 Step 7: We construct normalized covering-based decision matrix [N] ρ + ,ρ − by using Equations (13) and (14), which is given as: Step 8: Now we construct average weighted normalized covering-based decision matrix [V ] ρ + ,ρ − by using Equations (15), which is given as: Step 9: We calculate the deviation by pairwise comparison by using Formula (16), which is given below. Step 10: Next, we calculate the multi-criteria preference index by using Formula (18). 
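The remaining step of the algorithm, Step 11, converts the preference index of Step 10 into a complete ranking through the positive, negative, and net outranking flows. A minimal sketch of that step is shown below; because the worked numbers of the case study are not reproduced here, the preference-index matrix used is a hypothetical stand-in for the one obtained in Step 10.

```python
import numpy as np

# Hypothetical multi-criteria preference index pi(a, b) for six alternatives
# (a stand-in for the matrix obtained in Step 10).
pi = np.array([[0.0, 0.6, 0.2, 0.8, 0.4, 0.6],
               [0.4, 0.0, 0.2, 0.6, 0.2, 0.4],
               [0.8, 0.8, 0.0, 1.0, 0.6, 0.8],
               [0.2, 0.4, 0.0, 0.0, 0.2, 0.2],
               [0.6, 0.8, 0.4, 0.8, 0.0, 0.6],
               [0.4, 0.6, 0.2, 0.8, 0.4, 0.0]])
n = pi.shape[0]

phi_plus = pi.sum(axis=1) / (n - 1)    # how strongly each alternative outranks the others
phi_minus = pi.sum(axis=0) / (n - 1)   # how strongly each alternative is outranked
phi_net = phi_plus - phi_minus         # PROMETHEE II net flow used for the complete ranking

for rank, i in enumerate(np.argsort(-phi_net), start=1):
    print(f"rank {rank}: t{i + 1}  (phi+ = {phi_plus[i]:.2f}, "
          f"phi- = {phi_minus[i]:.2f}, phi = {phi_net[i]:+.2f})")
```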
TOPSIS is a useful multi-criteria group decision making (MCGDM) technique for ranking of design alternatives and selection of the best alternative in concept evaluation process through computation of Euclidean distances. The aggregating function calculated in TOPSIS represents "closeness to ideal solution". TOPSIS uses vector normalization to make criteria of same units. The basic principle of TOPSIS is that the alternative that has been chosen as the best, should have the shortest distance from the positive ideal solution (PIS) and the farthest from the negative ideal solution (NIS). In this subsection, we apply bipolar soft TOPSIS method to solve proposed problems to make a comparison with bipolar soft Promethee method. The procedure of TOPSIS technique under bipolar soft covering based rough sets environment is explained as follows: As in the subsection of bipolar soft Promethee, Steps 1-8 have already been done in previous Subsect (A). ??. So we move on step 9-11. Definition 28: Let [U] ρ + ,ρ − = [u i j ] r×k be the average weighted normalized covering based decision matrix. Then the expressions The separation measurements of each alternative to NIS is calculated as: Definition 30: Let S ⊤ i and S ⊥ i be the separation measurements of the positive ideal solution and the negative ideal solution, respectively. Then the relative closeness of alternatives to ideal solutions (represented as ℘ i ) is defined as: 1) Proposed Algorithm In this section, we present the algorithm for the established method of considered multi criteria group decision making problem in section 4.2. Step 1-8: These steps have already been done in the previous Subsect(A). ??. Step 9: Find positive ideal solution (PIS) and negative ideal solution (NIS). Step 10: Calculate separation measurements of PIS S ⊤ i and NIS S ⊥ i for each alternative. Step 11: Calculate relative closeness ℘ i of alternatives to ideal solution and rank accordingly. 2) Numerical Example In Sect. IV-A2, the decision-making problems have presented using bipolar soft Promethee method. Here, we present these applications using bipolar soft TOPSIS method to take into account the comparison of bipolar soft Promethee method and bipolar soft TOPSIS method. Steps 1-8 have already been done in Sect. IV-A2. So we move on step 9-11. Step 9: The positive ideal solution (PIS) and negative ideal solution (NIS) by using the Equations (20) and (21) Step 10: The separation measurements of PIS and NIS for each parameter by using the Equations (22) and (23) are: Step 11: The relative closeness of alternatives to the ideal solution by using Equation (24) are which indicate that Site t 3 is the best site for earth dam. Figure 2 illustrates the visual representation of the site rankings. V. DISCUSSION AND COMPARATIVE ANALYSIS In this section, we address validity of the proposed method, advantages, and disadvantages, as well as a comparison of the proposed techniques to several existing techniques. A. VALIDITY OF THE PROPOSED MODEL: 1) As we all know, aggregation is a vital stage in classical group decision making approaches for gathering the preferences or opinions of all decision-makers. In our proposed decision making approaches, every decision-maker expressed their opinion as a Bipolar Soft set, and afterward, all opinions given by decision-makers are aggregated through the usage the Bipolar soft Covering based approximations, and then a compromise optimal proposal is acquired. 
So, the bipolar soft covering based rough set approach to MCGDM provides a different strategy for aggregating the preferences of decision-makers. Therefore, the proposed decision-making approaches (Promethee and TOPSIS) are valid and offer a novel technique and perspective for investigating GDM problems in real life. The basic idea of both techniques (Promethee and TOPSIS) is given below: i) Promethee (Preference Ranking Organization METHod for Enrichment Evaluations) is a family of outranking methods comprising Promethee I, II, III, IV, V and VI. Promethee I is a partial outranking method, while Promethee III to VI share the fundamentals of Promethee II with small variations in assumptions and methodology. In this article, we have used the Promethee II technique, which is a complete outranking method. It compares the alternatives pairwise on each criterion, quantifying the strength of preference of one over the other, and evaluates the alternatives through outranking flows: from the pairwise comparisons, two outranking flows are computed for each alternative, namely the positive and the negative outranking flow. ii) TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is a goal, aspiration and reference-level model. It measures how well the alternatives reach the stated goals and aspirations. The key principle of TOPSIS is to choose the solution that has the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution; Euclidean distances are used to measure the relative closeness of the alternatives to the positive and negative ideal solutions (a minimal computational sketch of this principle follows the list of advantages below). B. ADVANTAGES OF THE PROPOSED MODEL: In general, real-world MCDM and MCGDM problems arise in complicated environments with uncertain and imprecise data, which are hard to address. The proposed technique is especially appropriate when the data are complex, vague and uncertain, and in particular when the available data carry bipolar information supplied by the decision-makers. A few benefits of the proposed techniques (Promethee and TOPSIS) are listed below: i) The proposed approach considers the positive and negative aspects of each individual alternative in the form of a bipolar soft set. This hybrid model is more general and better suited to aggressive decision-making. ii) Classical Promethee and TOPSIS techniques do not provide a clear framework for assigning the weights, whereas our proposed techniques are effective for MCGDM problems in which the weight information of the criteria is completely unknown. iii) The proposed MCGDM technique is more effective for discrete data problems. iv) The proposed method takes into account not only the opinions of the key decision-makers but also previous experience, through bipolar soft covering approximations of actual scenarios. As a result, it is a more comprehensive method for interpreting the available information and, consequently, for making decisions using artificial intelligence. v) The proposed MCGDM techniques are simple to understand and may be applied to real-life decision-making situations.
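As referenced in point ii) above, the TOPSIS principle itself can be stated in a few lines of code. The following is a generic sketch of a crisp TOPSIS pass and not the bipolar soft covering based variant developed in Sect. IV-B; the decision matrix, the weights, and the benefit/cost designations of the criteria are illustrative assumptions.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Minimal generic TOPSIS pass (a sketch, not the paper's bipolar-soft variant).

    decision : (n_alternatives, n_criteria) matrix
    weights  : (n_criteria,) weights summing to 1
    benefit  : boolean mask, True where larger values are better
    """
    # Vector normalization so that criteria in different units become comparable
    v = decision / np.linalg.norm(decision, axis=0) * weights

    # Positive and negative ideal solutions (PIS, NIS)
    pis = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nis = np.where(benefit, v.min(axis=0), v.max(axis=0))

    # Euclidean separation measures and relative closeness to the ideal solution
    s_plus = np.linalg.norm(v - pis, axis=1)
    s_minus = np.linalg.norm(v - nis, axis=1)
    return s_minus / (s_plus + s_minus)

# Hypothetical data for six candidate sites and five benefit criteria.
decision = np.array([[0.62, 0.40, 0.55, 0.71, 0.30],
                     [0.48, 0.52, 0.61, 0.35, 0.44],
                     [0.75, 0.66, 0.58, 0.69, 0.72],
                     [0.33, 0.45, 0.40, 0.50, 0.38],
                     [0.59, 0.70, 0.47, 0.62, 0.55],
                     [0.41, 0.38, 0.66, 0.44, 0.61]])
closeness = topsis(decision, np.full(5, 0.2), np.array([True] * 5))
print("closeness:", np.round(closeness, 3))
print("ranking  :", [f"t{i + 1}" for i in np.argsort(-closeness)])
```

In the full method of Sect. IV-B, the matrix entering this step is the average weighted normalized covering-based decision matrix, so the normalization and weighting shown here would already have been carried out by Steps 1-8.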
C. DISADVANTAGES OF THE PROPOSED TECHNIQUES: Some minor limitations of the proposed techniques are discussed below: i) Although there are some differences between the optimal decision-making results (the optimal alternatives) and the ranking results determined by these two decision-making methods, this phenomenon is normal in decision-making theory. Decision-makers can select a method according to their actual requirements and interests. ii) These techniques have a complicated structure and involve a large amount of data in the form of bipolar information. Such large data sets are hard to handle because of the massive calculations involved, which are not natural to perform by hand. However, one could write a MATLAB (or similar) program to make these complicated calculations simpler. D. COMPARISON WITH SOME EXISTING METHODS: There are several approaches in the literature that can be used to solve MCGDM problems. Each of these MCGDM techniques has its own set of advantages and disadvantages, and the capability of each technique depends on the problem under consideration. In this section, we compare the proposed MCGDM technique with some existing MCGDM techniques in fuzzy and bipolar fuzzy environments, and we discuss the significance of the proposed MCGDM strategies. We provide a comparative analysis of the proposed strategy with soft covering-based rough sets [84], fuzzy soft sets [3], covering-based rough fuzzy sets [43], picture fuzzy sets [7], and generalized hesitant fuzzy rough sets [64]. All of these techniques have their own value in the literature. Comparing them with our proposed strategies, we note the following points. (i) The previously mentioned techniques cannot capture bipolarity in decision making, which is a fundamental aspect of human thinking and behaviour. (ii) Moreover, these techniques do not ensure harmony among the opinions of the decision-makers. (iii) The models presented in [7] and [43] are well known for their ability to solve some decision-making problems by describing the view of the decision-makers with a crisp number. They fail to handle some group decision-making problems because of the uncertainty of the objective world and the complexity of the decision-making problems. For example, several experts may disagree about the degree to which an element belongs to a set and be unable to compromise with one another: one prefers to assign 0.4, whereas another prefers 0.6. In this situation, a rough set model based on bipolar soft covering can be an excellent solution. (iv) When we compare our proposed result with the technique described in [4], we see that the optimal alternative in that method is obtained simply from the tabular form of bipolar soft sets, whereas the optimal alternative in our proposed model is obtained using the bipolar soft covering based rough approximations. (v) If we apply the recent approach proposed in [31] to our Example 5, we obtain the ranking among the alternatives summarised in the following table (columns: method, final ranking, best alternative, worst alternative), and the corresponding pictorial depiction is given in Figure 3. VI. CONCLUSION Rough set theory has emerged as a powerful theory with applications in numerous fields. On the other hand, bipolar soft sets are an appropriate mathematical model for dealing with both the uncertainty and the bipolarity of data. In this paper, we presented a general approach for the bipolar soft covering based rough bipolar soft set. Some algebraic properties of the bipolar soft covering approximations have been studied as well.
We discussed a decision-making problem whose information carries both uncertainty and bipolarity, and applied the bipolar soft covering approximations to resolve it. In the real world, decision-making problems arise in complex environments where competing systems of reasoning and ambiguous, imprecise knowledge must be taken into account. Multi-criteria decision-making approaches are used to confront such uncertainty, and PROMETHEE-II and TOPSIS are two of them. In this paper we have set out the procedure, methodology and significance of these two well-known MCGDM methods, formulated over the bipolar soft covering based rough bipolar soft set, and applied them to a site-selection decision-making problem. The proposed algorithms provide three key benefits over existing ones. Firstly, they handle the bipolarity of the data together with its uncertainty. Secondly, they accommodate the opinions of any (finite) number of decision-makers about any (finite) number of alternatives. Thirdly, alongside the best decision, they also yield the worst decision. Furthermore, a practical application demonstrates the validity of the methodology, and a comparative analysis of the proposed model is performed. Several topics remain for further exploration. Firstly, it would be worthwhile to study theoretical aspects of CB-BSRSs, such as attribute reductions [75], [76] and granular structures [18], [75]. Secondly, the combination of CB-BSRSs with other important traditional MCDM methods [24], [26], [49] is also a promising research direction. We will investigate these topics in future work.
Information scrambling at finite temperature in local quantum systems This paper investigates the temperature dependence of quantum information scrambling in local systems with an energy gap, $m$, above the ground state. We study the speed and shape of growing Heisenberg operators as quantified by out-of-time-order correlators, with particular attention paid to so-called contour dependence, i.e. dependence on the way operators are distributed around the thermal circle. We report large scale tensor network numerics on a gapped chaotic spin chain down to temperatures comparable to the gap which show that the speed of operator growth is strongly contour dependent. The numerics also show a characteristic broadening of the operator wavefront at finite temperature $T$. To study the behavior at temperatures much below the gap, we perform a perturbative calculation in the paramagnetic phase of a 2+1D O($N$) non-linear sigma model, which is analytically tractable at large $N$. Using the ladder diagram technique, we find that operators spread at a speed $\sqrt{T/m}$ at low temperatures, $T\ll m$. In contrast to the numerical findings of spin chain, the large $N$ computation is insensitive to the contour dependence and does not show broadening of operator front. We discuss these results in the context of a recently proposed state-dependent bound on scrambling. Scrambling refers to the way a closed chaotic quantum system delocalizes initially simple information such that it becomes inaccessible to all local measurements. Scrambling can be identified as a quantum analogue of the classical butterfly effect, as first discussed in a condensed matter context [8], and more recently explored in the context of holographic field theories and many-body systems such as the SYK model [9][10][11][12]. Scrambling can be studied for generic quantum systems by calculating out-of-time-ordered correlation (OTOC) functions, which, for geometrically local systems, gives rise to a state dependent velocity of information propagation-the butterfly velocity [13][14][15]. OTOC functions can be measured for engineered quantum many body systems in the lab, with many proposals [16][17][18][19][20][21][22][23][24] and subsequent experiments [25][26][27][28][29][30][31]. For quantum systems at the semiclassical limit, the deviation of an OTOC function from its initial value grows exponentially with time, with an exponent that can be viewed as a quantum analogue of the classical Lyapunov exponent λ L [9], although the connection to classical chaos is subtle [32,33]. Deforming the contour along which path integrals are evaluated is a general technique one can use to regulate quantities in field theory and it leads to different choices of OTOCs at finite temperature, based on the contour on the thermal circle used to define it. One particular choice of contour leads to a well-behaved version of the OTOC that obeys a bound [34], λ L ≤ 2π/β, where β is the inverse temperature. This bound was later understood in the more general context of the growth of operator complexity and thermalization [35,36]. However, exponents arising from other versions of OTOCs can have a strong dependence on the choice of contour [37,38]. In this work, we systematically study the temperature and contour dependence of OTOCs in generic quantum systems with spatial locality and a mass gap. Our motivation for this study comes from two directions. First, we want to understand possible contour dependence of OTOCs in a non-perturbative calculation. 
Second, we want to understand the temperature dependence of various characteristics of scrambling as a system is cooled below its mass gap. At high temperature, we indeed find contour dependence of the OTOC. At low temperature, where our expectation is that the physics is that of a weakly interacting dilute gas of quasiparticle excitations, we find that the rate of growth of scrambling is exponentially suppressed while the butterfly velocity is of order the sound speed. Technically, these results are obtained by studying a gapped spin chain at large size numerically and a field theory model analytically. The remainder of the introduction provides neccessary background material for our study. However, in a quantum system at a finite temperature, T, this norm can be evaluated in several ways. Let us denote ρ = e −βH /T r(e −βH ) as the thermal density matrix (β = 1/T is the inverse temperature). For any 0 ≤ α ≤ 1, is a Frobenius norm of the thermally smeared commutator ρ (1−α)/2 [W 0 (t), V x ]ρ α/2 , which encodes a notion of the size of operator spreading. Two choices of the squared commutator which have been studied in the literature, are the 'regulated' squared commutator, C r (t, x) = C 1/2 (t, x), and the 'unregulated' squared commutator, . When the expressions of the regulated and unregulated squared commutators are expanded, they contain terms which are thermally smeared versions of out of time ordered four point correlators of the form W 0 (t)V x W 0 (t)V x , evaluated on two distinct thermal contours, as shown in Fig. 1 a and b. In this work, we study these two squared commutators, and explore the difference in the physics that they capture [37,38]. B. Lyapunov exponent, butterfly velocity, and wavefront broadening The squared commutator in holographic models, or in quantum systems with a semiclassical limit, grows exponentially at early times with a 'Lyapunov exponent' λ L , C(t) ∼ e λ L t . In spatially local systems, the time argument can be replaced by the appropriate velocity determining the speed of information scrambling, called the 'Butterfly velocity' [13,14,39,40]. The butterfly velocity is state dependent analogue of the microscopic Lieb Robinson velocity [41]. However, interacting local quantum systems which are not in a semi-classical limit (that is, the number of local degrees of freedom is finite, and not large as in the case for systems with a semiclassical limit), show a qualitatively different behavior. As studies of random unitary circuits [42,43], stochastic local Hamiltonian spin models [44], and numerical studies on deterministic quantum spin models [15,[45][46][47] have shown, the near wave-front behavior of the squared commutator is, This behavior satisfies a ballistically growing and a broadening operator wavefront, where v B is the Butterfly velocity and p is the broadening coefficient. For p = 1, the broadening is diffusive, which is observed in the case of random unitary circuits [42,43]. This ballistic-diffusive form doesn't exhibit an exponential 'chaotic' behavior. Until now, most studies of broadening were done at infinite temperature. However, unlike the 'Lieb Robinson velocity' of local quantum systems, the 'Butterfly velocity' is a state dependent information spreading velocity, and hence is a temperature dependent quantity. Furthermore the Lyapunov exponent and butterfly velocity could depend non-trivially on the choice of the contour. 
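Both contours of Eq. (1) can be evaluated directly by exact diagonalization for very small systems, which makes the alpha-dependence concrete and provides a useful cross-check for approximate methods. The first sketch below is such a minimal illustration, not the large-scale MPO calculation described later in the paper: it builds a short mixed-field Ising chain with illustrative couplings and evaluates C_alpha(t, x) = ||rho^{(1-alpha)/2} [W_0(t), V_x] rho^{alpha/2}||_F^2 for the regulated (alpha = 1/2) and unregulated (alpha = 1) choices, with W a Pauli X at the first site and V a Pauli Z at site x.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Pauli matrices and a helper placing a single-site operator at site i of an L-site chain
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def site_op(op, i, L):
    return reduce(np.kron, [op if j == i else I2 for j in range(L)])

# Small mixed-field Ising chain (illustrative size and couplings, not production parameters)
L, J, hx, hz = 6, 1.0, 1.05, 0.5
H = sum(-J * site_op(Z, i, L) @ site_op(Z, i + 1, L) for i in range(L - 1))
H = H + sum(-hx * site_op(X, i, L) - hz * site_op(Z, i, L) for i in range(L))

def frac_power(rho, p):
    """rho^p through the eigendecomposition (rho is Hermitian and positive)."""
    evals, evecs = np.linalg.eigh(rho)
    return (evecs * np.clip(evals, 0.0, None) ** p) @ evecs.conj().T

def squared_commutator(t, x, beta, alpha, w_site=0):
    """C_alpha(t, x) = || rho^{(1-alpha)/2} [W_0(t), V_x] rho^{alpha/2} ||_F^2
    with W = X at w_site and V = Z at site x (cf. Eq. (1))."""
    rho = expm(-beta * H)
    rho = rho / np.trace(rho)
    U = expm(1j * H * t)
    Wt = U @ site_op(X, w_site, L) @ U.conj().T            # Heisenberg-evolved W_0(t)
    V = site_op(Z, x, L)
    comm = Wt @ V - V @ Wt
    A = frac_power(rho, (1 - alpha) / 2) @ comm @ frac_power(rho, alpha / 2)
    return float(np.real(np.trace(A.conj().T @ A)))

beta, t = 1.0, 2.0
for x in range(1, L):
    Cr = squared_commutator(t, x, beta, alpha=0.5)   # regulated contour
    Cu = squared_commutator(t, x, beta, alpha=1.0)   # unregulated contour
    print(f"x = {x}:  C_r = {Cr:.3e}   C_u = {Cu:.3e}")
```

Given C(x, t) on a space-time grid, the near-wavefront form of Eq. (2) can then be fitted to extract the butterfly velocity, the exponent, and the broadening coefficient p. The second sketch fits synthetic data generated from one common parametrization of that ansatz, with placeholder parameter values, purely to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_ansatz(X, lam, vb, p, c0):
    """One common parametrization of the near-front form:
    log C(x, t) ~ c0 - lam * (x - vb*t)^(1+p) / t^p, ahead of the front (x > vb*t)."""
    x, t = X
    u = np.clip(x - vb * t, 1e-9, None)
    return c0 - lam * u ** (1.0 + p) / t ** p

# Synthetic 'converged' data ahead of the wavefront (placeholder values, not spin-chain data)
rng = np.random.default_rng(0)
true = dict(lam=0.9, vb=1.6, p=0.8, c0=0.0)
times = np.arange(10.0, 60.0, 5.0)
t = np.repeat(times, 40)
x = np.concatenate([np.linspace(ti * true["vb"] + 2, ti * true["vb"] + 30, 40) for ti in times])
logC = log_ansatz((x, t), **true) + rng.normal(0.0, 0.3, size=x.size)

popt, _ = curve_fit(log_ansatz, (x, t), logC, p0=[1.0, 1.5, 1.0, 0.0])
print("fitted lambda = {:.2f}, v_B = {:.2f}, broadening p = {:.2f}".format(*popt[:3]))
```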
In this paper we explore these questions through a combination of numerical studies on quantum spin systems and analytical studies of tractable semi-classical field theory models. C. Summary of our results In this work we use a combination of numerical and analytical techniques to study the temperature and contour dependence of squared commutator in strongly interacting, gapped, local quantum systems. We do this firstly using a novel numerical technique based on matrix prod- [15,42,43,45]; but here we confirm the persistence of the broadening behavior even at low temperatures. For the regulated squared commutator we notice a strong temperature dependence of the broadening coefficient and butterfly velocity. We observe that at temperatures lower than the gap, β > m −1 , the butterfly velocity is consistent with a power-law ((βm) −a with a > 0) behavior. For the unregulated squared commutator, on the other hand, we observe that the butterfly velocity and the broadening coefficient have no observable temperature dependence, and in fact remain constant even as the temperature is tuned from β = 0 to β > m −1 . This confirms a strong contour dependence of the OTOC [37,38]. We also numerically study the contour dependence of ∂ t C (α) (t, x) and make a comparison with the chaos bound to demonstrate that the bound doesn't apply to these squared commutators. 2. While the MPO technique can access temperatures below the gap, it is challenging to access very low temperatures. In order to calculate the temperature dependence at low temperatures, in Sec. III, we calculate the behavior of the regulated and unregulated squared commutator in the paramagnetic phase of the 2 + 1D non-linear O(N ) model. This is a gapped strongly interacting theory for which we can analytically calculate the scrambling behavior at large N using a diagrammatic ladder technique. We find that the Lyapunov exponent is λ L ∼ e −βm /β, and the butterfly velocity is v B ∼ (βm) −1/2 at low temperatures such that β >> m −1 . This shows that the butterfly velocity has the same scaling as the speed of sound of semiclassical massive particles. The field theory calculation can't, however, reproduce the broadening behavior or the contour dependence, indicating that finite N corrections need to be taken into account for those features. 3. In Sec. IV, we summarize our results and compare the numerical and analytical approaches. We discuss the relation between the temperature dependence of butterfly velocity obtained in this paper with a recently derived temperature dependent bound on butterfly velocity [47]. The bound is not sensitive to the contour dependence, and we show that it is consistent with temperature dependence of the butterfly velocities observed in Sec. II and III. OF SCRAMBLING We now numerically study scrambling in a spatially local quantum system, consisting of tensor product of finite dimensional local Hilbert spaces, like spins on a lattice. The Hamiltonian is assumed to be a sum of geometrically local terms, and the lattice has a well defined position label. Operators acting on vectors in a Hilbert space H can be viewed as vectors on a 'doubled' Hilbert space H L ⊗ H R . Here the tensor product structure refers to the two copies -'left' and 'right'of the state Hilbert spaces. We introduce the notation |..) to denote the operator as a vector. 
A local operator acting on the 0 position in the lattice, |W 0 ), can be time evolved in the Heisenberg picture, One can now probe the evolved operator using a second local operator at a position x by constructing its commutator, The squared commutator can be obtained by squaring this operator which measures the extent of quantum information scrambling in the system. The α dependent squared commutator defined in Eq. 1 can be expressed as a norm of an operator state, A. Model and numerical method We consider the mixed field quantum Ising model, with E 0 = 4J 2 + 2h 2 x + 2h 2 z , on a one dimensional lattice. The X and Z matrices are the usual Pauli matrices. The parameters chosen are, J = 1, h x = 1.05, h z = 0.5. Time is measured in the units of J −1 = 1. This is a gapped system, and the spectral gap between the ground state and the first excited state is ∼ 1.13 as extracted from small size exact diagonalization. We want to calculate C u,r (t, x) for large system sizes and upto long times, and we employ the Matrix product operators (MPO) based technique to time evolve operator states which extends the time dependent density matrix renormalization group (t-DMRG) technique [48][49][50][51] to superoperators [15]. We first time evolve the local operator W by doing time evolution using superoperator H ⊗ I − I ⊗ H * on the operator state, following Eq. 3. We also obtain |ρ) by evolving the identity |I) operator state in imaginary time. Now, we can construct the operator state |O α (t, x, β)) as defined in Eq. 5, for α = 1/2(1), and its norm squared is the required (un)regulated squared commutator. In the MPO based method, at each Trotter step, we must truncate the MPO to a fixed bond dimension, thereby introducing errors. However, we will demonstrate that our numerical procedure converges (for small values of the squared commutator) at large system sizes (L ∼ 200) and upto long times t ∼ 100, even at low temperatures, which makes it a powerful method to study the temperature and contour dependence of quantum information scrambling. We consider a L = 200 spin chain with the mixed field Ising Hamiltonian as in Eq. 6. We start with an operator X 20 , a Pauli X operator localized at the site 20, and construct the squared commutator with Z operators at all sites of the chain. We perform the MPO-TEBD method with Trotter steps, δt = 0.005 for time evolution (to generate X(t)) and δβ = 0.05 for imaginary time evolution (to generate ρ), for bond dimensions χ = 4, 8 (regulated) and χ = 8, 16 (unregulated). To calculate the regulated and unregulated squared commutators, we need to construct the MPOs |O 1/2 (t, x, β)) and |O 1 (t, x, β)), as defined in Eq. 5, respectively. For |O 1/2 ) we need to perform two MPO multiplications, ρ 1/4 → [X 20 (t), Z x ]ρ 1/4 → ρ 1/4 [X 20 (t), Z x ]ρ 1/4 , while for |O 1 ), we need to perform one MPO multiplication, ρ 1/2 → [X 20 (t), Z x ]ρ 1/2 . The details of the numerical implementation, which include a comparison to exact diagonalization, discussions on convergence with bond dimension, and the fitting procedure, are provided in App. A. A heuristic justification of why the MPO approximation works is as follows -it was shown in [15] that the commutator [X(t), Z x ] has a small operator entanglement outside the light-cone. It is also well understood that the thermal density matrix ρ satisfies an area law in mutual information [52], and hence is expected to be reliably approximated by a low bond dimension matrix product operator. 
These two arguments imply that the operator |O α (t, x, β)) as defined in Eq. 5, which is an MPO multiplication of powers of ρ and the commutator [X(t), Z x ], should have a small operator entanglement outside the light-cone (i.e. when the squared commutator is small), and hence can be well approximated by a low bond dimension MPO. As has been pointed out previously, in [15,46,53], the MPO-TEBD method can capture the qualitative features of scrambling only if the scrambling data has converged with bond dimension. We ensure that all our further analysis is done on scrambling data only in the spatio-temporal domain where it has converged with bond dimension. We plot the contours of the squared commutator in Fig. 2, and demonstrate that the contours converge very well for small values of the squared commutator. The shape of the contours, where the data has converged, show that the wavefront propagates ballistically with a velocity. B. Broadening of the wavefront Without any numerical fitting, we demonstrate the broadening behavior of the operator wavefront even at low temperatures in the Fig. 3. We extract the spatial separation δx between two chosen contours of the log C r , and plot its time dependence in the insets of Fig. 3. A positive (and an increasing) slope implies a broadening behavior. In Fig. 3, we show data for the regulated case, but a similar study for the unregulated squared commutator also demonstrates a broadening behavior. Thus, the Figs. 2 and 3 together show that the early time (before the light-cone is reached) behavior of the squared commutator has a ballistic growth and a broadening wavefront. In [15,44,54], it was argued that the squared commutator, near the wavefront, when C(x, t) << 1, can be captured by the following ansatz, One can identify the broadening coefficient p as, We now fit our data to the ansatz in Eq. 7 to extract the Lyapunov exponent, butterfly velocity and broadening coefficient. C. Temperature dependence of butterfly velocity We extract the butterfly velocity, velocity dependent Lyapunov exponent and the broadening coefficient from the obtained numerical data by fitting them to the near wave-front ansatz in Eq. 7. In Fig. 4a, we plot fitted v B (β)/v B (0) as a function of β for the unregulated case, and see that the fitted butterfly velocity has almost no discernible temperature dependence. In Fig. 4b, we plot the same for the regulated case, and notice a strong temperature dependence. The low temperature behavior is consistent with a power law decrease in the butterfly velocity as a function of β, as is shown in the inset of Fig. 4b. In Sec. III, we show that at the low temperature limit of an analytically tractable field theory model with a mass gap m, the butterfly velocity has a temperature scaling which is the same as the equipartition behavior -1/βm. The asymptotic low temperature behavior in the MPO calculation (even though the temperatures we access here are not very low compared to the spectral gap) is close to the 1/βm behavior, as is demonstrated in Fig. 4b. In App. A, we also study the temperature dependence of the broadening coefficient p. In Fig. 15, we show that p for the unregulated case has a very weak dependence on temperature and remains practically constant as the temperature is lowered. The regulated case, however, has an increasing trend for p with decreasing temperature. D. 
Contour dependence and chaos bound For a symmetrically defined out of time ordered correlation function, there exists the Maldacena-Shenker-Stanford (MSS) chaos bound λ L ≤ 2π/β [34]. The symmetric OTOC is defined as, This is related to the regulated squared commutator, as the C r (t, x), when expanded, In [34], it was proven that the following bound exists, Given this result, one might conjecture that the related quantity ∂ t log C r (t, x) also satisfies the same bound. To study this, we can calculate ∂ t log C r (t, x) along different 'rays' x = vt [45]; if the near wavefront scrambling ansatz (Eq. 7) is satisfied, then ∂ t log C u,r along a ray of velocity v is . At sufficiently large v, this will violate the chaos bound. In Fig. 5, we plot the ∂ t log C r,u (t, x), for a fixed 'ray' x = t, obtained from fitting of the unregulated and regulated cases to the ansatz, as a function of β and notice that the unregulated case is practically constant, and can violate the bound at lower temperatures. We confirm this without numerical fitting, in App. B, Fig. 16. In App. B we also study ∂ t log C r,u (t, x = vt), as a function of 'ray' velocity v. We find that at high ray velocities v, both ∂ t log C r (t, vt) and ∂ t log C u (t, vt) violate the bound. This shows that the MSS bound doesn't hold for the squared commutators we considered. E. Summary of findings from the MPO numerics By studying squared commutators for large-sized, gapped spin chain which is spatially local, and has finite dimensional local Hilbert spaces, we got three distinctive features. First, the spatial locality leads to a ballistic wavefront propagating at the butterfly velocity, which has distinct temperature scaling for the regulated and unregulated cases. In the unregulated case the velocity is constant, while for the regulated case, the velocity decreases with temperature. Second, the wavefront broadens with time for both contours, and thus the squared commutator doesn't have pure exponential growth. Third, there are numerical indications that the chaos bound is not satisfied for these squared commutators. Can we explain these behaviors using an analytically tractable model? In particular, can we understand the low temperature limit which is not accessible in the spin chain numerics? We explore that in the next section, where we consider a non-linear O(N ) model in 2 + 1D, which is spatially local, and solvable at large N . We study the scrambling behavior at low temperatures for the gapped phase of the model, and find that the butterfly velocity indeed varies as T /m at low temperatures. However, we will find that the field theory calculation doesn't show contour dependence or wavefront broadening. that dimensionality will not affect qualitative features of the temperature and contour dependence. The critical phase diagram [55] of this model is shown in Fig. 6. We analyse this model using Ladder sum techniques developed in [14,56] (see also [57-60]), and study both the temperature and contour dependence of the squared commutators. The real time lagrangian for this theory is given by, The action is given by x L, where the space-time integration x is over 2 + 1D. We have set the speed of light c and to 1. The parameter g (which determines the bare mass) can be tuned across a quantum critical point that occurs at g = g c , and v is the self-interaction coupling constant. We consider the strong coupling (large v) and large-N limit. 
In [14], scrambling behavior was studied at the critical point g c , by evaluating the regulated squared commutator using a perturbative ladder sum calculation with 1/N as the small parameter [14,56]. Following the diagrammatic techniques used in these studies, we study scrambling on the paramagnetic phase of the model at g > g c , where there are quasiparticle-like excitations with finite bare mass m. We study the temperature dependence of the scrambling in the low temperature limit βm >> 1. The main goal of this section is to analytically obtain temperature dependence of the butterfly velocity at low temperatures. We didn't have access to very low temperatures in Sec. II, and we intend to explore the regime βm >> 1 using this field theory model. The generalized squared commutator in different contours given in Fig. 1 is given by, The regulated and the unregulated squared commutators are given by C r = C 1/2 , and C u = C 1 , respectively. We summarize the results of this section before showing the explicit calculations. Using the ladder-sum calculation, we find that both the regulated and unregulated squared commutators have the following early time behavior, where the 'Lyapunov' exponent, λ 0 ∼ e −βm /β, and the butterfly velocity, v B ∼ (βm) −1/2 . This implies that at low temperatures, the butterfly velocity has the same temperature scaling as the speed of sound (which also scales as (βm) −1/2 ) of the semi-classical gas of dilute quasiparticle excitations of the paramagnetic phase of the O(N ) model at low temperature. A. Basic diagrammatics and low temperature relaxation rate We introduce auxiliary Hubbard Stratonovich (HS) field λ to solve the interacting problem. The Euclidean Lagrangian we consider is The HS field λ is chosen so that it generates a zero temperature mass, m, such that, − λ √ N = m 2 . The HS field also acts as a Lagrange multiplier, fixing (at large N), φ 2 a = N g . At finite temperature T , the constraint imposed by the HS field is Here, and in the rest of the paper, p stands for At finite temperature, the mass will be modified, as a function m(β). We restrict ourselves to low temperature, assuming the hierarchy of scales Λ >> m >> β −1 . This implies m(β) ≈ m, i.e., the thermal mass is approximately the same as the bare mass. The perturbative calculation of the squared commutator can be set up using the basic ingredients -the real time retarded and Wightman propagators of the fields φ a and the HS field λ. The retarded propagators are identified as horizontal lines, while the Wightman propagators are denoted as the vertical lines in the diagrams (in momentum space). For the φ field, bare Euclidean propagator in imaginary time where, ρ is the thermal density matrix, ρ = e −βH /(Z = T r(e −βH )). The retarded propagator is In the Fourier space, they are related by analytic continuation of the Matsubara frequencies, G R (ω, k) = −G(iω n → ω, k). We can calculate and denote the retarded bare propagator as, The spectral function is defined as The bare φ spectral function is given by, The generalized Wightman function is defined as, By going to the spectral representation, we show in App. C, For the λ field, the bare Euclidean propagator is G At infinite v, one can dress the λ propagators as shown in Fig. 7. 
In that case, where Π is the one loop φ bubble, The retarded polarization bubble is given by analytic continuation, The resummed retarded λ propagator is then denoted as, From the λ spectral function, A λ (ω, k) = −2Im[G R (ω, k)], we can define the generalized λ Wightman function, We need to dress the bare φ propagator, for which we need to calculate the self energy as given in where Σ R is the retarded self energy. In App. D and App. E we calculate the polarization bubble ( Fig. 7) and the self energy ( Fig. 8) respectively, in the low temperature regime, βm >> 1. From the self energy, we can obtain the relaxation rate of φ quasiparticles at momentum q, which is defined as, In App. E, we demonstrate that at q = 0, the inverse lifetime τ −1 φ = Γ q=0 [61], can be evaluated at low temperature, For general q, we have, where, R (1/2) 1+ (k, q) is given in Eq. 64 in App. E. B. Ladder sum calculation We finally calculate the regulated squared commutator, given in Eq. 13 perturbatively in 1/N , using the ladder-sum rules described in [14], which we will extensively use. The calculation boils down to solving a Bethe Saltpeter equation in momentum space for the out of time ordered 4 point function, as shown in Fig. 9. There are two sides of the ladder, which are connected by 'rungs' -which are the Wightman functions. The first diagram on the RHS of Fig. 9 is the 'free' x)] 2 , which doesn't have any exponential in time behavior, hence is not important for diagnosing chaos. There are two types of rungs -the Type I and Type II rungs correspond to the second and third diagram on the RHS of the top line in Fig. 9 respectively. The expressions for the two rung contributions can be easily written down from the diagram; for example, the Type I rung can be expressed as, The result for the Type II rung is very similar, with the replacement G (α) ω, p , p), where, We set up the Bethe Saltpeter equation by defining a function f (ν, k; ω, p), such that, As was shown in [14], it is convenient to consider the "on-shell" ansatz for f (ν, k; ω, p), We can approximate the product of the retarded Green functions by their most singular (in ν) terms (for small k, such that Γ k−p ≈ Γ p ), Further, we have, p − k−p ≈ k.∇ p p , and for small p, ∇ p p ≈ p/m. The Bethe Saltpeter equation can now be written as [14], where, , and, The inverse life-time Γ p was defined in Eq. 29. Recall α = 1/2 refers to the regulated case, while, α = 1 refers to the unregulated case. Because of the spectral relation in Eq. 21, we have, (ω). Thus, the kernel functions are also related simply as, R 1,2 (p , p) = e β( p − p)/2 R (1/2) 1,2 (p , p). We calculate the kernel functions from the Type I and Type II rungs, R we get the following low temperature approximation, We can extract the temperature scaling of the kernel integration, by rescaling p, p → with the kernel matrix defined as, We create a discrete 2D grid of rescaled non-dimensionalized momenta, with a hard cutoff of Λ = 1. This is justified as the kernel matrix is exponentially suppressed in |p − p| 2 . We want to find the temporal behavior of C r,u (t, x). We can thus replace −iν → ∂ t in Eq. 38 and solve the matrix equation for its eigenvalues. If there are real positive eigenvalues, we can infer that there is an exponential growth in the regulated squared commutator. We denote the leading eigenvalue as λ r,u L (k). Temperature scaling of the butterfly velocity First, let us restrict to k = 0. From Eq. 38, we have, λ r,u L (k = 0) ∼ e −βm /βN . 
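Before quoting the numerical results, it may help to spell out the numerical step itself: the rescaled momenta are discretized on a two-dimensional grid with a hard cutoff, the kernel matrix entering the matrix equation (Eq. 38) is assembled on that grid, and its largest eigenvalue is extracted. The sketch below illustrates only this step and does not reproduce the physics: the Gaussian kernel and the quadratic diagonal decay term are placeholders for the exact rung functions and for Gamma_p, so the resulting eigenvalue is purely illustrative and is measured in units of Gamma_0.

```python
import numpy as np

# Discretize the rescaled momenta on a 2D grid with a hard cutoff |p_i| <= 1
npts = 15
axis = np.linspace(-1.0, 1.0, npts)
px, py = np.meshgrid(axis, axis, indexing="ij")
p = np.stack([px.ravel(), py.ravel()], axis=1)           # (N, 2) grid of momenta
dp = (axis[1] - axis[0]) ** 2                            # area element of the grid

# Toy kernel: a Gaussian suppression in |p - p'|^2 stands in for the rung functions,
# and the momentum-dependent diagonal term stands in for the decay rate Gamma_p.
diff2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)
K = dp * np.exp(-4.0 * diff2)                            # coupling between grid points
M = K - np.diag(0.5 * (1.0 + (p ** 2).sum(axis=1)))      # growth-rate operator acting on f(p)

# The growth exponent (in units of Gamma_0) is the leading eigenvalue; a real positive
# value signals exponential growth of the squared commutator.
eigvals = np.linalg.eigvals(M)
leading = eigvals[np.argmax(eigvals.real)]
print("leading eigenvalue:", leading)
print("real and positive :", abs(np.imag(leading)) < 1e-10 and np.real(leading) > 0)
```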
By numerically finding the largest eigenvalue of the matrix equation we assert that the leading eigenvalue is always real and positive, leading to an exponential growth in the squared commutator. The details of the numerical computation are given in Appendix H, and the results for both the regulated and the unregulated cases are demonstrated in Fig. 10. Furthermore, the relevant inverse time-scale is also given by Γ 0 = e −βm /βN , (Eq. 28). Hence, we can rescale the Bethe Saltpeter equation by this scale, and introduce a rescaled external momentum, u = k/ √ βmΓ 0 , and a rescaled timẽ The matrix equation can be now recast as, For small u, the eigenvalues of this matrix equation can be approximated bỹ where, χ r,u,± k is the eigenvector of the matrix eigenvalue in Eq. 38. If there are no singularities in χ r,u,± k , we can assume the two terms in the integral depends only on the saddle points of the exponents. Recalling u = k/ √ βmΓ 0 , the two saddle points are given by, When C r,u (t, x) is evaluated, one of the terms will be exponentially suppressed in x compared to the other. Keeping only the leading term, we get, The first term comes from the pure exponential growth that was present for the u = 0 case, and the second term is reminiscent of the broadening form of the squared commutator in Eq. Since λ 0,2,i ∼ Γ 0 , we get the following temperature dependence of the butterfly velocity, v r,u B ∼ 1 βm . (46) Note that this is the same scale as the speed of sound of the ideal classical gas at finite temperature. Hence the butterfly velocity from the regulated squared commutator of this essentially classical gas has the same temperature scaling as the speed of sound. Furthermore, the particular temperature scaling 1/βm of the butterfly velocity arises because the thermal scale is the appropriate scale to non-dimensionalize the momenta, and doesn't depend on the exact form ofλ L (u). From the numerically obtained eigenvalues, we can see from Fig. 11, that the butterfly velocity from regulated and unregulated squared commutators are the same at low temperatures, This shows that the ladder calculation is insensitive to contour dependence. At fixed t, for a fixed difference of C r,u (t, x), one finds from Eq. 44 that the spread = x−v B t ∼ constant. This implies that this form of the squared commutator doesn't have a broadening behavior. A similar exercise for the spin chain result in Eq. 7, would show a time dependent spread, ∼ t p/(p+1) , implying broadening. In deriving these results, we assumed that the integral expression of the squared commutator in Eq. 42 is dominated by the saddle point contribution. In [60], it was noted that OTOCs obtained from ladder sum calculations generically have a pole in momentum space wherever λ L (k) = 2π/β, However, in the O(N ) theory, the chaos exponent λ L (k) ∼ 1/N is N suppressed, hence these poles occur at parametrically large values of the momentum. Provided that x/t is N -independent, the saddle point momentum is always closer to the real axis than the pole and hence controls the integral. For example, as we have seen from the k dependence of λ L (k) in Fig. 19 in Appendix. H, if λ L (k = i|k|) ∼ λ 0 β|k| 2 /m at large imaginary k, then the closest pole in the upper half plane would be at |k| ∼ m β N λmax λ 0 . This momentum is very large due to large N and the large ratio λ max /λ 0 . C. 
Summary of findings from the field theory calculation In this section, we studied the temperature and contour dependence of squared commutator in a solvable large N local model using the ladder technique. We find that our analysis can describe the temperature scaling of the butterfly velocity. However, it is insensitive to the contour of thermal ordering. This is not unexpected, as the ladder method is not expected to exhibit contour dependence [62]. It also doesn't capture the broadening behavior that was observed in Sec. II. The field theory model differs from the spin chain numerics in two ways -the number of spatial dimensions, and in the fact that the spin chain has finite local Hilbert space unlike the field theory model, which is solvable at large N -an effectively classical description. It is thus likely that the broadening and the contour dependence are sourced by quantum fluctuations due to the finiteness of the local Hilbert space [44], which is not captured in this calculation. IV. DISCUSSIONS In this paper we have studied the temperature and contour dependence of quantum information scrambling for local gapped interacting systems in two different models and for a wide range of temperatures. We first introduced a tensor network based technique to calculate both regulated and unregulated squared commutators in quantum spin chains at temperatures ranging across the spectral gap. For the regulated case, the butterfly velocity decreases with lowering temperature, and is consistent with a power law v B ∼ β −a for a > 0 at intermediate-to-large β. We also observe a strong contour dependence, and point out that the butterfly velocity obtained from the unregulated squared commutator remains insensitive to the temperature variation. In fact, a careful study of ∂ t C(t, x) shows that the chaos bound cannot be generalized away from the special contour ordering used to prove it. To get an analytical handle on local gapped systems at temperatures lower than what can be accessed in the spin chain numerics, we use a perturbative field theoretic ladder sum technique, and calculate the temperature dependence of the squared commutator in the paramagnetic phase of the O(N ) model. There we confirmed that the characteristic speed of information scrambling at low temperature is proportional to the speed of sound of a classical gas, i.e. v B ∼ β −1/2 , confirming the intuition that the low temperature state can be accurately modeled as a weakly interacting dilute gas of massive quasiparticles. However, the scrambling in this model is insensitive to the contour, and also doesn't have the broadening feature. The strong contour dependence we observe in our spin-chain numerics is in the spirit of the results from previous Schwinger-Keldysh calculations in [37,38], which showed similar contour dependence. Our result for the strongly interacting quantum spin chain compliments their pertur-bative arguments. These results taken together suggest that the unregulated case accesses high energy modes even at low temperatures, thereby remaining insensitive to the effects of temperature. Although we did not find such behavior in the O(N ) model at leading order in 1/N , we expect higher order corrections will modify this conclusion since there are multiple energy scales in the problem in addition to temperature. The numerical study also reveals the existence of a wave-front broadening effect that persists even at low temperatures. 
This feature is not captured in the field theory calculations, and remains an interesting theoretical challenge for the future. As was suggested in [44], quantum fluctuations due to the finiteness of the local Hilbert spaces will play a significant role in the broadening behavior. Using Lieb Robinson [41] bounds, it has recently been demonstrated [47] that locality and short ranged correlations imply temperature dependent bounds on the butterfly velocity defined from the unregulated squared commutator. In App. I, we review the derivation of this bound and extend it to the regulated case. In particular, it can be shown that the butterfly velocity (obtained from either unregulated or regulated cases) obeys the bound, These bounds are consistent with a constant butterfly velocity at low temperatures v B ∼ constant (unregulated case from spin chain numerics) and with a butterfly velocity proportional to a power of temperature v B ∼ β −a for a > 0 (regulated case from the spin chain dynamics and field theory calculation, with a = 1/2). The existing bounds are contour independent and hence cannot constrain the contour dependence. The strong contour dependence that we observe has non-trivial implications for temperature dependent scrambling studies in future experiments. Our work shows that the regulated and the and χ = 16. The left and the right figures correspond to β = 0 (c) and β = 2 (d) respectively. Even at the low temperature, the data is seen to be converged for the range −50 < log C u < −15. of C down to ∼ e −60 . In order to demonstrate the convergence of the obtained squared commutator with bond dimension, we plot the log of the regulated and unregulated squared commutators as a function of time for different spatial differences in Fig. 13. Even without numerical fitting, it is clear from Fig. 13 that the regulated squared commutator has a strong temperature dependence, while the unregulated squared commutator is much less sensitive to temperature even when the temperature is tuned from β = 0 to β = 2 > m −1 , where the mass is the spectral gap ∼ 1.13. It is also seen that the early time data converges well with bond dimension. As has been noted before in [53], the qualitative lightcone behavior of the unconverged data obtained from the MPO method can be qualitatively different; hence for all our analysis and fitting we only use numerical data which are shown to converge. We fit the converged data using least squared error method to the near wave-front ansatz of Eq. 7. The goodness of fit is studied in Fig. 14, where the data collapse to the fitted model is shown at different temperatures. The unregulated squared commutator was studied using a similar numerical technique in [47]. Our results indicate that the butterfly velocity obtained from the unregulated squared commutator is constant as function of temperature, even at temperatures lower than the gap, in contradiction with the indicated result from [47]. We checked the case for the [Z(t), Z] type squared commutators as well, and our results are the same for both cases. In [47], the fitting was done for a much smaller spatio-temporal region 20 < x < 45 and 1 < t < 5 (in our units), and for a much smaller range log C u > −22, as compared to the situation considered here. We also study the temperature dependence of the broadening coefficient obtained from the fitting in Fig. 15a (regulated) and Fig. 15b(unregulated). For the unregulated case, we see a fairly constant p which is insensitive to decreasing temperature. 
The regulated case shows an increasing trend with decreasing temperature. B. CONTOUR DEPENDENCE AND CHAOS BOUND We analyse in detail the contour dependence of ∂ t C u,r , as was done in Sec. II D. In Fig. 16, we sketch how ∂ t C u,r is found without numerical fitting. We first pick out data along a 'ray' x = t, wherever the squared commutator has converged, and study ∂ t C u,r numerically. In Fig. 16b the averaged ∂ t log C u,r along this ray is plotted as a function of β, and compared against the bound on chaos. The result is similar to Fig. 5, which was obtained by fitting to the near wavefront ansatz. Given the constancy of the unregulated case, the chaos bound could be violated at lower temperatures. These results are for a particular ray x = t, and as a function of β. We can also study ∂ t log C u,r as function of the ray velocity v, where x = vt, for a particular β. If the near wavefront scrambling ansatz (Eq. 7) is satisfied, then ∂ t log C u,r along a ray of velocity v is given As v is increased beyond the v B , the near wavefront ansatz predicts that the chaos bound can be violated. We test this numerically in Fig. 17, and we see that indeed ∂ t log C u,r (t, vt) deviates from its near ansatz prediction at higher v. We also compare ∂ t log C u,r (t, vt) against the chaos bound as a function of ray velocity v in Fig. 18, and see that for high ray velocities, the bound is violated for both the regulated and unregulated cases. Note however that the analysis on the data is done only on the domain where the data has converged and also lies along the rays -severely restricting the domain on which numerical differentiation can be reliably done to obtain ∂ t log C u,r (t, vt). C. SPECTRAL REPRESENTATION AND THE GENERALIZED WIGHTMAN FUNCTION From the definition of the generalized Wightman function in Eq. 20, we go to the Fourier space, and expand in terms of many body eigenstates |n with energy E n and momentum P n , In Heisenberg representation, φ(t, x) = e −iP x e iHt φ(0, 0)e iP x e −iHt . This allows us to write the spectral representation of the generalized Wightman function, The spectral function can be similarly expanded in the spectral representation, Comparing the two spectral representations, we get the following relation, T=0 At T = 0, the polarization bubble can be evaluated exactly, by changing the Matsubara sum to an integral, The retarded Polarization bubble is obtained by analytically continuing to real frequencies, Π(q, iν n → ν + i0 + ). The integral can be exactly evaluated, and we obtain, For ν 2 ≥ q 2 + 4m 2 , For ν 2 < q 2 + 4m 2 , Finite T Here, we obtain the low temperature correction to the T = 0 polarization. At finite T, we introduce the function b(z) = (e βz − 1) −1 and the φ polarization bubble can be calculated, Using b(−z) = −b(z) − 1 and for our hierarchy of scales, b( k ) ≈ e −β k << 1 for any k, we can replace b(−z) → −1. The retarded polarization bubble is obtained by analytically continuing from the imaginary Matsubara frequency to real frequency, Π(iν n , q) → Π R (ν + i0 + , q). Using Cauchy imaginary value theorem, the imaginary part can be obtained to be (restricting to ν > 0) The first term is the T = 0 result, which was also obtained in the previous paragraph. At finite T , the only modification is the second term, which we now evaluate. The exponent has a maxima at θ = π, which lies in the allowed domain of θ. Doing the integral, we get the full correction, for ν < q, E. 
SELF ENERGY CALCULATION To study the temperature dependent relaxation time of the bosonic quasiparticles, we need to evaluate the self energy of φ. The relevant diagrams are shown in Fig. 8. The imaginary part of the self energy has contribution only from the first diagram in Fig. 8, and can be evaluated to give, Note, at low temperature, the second term in the imaginary part of the self-energy can be ignored. Recalling the definition of the Wightman function, we have, The inverse lifetime, or the relaxation rate of φ can be written in terms of the imaginary part of the self energy, Note, | k − q | < |k − q|. The Wightman function G From the calculations in Sec. D, one can read off the expression for Im[Π R ] which is exponentially suppressed in βm. In the denominator, any temperature dependence can be ignored, because of the leading T = 0 behavior of Re[Π R ]. Thus, we have the following approximation for R 1 (k, q), At low temperature, the relaxation rate can be approximated by the Laplace method, since the integrand has a factor exponential in βm (arising from both the prefactor sinh and R 1 functions in Eq. 63). We define the phase coherence inverse time scale as, τ −1 φ = Γ q=0 [61], which can be evaluated, The momentum dependent Γ q can be evaluated numerically, F. LADDER CALCULATION IN DIFFERENT CONTOURS The ladder calculation sets up a diagrammatic calculation of the squared commutator in terms of retarded Green functions and Wightman functions of the fields φ and λ. Here we give a sketch of how it works, following [14], while also extending their results to the unregulated squared commutator. Consider the generalized squared commutator, To go to the interaction representation for the φ fields, we introduce time evolution operators in the interaction picture, where the subscript 0 indicates that the fields time evolve under the non-interacting part of the Hamiltonian. We further drop the factors of N and the index structure to obtain, By expanding up to second order of λ, we get, where we have suppressed the spatial dimension. By combining fields from both 'sides of the ladder' in the expanded expression Eq. 69, we get the two distinct types of rungs -the contributions which are called the Type I and Type II rungs in Sec. III B. The contour dependence appears in the form of the contour dependence of the Wightman functions. For example, the Type I rung is a contour dependent λ-Wightman function, is the bare φ spectral function, given in Eq. 19. We have also defined the function, Q(ω) = [2 sinh(βω/2)] −1 . Inserting the spectral function in the expression for in Eq. 31, allows us to integrate over ω . We introduce notation x = p − p, y = p +p 2 and ω = ω − ω. We also denote x/2±p =: ± . We now have the following expression for G In this expression, because of the delta functions, one can replace the arguments of Q by ± ± . Note, at low temperature, Q( ± ) ≈ e −β ± /2 , and Q(− ± ) ≈ −e −β ± /2 . We can also use the fact that G R,λ (ω, −q) = G R,λ (ω, q), and that the real and imaginary parts of G R,λ (ω, q) are even and odd functions of ω respectively. This allows for the following simplification, We finally arrive at a simple expression for G where ω = p − p , and x = p − p. The only delta functions in the equation above that can be satisfied are δ(ω + + − − ) and δ(ω − + + − ). We can impose the delta function to do the p radial integration, which fixes the radial component at p * (θ) = ω 2 ω 2 −x 2 −4m 2 ω 2 −x 2 cos 2 θ , where θ is the angle with x. 
This can be followed by the angular integration, approximated by the Laplace method, since there is an exponential factor with large βm in the exponent. The calculation closely follows the evaluation of ImΠ R at finite T in Appendix D. The final expression for R is given in Eq. (75). We can similarly evaluate R (1/2) 2− , for which the relevant function is G eff (− p , p , p , p). We further define ω = p + p , and x = p − p. The only delta function in the equation above that can be satisfied is δ(ω − + − − ). We can impose the delta function to do the p radial integration, which fixes the radial component at a corresponding value p * (θ). [Fig. 19 caption: the maximum eigenvalue λ L e βm β is determined by taking the linear extrapolation of λ L e βm β at each grid interval dp to dp → 0; the error is the uncertainty in the extrapolation from its 95% confidence interval; shown for the unregulated calculation at β = 2.] H. DETAILS OF NUMERICS OF LADDER CALCULATION Here we provide some details of the numerical computation of the ladder sum. We fix the mass as m = 1, and do all the calculations in these units. Having determined the approximate values of the kernel functions R 1,2 , we need to discretize the 2D momentum space to set up the matrix form of the kernel integration. For that purpose, we set up a hard momentum cut-off |p x |, |p y | ≤ 1. The choice is justified for the kernel in rescaled momenta, which is exponentially suppressed as exp(−|p − p′| 2 /8). Next, we create a 2D grid of momenta, with the momentum interval dp determined by the number of points that we consider: 40 by 40, 50 by 50 and 60 by 60 grids. Next, we set up the matrix form of the kernel, K̃ p′p = dp 2 K(p′, p), given in Eq. 38. The matrices are of sizes 1600 by 1600, 2500 by 2500, and 3600 by 3600, respectively. In constructing the matrix, we need to evaluate Γ p by performing a 2D integration (in Eq. 29) within the grid area (|p x |, |p y | ≤ 1). We then find the maximum magnitude eigenvalue of the matrix; it has a positive real part, thereby resulting in exponential growth. The eigenvalues are then extrapolated to the dp → 0 limit by a linear extrapolation. Errors in the estimation are denoted as the error bars for this eigenvalue (see Fig. 19). In Fig. 20, we study the external momentum dependence of the largest magnitude eigenvalue. I. SCRAMBLING BOUNDS The butterfly velocity can be defined as the largest velocity for which the Lyapunov exponent is non-negative, v B (ρ) = sup {v : λ(v, ρ) ≥ 0}. We define the support of the commutator of O as a region S of diameter 2R(v, t) around a point x 0 . The scrambling velocity is defined as the rate of increase of this support, v S (ρ) = lim t→∞ R(v, t)/t. We consider the Hamiltonian H to be defined on a lattice, composed of geometrically local terms, and such that it has a finite gap. We introduce the shifted zero expectation-value Hamiltonian, H̃ = H − Tr(ρH). We can divide the shifted Hamiltonian into terms supported inside and outside this region. Let us consider the near wavefront ansatz of Eq. 7. In [47], it was shown that for the unregulated squared commutator, the rate of change of the butterfly velocity with temperature, ∂ β v B , can be bounded in terms of ∆v = v/v B − 1, the finite correlation length ξ > 0, and a quantity h given by Eq. 83. At low temperature, β → ∞, ρ ∼ |0 0|. From Eq.
83, h ∝ 0|h i |0 , and hence 0, which implies, We first review the proof for the unregulated case due to [47] and then also extend the bound to the butterfly velocity obtained from the regulated squared commutator, and show that the same low temperature behavior as in Eq. 84 holds in that case as well. However, we note that the bound can't differentiate between a power-law vanishing butterfly velocity at low temperature and a constant butterfly velocity. Low temperature behaviors of both the regulated and unregulated cases which were obtained in Sec. II, i.e., v B ∼ β −1/2 and v B ∼ constant respectively, are consistent with Eq. 84. We first discuss the bound on butterfly velocity obtained from the unregulated squared commutator as given in [47]. We differentiate C u with respect to the inverse temperature β to obtain, We want to upper bound |∂ β C u |. By separating out the contributing terms to two parts -inside and outside a ball of radius R + δ around the point x 0 (a region we call S ), we have, For the terms outside the ball S , we invoke the Exponential Clustering Theorem, which states, for two operators W 1 and W 2 supported on non-overlapping regions A and B on a lattice system with a gapped Hamiltonian, there exist, ξ and N , such that, |T r (ρW 1 W 2 ) − T r (ρW 1 ) T r (ρW 2 )| ≤ N min{|∂A|, |∂B|} W 1 W 2 e −|A−B|/ξ , where, |A − B| is the minimum distance between the regions A and B. Here, ξ is the correlation length, which is finite because of the presence of the gap. The Exponential Clustering Theorem The 'inside' terms in the RHS of Eq. 86, can be bounded in the following way, where, h is a maximum over the different terms of the shifted Hamiltonian, and V r is the size of the region of radius r, i.e., V r = 2r + 1. Two convenient choices of h are, Combining both the contributions, we get, Usually at late times, C(t → ∞) = e λ(v,ρ)t . For v > v B , λ(v, ρ) < 0. We can choose δ = (−ξλ(v, ρ) + )t for some positive , which makes the second term in Eq. 92 subleading compared to the first term, and hence can be dropped. Essentially, the contribution to the bound from sufficiently outside the support of the operator O can be dropped. Now, using the ansatz C u = e λ(v,ρ)t , we obtain the following bound for the rate of change of the Lyapunov exponent, ξλ(v, ρ)) from the definition of the scrambling velocity from Eq. 79. We can further analyze this scrambling bound by using the near wavefront ansatz, Let's introduce the short hand ∆v = v/v B − 1. For this ansatz, we have, Close to the Butterfly velocity, i.e., when v v B , the last term is the leading term. Thus for ∆v = 0 + , we have the bound on rate of change of butterfly velocity, Now, say β → ∞. For the gapped system, ρ = |0 0|. We can estimate h using the definition, in Eq. 83. For this ρ, h ∝ 0|h i |0 , and hence 0, which implies, Note, however, unlike the assertion in [47], this doesn't imply a freezing out of the Butterfly Velocity at temperatures below the gap. In fact, even power-law ansatz, v B ∼ β −a for a > 0, satisfies the above bound, and our observation v B ∼ β −1/2 is certainly admissable. J. SCRAMBLING BOUNDS FOR REGULATED SQUARED COMMUTATOR We can extend the bounds to the butterfly velocity from regulated squared commutator, C r = −T r √ ρO √ ρO , as well. 
Differentiating with respect to β, we obtain the analogue of the unregulated expression. Now, we invoke the Araki bound [64], which states that, in 1-dimensional quantum lattice systems with a gap, for any finitely supported operator A with support R, the operator ρ s Aρ −s is also supported, up to exponential corrections, on a ball of support R + l(βs), where l(x) is an entire function not larger than exponential in x. Thus, the support of ρ 1/2 Oρ −1/2 , and hence of Oρ 1/2 Oρ −1/2 , has radius ∼ R + Ae Bβ , for appropriately defined numbers A, B. Hence, the entire argument of the previous section follows by replacing R → R + l(β/2), and we can bound the rate of change of the Lyapunov exponent and the butterfly velocity obtained from the regulated squared commutator as well. In particular, in deriving these bounds, the effect of this thermal broadening can be ignored, since l(β)/t → 0 as t → ∞. Hence, all the scrambling bounds derived for the unregulated case also follow naturally for the regulated case. K. CARBON COST OF SIMULATIONS Here we quote the approximate carbon cost of the numerical simulations. The template is from scientific-conduct.github.io. This provides a lower bound of the carbon cost.
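To complement the numerical details of Appendix H, the following sketch illustrates the dp → 0 extrapolation of the leading ladder-kernel eigenvalue and the associated 95% confidence interval (as in Fig. 19). The eigenvalue numbers are made up for illustration and are not results from the paper.

```python
# Minimal sketch (illustrative numbers, not the paper's results): linearly
# extrapolate the leading kernel eigenvalue, computed at several grid spacings dp,
# to dp -> 0, and quote a 95% confidence interval on the intercept.
import numpy as np
from scipy import stats

dp = np.array([2.0 / 40, 2.0 / 50, 2.0 / 60])   # spacings for 40x40, 50x50, 60x60 grids on [-1, 1]^2
lam = np.array([0.412, 0.405, 0.401])            # hypothetical leading eigenvalues at each dp

res = stats.linregress(dp, lam)
lam0 = res.intercept                             # dp -> 0 extrapolation
ci = stats.t.ppf(0.975, df=len(dp) - 2) * res.intercept_stderr   # 95% CI on the intercept
print(f"lambda(dp -> 0) = {lam0:.4f} +/- {ci:.4f}")
```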
Fuzzy Output Support Vector Machine Based Incident Ticket Classification SUMMARY Incident ticket classification plays an important role in complex system maintenance. However, low classification accuracy results in high maintenance costs. To address this issue, this paper proposes a fuzzy output support vector machine (FOSVM) based incident ticket classification approach, which can be implemented in the context of both two-class SVMs and multi-class SVMs such as one-versus-one and one-versus-rest. Our purpose is to resolve the unclassifiable regions of multi-class SVMs and to output reliable and robust results through more fine-grained analysis. Experiments on both benchmark data sets and real-world ticket data demonstrate that our method has better performance than commonly used multi-class SVM and fuzzy SVM methods. Introduction In the maintenance and management of a complex IT infrastructure, when the monitoring system captures an event that is not part of standard system operation and that may cause an interruption or a reduction in service, an incident ticket manifesting the event is automatically created. In general, a ticket contains unstructured text describing problem symptoms in natural language. Once an incident ticket appears, we need rapid allocation of skilled maintenance experts to bring an abnormal service back to normal, which depends heavily on accurate ticket classification. As an initial step of rapid management, ticket classification is used to identify the problem types of tickets based on the problem descriptions recorded in them [1], [2]. In a typical ticket system, ticket classification is done manually by system administrators to assign a problem type. This manual process is time-consuming and error-prone. Hence, we need an automated approach to classify incident tickets with high accuracy. Since ticket problem descriptions consist of unstructured text, and support vector machines (SVMs) have been successfully applied to many text classification settings, including email and news, we use them to solve the problem of incident ticket classification. However, existing multi-class SVMs cannot perfectly resolve the problem of unclassifiable regions. When they are applied to ticket classification, they suffer from a relatively high misclassification cost. Thus, this paper proposes a decision-margin based fuzzy output SVM approach to reduce the unclassifiable regions and improve the accuracy of incident ticket classification. The rest of the paper is organized as follows: Sect. 2 gives related work, followed by primary concepts regarding fuzzy support vector machines in Sect. 3. Section 4 presents a fuzzy output SVM approach based on the decision margin for multi-class SVM techniques. Section 5 validates it on several benchmark data sets and a real-world ticket dataset; and finally, Sect. 6 concludes the paper. Related Work The incident ticket classification problem is in essence a document classification problem, where a document is a short free-text ticket problem description. Since we discuss a ticket classification approach based on SVMs, which is a supervised approach, we review the literature on SVMs and on supervised ticket classification approaches. Supervised Ticket Classification If problem type information for historical tickets is available in the training ticket data, supervised ticket classification algorithms are also applicable.
The most popular approach is to apply machine-learning techniques to automatically build a classifier on a set of pre-classified tickets and use it to classify new tickets. Various supervised machine-learning techniques proposed for automatic text classification, such as support vector machines (SVM), Naive Bayes, and maximum entropy, have been applied to maintenance and incident ticket classification [2]- [6]. For instance, [4] applies an SVM to predict the most appropriate ticket resolution group. [3] proposes a Multinomial Naïve Bayes (MNB) algorithm to classify tickets. In summary, because of the size and complexity of incident tickets, supervised classification algorithms have been the method of choice for incident ticket classification, relying on labeled tickets from a managed infrastructure to automatically create signatures for similar infrastructure. SVMs In recent years, SVMs have gained wide application due to their high generalization ability and better performance than other traditional learning machines. But SVMs still suffer from the problem of unclassifiable regions [7], where the samples are non-determinable. Traditional multi-class SVMs assign unclassifiable samples to a class randomly, which decreases the generalization ability of the learning machine. Moreover, in some practical problems, such as medical diagnosis and ticket classification, these unclassifiable samples draw more attention and should be given more precise analysis. There are mainly two ways of handling unclassifiable samples: one is to use continuous decision functions, and the other is to apply fuzzy support vector machines (FSVMs) [8]- [10]. The core issue in FSVM is how to produce the fuzzy membership function. Some studies assign a fuzzy membership value to each input point and reformulate the SVM so that different input points can make different contributions to the learning of the decision surface [11]. Using the decision function obtained by training the SVM, [12] defined a truncated polyhedral pyramidal membership function for each class to resolve the unclassifiable regions. In theory, the generalization ability of their proposed FSVM is superior to that of conventional SVMs. Many FSVM models [13] can be considered modifications or extensions of SVMs that reduce the effect of outliers or noise in training samples, and they have been successfully applied in many domains, such as text classification and fault diagnosis. Existing FSVMs can give a reasonable decision for both classifiable and unclassifiable samples, but two problems still need to be considered: (1) a sample remains unclassifiable if the membership function attains the same maximum on two or more classes; (2) the classification results are unreliable if the difference of the decision function values on different classes is very small. Fuzzy Support Vector Machine Consider training data pairs {(x i , y i )} l i=1 , where each y i belongs to one of the classes: y i ∈ {−1, +1} for a two-class classification problem, while y i ∈ {1, . . . , k} for a k-class (k > 2) classification problem. In the basic form, an SVM learns linear decision rules described by a weight vector and a threshold value. Its basic idea is to map the data into a high dimensional space and find a separating hyperplane with the maximal margin. Thus, the linear two-class SVM model is given by Eq. (1). However, in real-world applications, even linearly separable data may not be ideally linearly separable because of measurement errors or noise.
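The soft-margin trade-off discussed next can be illustrated with a minimal sketch. The toy data and the two values of the penalty parameter C below are purely illustrative, and scikit-learn is used only as a stand-in SVM implementation.

```python
# Minimal sketch (toy data, illustrative only): the penalty parameter C of a
# soft-margin SVM trades margin width against training misclassifications.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
               rng.normal(+1.0, 1.0, size=(50, 2))])   # two overlapping classes
y = np.array([0] * 50 + [1] * 50)

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_)            # geometric margin width
    errors = int((clf.predict(X) != y).sum())
    print(f"C={C:g}: margin width={margin:.2f}, training errors={errors}, "
          f"support vectors={len(clf.support_)}")
```

A smaller C typically yields a wider margin and more tolerated violations, while a larger C penalizes violations more heavily; this is the trade-off formalized below.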
To allow for such points, we relax the constraints of Eq. (1) slightly, admitting misclassified points by introducing slack variables {ξ i ≥ 0} l i=1 . In this soft margin SVM, data points on the incorrect side of the margin boundary incur a penalty that increases with their distance from it. The number of misclassifications is reduced by solving the resulting penalized optimization problem, where the parameter C controls the trade-off between the slack variable penalty and the size of the margin. Decreasing the value of C allows more misclassifications and relaxes the hyperplane tension. To solve this constrained quadratic optimization problem, we find its dual using Lagrange multipliers. The above SVM is essentially a linear model, but it can easily be generalized to non-linear decision rules by replacing the inner product of input vectors with a kernel evaluation, where the function φ is a real-valued mapping that projects the data into a higher dimensional space. We only need to find the kernel matrix whose entry (i, j) corresponds to the inner product of the feature space vectors φ(x i ) and φ(x j ). In general, a multi-class problem is converted into a certain number of two-class problems. There are two commonly used strategies: one versus rest (1-v-r) and one versus one (1-v-1). In the 1-v-r method, the ith SVM classifier is trained by assigning positive labels to all the samples in the ith class, and negative labels to all other samples. The training time of the standard method scales linearly with k. In the 1-v-1 method, it constructs k(k − 1)/2 binary classifiers by training on only two out of the k classes each time. Thus, the number of 1-v-1 classifiers may grow super-linearly with k. The combination of these binary classifiers to determine the label assigned to each new input can be made by different policies. For example, we count the votes of each subclassifier using the majority vote policy, and the class with the most votes is the final decision. There will remain some unclassifiable regions when converting a multi-class SVM into two-class SVMs with the majority vote algorithm (see Fig. 1: unclassifiable region under the two-class formulation). To resolve the unclassifiable regions for the 1-v-r and 1-v-1 strategies, fuzzy SVM is an effective approach [8]. Take 1-v-1 as an example, which can be briefly introduced as follows. Let f ij (x) be the decision function that separates class i from class j. For an input vector x, we calculate the per-class vote count f i (x) by summing sgn(f ij (x)) over j, where sgn(z) = 1 for z > 0 and 0 for z ≤ 0, and classify using the majority vote policy. In this formulation, however, there are unclassifiable regions remaining (the shaded regions in Fig. 1), where the f i (x) take the same value. [12] introduced two new membership functions, and the sample is classified into the class with the maximum membership value. Thus, the unclassifiable region shown in Fig. 1 (1-v-1) is resolved, as shown in Fig. 2. FOSVM Although the existing FSVM methods can give a proper decision for unclassifiable samples, they have some limitations. On one hand, the fuzzy membership functions proposed in these algorithms may acquire the same maximum on two or more classes, so such samples are still unclassifiable. Meanwhile, the difference of the decision function values on different classes may be small in unclassifiable regions. In these cases, the classification results are unreliable and sensitive to noise. On the other hand, in some application domains, such as medical diagnosis and ticket classification, these unclassifiable samples are typically key points of analysis. They need to be given more robust and credible classification results.
To deal with the abovementioned issues, we propose a decision-margin based fuzzy output support vector machine (FOSVM) framework, which can be implemented in the context of both two-class SVMs and multi-class SVMs such as 1-v-1 and 1-v-r. One-versus-Rest FOSVMs The 1-v-r method solves k two-class problems, from which we obtain k decision functions f j (x). Instead of the majority vote algorithm, we define a new decision function. Definition 1 (decision margin): The difference between the maximal value and the second maximal value of the decision function is called the decision margin (DM), defined as DM(x) = max 1 j=1...k f j (x) − max 2 j=1...k f j (x) (Eq. (11)), where max 1 j=1...k f j (x) and max 2 j=1...k f j (x) represent the maximum and the second maximum of the decision functions respectively. We adopt the membership function of Eq. (12), where DM(x) is the decision margin defined in Eq. (11), and α and β are two adjustable parameters. The final output of the classifier is given by Eq. (13), where δ is a threshold chosen according to the given problem. If the decision margin is less than δ, FOSVM refuses to predict a clear label. For simplicity, take a three-class classification problem as an example; the whole feature space is divided into three regions (clear classifiable, classifiable, and unclassifiable), as shown in Fig. 3. The membership function defined in Eq. (12) has the following characteristic: it has the highest confidence level in clear classifiable regions, the lowest confidence level in unclassifiable regions, and a medium confidence level in classifiable regions. For example, if δ = 0.5, the membership function has the form shown in Fig. 4, where α = − ln 0.25/(−2 + δ) and β = δ. One-versus-One FOSVMs Similar to the above 1-v-r FOSVM method, we can also construct the 1-v-1 FOSVM. First, we compute the decision value of class j by constructing k(k − 1)/2 classification functions f ji (x). Then we compute the decision margin DM(x) according to Eq. (11). Finally, we get the output of the classification via Eq. (12) and Eq. (13). Our proposed method is also suitable for the two-class problem. In this case, DM(x) = f i (x), i = 1, 2, and f 1 (x) = − f 2 (x). The values of the parameters are α = − ln 0.25 and β = 0 respectively. Using the same decision function, we can get the final result. The membership function has the form shown in Fig. 5. For example, if a sample belongs to one class with a probability of 80%, it belongs to the other class with 20% probability. To sum up, our proposed methods are defined so that, for the data in classifiable regions, the classification results are the same as those of the standard 1-v-1 and 1-v-r methods; for the data in partially classifiable regions, the classifier can also give rational results; and for the data with a membership value less than the given threshold, the classifier will not give a hard label. In this case, we can adopt other methods to deal with these samples, such as applying prior knowledge or a nesting algorithm for multi-classification problems. Experiments In this section, we evaluate our methods on several benchmark data sets and the ticket dataset, and compare them with standard multi-class SVM and popular fuzzy SVM methods. Table 1 shows a summary of the datasets used in the experiments. These datasets are from the UCI repository, with the number of classes ranging from 2 to 6. Glass6 is the original glass dataset with 214 samples; glass3 is a subset of the glass dataset with three classes, including 'float processed building windows', 'non-float processed building windows' and 'non-window glass', totalling 197 samples.
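Before turning to the experimental comparison, the decision-margin construction of the previous section can be made concrete with a small sketch. The exponential membership form, the parameter values, and the use of scikit-learn's one-vs-rest decision values are illustrative assumptions; the paper's Eq. (12) and implementation may differ in detail.

```python
# Minimal sketch (assumed membership form, toy data): a one-vs-rest SVM whose
# output is softened by the decision margin DM(x) = max1 - max2 of the per-class
# decision values; samples with membership below a threshold delta get no hard label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
ovr = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000)).fit(X, y)

scores = ovr.decision_function(X)                 # one decision value f_j(x) per class
top2 = np.sort(scores, axis=1)[:, -2:]
dm = top2[:, 1] - top2[:, 0]                      # decision margin, Eq. (11)
alpha, delta = 1.0, 0.5                           # illustrative parameters
membership = 1.0 - np.exp(-alpha * dm)            # assumed monotone membership in DM(x)

labels = scores.argmax(axis=1)
rejected = membership < delta                     # no hard label for low-confidence samples
accepted = ~rejected
acc = (labels[accepted] == y[accepted]).mean() if accepted.any() else float("nan")
print("rejected (no hard label):", int(rejected.sum()), "of", len(X))
print("accuracy on accepted samples:", round(float(acc), 3))
```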
Benchmark Datasets We tried different kernels, including linear, polynomial, quadratic, Gaussian and RBF, and the parameters C and δ were determined by ten-fold cross-validation. Their best results are reported. (Table 1: description of the benchmark datasets; Table 2: performance comparison on four benchmark datasets; Table 3: performance comparison on the ticket dataset.) The samples with a membership score less than the given threshold were reclassified using the method of [14]. Each algorithm was repeated 10 times. The experimental results are shown in Table 2. From Table 2, we observe that, compared with the standard multi-class SVM techniques 1-v-1 and 1-v-r, the proposed decision-margin based FOSVM classifies the samples in a more fine-grained way. The samples with a higher confidence have the same classification results as with the other two methods, while the samples with a lower confidence are handled specially. Our method has better performance in terms of accuracy on problems that require high accuracy, especially in applications that emphasize 'hard samples', such as medical diagnosis and credit risk assessment. Ticket Dataset The real-world ticket dataset is collected from an account of a large cloud-oriented IT service center. This account consists of over 200 monitored servers and network devices. The dataset has 100K+ tickets that cover a period of eight months. The number of problem types covered by these tickets is 95, and the distribution of tickets over the different problem types is highly unbalanced. We choose 10000 tickets that belong to the largest K problem types, according to their original proportions, as the experimental dataset. The critical ticket attributes used in our experiments include "problem type" and "description", where the attribute "problem type" denotes the problem cause, and the attribute "description" denotes the problem symptom. Two ticket representation approaches, the tf-idf term weighting scheme and word2vec, are applied to obtain a vector representation for the SVM. We compare the FOSVM with 1-v-1 SVM, 1-v-r SVM and FSVM. We randomly select 75% of the samples as the training set and the remaining 25% as the testing set. Other settings are the same as those used in Sect. 5.1. The evaluation measures used in this experiment are precision, recall, and the F1 measure (F1 score). These measures are standard accuracy metrics for classification problems, and they are defined in terms of per-class counts, where K is the number of chosen problem types, and TP i , FP i and FN i are the true positives (how many tickets were classified as a specified problem type c i and were indeed labeled as c i in the data set), false positives (how many were classified as a specified type c i while they truly are of a type c j , c j ≠ c i ), and false negatives (how many tickets truly of type c i were classified as some other type c j , c j ≠ c i ), respectively. Table 3 shows the performance comparison in terms of accuracy with other commonly used multi-class SVM techniques. We can see that the FOSVM algorithm performs best in terms of the ratio of unclassifiable samples, precision, recall and F1 score. FOSVM can reduce the ratio of unclassifiable samples and obtain more believable results than the other algorithms. Further, compared to the tf-idf representation approach, the word2vec-based representation achieves better accuracy in terms of precision, recall and F1 score, which means that the ticket representation has a positive impact on classification performance.
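The per-class evaluation just described can be made explicit with a small sketch. The paper's exact averaging convention was not fully legible in the source; the macro-averaged variant below is one common convention, and the toy label vectors are illustrative only.

```python
# Minimal sketch (toy labels): macro-averaged precision, recall and F1 over K
# problem types, computed from per-class TP, FP and FN counts as described above.
import numpy as np

def macro_prf(y_true, y_pred, K):
    precisions, recalls = [], []
    for c in range(K):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p, r = float(np.mean(precisions)), float(np.mean(recalls))
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 1])
print("macro P/R/F1:", macro_prf(y_true, y_pred, K=3))
```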
Conclusions When a critical system exhibits an incident during its operation, system maintenance teams are expected to rapidly allocate skilled resources to bring the abnormal service back to normal as quickly as possible. In a typical ticket management system, ticket classification is done manually by system administrators to assign a problem type, which is time-consuming and error-prone, especially when there are a large number of tickets. In this paper, we proposed a decision-margin based FOSVM that can be used to deal with this issue in an automated way. Our algorithm can resolve the unclassifiable regions of multi-class SVMs and thereby improve classification accuracy. Experiments on both the benchmark datasets and real-world ticket data have validated the effectiveness of the proposed algorithm. Moreover, we observed that different incident ticket representation approaches have a significant impact on classification performance. We adopted two different representation approaches, the traditional tf-idf scheme and a word2vec-based one, and found that a ticket representation based on word embedding technology helps improve classification accuracy.
Semiring Programming: A Declarative Framework for Generalized Sum Product Problems To solve hard problems, AI relies on a variety of disciplines such as logic, probabilistic reasoning, machine learning and mathematical programming. Although it is widely accepted that solving real-world problems requires an integration amongst these, contemporary representation methodologies offer little support for this. In an attempt to alleviate this situation, we introduce a new declarative programming framework that provides abstractions of well-known problems such as SAT, Bayesian inference, generative models, and convex optimization. The semantics of programs is defined in terms of first-order structures with semiring labels, which allows us to freely combine and integrate problems from different AI disciplines. Introduction AI applications, such as robotics and logistics, rely on a variety of disciplines such as logic, probabilistic reasoning, machine learning and mathematical programming. These applications are often described in a combination of natural and mathematical language, and need to be engineered for the individual application. Declarative formalisms and methods are ubiquitous in AI as they enable re-use and descriptive clarity. Initial approaches, such as that by Kowalski [51], were rooted in logic, but they have eventually engendered an impressive family of languages. In knowledge representation and constraint programming, for example, languages such ASP [17] and Essence [36] are prominent, which use SAT, SMT and MIP technology [8,64]. In machine learning and probabilistic reasoning, statistical relational learning systems and probabilistic programming languages such as Markov Logic [61], Church [39] and Problog [21] are increasingly used to codify intricate inference and learning tasks. In mathematical programming and optimization, disciplined programming [40] and AMPL [31] have been developed. Finally, the DARPA project Probabilistic Programming for Advancing Machine Learning is motivated in the same declarative spirit. 1 Across disciplines in AI, it has become increasingly clear that taming the model building process, admitting reusable descriptions in expressive languages, and providing general but powerful inference engines is essential. Be that as it may, it is widely accepted that solving real-world AI problems requires an integration of different disciplines. Consider, for example, that a robot may decide its course of action using a SAT-based planner, learn about the world using Kalman filters, and grasp objects using geometric optimization technology. But contemporary declarative frameworks offer little support for such universality: knowledge representation and constraint formalisms mostly focus on model generation for discrete problems, probabilistic programming languages do not handle linear and arithmetic constraints, and finally, optimization frameworks work with linear algebra and algebraic constraints to specify the problem and thus are quite different from the high-level descriptions used in the other disciplines and do not support probabilistic or logical reasoning. Of course, hybrid approaches that treat different computations as independent but communicating processes is an option to address this challenge, but these integrations may not be transparent. 
So what is lacking here is a universal modeling framework that allows us to declaratively specify problems involving logic and constraints, mathematical programs, as well as discrete and continuous probability distributions in a simple, uniform, modular and transparent manner. Such a framework, together with a generic inference mechanism, would greatly simplify the development and understanding of AI systems with integrated capabilities, and would tame the model building process. There has been recent progress on this front. A key observation made in [49] is that reasoning about possible worlds is fundamental to many tasks in computer science, including dynamic programming [24], constraint programming [13], database theory [41], probabilistic inference [4], probabilistic logic programming [21], and network analysis [6]. In fact, these tasks essentially invoke a version of the sum product problem [4], but differ in the exact operations carried out over the possible worlds, which can then be recast in the very same way via semirings. The resulting framework, referred to as algebraic model counting (AMC) [49], shows that the computation underlying all these tasks can be defined over a certain class of arithmetic circuits, which then implies that they can be solved via a single algorithm that obtains local solutions and composes them to yield a global solution. The main limitation of the AMC proposal is that the underlying language is propositional logic, and so the semantics is that of classical logic and computations are essentially defined over discrete spaces. In this paper, we propose a new declarative framework called semiring programming (SP) that attempts to generalize AMC. Our main thesis is to still formulate computations as the sum product problem, but we will rigorously define a semantics that not only defines AMC in unbounded domains, but also non-standard (e.g., non-monotonic) ones. To put the proposal in perspective, consider that Eugene Freuder [32] famously quipped: "constraint programming represents one of the closest approaches computer science has yet made to the Holy Grail of programming: the user states the problem, the computer solves it." The underlying idea was summarized in the slogan: (constraint) program = model + solver The vision of SP builds upon this equation in that: (semiring) program = logical theory + semiring + solver Thus a model is expressed in a logical system in tandem with a semiring and a weight function. As a framework, SP is set up to allow the modeler to freely choose the logical theory (syntax and semantics) and so everything from nonclassical logical consequence to real-arithmetic is fair game. Together with a semiring and these weights, a program computes the count. Our task will be to show that the usual suspects from AI disciplines, such as SAT, CSP, Bayesian inference, and convex optimization can be: (a) expressed as a program, and (b) count is a solution to the problem, that is, the count can be a {yes, no} answer in SAT, a probability in Bayesian inference, or a bound in convex optimization. In other words, the count is shown to semantically abstract challenging AI tasks. We will also demonstrate a variety of more complex problems, such as matrix factorization, and one on compositionality. 
Informally, from an expressiveness viewpoint, SP is designed to be: • universal, in that it can represent classical problems from disciplines ranging from logic to mathematical programming, and it inherits the strengths of both camps; • declarative, which means that domain knowledge can be expressed in a program in a natural, human-readable way, and that it is possible to easily cope with changes in requirements and information in a principled manner; • generic, in that it permits instantiations to a particular language -including real arithmetic (as needed in machine learning), quantification and interpreted symbols (as in satisfiability modulo theory, or SMT), and nonclassical consequence (as in answer set programming, or ASP) -since these often correspond closely to the kind of problems they attempt to formalize and solve; • solver-independent, in that inference assumes the role of algorithms, and the formulation of the inference problems is separate from the solution strategies; • model-theoretic, in that the semantics of programs is defined using first-order structures, in service of providing meaning to classical model generation problems ranging from SAT to convex optimization. To reiterate, the aim is to synthesize problems and techniques across important disciplines in AI, towards a modeling framework that allows one to freely combine and integrate a wide range of specifications. To our knowledge, a number of prior proposals have made promising progress towards these goals, but have fallen short in certain critical features; see the penultimate section for discussions. We reiterate that the thrust of the proposal is to rigorously generalize standard AMC, so as to semantically abstract problems arising from model generation in propositional satisfiability, first-order declarative programming, machine learning, data mining, constraint satisfaction, and optimization. In attempting such a generalization, of course, we will have to give up the existing investigations on the use of propositional arithmetic circuits, including the simple evaluation scheme on local solutions informing global ones [49]. That is not surprising given the infinitary nature of the framework, but we hope the framework will provide the necessary foundations for investigating a general solver scheme that works on both finite and infinite domains. Moreover, we illustrate the framework using a programming language inspired by SMT syntax, but this is meant to be a prototypical starting point for a full fledged programming interface to SP. As can be inferred from above, we would consider the following three desiderata to be essential in the design of such an interface: 1. The logical language should be expressive, supporting a rich ontology of types including lists and graphs. 2. The language should have a semantics defined by a measure on a set of worlds respecting semiring operators. 3. The language should provide abstractions for specifying the solutions of (arbitrary) problems involving logical reasoning, discrete-continuous probability distributions and discrete-continuous optimization formulations in a simple, uniform, modular and transparent manner. The formulation in the sequel will provide insights on how such a language could be obtained. This paper is organised as follows. 
In Section 2, we briefly review logic, weighted model counting and semirings; in Section 3; we introduce the semiring programming framework and illustrate it on a number of examples; in Section 4, we discuss how different semirings can be combined; in Section 5 different strategies for solvers are introduced; and finally, in Sections 6 and 7 we discuss related work and conclude. Logical Setup The framework is developed in a general way, agnostic about the meaning of sentences. We adopt (and assume familiarity with) terminology from predicate logic [25]. Definition 1. A theory T is a triple (L, M, ⊲), where L is a set of sentences called the language of the theory, M a set called the models of the theory, and ⊲ a subset of M × L called the satisfiability relation. The set L is implicitly assumed to be defined over a vocabulary vocab(L) of relation and function symbols, each with an associated arity. Constant symbols are 0-ary function symbols. Every L-model M ∈ M is a tuple containing a universe dom(M), and a relation (function) for each relation (function) symbol of vocab(L). For relation (constant) symbol p, the relation (universe element) corresponding to p in a model M is denoted p M . For φ ∈ L and M ∈ M, we write M ⊲ φ to say that M satisfies φ. We say a formula φ is valid iff φ is satisfied at every model, which is then written as ⊲φ. We let M(φ) denote {M ∈ M | M ⊲ φ}. Finally, the set lits(L) denotes the literals in L, and we write l ∈ M to refer to the L-literals that are satisfied at M. This formulation is henceforth used to instantiate a particular logical system, such as fragments/extensions of firstorder logic, as well as non-classical logical consequence. Example 4. Define a theory (L, M, ⊲) where L is a first-order language involving 0-ary functions {c, . . . , d}, inequalities ≤, ≥, <, >, =, . Let M be the set of mappings from {c, . . . , d} to R. We define M ⊲ φ for M ∈ M and φ ∈ L as in first-order logic, assuming in particular that =, <, >, 0, 1, +, ×, /, −, exponentiation and logarithms have their usual interpretations [8]. (That is,"1 + 0 = 1" is true in all models, as is "x > y ≡ ¬(y > x)," and so on [9].) This theory can be used to reason about linear arithmetic (i.e., allowing formulas such as c + d ≤ 5 and d ≤ e, where c, d, e are integers or reals) and non-linear arithmetic (i.e., allowing formulas such as c + d 2 ≥ e). Weighted Model Counting Semiring programming draws from the conceptual simplicity of weighted model counting (WMC), which we briefly recap here. WMC is an extension of #SAT, where one simply counts the number of models of a propositional formula [37]. In WMC, one accords a weight to a model in terms of the literals true at the model, and computes the sum of the weights of all models. Definition 5. Suppose φ is a formula from a propositional language L with a finite vocabulary, and suppose M is the set of L-models. Suppose w : lits(L) → R ≥0 is a weight function. Then: is called the weighted model count (WMC) of φ. Here, in the context of L-models M, we simply write t = M⊲φ u to mean t = {M∈M|M⊲φ} u. The formulation elegantly decouples the logical sentence from the weight function. In this sense, it is clearly agnostic about how weights are specified in the modeling language, and thus, has emerged as an assembly language for Bayesian networks [19], and probabilistic programs [29], among others. Commutative Semirings Our programming model is based on algebraic structures called semirings [52]; the essentials are as follows: Definition 6. 
A (commutative) semiring S is a structure (S, ⊕, ⊗, 0, 1) where S is a set called the elements of the semiring, ⊕ and ⊗ are associative and commutative, 0 is the identity for ⊕, and 1 is the identity for ⊗. Abusing notation, when the multiplication operator is not used, we simply refer to the triple (S, ⊕, 0) as a semiring. Example 7. The structure (N, +, ×, 0, 1) is a commutative semiring in that for every a, b ∈ N, a + 0 = a, a × 1 = a, a + b = b + a, and so on. Semiring Programming In essence, the semiring programming scheme is as follows: • Input: a theory T = (L, M, ⊲), a sentence φ ∈ L, a commutative semiring S, and a weight function w. The scope of these programs is broad, and so we will need different kinds of generality. Roughly, the distinction boils down to: (a) whether the set of models for a formula is finite or infinite; (b) whether the weight function can be factorized over the literals or not (in which case the weight function directly labels the models of a theory); and (c) whether ⊲ is defined in a classical (monotonic) manner or not. The thrust of this section is: (i) to show how these distinctions subsume important model generation notions in the literature, and (ii) providing rigorous definitions for the count operator. In terms of organization, we begin with the finite case, before turning to the infinite ones. We present an early preview of some of the models considered in Figure 1. Finite Here, we generalize the WMC formulation to semiring labels, but also go beyond classical propositional logic. So, a propositional language with a finite vocabulary is a finite theory, regardless of (say) standard or minimal models. Similarly, a first-order language with a finite Herbrand base is also a finite theory. Essentially, as in WMC, we sum over models and take products of the weights on literals but w.r.t. a particular semiring. Needless to say, we immediately subsume the framework of algebraic model counting (AMC) [49]: finite propositional theory, S a commutative semiring, and w a factorized weight function. Suppose ⊲ is the standard satisfaction relation in propositional logic. Then SP is equivalent to AMC, that is, the computation of every SP instance can be defined as an AMC task and vice versa. Let us consider a few examples. As with Definition 5, the framework is agnostic about the modeling language. But for presentation purposes, programs are sometimes described using a notation inspired by the SMT-LIB standard [7]. Example 12. We demonstrate MPE (most probable explanation) and WMC. Consider the theory, semiring and weight function w from Figure 2, which specifies, for example, a vocabulary of two propositions p and q, w(p) = 1 and w(¬p) = 2. In accordance with that semiring, the weight of a model, say {p, ¬q}, of the formula F is 1 × 4 = 4. Thus, for the semiring (N, max, ×, 0, 1), we have: which finds the most probable assignment. Consider the semiring (N, +, ×, 0, 1) instead. Then: which gives us the weighted model count. Example 13. Extending the discussion in [45], we consider a class of mathematical programs where linear constraints and propositional formulas can be combined freely. See Figure 3 for an example with non-linear objectives. Formally, quantifier-free linear integer arithmetic and propositional logic are specified as the underlying logical systems, and the domains of constants are typed. The program declares formulas F, G, and H. 
The counting task is non-factorized, and our convention for assigning weights to models is by letting the declare-weight directive also take arbitrary formulas as arguments. Of course, TRUE holds in every model, and so, the weight of every model is determined by the evaluation of x1 * x2 at the model, that is, for any M, its weight is * M (x1 M , x2 M ). For example, a model that assigns 1 to x1 and 1 to x2 is accorded the weight 1 * 1. Computing the count over (N, max, 0) then yields a model of H with the highest value for x1 * x2. Encoding finite domain constraint satisfaction problems as propositional satisfiability is well-known. The benefit, then, of appealing to our framework is the ability to easily formulate counting instances: (set-logic FOL) (set-algebra [NAT,+,0]) (set-type COLOR={r,b,g}) (set-type NODE={1,2,3}) (declare-predicate node (NODE)) (declare-predicate edge (NODE,NODE)) (declare-predicate color (NODE,COLOR)) N = (node(1) and node(2) and node (3)) E = (edge(1,2) and edge(2,3) and edge(3,1)) DATA = (N and E) CONS = / * coloring constraints (omitted) * / (declare-weight TRUE 1) (count (DATA and CONS)) Constraints are Boolean-valued functions [33], and so constraints over X can be encoded as L-sentences, as in the example below: Figure 4 for a counting instance of graph coloring: edge(x,y) determines there is an edge between x and y, node(x) says that x is a node, and color(x,y) says that node x is assigned the color y. The actual graph is provided using the formula DATA, which declares a fully connected 3-node graph. Also, CONS is a conjunction of the usual coloring constraints, e.g., an edge between nodes x and y means that they cannot be assigned the same color. Let M be a set of first-order structures for the vocabulary {edge, node, color}, respecting types from Figure 4. The interpretation of {edge, node} is assumed to be the same for all the models in M and is as given by DATA. Basically, then, the models differ in their interpretation of color. One model of φ = (DATA and CONS), for example, is {color(1,r), color(2,g), color(3,b)}. The weights of all models is 1, and so, for (N, +, 0) we get: 2 #(φ, w) = 6. To summarize, the following result is easily shown for semiring programs: 3 for SAT and CSP, and θ • ∈ R for the rest. Then for any θ, there is a T = (L, M, ⊲), S, w and φ ∈ L such that Non-standard A particular advantage of defining a logical theory in the way we did is that the framework is immediately applicable to model-level operations with non-standard semantics. We give a notable example that is simple to capture but which to the best of our knowledge has not been previously formulated in a semiring framework like ours. Definition 17. A stable model environment is defined as follows. Let T = (L, M, ⊲) be a logical theory, where L is defined over a set of propositions P and M is the set of mappings from P to {0, 1}. Let us split P into two disjoint sets of variables founded variables P f and standard variables P s . An answer set program δ is a tuple (P, R, C) where R is a set of rules of form: a ← b 1 ∧ . . . ∧ b n ∧ ¬c 1 ∧ . . . ∧ ¬c m such that a ∈ P f , the body variables {b 1 , . . . , c m } ⊆ P, and C is a set of constraints over the propositions (specified as rules with an empty head). A rule is positive if its body only contains positive founded literals. The least assignment of a set of positive rules R, written L(R) is the one that that satisfies all the rules and contains the least number of positive literals. 
Given an assignment M ∈ M and a program δ, the reduct of M wrt δ, written, δ M is a set of positive rules that is obtained as follows: for every rule r, if any c i ∈ M, or ¬b j ∈ M for any standard positive literal, then r is discarded, otherwise, all negative literals and standard variables are removed from r and it is included in the reduct. An assignment M ∈ M is a stable model of a program δ iff it satisfies all its constraints and M agrees with L(δ M ) wrt P f . We say two assignments M and M ′ agree wrt P ′ iff the set of positive literals (restricted to P ′ ) in M is identical to the set in M ′ , and the set of negative literals (restricted to This is basically an adaptation of the answer set programming by SAT formulation in [3]. By way of the semirings considered in Example 11, it immediately follows that: It is important to note that the formulation only semantically characterizes the task of finding stable models as well as stable model counting. Algorithmic solutions for the task may very well involve other ideas [3]. Infinite: Non-factorized Defining measures [42] on the predicate calculus is central to logical characterizations of probability theory [43,20]. We adapt this notion for semirings to introduce a general form of counting. For technical reasons, we assume that the universe of the semirings is R. Let Σ be a σ-algebra over M and so every E ∈ Σ is a measurable set of points. Suppose f (x 1 , . . . , x k ) : R k → R is a convex function that we are to minimize. Consider the semiring (R, inf, 0) and a measure µ such that for any E ∈ Σ: finds the infimum of the f -values across the assignments in E. Then, µ(M(φ)) = #(φ, µ) gives the minimum of the convex function in the feasible region determined by φ. Suppose f is a concave function that is to be maximized. We would then use (R, sup, 0) instead, which finds the supremum of the f -values across assignments in E ∈ Σ. Example 22. For a non-trivial example, consider the problem of matrix factorization, a fundamental concern in information retrieval and computer vision [22]. Given a matrix I ∈ R p×n , we are to compute matrices L ∈ R p×k and R ∈ R n×k , such that e = ||I − LR T || is minimized. Here, || · || denotes the Frobenius Norm. Using real arithmetic, we provide a formulation in Figure 5. (Free variables are assumed to be implicitly quantified from the outside.) Let T = (L, M, ⊲) be the theory of real arithmetic, where L includes the following function symbols: {input, left, right, app, err}. Here, input(x,y) is a real-valued function such that x ∈ {1, . . . , p} and y ∈ {1, . . . , n} in that input(m,n) is the entry at the m th row and the n th column of the matrix I; these entries are specified by DATA. Letting φ = (DATA and F and G), the set {M ∈ M | M ⊲ φ} are those L-models whose interpretation of input is fixed by DATA. Basically, these models vary in their interpretations of {left, right}, which determines their interpretations for app and err. Here, app computes the product of the matrices left and right, and err computes the Frobenius Norm wrt app and input. Let Σ be a σ-algebra over M. The weight function in Figure 5 determines a measure µ such that for any E ∈ Σ: Therefore, #(φ, µ) yields the lowest err value; the model M such that err M = #(φ, µ) is one with the best factorization of matrix I. 
Infinite: Factorized Despite the generality of the above definition, we would like to address the factorized setting for a number of applications, the most prominent being probabilistic inference in hybrid graphical models [10]. Consider, for example, a joint distribution on the probability space R × {0, 1}. Here, it is natural to define weights for each random variable separately, prompting a factorized formulation of counting. More generally, in many robotic applications, such hybrid spaces are common [67]. The main idea then is to apply our definition for counting by measures to each variable independently, and construct a measure for the entire space by product measures [42]. A second technicality is that in the finite case, the set of literals true at a model was finite by definition. This is no longer the case. For example, suppose x is a real-valued variable in a language L, and M is a L-model that assigns Definition 19). Define the product measure µ * . = µ 1 × · · · × µ k on the measurable space (D 1 × · · · × D k , Σ 1 × · · · Σ k ) satisfying: 4 Intuitively, E 1 ∈ Σ 1 , . . . , E k ∈ Σ k capture sets of assignments, and the product measure considers the algebraic product of the weights on assignments to terms. As before, for Finally, precisely because the measures are defined on the domains of the terms, we obtain these for all of the satisfying interpretations using the construction [φ]. Example 24. We demonstrate the problem of finding the volume of a polyhedron, needed in the static analysis of probabilistic programs [62,20]. Suppose T = (L, M, ⊲) and φ ∈ L is as in Example 20, that is, φ defines a polyhedron. For every 0-ary function symbol x i ∈ {x 1 , . . . , x k } with domain D i = R, let Σ i be the set of all Borel subsets of R, and let µ i be the Lebesgue measure. Thus, for any E ∈ Σ i , µ i (E) gives the length of this line. Then, for the semiring (R, +, ×, 0, 1), the + operator sums the lengths of lines for each variable, and × computes the products of these lengths. Thus, #(φ, µ * ) is the volume of φ. To see this in action, suppose φ = (2x ≤ 5)∧(x ≥ 1)∧(0 ≤ y ≤ 2). Then M(φ) = x → n, y → m | n, m ∈ R, ⊲φ x,y n,m , that is, all assignments to x and y such that φ x,y n,m is a valid expression in arithmetic. Therefore, Example 25. We demonstrate probabilistic inference in hybrid models [10] by extending Example 24. Consider a probabilistic program: In English: X is drawn uniformly from [0,1] and Y ∈ {0, 1} is the outcome of a coin toss. If X > .6 and Y is not 1, the program terminates successfully. Suppose we are interested in the probability of DONE, which is expressed as the formula: Suppose Towards Compositionality A noteworthy feature of many logic-based knowledge representation formalisms is their compositional nature. In semiring programming, using the expressiveness of predicate logic, it is fairly straightforward to combine theories over possibly different signatures (e.g., propositional logic and linear arithmetic), as seen, for example, in SMT solvers [8]. A more intricate flavor of compositionality is when the new specification becomes difficult (or impossible) to define using the original components. This is a common occurrence in large software repositories, and has received a lot of attention in the AI community [53]. In this section, we do not attempt to duplicate such efforts, but propose a different account of compositionality that is closer in spirit to semiring programming. 
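Before turning to compositionality in detail, Examples 24 and 25 can be checked numerically for the special case where the formula describes an axis-aligned box, so each variable's satisfying set is an interval and the product measure reduces to a product of interval lengths. The sketch below is ours; in particular, it assumes the coin in Example 25 is fair, which the text suggests but does not state.

def box_volume(intervals):
    # Lebesgue measure per coordinate (interval length), combined with the product
    vol = 1.0
    for lo, hi in intervals:
        vol *= max(0.0, hi - lo)
    return vol

# phi = (2x <= 5) and (x >= 1) and (0 <= y <= 2):  x in [1, 2.5], y in [0, 2]
print(box_volume([(1.0, 2.5), (0.0, 2.0)]))        # 3.0, as in Example 24

# Example 25: X ~ Uniform[0,1], Y a (fair) coin, DONE iff X > 0.6 and Y != 1.
p_x = box_volume([(0.6, 1.0)])                     # uniform density 1 on [0,1]
p_y = 0.5                                          # P(Y = 0)
print(p_x * p_y)                                   # 0.2 = P(DONE)

For general polyhedra the variables are coupled, so per-variable intervals no longer suffice and dedicated volume-computation techniques are needed.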
It builds on similar ideas for CSPs [13], and is motivated by machine learning problems where learning (i.e., optimization) and inference (i.e., model counting) need to be addressed in tandem. More generally, the contribution here allows us to combine two semiring programs, possibly involving different semirings. For simplicity of presentation, we consider non-factorized and finite problems over distinct vocabularies. In essence, the Cartesian product for the semirings is extended for arbitrary theories and weight functions. The meaning of formulas rests on the property that L 1 and L 2 do not share atoms. 5 It is now easy to see that the counting for problem for T works as usual: that is, for any φ ∈ L, 1 , M 2 )). Example 29. We demonstrate a (simple) instance of combined learning and inference. Imagine a robot navigating a world by performing move actions, and believes its actuators need repairs. But before it alerts the technician, it would like to test this belief. A reasonable test, then, is to inspect its trajectory so far, and check whether the expected outcome of a move action in the current state matches the behavior of the very first move action. More precisely, the robot needs to appeal to linear regression to estimate its expected outcome, and query its beliefs based on the regression model. We proceed as follows. Environment 1 Let T 1 = (L 1 , M 1 , ⊲ 1 ) be the theory of real arithmetic with vocab(L 1 ) = {s 0 , s 1 , . . . , s k , a, b, e}. Suppose that by performing a move action, the robot's position changes from s i to s i+1 . Let φ 1 ∈ L 1 be as follows: The idea is that the values of s i are the explanatory variables in the regression model and s i+1 are the response variables, that is, the trajectory data is of the form {(s 0 , s 1 ), (s 1 , s 2 )}. In other words, models in M 1 interpret s i as given by the data, and models differ in their interpretation of a, b and thus, e. For the data in φ 1 , we would have a model M where e M = 0, a M = 1 and b M = 1, and so #(φ 1 , w 1 ) = 0. Composition Let (T , S, w) be the composition of the two environments with T = (L, M, ⊲) and S = (S, ⊕, 0). Suppose φ ∈ L is as follows: ). Then the robot can obtain the weight of repair and φ using: where, of course, the first argument is the error of the regression model and the second is 0 because φ ∧ repair is inconsistent. That can be contrasted to the count below: It is also easy to see that #(φ, w) = (0, .3). As in WMC [19], suppose the robot obtains the probability of a query q given φ using: where the division is carried out by ignoring the regression error. Then the probability of ¬repair given φ is 1. Thus, no repairs are needed. Example 30. We show that by combining theories, a natural semantics can be given to hybrid problems such as task and motion planning. Here, the concern is to integrate a symbolic high-level planner, described in logic, and geometric constraints; see [65] for terminology and notation. Suppose Z is the theory of integer arithmetic that interprets a motion planning space (C, N, p) where C = Z is the configuration space, N ⊂ C is an obstacle region, and p ∈ C is the initial pose of a robot's gripper. (For simplicity, we consider a one-dimensional setting.) The result of doing a motion t is the pose p + t. We axiomatize that x is reachable from y by following t without touching obstacles as: IsReachable(x,y,t) ≡ (x = y + t) ∧ ∀z(z ≤ t ⊃ (y + z N)). 
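Before the task-and-motion example is completed below, the composed count of Example 29 can be sketched numerically. The code is our own illustration: the small grid of candidate (a, b) values stands in for the models of environment 1, and the value 0.3 simply mirrors the weight the example reports for repair; neither reflects how the two environments would actually be evaluated by a solver.

data = [(0.0, 1.0), (1.0, 2.0)]                    # trajectory {(s0,s1), (s1,s2)}

def sq_error(a, b):
    # regression error of the model s_{i+1} = a*s_i + b on the trajectory
    return sum((s_next - (a * s + b)) ** 2 for s, s_next in data)

# Environment 1 over (R, min, +inf): best error among candidate (a, b) models.
env1 = min(sq_error(a, b) for a in (0.0, 0.5, 1.0) for b in (0.0, 1.0))

# Environment 2 over (R, +, 0): the weight the example reports for repair.
env2 = 0.3

# The composition acts coordinate-wise, so counts travel together as pairs.
def pair_plus(u, v):
    return (min(u[0], v[0]), u[1] + v[1])

print(pair_plus((float("inf"), 0.0), (env1, env2)))    # (0.0, 0.3), as in #(phi, w)

Dividing the second coordinates of two such pairs, as the example does, then yields the conditional probability while ignoring the regression error.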
Next, suppose T is a propositional theory that interprets a task planning space (S , A, i, g), where A is a set of actions, S a set of states, and i, g ∈ S are the initial and goal states. The task is to synthesize action sequences reaching g. However, in robotic applications, actions are predicated on geometric constraints; e.g., the precondition for pickup(o,p,t) -the action of picking up an object o wrt a trajectory t and the gripper's pose p -is the sentence: Interestingly, IsGripperFree is from the language of T but IsReachable(x,y,z) is from that of Z, and using our semantic setup, such complex systems can be easily interpreted. To describe an illustrative counting problem, suppose that for every Z-model M and T -model M ′ , (M, M ′ ) determines a combined task-motion plan of some length. A plan is valid iff g can be reached from i by following this plan wrt the domain's axioms (e.g., avoiding obstacle regions and satisfying action preconditions). Suppose plans are of the form t 1 , a 1 , . . . , t m , a n , where a i ∈ A and t j are motions. Suppose actions and motions incur costs, and the weight of a model is i cost(a i ) + j cost(t j ) wrt the plan it determines. Assuming Z ∪ T is a finite theory (for simplicity), the semiring (R, min, +, 0, 1) yields a cost optimal plan. 6 Solver Construction The upshot of semiring programming is that it encourages us to inspect strategies for a unified inferential mechanism [47]. This has to be done carefully, as we would like to build on scalable methodologies in the literature, by restricting logical theories where necessary. In this section, we discuss whether our programming model can be made to work well in practice. Let us consider two extremes: • Option 1: At one extreme is a solver strategy based on a single computational technique. Probabilistic programming languages, such as Church [39], have made significant progress in that respect for generative stochastic processes by appealing to Markov Chain Monte Carlo sampling techniques. Unfortunately, such sampling techniques do not scale well on large problems and have little support for linear and logical constraints. • Option 2: At the other extreme is a solver strategy that is arbitrarily heterogeneous, where we develop unique solvers for specific environments, that is, (T , S) pairs. Option 3: We believe the most interesting option is in between these two extremes. In other words, to identify the smallest set of computational techniques, and effectively integrate them is both challenging and insightful. This may mean that such a strategy is less optimal than Option 2 for the environment, but we would obtain a simpler and more compact execution model. To that end, let us make the following observations from our inventory of examples: • Finite versus infinite: variable assignments are taken from finite sets versus infinite or uncountable sets. • Non-factorized versus factorized: the former is usually an optimization problem with an objective function that is to be maximized or minimized. The latter is usually a counting problem, where we would need to identify one or all solutions. • Compositionality: locally consistent solutions (i.e., in each environment (T , S)) need to be tested iteratively for global consistency. Thus, Option 3 would be realized as follows: • Factorized problems need a methodology for effective enumeration, and therefore, advances in model counting [37], such as knowledge compilation, are the most relevant. 
For finite theories, we take our cue from the Problog family of languages [29,48], that have effectively applied arithmetic circuits for tasks such as WMC and MPE. In particular, it is shown in [49] how arbitrary semiring labels can be propagated in the circuit. See [27,28] for progress on knowledge compilation in CSP-like environments. For infinite theories, there is growing interest in effective model counting for linear arithmetic using SMT technology [20,10,11]. Like in [48], however, we would need to extend these counting approaches to arbitrary semirings. • For non-factorized problems, a natural candidate for handling semirings does not immediately present itself, making this is a worthwhile research direction. 7 Appealing to off-the-shelf optimization software [1] is always an option, but they embody diverse techniques and the absence of a simple high-level solver strategy makes adapting them for our purposes less obvious. In that regard, solvers for optimization modulo theories (OMT) [64] are perhaps the most promising. OMT technology extends SMT technology in additionally including a cost function that is be maximized (minimized). In terms of expressiveness, CSPs [58] and certain classes of mathematical programs can be expressed, even in the presence of logical connectives. In terms of a solver strategy, they use binary search in tandem with lower and upper bounds to find the maximum (minimum). This is not unlike DPLL traces in knowledge compilation, which makes that technology the most accessible for propagating arbitrary semiring labels. • Compositional settings are, of course, more intricate. Along with OMT, and classical iterative methods like expectation maximization [50], there are a number of recent approaches employing branch-and-bound search strategies to navigate between local and global consistency [34]. Which of these can be made amenable to compositions of SP programs remains to be seen however. Overall, we believe the most promising first step is to limit the vocabulary of the logical language to propositions and constants (i.e., 0-ary functions), which make appealing to knowledge compilation and OMT technology straightforward. It will also help us better characterize the complexity of the problems that SP attempts to solve. 8 At first glance, SP is seen to naturally capture #P-complete problems in the factorized setting, both in the finite case [37] and the infinite one [23]. In the non-factorized setting, many results from OMT and mathematical programming are inherited depending on the nature of the objective function and the domains of the program variables [46,8,64]. By restricting the language as suggested, the applicability of these results can be explored more thoroughly. Related Work Semiring programming is related to efforts from different disciplines within AI, and we discuss representative camps. In a nutshell, SP can be seen as a very general semantical framework, as noted in Figure 6. Statistical modeling Formal languages for generative stochastic processes, such as Church [39] and BLOG [56], have received a lot of attention in the learning community. Such languages provide mechanisms to compactly specify complex probability distributions, and appeal to sampling for inference. Closely related to such proposals are probabilistic logic programming languages such as Problog [21] that extends Prolog with probabilistic choices and uses WMC for inference [29]. 
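The point about propagating semiring labels through compiled circuits can be illustrated on a toy case. The sketch below is ours: the circuit is a hand-built smooth, deterministic, decomposable representation of the formula (a or b), and the literal-weight dictionaries are invented; the only claim is that one traversal answers model counting, probability, and MPE-style queries by swapping the semiring operations.

CIRCUIT = ("or",
           ("and", ("lit", "a"), ("or", ("lit", "b"), ("lit", "~b"))),
           ("and", ("lit", "~a"), ("lit", "b")))

def evaluate(node, weight, plus, times):
    # bottom-up pass: OR-nodes combine with plus, AND-nodes with times,
    # and leaves look up the weight of their literal
    if node[0] == "lit":
        return weight[node[1]]
    op = plus if node[0] == "or" else times
    value = None
    for child in node[1:]:
        v = evaluate(child, weight, plus, times)
        value = v if value is None else op(value, v)
    return value

add, mul = (lambda x, y: x + y), (lambda x, y: x * y)
ones = {"a": 1, "~a": 1, "b": 1, "~b": 1}
print(evaluate(CIRCUIT, ones, add, mul))              # 3: models of (a or b)

probs = {"a": 0.2, "~a": 0.8, "b": 0.5, "~b": 0.5}
print(round(evaluate(CIRCUIT, probs, add, mul), 2))   # 0.6: probability of (a or b)
print(evaluate(CIRCUIT, probs, max, mul))             # 0.4: weight of the most likely model

This is, in miniature, the algebraic model counting style of evaluation that the ProbLog line of work performs on much larger compiled circuits.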
In particular, a semiring generalization of Problog, called aProbLog [48], was the starting point for our work and employs the semiring variation of WMC for inference [49]. A recent extension of aProbLog, called kProbLog, by [59] is able to further combine several semirings and does not require factorized weights as it uses meta-functions w(a) = f (w(a 1 ), ..., w(a n )) to compute the weight w(a) of an atom from the weights of the atoms a 1 , ..., a n appearing in its proofs. But the kind of weight function and factorization used in kProbLog differs from the unfactorized weight function over the models used in the present paper. Nevertheless, kProbLog is able to represent tensors, compute kernel functions and perform algorithmic differentiation. It will be interesting to investigate whether kProbLog can be combined with semiring programming. As discussed before, SP generalized the formulation of algebraic model counting (AMC) [49], and in that sense, provides a semantic characterization for the sum product problem [4] to richer class of languages and models. But by giving up the propositional language, and in particular, the use of arithmetic circuits, we loose the tractability results and unified evaluation scheme offered by AMC. Of course, it is always possible to restrict and/or otherwise map infinitary languages to finite and decomposable grammars via abstraction. This has been investigated in the case of a recent continuous extension to WMC called weighted model integration [10,11], where by interpreting the pieces of a density function as propositions, propositional circuits are leveraged for inference. In an effort independent to ours 9 , it is shown how an algebraic extension to sum product networks [60] enable tractable problem solving [35], closely following the observations in [49]. In that work, a continuous extension is considered as well, but under the assumption that the specification of the weight function as well as the computation of the count can be factorized. The use of semirings in machine learning is not new to aProbLog, see e.g., [38], and programming languages such as Dyna [24]. Dyna is based on Datalog; our logical setting is strictly more expressive than Datalog and its extensions (e.g., non-Horn fragment, constraints over reals). Dyna also labels proofs but not interpretations, as would SP (thus capturing weighted model counting, for example). Constraints The constraints literature boasts a variety of modeling languages, such as Essence [36], among others [55,68]. (See [30], for example, for a proposal on combining heterogeneous solvers.) On the one hand, SP is more expressive from a logical viewpoint as constraints can be described using arbitrary formulas from predicate logic, and we address many problems beyond constraints, such as probabilistic reasoning. On the other hand, such constraint languages make it easier for non-experts to specify problems while SP, in its current form, assumes a background in logic. Such languages, then, would be of interest for extending SP's modeling features. A notable line of CSP research is by Bistarelli [13] and his colleagues [15,13]. Here, semirings are used for diverse CSP specifications, which has also been realized in a CLP framework [14]. In particular, our account of compositionality is influenced by [13]. 
Under some representational assumptions, SP and such accounts are related, but as noted, SP can formulate problems such as probabilistic inference in hybrid domains that does not have an obvious analogue in these accounts. Optimization Closely related to the constraints literature are the techniques embodied in mathematical programming more generally. There are three major traditions in this literature that are related to SP. Modeling languages such as AMPL [31] are fairly close to constraint modeling languages, and even allow parametrized constraints, which are ground at the time of search. The field of disciplined programming [40] supports features such as object-oriented constraints. Finally, relational mathematical programming [2] attempts to exploit symmetries in parametrized constraints. From a solver construction perspective, these languages present interesting possibilities. From a framework point of view, however, there is little support for logical reasoning in a general way. Knowledge representation Declarative problem solving is a focus of many proposals, including ASP [17], model expansion [57,66], among others [18]. These proposals are (mostly) for problems in NP, and so do not capture #P-hard problems like model counting and WMC. Indeed, the most glaring difference is the absence of weight functions over possible worlds, which is central to the formulation of statistical models. Weighted extensions of these formalisms, e.g., [5,54], are thus closer in spirit. The generality of SP also allows us to instantiate many such proposals, including formalisms using linear arithmetic fragments [10,64]. Consider OMT for example. OMT can be used to express quantifier-free linear arithmetic sentences with a linear cost function, and a first-order structure that minimizes the cost function is sought. From a specification point of view, SP does not limit the logical language, does not require that objective functions be linear, and a variety of model comparisons, including counting, are possible via semirings. Compositionality in SP, moreover, goes quite beyond this technology. Finally, there is a longstanding interest in combining different (logical) environments in a single logical framework, as seen, for example, in modular and multi-context systems [53,26]. In such frameworks, it would be possible to get a ILP program and ASP program to communicate their solutions, often by sharing atoms. In our view, Section 4 and these frameworks emphasize different aspects of compositionality. The SP scheme assumes the modeler will formalize a convex optimization problem and a SAT problem in the same programming language since they presumably arise in a single application (e.g., a task and motion planner); this allows model reuse and enables transparency. In contrast, modular systems essentially treat diverse environments as black-boxes, which is perhaps easier to realize. On the one hand, it would be interesting to see whether modular systems can address problems such as combined inference and learning. On the other hand, some applications may require that different environments share atoms, for which our account on compositionality could be extended by borrowing ideas from modular systems. Conclusions In a nutshell, SP is a framework to declaratively specify four major concerns in AI applications: • logical reasoning; • non-standard models; • discrete and continuous probabilistic inference; • discrete and continuous optimization. 
Among its strengths, SP is universal (in the above sense) and generic (in that it allows instantiations to particular logical languages and semirings). Thus, we believe SP represents a simple, uniform, modular, and transparent approach to the model-building process of complex AI applications. SP comes with a rigorous semantics that gives meaning to its programs. In that sense, we expect future developments of SP to follow constraint programming languages and probabilistic programming languages in providing more intricate modeling features which, in the end, reduce to the semantics proposed in this paper. Perhaps the most significant aspect of SP is that it also allows us to go beyond existing paradigms: the richness of the framework admits novel formulations that combine theories from these different fields, as illustrated by the combined regression and probabilistic inference example. In the long term, we hope SP will contribute to bridging learning and reasoning.
Gallic Acid Induces Necroptosis via TNF–α Signaling Pathway in Activated Hepatic Stellate Cells Gallic acid (3, 4, 5-trihydroxybenzoic acid, GA), a natural phenolic acid widely found in gallnuts, tea leaves and various fruits, possesses several bioactivities against inflammation, oxidation, and carcinogenicity. The beneficial effect of GA on the reduction of animal hepatofibrosis has been indicated due to its antioxidative property. However, the cytotoxicity of GA autoxidation causing cell death has also been reported. Herein, we postulated that GA might target activated hepatic stellate cells (aHSCs), the cell type responsible for hepatofibrosis, to mitigate the process of fibrosis. The molecular cytotoxic mechanisms that GA exerted on aHSCs were then analyzed. The results indicated that GA elicited aHSC programmed cell death through TNF–α–mediated necroptosis. GA induced significant oxidative stress through the suppression of catalase activity and the depletion of glutathione (GSH). Elevated oxidative stress triggered the production of TNF–α facilitating the undergoing of necroptosis through the up-regulation of key necroptotic regulatory proteins TRADD and receptor-interacting protein 3 (RIP3), and the inactivation of caspase–8. Calmodulin and calpain–1 activation were engaged, which promoted subsequent lysosomal membrane permeabilization (LMP). The TNF–α antagonist (SPD–304) and the RIP1 inhibitor (necrostatin–1, Nec–1) confirmed GA-induced TNFR1–mediated necroptosis. The inhibition of RIP1 by Nec–1 diverted the cell death from necroptosis to apoptosis, as the activation of caspase 3 and the increase of cytochrome c. Collectively, this is the first report indicating that GA induces TNF signaling–triggered necroptosis in aHSCs, which may offer an alternative strategy for the amelioration of liver fibrosis. Introduction Gallic acid (3,4,5-trihydroxy benzoic acid, GA), a natural antioxidant, reportedly undergoes a two-step, one-electron transfer autoxidation to generate GA radicals [1]. The oxidation of GA reportedly initiates at the para-hydroxyl site of a benzene ring to generate semiquinone free homotypic interaction motif (RHIM) domains to form a functional amyloid signaling complex IIb (necrosome) to undergo necroptosis. Additionally, the induction of ROS and calcium-induced lysosomal membrane permeabilization (LMP) through calpain activation has been suggested to critically participate in the execution of programmed necrosis to disrupt cell integrity [26,27]. Herein, we investigated the cytotoxic effect of GA on aHSCs and the underlying molecular mechanisms. Results indicated that GA induced oxidative stress in aHSCs through the inhibition of catalase activity and the promotion of intracellular ROS, lipid peroxides, and oxidative DNA levels. These stresses in turn upregulated the expression of TNF-α, reduced the content of intracellular GSH, and blocked the activation of caspase-8 leading to the initiation of necroptosis characterized by the upregulation of TRADD and RIP3, and the engagement of lysosomal membrane permeabilization modulated by elevated intracellular calcium levels and the activation of calmodulin and calpain 1. In addition, GA-induced necroptosis can be diverted to apoptosis by inhibition of RIP1 activity. These results may offer an alternative strategy for the amelioration of hepatic fibrosis. 
Primary hepatic and hepatic stellate cells isolation and culture Primary hepatic cells (HCs) and hepatic stellate cells (HSCs) were prepared from Sprague Dawley rat liver as described by Seglen [28] and Kawada et al. [29], respectively. This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of Taiwan Animal Protect Act. The protocol was approved by the Committee on the Ethics of Animal Experiments of National Chi Nan University (Permit Number: 950102). Carbon dioxide was used for euthanasia, and all efforts were made to minimize suffering. Briefly, the liver was perfused, digested with pronase and collagenase, for 15 min at 37°C at a flow rate of 20 ml/min. and then excised. After further digestion with pronase and collagenase, the resulting suspension was filtered and centrifuged 460×g on a 11% (v/v) Nycodenz cushion (Sigma, St. Louis, MO, USA) to isolate HSCs in the upper whitish stellate cell-enriched layer. Resuspended pellets were cultured in DMEM supplemented with 10% FBS and antibiotics (70 mg L -1 penicillin and 100 mg L -1 streptomycin) at 37°C with a humidified atmosphere of 5% CO 2 . HSCs were activated for 6 passages and used throughout the study. For hepatic cells, after perfusion with collagenase, the liver was excised and dispersed cells in L-15/BSA, followed by sedimentation at unit gravity for 20 min. The supernatant cell suspension was filtered through gauze. The filtrate was washed twice with HBS by centrifugation at 50×g for 45 s to remove debris, damaged cells, and non-parenchymal cells. Before seeding, cells were washed once with the culture medium. The range of cell yields was from 4×10 8 to 6×10 8 with a survival rate of approximately 95%. Cell viability assay Cell viability was determined by MTT (3-(4,5-Dimethylthiazol-2-yl)-2,5-Diphenyltetrazolium Bromide) assay. Briefly, activated HSCs (aHSCs) were initially plated at a density of 1×10 4 cells per well in 96-well plates for 24 hrs. The cells were then incubated with designated concentrations of GA and GA analogous (S1 Fig.) for 24 hrs at 37°C. MTT (10 μL, 0.5 g/L) solution was then added to each culture well and incubated for another 4 hrs at 37°C. The MTT-formazan crystals produced by viable cells were dissolved by DMSO. The absorbance at 570 nm was monitored with a microplate reader (Bio-Rad, CA, USA). All experiments were performed in triplicate, and the results of treated cells were shown in percentage of untreated control cells. Cell proliferation assay Cells cultured in serum-free DMEM were plated into 96-multiwell plates (5000 cells/well). After incubation for 24 hrs, cells were treated with GA of 0, 25, 50 75 μM. Cell proliferation was measured by using the BrdU cell proliferation assay kit as manufacturer's instructions (Cell Signaling Technology, Denvers MA). The incorporation of the pyrimidine analogue 5-bromo-2'-deoxyuridine during DNA synthesis in proliferating cells was monitored at 370 nm. All experiments were performed in triplicate, and the results of treated cells were shown in percentage of untreated control cells. Cell cycle analysis At the end of incubation (24 hrs) with GA, the activated HSCs were washed twice with PBS, collected with 0.25% trypsin-EDTA, fixed with ice-cold alcohol (1 mL 75% (v/v)) for 12 hrs at -20°C, and then centrifuged at 380×g for 5 min at room temperature. 
Cell pellets were treated with l mL of cold staining solution containing 20 μg/mL of propidium iodide (PI) and 20 μg/mL of RNase A, and incubated for 15 min in darkness at room temperature. The samples were analyzed by FACSCalibur system (BD Biosciences, Franklin Lakes, NJ, USA) using Cell-Quest software. Data are representative of at least three independent experiments. Lactate dehydrogenase (LDH) release assay The activity of lactate dehydrogenase (LDH) was measured colorimetrically by LDH assay kit as manufacturer's instructions (Abcam, Cambridge, UK). Briefly, aHSCs were incubated with designated concentrations of GA for 24 hrs at 37°C. The activity of LDH in culture medium was measured spectrophotometrically recording the rate of change in NADH concentration at a wavelength of 450 nm after interaction with a dye. A NADH calibration curve was constructed to determine LDH activity. One unit of LDH refers to the catalyzation of the conversion of lactate to pyruvate to generate 1.0 μmol NADH per min at 37°C. Analysis of reactive oxygen species (ROS) and hydrogen peroxide ROS was determined using a commercial DCFDA-cellular ROS detection assay kit as manufacturer's instructions (Abcam, Cambridge, UK). Briefly, aHSCs were plated on a 96-well plate (2.5×10 4 cells/well). After overnight attachment, cells were treated with GA at designated concentration for 6 hrs, followed by staining with cell permeant reagent, 2',7'-dichlorofluorescein diacetate (DCFDA) for 45 min at 37°C. Deacetylated DCFDA were fluorescently determined after oxidized by ROS to form 2', 7'-dichlorofluorescin (DCF) with Ex 495 nm /Em 529 nm. Culture medium and intracellular hydrogen peroxide (H 2 O 2 ) of aHSCs were analyzed by using a fluorometric hydrogen peroxide kit as manufacturer's instructions (Cayman, Ann Arbor, MI). The assay is based on the conversion of 10-Acetyl-3,7-dihydroxyphenoxazine (ADHP) to highly fluorescent resorufin in the presence of horseradish peroxidase (HRP) and H 2 O 2 . The fluorescence of resorufin was read at Ex 530 nm/Em 590 nm. All experiments were performed in triplicate, and the results of treated cells were shown in percentage of untreated control cells after background subtraction. Detection of DNA oxidative damage The detection of DNA oxidative damage was determined by the DNA Damage EIA kit (Cayman, USA) using Anti-8-OHdG monoclonal antibody to competitively bind 8-OHdG. The immune complexes (anti-8-OHdG and free 8-OHdG) were washed away, while antibodies that caught by immobilized 8-OHdG were detected by a horseradish peroxidase (HRP) conjugated secondary antibody, and the absorbance was measured at 415 nm. Detection of intracellular glutathione The concentration of glutathione and oxidized glutathione (GSH/GSSG) was determined by glutathione assay kit as manufacturer's instructions (Cayman, Ann Arbor, MI, USA). The cell pellet is homogenized in cold phosphate buffer (50 mM, pH 6-7, 1 mM EDTA). The supernatant of the homogenates (10000×g for 15 min) was used to determine GSH/GSSG by an enzymatic recycling method. The protein concentration was determined by the Bradford method. Analyses of lipid peroxidation Intracellular lipid peroxidation of aHSCs was fluorescently (ex. 515 nm; em. 553 nm) measured by the determination of the MDA-TBA complex with fluorometeror (Thermo Scientific) using HPLC with LiChrospher column (RP-18, 5μm, Merck), mobile phase of 25 mM Na 2 HPO 4methanol (58/42, v/v) at a flow rate of 1 ml/min. The complex of MDA-TBA was eluted in 4.8 min. 
A MDA-TBA complex standard curve was constructed for calibration. Additionally, the lipid peroxidation (LPO) assays were performed using a Lipid Hydroperoxide Assay kit (Cayman Chemical). Lipid hydroperoxides were extracted into chloroform and measured by the redox reactions with ferrous ions. Chromogenic reaction was performed at room temperature for 5 min, followed by reading the mixture at 500 nm. The calibration curve was constructed using 13-Hydroperoxy-octadecadienoic acid. Lipid hydroperoxide was expressed as nmol/mg protein. Catalase assay The activity of catalase (CAT) was determined by catalase assay kit as manufacturer's instructions (Cayman, Ann Arbor, MI, USA). The peroxidatic function of CAT was used for activity evaluation. Cell lysates were incubated with assay buffer, methanol, and H 2 O 2 for 20 min at room temperature in dark. Potassium hydroxide was utilized to terminate the reaction, followed by the addition of 4-amino-3-hydrazino-5-mercapto-1,2,4-trizazole (Purpald) as chromogen to interact with the produced formaldehyde for 10 min. The reaction mixture was measured spectrophotometrically at 540 nm. CAT activity was expressed as nmol/min/mL. All experiments were performed in triplicate, and the results of treated cells were shown in percentage of untreated control cells. Catalase transfection Transfection was performed using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) as manufacturer's instructions. The plasmid containing the encoded sequence for catalase expression (pcDNA3, Invitrogen, USA) was amplified in competent cells (Invitrogen, Grand Island, USA). Plasmids and Lipofectamine were mixed in Reduced Serum Medium (Invitrogen), and incubated for 20 min at room temperature. For internal control, cells were transfected with empty pcDNA vector. Determination of lysosomal membrane permeabilization (LMP) The aHSCs lysosomal membrane stability was determined by the redistribution of the fluorescent dye, acridine orange [30]. After staining, cells were washed twice with fresh medium at 50 ×g for 5 min. to remove excess dye, followed by fluorometrically monitored at Ex495 nm/Em 530 nm by an Olympus IK71 fluorescent microscope (Olympus, Tokyo, Japan). Lysosomal membrane leakiness was determined as cytosolic green fluorescence induced by acridine orange released from lysosomes. Calcium assay The concentration of intracellular Ca 2+ was measured by calcium assay kit (Cayman, Ann Arbor, MI, USA) as manufacturer's instructions. The assay is based on the formation of o-cresolphthalein-calcium in alkaline condition, and the produced purple complexes were monitored at absorbance of 575 nm. Statistic The data has been analyzed by Sigma plot version 9.0. Results are presented as mean±SD for individual experiments. Statistical differences (experiment vs. control) were calculated by student's t-test and P<0.05 was considered as statistically significant. Results Gallic acid induces oxidative stress leading to significant cytotoxic and antiproliferative effects on aHSCs GA and its analogues with different resonance states that may cause distinct levels of oxidative stress were used to explore GA-induced cytotoxic and antiproliferative effects on aHSCs. The results of the MTT assay indicated that GA and analogues such as pyrogallol (P) and 5-Hydroxydopamine hydrochloride (H) demonstrated dose-dependent cytotoxic effects (25,50, and 75 μM) on aHSCs after 24 hrs incubation (Fig. 
1A), with an EC 50 value of 30.5±1.7, 41.8±1.6, and 35.0±1.0 μM, respectively, while other analogues did not. GA also held significant antiproliferative effects (P<0.05) on aHSCs as determined by analyzing the newly synthesized DNA of dividing cells through the BrdU assay. A dose-dependent reduction (55.3±2.3 and 66.9±8.4%) in cell proliferation was observed after 24 hrs incubation at GA concentrations of 50 and 75 μM, respectively (Fig. 1B). Notably, in contrast to aHSCs GA showed less cytotoxicity on quiescent HSCs (qHSCs) and no cytotoxicty on normal hepatocytes at GA concentrations of 25, 50, and 75 μM after 24 hrs of incubation (Fig. 1C). The cell viability of qHSCs was almost 10 times higher than that of aHSCs (73.4% vs. 7.9%) at GA 75μM after 24 hrs of incubation (S2 Fig.). Oxidative stress induced by GA and GA analogues was further investigated by analyzing ROS formation. The levels of hydrogen peroxide (H 2 O 2 ) in aHSC culture medium was determined after treatment with GA and GA analogues at 25, 50, and 75 μM. Increased levels of H 2 O 2 were found in the GA, P, and H treated groups ( Fig. 2A). GA also elevated intracellular content of H 2 O 2 (Fig. 2B). Accumulated intracellular ROS (e.g. hydroxyl and peroxyl radicals) determined by DCFDA cellular ROS detection assay was also observed in the GA, P, and H treated groups (Fig. 2C). In addition, GA-induced lipid peroxidation, and oxidative DNA in aHSCs were revealed, as evidenced by dose-dependent formation of MDA (Fig. 2D), lipid hydroperoxides (Fig. 2E), and 8-oxodG, (Fig. 2F), respectively. Intracellular GSH concentration was also decreased with the increase of GA (Fig. 2G). These results suggest that GA induces remarkable oxidative stress in aHSCs. Besides, GA analogues like P and H generated significant amount of ROS intracellularly and in culture medium, and induced remarkable cytotoxicity, whereas other analogues induced no or lower levels of ROS and cytotoxicity (Fig. 1A). These outcomes might suggest that the involvement of oxidative stress in cell demise was chemical structure specific. Moreover, significant cytotoxicity was observed in aHSCs but not in normal hepatocytes after the treatment of GA, which could be due to the decreased antioxidative activity in aHSCs [19]. The accumulation of GA-induced H 2 O 2 in aHSCs could be resulted from impaired intracellular antioxidant system. To further investigate this assumption, the effect of antioxidant system on cell survivability was then determined (Fig. 3). Reagents such as deferoxamine (DFX) (a ferric iron chelator to limit Fenton−like reaction), superoxide dismutase (SOD), and catalase (CAT) were used to reduce oxidative stress. DFX chelates ferric iron to retard Fenton's reaction and the subsequent radical generation. SOD catalyzes the dismutation of superoxide to oxygen and hydrogen peroxide. Catalase catalyzes the decomposition of hydrogen peroxide to water and oxygen. Activated HSCs were initially incubated with GA, followed by the addition of microplate reader after 24 hrs of incubation. (E) Lipid peroxidation products, lipid hydroperoxides, were determined with or without inhibitors of TNF−α and RIP1, SPD-304 (2μM) and Nec-1 (2μg/mL), respectively. (F) Oxidized DNA (8-OH-dG), and (G) total GSH contents were determined. Data were expressed as mean±SD from three different experiments. The asterisk (*) indicates a significant difference from control group (* P<0.05, **P<0.01). 
doi:10.1371/journal.pone.0120713.g002 antioxidants at different time intervals (0, 0.5, 1, and 2 hrs) after GA treatment. After 24 hrs of incubation, the cell viability was determined. Fig. 3A indicates that group treated with catalase showed the greatest cell survival promoting effect compared to other antioxidants. Group treated with DFX showed reduced cytotoxic effect in the first two time periods (0 and 0.5 hr) presumably due to the suppression of hydroxyl radical production catalyzed by iron. However, at the late time period (1 and 2 hrs), the cytotoxcity of DFX and GA co−treatment group was similar to that of GA alone, suggesting the critical role of H 2 O 2 in cytotoxicity. There were significant cytotoxicity and no rescuing effect observed in the groups treated with SOD probably because of the accumulation of H 2 O 2 resulted by the catalyzation of superoxides. On the other hand, cell survival was significantly promoted in groups treated with catalase, indicating the involvement of H 2 O 2 in cytotoxicity. Improved survivability of aHSCs at several levels of GA treatment (25, 50, and 75 μM) was maintained by transducing the catalase genes (Fig. 3B). A significant 35.1% and 25.7% recovery (P<0.05) at GA concentrations of 50 and 75 μM, respectively, was achieved. Furthermore, the inhibitory potency of GA on the catalase activity was studied. As displayed in Fig. 3C, hepatocytes possess higher catalase activity than that of aHSCs under normal conditions. With the addition of GA (25 and 50 μM), the catalase activity of aHSCs was suppressed dose-dependently, whereas the activity of hepatocytes was promoted at higher GA concentrations. These findings suggest that catalase is critical to the survival of aHSCs insulted by GA-induced oxidative stress. It has been reported that restricted catalase activity shows in HSCs once being activated [31]. This could likely make aHSCs more vulnerable to oxidative stress than normal hepatocytes. Gallic acid induces TNF−α mediated programmed necrosis in aHSCs The GA-induced cytotoxic effect on aHSCs was observed in dose-dependent manners (Fig. 1A). We then attempted to further reveal the molecular mechanisms by which GA mediated the death of aHSCs. Our cell cycle analysis showed that GA did not provoke significant apoptotic effects on aHSCs (Fig. 4A, S3 Fig.). The sub G1 phase showed slight change after GA treatment (25, 50, and 75 μM). However, LDH release (P<0.05) appeared with the increase in GA concentrations (25,50, and 75 μM) (Fig. 4B). This dose−dependent LDH release implies the disruption of the plasma membrane and subcellular organelles. Thus, GA might likely mediate a programmed necrotic effect, necroptosis, on aHSCs. It is known that TNF−α pathway has been suggested to be associated with necroptosis, and RIP1 is one of key factors of necroptosis. The TNF−α antagonist, SPD-304, and RIP1 inhibitor, Nec-1, were then used to examine GA-induced programmed necrotic cell death. The addition of SPD-304 and Nec-1 significantly rescued the survivability of aHSCs (Fig. 4C), reduced the production of lipid hydroxides (Fig. 2E), and increased intracellular GSH (Fig. 2G), indicating the involvement of necroptosis in GA-induced programmed cell death. It is suggested that the activation of RIP3 and TRADD are critical elements of TNF signaling−mediated necroptosis [32]. As shown in Fig. 4D, GA induced substantial TNF−α release from aHSCs, which could likely elicit the downstream activation of necroptosis. 
Additionally, RIP3, the trigger of necroptosis in the TNF−α pathway, along with the up-regulated expression of TRADD and the blocked caspase-8 activity, engages the effector mechanisms of necroptosis [26]. The results of immunoblotting analysis revealed that with the whole lysates of aHSCs, GA significantly up-regulated TRADD and p−RIP3 (1.4 and 1.3−fold, respectively) and down-regulated the activation of caspase-8 (Fig. 4E). The co−treatment of GA (75 μM) and SPD−304 (2 μM), as expected, down−regulated TRADD almost 2−fold (w/o inhibitor vs. w/ inhibitor, 1.41 vs. 0.73) and p−RIP3 1.4−fold (1.32 vs. 0.99) compared to GA alone (Fig. 4E), and Based on these findings, GA could likely induce necroptosis partly through the actions of activation of TNF−α pathway, suppression of pro-caspase 8 activation, and depletion of intracellular GSH. Buthionine sulphoximine (BSO), an inhibitor of γ-glutamylcysteine synthetase (γ-GCS) to deplete intracellular GSH, was used in conjunction with TNF−α to investigate whether the factors associated with necroptosis could be provoked. As shown in Fig. 4F, BSO alone could significantly elicit the phosphorylation of RIP3 but could not upregulate other factors associated with necroptosis, e.g. TRADD. On the other hand, the combinatory effects of BSO and TNF−α significantly promoted the activation of RIP3 and TRADD. These results might explain partly the necroptotic mechanisms that GA exerted on aHSCs. These results indicate that GA− induced Ca 2+ accumulation was through death receptor (DR)−elicited signaling. Molecules that associated with calcium-modulated necroptosis such as intracellular calcium concentration regulator, calmodulin (CaM), and calcium-activated neutral protease, calpain 1, were then examined. The active form of calpain executes lysosomal membrane permeabilization (LMP), which causes lysosome rupture and the spillage of acidic lysosomal contents to mediate cytoplasm acidification and degradation [33]. The results of the immunoblotting analysis indicate that GA remarkably up−regulated the expression of CaM and calpain 1, but the elevation was suppressed by the treatment of SPD−304 and Nec−1 (Fig. 5B), suggesting that GA triggers the process of necroptosis through the modulation of calcium signaling. Gallic acid promotes increased intracellular calcium levels and calpain− 1−modulated LMP in aHSCs Next, calpain−induced lysosomal membrane permeabilization (LMP) during GA−induced necroptosis was investigated by lysosomal staining with acridine orange. As shown in Fig. 5C, low level of orange fluorescence was observed in cells treated with GA alone, whereas increased orange fluorescence appeared upon the addition of SPD−304 and Nec−1, indicating the presence of intact acid organelle such as lysosome, after the treatment of inhibitors. These results indicate that either blocking TNF−α signalling or RIP1 remarkably arrested the process of GAinduced LMP, which rescued the subsequent cell viability (Fig. 4B). Collectively, our data demonstrated that GA−induced TNF−α −mediated necroptosis in aHSCs was elicited by triggering RIP1 and RIP3 necroptosome, followed by the modulation of Ca 2+ signaling to execute LMP through calpain1 activation. Inhibition of RIP1 activities diverts GA−induced necroptosis to apoptosis It has been reported that necroptosis is reciprocal to apoptosis when the apoptotic signaling is blocked [34]. Therefore, we attempted to study whether blocking the GA-induced signals of necroptosis could divert cell death toward apoptosis. 
Various concentrations of GA were concurrently added with Nec−1 to aHSCs. , no significant activation of caspase 3 and cytochrome c was observed. Based on these results, the activity of RIP1 was required in GA−triggered aHSC necroptosis. In addition, the diversion of GA−triggered necroptosis to apoptosis verified the reciprocal relationship of these two cell death processes to ensure cell termination under stimuli conditions [34]. Discussion In the present study, we aimed to investigate the molecular mechanisms of programmed cell death that GA exerted in active hepatic stellate cells, a key factor associated with hepatic fibrosis. We revealed that GA promoted necroptotic cell death through the induction of TNF-αmediated necroptosis. GA induced significant oxidative stress as observed by the depletion of intracellular GSH, the formation of intracellular aldehyde (e.g., malondialdehyde, MDA) and hydrogen peroxide, as well as ROS accumulation, which led to subsequent cytotoxicity. It is intriguing that the GA esters, methyl 3,4,5,-trihydroxy-benzoate (M) (-COOCH 3 at C 1 ) and propyl 3,4,5-trihydroxy-benzoate (PG) (-COO(CH 2 ) 2 CH 3 at C 1 ), showed much lower levels of ROS formation and cytotoxicity than those of GA (-COOH at C 1 ), pyrogallol (P) (-H 2 at C 1 ), and 5-Hydroxydopamine hydrochloride (H) (-C 2 H 4 NH 2 at C 1 ). Presumably GA, P, and H are in more resonance forms than M and PG leading to higher levels of ROS formation and cytotoxicity resulted. GA-induced oxidative damage and cytotoxic effects were low in hepatic cells but were high in aHSCs, which could be attributed to the activity of antioxidative systems, such as catalase, a critical regulator of intracellular ROS levels. Hepatocytes hold potent catalase activity and can eliminate GA−induced oxidative stress displaying enhanced cell survivability. Suppressed catalase activity has been addressed in hepatoma cells and activated HSCs [31,35]. Mechanisms involved in decreasing catalase activity have been reported in hepatoma cells due to the genomic methylation of CpG sites in the catalase promoter [21,35], which might also apply to aHSCs during transformation. Our results indicated that GA significantly promoted the secretion of TNF-α and the production of RIP1, reduced intracellular GSH levels, and inhibited the activation of caspase-8 in aHSCs. These observations may suggest the involvement of necroptosis. Moreover, GA also induced several cellular events such as intracellular Ca 2+ influx, lipid peroxidation, and lysosomal disruption (LMP) by Ca 2+ influx activated calpains [36], which are all typical characteristics of necroptosis. Inactive form of caspase-8 integrated with RIP3 leads to the subsequent mobilization of calpain and the promotion of LMP, causing the loss of organelle and cell integrity, and finally leading to necroptosis. These phenomena summarized in Fig. 7 indicate the processing of necroptosis in GA treated aHSCs. The reduction of intracellular GSH levels caused by GA-induced oxidative stress could be essential to the diversion of programmed cell death from apoptosis, which is reportedly occurred in several GA-induced cell deaths, to necroptosis. Reduced levels of GSH have been seen to repress the undergoing of apoptosis. Direct depletion of GSH under pro-oxidative condition has been indicated to prevent CD95-and TNFR1-mediated hepatocyte apoptosis in vivo [37]. 
The oxidized GSH, GSSG, is also shown to blockade apoptosome-mediated caspase-50, and 75 μM) with or without SPD-304 (2 μM) or Nec-1 (2 μg/mL) for 24 hrs. Representative immunoblots showed the expression of CaM and calpain 1. βactin was used as an internal control. * P<0.05, **P<0.01. (C) GA induces lysosomal membrane permeabilization in aHSCs. The effects of GA on lysosomal stability. The activated HSCs were treated with GA (25 and 50 μM) and with or without SPD-304 (2 μM) or Nec-1 (2 μg/mL) for 24 hrs. The activated HSCs were stained with aridine orange to determine the integrity of the lysosomes. (Scale bars, 10μm). doi:10.1371/journal.pone.0120713.g005 3 activation [37]. Further, the activation of caspase such as caspase-8 is suggested under a reducing environment. This caspase requires antioxidants at death-inducing signaling complex (DISC) for activation [37]. Therefore, the accumulation of intracellular hydrogen peroxide and the depletion of GSH induced by GA could likely impair the activation of caspase-3 and 8, leading to necroptosis in aHSCs. The reciprocal backup relationship of apoptosis and necroptosis has been addressed [34] to ensure cell termination under stimuli conditions. Inactivation of RIP1 by Nec-1 diverts GAtriggered necroptosis to apoptosis as evidenced by the increased level of cytocrome c and the activation of caspase-3. RIP1 plays several roles in the promotion of necroptosis. RIP1 is not only an element of necrosome, but also a mediator in phosphorylating an anti-apoptotic factor, STAT3, at Ser727, which enables the activated molecule to interact with GRIM-19 resulting in the subsequent translocation to mitochondria [38]. This leads to an apoptosis-deficient situation, and provokes TNF-induced necroptosis. RIP1 has been seen to mediate caspase inhibitor-induced TNF-α production [39], and TNF-induced ROS generation [40] to regulate the progression of necroptosis. Thus, the inhibition of RIP1 by Nec-1 would restrict the undergoing of necroptosis. On the other hand, Nec-1, not an antioxidant, is reported to be able to resume intracellular reducing environment due to the suppressive ability in GSH depletion and ROS formation [41]. Accordingly, in addition to be a RIP1 inhibitor, Nec-1 may exercise its "antioxidative" character to halt necroptosis, which usually dysregulate cellular redox metabolome through the depletion of NAD + , NADPH, and GSH [42]. Conclusion GA-induced apoptosis has been reported elsewhere, however, GA elicits necroptosis in aHSCs was first reported herein. GA elicits TNF signaling pathway that promotes necroptosis in aHSCs. The oxidative stress induced by GA may trigger the production of TNF-α, which evokes the downstream signaling of necroptosis, including the formation of necrosome (activation of RIP1, RIP3, and inactivation of caspase-8) and the subsequent events such as intracellular Ca 2+ influx, lipid peroxidation, and lysosomal disruption (LMP) by Ca 2+ influx activated calpains. This is the first report that indicates GA-induced necroptosis in aHSCs, which may provide an alternative strategy for the amelioration of liver fibrosis, in addition to the anti-oxidative activity of this phenolic compound. The intermittent molecules of TNF-α signaling pathway responsible for TNF-α-mediated necroptosis have not yet been clearly asserted. The examination of GA-induced cell death signals that propagate further will be focused on the
C3–6 Laminoplasty for Cervical Spondylotic Myelopathy Maintains Satisfactory Long-Term Surgical Outcomes Study Design Prospective cohort study. Objective To clarify long-term surgical outcomes of C3–6 laminoplasty preserving muscles attached to the C2 and C7 spinous processes in patients with cervical spondylotic myelopathy (CSM). Methods Twenty patients who underwent C3–6 open-door laminoplasty for CSM and who were followed for 8 to 10 years were included in this study. Myelopathic symptoms were assessed using Japanese Orthopaedic Association (JOA) score. Axial neck pain was graded as severe, moderate, or mild. C2–7 angle was measured using lateral radiographs of the cervical spine before surgery and at final follow-up. Results Mean JOA score before surgery (11.7) was significantly improved to 15.2 at the time of maximum recovery (1 year after surgery), declining slightly to 14.9 by the latest follow-up. Late deterioration of JOA score developed in eight patients, but was unrelated to the cervical spine lesions in each case. No patient suffered from prolonged postoperative axial neck pain at final follow-up. The mean C2–7 angle before surgery (13.8 degrees) significantly increased to 19.2 degrees at final follow-up. Conclusions C3–6 laminoplasty preserving muscles attached to the C2 and C7 spinous processes in patients with CSM maintained satisfactory long-term neurologic improvement with significantly reduced frequencies of prolonged postoperative axial neck pain and loss of C2–7 angle after surgery. Introduction Some problems can be associated with cervical laminoplasty such as axial neck pain and deterioration of sagittal alignment of the cervical spine 1 ; we have previously reported that C3-6 laminoplasty preserving muscles attached to the C2 and C7 spinous processes can significantly reduce the incidence of postoperative axial neck pain and kyphotic deformity in the short term. [2][3][4] Moreover, our prospective 5-year follow-up study revealed that our C3-6 laminoplasty can maintain satisfactory neurologic recovery with significantly decreased frequencies of postoperative prolonged axial neck pain and loss of cervical lordosis in the medium term. 5 However, a major concern for C3-6 laminoplasty is whether it can maintain neurologic recovery over a long period. In addition, we must also examine whether the preservation of muscles Keywords ► cervical spondylotic myelopathy ► C3-6 laminoplasty ► long-term outcomes ► axial neck pain ► kyphotic deformity Abstract Study Design Prospective cohort study. Objective To clarify long-term surgical outcomes of C3-6 laminoplasty preserving muscles attached to the C2 and C7 spinous processes in patients with cervical spondylotic myelopathy (CSM). Methods Twenty patients who underwent C3-6 open-door laminoplasty for CSM and who were followed for 8 to 10 years were included in this study. Myelopathic symptoms were assessed using Japanese Orthopaedic Association (JOA) score. Axial neck pain was graded as severe, moderate, or mild. C2-7 angle was measured using lateral radiographs of the cervical spine before surgery and at final follow-up. Results Mean JOA score before surgery (11.7) was significantly improved to 15.2 at the time of maximum recovery (1 year after surgery), declining slightly to 14.9 by the latest follow-up. Late deterioration of JOA score developed in eight patients, but was unrelated to the cervical spine lesions in each case. No patient suffered from prolonged postoperative axial neck pain at final follow-up. 
attached to the C2 and C7 spinous processes can maintain reduced frequencies of postoperative axial neck pain and kyphotic deformity over a long period. Therefore, we conducted a further follow-up study (average 9-year follow-up) to clarify the long-term surgical outcomes of C3-6 laminoplasty in patients with cervical spondylotic myelopathy (CSM). Materials and Methods Most patients at Osaka University Hospital with cervical stenotic myelopathy have been treated using C3-6 open-door laminoplasty since September 2002, except for patients with cervical kyphosis ≥ 15 degrees, single-level anterior lesion without narrow spinal canal, or spinal cord compression at the C7 and/or caudal levels. Based on our criteria, only three patients with CSM underwent C3-7 or C3-T1 laminoplasty and 26 patients with CSM underwent our original C3-6 laminoplasty between September 2002 and December 2004. Our original C3-6 open-door laminoplasty preserving muscles attached to the C2 and C7 spinous processes represents a modification of the previously described unilateral-opening laminoplasty. 2,6 For the first 2 weeks after surgery, all patients wore a soft collar. Twenty (15 men, 5 women) of the 26 patients who underwent C3-6 laminoplasty for CSM have been followed for ≥ 8 years and were included in this study (follow-up rate = 76.9%). The remaining 6 patients were lost to follow-up: 1 patient died of causes unrelated to cervical spine lesions, 3 patients relocated, and 2 patients were lost to follow-up for unknown reasons. Mean age of the 20 patients at surgery was 61.2 years (range, 36 to 87 years; ►Table 1). Mean follow-up period was 9.0 years (range, 8 to 10 years; ►Table 1). All 20 patients underwent follow-up examinations every 3 months for the first year after surgery, and every year thereafter. Myelopathic symptoms were assessed using the Japanese Orthopaedic Association (JOA) score 7 and recovery rate. 8 In patients who developed late deterioration of JOA score during postoperative follow-up, causes of deterioration were investigated. Flexion, neutral, and extension lateral radiographs of the cervical spine were assessed before surgery and at final follow-up. The C2-7 angle was measured as sagittal alignment of the cervical spine. The C2-7 angle is formed by two lines drawn parallel to the posterior margin of the C2 or C7 vertebral body on a radiograph in the neutral position (►Fig. 1). Kyphosis was defined as a C2-7 angle of less than −10 degrees, lordosis as a C2-7 angle ≥ 10 degrees, and straight as −10 to < 10 degrees. C2-7 range of motion (ROM) of the cervical spine was calculated by subtracting the flexion C2-7 angle from the extension C2-7 angle. Postoperative axial neck pain was defined as posterior neck and/or periscapular pain that developed or became aggravated after surgery. According to our previous reports, 2-6 pain intensity was graded as severe (painkillers or local injection needed regularly), moderate (physiotherapy or compress needed regularly), or mild (no treatment needed). Severe or moderate pain after surgery was considered to constitute postoperative axial neck pain.
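As a worked illustration of the outcome measures defined above, the sketch below encodes the C2-7 alignment classification, the C2-7 ROM calculation, and a JOA recovery rate. It is only an illustration: the function names are invented here, the recovery rate cited above (reference 8) is not spelled out in this excerpt, and the function assumes the widely used Hirabayashi method with a full JOA score of 17.

```python
def classify_alignment(c2_7_angle_deg):
    """Classify cervical sagittal alignment from the neutral-position C2-7 angle (degrees):
    lordosis >= 10, straight -10 to < 10, kyphosis < -10 (thresholds from the text)."""
    if c2_7_angle_deg >= 10:
        return "lordosis"
    if c2_7_angle_deg >= -10:
        return "straight"
    return "kyphosis"


def c2_7_range_of_motion(extension_angle_deg, flexion_angle_deg):
    """C2-7 ROM = extension C2-7 angle minus flexion C2-7 angle."""
    return extension_angle_deg - flexion_angle_deg


def joa_recovery_rate(preop_joa, postop_joa, full_score=17.0):
    """Hirabayashi-style recovery rate (%): (postop - preop) / (full score - preop) * 100."""
    return (postop_joa - preop_joa) / (full_score - preop_joa) * 100.0


# Mean values reported in this study (illustration only; the published recovery rates
# are means of per-patient rates, which differ from a rate computed from mean scores):
print(classify_alignment(13.8))                 # "lordosis" before surgery
print(classify_alignment(19.2))                 # "lordosis" at final follow-up
print(round(joa_recovery_rate(11.7, 15.2), 1))  # ~66.0 at maximum recovery
```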
The paired t test, Friedman test, Wilcoxon signed-rank test (with Bonferroni correction), and Fisher exact probability test were applied for statistical analyses using JMP version 5.0.1 software (SAS Institute, Cary, North Carolina, United States), as appropriate. Values of p < 0.05 were considered to indicate statistical significance. The protocol of this prospective long-term follow-up study was approved by the institutional review board of the hospital, and written informed consent was obtained from all participants. Results Eight of the 20 patients had a JOA score decreased by ≥ 0.5 point from the score at maximum recovery by final follow-up (►Table 1). In all 8 patients, late deterioration of the JOA score was unrelated to cervical spine lesions. Causes of late deterioration were as follows: lumbar spinal canal stenosis in 4 patients; osteoarthritis of the knee in 2; carpal tunnel syndrome in 1; and stress urinary incontinence in 1. We did not encounter any cases of symptomatic adjacent-level degeneration. Axial Neck Pain Only 1 patient (5%) suffered from aggravated axial neck pain at 1 year after surgery (►Table 1). However, none of the 20 patients complained of postoperative prolonged axial neck pain at final follow-up (►Table 1). Discussion To prevent several surgery-associated problems such as axial neck pain and loss of cervical lordosis, less-invasive selective laminoplasty has been recently applied to reduce damage to the paraspinal muscles and nuchal ligaments. 9-12 Regarding surgical outcomes of selective laminoplasty for CSM, no significant difference in JOA score was reported between a C3-6 laminoplasty group and a C3-7 group both before surgery and at 2 years after surgery. 9 More selective laminoplasty for CSM was reported to show equal neurologic improvement at 2 years after surgery, compared with conventional C3-7 laminoplasty. 10 Our prospective 5-year follow-up study revealed that our C3-6 laminoplasty can maintain satisfactory neurologic recovery in the medium term. 5 Conversely, there has been only one report of long-term outcomes of C3-6 laminoplasty for CSM. 12 A study with a mean follow-up of 9.6 years showed that none of the 42 patients developed late neurologic deterioration resulting from cervical spine lesions after C3-6 laminoplasty. 12 However, this was a retrospective study. This is the first prospective study reporting long-term (average 9.0 years) surgical outcomes of C3-6 laminoplasty for CSM. In the present study, mean JOA score improved significantly from 11.7 before surgery to 15.2 at the time of maximum recovery (mean recovery rate, 71.7%), declining slightly to 14.9 (mean recovery rate, 67.1%) by final follow-up. Eight of the 20 patients had a JOA score decreased by ≥ 0.5 point from the score at maximum recovery by final follow-up, but late deterioration of JOA score was unrelated to the cervical spine in each case. In patients with CSM, the prevalence of spinal cord compression at the C6/7 level is relatively low. 13,14 Given these results, we conclude that the risk of late neurologic deterioration resulting from caudal adjacent segment degeneration after C3-6 laminoplasty may be reduced. Taken together, our C3-6 laminoplasty for CSM maintained satisfactory neurologic improvement in the long term. After conventional (no sparing of muscle insertions) C3-7 laminoplasty, loss of C2-7 angle has been reported to reach 6.2 to 11.7 degrees.
[15][16][17] Kyphotic deformity often develops after laminectomy from C1 or C2 to the subaxial cervical spine. 18 Biomechanical analysis showed that the semispinalis cervicis and C2 lamina play an important role in dynamically stabilizing the cervical spine and that a loss of cervical lordosis results from detachment of the semispinalis attached to the C2 spinous process, 19 suggesting that a loss of cervical lordosis after laminoplasty mainly results from detachment of muscles attached to the C2 spinous process. 15 However, it has been recently reported that preservation of muscles attached to the C2 spinous process can reduce loss of lordosis of the cervical spine after laminoplasty. [20][21][22] We also reported that sagittal alignment of the cervical spine at 5 years after our C3-6 laminoplasty was more lordotic than before surgery. 5 In this follow-up study, muscle attachments to the C2 spinous process were preserved in all 20 patients. As a result, the mean C2-7 angle increased significantly from 13.8 degrees before surgery to 19.2 degrees at final follow-up, and no patient developed postoperative kyphosis according to our classification. The results in this study indicate that preservation of muscles attached to the C2 spinous process plays a significant role in maintaining cervical lordosis over a long period (8 to 10 years) after laminoplasty. Our C3-6 laminoplasty preserving muscles attached to the C7 spinous process can significantly reduce frequency of postoperative axial neck pain in the short term, 2-4 and this was maintained for 5 years after surgery (the incidence of axial neck pain persisting for 5 years postoperatively was 3.2%). 5 A cadaveric study showed that C3-6 laminoplasty without dissection of muscles attached to the C7 spinous process preserves more muscles such as the trapezius and the rhomboideus minor and major than conventional C3-7 laminoplasty. 23 Clinically, the intensity of postoperative axial neck pain increases in the upright position and decreases in the supine position. Given this clinical characteristic, when these shoulder suspensory muscles are injured by surgical exposure of the C7 spinous process, downward displacement of the upper extremities in the upright position seems to induce axial neck pain. Similar results supporting this were recently reported. 9 In this long-term follow-up study, only 1 patient (5%) had aggravated axial neck pain at 1 year after surgery, and no patient complained of postoperative prolonged axial neck pain at final follow-up. We reported that 10 (30%) of the 33 patients who underwent C3-7 laminoplasty had severe or moderate axial neck pain persisting for 1 year after surgery. 3 There was a significant difference in the frequency of axial neck pain between our patients undergoing C3-7 laminoplasty and those who underwent C3-6 procedure in this study (10/33 versus 1/20, p < 0.05). We conclude that our C3-6 laminoplasty preserving muscles attached to the C7 spinous process significantly reduces the frequency of axial neck pain at 1 year postoperatively and that the frequency of prolonged axial neck pain was further reduced at an average of 9 years after surgery. 
In conclusion, compared with surgical outcomes in our previous 5-year follow-up study, 5 this further follow-up prospective study showed that our C3-6 laminoplasty preserving muscles attached to the C2 and C7 spinous processes in patients with CSM maintained satisfactory long-term neurologic improvement with significantly reduced frequencies of prolonged postoperative axial neck pain and loss of C2-7 angle for 8 to 10 years after surgery.
2018-04-03T06:17:59.042Z
2014-06-18T00:00:00.000
{ "year": 2014, "sha1": "9e6113cf4bf32246fefba7b67a5ede9414e79a67", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc4111945?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "04af19cfe38d3831b5811af6ac445949efa5e02a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55955161
pes2o/s2orc
v3-fos-license
Influence of Grazing Intensity on Soil Properties and Shaping Herbaceous Plant Communities in Semi-Arid Dambo Wetlands of Zimbabwe Key issues of concern regarding the environmental impacts of livestock on grazing land are their effects on soil, water quality, and biodiversity. This study was carried out to determine how grazing intensity influences soil physical and chemical properties and occurrence of herbaceous plant species in dambo wetlands. Three categories of grazing intensity were selected from communal, small scale commercial and large scale commercial land. Dambos from the large scale commercial land functioned as the control. Data analysis included ANOVA and multivariate tests from CANOCO. There were significantly negative changes to soil nutrient status in communal dambos, though with a higher number of rare taxa. Sodium, phosphorus, pH and infiltration rate were significant determinants of plant species occurrence. Overgrazing is threatening the productivity, stability, and ecological functioning of dambo soils in communal Zimbabwe. These dambos also require special conservation and management priorities as they contain a large number of rare plant species. Introduction Key issues of concern regarding the environmental impacts of livestock on both public and private grazing lands are their effects on soil, water quality, riparian areas, and biodiversity [1]. The direct effects of livestock grazing on ecosystems are well known and include reduction in plant biomass, trampling of plants, including below ground parts and soil, nutrient inputs and bacterial contamination from dung and urine, and introduction and dispersal of seeds and other propagules [1][2][3][4]. Properly managed grazing lands provide positive environmental benefits, including the provision of clean water supplies, the capacity to sequester atmospheric carbon (C), and the potential to maintain biodiversity [1]. Although many wetlands in the past have been degraded or destroyed as a result of inappropriate land use or development pressures, more recently they have become the focus of intense conservation interest [5], particularly since the establishment in 1971 of the Convention on Wetlands of International Importance especially as Waterfowl Habitat (the "Ramsar Convention"). This convention promotes sustainable use of wetlands and provides a framework for the conservation of more than 1600 wetlands that have been nominated as internationally important on the basis of their ecological, botanical, zoological, limnological or hydrological values [6]. Dambo wetland is a small-scale environmental resource which is widespread in Africa's tropical plateau savannas [7]. The main area of dambo occurrence is located in Southern and Central Africa, with sporadic occurrence in central West and north Central Africa, south of the Sahara [8,9]. With the exception of a typically narrow (<150 km) humid zone around the southern and eastern margins of the subcontinent, the interior and western margin are mostly drylands, with annual potential evaporation greatly exceeding annual precipitation [5]. Across southern Africa, wetlands (including more than 20 Ramsar listed sites) can be found in a variety of coastal and inland settings [5], but the emphasis here is on those wetlands that occur within the dryland interior. In Zimbabwe, wetlands are estimated to cover some 1.28 million hectares of the country's land surface. Some 20% of this wetland area lies in communal areas [10].
Dambo wetlands are highly sensitive to grazing pressure. As such, they are considered useful indicators of environmental pressures [8,11]. Land pressure has forced communities to concentrate on land and water availability regardless of the state of these environmental resources [12]. A number of studies have been carried out on the geography and hydrology of dambos. These have focussed on the social and agricultural importance of dambos in Zimbabwe [8,[13][14][15]. However, only a few make any reference to important factors shaping the plant communities in wetlands, particularly in the dry ecological Natural Region (NR) of Zimbabwe [16]. Scientific interest in wetlands has also increased rapidly over the last few decades [3,7,14,17,18], largely in response to growing pressures for increased data to inform management decisions regarding conservation, rehabilitation or artificial construction of wetlands, and particularly in view of the potential adjustments that may result from global climate changes [5]. Despite the increased interest in research, there is still a lack of scientific understanding, especially of wetlands in the world's extensive drylands [5], a collective term that includes subhumid, semiarid, arid and hyperarid regions and incorporates almost 50% of the global land area and nearly 20% of the global population [6]. The objective of this study was to determine the extent to which grazing influences soil nutrient dynamics and interactions with the plant community, and how these influence the occurrence of herbaceous plant species in wetlands. We also sought to determine whether communal dambo wetlands are worthwhile to conserve. In order to make valid comparisons, dambos in the same catchment area under different management were compared. It is hypothesized that grazing in dambos results in changes to important soil physical and chemical properties which influence the occurrence of plant species. Description of the Experimental Site The study area is located to the north of Masvingo town in Zimbabwe. The area falls within Natural Region IV (NR4) of the Zimbabwean ecological classification system [19]. Altitude is 1204 m above sea level at latitude 19˚50'S and longitude 30˚46'E. NR4 is suitable for extensive farming, and receives an annual rainfall of 450 to 650 mm. Mean maximum temperature during summer is about 28˚C, and the minimum temperature during winter is about 6˚C. It is characterised by sandy soils with low organic matter and humus content, and consequently low fertility. Farming activities in the area are considered risky because of highly variable rainfall [20]. Dambo Wetland Selection Three categories of dambos from contrasting land use history were identified based on frequency and severity of defoliation. Two sites per category were selected, and the categories were ungrazed (UG), moderately grazed (MG) and continuously grazed (CG). The three categories were selected from large scale commercial, small scale commercial and communal land, respectively. Ungrazed sites functioned as the control treatment. Aerial photographic maps and a Global Positioning System (GPS) altitude-measuring unit were used as aids for selection. Using mileage, a motorbike was run along and across dambos to estimate their varying sizes.
Vegetation Sampling At least three transects measuring 100 m or less were laid in each dambo, depending on the size of the dambo sampled (dambos ranged in size from 0.9 - 4.5 ha). All transects were laid perpendicular to the general dambo hydrologic gradient to capture any variations due to the moisture gradient [3]. Transects were also laid at least 30 m from roads to minimise border effects. Within each dambo, vegetation was sampled from 0.5 m² quadrats systematically placed at 10 m intervals along transects [21]. Sampling started at the center and then traversed to the edge of the dambos. In each dambo, at least 30 subsamples were obtained. Edges of dambos were determined by sampling until the vegetation cover was >90% pasture grass cover [22]. A GPS unit was used to measure altitude and coordinate points within each sampling unit (quadrat). Variables recorded from each quadrat to determine species composition were: name of species (nomenclature followed [23]) and erosion estimates (scale of 1 - 10, where 1 is no erosion while 10 is badly eroded). Soil Sampling For soil sampling, subsamples were collected along transects from the center of every third 0.5 m² quadrat. A minimum of 10 soil subsamples were collected from each dambo. A soil auger was used to collect soil from the top 15 cm. Upon return from the field, subsamples were air dried under shade for a few days and then stored in plastic bags for later analysis. Due to the homogeneity of vegetation and soils within dambos, fewer subsamples were collected for analysis [24]. Samples were then analysed for bulk density, Na, P and pH (CaCl2). Water Infiltration Rate A double ring infiltrometer was used to measure water infiltration rate. The instrument was assembled as outlined in the product manual [25]. The instrument was placed at every third quadrat (where a soil sample was taken). This also yielded a minimum of 10 subsamples from each dambo. The rate of infiltration was determined as the amount of water per surface area and time (cm/min) that penetrated the soil. Water percolation is initially fast, but reduces gradually to a constant value, and this is the infiltration rate [25]. Data Analysis One-way ANOVA from SPSS ver. 13 was used to test for significant differences in soil physical and chemical properties and also in vegetation attributes. Least Significant Difference was used to separate the means. Species data were converted to a presence-absence matrix consisting of six dambos by 65 herbaceous plant species. Detrended correspondence analysis (DCA) and hierarchical canonical correspondence analysis from CANOCO ver. 4 were used to assess multivariate relationships between vegetation and environmental data and to compare vegetation composition among different dambos. Monte Carlo tests were done to test for the significance of soil properties in explaining the observed patterns. Multivariate methods provide a means to structure the data by separating systematic variation from noise [26,27]. Soil Physico-Chemical Properties Soil physical (Table 1) and chemical (Figures 1 and 2) properties varied significantly across the three grazing categories. Phosphorus, sodium and water infiltration were significantly higher under moderate grazing when compared to ungrazed and overgrazed sites. Bulk density, erosion and pH were significantly higher under continuous grazing and lower under both moderate and ungrazed.
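The following sketch illustrates two of the calculations described in the Methods above: the steady-state infiltration rate from a double-ring infiltrometer run and the one-way ANOVA comparison across grazing categories. The readings and group values are hypothetical placeholders, and scipy is used here in place of SPSS.

```python
from scipy import stats

def infiltration_rate(cumulative_depth_cm, elapsed_min):
    """Steady-state infiltration rate (cm/min), taken from the final interval of the run,
    where percolation has slowed to a near-constant value."""
    d_depth = cumulative_depth_cm[-1] - cumulative_depth_cm[-2]
    d_time = elapsed_min[-1] - elapsed_min[-2]
    return d_depth / d_time

# Hypothetical infiltrometer readings: cumulative depth (cm) at successive times (min)
print(round(infiltration_rate([0.0, 2.5, 4.0, 5.0, 5.9], [0, 5, 10, 15, 20]), 2))  # 0.18 cm/min

# Hypothetical per-site mean infiltration rates (cm/min) for the three grazing categories
ungrazed = [0.42, 0.39, 0.45, 0.41]
moderately_grazed = [0.55, 0.50, 0.53, 0.57]
continuously_grazed = [0.20, 0.18, 0.22, 0.19]

f_stat, p_value = stats.f_oneway(ungrazed, moderately_grazed, continuously_grazed)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 indicates the categories differ
```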
Plant Species Relationships with Environmental Variables The eigenvalues for axis 1 (horizontal) and axis 2 (vertical) are 0.16 and 0.12, respectively, and together they explained 64.1% of the observed variation (Table 2). The first synthetic gradient (axis 1) showed a significantly positive correlation of plants with Na and pH (P < 0.05). The second synthetic gradient showed a positive correlation of plants with water infiltration rate and bulk density, and a negative correlation with phosphorus and erosion (Figure 3). Monte Carlo tests showed significance of all axes in predicting the observed differences in plant species occurrence (P < 0.05). Plant Species Occurrence across Different Dambos The ordination diagram (Figure 4) is a cluster of species produced from HCCA. Results from the plot show three clusters based on plant species presence or absence. Results indicate differences in species occurrence across sites. Sites with the same species form a cluster. Continuously grazed sites had a significantly higher number of rare plant species (P < 0.05). Influence of Grazing on Environmental Variables While a number of studies have investigated the effects of livestock on soil quality in a range of ecosystem types, very few have been conducted in dambo wetlands [28]. Other studies have also found that in old embanked salt marshes, trampling altered the soil structure (to a lamellar structure indicative of compaction), reducing soil infiltration and preventing salt from being leached from the soil [28]. The impact on soils was reflected by differences in soil properties between the differently grazed dambos. Infiltration rates were significantly lower in continuously grazed sites as compared to moderately grazed and ungrazed sites. Continuously grazed sites had significantly higher soil bulk density and higher erosion estimates as compared to other sites. Trampling and compaction of soils by livestock decrease water infiltration and increase runoff, hence exposing the soil to erosion hazard [14]. Excessive compaction has negative consequences on plant growth [29]. Grazing increases bulk density or decreases soil porosity [1,30,31]. Even though soil changes can result from excessive compaction during grazing, studies have also shown that natural processes such as soil wetting and drying cycles, as well as livestock, can affect soil quality through compaction, erosion, and changes in the plant community [3,4]. Soil compaction by livestock is comparable to that caused by farm machinery and is most severe in the top 5 centimetres of the soil but can extend as deep as 30 cm [29]. The degree of compaction depends on soil moisture levels, type of soil and stocking densities. Maximum compaction occurs at soil moisture levels of between 20% to 30% moisture-holding capacity (depending on soil type) and field capacity, as well as at high stocking densities [1]. Dambos in Zimuto have been continuously grazed for over 6 decades, with a subsequent increase in livestock numbers over the years. Reduced water infiltration rates under continuous grazing suggest soil compaction and reduced pore size of the soil. Soil deterioration and erosion are high under continuous grazing. This is evidenced by the presence of grass species tolerant to heavy grazing like Sporobolus pyrimidalis, Cynodon dactylon and several Eragrostis species. We suggest that soil deterioration could be limited by reducing stocking intensities during dry periods in the communal areas. Similar results have been reported where water infiltration rates on silty clay and silty clay loam soils
were 2.5 times greater in an area grazed at 1.35 acres/AUM compared to an area grazed at 3.25 acres/AUM [1,32]. After 22 years of grazing at this intensity, not only had species composition altered but soil properties had been changed as well. Continuously grazed sites had significantly higher soil pH values (Figure 2) as compared to the other sites. This can possibly be explained by the fact that desiccation is higher. As the water evaporates, it leaves the soil with a higher concentration of salts, hence a slightly higher pH value as compared to the other sites. Generally dambos have increased agricultural fertility due to higher concentrations of potassium, phosphate and organic matter, and greater cation exchange capacities [7]. Dambo soil characteristics reflect the greater biomass production, lower decomposition rate and the inwash of ions from upslope [7]. In line with our findings, it was reported that calcareous fens had lower pH and higher nitrate (NO3) levels, with no differences in ammonium between grazed and ungrazed sites [33]. These results suggest that manure inputs were being nitrified and that although the fens were accumulating nitrogen, there may be some resilience to increases in potentially toxic ammoniacal nitrogen levels. Species-Environmental Relationships Our results indicated strong vegetation-explanatory variable relationships in all studied dambo sites. A detrended correspondence analysis (DCA) was first carried out to ascertain the behaviour of the data [27]. Results of the DCA indicated that most of the species behaved approximately unimodally along environmental gradients, as indicated by the length of the maximum gradient, which was greater than 4 SD [34]. In order to define the links between species and the environment, hierarchical canonical correspondence analysis (hCCA) followed. The analysis of species-environment relationships and the identification of indicator species are traditional activities in ecology [35]. Knowing how human activity influences the fascinating diversity of biological communities raises much interest in people. However, this very diversity creates problems for the statistical analysis of ecological observations [36]. This implies a large number of species and a large inherent variability. A set of community samples and associated environmental measurements typically yields an enormous amount of noisy data which is difficult to interpret. By using hCCA, it has been shown that plant structuring and composition in the studied community are considerably influenced by individual effects of ecological factors. Sodium and pH were positively correlated with axis 1 (Figure 3) and as such are major gradients influencing the structuring and diversity of the plant communities. Other factors like infiltration rates, bulk density, phosphorus and erosion are linked to the second axis and are important as well. Any factors leading to changes in water infiltration, pH and sodium levels in the dambo wetlands will lead to changes in herbaceous species composition and occurrence in these fragile ecosystems. Weed species like Oxalis latifolia, C. triden and C.
bangalensis occurred on well-drained soils, usually close to the top land. The soil moisture gradient could be important in determining or governing the occurrence of these plants. Any factors affecting the drainage conditions of wetlands, e.g. infiltration rate, will affect which species occur there. The occurrence or abundance of a species along an environmental gradient often follows Shelford's law of tolerance [37]. Each species thrives best at a particular value (its optimum) and cannot survive when the value is either too low or too high. Each species' occurrence is thus confined to a limited range, its niche. Species tend to separate their niches, partly so as to minimise competition. If the separation is strong, successive species replacements occur along the environmental gradient. The composition of biotic communities thus changes along environmental gradients according to unimodal functions [37]. Some individual factors may have a minor influence, for example the effects of erosion and litter cover on total species richness, but their interactions with each other or with other factors, such as litter accumulation, may have a substantial influence on total species diversity and composition. Consequently, knowledge about the role of individual factors only, and ignorance of their interactions, may lead to false predictions about community structure and function. The cluster (Figure 4) indicates that communities of the same ecological background are segregating based on species composition and soils. Plant species composition and structure can be altered indirectly through the modified soil environment and directly through trampling [29]. The overgrazed sites had a relatively higher number of rare plant species. This may indicate the need for higher conservation priority in communal dambo wetlands. Importance of Interactions A number of biotic and abiotic factors have been proposed as determinants shaping plant species structure and diversity [38]. However, when considering the rates of species gain or loss in the community, most attention has been directed to the roles of disturbance, physical resources, species interactions and propagule availability. The failure to identify any single factor as the major determinant of species diversity on a more local scale suggests that interactions between several factors are often more important. There are significant strong correlations between sodium and pH, and also between bulk density and pH. The more factors that are involved in an experiment, the more complex the possible interactions, and the more sophisticated the analytical techniques required to interpret them. Although a complexity of interactions between different factors has been assumed in plant community theory [17,37,38], few experiments have combined more than two factors to test such effects. Conclusion Continuous grazing is threatening the productivity, stability, and ecological functioning of dambo wetlands in communal areas. There are deleterious changes to soil nutrients, which are important vegetation determinants in the dambos. There are evident changes in species composition among differently grazed sites. Communal dambos are less fertile and have a higher number of rare taxa, hence require special conservation and management priority. Debate over grazing needs to move beyond the simple dichotomy of whether it is good or bad. Evaluation of practical alternatives should be done through experimental studies.
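Referring back to the Data Analysis section, the species records can be reduced to the sites-by-species presence/absence matrix used for the ordination in a few lines. The sketch below is illustrative only: the quadrat records are hypothetical placeholders (species names are drawn from those mentioned in the text), the "rare taxa" rule shown here (species recorded in a single dambo) is an assumption rather than the authors' definition, and pandas is used in place of CANOCO's data preparation.

```python
import pandas as pd

# Hypothetical quadrat records: (dambo, species) pairs as they might be logged in the field
records = pd.DataFrame({
    "dambo":   ["CG1", "CG1", "CG2", "MG1", "MG2", "UG1", "UG1", "UG2"],
    "species": ["Cynodon dactylon", "Oxalis latifolia", "Cynodon dactylon",
                "Sporobolus pyrimidalis", "Cynodon dactylon",
                "Eragrostis sp.", "Cynodon dactylon", "Eragrostis sp."],
})

# Sites-by-species presence/absence matrix (1 = recorded in that dambo, 0 = not recorded)
pa_matrix = (pd.crosstab(records["dambo"], records["species"]) > 0).astype(int)
print(pa_matrix)

# A simple operational flag for "rare" taxa: species recorded in only one dambo
rare_species = pa_matrix.columns[pa_matrix.sum(axis=0) == 1]
print(list(rare_species))  # e.g. Oxalis latifolia and Sporobolus pyrimidalis in this toy data
```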
Figure 1.Mean soil phosphorus and sodium levels at sites under different grazing intensities.Values presented are ± standard deviation. Figure 2 . Figure 2. Mean soil pH levels across dambo sites subjected to different grazing management regimes.Values presented are ± standard deviation. Figure 3 .Figure 4 . Figure 3. HCCA biplot indicates how environmental variables influence the occurrence of herbaceous plant species in dambos. Table 1 . Mean soil physical properties in dambos subjected to different grazing intensities. bMeans in rows with different superscripts are significantly different (P < 0.05).
2018-12-08T22:30:35.463Z
2013-09-30T00:00:00.000
{ "year": 2013, "sha1": "2ca3b52337ace8bde4756d2fb80f6d1c18db25ed", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=38685", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "2ca3b52337ace8bde4756d2fb80f6d1c18db25ed", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
52058241
pes2o/s2orc
v3-fos-license
Krukovine Suppresses KRAS-Mutated Lung Cancer Cell Growth and Proliferation by Inhibiting the RAF-ERK Pathway and Inactivating AKT Pathway Oncogenic activation of the KRAS gene via point mutations occurs in 20–30% of patients with non-small cell lung cancer (NSCLC). The RAS-RAF-ERK and RAS-PI3K-AKT pathways are the major hyper-activated downstream pathways in RAS mutation, which promotes the unlimited lifecycle of cancer cells and their metastasis in humans. However, the success of targeted therapy is restricted by many factors. Herein, we show a new pharmacological KRAS signaling inhibitor krukovine, which is a small molecular bisbenzylisoquinoline alkaloid, isolated from the bark of Abuta grandifolia (Mart.) Sandw. (Menispermaceae). This alkaloid targets the KRAS downstream signaling pathways in different NSCLC cell lines, such as H460 and A549, which are established by KRAS mutations. In the present study, we initially investigated the anti-cancer activities of krukovine in KRAS-mutated NSCLC cell lines, as well as KRAS wild type cancer cell line and normal lung cell. Results indicated that krukovine can inhibit the growth and dose-dependently inhibit the colony formation capacity and wound healing ability of H460 and A549. This cytotoxic effect is associated with the induction of cell apoptosis and G1 arrest in those cell lines. Krukovine treatment also suppressed the C-RAF, ERK, AKT, PI3K, p70s6k, and mTOR phosphorylation in H460 and A549. This finding suggests that krukovine represses the growth and proliferation of KRAS-mutated cells by inactivating AKT signaling pathway and downregulating the RAF-ERK signaling pathway. This study provides detailed insights into the novel cytotoxic mechanism of an anti-cancer compound from an herbal plant and promotes the anti-cancer potential of krukovine in NSCLC with KRAS mutation. INTRODUCTION Lung cancer has long been a highly common cancer and the leading cause of cancer-related mortality globally, and its incidence is still increasing (Cancer Genome Atlas Research Network, 2012. About 85% of lung cancer cases are non-small cell lung cancer (NSCLC), and almost 30% of this proportion were diagnosed at an advanced stage (Abacioglu et al., 2005). In most patients with NSCLC, proto-oncogenes, such as KRAS (Kirsten rat sarcoma viral oncogene homolog), and the AKT (also named PKB, protein kinase B) and ERK (extracellular signal-regulated kinase) signaling pathways are constitutionally activated. Aberrant activation of these signaling pathways in cells leads to uncontrolled cell proliferation, apoptotic resistance, and other oncogenic cascades in many cancer types (Brognard and Dennis, 2002;Papadimitrakopoulou and Adjei, 2006;Dutta et al., 2014;Yip, 2015). Therefore, increasing research efforts have targeted these oncogenic signaling pathways to develop novel agents or therapeutics that will effectively treat NSCLC (Harada et al., 2014;Stinchcombe and Johnson, 2014). Actually, effective inhibitors specific for many key constituents of the RAS-PI3K (phosphatidylinositol 3-kinase)-AKT and RAS-RAF-MEK-ERK pathways have been developed. Many of these inhibitors have been used or evaluated in clinical trials. A study involving 3,620 patients with NSCLC reported that KRAS is a prominent prognostic marker for the survival of patients with lung adenocarcinoma but ineffective for patients with lung small cell carcinoma (Brose et al., 2002). 
Patients with KRAS mutation show reduced progression-free survival, and the mutation has been adopted for biomarker analyses in NSCLC (De Grève et al., 2012;Yasuda et al., 2012). However, the development of RAS inhibitors, such as farnesyltransferase inhibitors, has been unsuccessful to date (Mazières et al., 2013). Although several AKT inhibitors have been developed and subjected to clinical trials for NSCLC treatment, their adverse side effects, such as severe hyperglycemia and other potential metabolic abnormalities, hinder their applications (Heavey et al., 2014;Yip, 2015). Side effects also limit the clinical use of the ERK inhibitor (Gioeli et al., 2011). In this regard, novel targeted drug therapy that can suppress these oncogenic pathways has attracted much research interest. Given their low toxicity and high effectiveness, natural products have been studied and used worldwide recently as potential anti-cancer agents. Our current study identified krukovine, a novel anti-NSCLC compound from natural products. Krukovine is a small molecular bisbenzylisoquinoline alkaloid derived from Abuta grandifolia (Mart.) Sandw. (Menispermaceae). Menispermaceae is a well-known family of flowering plants serving as folk herbal medicine for various diseases, including gastrointestinal diseases, such as diarrhea, genitourinary tract diseases, and respiratory tract diseases (e.g., asthma) (Corrêa, 1984). Several compounds, such as bisbenzylisoquinolinic, morphinic, aporphinic, and oxoaporphinic alkaloids, have been isolated from the roots and leaves of this species (Thomas et al., 1997;de Lira et al., 2002;De Sales et al., 2015). Krukovine was first isolated from the bark of A. grandifolia (Mart.) Sandw. and showed potent antiplasmodial activity decades ago (Steele et al., 1999). In the present study, krukovine exhibited a cytotoxic effect and inhibited the growth and proliferation of two KRAS-mutated lung cancer cell lines. Krukovine also inhibited the proliferation of these cancer cells by inducing G1 arrest and apoptosis. Krukovine downregulates the activity of phospho-C-RAF, phospho-AKT, phospho-p70s6k, phospho-mTOR, and phospho-ERK and modulates the PI3K-AKT-mTOR and RAF-ERK signaling pathways. Krukovine may be an alternative candidate for the development of combined targeted therapy against the abnormal expression of RAS oncogenic downstream signaling pathways in NSCLC. RESULTS Krukovine Shows a Cytotoxic Effect Toward KRAS-Mutated Cells To evaluate the potential anti-cancer effect of krukovine (Figure 1A shows the chemical structure), we subjected the KRAS-mutated cell lines H460 and A549 to cytotoxicity tests. These cell lines were treated with krukovine at 0, 5, 10, and 20 µM for 48 or 72 h. Results showed that krukovine inhibited the growth of H460 and A549 in a time-dependent manner, while having a weaker cytotoxic effect on the non-KRAS-mutated lung cancer cell line H1299 and the normal lung cell line CCD19-Lu (Figure 1B). IC50 values revealed the potent cytotoxicity of krukovine to KRAS-mutated cancer cells, as summarized in Table 1. The IC50 values were much lower in the H460 and A549 cell lines treated with krukovine for 72 h (9.80 ± 0.13 and 8.40 ± 0.37 µM, respectively) than in those treated for 48 h (19.89 ± 0.19 and 13.69 ± 0.15 µM, respectively). Krukovine Inhibits Cell Colony Formation and Wound Healing Ability in H460 and A549 Cells Long-term colony formation assays of H460 and A549 cells verified the growth-inhibiting effect of krukovine.
Krukovine significantly inhibited the colony formation capacities (Figure 2) and wound healing ability (Figure 3) of the H460 and A549 cells in a dose-dependent manner. Krukovine Significantly Induces Apoptosis in H460 and A549 Cells To explore the anti-cancer properties of krukovine, we measured the level of cell apoptosis by flow cytometry using Annexin V-FITC/propidium iodide (PI) staining. The results are shown in Figure 4. Krukovine caused limited apoptosis in H460 and A549 cells in low dosage. With increased treatment dosage, the cells experienced extensive apoptosis. Krukovine inhibits caspase-3 expression while increases cleaved PARP (poly ADP ribose polymerase) expression level. This result indicated that cell apoptosis induction also contributes to the krukovine-mediated inhibition of H460 and A549 cell proliferation. Krukovine Induces Cell Cycle Arrest at the G1 Phase in H460 and A549 Cells To explain the decreased cell viability, we treated H460 and A549 cells with krukovine, and their cell cycles were detected by flow cytometry through PI staining. Figure 5 show that krukovine induced a moderate accumulation in the G1 phases and a reduction in the sub-G1 phase. DISCUSSION The RAS-RAF-ERK and PI3K-AKT-mTOR pathways are two main downstream signaling pathways involved in the KRAS genes (Gioeli et al., 2011;Tomasini et al., 2016;Matikas et al., 2017). About 20-30% KRAS mutation (Prior et al., 2012;Stephen et al., 2014), 50-70% overexpression of phosphorylated AKT (Yip, 2015), and 70% activated ERK expression (Heavey et al., 2014) were found in patients with NSCLC. The relatively limited subset of NSCLC carrying these genetic mutations should be effectively treated by mediated target therapy, such as using RAF, ERK, and AKT inhibitors. Unfortunately, most patients with NSCLC do not harbor these genomic events, and the 5year survival rate remains unsatisfactory (Nussinov et al., 2018). Moreover, the side effects of the target inhibitors have hindered their clinical use (Gioeli et al., 2011;Heavey et al., 2014;Yip, 2015). For many years, natural products have been considered as potential resources for novel drug discovery. We identified a new class of small-molecule from herbs that exhibit effects on directly inactivating AKT signaling and downregulating RAF-ERK signaling pathway. In this study, we initially investigated the potential of krukovine to suppress the growth and proliferation of KRAS-mutated NSCLC cell lines. H460 and A549 cells, which contain different codons of KRAS mutation, have served as typical types of KRAS-mutated NSCLC cell lines widely used as in vitro model systems. Krukovine exerts cytotoxic and antiproliferative effects on H460 and A549 cells. We also found that the activities of pivotal proteins, such as AKT, RAF, and ERK, in the RAS-PI3K-AKT-mTOR and RAS-RAF-MEK-ERK signaling pathways are inhibited by krukovine in NSCLC cells. The interaction of the activated RAS-RAF-MEK-ERK pathway with multiple effectors can regulate cell growth, cell differentiation, and apoptosis (Cully and Downward, 2008;Montagut and Settleman, 2009). Phosphorylation of the ERK protein is a key component of the RAS-RAF-MEK-ERK downstream signaling pathway. Phosphorylated ERK translocates to the nucleus and then causes gene expression changes and mediates the activities of various transcription factors (Roberts and Der, 2007). 
The PI3K-AKT-mTOR signaling pathway plays an important role in cell growth, cell proliferation, angiogenesis, and cell survival; these processes determine treatment resistance against systemic chemotherapy and radiation (Pal et al., 2008). AKT is a crucial factor in this pathway. Phosphorylation of AKT downregulates various downstream substrates, such as Bad, and can result in malignant transformation (Chang et al., 2003). During cancer cell proliferation, AKT phosphorylation can accumulate the cyclin D1 protein and also prevent the release of calcium from the mitochondria and hence avert cell apoptosis (Diehl et al., 1998). Meanwhile, inactivating AKT can inhibit the PI3K-AKT-mTOR signaling pathway and achieve a tumorsuppressive effect. In this case, the feedback activation of AKT importantly participates in the unsatisfactory clinical results of several RAS downstream pathway inhibitors in cancer treatment (Sun et al., 2005;Wei et al., 2015). In the clinics, up to 45% of patients with NSCLC show increased AKT expression (Okudela et al., 2007;Spoerke et al., 2012). In our present study, krukovine induced G1 arrest and apoptosis in H460 and A549 cells; such effects can lead to cell growth inhibition. This action can be associated with the inhibitory effect of krukovine by AKT phosphorylation and RAF-ERK pathway downregulation, which can lead to cancer cell death (Cully and Downward, 2008;Pal et al., 2008;Montagut and Settleman, 2009). The RAF-ERK and PI3K-AKT pathways are the two major hyper-activated downstream pathways in RAS mutation; these pathways promote the uncontrolled growth of abnormal cells and their metastasis in humans. These signaling pathways have been identified as promising targets in cancer therapy in recent years (Asati et al., 2016). However, the success of targeted therapy can be limited by the developed resistance of cancer cells through the mutation of target kinases, redundancy in signaling, feedback activation of pathway components, and compensatory activation of parallel circuits (Shamma et al., 1967). Use multi-targeting synthetic signaling pathway inhibitor to treat NSCLC has been proposed by some studies (Logue and Morrison, 2012;Cheng et al., 2014). In this light, targeting two or more constituents of the same pathway or two different pathways simultaneously, for example, AKT and ERK, has been suggested to improve the success of NSCLC-targeted therapy (Meng et al., 2010;Heavey et al., 2014). In our study, we identified krukovine as a novel KRAS signaling inhibitor and evaluated its anti-cancer activity in NSCLC cell lines. Krukovine effectively inhibited KRAS downstream signaling and induced G1 arrest and apoptosis to exert a cytotoxic effect on KRAS-mutated lung cancer cells lines. Cell Lines and Cell Culture The KRAS mutant NSCLC cell lines used in this study (H460 and A549) were purchased from the ATCC (American Type Culture Collection, United States). All cells were cultured in RPMI-1640 medium containing 10% fetal bovine serum with 100 µg/mL streptomycin and 100 U/mL penicillin. Cells were cultured in an incubator with 5% CO 2 at 37 • C. Cell Growth Inhibition Assay The standard MTT (3-(4,5-dimethylthiazol-2-yl)-2,5diphenyltetrazolium bromide) assay was carried out to evaluate the cell growth inhibition effect of krukovine. In brief, H460, A549, H1299, and CCD19-Lu cells were each planted at 4 × 10 3 cells or 3 × 10 3 cells per well in a 96-well plate and cultured for 12 h to allow cell adhesion. 
Different concentrations (0, 5, 10, and 20 µM) of krukovine were applied as treatment for another 48 or 72 h. DMSO treatment served as a vehicle control. Every dosage was repeated three times, and at least three independent experiments were performed. At the end of the treatment, MTT solution (5 mg/mL) was added to each well (10 µL per well), and each plate was placed back in the incubator. After further culture for 4 h, the supernatant was carefully removed, and 100 µL of DMSO, as the dissolving solution, was added to each well while lightly shaking for 10 min to dissolve the MTT crystals. The absorbance was measured by a Tecan microplate reader at 570 nm, with 650 nm as the reference wavelength. The percentages obtained from the absorbance of the treated cells divided by the absorbance of untreated cells were presented as the cell viabilities. The IC50 of krukovine was calculated by the GraphPad Prism 5.0 software. Cell Apoptosis and Cell Cycle Analysis H460 and A549 cells were each planted on a six-well plate with a density of 1 × 10 5 cells per well overnight. The cells were cultured for over 12 h to allow cell adhesion and then exposed to various concentrations of krukovine for 48 h. At the end of treatment, cells were harvested using trypsin, washed with PBS, and then collected after centrifugation. For cell apoptosis analysis, cells were treated with 5 µL of PI (1 mg/mL) and 5 µL of Annexin V fluorescein dye and stained for 15 min; this step must be performed away from light and at room temperature. The cells were then resuspended in 300-500 µL of Annexin binding buffer and filtered before analysis by a BD FACSAriaIII flow cytometer (BD Biosciences). The percentage of apoptotic cells was quantitatively determined. For the cell cycle assay, the cells were fixed with 70% (v/v) ethanol for at least 30 min at 4°C. Thereafter, the cells were washed with PBS before treatment with 5 µL of PI (1 mg/mL). The percentages of cells at different cell cycle phases (sub-G1, G1, S, and G2) were quantitatively measured by the same equipment in the corresponding process. Western Blot Total-cell protein lysates and Western blot materials were prepared as follows. Then, 48 h after drug treatment, the cells were rinsed with ice-cold PBS and lysed in RIPA buffer (150 mmol/L NaCl, 50 mmol/L Tris-HCl, pH 8.0, 1% deoxycholate, 0.1% SDS, and 1% Triton X-100) containing protease and phosphatase inhibitors (Roche, United Kingdom) for at least 30 min and then centrifuged at 14,000 × g for 10 min at 4°C. The concentration of total protein for each sample was measured by the DC protein assay kit (Bio-Rad). Then, equal amounts of total protein of each sample were resuspended in loading buffer and denatured for 5 min at 100°C. The total protein (30 µg) of each sample was separated by 10% SDS-PAGE and then transferred to PVDF membranes (Millipore, United States). The protein membranes were blocked with 5% nonfat milk in 1× TBST for 1 h at room temperature. The samples were incubated with different primary antibodies [phospho-C-RAF, phospho-AKT (Ser473), phospho-ERK (Thr202/Tyr204), total-C-RAF, total-ERK, total-AKT, total-p70s6k, phospho-p70s6k, total-mTOR, phospho-mTOR, phospho-PI3K, total-PI3K, caspase-3, PARP, and GAPDH] at 4°C overnight. The above-mentioned primary antibodies were diluted 1:1,000. Protein membranes were subsequently incubated with secondary fluorescent antibodies for 2 h and then washed in 1× TBST three times for 5 min each time.
All secondary antibodies (anti-rabbit or anti-mouse) were diluted 1:10,000. All membranes were analyzed by an LI-COR Odyssey scanner (Belfast, United States). Colony Formation Assay H460 and A549 cells were planted into six-well plates (500 cells/well), respectively. After attachment overnight, the cells were treated with various concentrations (0, 5, 7.5, 10, and 20 µM) of krukovine, and the medium was changed every 3 days. When colony formation was visible, the medium was discarded. The colonies were washed gently with ice-cold PBS, fixed in 4% paraformaldehyde (PFA) for 15 min, and then stained with 0.5% crystal violet (20% methanol, 0.5% crystal violet, and 1% PFA in ddH2O) for 30 min. After the excess crystal violet was washed away and dried off, the colonies were photographed and analyzed. Scratch Wound Healing Assay H460 and A549 cells were planted into six-well plates (500 cells/well), respectively. After overnight attachment, the cells were allowed to reach approximately 70% confluence as a monolayer; the confluent monolayer was then scratched with a sterile 200 µL pipette tip. After scratching, each well was gently washed twice with medium to remove the detached cells. The cells were treated with various concentrations (0, 5, 7.5, 10, and 20 µM) of krukovine. After growing for an additional 24 h, the cells were washed twice with 1× PBS, and the monolayer was photographed under a microscope using the same settings. Statistical Analysis Statistical analysis was carried out with GraphPad Prism 5.0 software. The results were presented as mean ± SEM of three individual experiments. ANOVA or Student's t-test followed by Bonferroni's test was used to compare all pairs of columns. p-Values <0.05 were set as statistically significant. AUTHOR CONTRIBUTIONS LLi, EL, and XY conceived the study, participated in the design and coordination of the whole study, and helped in critically revising the draft for important intellectual content. HL carried out the cell culture studies, molecular biology experiments, data collection, statistical analysis, and manuscript drafting. YW participated in the data collection and performed the statistical analysis. ZJ and FD participated in the molecular biology experiments. YL and LLu helped in critically revising the draft for important intellectual content. All authors have checked and approved the final manuscript and agreed to be accountable for all aspects of the work by ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
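As a closing illustration of the quantitative steps in the Methods above (viability from MTT absorbance and IC50 estimation), the sketch below fits a simple dose-response curve. The viability and absorbance values are hypothetical placeholders, the curve model is a generic two-parameter logistic rather than the exact GraphPad routine, and scipy stands in for Prism.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability_percent(abs_treated, abs_untreated):
    """Cell viability (%) = absorbance of treated wells / absorbance of untreated wells x 100."""
    return abs_treated / abs_untreated * 100.0

def dose_response(conc, ic50, hill):
    """Two-parameter logistic: 100% viability at zero dose, approaching 0% at saturation."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

print(viability_percent(0.45, 0.90))          # 50.0 (hypothetical absorbance pair)

conc = np.array([5.0, 10.0, 20.0, 40.0])      # uM, illustrative dose series
viab = np.array([78.0, 52.0, 30.0, 12.0])     # hypothetical viabilities (%)

params, _ = curve_fit(dose_response, conc, viab, p0=[10.0, 1.0])
print(f"Estimated IC50 ~ {params[0]:.1f} uM")
```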
2018-08-22T13:04:54.966Z
2018-08-22T00:00:00.000
{ "year": 2018, "sha1": "1e59c131c5eb812fad32e74172821a9f65cd1c52", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2018.00958/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e59c131c5eb812fad32e74172821a9f65cd1c52", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
963861
pes2o/s2orc
v3-fos-license
Frequency of Dyslipidemia in patients with Lupus Nephritis Objective: To determine the frequency of dyslipidemia in patients with lupus nephritis and its association with the degree of proteinuria. Methods: This cross-sectional analytic study included 65 patients who fulfilled the ACR (American College of Rheumatology) criteria for SLE and had renal involvement, presenting to the Division of Rheumatology, Fatima Memorial Hospital (FMH), Lahore, from 21st Sep 2016 to 20th Dec 2016. After a 12-hour overnight fast, their blood samples were assessed for total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL) and low density lipoprotein (LDL). Patient demographic variables (age, sex) and disease characteristics (disease duration, degree of proteinuria, steroid dose) were noted. Patients were categorized into two groups on the basis of degree of proteinuria: having proteinuria >1gm or ≤ 1gm. Data was analyzed using SPSS version 22. Individual lipid profiles were correlated with the degree of proteinuria. Results: The most common lipid abnormality found in our study was hypertriglyceridemia (58.5%). Total cholesterol and LDL-C were high in 55.4% and 30.8% of subjects, respectively. Low HDL was found in 21.5% of subjects. An increased frequency of dyslipidemia was noticed in those subjects who had proteinuria >1gm (P value < 0.05). Conclusion: Dyslipidemia was observed at a high frequency in patients with lupus nephritis and was strongly associated with their degree of proteinuria. INTRODUCTION Systemic lupus erythematosus (SLE) is the prototypic autoimmune disease characterized by multisystem involvement and the production of an array of autoantibodies. 1 SLE may involve any organ, mainly the skin, joints, kidneys, heart, lungs and central nervous system. 2 Most of the patients are young females, with an F/M ratio of 15.5:1. 3 Presenting features may vary from skin and joint involvement to organ and life-threatening complications like lupus nephritis (33%). 3 Lupus is associated with pre-mature atherosclerosis owing to lipid abnormalities, which are common in these patients. Most studies reveal that the relative risk of myocardial infarction is 5 to 8 times that of the general population. 4 Studies have shown that the prevalence of dyslipidemia in lupus patients ranges from 36% at diagnosis to 60% or even higher after 3 years. 5 Patients with SLE have elevated plasma TG, LDL-C and Apoprotein B, and decreased HDL-C. 6 Elevated TG levels in SLE patients are in part attributable to anti-lipoprotein lipase (anti-LPL) antibodies, which are present in 47% of patients. 7 Dyslipidemia is believed to decisively affect the long-term prognosis of lupus patients, not only with regard to cardiovascular events but also by influencing lupus nephritis. 5 Nephrotic-range proteinuria, elevated TC level and decreased serum albumin levels not only reflect the activity but also the severity of renal damage in SLE patients. 8 Studies have also shown a higher prevalence of dyslipidemia in patients having a disease duration of 3 years, Max-SLEDAI ≥ 2 and taking Prednisone ≥ 30 mg/d. 9 Hyperlipidemia and lipoprotein abnormalities may play a role in the development of glomerular atherosclerosis in renal disease. 10 Dyslipidemia is prevalent and more severe in lupus nephritis (LN) patients as compared to controls with a similar degree of chronic kidney disease (CKD) despite disease quiescence, low steroid dose and low level of proteinuria.
11 It has been seen that concomitant corticosteroids and renal dysfunction increases the severity of dyslipidemia while Hydroxychloroquine (HCQ) reduces the risk. 11 In patients with LN, hypertension, hyperlipidemia and antiphospholipid syndrome are important risk factors associated with a higher mortality rate and development of renal failure. 12 Statin therapy in lupus patients reduces the risk of mortality (HR 0.44, 95% CI 0.32 to 0.60); coronary artery disease (CAD) (HR 0.20, 95% CI 0.13 to 0.31); cerebrovascular disease (CVD) (HR 0.14, 95% CI 0.08 to 0.25) and end-stage renal disease (ESRD) (HR 0.22, 95% CI 0.16 to 0.29). 13 Dyslipidemia in lupus patients is often underrecognized and also under-treated. Lipid abnormalities lead to pre-mature atherosclerosis which in turn leads to pre-mature CAD in lupus patients. Persistent dyslipidemia is also an independent risk factor to predict the development of CKD in LN patients. Therefore, lipid profile should be monitored regularly and dyslipidemia should be managed aggressively to prevent deterioration of kidney function in such patients. This study was undertaken as there is no data on the prevalence of dyslipidemia and its association with proteinuria in lupus nephritis patients from Pakistan. METHODS The study was conducted in the Division of Rheumatology, FMH Lahore, from September to December 2016, after approval from Institutional Review Board (IRB), FMH, and Lahore. A total of 65 patients were selected both from outpatient and inpatient departments after calculating the sample size (95 % Confidence level, 10 % margin of error and taking frequency of dyslipidemia in SLE of 60%). 7 All patients were >18yrs old and fulfilled the ACR classification criteria for SLE with renal involvement either biopsy proven or had proteinuria, hematuria or an elevated serum creatinine. Patients with CKD, diabetes mellitus, CAD, essential hypertension, Liver or thyroid disease or having h/o familial hyperlipidemia were excluded. Patients on lipid lowering therapy were also not included. Written informed consent was taken from each patient for participation in the study and confidentiality was maintained. Their demographic profiles (i.e. age, sex), disease characteristics (disease duration, degree of proteinuria, current dose of steroids and HCQ) and renal biopsy findings were also noted using a structured questionnaire. After a 12 hours overnight fast and consumption of normal diet for previous two weeks (without fat restriction) blood samples were assessed for TC, TG, HDL and LDL-C. Hyperlipidemia was diagnosed according to National Cholesterol Education Program (TC >200mg/dl, TG >150mg/dl, LDL-C >130mg/dl, HDL-C <40mg/dl). 14 The data was analyzed using SPSS version 22.0. Age, BMI, disease duration, P:C ratio, steroid dose, HCQ dose, TC, TG's, LDL and HDL were presented as mean and ± standard deviation. Categorical variables like gender, degree of proteinuria and dyslipidemia were presented as percentage. On the basis of amount of proteinuria patients were divided into two groups: having a spot urinary protein/creatinine ratio (P: C ratio) of > 1.0 or P:C ratio ≤ 1.0. These two groups were compared for their degree of dyslipidemia by using chi-square test. Individual parameters of fasting lipid profiles were correlated with the degree of proteinuria by Pearson correlation curve. Steroid dose, HCQ dose, BMI and disease duration were also compared with individual lipid profiles using chi-square test. P value ≤ 0.05 was considered significant. 
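A brief sketch of two analysis steps described above, the NCEP cut-off classification and the chi-square comparison between the two proteinuria groups, is given below. The 2x2 counts are hypothetical placeholders, and scipy is used here in place of SPSS.

```python
from scipy.stats import chi2_contingency

def is_dyslipidemic(tc, tg, ldl, hdl):
    """NCEP cut-offs used in this study (mg/dl): TC >200, TG >150, LDL-C >130, HDL-C <40."""
    return tc > 200 or tg > 150 or ldl > 130 or hdl < 40

print(is_dyslipidemic(tc=210, tg=140, ldl=120, hdl=45))  # True (raised TC)

# Hypothetical 2x2 table: rows = P:C ratio >1.0 vs <=1.0, columns = dyslipidemia yes/no
table = [[32, 8],
         [10, 15]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # p <= 0.05 suggests an association with proteinuria
```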
RESULTS There were 65 patients in total, of whom 83.1% (n=54) were female. Demographic details and disease characteristics are shown in Table-I. The frequency of patients with high TC, TGs and LDL was found to be significantly lower in the sub-group of patients with proteinuria ≤1gm, and none of the patients in this sub-group had HDL <40mg/dl (Table-II). A significant positive correlation was found between TC, TGs and LDL and proteinuria, respectively, whereas a significant negative correlation was found between HDL and proteinuria, as shown in Fig. 1-4. TC was also found to be positively correlated with BMI (Chi-square=4.142, P < 0.042), disease duration >3 years (Chi-square=16.984, P < 0.05) and steroid dose >10mg/day (Chi-square=13.97, P < 0.001), while negatively correlated with HCQ dose >200mg/d (Chi-square=6.987, P < 0.008). DISCUSSION Premature coronary artery disease (CAD) significantly affects morbidity and mortality in SLE. Certain traditional and disease-specific factors have been identified as independent predictors for CAD. Among the former are age (particularly postmenopausal state), male sex, arterial hypertension, dyslipidemia, and smoking. Disease activity and duration, cumulative damage, antiphospholipid antibodies, high sensitivity C-reactive protein, and renal disease are the most consistent disease-related factors. Corticosteroids are linked to increased CAD risk whereas antimalarials are protective. 15 In our study the mean age was 27 years, with the majority (83%) being female. All patients had renal involvement, either biopsy proven (78%) or manifested as urinary abnormalities during the course of their illness. At the time of the study, proteinuria was present in 86% (nephrotic range in 26%) and microscopic hematuria in 56.8%. Hypertension was present in 60%. Some of the results are similar to those reflected in previous local studies, which reported lupus nephritis being commoner in females (92%), with the commonest age being 20-40 years. In these studies proteinuria had been reported in 74% (nephrotic range in 45%), microscopic hematuria in 66.6% and hypertension was found to be present in 70%. 16,9 Radillo HA et al. found that 68.8% of lupus patients had dyslipidemia, which was associated with disease activity measured by SLEDAI, presence of lupus nephritis, use of prednisolone >20mg/d and evolution of disease <3 years, while absence of dyslipidemia was associated with the use of HCQ. 20 We have also found a positive correlation between dyslipidemia and prednisolone dose >10mg/d (mean steroid dose was 11.36 ± 11.26) and disease duration >3 years, while TC was low in patients on adequate doses of HCQ. A Chinese study reported that 59% of LN patients and 46% of CKD controls showed dyslipidemia. LN patients showed higher TC and TGs than controls. More LN patients had abnormal TC, TG or LDL-C (54%, 16% and 38% respectively). 10 In our study we noticed a positive correlation between dyslipidemia and proteinuria. HN Reich et al. found a statistically significant interaction between cholesterol and proteinuria (p=0.009). 21 Similarly, Kyung-Eun Lee et al. studied 68 patients with biopsy proven lupus nephritis. They found that the group with higher LDL-C (≥100mg/dl) excreted more 24-hour urine protein than the lower LDL-C group. The higher LDL-C was a significant predictor of CKD in these patients on follow-up. 22 Lui L et al. reported that SLE patients had significantly higher TC, TG and LDL-C levels and significantly lower HDL-C levels compared with the control group (all P<0.01).
TC, TG and LDL-C levels were positively correlated with lupus nephritis, corticosteroid therapy and 24-hour urine protein content, while HDL-C levels were positively correlated with age, lupus nephritis, and corticosteroid therapy. 23 In lupus nephritis, both cholesterol and proteinuria have been repeatedly shown to be interrelated and modifiable risk factors for premature CAD and progressive loss of kidney function. Traditionally, care has emphasized controlling disease flares, and less attention has been paid to screening for and treating dyslipidemia. It is therefore imperative for all clinicians treating SLE patients to screen for dyslipidemia and start treatment as early as possible to achieve better disease outcomes and improve the quality of life of SLE patients. However, our study highlights that, apart from instituting lipid-lowering therapy, targeting the disease to remission might by itself be helpful in modifying this cardiovascular risk factor and improving the outcome of lupus patients. Limitations of the study: Firstly, there was no control group. Secondly, this was a cross-sectional study. In the future we need to plan a long-term prospective study with a larger sample size and a control group to look for an association of dyslipidemia with disease activity and disease outcome.
2018-04-03T00:25:57.836Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "79f3f898aa936cf3e5ce488def5fc87eca5d5dce", "oa_license": "CCBY", "oa_url": "https://doi.org/10.12669/pjms.332.12410", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79f3f898aa936cf3e5ce488def5fc87eca5d5dce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
35704379
pes2o/s2orc
v3-fos-license
The Effect of Sleep Hygiene on the Incidence of Cardiac Dysrhythmia in Patients with Myocardial Infarction Hospitalized in Critical Care Units: A Randomized Controlled Trial Background: Patients in the cardiac care unit (CCU) have some degree of sleep disorder that may consequently increase the risk of dysrhythmia in these patients. Objectives: This study aimed to investigate the effect of sleep hygiene on the incidence of cardiac dysrhythmia in patients with myocardial infarction (MI) hospitalized in CCUs. Methods: In this randomized controlled trial, 62 patients with MI who lacked sleep disorders before admission were assessed using the Pittsburgh sleep quality index and a researcher-made sleep hygiene questionnaire. The patients were selected consecutively and then randomly allocated into the intervention and control groups to receive either the sleep hygiene training or routine care. All patients were under cardiac monitoring on the second and third days of their hospitalization, and the number of PVCs and PACs was recorded during a 6-hour period on these two days. Data were analyzed by the chi-square test, independent samples t-test, and paired t-test. Results: On the third day, the number of PVCs (2.06 ± 0.04) and PACs (0.87 ± 0.02) was significantly lower in the intervention group than in the control group (4.45 ± 3.71 and 2.68 ± 2.53, respectively) (P < 0.01). Unlike the control group, in the intervention group the number of PVCs (2.06 ± 0.04 vs. 4.74 ± 0.07, P < 0.01) and PACs (0.87 ± 0.02 vs. 2.91 ± 0.05, P < 0.05) on the third day was significantly reduced compared to the second day. Conclusions: Performing sleep hygiene principles can reduce the incidence of dysrhythmia after MI. Therefore, nurses can use sleep hygiene practices in combination with other treatments to reduce the incidence of dysrhythmia after MI. Patients with myocardial infarction (MI) comprise a major part of patients hospitalized in coronary care units (CCU) (1). Dysrhythmia is prevalent in these patients and explains 40%-50% of their mortality (2), especially within the first 48 hours following MI (3). Evidence shows that sleep has a vital role in the regulation of cardiovascular function (4). It has also been shown that the lack of adequate sleep triggers the release of epinephrine and norepinephrine, which consequently would increase the heart rate, respiratory rate, blood pressure, and myocardial oxygen demand (5). These neurotransmitters would also intensify anxiety, irritability, and anger (4) and, therefore, would increase the risk of hypertension, coronary artery disease (6), heart attack (7), and dysrhythmias (8) such as premature ventricular complexes (PVCs) and premature atrial complexes (PACs) (9). All patients who are hospitalized in CCUs have some degree of sleep disorder (10,11). They may spend 30%-40% of their sleep time awake, and the demand for sleep would increase during the day (12). Several factors such as environmental noise, light, temperature, nursing procedures (5), phone ringing, staff and patients' speech, being connected to the monitoring systems, an inappropriate bed, and disregard of bedtime habits (13) might be involved in the prevalence of sleep impairment in CCU patients. Although it is difficult for nurses to control some of the sources of noise and discomfort, every effort should be made to reduce the patients' discomfort and distress and improve their sleep and rest quality (14).
Researchers have examined the effects of different strategies to improve the quality of sleep in CCUs. For instance, Jones et al. and Hu et al. have reported that using eye masks and earplugs can improve the perceived sleep quality and hormone balance in critical care patients (15,16). Neyse et al. have also shown that using earplugs can improve the sleep quality in CCU patients with acute coronary syndrome (17). However, no studies are available regarding the effect of sleep hygiene on the incidence of cardiac dysrhythmia in patients with MI. Considering the prevalence of sleep impairment among patients hospitalized in CCUs, the question still remains: "can sleep hygiene affect the incidence of cardiac dysrhythmia in patients with MI?" Objectives This study aimed to investigate the effect of sleep hygiene on the incidence of cardiac dysrhythmias in patients with MI hospitalized in CCUs. Study Design and Participants This non-blind randomized controlled trial was conducted on 62 patients with MI in two CCUs of Ekbatan hospital in Hamadan, Iran, between July and November 2014. The inclusion criteria were suffering from acute inferior MI (upon consultation with a physician), full consciousness, ability to speak, not suffering from mental diseases, lack of pain due to non-cardiac diseases, not being addicted to opium or sleep medications, not receiving medical treatment or any procedure influencing sleep, lack of previous sleep disorders, and being in the first day of hospitalization. The exclusion criteria were: having pain or consumption of narcotics during the night, occurrence of a long-term PVC or PAC that needed medical treatment, being discharged before the third day, death of the patient, cardiac arrest, loss of consciousness, or unwillingness to take part in the research. The sample size was calculated based on the results of a study conducted by Bagheri Nesami (18). Accordingly, σ1, σ2, and d (the difference of the two means) were 3.88, 4.82, and 4, respectively. By considering a type I error probability of 0.05 and a power of 0.9, the sample size was calculated to be 26 for each group. Then, five patients were added to each group because of the possibility of drop-out. Thus, the sample included 31 patients in each group. The subjects admitted to one of the CCUs comprised the intervention group (n = 31) and patients in the other CCU were assigned as the control group (n = 31). A consecutive sampling method was used to recruit the patients into the study and the selected patients were randomly allocated into the two groups. A total of 74 patients were assessed for eligibility; however, 12 patients were excluded: 9 patients for not meeting the inclusion criteria and 3 patients for unwillingness to participate in the study.
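The stated sample-size calculation for comparing two means (σ1 = 3.88, σ2 = 4.82, d = 4, α = 0.05, power = 0.9) can be reproduced with the standard two-sample formula; the sketch below is illustrative rather than the authors' code, but it returns the reported 26 patients per group.

# Sample size for comparing two means with unequal standard deviations.
import math
from scipy.stats import norm

sigma1, sigma2, d = 3.88, 4.82, 4.0
alpha, power = 0.05, 0.90

z_alpha = norm.ppf(1 - alpha / 2)   # two-sided type I error
z_beta = norm.ppf(power)

n_per_group = (z_alpha + z_beta) ** 2 * (sigma1 ** 2 + sigma2 ** 2) / d ** 2
print(math.ceil(n_per_group))       # 26; five more per group were added for drop-out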
Instruments A three-part instrument was used in this study. The first part consisted of questions on the patients' demographic information, including gender, age, marriage, residence, and number of children. There were also four questions in this part for recording the number of PVCs and PACs that occurred in the patient within a 6-hour period on the second and third days of hospitalization. The second part was a researcher-made questionnaire that included 15 questions organized into three subsets: 'sleep environment', 'things to be avoided', and 'sleep habits'. The sleep environment subset consisted of 5 questions regarding bed comfort, the effect of light, noise, environment temperature and the effect of being connected to the monitoring systems during sleep. The second subset (i.e. things to be avoided) comprised 6 questions about eating heavy foods, tea, coffee, and chocolate, smoking, and thinking about problems before sleep. The subset of sleep habits consisted of 4 questions regarding tooth brushing, prayers, reading books, and drinking milk. All questions were answered on a 4-point scale that was designed proportionally to the scope of the questions. The content validity of this part of the instrument was verified by a number of experts and its reliability was confirmed by a Cronbach's alpha of 0.85. The Pittsburgh sleep quality index (PSQI) was used as the third part of the data collection instrument. The PSQI was used to investigate sleep disorders. Nasiri Ziba et al. examined the validity and reliability of the Persian version of the PSQI and the Cronbach's alpha was reported to be 0.87 (19). The PSQI consists of 19 questions organized into 7 parts (i.e. subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medications, and day-time dysfunction). Each subscale is scored from 0 (no problem) to 3 (very serious problem) and the overall score ranges from 0 to 21. A score of 5 or higher indicates a sleep disorder or bad sleep quality (20,21). All the needed data were gathered through face-to-face interviews with individual patients. The first author conducted all of the interviews. The number of dysrhythmias (i.e. PVCs and PACs) that occurred in each patient was also monitored using a cardiac monitoring system. For recording the PACs and PVCs, patients underwent cardiac monitoring on the second and third days of hospitalization for six hours. The cardiac monitoring device was authenticated based on the reputation of the manufacturer, standard tools, and reputable mark, and its calibration was certified by the medical equipment engineering technicians before each sample was taken. PVCs and PACs are among the prevalent dysrhythmias after MI that do not need any intervention if they are sporadic (22). However, they need to be managed if they are frequent and continuous. In the present study, we selected these two dysrhythmias because they did not improve after their occurrence.
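For readers unfamiliar with the PSQI scoring described above, the following sketch shows how a global score is formed from the seven component scores and interpreted against the cutoff of 5; the component values are hypothetical and not taken from the study.

# PSQI global score: seven components, each 0-3, summed to a 0-21 score;
# a global score of 5 or more is read as poor sleep quality / sleep disorder.
def psqi_global_score(components):
    assert len(components) == 7 and all(0 <= v <= 3 for v in components.values())
    return sum(components.values())

example = {
    "subjective_sleep_quality": 1, "sleep_latency": 2, "sleep_duration": 1,
    "habitual_sleep_efficiency": 0, "sleep_disturbances": 1,
    "use_of_sleep_medication": 0, "daytime_dysfunction": 1,
}
score = psqi_global_score(example)
print(score, "poor sleep" if score >= 5 else "good sleep")  # 6 -> poor sleep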
Procedures Patients meeting the inclusion criteria were identified through daily visits to the aforementioned CCU wards and examination of their hospitalization records, alongside consultation with the treating physician. Before the start of the study, a coin-tossing method was used to randomly allocate the two CCU wards to either the control or the intervention group. Then, patients in CCU1 and CCU2 were enrolled in the intervention group and control group, respectively. Every patient first completed the questionnaire. To this end, an individual interview was undertaken with each patient and the researcher recorded the answers on the questionnaires. All interviews were conducted while the patients were in a comfortable position on their beds. On the second hospitalization day, following the assessment of sleep quality and sleep barriers, and based on the barriers expressed by each patient, a combination of adjusted activities was conducted for the patients in the intervention group. To this end, nurses caring for the patients in the intervention group were asked to modify their activities and caring behaviors according to the patients' sleep habits and needs (i.e. decreasing their loud speech, decreasing the environmental light and noise, and using light-blinker telephone apparatuses instead of ringing ones). Patients were also trained to avoid smoking and using cell phones, and to avoid using materials such as tea, coffee, chocolate, and heavy foods 1-2 hours before sleep time. They were also educated not to think about their life problems and hassles and to try some tranquil activities such as praying, reading books or newspapers, tooth brushing or having a cup of milk before sleep time. Moreover, the beds, bed sheets and coverlets were modified according to their desire. All patients in the two groups were taught about preventing the separation of leads and keeping at rest as much as possible. However, no intervention was carried out on patients in the control group and they only received the routine care such as decreasing loud speech and decreasing the environmental light. All patients in the two groups were under cardiac monitoring on the second and third days of their hospitalization. Then, in collaboration with a cardiologist and using the apparatus memory, the number of PVCs and PACs that occurred was recorded during a 6-hour period (i.e. 8:00-14:00) on these two days. Ethical Considerations The present study was approved by the ethics committee of Hamadan University of Medical Sciences (grant no. p.16.35.3800, ethical approval code: p.16.35.9.263). All of the patients were informed about the voluntary nature of their participation and they were requested to sign an informed consent prior to participation. All patients were also assured of their anonymity and the confidentiality of the data, and they were informed that they could withdraw from the study at any time. This study was registered at the Iranian Registry of Clinical Trials under the registration code 2013060113493N1. Data Analysis The statistical analysis was performed using SPSS 11.5. The demographic variables of the two groups were compared using the chi-square test. The independent samples t-test was also used to compare the number of recorded dysrhythmias between the two groups on the second and third days, and the paired t-test was employed to compare the data between the second and third days in each group. The Kolmogorov-Smirnov test was utilized to examine the normal distribution of the main quantitative variables, which showed that the distribution was normal.
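The between-group and within-group comparisons described in the Data Analysis paragraph can be illustrated as below; the PVC counts are randomly generated placeholders rather than the trial data.

# Independent-samples t-test (between groups, third day) and paired t-test
# (within the intervention group, second vs. third day) on hypothetical counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pvc_intervention_day3 = rng.poisson(2, 31)
pvc_control_day3 = rng.poisson(4, 31)
pvc_intervention_day2 = rng.poisson(5, 31)

t_ind, p_ind = stats.ttest_ind(pvc_intervention_day3, pvc_control_day3)
t_rel, p_rel = stats.ttest_rel(pvc_intervention_day2, pvc_intervention_day3)
print(f"between groups: p = {p_ind:.3f}; within intervention group: p = {p_rel:.3f}")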
Results Most of the patients in this study were above 50 years old (61.2%), married (67.7%), resided in cities (54.8%), and had fewer than 2 children (41.9%). The number of male patients was higher than the number of female patients in the two groups, and a significant difference was observed between the two groups in this regard (P = 0.001) (Table 1). There was no significant difference between the control and intervention groups on the second day of hospitalization in terms of the number of PVCs (P = 0.53) and PACs (P = 0.51). However, after the intervention (i.e. on the third day), a significant difference was found between the two groups in terms of the number of PVCs (P = 0.005) and PACs (P = 0.007) (Table 2). There was a significant difference in the number of PVCs (P = 0.006) and PACs (P = 0.01) between the second and third days in the intervention group, while there was no significant difference in the number of PVCs (P = 0.08) and PACs (P = 0.47) between the second and third days in the control group (Table 2). Discussion The present study showed the positive effect of sleep hygiene on decreasing the rate of dysrhythmias in patients with MI, so that the intervention group experienced fewer PVCs and PACs on the third day than on the second day of hospitalization. In agreement with the present study, Jones et al. (15) and Babaee et al. (23) have reported that decreasing the environmental light and noise along with using earplugs and eye masks could improve the sleep quality in critical care patients. Arab et al. have also compared the effects of earplugs and eye masks on sleep quality and reported that earplugs were more effective than eye masks (24). Although no previous study is available regarding the effect of sleep hygiene on the incidence of dysrhythmias, a number of earlier studies have examined the effects of various non-pharmacological interventions such as therapeutic touch (26), scheduled visits (27), and relaxation methods (28) and indicated that they can reduce the incidence of PVCs, PACs and ventricular dysrhythmias in patients with MI. The positive effects of these interventions might be attributed to the increased serenity and comfort (26), decreased anxiety (29), and reduced levels of catecholamines (5) induced by these methods. The findings of the present study should be interpreted by considering some limitations such as the small sample size, the short-term intervention and implementing the study in only two CCUs. One might also criticize the methodology because of the possible differences in the two study settings; however, the routines and methods of care and treatment were similar in the two wards and no significant difference was observed between the two groups in terms of the patients' characteristics. In conclusion, the present study showed the positive effect of sleep hygiene on the incidence of PVCs and PACs in patients with MI. Considering the effectiveness of the intervention, it is suggested that nurses working in CCUs be re-trained about the positive effects of sleep hygiene on cardiac patients. CCU nurses are also recommended to use similar interventions to improve the sleep quality of CCU patients, especially those admitted with MI. However, further studies with larger sample sizes and a longer duration of intervention are suggested.
Table 1. Demographic Information of Patients in Intervention and Control Groups. a Values are expressed as No. (%). b Chi-square test.
Table 2. Comparison of the Number of PAC and PVC Between the Two Groups on the Second and Third Days. a Values are expressed as mean ± SD. b t-test. c Paired t-test.
2017-08-14T22:12:05.231Z
2017-01-03T00:00:00.000
{ "year": 2017, "sha1": "abe65bd769de79098460a8fa5d64cd2630adedc8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5812/nmsjournal.37652", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "abe65bd769de79098460a8fa5d64cd2630adedc8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257482644
pes2o/s2orc
v3-fos-license
Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields Recently, Neural Radiance Fields (NeRF) have emerged as a potent method for synthesizing novel views from a dense set of images. Despite its impressive performance, NeRF is plagued by its necessity for numerous calibrated views, and its accuracy diminishes significantly in a few-shot setting. To address this challenge, we propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with a very small number of input views, without incorporating additional priors. Basically, we train our model under the supervision of reference and unseen views simultaneously in an iterative procedure. In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration. However, these expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrades the performance of NeRF. To alleviate this issue, we construct an uncertainty-aware NeRF with specialized embeddings. Some techniques such as cone entropy regularization are further utilized to leverage the pseudo-views in the most efficient manner. Through experiments under various settings, we verified that our Self-NeRF is robust to input with uncertainty and surpasses existing methods when trained on limited training data. Introduction Synthesizing novel camera views given a set of known views is an important task in computer vision and a prerequisite to many AR and VR applications. Classic techniques have addressed this problem using structure-from-motion [16] or light fields [23]. Recently, Neural Radiance Fields (NeRF) [29] have gained tremendous popularity due to the impressive results in photo-realistic rendering. This approach trains learning-based models implicitly embedded within a 3D geometric context and reconstructs observed images using neural rendering techniques. Albeit effective, the performance of NeRF is highly influenced by the quality and the number of training views. When the known views are limited, NeRF collapses to trivial solutions [33] (e.g., producing zero density for the unobserved regions) and has the risk of overfitting to seen views. To make this challenging problem tractable, previous works attempt to incorporate additional priors, such as a semantic feature [18], ground truth depth [10] or normalizing flow [32]. Although these models yield adequate rendering performance, these additional priors are not always valid. Kim et al. [20] propose a prior-free approach that introduces ray entropy minimization and ray information gain reduction for each ray to alleviate the reconstruction inconsistency and overfitting issue. However, entropy regularization imposes sparsity on the estimated scene, resulting in artifacts and flaws in unseen viewpoints. In this paper, we propose Self-NeRF to solve the few-shot novel view synthesis task without additional priors. Our key idea is to design a self-training framework in which we jointly learn from seen views and a large number of auxiliary unseen views. Self-training [52] is a classic method for semi-supervised learning. It aims to learn from unlabeled data by iteratively imputing the labels for samples predicted with high confidence. In the novel view synthesis task, the labeled data refers to seen views while the unlabeled data refers to unseen views.
Inspired by self-training, our self-training backbone leverages confident predictions from the previous iteration to produce pseudo-views for unseen views. Our pseudo-views can be categorized into two types: the warped pseudo-views generated through forward warping and the predicted pseudo-views, which are the outputs of the previous iteration. The former provide local texture guidance, while the latter help improve the perceptual capability of the global structures. In other words, pseudo-views add more information to guide model training. However, these pseudo-views still contain uncertain regions with inaccurate colors due to warping artifacts or occlusion. Assuming that all input pixel colors are reliable, NeRF will faithfully learn these uncertain pixels in pseudo-views, which results in reconstruction inconsistency across multiple views and degenerate solutions. To avoid this degeneracy, we propose an uncertainty-aware NeRF that autonomously learns a field of uncertainty from pseudo-views. Based on the output of the uncertainty field, we alleviate the impact of uncertain pixels. Specifically, we introduce our specialized warping embeddings and uncertainty embeddings to model image-dependent uncertain colors. Furthermore, we leverage the cone tracing technique and a cone entropy regularization within a conical frustum to represent fine details. The cone entropy regularization pushes the model toward a compact representation in the unobserved views instead of collapsing to a trivial solution. Our experiments show that the proposed Self-NeRF achieves state-of-the-art performance on few-shot novel view synthesis, as shown in Fig. 1. In summary, the main contributions of our paper are: • We propose Self-NeRF, a novel iterative training scheme for synthesizing novel views from few-shot images without additional priors. • We prove the convergence of our iterative Self-NeRF through theoretical deduction and experiments. • We introduce a practical implementation of Self-NeRF, which leverages an uncertainty-aware NeRF, specialized embeddings and a cone entropy regularization to avoid degeneration due to pseudo-views. Novel view synthesis Given a dense sampling of views, earlier works use view interpolation [6] and light fields [23,15] to reconstruct novel views. To better represent the 3D scene, some works utilize proxy geometry [9] and explicit representations such as layered representations [40,42], voxels [43], meshes [17] and point clouds [36,48]. Recently, a plethora of learning-based methods [12,13,19,25,57] has received growing attention. Simultaneously, there is another line of work that uses volumetric representations to address the task of photo-realistic view synthesis. Neural radiance fields (NeRF) [29] employ an implicit neural representation of a 3D scene and use volumetric rendering to generate photo-realistic unseen views. Mip-NeRF [2] follows NeRF and reasons about volumetric frustums along a cone for anti-aliasing. Mip-NeRF 360 [3] further extends it to model unbounded scenes with a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer. Furthermore, recent works have made tremendous efforts to improve the rendering speed [31,38,44,53], artistic effects [45,11,56], and generalization ability [26,46] of NeRF. Some works [27,49,55] also adapt NeRF for dynamic scenes. Few-shot view synthesis The requirement of numerous calibrated images is a key limitation of NeRF.
Some works [10,39] attempt to reduce the data requirements by using depth priors. Without depth priors, MetaNeRF [24] uses data-driven priors recovered from a domain of training scenes. Pixel-NeRF [54] and Peng et al. [35] utilize the implicit spatial information in local CNN features to construct the radiance fields. MVSNeRF [5] and IBRNet [46] employ earlier multi-view stereo methods to produce a multi-view feature volume. DietNeRF [18] resorts to the pretrained CLIP-ViT [37] and adapts its projected image embeddings as features to add global semantic information for novel views. These methods heavily rely on external supervisory signals such as depth information or additional pretrained encoders to synthesize novel views. Recently, some studies [4,1,51,8] have introduced the augmentation of warped views with only few-shot images to improve neural radiance fields, which provides more training constraints designed in a small patch. However, these methods achieve suboptimal performance since they ignore the uncertainty of warped views caused by the warping operations. Unlike the aforementioned works, some methods [20,32,7,14] introduce a prior-free model. InfoNeRF [20] minimizes the ray entropy among seen and unseen poses and utilizes ray information gain reduction to alleviate reconstruction inconsistency across views. Although their strategies improve the quality of novel-view synthesis, they do not explore the full potential of unseen views. Artifacts such as blurring and cloud effects still exist in the synthesized image. By contrast, we fully leverage adequate pseudo-views and reduce the artifacts with our iterative training.
Figure 2: The iterative pipeline for Self-NeRF. In the i-th iteration, we gather predicted pseudo-views synthesized by the model in the previous iteration for unseen views. The warped pseudo-views are collected through warping the seen views to unseen views. Then we train the uncertainty-aware NeRF f_θ^i in a supervised way with seen views and pseudo-views simultaneously.
Preliminaries Neural radiance fields represent a 3D scene as a continuous implicit function f_θ, which outputs an emitted radiance value and volume density when given a 3D position x ∈ R^3 and unit viewing direction d ∈ R^3. In practice, NeRF adopts a multi-layer perceptron (MLP) model to predict the corresponding volume density σ ∈ [0, ∞] and color c ∈ [0, 1]^3 given the queried x and d as (c, σ) = f_θ(γ(x), γ(d)), where γ is a predefined positional encoding. To render the RGB color at the target pixel, NeRF samples points along the ray R and integrates colors and densities based on volume rendering. The ray r(t) = o + td is emitted from the camera's center o along the direction d. We compute the 3D position x_k = r(t_k) for each sample point t_k. Thus the rendered color can be formulated as C(R) = Σ_{i=1}^{N} T_i (1 − exp(−σ_i δ_i)) c_i, with T_i = exp(−Σ_{j<i} σ_j δ_j), where N indicates the total number of sample points along R, and δ_i represents the distance between the i-th and (i + 1)-th points. NeRF casts a single ray per pixel and may produce renderings that are excessively blurred or aliased. To ameliorate this issue, mip-NeRF [2] casts a cone that passes through the pixel's center. To this aim, mip-NeRF derives an integrated positional encoding (IPE), which is the integration over a volume covered by a conical frustum. Mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being faster.
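As a concrete illustration of the volume-rendering quadrature summarized in the Preliminaries, the sketch below accumulates per-sample densities and colors along one ray into a pixel color. It is a generic NeRF-style renderer written under our own assumptions, not code taken from the paper.

# Per-ray volume rendering: alpha compositing of sampled densities and colors.
import torch

def render_ray(sigma, color, delta):
    # sigma: (N,) densities, color: (N, 3) RGB, delta: (N,) inter-sample distances
    alpha = 1.0 - torch.exp(-sigma * delta)              # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)    # accumulated transmittance
    trans = torch.cat([torch.ones(1), trans[:-1]])       # T_1 = 1
    weights = trans * alpha                              # w_i = T_i * alpha_i
    return (weights[:, None] * color).sum(dim=0)         # rendered RGB

sigma = torch.rand(64); color = torch.rand(64, 3); delta = torch.full((64,), 0.05)
print(render_ray(sigma, color, delta))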
Despite its impressive performance, Niemeyer et al. [32] find that the quality of mip-NeRF's view synthesis drops significantly with only a few views. Mip-NeRF fails to generalize well to novel views at test time due to training divergence. In addition, blurry artifacts or floaters may appear because of the inherent ambiguity of few-shot input. Motivation and overview For the novel view synthesis task, we treat pixels in training images as labeled data while the cast rays in novel views are considered unlabeled data. Consequently, this task can be solved with a semi-supervised learning method. In this paper, we utilize an inductive semi-supervised learning method to harness large amounts of unseen views in combination with smaller sets of seen views. Specifically, we construct a novel iterative scheme in a self-learning manner [50]. The typical self-training algorithm [30,47,58] has three main steps: 1) train a good teacher model with labeled data, 2) use the teacher model to produce pseudo-labels on unlabeled data, and 3) train a student model on labeled and pseudo-labeled data simultaneously. In Self-NeRF, we iterate this algorithm a few times by putting back the student as a teacher to relabel the unlabeled data and training a new student. We denote the student model in the i-th iteration as f_θ^i; thus the trained teacher model in the i-th iteration is f_θ^{i−1}. In other words, the working pipeline of Self-NeRF is to train the model f_θ^i iteratively using the seen views and pseudo-views generated by f_θ^{i−1}. The overview of Self-NeRF is shown in Fig. 2.
Figure 3: Given four seen views (left), we obtain the predicted pseudo-views (middle) and warped pseudo-views (right) for unseen views.
We observe that the performance of Self-NeRF is highly influenced by the quality of the pseudo-views and the capability of the model in each iteration. To produce realistic rendering, we carefully design our pseudo-views (Sec. 3.3) and propose an uncertainty-aware model (Sec. 3.4). Sec. 3.5 further describes the inference and optimization of our model. In addition, we discuss the convergence of our iterative training and prove the feasibility of applying self-training to the novel view synthesis task in Sec. 3.6. Pseudo-views in Self-NeRF In the i-th iteration, we first gather pseudo-views synthesized by f_θ^{i−1} for unseen views. As shown in Fig. 3, predicted pseudo-views capture the main structure of the scene, thus adding them to the training views helps improve the perceptual capability of the global structures. However, predicted pseudo-views may introduce color shifts due to training divergence, even when these pixels are visible in the training images. To alleviate this issue, we generate warped pseudo-views through forward warping. In more detail, we warp seen views I_i to unseen views using the predicted depth map D_i from f_θ^{i−1} and get warped pseudo-views I_j. For a pixel p_i ∈ I_i in the seen view, the corresponding pixel p_j in the unseen view is obtained by back-projecting p_i into 3D with its predicted depth and reprojecting it into the unseen view, where d_i ∈ D_i is the depth of p_i predicted by f_θ^{i−1}, K_i is the camera intrinsic matrix of I_i and T_ij refers to the transform matrix from I_i to I_j. Pixels in warped pseudo-views are reprojected from the seen views, thus providing local texture guidance.
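The depth-based forward warping used to create warped pseudo-views can be pictured as below. This is a hedged sketch written under the assumption of shared camera intrinsics K between the two views, not the paper's implementation.

# Back-project a pixel with its predicted depth, apply the relative pose, reproject.
import numpy as np

def warp_pixel(p_i, d_i, K, T_ij):
    # p_i: (u, v) pixel in the seen view, d_i: predicted depth,
    # K: 3x3 intrinsics, T_ij: 4x4 transform from view i to view j
    uv1 = np.array([p_i[0], p_i[1], 1.0])
    X_i = d_i * np.linalg.inv(K) @ uv1                 # 3D point in camera i
    X_j = (T_ij @ np.append(X_i, 1.0))[:3]             # 3D point in camera j
    uvw = K @ X_j
    return uvw[:2] / uvw[2]                            # pixel p_j in the unseen view

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
T_ij = np.eye(4); T_ij[0, 3] = 0.1                     # hypothetical small translation
print(warp_pixel((320, 240), 2.0, K, T_ij))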
Uncertainty-aware model in Self-NeRF While providing global structure information and local texture guidance, pseudo-views still contain uncertain pixels for unobserved regions. Assuming that all training pixels are equally reliable, mip-NeRF tends to learn the uncertain colors. Consequently, the performance fluctuates wildly. Worse still, the low signal-to-noise ratio of pseudo-views sometimes leads to training collapse. To handle the challenges of uncertain pixels, we adapt mip-NeRF to be tolerant of uncertainty following Martin-Brualla et al. [27]. To this aim, we model the output color C_p(R) as the sum of the real color C_r(R) and the uncertain color C_u(R), i.e., C_p(R) = C_r(R) + C_u(R), where C_r(R) and C_u(R) are learned from the radiance field and the uncertainty field, respectively, during training. More specifically, our model adds two specialized embeddings and a branch to emit a field of uncertainty, where σ_u and c_u are the predicted density and color of the uncertainty field and µ_u is the uncertainty of the prediction. ω represents the learned warping embeddings that distinguish warped pseudo-views from predicted pseudo-views. Hence we assign the same ω to warped pseudo-views and seen views, explicitly implying that warped pseudo-views are reprojected from the seen views. Through this design, our model is expected to put more trust in the warped pseudo-views that are visible in seen views. φ denotes the uncertainty embeddings that model per-image uncertain colors; thus each view has its own distinctive φ. Owing to the uncertainty field, our model relaxes mip-NeRF's strict consistency assumption and pushes mip-NeRF to provide a larger µ_u in the unobserved regions instead of collapsing to the trivial solution during the iterative process. As a result, Self-NeRF can attenuate the negative impact of uncertainty caused by warping or overfitting and gain information from adequate pseudo-views, yielding superior image quality. The structure of our model is shown in Fig. 4. Inference and optimization Inference. For the query ray R, we model the predicted color with an isotropic normal distribution with mean color C_p(R) and variance V_u(R). To get C_p(R) according to Eq. 4, we calculate C_r(R) through Eq. 2, and the learned variable colors C_u(R) can be analogously rendered from the uncertainty field outputs σ_u and c_u. Similar to Eq. 6, V_u(R) is approximated with a linear combination of sampled points. In addition, we render the depth d(R) ∈ D_i, which is leveraged in Eq. 3 to generate warped pseudo-views for the subsequent iteration. Optimization loss. For a ray R with ground truth RGB C_gt(R), the RGB loss is the negative log-likelihood (NLL) of C_gt(R) under the predicted distribution with mean C_p(R) and variance V_u(R). Likewise, we obtain the pseudo loss for ray R with the pseudo-view color C_pseudo(R). We further regularize the cone tracing with a cone entropy loss following the Shannon entropy [41]. The total loss combines the NLL loss, the pseudo loss and the cone entropy loss, where λ_1 and λ_2 denote manual parameters to balance the loss terms. In particular, λ_1 decays exponentially by a factor of 2 every 10k steps. The slowly decreasing weight is expected to help the optimization process avoid poor local minima so that pseudo-views provide guidance without conflicting with seen views.
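A minimal sketch of the kind of per-ray Gaussian negative log-likelihood described in the optimization-loss paragraph is shown below; the exact loss in the paper may differ, and the batch of rays here is random placeholder data.

# Gaussian NLL over rendered colors: mean C_p(R), variance V_u(R).
import torch

def gaussian_nll(c_pred, v_u, c_target, eps=1e-6):
    # c_pred, c_target: (B, 3) colors; v_u: (B, 1) rendered uncertainty variance
    var = v_u.clamp_min(eps)
    return (((c_target - c_pred) ** 2) / (2.0 * var) + 0.5 * torch.log(var)).mean()

c_pred = torch.rand(8, 3); c_gt = torch.rand(8, 3); v_u = torch.rand(8, 1) * 0.1
loss_rgb = gaussian_nll(c_pred, v_u, c_gt)
# The total objective would add the pseudo loss (weighted by a decaying lambda_1)
# and the cone entropy loss (weighted by lambda_2) to this term.
print(loss_rgb)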
Convergence analysis of Self-NeRF Lee et al. [22] have analyzed the self-learning technique and shown that its effect is equivalent to a version of entropy regularization. The unlabeled data can improve generalization performance even when the pseudo-labels are not precise. For the novel view synthesis task, our pseudo-views theoretically reduce overfitting by providing a possible solution for unseen views. In other words, the statement P(n) that f_θ^n outperforms f_θ^{n−1} is true for any n ∈ [2, ∞]. We prove it by induction as follows. Base step. We first prove that the statement P(n) is true for n = 2. For the sake of simplicity, let us assume that our target function is F_gt(x) = sin(x − 0.1) + sin(x) + sin(x + 0.1), which is an analog of integrating along the rays. Taking 4 pairs of labeled data as few-shot input, we model F_gt(x) with a naive network f_θ through our self-learning pipeline. f_θ comprises three fully-connected layers and has 100 neurons in the hidden layer. Following Sec. 3.3, we gather predicted pseudo-labels from f_θ^{(i−1)} and assign them to the unlabeled data x_u. Apart from that, we mimic warped pseudo-labels using an average of f_θ^{(i−1)} predictions. Fig. 5 shows the performance of the learned models f_θ^{(1)} and f_θ^{(2)}. f_θ^{(1)} refers to the model trained solely with the few-shot input. Besides the few-shot input, f_θ^{(2)} further uses 8 warped pseudo-labels and 12 predicted pseudo-labels. Note that f_θ^{(2)}, with a smooth curve, is closer to F_gt(x) than f_θ^{(1)}. From a quantitative perspective, f_θ^{(2)} has a lower mean absolute error than f_θ^{(1)}. Therefore, P(2) is proved to be true. Inductive step. Assume that P(k) is true, i.e., f_θ^{(k)} outperforms f_θ^{(k−1)}; hence the pseudo-labels from f_θ^{(k)} have a higher signal-to-noise ratio. When we train f_θ^{(k+1)} under the same setting as that used for training f_θ^{(k)}, the higher quality of the pseudo-labels ideally leads to a better model. Consequently, f_θ^{(k+1)} outperforms f_θ^{(k)}. In other words, the truth of P(k) implies the truth of P(k + 1). Therefore, P(n) is true for any n ∈ [2, ∞]. That is to say, the learned model will be improved with our iterative self-training, yielding photo-realistic novel-view synthesis results without additional priors. Since we do not add priors during iterative training, there exists an upper bound for Self-NeRF. To determine whether Self-NeRF has reached the upper bound, we sample some unseen views as a validation dataset. We consider Self-NeRF to have converged when the performance on the unseen views no longer improves.
Figure 6: Qualitative comparison on the NeRF synthetic dataset [29] (Rows 1-2) in 4-view settings and the LLFF dataset [28] (Rows 3-4) in 2-view settings (columns: NeRF [29], DietNeRF [18], InfoNeRF [20], Ours, Ground truth).
Experimental settings Baseline. We compare our method with the baseline NeRF [29] and two state-of-the-art models for few-shot NeRF, DietNeRF [18] and InfoNeRF [20]. Dataset. We demonstrate our approach on the NeRF synthetic dataset [29] and the LLFF dataset [28]. In the NeRF synthetic dataset, we randomly sample 4 viewpoints as few-shot input for each scene. We run this experiment three times and compute the average scores on 200 testing images for evaluation. For the LLFF dataset, we take one out of every eight images from the collection of images for evaluation and randomly select 2 views from the remaining images as training images. Evaluation Metrics. We measure the rendered image quality with several quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and learned perceptual image patch similarity (LPIPS). PSNR and SSIM are popular metrics for evaluating reconstruction quality, while LPIPS can reflect the perception of humans more precisely. Implementation details. We implement Self-NeRF with PyTorch [34] while other approaches run on their own official code. The Adam optimizer [21] is used with an initial learning rate of 0.0005 for optimization. We train the other methods for 50K steps (about 15 hours) with their default settings and ensure they have converged. For a fair comparison, we train Self-NeRF in 2 iterations within 15 hours. All these experiments are conducted on a Tesla V100 GPU.
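The standard image-quality metrics listed under Evaluation Metrics can be computed as in the sketch below (hypothetical images in [0, 1]; LPIPS additionally needs a learned perceptual model such as the lpips package). This is illustrative only, not the paper's evaluation script.

# PSNR and SSIM on a pair of hypothetical rendered / ground-truth images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
gt = rng.random((64, 64, 3))
pred = np.clip(gt + rng.normal(0.0, 0.05, gt.shape), 0.0, 1.0)

psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")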
Table 1: Quantitative evaluation of our method against NeRF [29], DietNeRF [18] and InfoNeRF [20] on the NeRF synthetic dataset [29] and LLFF dataset [28]. Please refer to the supplemental materials for the detailed experimental results from individual scenes. Due to uncertainty, the blurry renderings produced by NeRF can outperform the visually appealing but incorrect renderings of DietNeRF on average error metrics like PSNR. However, the quantitative results of DietNeRF are still comparable to those of NeRF. InfoNeRF significantly reduces artifacts in the rendering, resulting in improved quantitative results. Self-NeRF outperforms all compared methods in terms of all the evaluation metrics. Qualitative comparisons. Fig. 6 depicts the renderings synthesized by different methods. Compared to other methods, Self-NeRF achieves more realistic rendering in the novel views. Specifically, NeRF struggles to accurately reconstruct the scene and often produces blurry and cloudy artifacts. DietNeRF attempts to improve upon this by incorporating priors into the model, resulting in more reasonable and appealing renderings in some cases, such as the front of a ship. However, their use of low-dimensional CLIP [37] embeddings hinders the model's ability to learn high-frequency details. InfoNeRF yields better results with fewer artifacts by imposing sparsity on the scene. Despite this, their renderings of novel views still exhibit flaws and lack clear details, which makes the results look unrealistic at first glance. By contrast, Self-NeRF preserves the best geometry while generating realistic details. For example, our method successfully reconstructs the front of the ship and the texture of the table. Ablation study We validate our design choices by performing an ablation study on two scenes from the NeRF synthetic dataset. Design for networks. We study the effectiveness of our uncertainty-aware NeRF by replacing it with mip-NeRF [2] and NeRF-W [27]. Quantitative and qualitative results are given in Tab. 2 and Fig. 7, respectively.
Figure 7: Validating the effectiveness of our uncertainty-aware NeRF in Self-NeRF.
We observe that mip-NeRF utilizes cone tracing to capture fine details. However, the absence of the uncertainty field destabilizes mip-NeRF, resulting in divergent behaviours such as the wire in the mic scene. On the contrary, NeRF-W is an uncertainty-aware model without cone tracing. Thus it alleviates the degradation due to uncertainty but produces blurry content. Our model combines the advantages of both and achieves favorable overall performance. Choice of pseudo-views. We conduct an ablation study on the different categories of pseudo-views. We report quantitative results in Tab. 3 and show qualitative results in Fig. 8. As discussed in Sec. 3.3, the predicted pseudo-views are incapable of accurately reconstructing colors for certain regions, resulting in color deviations when exclusively training with them. In contrast, pixels in the warped pseudo-views are more reliable yet may lack fine-grained details. From the results, it is evident that training with these pseudo-views simultaneously leads to optimal results. Analysis Robustness to the number of views. We report the variation curve of quantitative results under different numbers of training views for the Lego scene in Fig. 9. Our method exhibits gradual performance improvement with an increasing number of training views.
While Self-NeRF outperforms NeRF in all metrics, our method's advantage reaches a saturation point when using 16 training images. This is partly because the unseen regions decrease as the number of training views increases, thereby limiting the improvement from pseudo-views. Improvement in iterative training. We report the quantitative results of various iterations in Fig. 10 and depict the outputs of Self-NeRF in Fig. 11. Self-NeRF has converged, since the LPIPS deteriorates in the 9th iteration. Note that our uncertainty-aware NeRF is capable of detecting uncertain pixels and leverages pseudo-views to their full potential. It gradually reduces the artifacts and effectively mitigates color shifts as the number of iterations increases. Hence, our iterative process leads to continuous improvement of the overall quality of predictions.
Figure 11: Visualization of the prediction of Self-NeRF (top row), the warped pseudo-views (middle row) and the corresponding uncertainty map (bottom row) in different iterations (columns: 1, 2 and 5 iterations). With the increasing number of iterations, Self-NeRF corrects the color shifts on the wheel and reduces artifacts. The improved performance results in better pseudo-views with less uncertainty, which in turn benefits the training.
Conclusion In this paper, we propose Self-NeRF to synthesize novel views given few-shot images. Inspired by self-training, Self-NeRF iteratively generates pseudo-views and trains the model with seen views and pseudo-views jointly. In each iteration, we generate two categories of pseudo-views: predicted pseudo-views from the previous iteration and warped pseudo-views, which are reprojected from seen views using depth-based forward warping. These pseudo-views are shown to have a stabilizing effect and alleviate the color shifts. To avoid the negative impact of uncertain pixels in pseudo-views, we propose an uncertainty-aware NeRF with specialized embeddings. We also utilize techniques such as cone entropy regularization to reconstruct fine details and facilitate optimization. Our experiments further demonstrate our method's competitiveness compared with state-of-the-art models for few-shot novel view synthesis.
2023-03-13T01:15:37.715Z
2023-03-10T00:00:00.000
{ "year": 2023, "sha1": "b18e1fc66f5733defb2d04f61461751e837a6caf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b18e1fc66f5733defb2d04f61461751e837a6caf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
4589695
pes2o/s2orc
v3-fos-license
Integrated Molecular Meta-Analysis of 1,000 Pediatric High-Grade and Diffuse Intrinsic Pontine Glioma Summary We collated data from 157 unpublished cases of pediatric high-grade glioma and diffuse intrinsic pontine glioma and 20 publicly available datasets in an integrated analysis of >1,000 cases. We identified co-segregating mutations in histone-mutant subgroups including loss of FBXW7 in H3.3G34R/V, TOP3A rearrangements in H3.3K27M, and BCOR mutations in H3.1K27M. Histone wild-type subgroups are refined by the presence of key oncogenic events or methylation profiles more closely resembling lower-grade tumors. Genomic aberrations increase with age, highlighting the infant population as biologically and clinically distinct. Uncommon pathway dysregulation is seen in small subsets of tumors, further defining the molecular diversity of the disease, opening up avenues for biological study and providing a basis for functionally defined future treatment stratification. Correspondence: chris.jones@icr.ac.uk. In Brief: Mackay et al. perform an integrated analysis of >1,000 cases of pediatric high-grade glioma and diffuse intrinsic pontine glioma. They identify co-segregating mutations in histone-mutant subgroups and show that histone wild-type subgroups are molecularly more similar to lower-grade tumors. INTRODUCTION Pediatric glioblastoma (pGBM) and diffuse intrinsic pontine glioma (DIPG) are high-grade glial tumors of children with a median overall survival of 9-15 months, a figure that has remained unmoved for decades. Although relatively rare in this age group (1.78 per 100,000 population), taken together, gliomas are nonetheless the most common malignant brain tumors in children, and represent the greatest cause of cancer-related deaths under the age of 19 years (Ostrom et al., 2015). Unlike histologically similar lesions in adults, which tend to be restricted to the cerebral hemispheres, diffuse high-grade gliomas in childhood (pHGG) can occur throughout the CNS, with around half occurring in midline locations, in particular the thalamus and the pons (Jones and Baker, 2014), where the lack of available surgical options confers an especially poor prognosis (Kramm et al., 2011). Numerous clinical trials of chemotherapeutics and targeted agents extrapolated from adult GBM studies have failed to show a survival benefit, and more rationally derived approaches based upon an understanding of the childhood diseases are urgently needed (Jones et al., 2016). It has become increasingly apparent that pHGG differ from their adult counterparts, with molecular profiling studies carried out over the last 6-7 years having incrementally identified key genetic and epigenetic differences in pHGG associated with distinct ages of onset, anatomical distribution, clinical outcome, and histopathological and radiological features (Jones and Baker, 2014; Sturm et al., 2014). In particular, the identification of unique recurrent mutations in genes encoding histones H3.3 and H3.1 (Wu et al., 2012) has demonstrated the distinctiveness of the pediatric disease, with the G34R/V and K27M variants appearing to represent different clinicopathological and biological subgroups. This has been recognized by the World Health Organization (WHO) classification of CNS tumors, with the latest version including the novel entity, diffuse midline glioma with H3K27 mutation (Louis et al., 2016).
Further refinements incorporating other clearly delineated subsets of the disease in future iterations appear likely and might prove clinically useful. In addition to these uniquely defining histone mutations, detailed molecular profiling has served to identify numerous targets for therapeutic interventions. These include known oncogenes in adult glioma and other tumors with an elevated frequency in the childhood setting (e.g., PDGFRA) (Paugh et al., 2013; Puget et al., 2012) or certain rare histological variants (e.g., BRAF V600E) (Nicolaides et al., 2011; Schiffman et al., 2010), as well as others seemingly unique to DIPG (e.g., ACVR1) (Fontebasso et al., 2014; Taylor et al., 2014a; Wu et al., 2014). Future trials will need to exploit these targets, but also incorporate innovative designs that allow for selection of the patient populations within the wide spectrum of disease who are most likely to benefit from any novel agent (Jones et al., 2016). Despite these advances, driven by the efforts of several international collaborative groups to collect and profile these rare tumors, individual publications remain necessarily modestly sized, involving a range of different platforms and analytical techniques. This leaves certain subgroups poorly represented across studies, widely differing individual gene frequencies in different cohorts, and an inability to draw robust conclusions across the whole spectrum of the disease. We have gathered together publicly available data, supplemented with 157 new cases, in order to provide a statistically robust, manually annotated resource cohort of >1,000 such tumors for interrogation. BRAF V600E status was available for 535 cases, with mutant cases (n = 32, 6.0%) present only in midline and hemispheric locations, and conferring a significantly improved prognosis (2 year survival 67%, p < 0.0001, log rank test) (Figures S1G-S1I). There was additional annotation for IDH1 R132 mutation status in 640 cases (n = 40, 6.25%), representing a forebrain-restricted, significantly older group of patients (median 17.0 years, p < 0.0001, t test) with longer overall survival (2 year survival 59%, p < 0.0001, log rank test) (Figures S1J-S1L).
Figure legend: (A) Anatomical location of all high-grade glioma cases included in this study, taken from original publications (n = 1,033). Left, sagittal section showing internal structures; right, external view highlighting cerebral lobes. Hemispheric, dark red; non-brainstem midline structures, red; pons, pink. Radius of circle is proportional to the number of cases. Lighter shaded circles represent a non-specific designation of hemispheric, midline, or brainstem. (B) Boxplot showing age at diagnosis of included cases, separated by anatomical location (n = 1,011). The thick line within the box is the median, the lower and upper limits of the boxes represent the first and third quartiles, and the whiskers 1.5× the interquartile range. ***Adjusted p < 0.0001 for all pairwise comparisons, t test. (C) Kaplan-Meier plot of overall survival of cases separated by anatomical location, p value calculated by the log rank test (n = 811). (D) Anatomical location of all cases separated by histone mutation (top, n = 441) and histone WT (bottom, n = 314). Left, sagittal section showing internal structures; right, external view highlighting cerebral lobes. Blue, H3.3G34R/V; green, H3.3K27M; dark green, H3.1K27M. Radius of circle is proportional to the number of cases. Lighter shaded circles represent a non-specific designation of hemispheric, midline, or brainstem. (E) Boxplot showing age at diagnosis of included cases, separated by histone mutation (n = 753). The thick line within the box is the median, the lower and upper limits of the boxes represent the first and third quartiles, and the whiskers 1.5× the interquartile range. ***Adjusted p < 0.0001 for all pairwise comparisons, t test. (F) Kaplan-Meier plot of overall survival of cases separated by histone mutation, p value calculated by the log rank test (n = 693). See also Figure S1 and Table S1.
DNA Copy Number High-quality DNA copy-number profiles were obtained from 834 unique cases of pHGG/DIPG, taken from BAC and oligonucleotide arrays (n = 112), SNP arrays (n = 128), 450k methylation arrays (n = 428), and whole-genome or exome sequencing (n = 325) (Table S3). Clustering on the basis of segmented log2 ratios highlighted some of the defining chromosomal features of the pediatric disease, including recurrent gains of chromosome 1q, and losses of chromosomes 13q and 14q (Figure 3A). There is also a significant proportion of tumors (n = 147, 17.6%) with few if any DNA copy-number changes, with no bias toward lower-resolution platforms (p = 0.134, Fisher's exact test), and the presence of other molecular markers obviating concerns of substantial normal tissue contamination. These cases were found throughout the CNS, were younger at diagnosis (7.0 versus 10.3 years, p < 0.0001, t test) and had a longer overall survival (median 18.0 versus 14.0 months, p = 0.0107, log rank test) (Figure S3A). Common large-scale chromosomal alterations with prognostic significance included loss of 17p (n = 156), which targets TP53 at 17p13.1 and confers a shorter overall survival in tumors of all locations and all subgroups (Figure S3B), and gains of 9q (n = 108), more broadly encompassing a region of structural rearrangement on 9q34 in medulloblastoma (Northcott et al., 2014), and correlating with shorter overall survival in multiple pHGG/DIPG subgroups (Figure S3C). Subgroup-Specific Alterations When IDH1-mutant tumors were removed and the cohort restricted to those cases for which histone H3 status was available, we were able to investigate subgroup-specific DNA copy-number changes in 705 pHGG/DIPG (Figure 4A). Applying GISTIC within these case sets revealed specific focal events enriched within individual subgroups, including AKT1 amplifications in H3.3G34R/V, MYC and CCND2 amplification in H3.3K27M, and MYCN/ID2, MDM4/PIK3C2B, and KRAS amplification in H3 WT (Figure 4B) (Table S4). These latter events were generally restricted to hemispheric tumors, while MYCN/ID2 were enriched in H3 WT DIPG (Figure S4A). H3.1K27M tumors generally lacked amplifications/deletions, but were instead characterized by frequent gains of 1q and the whole of chromosome 2, and the loss of 16q (Figure 4C).
Figure legend (fragment): ... LGG-like. Radius of circle is proportional to the number of cases. Lighter shaded circles represent a non-specific designation of hemispheric, midline, or brainstem. (C) Boxplot showing age at diagnosis of included cases, separated by simplified methylation subclass (n = 440). The thick line within the box is the median, the lower and upper limits of the boxes represent the first and third quartiles, and the whiskers 1.5× the interquartile range. ***Adjusted p < 0.0001 for all H3 G34R/V pairwise comparisons, t test; **adjusted p < 0.01 for LGG-like versus WT, t test. (D) Kaplan-Meier plot of overall survival of cases separated by simplified methylation subclass, p value calculated by the log rank test (n = 307). See also Figure S2 and Table S2.
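The survival comparisons reported in this and the preceding section (for example, shorter overall survival with 17p loss) are of the Kaplan-Meier/log-rank type; the sketch below illustrates such a comparison in Python using the lifelines package, on randomly generated survival times rather than the study cohort.

# Kaplan-Meier estimate and log-rank test on hypothetical survival times.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
os_17p_loss = rng.exponential(12.0, 80)    # months, hypothetical
os_no_loss = rng.exponential(18.0, 80)
events = np.ones(80)                        # 1 = death observed (no censoring here)

kmf = KaplanMeierFitter().fit(os_17p_loss, events, label="17p loss")
print(kmf.median_survival_time_)

result = logrank_test(os_17p_loss, os_no_loss,
                      event_observed_A=events, event_observed_B=events)
print(result.p_value)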
See also Figure S2 and Table S2. 2, and the loss of 16q ( Figure 4C). PXA-like tumors had frequent CDKN2A/B deletions and a unique loss at 1q, associated with shorter overall survival within this group ( Figure S4B). Whole-arm losses were also enriched in H3.3G34R/V tumors, specifically 3q, 4q, 5q, and 18q, where smallest regions of overlap were in some instances able to narrow the common region to a handful of candidate genes ( Figure S4C). On chromosome 4q this appeared to target FBXW7 at 4q31.3, also aligning with the GISTIC data ( Figure 5A). Across three independent platforms, gene expression over the whole arm was significantly lower when 4q was lost (Agilent, p = 0.00231; Affymetrix, p = 0.000102; RNA sequencing (RNA-seq), p = 0.0398; Wilcoxon signed-rank test) ( Figure S5A). (Table S5). There were also four patients with three different somatic coding mutations identified (below), two truncating and one missense, three of which were in hemispheric tumors, and two with H3F3A G34R ( Figure 5B). In cases with 4q loss, median FBXW7 gene expression was reduced compared with those with normal copy number (Agilent, p = 0.029; Affymetrix, p = 0.015; RNA-seq, p = 0.4; Mann-Whitney U test) ( Figure 5C). Within H3.3K27M tumors, we identified a recurrent amplification at 17p11.2 (n = 17; 170 kb to 11.96 Mb), across multiple platforms and significantly enriched in DIPGs, which appears to target TOP3A within these tumors ( Figure 5D). Where available (n = 6) ( Figure S5B), whole-genome sequencing data reveals this occurs via complex intra-and inter-chromosomal rearrangements ( Figure 5E) leading to increased mRNA expression of TOP3A in amplified versus non-amplified cases in Agilent (n = 1), and Affymetrix and RNA-seq (p = 0.011 and p = 0.016, respectively, Mann-Whitney U test) data ( Figure 5F) ( Table S5). In an integrated dataset, TOP3A was the most differentially expressed gene in the region in amplified cases (adjusted p = 0.00856 Mann-Whitney U test). We further identified a single somatic missense mutation (C658Y) in an additional case of DIPG, and, taken together, TOP3A alterations were mutually exclusive with ATRX deletion/mutations found in H3.3K27M DIPG (0/13). Integrated Pathway Analysis Many of the rare variants we identified ( Figure S6E) were found in genes associated with intracellular signaling pathways and processes more commonly targeted by high-frequency events, often in a mutually exclusive manner. In total, 297/326 (91.1%) of cases were found to harbor genetic alterations in one or more of nine key biological processes ( Figure 7A). These included well-recognized pathways such as DNA repair (198/ 326, 60.7%), largely driven by TP53 mutations (n = 160), but also by common mutually exclusive (p < 0.0001, Fisher's exact test) activating truncating alterations in PPM1D (n = 18), as well as heterozygous mutations in a diverse set of genes including those involved in homologous recombination (ATM, BRCA2, BLM, ATR, PALB2, RAD50, and RAD51C) and numerous Fancomi anemia genes (BRIP1, FANCM, FANCA, and FANCG), among others ( Figure S7A). Although TP53 is almost always found in concert with H3.3G34R/V in the cerebral hemispheres, these additional DNA repair pathway mutations were enriched in Figure S3 and Table S3. H3.3K27M DIPG (36/68, 52.9%). Also co-segregating with H3.3G34R/V and TP53 is ATRX, although mutations/deletions of the latter gene are also frequently found in conjunction with H3.3K27M (28/54, 51.8%). 
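Several of the claims above rest on pairwise tests of mutual exclusivity or co-occurrence between binary alteration calls (e.g., TP53 versus PPM1D, TOP3A versus ATRX), assessed with Fisher's exact test. The following is a minimal Python sketch of that comparison (the original analysis used R); the gene names and the toy alteration vectors are illustrative only.

```python
import numpy as np
from scipy.stats import fisher_exact

def mutual_exclusivity(alt_a, alt_b):
    """2x2 Fisher's exact test on two boolean alteration vectors (one entry per case).

    An odds ratio well below 1 with a small p value is consistent with mutual
    exclusivity; an odds ratio above 1 suggests co-occurrence."""
    both = int(np.sum(alt_a & alt_b))
    a_only = int(np.sum(alt_a & ~alt_b))
    b_only = int(np.sum(~alt_a & alt_b))
    neither = int(np.sum(~alt_a & ~alt_b))
    return fisher_exact([[both, a_only], [b_only, neither]])

# Toy data engineered so the two genes are never altered together
rng = np.random.default_rng(0)
tp53 = rng.random(326) < 0.5
ppm1d = ~tp53 & (rng.random(326) < 0.1)
odds, p = mutual_exclusivity(tp53, ppm1d)
print(f"odds ratio = {odds:.2f}, p = {p:.2e}")
```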
ATRX accounts for a large proportion of the cases harboring mutations in genes coding for chromatin modifiers (54/118, 45.8%); however, there is a diverse set of readers, writers, and erasers also targeted at lower frequency, especially in DIPG, including the previously mentioned BCOR (n = 14) and ASXL1 (n = 6) in addition to SETD2 (n = 8), KDM6B (n = 6), SETD1B (n = 5), and ARID1B (n = 5), among many others (Figure S7B). Uniquely, the accumulated data uncovered a series of additional processes involved in maintenance of DNA replication, genome integrity, or transcriptional fidelity, targeted by infrequent but mutually exclusive alterations in pHGG and DIPG. We incorporated the integrated dataset into a pathway enrichment analysis (significant gene sets, false discovery rate [FDR] < 0.05, visualized as interaction networks by Cytoscape Enrichment Map) in order to gain additional insight into dysregulated biological processes. In addition to the subgroup-specific differential targeting of distinct nodes within common signaling pathways already described (e.g., RTK, PI3K/mTOR, and MAPK), additional dysregulated processes across the diversity of the disease were identified (Figure 7B). This revealed the perhaps not unexpected dysregulation of numerous developmental and CNS-associated gene sets (various immature organ systems, neuronal communication), but also previously unrecognized areas such as nuclear transport, cell migration, and the immune response (Table S7), which may provide further insight into disease biology as well as represent potential therapeutic strategies targeting key regulators of tumor phenotype. Indeed, neuronal communication with pGBM and DIPG cells is a recently demonstrated microenvironmental driver of pediatric glioma growth (Qin et al., 2017; Venkatesh et al., 2015).

Histone H3/IDH1 WT Subgroups
Finally, we wanted to explore those cases absent of any histone H3 or IDH1 mutations in more depth. Using a t-stochastic neighbor embedding (tSNE) projection of the 450k methylation data, we identified three distinct clusters of tumors separate from the G34, K27, and IDH1 groups (Figure 8A). Consensus clustering of the H3/IDH1 WT cases alone confirmed the presence of three robust subgroups (Figure 8B), which were also recapitulated by unsupervised hierarchical clustering of the 10,000 probe classifier subset (Figure 8C). These groups included a largely hemispheric set of tumors containing, but not restricted to, the PXA- and LGG-like subgroups (WT-A). These tumors were driven by BRAF V600E, NF1 mutations, or fusions in RTKs including MET, FGFR2, and NTRK2 and NTRK3 (Figure 8D). Although including many younger patients, the ages varied widely (Figure 8E). Regardless, this group had the best overall survival (median = 63 months, p < 0.0001 versus rest, log rank test) (Figure 8F), with the non-PXA/LGG-like tumors within this group themselves having an extended median survival time.

(Figure legend: boxplots of gene expression differences between FBXW7 lost/mutated cases and those with normal copy/WT across three independent expression platforms (median, quartiles, and 1.5× interquartile-range whiskers); segmented exon-level DNA copy-number heatmaps of the 17p11.2 amplification in predominantly H3.3K27M DIPG (n = 17), with a chromosome 17 ideogram, an enlarged genome browser view of the genes within the commonly targeted region, and clinicopathological and molecular annotations.)

Although there remain tumors without detectable genetic alterations, we are nonetheless able to assign clinically meaningful subgroups with plausible driver alterations to the vast majority of pediatric HGG/DIPG.

DISCUSSION
Integrated molecular profiling has revolutionized the study of diffusely infiltrating high-grade glial tumors in children, providing evidence for unique mechanisms of molecular pathogenesis reflecting their distinct developmental origins (Baker et al., 2015; Jones and Baker, 2014). Although they are relatively rare, the present study accumulates 1,067 unique cases, a number similar to the aggregated analysis of The Cancer Genome Atlas adult LGG/GBM cohorts (n = 1,122, with grade III included in the "lower-grade" series) (Ceccarelli et al., 2016). Although there are clearly the usual caveats with such retrospective analyses of inconsistently annotated and treated cases, the cohort appears to represent a clinically useful approximation of the diversity of the pHGG/DIPG population. In adults, the key distinction is between IDH1 mutant (G-CIMP/ATRX/TP53 or 1p19q co-deleted/TERT promoter mutated) and WT (classical, mesenchymal, PA-like) (Ceccarelli et al., 2016), whereas in the childhood setting IDH1 mutations were restricted to a small proportion (6.25%) of tumors mostly in adolescents (representing the tail end of an overwhelmingly adult disease), and harbored only rare examples of the common alterations seen in WT adult GBM (e.g., 4.9% EGFR mutation/amplification). Instead, most prominent among the differences between pediatric and adult studies is the frequency of hotspot mutations in genes encoding histone H3 variants: 2/820 (0.2%) in adults (Ceccarelli et al., 2016) versus 449/893 (50.3%) in the present pHGG/DIPG series. The importance of recurrent H3 mutations in the childhood setting has become increasingly clear since their unexpected discovery in 2012 (Wu et al., 2012), with clear clinicopathological differences associated with distinct variants (Jones and Baker, 2014; Jones et al., 2016; Sturm et al., 2014), and fundamental insights into mechanisms of epigenetically linked tumorigenesis (Bender et al., 2013; Bjerke et al., 2013; Chan et al., 2013; Funato et al., 2014). Despite this, precisely how we can target these mutations clinically remains elusive (Hennika et al., 2017). Data from such a large series of tumors demonstrate the robustness of the histone-defined subgroups in terms of anatomical location, age of incidence, clinical outcome, methylation and gene expression profiles, copy-number changes, co-segregating somatic mutations, and pathway dysregulation. As most of the non-histone molecular alterations previously reported in pHGG/DIPG have been relatively infrequent, it is only through this accumulated dataset that we have been able to uncover subgroup-specific genes/processes that may play a role as diagnostic, prognostic, or predictive markers or drug targets in these diseases. H3.3G34R/V-mutant tumors are restricted to the cerebral hemispheres and co-segregate with ATRX and TP53 mutations; they are also the only pediatric subgroup to harbor frequent MGMT promoter methylation.
Copy-number profiling of 63 cases highlighted a significant enrichment of chromosomal arm losses at 3q, 4q, 5q, and 18q, further refined by smallest region of overlap and GISTIC analyses. At 4q31.3, this identified FBXW7 as a candidate gene target of the loss. FBXW7 encodes a member of the F box protein family and is frequently deleted/mutated in cancer, supporting its tumor-suppressive function (Davis et al., 2014); notably in relation to H3.3G34R/V it has been reported to play a role in MYC/MYCN stabilization through its action as a component of the SCF-like ubiquitin ligase complex that targets MYC/MYCN for proteasomal degradation (Welcker et al., 2004; Yada et al., 2004). With MYCN upregulated in H3.3G34R/V tumors through differential H3K36me3 binding (Bjerke et al., 2013), this observation adds to the mechanisms by which Myc proteins exert their influence in this subgroup, and provides further rationale for the observed effects of disrupting these interactions, such as with Aurora kinase A inhibitors, which target the direct interaction between the catalytic domain of Aurora A and a site flanking Myc Box I that also binds SCF/FbxW7 (Richards et al., 2016). H3.3K27M tumors are found in two-thirds of DIPG and non-brainstem midline pHGG alike, where they are associated with a shorter overall survival in both locations, as well as in the small number of cases reported in the cortex. Although presumably reflecting a common or overlapping origin, the pattern of co-segregating mutations differs, e.g., PDGFRA alterations predominating in the pons, and FGFR1 variants being largely restricted to the thalamus. Our analysis of more than 300 cases further identifies differential amplification of CCND2 (DIPG) and CDK4 (non-brainstem midline), and, most strikingly, an amplification at 17p11.2 involving TOP3A in H3.3K27M DIPG. This complex rearrangement often involves loss of the more distal part of 17p involving TP53, along with intra- or inter-chromosomal translocations to deliver an increase in TOP3A copy number and gene expression. TOP3A encodes DNA topoisomerase III alpha, which forms a complex with BLM (Wu et al., 2000), has an important role in homologous recombination (Yang et al., 2010), and has been implicated in maintenance of the ALT phenotype (Temime-Smaali et al., 2009). Notably, TOP3A amplification/mutation was found to be mutually exclusive with ATRX mutation in H3.3K27M DIPG, with depletion by small interfering RNA reducing ALT cell survival (Temime-Smaali et al., 2008), and therefore represents a potential therapeutic target in this subgroup.

(Figure legend: sequencing coverage and log2 ratio plots for chromosomes 7, 17, and 20 in two cases, showing complex intra- or inter-chromosomal rearrangements leading to specific copy-number amplification of TOP3A; boxplots of gene expression differences between TOP3A-amplified cases and those with normal copy across three independent expression platforms (median, quartiles, and 1.5× interquartile-range whiskers); see also Figures S5 and S6 and Tables S5 and S6. A further panel shows the pathway enrichment analysis of pHGG/DIPG subgroups (FDR q < 0.01), with nodes representing enriched gene sets grouped and annotated by similarity, node size proportional to gene set size, and the network map simplified by manual curation to remove general and uninformative sub-networks; see also Figures S7 and S8 and Tables S7 and S8.)
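The expression comparisons quoted above (FBXW7 in 4q-lost versus copy-neutral cases, TOP3A in amplified versus non-amplified cases) are two-group Mann-Whitney U tests on centred expression values. A minimal Python sketch of that comparison is shown below (the original analysis used R); the expression values are synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_expression(expr, altered_mask):
    """Two-sided Mann-Whitney U test of gene expression between cases with and
    without a copy-number alteration, returning group medians and the p value."""
    altered = expr[altered_mask]
    normal = expr[~altered_mask]
    _, p = mannwhitneyu(altered, normal, alternative="two-sided")
    return float(np.median(altered)), float(np.median(normal)), p

# Toy centred log2 expression: 20 "4q-lost" cases with reduced FBXW7 expression
rng = np.random.default_rng(1)
expr = np.concatenate([rng.normal(-0.6, 0.5, 20), rng.normal(0.0, 0.5, 80)])
lost_mask = np.arange(expr.size) < 20
print(compare_expression(expr, lost_mask))
```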
H3.1K27M tumors, by contrast, are restricted to the pons, patients are younger and with a slightly longer survival, and the tumors are largely defined at the copy-number level by whole chromosomal arm gains and losses. They have the well-recognized association with ACVR1 mutation (Taylor et al., 2014b); however, we also identify an enrichment of downstream PI3K pathway mutations (PIK3CA and PIK3R1) in comparison with the largely upstream RTK alterations present in H3.3K27M DIPGs, important in designing stratified trials and combinatorial therapies. Further association with mutations of the BCL6 repressor gene BCOR, commonly altered in medulloblastomas, neuroepithelial tumors, and sarcomas, highlights a further avenue for interventional study through its regulation of the SHH pathway (Tiberi et al., 2014). In H3/IDH1 WT cases, methylation profiling refines the heterogeneous collection of tumors, particularly identifying two predominantly hemispheric intermediate-risk subgroups that classify alongside other entities (PXA- and LGG-like) in a larger series of better-outcome tumors (WT-A). These had already been strongly linked with dysregulation of the MAPK pathway (BRAF V600E) along with CDKN2A/CDKN2B deletion (Nicolaides et al., 2011). However, with molecular markers such as losses at 1q and 17p appearing to confer a worse outcome, there may be more than one subgroup within this entity, and a co-clustering group of H3/IDH1 WT tumors appeared distinctly driven by somatic NF1 mutation. The LGG-like tumors generally occur in very young patients, where the appearance of few genetic alterations and a significantly better prognosis is shared by the majority of infant HGG. Gene fusion events, including those targeting NTRK1-NTRK3, are common in this age range. Notably this enhanced survival is restricted to patients diagnosed under 12 months of age, and is not recapitulated in the 1-3 year age group, although this is the common clinical definition of "infants" in many centers. Excluding these morphologically high-grade but biologically and clinically low-grade tumors, the remaining H3/IDH1 WT cases can be further split into two poor-outcome groups driven by EGFR/MYCN/CDK6 (WT-B) or PDGFRA/MET (WT-C) amplifications. These groups overlap with other methylation-based classification groups (PDGFRA versus EGFR versus MYCN (Korshunov et al., 2017); "GBM_pedRTK" versus "GBM_MYCN" versus "HGG_MID" (molecularneuropathology.org/mnp)); however, they are uniquely defined here spanning anatomical locations and integrated with sequencing data. Further exploration of these heterogeneous subgroups in order to refine integrated molecular diagnostics to prioritize patient subpopulations for stratified treatment remains a priority. The remarkable biological diversity spanning pediatric malignant glioma is finally demonstrated by the <5% of tumors with a hypermutator phenotype, some of the greatest mutational burdens in all human cancer, and candidates for immune checkpoint inhibitors (Bouffet et al., 2016). Previously unrecognized processes altered in small subsets of tumors identified through this meta-analysis, such as the splicing machinery, miRNA regulation, and the WNT pathway, offer further areas for exploration.
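Flagging hypermutator candidates of the kind mentioned above comes down to a simple mutations-per-megabase calculation. The short Python sketch below is illustrative only: neither the 30 Mb callable-exome footprint nor the 10 mutations/Mb cutoff is taken from the text above; both are common working assumptions in the literature and are labelled as such in the code.

```python
def mutations_per_mb(n_somatic_coding, callable_mb=30.0):
    """Crude mutational burden: somatic coding mutations per callable megabase.
    callable_mb ~ 30 Mb is a typical exome footprint (assumption, not from the paper)."""
    return n_somatic_coding / callable_mb

def is_hypermutator(n_somatic_coding, threshold=10.0):
    """Flag hypermutator candidates; the 10 mut/Mb cutoff is illustrative only."""
    return mutations_per_mb(n_somatic_coding) > threshold

print(is_hypermutator(45))    # ~1.5 mut/Mb -> False
print(is_hypermutator(1200))  # 40 mut/Mb   -> True
```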
The thorough cataloging of dysregulated molecular pathways across the whole spectrum of pediatric diffusely infiltrating gliomas in the present study provides the basis for novel therapeutic development. STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: log 2 ratios of median coverage in tumor and normal sequences were processed with in-house scripts. To combine copy number platforms, median log 2 ratios were recovered within all known genes and exons and normalized such that the median displacement of X in male:female comparisons was rescaled to an average of -1. Exon-level median log ratios and smoothed values were then combined across platforms and thresholded to call gains and losses above and below log 2 ratios of ±0.3 with a contig of 1MB and amplifications and deletions above and below a threshold of ±1.5 with a minimum of 3 contiguous exons. CBS binary segmentation from the DNAcopy package was applied to each dataset to provide smoothed log 2 ratios. Genes within common CNVs in normal individuals were excluded from further analysis with reference to the CNV map of the human genome. DNA copy number data was clustered based upon categorical states (deep deletion, loss, no change, gain and amplification) based upon the Euclidean distance method with a Ward algorithm. Gains and losses in chromosomal arms were called based upon contiguous regions covering more than one third of the exonic regions within each arm. For regions of focal copy number change cases carrying copy number alterations were ranked according to the length of the largest CNA in each case and are plotted as heatmaps aligned to precise genomic coordinates alongside genomic tracks based upon hg19 made with the R package gviz. Minimal regions of copy number alteration were assigned based on the frequency of categorical states within each region. Focal amplifications and deletions were identified in CBS segmented data using the GISTIC algorithm in MATLAB on the exon-level data, with thresholds for gain and loss of 0.3 and gene-level filters to remove regions of common copy number variation in normal individuals based on the CNV map of the human genome. DNA Methylation Methylation data from the Illumina Infinium HumanMethylation450 BeadChip was preprocessed using the minfi package in R. DNA copy number was recovered from combined intensities using the conumee package with reference to methylation profiles from normal individuals provided in the CopyNumber450kData package. We have used the Heidelberg brain tumor classifier (molecularneuropathology.org) to assign subtype scores for each tumor compared to 91 different brain tumor entities using a training set built from more than 2000 tumors implemented in the MNP R package. Simplified methylation subgroup assignments were then made to incorporate cases carrying G34R/V or K27M mutations in H3 histones, IDH1 mutation at R132, low grade glioma-like profiles (predominantly diffuse infantile ganglioglioma and pilocytic astrocytoma) and those similar to pleomorphic xanthoastrocytoma (PXA). Wild-type HGG encompassed many other methylation subgroups and were simply assigned by exclusion with the groups above. Clustering of beta values from methylation arrays was performed using the 10K probeset from the Heidelberg classifier based upon Euclidean distance with a ward algorithm. Methylation heatmaps show only the most variable probes of the classifier between simplified methylation subgroups. 
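The simplified methylation subgrouping just described amounts to a small set of priority rules layered on top of the classifier output and the mutation calls: H3 hotspot mutations take precedence, then IDH1 R132, then LGG-like and PXA-like classifier matches, with the remainder assigned WT by exclusion. A minimal Python sketch of that logic follows (the original pipeline used the MNP R package; the function and the input labels here are illustrative assumptions, not the authors' code).

```python
def simplified_subgroup(h3_status, idh1_r132, methylation_class):
    """Collapse classifier output and mutation calls into the simplified
    methylation subgroups described above.

    h3_status: "G34R/V", "K27M", or None; idh1_r132: bool;
    methylation_class: best-matching entity name from the brain tumor classifier
    (labels below are illustrative)."""
    if h3_status == "G34R/V":
        return "H3.3 G34R/V"
    if h3_status == "K27M":
        return "H3 K27M"
    if idh1_r132:
        return "IDH1"
    if methylation_class in ("diffuse infantile ganglioglioma", "pilocytic astrocytoma"):
        return "LGG-like"
    if methylation_class == "pleomorphic xanthoastrocytoma":
        return "PXA-like"
    return "WT"  # wild-type HGG, assigned by exclusion

print(simplified_subgroup(None, False, "pleomorphic xanthoastrocytoma"))  # PXA-like
print(simplified_subgroup("K27M", False, "other"))                        # H3 K27M
```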
Overall methylation was calculated as the mean of the 10K classifier probeset for each subgroup and MGMT promoter methylation was calculated based upon the MGMT-SPT27 model implemented in the MNP package. t-stochastic neighbor embedding (tSNE) was used to project the methylation clustering in three dimensions using the Rtsne package. A Pearson correlation matrix of the 10K probeset was subjected to tSNE using a theta value of zero over 10,000 iterations as previously described and plotted using the rgl package. mRNA Expression Gene expression data was obtained from Agilent WG2.5, Affymetrix U133Plus2.0 or RNA sequencing platforms. Gene expression was processed from two color Agilent microarrays using the R packages marray and limma and from single channel Affymetrix arrays using the affy package. Differential expression was assigned for microarray data using the limma package based upon a false discovery rate of 5%. RNASeq was aligned with Bowtie2 and TopHat and summarized as gene level fragments per kilobase per million reads sequenced using BEDTools and cufflinks/cuffnorm. Following rlog transformation and normalization, differential expression was assigned with DESeq.2. Known Ensembl genes were further filtered to remove low abundance genes in all three datasets whose maximal expression was within the lowest 20% of all expression values based upon probe intensities or read depth. Replicate probes/features for each gene were removed by selecting those with the greatest median absolute deviation (MAD) in each dataset. Following centering within each dataset, log-transformed expression measures were combined and further normalized using pairwise loess normalization. Gene Set Enrichment Analysis was performed using the GSEA java application based upon pairwise comparisons of the major subgroups in the merged dataset. Heatmaps of gene expression across chromosomal arms were made using centered expression values rescaled across each chromosomal arm based upon the median absolute deviation of each probe. Differential expression analysis of TOP3A and FBXW7 was based on a Mann-Whitney U test of centred expression values between cases with and without losses and amplifications respectively in each case. Sequence Analysis Sequencing data was available as whole genome and/or whole exome (predominantly using Agilent's SureSelect whole exome capture sets v4 and v5) Short read sequences from whole exome or whole genome sequencing were aligned to the hg19 assembly of the human genome using bwa. Following duplicate removal with Picard tools variants were called using the Genome Analysis toolkit according to standard Best Practices (Broad) including local re-alignment around Indels, downsampling and variant calling with the Unified Genotyper. Variants were annotated with the variant Effect predictor v74 from Ensembl tools and ANNOVAR to include annotations for variant allele frequency in 1000 genomes dbSNP v132 and the ExAc database as well as functional annotation tools SIFT and Polyphen). Depth of coverage varied from 16-295x (median 88x), with the greatest variation unsurprisingly in the exome data (whole genome range 50-150x, median=85x). Somatic variants were identified in regions covered by at least 10 reads in normal and tumor sequences carrying at least 3 variant reads in the tumor and less than 2 in normal sequences. 
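The somatic read-support filter stated in the last sentence above (at least 10 reads of coverage in both tumor and normal, at least 3 variant-supporting reads in the tumor, and fewer than 2 in the normal) translates directly into a small predicate. A minimal Python sketch, with the thresholds taken from that description:

```python
def is_somatic(tumor_depth, tumor_alt, normal_depth, normal_alt,
               min_depth=10, min_tumor_alt=3, max_normal_alt=1):
    """Read-support filter for somatic calls as described above: >=10x coverage in
    both tumor and normal, >=3 variant reads in the tumor, and fewer than 2 variant
    reads in the normal (max_normal_alt=1)."""
    return (tumor_depth >= min_depth and normal_depth >= min_depth
            and tumor_alt >= min_tumor_alt and normal_alt <= max_normal_alt)

print(is_somatic(85, 12, 90, 0))  # True: well supported in the tumor only
print(is_somatic(85, 12, 90, 4))  # False: variant reads present in the normal
print(is_somatic(8, 3, 40, 0))    # False: insufficient tumor coverage
```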
Hotspot TERT promoter mutations C228T and C250T were incidentally captured by the various exome platforms as they are located only 114 and 146 bp upstream of the translation start site, and were called even if only covered by a few reads. Mutation signatures were ascertained by grouping somatic substitutions on the basis of their 3′ and 5′ bases into 96 possible trinucleotide categories.

Candidate Fusion Gene Nomination
Structural variants were called from whole genome data using Breakdancer (breakdancer.sourceforge.net), filtered to remove commonly multi-mapped regions, to identify somatic breakpoints separated by a minimum of 10 kbp and involving at least one Ensembl gene. Fusion transcripts were detected from RNA-seq data using chimerascan version 0.4.5a, filtered to remove common false positives. To minimize unverified false positives, reporting of nominated fusions was restricted to genes within the core functional pathways and processes identified through integrated DNA copy number and somatic variant calling.

Inferred Tumor Purity
We determined the somatic allele-specific copy number profiles using read depth from whole genome/exome sequencing, and used ASCAT (crick.ac.uk/peter-van-loo/software/ASCAT) to provide an estimate of the non-neoplastic cell contamination of the sample as well as the overall ploidy of the tumor. Values ranged from 36-100%, with a median of 83%.

Integrated Analysis of Driver Events
Somatic non-synonymous coding mutations were filtered to remove common passenger mutations, polymorphisms and false positives in exome sequencing. Data were integrated with focal DNA copy number calls by GISTIC to provide gene-level binary alteration calls, which were further selected for putative driver status on the basis of functional annotation. Oncoprint representations of integrated mutations, gene-level copy number alterations and fusion events were made using the online tool available at cBioPortal (cbioportal.org). For the most commonly mutated genes, mutations were mapped to the canonical transcript and plotted according to their predicted protein position using ProteinPaint (pecan.stjude.org). Integrated views of copy number alterations, structural variants and somatic mutations were made using CIRCOS (circos.ca), and rearrangements within TOP3A-amplified regions in whole-genome-sequenced cases were identified using Breakdancer and aligned with copy number breakpoints in R.

Pathway Analysis
Pathway assignments were made for all genes carrying copy number alterations, structural variations or somatic mutations based on pathways in the MSigDB molecular signatures databases (Broad), as well as Gene Ontologies for Biological Processes and Molecular Functions (Gene Ontology consortium) and canonical pathways from KEGG, NetPath and Reactome. Genes within known CNVs and common false positives in exomic sequencing were excluded with reference to large scale genome profiling studies (CNVmap, ExAc, BCBio). Pathway analysis of genes carrying mutations, gene fusions and copy number aberrations was based on the pathways defined by these combined databases and subjected to enrichment analysis using the EnrichmentMap module within Cytoscape.

Statistical Analysis
Statistical analysis was carried out using R 3.3.1 (www.r-project.org). Categorical comparisons of counts were carried out using Fisher's exact test; comparisons between groups of continuous variables employed Student's t-test, the Wilcoxon signed-rank test, ANOVA or the Mann-Whitney U test.
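The mutation-signature binning described earlier in this section (96 trinucleotide categories defined by the substituted base and its 5′ and 3′ neighbours) follows the usual pyrimidine-centred convention: substitutions at purine reference bases are mapped to the opposite strand, giving 6 substitution classes × 16 flanking contexts. A small Python sketch of that collapsing step, on made-up calls:

```python
from collections import Counter

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def trinucleotide_class(ref, alt, five_prime, three_prime):
    """Collapse a substitution and its flanking bases into one of the 96
    pyrimidine-centred trinucleotide categories (e.g. 'A[C>T]G')."""
    if ref in ("A", "G"):  # represent purine changes on the opposite strand
        ref, alt = COMPLEMENT[ref], COMPLEMENT[alt]
        five_prime, three_prime = COMPLEMENT[three_prime], COMPLEMENT[five_prime]
    return f"{five_prime}[{ref}>{alt}]{three_prime}"

# Illustrative (ref, alt, 5' base, 3' base) tuples for three somatic substitutions
calls = [("G", "A", "C", "T"), ("C", "T", "A", "G"), ("T", "G", "C", "C")]
print(Counter(trinucleotide_class(*c) for c in calls))
```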
Differences in survival were analysed by the Kaplan-Meier method and significance determined by the log-rank test. All tests were two-sided and a p value of less than 0.05 was considered significant. Multiple testing was accounted for using false discovery rate q values or the Bonferroni adjustment.

ADDITIONAL RESOURCES
Processed copy number profiles are hosted as a disease-specific project within the Progenetix framework for annotated genomic analyses (dipg.progenetix.org) (Cai et al., 2014), and represented in the arrayMap resource (arraymap.org) (Cai et al., 2012). Curated gene-level copy number and mutation data are provided as part of the pediatric-specific implementation of the cBioPortal genomic data visualisation portal (pedcbioportal.org). Newly generated raw data files are housed alongside published datasets and made available through the NIH-integrated Cavatica cloud platform (www.cavatica.org).
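The subgroup survival comparisons reported throughout (Kaplan-Meier estimates compared with log-rank tests) can be reproduced with standard tooling. A minimal Python sketch using the lifelines package is shown below; the original analysis used R, and the survival times here are synthetic, so the sketch only illustrates the procedure, not the reported numbers.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
# Illustrative overall-survival times (months) and event indicators for two subgroups
t_a = rng.exponential(12, 60); e_a = np.ones(60, dtype=bool)          # all events observed
t_b = rng.exponential(20, 60); e_b = rng.random(60) < 0.8             # some censoring

km = KaplanMeierFitter()
km.fit(t_a, event_observed=e_a, label="group A")
print("median OS, group A:", km.median_survival_time_)

result = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print(f"log-rank p = {result.p_value:.3g}")
```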
2018-04-03T04:02:10.907Z
2017-10-09T00:00:00.000
{ "year": 2017, "sha1": "a42072a9b4e60b5d6982c180d97bd6c7d196bcbe", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S1535610817303628/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "947b1ce0214ce424a55d55d48da2e024258c64f7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
252683831
pes2o/s2orc
v3-fos-license
GLOSTAR — Radio Source Catalog II: 28° < ℓ < 36° and |b| < 1°, VLA B-configuration

As part of the Global View on Star Formation (GLOSTAR) survey we have used the Karl G. Jansky Very Large Array (VLA) in its B-configuration to observe the part of the Galactic plane between longitudes of 28° and 36° and latitudes from −1° to +1° at the C-band (4–8 GHz). To reduce the contamination of extended sources that are not well recovered by our coverage of the (u, v)-plane we discarded short baselines that are sensitive to emission on angular scales > 4″. The resulting radio continuum images have an angular resolution of 1.″0 and a sensitivity of ∼60 μJy beam−1, making it the most sensitive radio survey covering a large area of the Galactic plane with this angular resolution. An automatic source extraction algorithm was used in combination with visual inspection to identify a total of 3325 radio sources. A total of 1457 radio sources are ≥ 7σ and comprise our highly reliable catalog; 72 of these are grouped as 22 fragmented sources, e.g., multiple components of an extended and resolved source. To explore the nature of the catalogued radio sources we searched for counterparts at millimeter and infrared wavelengths. Our classification attempts resulted in 93 H II region candidates, 104 radio stars, 64 planetary nebulae, while most of the remaining radio sources are suggested to be extragalactic sources. We investigated the spectral indices (α, with Sν ∝ ν^α) of radio sources classified as H II region candidates and found that many have negative values. This may imply that these radio sources represent young stellar objects that are members of the star clusters around the high-mass stars that excite the H II regions, but not these H II regions themselves. By comparing the peak flux densities from the GLOSTAR and CORNISH surveys we have identified 49 variable radio sources, most of them with an unknown nature. Additionally, we provide the list of 1866 radio sources detected within 5 to 7σ levels.

Introduction
The Global View on Star Formation (GLOSTAR) survey is presently the most sensitive radio survey (∼60 µJy beam−1) of the northern hemisphere of the Galactic plane at the C-band (4 to 8 GHz) (Medina et al. 2019). Taking full advantage of the capabilities of the Karl G. Jansky Very Large Array (VLA), a distinction of GLOSTAR compared to previous surveys is that it simultaneously observes radio continuum and spectral line emission. GLOSTAR is indeed complementary to the wealth of Galactic plane surveys at infrared and submillimeter wavelengths that address star formation in the Galaxy, some of which are described in subsection 3.7. The primary goal of the GLOSTAR survey is to localize signposts of massive star formation (MSF) activity (see Brunthaler et al. 2021, for a detailed overview of the survey). Towards this goal, in the continuum mode, the survey mainly observes compact, ultra- and hyper-compact H II regions, which trace different early phases of MSF activity (e.g., Medina et al. 2019; Nguyen et al. 2021). Thus, the GLOSTAR survey complements previous radio surveys by providing a powerful and comprehensive radio-wavelength survey of the ionized gas in the Galactic plane with an unprecedented sensitivity. However, the survey area is also populated with radio sources related to post-main sequence stars (e.g. Wolf-Rayet stars, pulsars, etc.), planetary nebulae, supernova remnants and extragalactic radio sources (Medina et al. 2019; Chakraborty et al. 2020; Dokara et al. 2021).
In spectral line mode, it traces the radio recombination lines from regions ionized by massive stars and the methanol maser line at 6.7 GHz (Ortiz-León et al. 2021;Nguyen et al. 2022), both of which are related to massive star formation. The formaldehyde absorption line at 4.8 GHz is also observed. It traces neutral molecular gas and its radial velocity information with respect to the local standard of rest (LSR) radial can help to solve distance ambiguities. The final GLOSTAR images will cover the Galactic plane between Galactic longitudes, , of −2 • and 60 • and latitudes, b, from −1 • to +1 • and the Cygnus X region. The final dataset will consist of low resolution images (∼ 20 ), using the VLA in the D configuration, and high resolution images (∼ 1. 0), using the B configuration. These VLA data sets could also be combined for optimal sensitivity of the intermediate spatial ranges, such images will be presented in future works. The VLA observations will be complemented with very low resolution (∼ 150 ) images of single dish observations from the Effelsberg radio telescope, to recover the most extended emission and solve the "missing short spacing" issue affecting interferometer-only images. The overview of the full GLOSTAR capabilities are described in detail by Brunthaler et al. (2021). Previously, we have reported the radio source catalog of the low resolution VLA images covering the area 28 • < < 36 • and −1 • < b < +1 • (Medina et al. 2019). Complementary infraredand sub-millimeter-wavelength data were examined towards the radio source positions with the goal to elucidate the nature of the radio sources. In this paper we report the compact (∼ 1. 0) radio sources from the same region using the high resolution observations obtained with the VLA in B-configuration. The interesting part of this new catalog is that we can identify the most compact, and probably youngest, (hypercompact) HII regions. Observations The Karl G. Jansky Very Large Array (VLA) of the National Radio Astronomy Observatory 1 was used in its B-configuration to observe the (4-8 GHz) continuum emission. We follow the same instrumental setups and calibration as for the D-configuration data. We refer the reader to the overview paper (Brunthaler et al. 1 The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. 2021) for a detailed description of the data management that we summarize in the following subsections. Observation Strategy The correlator setup consisted of two 1 GHz wide sub-bands, centered at 4.7 and 6.9 GHz. Each sub-band was divided into eight spectral windows of 128 MHz, and each spectral window comprising of 64 channels with a channel width of 2 MHz 2 . The chosen setup avoids strong persistent radio frequency interference (RFI) seen (most prominently) at 6.3 and 4.1 GHz and allows estimation of source spectral indices. For each epoch the total observing time was five hours. At the beginning of the observations, the amplitude/bandpass calibrator 3C 286 is observed for ten minutes. Then, the phase calibrator, J1804+0101, is observed for one minute, followed by pointings on target fields for eight minutes, after which the phase calibrator is observed for another minute. The observation cycle, consisting of phase calibrator-targets scans, is repeated over the full five hours. 
During this time, an area of 2 • × 1 • is covered with phase centers for 676 target fields in a semi-mosaic mode (see Brunthaler et al. 2021, for a detailed overview of the observation strategy). Each pointing was observed twice for 11 seconds which, after considering the slewing time, yields a total integration time of 15 seconds per field. The theoretical noise level in brightness (or peak flux density) from these observations is 90 µJy beam −1 per field and per sub-band. The noise is improved after combining the fields and both sub-bands by a factor of ∼ 2. We observed a total of eight epochs, or a total of 40 hours of telescope time, under project ID VLA/13A-334. The total covered area is 16 square degrees. The observations were taken during the period from 2013 September to 2014 January. (AIPS) (Greisen 2003). We have written calibration scripts that handle the GLOSTAR data edition and calibration . The calibration follows the standard procedures of editing and calibrating interferometric data. These include bandpass, amplitude and phase calibration. The calibrated data were imaged using the Obit task MFImage. The CLEANing process from MFImage divides the observed band into nine frequency bins, that are narrow enough to perform a spectral deconvolution, addressing the effects of variable spectral index and antenna pattern variations. In the end, we obtained an image representing the data for the full frequency band and images of each individual frequency bin. The images were obtained from the data that had projected baselines with distances in the (u, v)-plane larger than 50 kλ; i.e., discarding all radio emission on angular scales larger than ≈ 4. 13 at all observed frequencies. This choice rejects emission from poorly mapped extended structures that introduced artifacts in the images, with a minor impact in the overall sensitivity. The images were convolved with a circular beam 1. 0 of size, and with pixel size of 0. 25. The mosaics are constructed to obtain images of 35000 × 30000 pixels, containing the 1 • × 2 • angular area surveyed by epoch, following the schemes described by Brunthaler et al. (2021). The mean measured noise in the resulting images is 60 µJy beam −1 ; though it can be significantly higher in some areas in which extended emission was not properly recovered or around very bright radio sources that produce imperfectly cleaned sidelobe emission. Catalog construction In this section we discuss the procedure used for constructing the catalogues presented in this work. First, we have extracted the sources from the images, selected the sources that are real, identified candidate radio sources, and discarded image artifacts. We have investigated the astrometry, flux density (S ν ), spectral indices (α; S ν ∝ ν α ), and have searched for counterparts at other wavelengths. Based on the counterpart information, we attempted a classification of the radio sources. The final catalog is presented in Table 1, with the reliable radio sources, their properties and the source classification. Source extraction The source extraction was performed following the procedures described by Medina et al. (2019), and we refer the reader to that work for the details, while here we will give a brief summary. Using the SExtractor (Bertin & Arnouts 1996) tool from the Graph-ical Astronomy and Image Analysis Tool package (GAIA 3 ), we first create a noise image. Then, we have used the BLOBCAT package (Hales et al. 2012) to extract the sources from each of the final images. 
Both the intensity map and the noise map are used as inputs by BLOBCAT. This package recognizes islands of pixels (blobs) representing sources with a minimum peak flux above N times the noise level in the area. Purcell et al. (2013) noted that with N < 4.5, large radio images will be dominated by spurious sources. Thus, to diminish the number of spurious detections in our catalog, for our extraction we have initially defined N = 5 and require blobs to consist of a minimum of 12 contiguous pixels. The minimum number of pixels was chosen to be the number of expected pixels in 50% of the beam area. The final number of extracted blobs is 3880 . After this first extraction we performed a visual inspection of all the blobs to identify clear image artifacts, such as sidelobes from very bright radio sources, and discarded 555 blobs. The spatial distribution of all the blobs, excluding the artifacts, is shown in Figure 1. We detected a total of 1457blobs with a signal-to-noise ratio (S/N) ≥ 7.0 and 1866 blobs with 5.0 ≤ S/N < 7.0. It has been determined that in large radio surveys such as GLOSTAR the most reliable sources are those with peak flux values above 7σ noise as no spurious sources are expected at these levels (e.g., Purcell et al. 2013;Bihr et al. 2016;Wang et al. 2018;Medina et al. 2019). However, as some blobs with a brightness between 5-7σ noise could represent real sources, we have searched for counterparts inside a radius of 2 in the SIMBAD astronomical database 4 for all sources (see details below). This comprehensive database is the best option to look for counterparts at any wavelength in large areas of the sky but, admittedly, it can miss recent catalogs. We have also considered weak blobs (peak flux values < 7σ noise ) as real radio sources whose positions are consistent with the radio sources from our D-configuration catalog by Medina et al. (2019). The full number of weak blobs with a known counterpart in the SIMBAD database and our Dconfiguration catalog is 142. The remaining 1724 radio blobs having a S/N between 5 and 7 and have no known counterpart are referred to as candidate radio source detections. We give the list of these 1866 sources with S/N< 7 in Appendix A (Table A.1) labeling those with SIMBAD or D-configuration counterparts, and we do not analyze them further. We consider the remaining 1457 blobs as highly reliable radio sources. They are represented by the blue circles in Figure 1 and are listed in Table 1. The position, the SNR, the peak and the integrated flux density values (columns (2) to (8) in Table 1) are taken from the values determined by the BLOBCAT software (Hales et al. 2012). From now on, this paper will focus on the analysis of these sources. Considering the ratio between the integrated flux density (in units of Jy) and the peak flux density (in units of Jy beam −1 (here named as the Y-factor; Y = S ν,Int /S ν,Peak ) we can divide these sources into extended (Y > 2.0), compact (1.1 < Y ≤ 2.0) and point-like (Y ≤1.1) sources. The Y factor of each source is listed in column (9) of Table 1. Using this classification we obtain 100 extended, 455 compact, and 904 point-like sources. As expected for these images, the sources are dominated by compact and point-like sources as we have rejected extended structures in our imaging process. Astrometry For our previous catalog based on the VLA D-configuration observations we have estimated that the accuracy of the astrometry is of the order of 1 . 
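The morphological classification described above is a direct thresholding of the Y-factor, the ratio of integrated to peak flux density. A minimal Python sketch of those cuts (extended for Y > 2.0, compact for 1.1 < Y ≤ 2.0, point-like for Y ≤ 1.1); the flux values in the usage example are made up.

```python
def classify_by_y_factor(s_int_jy, s_peak_jy_per_beam):
    """Morphological class from Y = S_int / S_peak, using the cuts quoted above."""
    y = s_int_jy / s_peak_jy_per_beam
    if y > 2.0:
        return y, "extended"
    if y > 1.1:
        return y, "compact"
    return y, "point-like"

print(classify_by_y_factor(0.0031, 0.0029))  # Y ~ 1.07 -> point-like
print(classify_by_y_factor(0.0090, 0.0030))  # Y = 3.0  -> extended
```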
The position errors of the extracted sources with BLOBCAT in the VLA B-configuration images have a mean value of 0. 06 in both Galactic longitude and latitude. The 0. 06 is a formal statistical error estimate from BLOBCAT and does not reflect position errors resulting from imperfect phase calibration. As most of the observed radio sources are expected to be background extragalactic objects, the proper motion of these sources is expected to be zero. The only other recent Galactic plane survey that observed the same region at a similar radio frequency is the Co-Ordinated Radio 'N' Infrared Survey for High-mass star formation (CORNISH; Hoare et al. 2012;Purcell et al. 2013). The observations of CORNISH were obtained at 5 GHz using the VLA in its B-configuration. Because of the similarity of the frequency and the use of the same VLA array configuration the angular resolution of CORNISH is 1. 5, similar as that of the GLOSTAR survey B array data discussed here (1. 0). A total of 257 compact and point-like sources are listed in both the GLOSTAR-B and CORNISH catalogs (Purcell et al. 2013) with a maximum angular separation of 1. 5, the COR-NISH angular resolution. In the upper panel of Figure 2 the measured offsets of these sources are plotted. We obtain mean and standard deviation for the position offsets of −0. 04 ± 0. 01 and 0. 17 in Galactic longitude direction, and 0. 03 ± 0. 01 and 0. 15 in Galactic latitude direction. The mean offsets in both directions are smaller than the mean position error of GLOSTAR radio sources, suggesting a good astrometry. The most accurate positions of radio sources are obtained with the Very Long Baseline Interferometry (VLBI) technique. The Radio Fundamental catalog of extragalactic radio sources compiles the positions of ∼ 19, 000 sources that have been measured with VLBI 5 . We found in this catalog 15 sources detected both by GLOSTAR and CORNISH. In the lower panel of Figure 2 we show the offsets between the GLOSTAR and the VLBI positions as black crosses, and the offsets between the CORNISH and VLBI positions as blue crosses. The mean GLOSTAR−VLBI position offset and its standard deviation is −0. 05 ± 0. 02 and 0. 06, respectively, in Galactic longitude direction, and 0. 03 ± 0. 03 and 0. 11, respectively, in Galactic latitude direction. On the other hand, the mean CORNISH−VLBI position offset and its standard deviation is −0. 02 ± 0. 02 and 0. 07, respectively, in the Galactic longitude direction, and −0. 02 ± 0. 03 and 0. 10, respectively, in the Galactic latitude direction. The conclusion from this analysis is that the astrometry of the B-configuration GLOSTAR images presented in this paper is accurate to better than 0. 1. Comparison with the D-configuration catalog In the first GLOSTAR catalog, we have reported a total of 1575 discrete sources detected in the same region presented in this paper, but based on data obtained with the VLA in its most compact (D) configuration. The GLOSTAR images from the VLA D-configuration observations have an angular resolution of 18 , i.e., 18 times larger than the VLA B-configuration images presented in this paper, which also excluded the shortest baselines to further filter out extended emission, resulting in some expected differences. First, given the higher angular resolution observa- tions of the B-configuration, the extended radio sources reported by Medina et al. (2019) are resolved out and, in some cases, only the brightest peaks of extended sources are detected. 
It is worth noting that some compact sources detected in the Bconfiguration images can be seen projected on the area of the extended sources, although they do not represent their direct counterpart. Their possible relation must be studied further in future (e.g., upper-panel of Figure 3). Second, some multiple component radio sources that are unresolved or slightly resolved in the D-configuration images will be resolved in the B-configuration images (middle-panel of Figure 3) and the integrated flux densi-ties of the individual components can be estimated. Third, some individual radio sources detected in the D-configuration images can be resolved and appear as fragmented radio sources in the B-configuration images (lower-panel of Figure 3). In total, we have found that 95 sources in the D-configuration images are resolved into 224 B-configuration sources (see column (12) of Table 1). The components of fragmented radio sources are grouped and treated as a single source. However, the information of each single component is given in Table 1 and the fragmented sources are listed in Table 2. The integrated flux density reported in Table 2 is obtained by adding the integrated flux densities of the individual fragments. In total, 72 sources recovered from BLOBCAT are grouped into 22 fragmented sources. Other sources were detected as single compact sources in both D-and B-configuration images and can be considered as direct counterparts. The mean position error of D-configuration sources is 1. 2 (Medina et al. 2019) and the beam size of the B-configuration images is 1. 0. Thus, by adding these values in quadrature we used a maximum angular separation of 2 between sources in both catalogs to consider them as direct counterparts. The number of matching sources between both catalogs with this criterion is 372. A further 312 matching sources are found using an angular separation of 9 , half of the angular resolution of the D-configuration images, for which the association must be investigated further. Given the differences described above, the matching of sources between both catalogs is not expected to be one-to-one. In column (12) of Table 1 we list the GLOSTAR Dconfiguration name to which the B-configuration source is related. With the D-configuration name we have also labeled those sources that are related to two or more B-configuration sources and if they are considered as individual (I) or fragmented (F) sources. In total, 908 B-configuration sources were related to 780 D-configuration sources. The remaining 551 Bconfiguration sources have no counterpart in the D-configuration catalog. Most of these sources are located in the inner parts of the Galactic plane (see Figure 4) where the noise level is higher in the D-configuration images because of the bright and extended radio sources (see the lower panel of Figure 1 in Medina et al. 2019). As the noise levels could be as high as 500 µJy beam −1 , it explains why most of the sources were not detected in the Dconfiguration images, but are detected in the B-configuration images where the noise level is about 10 times lower. Source sizes The source sizes are obtained following Medina et al. (2019), who determined the source effective radius. BLOBCAT determined the number of pixels comprising each source, which can be used to estimate the area (A) covered by the source using the pixel size of 0. 25 × 0. 25. 
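The effective-radius equation referenced just below did not survive the text extraction; a small Python sketch of the calculation is given here, assuming the standard definition used by Medina et al. (2019), r_eff = sqrt(A/π), with the source area A obtained from the BLOBCAT pixel count and the 0.25-arcsec pixel size of the B-configuration images.

```python
import numpy as np

PIXEL_ARCSEC = 0.25  # pixel size of the B-configuration images

def effective_radius(n_pixels, pixel_arcsec=PIXEL_ARCSEC):
    """Effective radius (arcsec) of a blob from its pixel count, assuming
    r_eff = sqrt(A / pi) with A = n_pixels * pixel_arcsec**2."""
    area = n_pixels * pixel_arcsec ** 2
    return np.sqrt(area / np.pi)

print(effective_radius(12))   # minimum blob size of 12 pixels -> ~0.49 arcsec
print(effective_radius(200))  # a more extended blob -> ~2.0 arcsec
```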
Then the effective radius can be determined using The effective radius distribution is shown in Figure 5, and the value for each source is listed in column (10) of Table 1. Flux densities To estimate the reliability of the radio source flux densities determined from the GLOSTAR images, we compare the results from the BLOBCAT extraction with those in the CORNISH catalog. As most of the sources are expected to be extragalactic sources whose variability is expected to be low (typically only of a few percent over timescales of several years) these provide a good point of comparison. We compare sources that are point like, and thus used the peak flux density. Moreover, we have only compared sources whose peak flux densities are above 2.7 mJy in both catalogs, as this is the 7σ base point in the high reliability source catalog of CORNISH (Purcell et al. 2013). We found 207 sources that meet these criteria. A difference between both catalogs is the mean observed frequency. CORNISH observed at 5 GHz, and GLOSTAR observed a wider bandwidth centered at 5.8 GHz. The measured flux density thus is expected to differ slightly for this observational mismatch due to the spectral index. Assuming that the extragalactic objects have a mean spectral index of −0.7 (Condon 1984), the CORNISH flux values will be on average 10% higher than in GLOSTAR. In the upper panel of Figure 6 we plot the peak flux densities measured by the CORNISH survey as a function of the peak flux densities measured in this work. The dashed line indicates the equality line, and most of the sources are around this line. The lower panel of this figure shows the distribution of the peak flux density ratio between the results from both catalogs. A Gaussian fit to the distribution indicates that the mean value is 1.11 ± 0.03 with a standard deviation of 1.28. Considering the expected higher values in the CORNISH catalog, we conclude that the integrated and peak flux densities of the GLOSTAR-B radio sources are accurate to within 10%. Spectral indices The spectral index of a radio source gives us information on the dominant emission mechanism. Using the wide frequency coverage of our observations we can estimate the spectral index within the observed band. We measure the peak flux density in each of the imaged frequency bins for compact and point like sources that have S/N> 10 in our main source extraction, a total of 988 sources. The constraint of the S/N was chosen to consider that the noise level in each of the imaged frequency bins will be ∼ 3× higher, and may thus not be detected in the individual frequency bin images or their determined values will be affected by noise. On the other hand, over the area covered by an extended source, flux density variations that will depend on its structure may be observed at each frequency, and are hence not considered for this analysis. To obtain the spectral index we assume that in the observed frequency range the flux density is described by the linear equation: log S ν = α · log ν + C. A weighted least-square fitting is made to the measured flux densities. Independent spectral index values for each source are listed in column (11) in Table 1. The distribution of the determined spectral indices is shown in Figure 7. The distribution of spectral indices has a mean value of −0.66 ± 0.02. Spectral indices at radio frequencies in the same area of the presented images have been measured previously from the GLOSTAR D-configuration (Medina et al. 
2019) and in the THOR survey , ; described in the next section). To compare the different results, we have selected the sources that have spectral indices determined and have point like structure (Y ≤ 1.1) in all three sets of results. The last constraint is imposed to diminish effects of possible scale structure differences as GLOSTAR D-configuration and THOR have angular resolutions of 18 . Figure spectral index differences of the three sets. We found that the measured spectral indices are consistent among the three data sets, given that the mean of the differences is consistent with zero. We thus conclude that the spectral indices measured on GLOSTAR B-configuration images are reliable. In star forming regions, three main mechanisms are known to produce compact radio continuum emission, and are related to different astrophysical phenomena (Rodríguez et al. 2012). The majority of radio sources show thermal free-free radio from ionized gas (e.g., HII regions, externally ionized globules, proplyds, jets) that has a spectral index ranging from 2.0 (optically thick) at low frequencies to −0.1 (optically thin) at high frequencies. Magnetically active low-mass stars, may show nonthermal gyrosynchrotron with spectral indices ranging from −2.0 to +2.0. Nonthermal synchrotron emission arising from colliding winds in high mass binaries as well from jets ejected by high mass stars interacting with the ambient interstellar medium (ISM) have a typical spectral index of −0.7. However, other phenomena not related to star formation also can produce thermal radio emission, namely gas ionized in planetary nebulae (PNe), while synchrotron emission is observed from extragalactic sources. Background active galactic nuclei (AGN) will mostly emit optically thin synchrotron emission with spectral indices ∼ −0.7. On the other hand, a fraction (up to 20%) of extragalactic background radio sources show a flat or positive spectral indices (e.g., Callingham et al. 2017). These represent a population of star forming galaxies, and progenitors of AGNs (i.e., high frequency peakers Dallacasa et al. 2000;Dallacasa 2003). Given the diversity of the radio sources, to better understand their nature, information at other wavelengths is required. This will be discussed in the following subsection. Counterparts at other wavelengths To gain more insight on the nature of the radio sources, we have searched for counterparts at shorter wavelengths. The search was focused on catalogs that could give us evidence for ionized gas, dense cold gas and dust, which are indicators of massive young stars and the regions they form in. We now briefly describe the catalogs used in our search. The APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) observed the galactic plane at a wavelength of 870 µm (345 GHz), and angular resolution ∼ 20 (Schuller et al. 2009). The emission at this wavelength is dominated by dense cool gas and dust. Several ATLASGAL source catalogs have been released by Contreras et al. (2013), Csengeri et al. (2014), and Urquhart et al. (2014), who list > 10, 000 dense clumps. We will compare our results with the compact source catalog presented by Urquhart et al. (2014). The differences between the angular resolution of ATLASGAL and the observations presented here have to be taken into account. Thus, we have used an offset of 18 (roughly the beam size of ATLASGAL images) to consider an ATLASGAL source to be a potential counterpart for the compact radio source. 
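The counterpart searches in this and the following paragraphs all reduce to a nearest-neighbour cross-match between sky positions with a maximum allowed separation (18 arcsec for ATLASGAL, smaller radii for the higher-resolution infrared catalogs). A minimal Python sketch using astropy is shown below; the coordinates in the usage example are invented and only illustrate the call pattern.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def match_catalogs(glostar_l, glostar_b, other_l, other_b, max_sep_arcsec):
    """Nearest-neighbour match between GLOSTAR positions and another catalogue
    (both in Galactic coordinates), keeping pairs closer than max_sep_arcsec."""
    glostar = SkyCoord(l=glostar_l * u.deg, b=glostar_b * u.deg, frame="galactic")
    other = SkyCoord(l=other_l * u.deg, b=other_b * u.deg, frame="galactic")
    idx, sep, _ = glostar.match_to_catalog_sky(other)
    good = sep < max_sep_arcsec * u.arcsec
    return idx[good], sep[good].to(u.arcsec)

# Toy positions; an 18 arcsec radius mimics the ATLASGAL beam used above
gl = np.array([30.001, 31.500]); gb = np.array([0.100, -0.300])
ol = np.array([30.002, 35.000]); ob = np.array([0.101, 0.500])
print(match_catalogs(gl, gb, ol, ob, max_sep_arcsec=18.0))
```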
We found 143 radio sources matching the position of 83 submillimeter sources. The Herschel infrared Galactic plane survey (Hi-GAL) observed the inner Galaxy in 5 bands distributed in the wavelength range between 70 µm and 500 µm (Molinari et al. 2010(Molinari et al. , 2016. Notably, data taken at these wavelengths allow determinations of the peak of the spectral energy distribution of cold dust and thus the source temperatures. The Hi-GAL observations have angu- lar resolutions from 10 down to 35 from the shortest to the longest wavelength. The median uncertainty of Hi-GAL sources is ∼ 1. 2. We have used a 2 offset between a Hi-GAL source and a GLOSTAR source to consider them as counterparts, and found 98 sources that matched this criterion. The Wide-field Infrared Survey Explorer (WISE) mapped the entire sky in four infrared bands centered at 3.4, 4.6, 12.0, and 22.0 µm. The WISE ALL-sky Release Source Catalog contains astrometry and photometry for over half a billion objects (Wright et al. 2010). The angular resolution of the observations is 6 at the shortest wavelength and the position errors are around 0. 2. To consider a WISE source as the counterpart to a GLOSTAR radio source we have used a maximum offset of 2 . A total of 125 sources match this criterion. The Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE; Benjamin et al. 2003;Churchwell et al. 2009) mapped a large fraction of the Galactic plane with the Spitzer space telescope. It observed in four near-infrared sub-bands covering the range from 3.6 to 8.0 µm, and with angular resolutions ∼ 2 . While the filter widths of the 3.6 and 4.5 µm bands are similar to those of WISE, the GLIMPSE survey has focused on Galactic plane observations and is more sensitive than the former. The interstellar medium emission at these wavelengths comes mainly from warm and dusty embedded sources. Considering the angular resolution of our observations and the position uncertainties in the GLIMPSE survey we have used an offset of 2 for the counterpart searching, leading to 251 matching sources. The United Kingdom Infrared Deep Sky Survey (UKIDSS) is a suite of five public surveys at near infrared wavelengths (NIR) of varying depth and area coverage (Lucas et al. 2008) 6 . Particularly, the UKIDSS Galactic plane survey (UKIDS-GPS) covered a total of 1878 deg 2 of the northern hemisphere Galactic plane. The main observed bands are the so called J (1.25 µm), H (1.65 µm) and K (2.20 µm) bands. The spatial resolution of UKIDSS-GPS observation is typically better than 0. 8. For the counterpart search of GLOSTAR radio sources in the UKIDSS- 6 The UKIDSS project is defined in Lawrence et al. (2007). UKIDSS uses the UKIRT Wide Field Camera (WFCAM; Casali et al. 2007). The photometric system is described in Hewett et al. (2006), and the calibration is described in Hodgkin et al. (2009). The pipeline processing and science archive are described in Hambly et al. (2008). GPS catalog we have used this value of 0. 8, finding 389 matches. After our careful search of counterparts at other wavelengths, it is noticeable that 640 radio sources have no counterparts at any other wavelength. The spectral index distribution of the sources in this category shows that they have preferably negative spectral indices (see Fig 7). We will further discuss these sources in the next section. Classification of radio sources Using the counterparts of the radio sources in the catalogues described in the previous section, a robust classification can be carried out. 
A single classification was performed for fragmented sources listed in Table 2 instead of separate classifications of their fragments, and thus the classification was done to 1409 sources. The classification criteria are based on our findings of the emission properties and/or counterparts of near-infrared (NIR; UKIDSS), mid-infrared (MIR; GLIMPSE and WISE), farinfrared (FIR; Hi-Gal), and submillimeter (SMM; ATLASGAL). Images from the above mentioned infrared surveys are plotted at the position of the radio sources, some examples can be seen in Appendix B. For SMM and FIR, we show the emission properties of each GLOSTAR source. For NIR and MIR, we did a visual inspection of the three-color images for the UKIDSS (red K-band, green H-band and blue J-band), the GLIMPSE (red 8.0 µm, green 4.5 µm, and blue 3.6 µm), and the WISE (red 22.0 µm, green 12.0 µm and blue 4.6 µm) surveys. The sources have been classified into 5 groups, using the following criteria: -Photo dissociated region (PDR): Ionized gas seen as extended emission at MIR, and showing only weak or no compact emission at FIR and SMM wavelengths (Hoare et al. 2012). -Extragalactic candidate (EgC): Radio sources that have nocounterpart at any other wavelength, or are only seen as a point source at NIR wavelengths (Hoare et al. 2012;Lucas et al. 2008;Marleau et al. 2008). -Other sources. Sources that could not be classified in any of the previous categories. The number of radio sources in these groups are 93 HII region candidates, 4 PDRs, 83 radio stars, 65 PNs, 1163 EgCs, and 2 other sources. Examples of sources in these classes are shown in Figures B.1, B.2, B.3, and B.4. The individual source classification is given in column (17) in Table 1 and in column (5) of Table 2 for fragmented sources. Sources previously classified Classification of the radio sources is a main part of the catalog construction. However, some of the sources could have been previously classified. A search for previous classifications of the radio sources has been performed using the SIMBAD astronomical database (Wenger et al. 2000), within a radius of 2 . We found that 269 radio sources have counterparts in the SIMBAD database. Most of these sources, 138, are only classified as radio sources, and are thus of an unknown nature. The classification of the remaining 107 sources suggests that these are Galactic sources. In Table 3 we have separated these sources in seven types of sources, and have listed the source class in SIMBAD considered for these types and the number of sources of each type. Article number, page 9 of 25 A&A proofs: manuscript no. main Overview of the final classification The final radio source classification is a combination of the method described in Section 3.8, and the use of known information of individual sources recovered from the SIMBAD database. For our final catalog we additionally consider the SIMBAD classification for sources for which we find no counterpart in the searched IR and sub-mm catalogs. This was, for example, the case for pulsars that are usually not detected at infrared wavelength and are non-thermal radio emitters. Based on the criteria described in the previous section they were first classified as EgCs and after consulting the SIMBAD database they were finally classified as pulsars. A similar situation occurred with a Wolf-Rayet star and a cataclysmic variable star, wherein they were initially classified as a PN and as a Radio-star, respectively. We use the classification recovered from SIMBAD in these cases. 
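The final-classification step described above amounts to a simple precedence rule: the counterpart-based class is kept unless the SIMBAD match reveals a known Galactic type that the infrared criteria cannot recover (pulsars, the Wolf-Rayet star, the cataclysmic variable). A minimal sketch, with illustrative (not authoritative) SIMBAD type labels:

```python
# Hypothetical sketch of the final-classification rule: the IR/sub-mm based
# class is kept unless SIMBAD reports a known Galactic type for a source that
# the counterpart search could not classify (e.g. pulsars, which are IR-quiet
# non-thermal emitters and would otherwise be labelled "EgC").
SIMBAD_OVERRIDES = {"Pulsar": "Pulsar",                  # illustrative labels
                    "WR*": "Wolf-Rayet star",
                    "CV*": "Cataclysmic variable"}

def final_class(ir_based_class, simbad_type):
    """ir_based_class: class from the counterpart-based criteria.
    simbad_type: SIMBAD object type within 2 arcsec, or None if no match."""
    if simbad_type in SIMBAD_OVERRIDES:
        return SIMBAD_OVERRIDES[simbad_type]
    return ir_based_class

print(final_class("EgC", "Pulsar"))   # -> Pulsar
print(final_class("PN", "WR*"))       # -> Wolf-Rayet star
print(final_class("EgC", None))       # -> EgC
```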
In Table 4 we give the number of sources found in each of the classifications. Interesting sources are the HII region candidates, as they are related to massive star formation, that we will discuss later. Comparison of classifications Classifications of some of the detected radio sources have been performed in previous radio surveys and here we will compare the results of these classifications. We will focus on a comparison based on the low resolution GLOSTAR D-configuration image presented by Medina et al. (2019), and with the classification based on the CORNISH survey which has a similar angular resolution observation as our B-configuration data (Purcell et al. 2013). GLOSTAR D-configuration Some differences between the present catalog and that derived from the D-configuration data are expected, since for the latter, the search for counterparts was done using larger angular separations than in this work on account of the lower resolution. 54 of the HII region candidates identified by us were also detected in D-configuration images, and were also identified as HII regions. Four other HII region candidates had a D-configuration counterpart, but were unclassified. The remaining 32 HII region candidates were not detected in the D-configuration images. Additionally, six sources classified as HII region candidates in the Dconfiguration catalog have been classified as PNe in the present work as their positions are no longer coincident with ATLAS-GAL sources. We have detected 32 radio sources that were classified as PNe in the D-configuration catalog, of which 30 are classified as PNe in this work, one is classified as a WR star and another as an EgC because it is no longer positionally coincident with IR emission. We also detect 13 sources identified as radio stars in the D-configuration, of which 10 were related to a radio source also classified as radio stars. The remaining source, G028.098-00.781, was resolved into three different sources, two of them were classified as extragalactic sources and the other as PN. A further 749 sources detected in D-configuration images were also detected in the B-configuration images. Among these, 688 were classified as EgCs, consistent with the suggestion by Medina et al. (2019) that most of their unclassified radio sources are background extragalactic radio sources. The remaining 61 unclassified sources in D-configuration have now been classified, 43 as radio-stars, 14 as PNe, and 4 as HII region candidates. A comparison between the method used in the D-configuration data and in the B-configuration data shows a consistency better than 90% in the resulting classes. CORNISH Survey The CORNISH survey has a similar angular resolution, and it also used IR information to classify detected radio sources. In their classification effort, they consider UCHII regions and dark HII regions; we detected radio emission from 40 CORNISH sources classified as UCHII and from one dark HII region, which are classified as HII region candidates by us. We also have detected 25 CORNISH radio sources classified as PNe, that are also classified as PNe in our work. Seventeen radio-stars in the COR-NISH survey have counterparts in our catalog; we classified fifteen of them also as radio-stars and two of them as EgCs because we did not find IR counterparts. 
We have detected 171 COR-NISH radio sources classified as IR-Quiet (not detected at IR wavelengths); we classify 167 of them as EgCs, 4 as radio-stars as IR emission is now reported in the position of these sources, and one as a pulsar based on the SIMBAD database. Finally, out of the 29 radio-Galaxy sources in CORNISH, 28 are classified as EgCs in our work, and the remaining source is classified as radio-star given its IR properties. The consistency in classification between CORNISH and GLOSTAR B-configuration is high, especially for HII regions and PNe where we found a 100% agreement. THOR survey The HI/OH/Recombination line survey of the Milky Way (THOR; Bihr et al. 2015;Beuther et al. 2016;Wang et al. 2020) observed the northern hemisphere of the Galactic plane with the VLA in C-configuration and using its L-band receivers (1 to 2 GHz). It covers Galactic longitudes from 14. • 5 to 67. • 4 and Galactic latitudes from −1. • 25 to 1. • 25, i.e., including the area of the maps presented in this paper. The angular resolution of the THOR radio continuum images is 25 and they have noise levels from 0.3 to 1.0 mJy beam −1 (Wang et al. 2018). While a detailed comparison of the source classification by THOR and GLOSTAR B-configuration is limited given the different image angular scales, it is still, however, useful. To compare GLOSTAR B-configuration sources with THOR sources, we used a maximum separation of 2. 5, which is the position accuracy of the THOR survey (Wang et al. 2018). We found 554 sources matching in position between these surveys. From these sources, the THOR survey identified 34 HII regions (26 of which are identified as HII region candidates) and 9 as PNe (also identified as PNe with our classification criteria). Our classification for the remaining eight sources identified as HII regions by the THOR survey is four EgCs and four PNe. Differences between the classification of these sources can be caused by the different angular scales used for comparison with IR surveys. In fact, the median match radius of IR HII regions (identified with the WISE survey; Anderson et al. 2014) with the THOR survey is ∼ 60 (Wang et al. 2018), almost two orders of magnitude larger than our angular resolution. HII region candidates We have identified 93 HII region candidates 7 . Out of these sources, 71 were previously related to HII regions from our analysis of the D-configuration images or from CORNISH or from THOR or from the SIMBAD database and 22 are new detections from this work. A characteristic of the radio emission from HII regions is that the spectral index values ranges from −0.1 (optically thin free-free radio emission) to 2.0 (optically thick freefree radio emission). Observationally, the spectral index distribution of hundreds of young HII regions at similar frequencies show a mean value of 0.6 (Yang et al. 2019(Yang et al. , 2021. We have determined the in-band spectral index (see Section 3.6) for 57 HII region candidates, 13 of which are new. In Figure 9, we plot the spectral index of these HII regions as a function of the R eff. the Y-factor and the S/N. Surprisingly, some of the determined spectral indices are negative and well below the lowest value expected for free-free radio emission of α = −0.1, indicating that the radio emission nature is non-thermal. The results are not affected by the size of the source, as most of them are slightly resolved sources (R eff θ resolution /2 = 0. 50). Kalcheva et al. 
(2018) obtained the spectral index to known ultra-compact HII regions and found that about 18% of their sample were consistent with negative spectral indices. They discuss their results and conclude that different interferometer array configurations and time variability could explain these negative values. These effects, however, are not present in the GLOSTAR observations presented here. Hence, these radio sources with negative spectral indices could be related to HII regions but they are not the HII regions themselves (e.g. Purser et al. 2016). Compact radio sources have been found related to HII regions. The most known and nearby case is the Orion Nebula Cluster (ONC) where around 600 compact radio sources are found in the HII region ionized by the Trapezium (Forbrich et al. 2016;Vargas-González et al. 2021). As first suggested by Garay et al. (1987); Garay (1987), most of these fall into two broad categories: first, sources with thermal radio emission from circumstellar matter, often protoplanetary disks ("Proplyds"), that are photo-evaporated in the intense UV field of the brightest Trapezium star θ 1 Ori C (O7Vp). Second, nonthermal sources associated with the coronal activity of magnetically active low mass members of the ONC many of which also show X-ray emission and are (highly) variable Dzib et al. 2021). Other example regions are NGC 6334 ) and the M17 (Rodríguez et al. 2012) star forming regions (SFRs) where several radio sources are found close to prominent HII regions. Also, in these SFRs a significant fraction of the compact radio sources have been found to produce non-thermal emission. That high number of such radio sources can be detected on the ONC is a result of the cluster's close distance, D, of just ≈ 400 pc (Menten et al. 2007;Kounkel et al. 2017). Extrapolating the Orion case to a distance of a few kilo-parsecs, most of these magnetically active stars would not be detected, except for the two strongest sources with measured integrated flux densities of a few tens of mJy. We note that, NGC 6334 and M17, mentioned above, are also relatively nearby, at D = 1.34 +0.15 −0.12 and 1.98 +0.14 −0.12 kpc, respectively Wu et al. 2014). Time variable weak radio sources around the well-studied ultracompact HII region W3(OH) (D = 2.0 kpc) are other examples (Wilner et al. 1999). For more distant SFRs, thermal radio emission is only detected from the HII region itself that is excited by the central high mass star(s) of a cluster. It should be noted that other phenomena can produce compact non-thermal radio emission in massive star forming regions that are at work in Orion and can be observed in regions at distances of a few kpc, such as non-thermal radio jets (e.g. Carrasco-González et al. 2010;Purser et al. 2016) and wind collision regions in massive binary stars (e.g. Dzib et al. 2013). Non-thermal radio jets from YSOs, however, are rare and only a handful of cases are known. Similarly, non-thermal radio emission from wind collision regions are not common and are usually associated with evolved massive stars that have powerful winds. On the other hand, the radio sources with positive spectral index could be true HII regions. More extensive multi-wavelength radio analysis is needed to characterize their emission and determine their turnover frequency, emission measure, etc. In general, all the radio sources that are associated with HII regions discussed in this work deserve further attention and more detailed studies. 
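The claim that ONC-like nonthermal sources would escape detection at a few kiloparsecs follows directly from inverse-square dimming of flux density. A short numerical check, assuming a distance of 0.4 kpc for the ONC and the roughly 0.455 mJy beam−1 7σ threshold quoted for these images in the extragalactic source counts section below:

```python
# Inverse-square scaling of flux density with distance: a source with flux S0
# at distance d0 is seen with S = S0 * (d0 / d)**2 at distance d. We check the
# claim that ONC-like nonthermal sources (d0 ~ 0.4 kpc) mostly fall below the
# detection limit at a few kpc (7-sigma ~ 0.455 mJy/beam for these images).
def scaled_flux_mjy(s0_mjy, d0_kpc=0.4, d_kpc=4.0):
    return s0_mjy * (d0_kpc / d_kpc) ** 2

for s0 in (0.5, 5.0, 50.0):   # typical, bright and very bright ONC sources (mJy)
    print(s0, "mJy at 0.4 kpc ->", round(scaled_flux_mjy(s0), 3), "mJy at 4 kpc")
# 0.5 -> 0.005 mJy (undetectable); 5.0 -> 0.05 mJy; 50.0 -> 0.5 mJy (marginal)
```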
Variable radio sources By comparing the peak flux densities measured in the GLOSTAR images with those reported in CORNISH (Purcell et al. 2013), we have looked for sources with variable radio emission. The search was restricted to sources with Y < 2.0, i.e., with compact radio emission. Variability is identified when the source is detected in both catalogs, GLOSTAR and CORNISH, but the flux ratio is larger than 2.0. We also identify variability when we detect a source in the GLOSTAR catalog with a peak flux den- sity level > 2.7 mJy beam −1 (the CORNISH detection limit) and it is not reported in the CORNISH catalog. Finally, a source detected as unresolved in the CORNISH catalog that is not in the GLOSTAR high reliability catalog or in the catalog of sources with 5 to 7σ, is also considered as a variable radio source. With the above criteria we have identified 49 variable sources. They are listed in Table 5 together with the peak flux densities in the compared catalogs. The GLOSTAR and COR-NISH classification are listed when available. Most of the identified variable radio sources are EgC or IR-quiet sources, and have only been detected at radio wavelengths. Since extragalactic background sources are not expected in general to show pronounced variability, thus some of these sources could be interesting Galactic radio sources whose nature has to be explored. Extragalactic Objects As has been shown in our previous works, most of the detected radio sources are expected to be background extragalactic objects (Medina et al. 2019;Chakraborty et al. 2020). Following the formulation by Fomalont et al. (1991), and using an observed area of 57,600 arcmin 2 , assuming a nominal noise level of σ = 65 µJy beam −1 and a threshold 7× the noise level, we estimate that the number of expected background extragalactic sources in our image is 1138 ± 664. This number suggests that most of the detected radio sources above 455 µJy beam −1 (7σ) are of extragalactic origin. From our classification criteria we have compiled a list of 1159 sources that have most probably an extragalactic origin. They are labeled as EgC in Table 1. Their extragalactic nature is also supported with the negative spectral index of most of them. The spectral index is determined for 777 of these sources and 157 (20%) have a flat or positive spectral index, consistent with the expected number of extragalactic radio objects that are expected to have positive spectral indices at our frequencies. Similar to our previous work (Chakraborty et al. 2020), we have also studied the Euclidean-normalized differential source counts of the point sources characterized as being extragalactic in origin. We have binned the source integrated flux densities in logarithmic space and divided the raw counts in each bin by the fraction of image area over which a source with a given integrated flux density value can be detected, known as the visibility area (Windhorst et al. 1985). The differential source counts have been calculated by dividing the visibility area weighted source counts in each bin, by the total image area (Ω in steradians) and bin width (∆S in Jy). These differential source counts have been normalized by multiplying with S 2.5 , where S is the mean integrated flux density of sources in each bin (Windhorst et al. 1985). The normalized differential source counts is shown in Figure 10, where the error bars are Poissonian. We have compared our findings with two simulated catalogues, the SKA Design Study simulations (SKADS, Wilman et al. 
2008) and the Tiered Radio Extragalactic Continuum Simulations (T-RECS, Bonaldi et al. 2019). We have also compared our findings with the observed extragalactic source populations at low-frequency as well as high-frequency, which include: the TIFR GMRT Sky Survey at 150 MHz (TGSS-ADR1; Intema et al. (2017)), BOOTES field at 150 MHz using LOFAR (Williams et al. 2016), Lockman Hole field at 1.4 GHz with the LOFAR (Prandoni et al. 2018), COS-MOS field at 3 GHz with VLA (Smolčić et al. 2017), ECDFS field at 5 GHz (Huynh et al. 2015) and 9 GHz (Huynh et al. 2020) with ATCA and the 1.4 GHz source counts based on observations with VLA by Condon (1984). In all cases we have scaled the source counts to 5.8 GHz using a spectral index, α = −0.7. We have found that the source count of these sources classified as extragalactic in the GLOSTAR survey is statistically similar and consistent with the previously observed extragalactic source population as well as with the simulated catalogs. This shows that the majority of these sources are indeed of extragalactic origin. Figure 7 shows that the peak of the spectral index distribution is −0.66 ± 0.02. As discussed in the previous subsection most of the unidentified radio sources will be of extragalactic origin, however some of these will be interesting Galactic non-thermal radio sources. These sources are only detected at radio frequencies and it will be hard to distinguish them. Very Long Baseline Interferometry (VLBI) observations to all unidentified compact radio sources could help to distinguish the Galactic from the extragalactic radio sources. Position measurements on the scale of several months to years could measure the proper motions and trigonometric parallaxes of these sources. As the extragalactic background sources are not ex- (Purcell et al. 2013). pected to exhibit proper motions or trigonometric parallaxes, these observations can help to distinguish between the two classes of radio sources. A wide field VLBI survey of the Galactic Plane is now feasible thanks to the DiFX software correlator (Deller et al. 2011) which allows multiple-phase center correlation inside primary beam of the interferometer. Fig. 10. The Euclidean-normalized differential source counts of the point sources classified as of extragalatic origin in the GLOSTAR survey. We have compared the source counts with the simulated radio sky and previously observed source populations. For details of simulated catalogues and different observed source populations see text. Perspective on the search of non-thermal galactic sources they require less computational power. The observed frequency by the GLOSTAR survey also has the advantage that the radio emission is less affected by scattering than the lower frequencies used for conventional pulsar searches. As an interesting example, the radio emission from PSR J1813-1749, one of the most energetic known pulsars, was first detected in radio continuum images at 5 GHz (Dzib et al. 2010 while searches for the pulsed radio emission at lower frequencies failed (Helfand et al. 2007;Halpern et al. 2012;Dzib et al. 2018). It turns out that interstellar scattering in the direction of this pulsar is very high, and PSR J1813-1749 is the most heavily scattered known pulsar (Camilo et al. 2021). Pulsar radio continuum emission is characterized by point-like structure and steep radio spectrum (α = −1.4 ± 1.0; Bates et al. 2013). These are criteria that are shared with EgCs in our classification scheme, making them hard to differentiate. 
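The construction of the Euclidean-normalized differential source counts described above can be sketched as follows. The function assumes that a visibility-area fraction has already been computed for every source; the argument names and bin settings are illustrative rather than the actual pipeline values.

```python
import numpy as np

def euclidean_counts(flux_jy, visibility_area_frac, omega_sr, nbins=10):
    """Euclidean-normalised differential source counts, as sketched in the text.
    flux_jy: integrated flux densities of the extragalactic point sources [Jy].
    visibility_area_frac: per source, fraction of the image area over which a
        source of that flux could have been detected (assumed precomputed).
    omega_sr: total image area in steradians.
    Returns bin-centre flux, S^2.5 * dN/dS, and Poissonian errors."""
    edges = np.logspace(np.log10(flux_jy.min()), np.log10(flux_jy.max()), nbins + 1)
    which = np.clip(np.digitize(flux_jy, edges) - 1, 0, nbins - 1)
    s_mean, counts, counts_w = np.zeros(nbins), np.zeros(nbins), np.zeros(nbins)
    for b in range(nbins):
        sel = which == b
        counts[b] = sel.sum()                                    # raw counts (errors)
        counts_w[b] = np.sum(1.0 / visibility_area_frac[sel])    # area-corrected
        s_mean[b] = flux_jy[sel].mean() if sel.any() else np.nan
    dS = np.diff(edges)
    dnds = counts_w / (omega_sr * dS)        # dN/dS [Jy^-1 sr^-1]
    norm = dnds * s_mean ** 2.5              # Euclidean normalisation
    err = norm / np.sqrt(np.maximum(counts, 1))
    return s_mean, norm, err
```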
Dedicated observations to search for pulsed emission can distinguish them, with the advantage that the observations will be target intended instead of a blind survey. Conclusions As part of the GLOSTAR survey, the VLA in its B-and Dconfiguration was used to observe a large portion of the Galactic plane in the C-band. In this paper we present the B-array observations covering the area within 28 • ≤ < 36 • and |b| < 1 • , which we previously investigated using data obtained in the D-configuration (Medina et al. 2019). Using a combination of automatic source extraction with BLOBCAT (Hales et al. 2012) and visual inspection we have identified 3325 radio sources. The catalog of these radio sources is divided in two parts. The catalog of highly reliable radio sources contains 1457 entries. Detailed properties of these sources are given, such as the positions, the signal-to-noise-ratio, integrated and peak fluxes, the ratio between these two values (also known as the Y-factor), the effective radii, and the spectral indices. The weak source catalog lists 1866 sources with a signal-to-noise ratio between 5 to 7. Only their basic properties such as positions, the signal-tonoise-ratio, integrated and peak fluxes are given. The highly reliable radio sources were further investigated. The positions of these sources were compared with the positions from the CORNISH catalog (Purcell et al. 2013) and radio sources detected with the VLBI from the Radio Fundamental catalog. We found that the positions of GLOSTAR sources are in agreement with those in these catalogs to better than 0. 1. We have also compared our integrated flux densities with those in the CORNISH catalog and conclude that the GLOSTAR integrated flux densities are accurate to within 10%, apart from clearly variable sources. From a comparison with the GLOSTAR D-configuration catalog, we find that 908 of them are related to 780 sources detected in the D-configuration images. In particular, 22 D-configuration sources are partially resolved and appear as fragmented sources in the new high resolution images. A total of 72 highly reliable B-configuration sources comprise these 22 fragmented sources. To further investigate the nature of the highly reliable radio sources, we have used information from surveys at infrared and sub-millimeter wavelengths as well as consulting the SIM-BAD database. The classification of the radio sources resulted in 93 HII region candidates, 64 PNe, 81 radio stars, and most of the remaining sources as EgCs. We compared our classification with the classification done to the D-configuration radio sources, and to the sources from the CORNISH survey. We find that the classification from the catalogs agree in more than 90% of the sources. An interesting result is that many sources classified as HII region candidates, however, have a negative in-band spectral indices suggesting that the radio emission is predominantly non-thermal and, thus, they are not the HII regions themselves, but likely related to nearby YSOs. These sources could be other radio emitter objects related to star formation. They deserve a further study using deeper and radio multi-wavelength observations to better characterize their radio emission nature, and the nature of the objects themselves. Finally, by comparing the integrated flux densities from GLOSTAR and CORNISH, whose observations are separated by 7 years, we have identified 49 variable sources, whose nature has to be explored for most of these sources. 
Acknowledgments This research was partially funded by the ERC Advanced Investigator Grant GLOSTAR (247078). AY would like to thank the help of Philip Lucas and Read Mike when using the data of the UKIDSS survey. RD and HN are members of the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. HB acknowledges support from the European Research Council under the Horizon 2020 Framework Program via the ERC Consolidator Grant CSF-648505. HB also acknowledges support from the DFG in the Collaborative Research Center SFB 881 -Project-ID 138713538 -"The Milky Way System" (subproject B1). VY acknowledge the financial support of CONACyT, México. This work (partially) uses information from the GLOSTAR database at http://glostar.mpifr-bonn.mpg.de supported by the MPIfR, Bonn. It also made use of information from the ATLAS-GAL database at http://atlasgal.mpifr-bonn.mpg.de/ cgi-bin/ATLASGAL_DATABASE.cgi supported by the MPIfR, Bonn, as well as information from the CORNISH database at http://cornish.leeds.ac.uk/public/index.php which was constructed with support from the Science and Technology Facilities Council of the UK. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This publication also makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. We have used the collaborative tool Overleaf available at: https: //www.overleaf.com/.
Generalized Linear Models outperform commonly used canonical analysis in estimating spatial structure of presence/absence data Background Ecological communities tend to be spatially structured due to environmental gradients and/or spatially contagious processes such as growth, dispersion and species interactions. Data transformation followed by usage of algorithms such as Redundancy Analysis (RDA) is a fairly common approach in studies searching for spatial structure in ecological communities, despite recent suggestions advocating the use of Generalized Linear Models (GLMs). Here, we compared the performance of GLMs and RDA in describing spatial structure in ecological community composition data. We simulated realistic presence/absence data typical of many β-diversity studies. For model selection we used standard methods commonly used in most studies involving RDA and GLMs. Methods We simulated communities with known spatial structure, based on three real spatial community presence/absence datasets (one terrestrial, one marine and one freshwater). We used spatial eigenvectors as explanatory variables. We varied the number of non-zero coefficients of the spatial variables, and the spatial scales with which these coefficients were associated and then compared the performance of GLMs and RDA frameworks to correctly retrieve the spatial patterns contained in the simulated communities. We used two different methods for model selection, Forward Selection (FW) for RDA and the Akaike Information Criterion (AIC) for GLMs. The performance of each method was assessed by scoring overall accuracy as the proportion of variables whose inclusion/exclusion status was correct, and by distinguishing which kind of error was observed for each method. We also assessed whether errors in variable selection could affect the interpretation of spatial structure. Results Overall GLM with AIC-based model selection (GLM/AIC) performed better than RDA/FW in selecting spatial explanatory variables, although under some simulations the methods performed similarly. In general, RDA/FW performed unpredictably, often retaining too many explanatory variables and selecting variables associated with incorrect spatial scales. The spatial scale of the pattern had a negligible effect on GLM/AIC performance but consistently affected RDA’s error rates under almost all scenarios. Conclusion We encourage the use of GLM/AIC for studies searching for spatial drivers of species presence/absence patterns, since this framework outperformed RDA/FW in situations most likely to be found in natural communities. It is likely that such recommendations might extend to other types of explanatory variables. INTRODUCTION Ecological communities tend to be spatially structured in response to environmental gradients that are themselves organized in space, or to spatially contagious processes such as growth, dispersion, and species interactions (Legendre & Legendre, 2012;Peres-Neto & Legendre, 2010). Thus, disentangling the causes of spatial structure and identifying spatial variability and different scales of organization in natural communities is a central question in ecology (Legendre, 1993). Answering this question requires the construction of explanatory variables based on spatial relationships among sites (Dray, Legendre & Peres-Neto, 2006). 
One approach extensively used to create spatial variables and/or control for spatial autocorrelation in residuals is an eigenvector-based method, called Moran's eigenvector maps (MEMs, Dray, Legendre & Peres-Neto, 2006). This method creates spatial explanatory variables representing structure on a range of spatial scales from the spatial relationships among sampling sites. These variables can be used for a broad range of goals, from controlling for phylogenetic autocorrelation in ecological data (Diniz-Filho et al., 2012) to searching for spatial structure in natural communities, even when irregularly sampled (e.g., Bauman et al., 2016;Neves et al., 2015). In many studies the response variables for which ecologists seek to find spatial structure are community composition datasets containing either abundances or presence/absence information (here, we focus on the latter). For community ecology studies, Redundancy Analysis (RDA) is one of the most popular strategies due to its versatile framework, wellestablished literature and abundant toolkits available for implementation (see (Blanchet et al., 2014) ; Borcard, Legendre & Drapeau, 1992;Saiter et al., 2015). The RDA algorithm searches for optimal linear combinations (in the leastsquares sense, see Legendre & Legendre, 2012) of the explanatory variables that best explain the variation in the transformed community composition data (Legendre & Gallagher, 2001;Borcard, Gillet & Legendre, 2011;(Blanchet et al., 2014)). The usual approach then consists of establishing the global significance of the relationship between the response matrix and all the explanatory variables, after which a subset of explanatory variables is usually selected by stepwise procedure such as Forward Selection (FW, sensu Blanchet, Legendre & Borcard, 2008a) The most common approach uses two thresholds for variable selection: a significance level α and the adjusted R 2 (see below and Blanchet, Legendre & Borcard, 2008a for details). This whole framework will hereafter be called RDA/FW for brevity. A statistic related to the Akaike Information Criteria (AIC, Akaike, 1973) has also been suggested for RDA model selection (Godínez-Domínguez & Freire, 2003), but it has been shown to perform poorly and will not be further explored here (Bauman et al., 2018a). However, methods based on least-squares such as RDA are unlikely to perform well when applied to data that violate the assumption of constancy in the mean-variance relationship. This assumption is usually violated by datasets containing many zeros including abundance (count or semi-quantitative) and presence/absence (binary) data. Data transformation does not always solve such problems (O'Hara & Kotze, 2010;Warton, 2018), although least-squares can give reasonably robust tests of the significance of regression coefficients (Ives, 2015). In general, algorithmic methods such as RDA do not take into account the statistical properties of the response variable, such as the distribution of variances and how the response changes along spatial/environmental gradients (Ferrier et al., 2007;Warton, Wright & Wang, 2012;Warton et al., 2015;Warton, 2018). More recently, Generalized Linear Models (GLMs) have been proposed as an alternative model-based approach to the analysis of presence/absence or count data (Wang et al., 2012;Warton et al., 2015;Yee, 2006). The use of GLMs has long been established for univariate analyses and related approaches for multivariate count data are now available (O'Hara & Kotze, 2010;Warton, 2018). 
The usual approach to selection of explanatory variables in this approach is Akaike's Information Criterion (AIC: Akaike, 1973;Wagenmakers & Farrell, 2004). This framework will hereafter be named GLM/AIC. Here, we compared the performance of the RDA/FW and GLM/AIC approaches to selecting spatial explanatory variables for community presence/absence data by measuring the proportion of spatial patterns contained in simulated communities they could correctly retrieve. There have been some studies of simulated multivariate count data (Warton, Wright & Wang, 2012), but presence/absence data are particularly important in spatial studies because they are often the only data that can be collected consistently over large spatial extents. We therefore compare the performance of RDA/FW and GLM/AIC methods for the selection of MEM spatial variables (including one special case, the asymmetric eigenvector maps or AEM) from realistic simulated presence/absence data. We used spatial variables as our predictors since we were interested in discovering whether varying the spatial scales in which communities were structured would affect model performance. We generated simulated data sets with predefined spatial structure based on three real data sets, under two different ecological interpretations of presence/absence data. First, we assumed that species are truly present at some sites and absent at others, and are detected if present (simulated presence method, SPM). Alternatively, absences may represent failure to detect species that are truly present. In this case, we simulated species abundances, followed by a simulated sampling step to obtain presence/absence data (simulated abundances method, SAM). Baseline datasets We compared the two approaches to spatial variable selection using simulated community data based on three real community composition datasets with a range of properties: 1. Presence/Absence of 110 marine benthic macroalgae species from a Rapid Assessment Program for biodiversity of 42 sample sites spanning roughly 2,000 km 2 at Ilha Grande Bay, Rio de Janeiro, Brazil (southwest Atlantic) (Carlos-Júnior et al., 2019), permit number IBAMA/RJ:031/04); 2. Presence/Absence of 588 plant species from grassland covering 500 km 2 of Scotland's coast. Data were collected from 3639 5× 5 m quadrats from 94 sites. We used sites as our sample units, treating species as present when they occurred in at least one quadrat at a site, and absent otherwise (see Lewis, Pakeman & Marrs, 2014) for more information); 3. Presence/Absence of 47 freshwater aquatic insect species collected from 30 sample sites in five tributaries of the Guapiaçú River basin, Brazil which covers about 40 km 2 (Feijó-Lima in prep, permit number INEA-RJ: 019-2014). For each of the datasets we used the geographical coordinates (maps and sampling sites in Fig. S1) to calculate spatial explanatory variables for regression ( Fig. 1). We chose MEMs as our spatial variables since they are commonly used to describe spatial structure in ecological studies. Moreover, in contrast to coarser methods such as trend-surface analysis, MEMs are a flexible method, capable of describing all spatial scales provided by the sampling design (Borcard, Gillet & Legendre, 2011). They are also more flexible and powerful than the method of principal coordinates of neighbor matrices (PCNMs, a special case of distance-based MEMs) (Bauman et al., 2018a;Bauman et al., 2018b;Borcard & Legendre, 2002;Dray, Legendre & Peres-Neto, 2006). 
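The MEMs used as explanatory variables are eigenvectors of a doubly centred spatial weighting matrix built from a connectivity matrix B and a weighting matrix A, both defined formally in the next paragraph. The analyses in this study were carried out in R; the Python sketch below is intended only to make the algebra concrete, and its distance-threshold connectivity and linear distance weights are arbitrary illustrative choices rather than the options used for the three datasets.

```python
import numpy as np

def mem_eigenvectors(coords, threshold):
    """Illustrative Moran's eigenvector maps from site coordinates.
    B: binary connectivity (sites closer than `threshold` are neighbours).
    A: connection weights (here 1 - d/max(d), one simple choice).
    W = B * A (Hadamard product) is doubly centred and eigen-decomposed;
    eigenvectors with positive eigenvalues model positive autocorrelation."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    B = ((d > 0) & (d <= threshold)).astype(float)   # connectivity matrix
    A = 1.0 - d / d.max()                            # weighting matrix
    W = B * A                                        # spatial weighting matrix
    n = len(coords)
    H = np.eye(n) - np.ones((n, n)) / n              # centring operator
    Wc = H @ W @ H                                   # double centring
    eigval, eigvec = np.linalg.eigh(Wc)
    order = np.argsort(eigval)[::-1]                 # broad scales first
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval > 1e-10                            # positive eigenvalues only
    return eigvec[:, keep], eigval[keep]

# Example with 30 random site coordinates:
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(30, 2))
mems, eigvals = mem_eigenvectors(xy, threshold=30.0)
print(mems.shape)   # (30, number of positive-eigenvalue MEMs)
```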
One needs two matrices to build the MEM variables for a given set of site coordinates: matrix B describing the connectivity among the geographical sampling sites and matrix A describing the weights of such connections. The Hadamard product of these two matrices generates the spatial weighting matrix (matrix W), which is then doubly centred and diagonalized, yielding eigenvectors to be used as spatial variables. For ecological studies, the processes of interest are usually those generating positive autocorrelation, and it is therefore common to use only MEMs associated with positive eigenvalues (as in this study). For studies in which negative spatial autocorrelation is also of interest (e.g., where negative interactions such as competitive exclusion, predation, etc are suspected), the eigenvectors associated with negative eigenvalues can also be separately used (Bauman et al., 2018a). We made decisions about B and A for each dataset based on our ecological knowledge of the spatial structure of these regions, since our goal was to simulate communities with ecologically sensible spatial structures. Therefore, for dataset 1 we chose the minimum spanning tree (B) with Euclidian linear distances as weights (A). Our decision was based on the shape of the bay and the fact that the main water movements make the sampling sites geographically compartmentalised in subregions where sites are likely to be minimally connected (Carlos-Júnior et al., 2019). Similarly, spatial organisation in dataset 2 could be sensibly described in terms of Delaunay Figure 1 Schematic diagram of the main steps used in this study to simulate community presence/absence data with pre-defined spatial structure. Data acquisition (I): We used real data from marine, terrestrial and freshwater communities and their respective sampling site coordinates as our baseline datasets. Obtaining response and predictor matrices (II): Those datasets were used to construct a response matrix of presence/absence data Y (1) and a matrix X of spatial explanatory variables called MEMs. The spatial variables were obtained from a pairwise site-by-site distance matrix A (2) (continued on next page. . . and a connectivity matrix B (3) describing the spatial relationship among sites (see main text for specific decisions for each dataset). The Hadamard product of these two matrices generates the spatial weighting matrix W (4), which is then doubly centred and diagonalised, yielding eigenvectors to be used as spatial variables, represented below by matrix X. Obtaining realistic coefficients for spatial variables (III): From a Generalized Linear Model (GLMs) for the relationship between Y and X (5) we obtained a matrix C of realistic regression coefficients (6). Using non-zero coefficients to model new presence/absence data with pre-defined spatial structure (IV): We sampled different numbers of non-zero coefficients from C under 14 distinct scenarios (see main text) to build a new matrix C* and then left-multiplied C* by X (7) to obtain matrix Y*. This matrix represented the logit predicted probabilities of presence or a matrix of log abundances, depending on which of two models that differed, respectively, in assumptions regarding absences as real (simulated presence model, SPM) or artifacts derived from poor sampling (SAM). From Y* we estimated (8) new presence/absence data Y* containing the spatial structure defined by C*. 
Using GLM/AIC and RDA/FW to select spatial models using the simulated presence/absence data (V): Finally, we regressed Y* against X using the GLM/AIC and RDA/FW frameworks (9) to assess which MEMs would be correctly selected by those two methods. The performance of each method was mainly assessed by the proportion of MEM variables that were correctly included or excluded from final models by each method (10). triangulation (B) with Euclidian weights (A) . Despite some degree of connectivity among all sites, pairs of sites could be mostly associated not to their immediate neighbours but rather as a function of their distances. This is due to cultural differences in land management. For example, northern and western islands share cultural histories, which is reflected in species composition (Lewis, Pakeman & Marrs, 2014). Directional spatial processes in ecological data, such as those observed in rivers, are well described by a special case of MEMs called asymmetric eigenvector maps (AEM, Blanchet, Legendre & Borcard, 2008b), which were used for constructing variables for dataset 3. In MEMs, larger eigenvalues are associated with broader-scale spatial structures while smaller eigenvalues represent fine-scale spatial structures. This allowed us to control the spatial scale of variation in community structure. Simulating communities with chosen spatial drivers We simulated realistic communities with known spatial structure, based on the three datasets. We used spatial eigenvectors as explanatory variables. We varied the number of MEMs with non-zero coefficients and created new binary (presence/absence) communities (with the same number of sites and same expected number of species as the real ones) using two different modelling scenarios. These simulated communities reflected the effect of those MEMs with non-zero coefficients. By varying the number and ordering of the non-zero coefficients, we could therefore control the spatial structure and scale of the simulated community data (see scheme in Fig. 1 and Table 1). In order to simulate new binary communities under the simulated presence method (SPM, in which species are always detected if present), we first estimated a coefficient matrix C of size (m variables + 1 (first) row with intercepts) × p species from each real data set. This was achieved using the manyglm function with binomial errors in R package mvabund Table 1 Simulation scenarios for the three datasets as described in main text. Distribution of MEM variables with non-zero coefficient under each simulation scenario in all three datasets (A = marine algae from Ilha Grande Bay, m = 16; B = Scotland grasslands, m = 30; C = freshwater insects, m = 12). Rows and columns define all simulation scenarios regarding the number of variables to be used and their position. Rows represent the number of non-zero variables to be included based on set K (see main text), whereas columns define the scaling of these non-zero variables, i.e. position to which those non-zero variables would be assigned. Scaling 1 assigned non-zero coefficients only to MEMs associated with larger eigenvalues representing broader spatial scales. Scaling 2 assigned non-zero coefficients only to MEMs associated with smaller eigenvalues, representing finer spatial scales. Scaling 3 assigned non-zero coefficients to MEMs representing a range of spatial scales. Cells contain sets of indices of explanatory variables. When nVar=0, none of the variables had non-zero coefficients. 
The manyglm fits (mvabund version 3.11.9, Wang et al., 2012) used the explanatory matrix X (n sites × m positive MEMs, plus an initial column of 1s). The matrix C gives the effect of each explanatory variable on the logit-transformed probabilities of presence. The mvabund package provides a GLM framework for multivariate response data. We then created new hypothetical scenarios by generating a new coefficient matrix C*, of the same size as C, whose elements c*_kj are given by

c*_kj = c_1j for k = 1 (intercept row); c*_kj = b*_kj, with b*_kj drawn from F̂_b, for k ∈ K; and c*_kj = 0 otherwise, (1)

where F̂_b is the empirical distribution function of the estimated coefficients c_kj (k = 2, 3, ..., m + 1; j = 1, 2, ..., p) (Evans, Hastings & Peacock, 2000), and the b*_kj are sampled with replacement. The set K defines to which rows of C* the non-zero coefficients were allocated: we studied 14 such sets (see below and Table 1(A-C)). In other words, we used the originally-estimated intercepts in each simulation (first row of Eq. (1)), and drew those coefficients assigned to non-zero values (second row of Eq. (1)) from the empirical distribution of all the originally-estimated explanatory variable coefficients. We sampled the values of the non-zero coefficients from the empirical distribution in order to simulate plausible but not fixed spatial structures. Table 1 depicts how the non-zero coefficients were assigned for each dataset and simulation scenario. We then calculated predicted probabilities of presence p̂_ij for the jth species at the ith site. Given the matrix Ŷ = XC* (n sites × p species) of predicted logit probabilities of presence, the predicted probability of presence is

p̂_ij = exp(Ŷ_ij) / (1 + exp(Ŷ_ij)).

The simulated presence/absence value for species j at site i was sampled from a Bernoulli distribution with success probability p̂_ij. The result is a community matrix with the same number of sites and the same expected number of species as the real community, and with realistic coefficients for spatial eigenvectors. As in the maximum likelihood estimation done by manyglm (Wang et al., 2012), species and sites were assumed conditionally independent when generating simulated presence/absence data, given the values of the explanatory variables. Our simulated communities correspond to the simple case in which presence/absence patterns are affected by environmental variables but not interspecific interactions. Nevertheless, interspecific interactions could well be relevant to real-world systems and other models (Godsoe & Harmon, 2012; Anderson, 2017). Since GLMs are specified correctly for presence/absence data generated this way, we would expect them to perform well. We therefore devised a second ecologically meaningful simulation method in which absences arise from the sampling protocol, called the simulated abundance method (SAM). The two simulation methods differ in whether they assume we have true absences or sampling-related absences. Note that it is not possible to simulate binary data directly using RDA, because RDA does not generate predicted probabilities of presence. Instead, we treated Ŷ as log expected abundances and exponentiated each element to get expected abundances λ. Then we calculated the probability of detecting the species under Poisson sampling (i.e., the probability of drawing a value of at least 1 from a Poisson distribution with parameter λ_ij), which is

p̂_ij = 1 − exp(−λ_ij).

Finally, we generated a Bernoulli random variable with success probability p̂_ij to produce a simulated presence-absence observation. Both GLM and RDA are mis-specified for data generated in this way.
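The two simulation schemes can be summarised in a few lines of code. The sketch below assumes that the design matrix X (intercept column plus MEMs) and a coefficient matrix C* with the chosen non-zero rows are already in hand; the coefficient values shown are arbitrary placeholders rather than the manyglm estimates used in the actual simulations, which were implemented in R.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_presence_absence(X, C_star, method="SPM"):
    """Simulate a binary site-by-species matrix with known spatial structure.
    X: (n sites) x (1 + m MEMs) design matrix with a leading column of ones.
    C_star: (1 + m) x (p species) coefficient matrix; rows of zeros correspond
        to MEMs that play no role in structuring the community.
    method: "SPM" treats Y_hat as logit probabilities of true presence;
            "SAM" treats Y_hat as log expected abundances and derives the
            probability of detecting at least one individual under Poisson
            sampling."""
    Y_hat = X @ C_star
    if method == "SPM":
        p = 1.0 / (1.0 + np.exp(-Y_hat))   # inverse logit
    else:                                  # "SAM"
        lam = np.exp(Y_hat)                # expected abundances
        p = 1.0 - np.exp(-lam)             # P(Poisson count >= 1)
    return rng.binomial(1, p)              # Bernoulli draw per cell

# Toy usage: 30 sites, 12 MEMs, 40 species, 5 broad-scale MEMs with non-zero rows
n, m, p_spp, n_var = 30, 12, 40, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, m))])    # stand-in MEMs
C_star = np.zeros((m + 1, p_spp))
C_star[0] = rng.normal(-1.0, 0.5, p_spp)                      # intercepts
C_star[1:1 + n_var] = rng.normal(0.0, 1.0, (n_var, p_spp))    # non-zero MEM rows
Y_spm = simulate_presence_absence(X, C_star, "SPM")
Y_sam = simulate_presence_absence(X, C_star, "SAM")
print(Y_spm.shape, Y_spm.mean(), Y_sam.mean())
```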
Codes for both the SPM and SAM simulation frameworks and all the datasets used in our simulations are available as supplemental information (Data S1-Data S3). We compared GLM and RDA variable selection under up to 14 different scenarios, differing in the number of non-zero coefficients (nVar) and whether these coefficients were associated with fine or broad spatial scales. We simulated up to six different choices of the number of MEM variables creating the spatial structure in the data (i.e. having non-zero coefficients): none, approximately one sixth, approximately one third, approximately half, approximately three-quarters, and all (Table 1(A-C), rows). We also simulated three different spatial scales of the patterns. As mentioned above, MEMs associated with larger eigenvalues represent broader spatial scales. We ordered the MEMs in descending order of eigenvalues and arranged the non-zero coefficients within matrix C * in three different ways (Table 1 (A-C), columns): only broad-scale MEMs with non-zero coefficients (scaling 1); only fine-scale MEMs with non-zero coefficients (scaling 2); half broad-scale, half fine-scale (scaling 3). Because not every combination of number of non-zero coefficients and spatial scaling is possible (e.g., it is not possible to assign one non-zero coefficient in scaling 3), there were 14 possible combinations overall for each dataset ( Table 1). The main steps of the simulation scheme are summarized in Fig. 1. RDA and GLM We used the default RDA function from the R package vegan (version 2.5-6, Oksanen et al., 2019), with simulated community composition as the response variable, and MEMs associated with positive eigenvalues generated from geographical coordinates of the sample sites as explanatory variables. In order to perform a transformation-based RDA (Borcard, Gillet & Legendre, 2011;Blanchet et al., 2014) we used the Ochiai coefficient, which is the Hellinger transformation analogue for binary data, as recommended by Legendre & Gallagher (2001) and Borcard, Gillet & Legendre (2011). Binomial GLMs were fitted to the same data using the manyglm function in R package mvabund (Wang et al., 2012). We fitted our models using a logistic regression (logit link function for binomial response), with species compositional data as the multivariate response variable and MEMs as predictors. No interaction terms were included, following common practice in spatial modelling of community data. Comparing model selection between RDA and GLM frameworks We compared the results of model selection between the approach usually taken in the RDA and a somewhat-similar approach for GLMs. For RDA, we used the forward selection with double stopping criterion following (Bauman et al., 2018a;Bauman et al., 2018b), beginning with a global test of significance (model with all spatial predictors) and carrying on with the variable selection if the global model was significant. The forward selection itself consists of a stepwise procedure including in the model the variable contributing the most to the adjusted R 2 . The procedure stops either when the next variable with the highest contribution is not significant (first stopping criterion) or causes the adjusted R 2 to be bigger than that of the global model (i.e., containing all variables; second criterion). This is implemented in the function ordiR2step in the vegan package (Oksanen et al., 2019). 
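For comparison with the ordiR2step procedure above, the AIC-based forward selection used on the GLM side (described in the next paragraph) adds, at each step, the MEM that most reduces the AIC summed over all species, and stops when no candidate lowers it further. The sketch below illustrates only this selection logic, fitting one binomial GLM per species with statsmodels in Python; the study itself used the manyglm framework in mvabund (R).

```python
import numpy as np
import statsmodels.api as sm

def aic_sum(Y, X_sel):
    """Sum of AICs of independent binomial GLMs, one per species (column of Y)."""
    design = sm.add_constant(X_sel) if X_sel.shape[1] > 0 else np.ones((len(Y), 1))
    total = 0.0
    for j in range(Y.shape[1]):
        res = sm.GLM(Y[:, j], design, family=sm.families.Binomial()).fit()
        total += res.aic
    return total

def forward_select_aic(Y, X):
    """Forward selection over the columns of X (the MEMs): start from an
    intercept-only model and add one variable at a time while the summed
    AIC keeps decreasing. A sketch of the selection logic only."""
    selected, remaining = [], list(range(X.shape[1]))
    best = aic_sum(Y, X[:, []])                        # intercept-only model
    improved = True
    while improved and remaining:
        improved = False
        scores = {k: aic_sum(Y, X[:, selected + [k]]) for k in remaining}
        k_best = min(scores, key=scores.get)
        if scores[k_best] < best:                      # keep adding while AIC drops
            best = scores[k_best]
            selected.append(k_best)
            remaining.remove(k_best)
            improved = True
    return selected

# e.g. forward_select_aic(Y_spm, X[:, 1:]) with the toy data from the previous sketch
```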
For GLM, we used forward selection with a stopping rule based on minimum Akaike Information Criterion (AIC) (Akaike, 1973;Wagenmakers & Farrell, 2004). The selection procedure started from a model with intercept only and added one explanatory variable at a time, until no further improvement in the sum of AIC over each of the response variables was possible. We used this approach because the usually large number of MEMs makes it difficult to compare the AIC sum over all possible GLMs. The performance of each method on simulated data was mainly assessed by two criteria. First, we assessed how many MEMs with zero coefficients were incorrectly included in the final model. Second, we assessed how many MEMs with non-zero coefficients were incorrectly excluded from the final model. Also, we assessed overall accuracy (score) as the percentage of MEMs whose inclusion/exclusion status was correct. The goals of ecological studies are usually not directly related to the inclusion/exclusion of individual MEM variables, but instead to identify spatial pattern, represented by a linear combination of MEMs. However, since the MEMs form a basis for the space spanned by the transformed spatial weighting matrix, such a linear combination is unique (Fraleigh & Beauregard, 1995, pages 197-198). Furthermore, the MEMs are orthogonal, so that each represents a qualitatively distinct aspect of spatial pattern. Therefore, if an individual MEM is incorrectly included or excluded, the estimated spatial pattern is qualitatively wrong. We further explored the ability of each method to capture spatial pattern using a graphical approach (Article S1). For each real dataset and each method, we haphazardly picked one simulated data set. We plotted the MEM decompositions of both the true and estimated spatial patterns. We chose the scenarios in which each method had the worst performance in terms of correctly including/excluding variables, in order to determine whether in such cases, overall spatial pattern would still be captured. Finally, we calculated how much of the variation in response variables was explained by each method using the adjusted R 2 for the linear model in RDA and its analogue for GLMs, the D-value (Tjur, 2009). These two values cannot be directly compared since they are not exactly equivalent, but their results could yield interesting insights and are made available as supplemental information (see table results in Data S4). For each of the combinations of conditions in Table 1, 1,000 simulated data sets were generated under each of SPM and SAM. For each simulated data set, spatial explanatory variables were selected using both GLM/AIC and RDA/FW. RESULTS Overall, GLM/AIC outperformed RDA/FW in selecting spatial explanatory variables when data were simulated under either SPM or SAM in all three scaling patterns (Fig. 2). In general, GLM/AIC had fairly predictable performance: it performed nearly perfectly when few or none of the available variables had non-zero true coefficients (i.e., nVar = 0, It is also noteworthy that when the model had a smaller number of variables to select from (River dataset 3 with 12 MEMs), scores in GLM/AIC were higher, with virtually no incorrect inclusion of variables, and incorrect exclusion of variables occurring on average in only approximately 6% of all 14000 simulations over the whole set of replicates (Fig. 3E). 
Under the same conditions, RDA/FW's rate of success was approximately 81%, incorrectly including variables at a rate of 18% (incorrect exclusions represented less than 1%), as depicted in Fig. 3E. Under both the SPM and SAM simulation methods, GLM/AIC differed substantially from the RDA/FW framework with regard to the type of errors it most often produced. GLM/AIC had virtually no incorrect inclusion of variables (Fig. 3, blue). However, when nVar = [3m/4] or nVar = m, some variables that should have been included in the final model were left out. Nevertheless, GLM/AIC never had less than around 90% accuracy over all three datasets (overall mean = 96 ± 1.3%, against 71 ± 1.7% for RDA/FW). On the other hand, RDA/FW often included more variables in the model than it should have (Fig. 3, red). Such errors especially occurred when 0 < nVar ≤ [3m/4]. Under some conditions, up to one third of the variables selected by RDA/FW had zero coefficients.

(Figure 3 caption; y-axis: frequency (proportion). Differences in performance between the GLM/AIC and RDA/FW frameworks regarding the proportion of incorrect inclusions/exclusions of explanatory variables across 1,000 simulations for each method. Panels A, C and E depict results where community presence/absence data were simulated directly from real coefficients (SPM, see main text), whereas B, D and F show simulation results where presence/absence data were estimated from expected abundances (SAM). Panels A and B depict results for simulated data based on subtidal macroalgae in Ilha Grande Bay; C and D represent data based on plant species from Scottish grassland; and E and F represent data based on aquatic macroinvertebrate insect species from a river in Brazil. Darker lines represent mean values. Full-size DOI: 10.7717/peerj.9777/fig-3)

MEM decompositions of true and estimated spatial structure provided a visual assessment of the extent of the misspecification yielded by each method (Article S1). In all three datasets, the worst performance of GLM/AIC corresponded to those models in which it should have included all MEM variables (Fig. 2). Those scenarios represented communities structured at all spatial scales (broad, intermediate and fine). Despite incorrectly excluding several individual variables, GLM/AIC was capable of selecting subsets of variables that corresponded to all those scaling categories (Articles S1.2-S1.7). In contrast, RDA/FW performed worse when there were few spatial variables (nVar = 5, nVar = 10 and nVar = 2 for datasets 1, 2 and 3, respectively). Under those conditions, incorrect inclusion of variables also resulted in the inclusion of incorrect spatial scales. For example, in one simulation from dataset 1 (Article S1.8), the true spatial structure contained only five MEMs describing finer-scale spatial patterns (scaling 2 = MEMs 12-16). However, the final model selected by RDA/FW included 13 variables describing both broad (MEMs 1-6) and intermediate spatial scales (MEMs 9, 11), along with the correct ones (Article S1.9). Similar results were found in all three datasets (Articles S1.10-S1.13). Moreover, these incorrect inclusions of individual variables by RDA/FW resulted in the inclusion of MEM variables associated with eigenvalues substantially different from the correct ones, representing spatial scales much larger than those actually present in the data (Article S1.14). For reasons of space, we plotted only one failure example from each dataset for both GLM/AIC and RDA/FW.
However, the correct spatial structures within the simulated communities and the structures retrieved by both methods in all our simulation scenarios are available as supplemental data (Data S5).

DISCUSSION

Here, we showed that a GLM/AIC-based method for finding spatial structure in communities outperformed an RDA/FW-based method for presence/absence data simulated under two different, ecologically plausible scenarios about how absences arise. We based our simulated datasets on real datasets from marine, terrestrial and freshwater systems. Notably, differences in assumptions about how absences arise made little difference to performance. This might be due to the structure of our community presence/absence datasets, which (like most ecological datasets) had many rare species and, therefore, many expected abundances close to zero. In such cases, the relationship between the community data and explanatory variables could be approximated by a binomial GLM with a logit link function, even if this was not the correct model (as in the SAM simulations). We therefore focus below on general patterns that apply equally to both assumptions about absences, rather than on the details of these assumptions.

In selecting spatial explanatory variables, GLM followed by AIC-based model selection (GLM/AIC) performed better than the widely used approach of RDA followed by forward selection (RDA/FW). Not only did GLM/AIC have better performance overall, but its performance varied little between simulation conditions (Fig. 2). In contrast, RDA/FW performed unpredictably, but often retained too many explanatory variables (Fig. 3). The problems that arise when data with non-Gaussian error distributions, such as classic community presence/absence data, are analysed in a linear modelling framework are not new to science (Legendre & Gallagher, 2001; McCullagh & Nelder, 1989; Wolda, 1981). Classical linear models such as RDA (Legendre & Anderson, 1999; Legendre & Legendre, 2012) make assumptions about constancy of variance in the data (Ter Braak & Prentice, 1988) that cannot be true for presence/absence data, even after data transformation (O'Hara & Kotze, 2010; Warton, 2018; Warton, Wright & Wang, 2012). The problem may be negligible in some hypothesis-testing situations (Ives, 2015). Regardless, incorrectly assuming linearity (and constant variance) may lead to serious problems. Unfortunately, RDA is an algorithmic method that makes implicit decisions about the distribution of variances (Ter Braak & Prentice, 1988; Warton, Wright & Wang, 2012) and does not provide the flexibility to separate systematic variation from random variation in the way that statistical models such as GLMs do (Warton et al., 2015; and see O'Neil & Schutt, 2013, for differences between algorithms and statistical models). New frameworks, such as GLMs with spatially structured random effects (followed by variation partitioning to find environmental and spatial components), have also been specifically proposed as a model-based alternative to MEMs (Ovaskainen et al., 2017). Despite recent advances showing that better estimates can be obtained by using sensible selection procedures, manipulating the data appropriately and/or splitting the analysis of the response data over shorter spatial/environmental gradients (Bauman et al., 2018a; Ives, 2015; Vieira et al., 2019), employing statistical models that match the distribution of the response data is better practice in most cases (Ferrier et al., 2007; Warton, 2018; Warton et al., 2015).
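To make the constant-variance point above concrete (a standard textbook result rather than a finding of this study): a presence/absence observation is a Bernoulli variable, so if a species occurs at a site with probability π, then Var(Y) = π(1 − π). The variance is therefore tied to the mean and changes wherever expected occurrence changes, which is why no fixed transformation of 0/1 data can satisfy the constant-variance assumption of a linear model such as RDA.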
Another relevant aspect of the general performance of the two methods concerns the peaks of performance in detecting spatial structure. The scores of the GLM/AIC framework were close to ideal across datasets when the number of variables that should be selected was zero or small relative to the number of variables available. Performance only declined when many or all of the available variables should have been retained in the final model. Thus, if a few variables are responsible for most of the spatial structure in community composition, GLM/AIC will usually outperform RDA/FW (Fig. 2). Considering that, in many biological systems, the majority of effects may derive from a small number of causes (Sullivan, 2019), GLM/AIC could presumably perform well on many real systems. On the other hand, RDA/FW worked best precisely in situations thought unlikely in real systems: when no spatial structure is present among communities (where GLM/AIC also performed equally well), or when composition is structured at all possible spatial scales (i.e., nVar = 0 and nVar = m, respectively). Moreover, when the model had a small number of variables to select from (River dataset, Figs. 3E-3F), the performance of RDA/FW was highly variable.

The two approaches also differed in the ways they failed. GLM/AIC more often included too few variables, while RDA/FW more often included too many. This was consistent among all three datasets under the SPM and SAM simulations (Fig. 3) and contrasts with results from previous studies in which GLMs produced higher Type I error rates than a linear model (Ives, 2015). For beta-diversity studies, where the aim is to identify the most important variables associated with differences in community composition, leaving out a few variables that affect composition is better, in our opinion, than including many variables whose effects are not important. Conversely, in other scenarios, such as when one tries to select pivotal attributes that could be important for the conservation of a population or community, it might be better to accept a higher risk of including spurious variables. Furthermore, model selection problems involve a trade-off between bias and variance, with the inclusion of unnecessary variables inflating the uncertainty in parameter estimates (Miller, 1990). Using AIC is often a good way to deal with this trade-off (Anderson, Burnham & Thompson, 2000), and in our simulations an AIC-based approach worked well. Thus, we suggest that GLM/AIC will usually outperform RDA/FW in selecting spatial explanatory variables for presence/absence community composition data. Unfortunately, AIC-like statistics are not recommended for constrained ordination methods such as RDA, and therefore their use cannot be trusted (see below and Bauman et al., 2018a for details). When different RDA-based procedures were systematically compared, the commonly (mis)used combination of RDA and AIC model selection produced the worst results, yielding inflated Type I error rates (Bauman et al., 2018a). Therefore, the benefits of AIC in dealing with the bias-variance trade-off do not apply to RDA or related ordination methods. Although our simulations focused on particular attributes of the MEMs, such as differences in model performance across spatial scales, we hypothesize that the results demonstrated here also hold for other types of explanatory variables (e.g., environmental) that we did not test.
The spatial scale represented by the MEMs had a negligible effect on GLM/AIC's performance, with only one condition in one dataset showing slightly different results across scales (see Fig. 4, when the number of non-zero coefficients is [3m/4]). In contrast, RDA/FW's performance was strongly affected by spatial scale (Fig. 4). In real systems, where the spatial scale at which community composition varies is not known a priori, the performance of RDA/FW could therefore be unpredictable. The uncertainty around RDA/FW performance across spatial scales could be especially troublesome for analyses involving processes that may not be constant along spatial/environmental gradients, as is commonly observed for rates of species turnover, for example (Ferrier et al., 2007; Fitzpatrick et al., 2013).

CONCLUSIONS

We discourage the use of traditional RDA/FW to search for spatial descriptors of variation in multivariate presence/absence data sets of moderate size, although large datasets could potentially overcome the issues found here. Instead, we recommend the GLM/AIC framework, in which the relationship between the response and its predictors is modelled in a way that respects the nature of the response. Similar recommendations are likely to apply to other forms of community abundance data with non-normal error distributions (e.g., count data with many zeros, or proportional data; Bolker et al., 2009; Warton, Wright & Wang, 2012; Warton et al., 2016).
Worldwide Research Trends in Landslide Science

Landslides are generated by natural causes and by human action, causing various geomorphological changes as well as physical and socioeconomic losses to the environment and to human life. The study and characterization of landslides, and the implementation of appropriate techniques, are essential to reduce land vulnerability and the susceptibility of different socioeconomic sectors, and to guarantee better slope stability, with a significant positive impact on society. The aim of this work is a bibliometric analysis of the different types of landslides emphasized by the United States Geological Survey (USGS), using the Scopus database and the VOSviewer software version 1.6.17, to analyze the field's structure, scientific production, close relationships with several scientific fields, and trends. The methodology focuses on: (i) search criteria; (ii) data extraction and cleaning; (iii) generation of graphs and bibliometric mapping; and (iv) analysis of results and possible trends. The study and analysis of landslides are in a period of exponential growth, focusing mainly on techniques and solutions for the stabilization, prevention and categorization of the most susceptible hillslope sectors. This research field therefore enjoys broad collaboration among authors and places a significant focus on the conceptual evolution of landslide science.

Introduction

Landslides are disasters that damage anthropic activities and cause innumerable losses of human life globally [1]. Mass movement processes cause significant changes in the Earth's relief, causing economic losses due to landslides in densely populated mountainous areas [2,3], and even direct and indirect costs to buildings or infrastructure at the urban scale [4][5][6]. In the evolution of reliefs, landslides are considered intrinsic processes which, among other dynamics, favour the formation of valleys [7], the contribution of river sediments, and ecological renewal. The degree of physical, biological and chemical weathering, earthquakes, and extraordinary rains (among other natural processes) can cause slope instability [8,9]. Landslides have caused costly damage and loss of life worldwide, yet the most devastating disasters occur in developing countries [10]. Therefore, the implementation of …

The academic field of landslides is broad, and some researchers have made efforts to understand its structure [37], addressing literature reviews [11] and classifications [36,38,39], as well as the bibliometric analysis of various landslide concepts through the Science Citation Index-Expanded (SCIE) and Social Sciences Citation Index (SSCI) databases [13]. Over time, various studies have been carried out regarding landslides, but very few have highlighted its structure and intellectual growth. A new bibliometric study would therefore allow a fresh approach to its structure and an update on its different research scopes. The use of bibliometric methods is well suited to the analysis of scientific activity in an academic field. Bibliometric analysis was initially set out by Derek J. de Solla Price in 1965 [40]. The approach focuses on the quantitative evaluation of an academic field of study by analyzing its structure, characteristics and existing relationships, which allows its behaviour across the disciplines of a specific field of study to be examined [41,42].
Bibliometric analysis allows the identification of research areas (current and future) and the analysis of their multidisciplinary production, achieving a more systematic and comprehensive evaluation of the field of study [43,44]. This raises the research question: how has the intellectual/conceptual structure of the various types of landslides developed over time? The present study aims to evaluate the intellectual structure of landslide research through performance analysis and bibliometric mapping, in order to determine the development, patterns and trends of its scientific structure, analysing the scientific production and intellectual structure of the field and providing a transparent, updated, reliable and high-quality study for transdisciplinary use.

This study is structured in five sections. It starts with an introductory framework of the problem, highlighting the objective and research question addressed at the end of this work, followed by Section 2, in which the materials and the implemented methodology are described (three phases: research criteria and source identification, software and data extraction, and data analysis and interpretation). Section 3 presents the results and their analysis, which are then discussed in Section 4, and, finally, Section 5 concludes with the scientific trends of this research field.

Materials and Methods

A systematic review allows an exploration of the intellectual territory of existing studies on a stated problem, evaluating the contributions and synthesizing the data obtained to provide reliable knowledge of a particular field of study [45,46]. This exhaustive and rigorous procedure is similar to the protocol used in bibliometric analysis [47,48]. Bibliometric analysis allows the evaluation of the scientific production of journals [49,50] or an understanding of the intellectual structure of various fields of knowledge such as management [51][52][53], the environment [54][55][56], natural science [57] and health [58], employing analytical techniques that allow an exploration of research trends and the interpretation of new perspectives in the field [59,60]. The methodology proposed in this work is shown in Figure 2. Its structure comprises three phases that allow the proposed bibliometric analysis to be carried out: (i) research criteria; (ii) reprocessing of data and software; and (iii) analysis and interpretation of data.

Phase I. Research Criteria and Database Use

For this research, a bibliographic search on the classification of landslides was established based on the internal mechanics of the mass movement. These requirements are encompassed by the USGS, which establishes a classification according to the internal mechanics present in landslides: fall, topple, slide, spread and flow [36]. The selection of these terms allows the compilation of the base documents to be considered in this study. The selection of documents should be based on a reliable, high-quality database with comprehensive coverage. The databases commonly used for bibliometric studies are the Web of Science and Scopus, which differ in volume of information, journal coverage and subject areas [61]. The Scopus database was selected due to its comprehensive coverage in years and journals in various areas of knowledge [62][63][64][65], an intuitive search system, easy data download and high quality standards [66,67], which allows a more precise bibliometric evaluation in the domain of any subject to be analyzed.
The search carried out in Scopus focuses on the titles of publications that contain the term "landslide" together with the terms fall, topple, slide, spread and flow. The search string is as follows: (TITLE (fall*) OR TITLE (topple*) OR TITLE (slide*) OR TITLE (spread*) OR TITLE (flow*) AND TITLE (landslide*)). The landslide research field is vast, so it is necessary to obtain more exact results and synthesize the study approach; therefore, the search in Scopus focuses only on the titles of publications containing the previously established terms [68,69]. In this way, a total of 661 publications were obtained, to which inclusion criteria such as all types of document, language, years and study area were applied [13], in addition to an exclusion criterion for the year 2021 (a year still in progress), yielding a final database of 641 documents.

Phase II. Data and Software Reprocessing

The selected records were downloaded in csv format (comma-separated values) from the Scopus database for analysis using Microsoft Excel software from Office 365 ProPlus [70]. Since the downloaded database contains thousands of records across various variables (e.g., author names, document title, year, keywords, abstracts, among others), a review and cleaning of the data are required to ensure precise analysis results [71,72]. Cleaning consists of eliminating duplicated values and incomplete or erroneous records that cannot be completed manually [73]. A total of nine records were deleted, leaving 632 documents to be analyzed. The new csv files were entered into VOSviewer, an open-access and reliable software package that allows the construction and visualization of bibliometric networks in various fields of study, enabling comprehensive bibliometric mapping in any research branch [74,75]. This software allows an analysis of the structure of the research field through co-occurrence [76], co-citations [77][78][79][80] and bibliographic coupling [81]. It has been used in different scientific areas such as sustainability [82], natural and cultural resources [83], geosciences [55,84], medicine [76] and the circular economy [85], among others. This analysis is carried out only for articles in English, giving a total of 354 documents.
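As a minimal illustration of the cleaning step described in this phase (an assumed R sketch with generic Scopus export column names such as "Title" and "Year" and a hypothetical file name; the authors themselves worked in Excel):

scopus <- read.csv("scopus_export.csv", stringsAsFactors = FALSE)    # hypothetical export file
scopus <- scopus[!duplicated(tolower(scopus$Title)), ]               # remove duplicated titles
scopus <- scopus[complete.cases(scopus[, c("Title", "Year")]), ]     # drop incomplete records
scopus <- scopus[scopus$Year != 2021, ]                              # exclusion criterion: year still in progress
nrow(scopus)                                                         # documents retained for analysis

The cleaned table can then be exported again as csv and loaded into VOSviewer for mapping.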
Phase III. Data Analysis and Interpretation

The results were examined using the two classic approaches to bibliometric analysis: performance analysis and science mapping [42,86].

• Performance analysis allows an evaluation of scientific production (authors, countries, journals) and its scientific impact [87,88];
• science mapping allows the graphic representation of the cognitive structure of the study field and its evolution [41,89].

A triangulation method is applied that allows this structure to be analyzed by examining its micro (keywords), meso (articles and authors) and macro (journals) components [90].

Scientific Production

From 1952 to 1990 (Figure 3), landslides were analyzed from a descriptive perspective, considering the internal mechanics and the type of mass movement generated according to the lithology and the material involved [91][92][93]. Their leading causes, such as the hydraulic gradient and earthquakes, were determined [94][95][96][97]. This period also saw the beginning of geotechnical and geomorphological studies and the elaboration of models to understand the internal mechanics of the different triggered landslides [93,98,99]. Given this analysis, this period can be considered the beginning of the studies that would become the basis for further research.

Period I (1990-2000) focuses on research related to debris flows, generating models for the understanding and prediction of landslides and of the volume of material deposited in a sector [100,101]. It considers different aspects such as the mechanical process of mass movement [102,103], field data (rainfall, vegetation cover, slope inclination, distance, elevation) and the coefficient of internal friction, among others [104][105][106][107]. This period is the basis for the continuing studies and analysis of later landslide models. In period II (2001-2010), exponential research growth and a significant focus on the classification of landslides are observed. These classifications focus on the engineering perspective and the speed of the landslide for the elaboration of physical models [108], considering the material involved (gravel, sand, silt and clay) and its variations (debris, earth and mud, peat and rock), thereby formalizing definitions that allow the types of landslides present to be identified [109][110][111][112]. In 2008, a study relevant to the global analysis of rainfall was presented, which made it possible to study rainfall and its influence on shallow landslides and debris flows [113]. These studies underpin landslide warning systems throughout the world [114][115][116]. From this, mathematical prediction models have come to be considered of great importance worldwide, calculating and predicting the trajectory, speed and depth of landslides [117][118][119]. Finally, period III (2011-2020) focuses on the improvement and combination of different numerical models, managing to represent the reality of the environment and the mechanical behavior of landslides for their respective field analysis and risk assessment [120][121][122][123]. In this way, at the end of this period, these investigations and improved models allow the behavior of different landslide types to be understood [124][125][126]. In addition, the geomorphological, tectonic and hydrodynamic processes involved in mass movement were explained in detail [127,128]. Different experimental research was conducted considering pore fluid pressure, grain type, rainfall and a large amount of on-site and laboratory investigation, assuring the validity of the results [129][130][131][132][133][134].

Language and Types of Documents

In the areas of knowledge related to Life Science and Earth Science, the English language is predominant [135]. Landslide research is no exception: despite studies in 15 languages, 81.8% of the studies are written in English. This predilection for English is due to its relevance in scientific communication, the overrepresentation of English-language journals, and its role as the common nexus for international collaboration [136,137]. The second language is Chinese (13.45%), owing to high national collaboration on topics of debris flow and flow-type landslides in nationally indexed journals (e.g., Yantu Lixue/Rock and Soil Mechanics, Yanshilixue Yu Gongcheng Xuebao/Chinese Journal of Rock Mechanics and Engineering, Journal of Natural Disasters). Another characteristic of landslide studies is that they mostly consist of journal articles (74%), since these documents are considered certified knowledge, having been examined by peer reviewers with expertise in the field [138].
Other types of documents are shown in Figure 4.

Contribution by Country

The analysis of the contribution of countries allows us to understand their relationships in knowledge generation [87]. This output was developed through the collaboration of 64 countries (see Figure 5), with most of the research related to developed countries. The map was generated with ArcMap 10.5 software, using data from the authors' affiliations. China has the most significant academic contribution on landslides (Figure 5), collaborating with 47 countries, especially Italy, the United Kingdom and the United States. The contributions with Italy are related to numerical modelling of the propagation of flow-like landslides [139][140][141]. With the United Kingdom, studies focus on modelling debris flows and submarine landslides as flows influenced by precipitation, earthquakes or tectonic movements, e.g., [142][143][144]. With the third international partner, the United States, studies focus on landslide monitoring and numerical modelling based on the smoothed particle hydrodynamics (SPH) method, e.g., [145][146][147]. China has experienced sustained economic growth over the last 30 years, allowing broad knowledge development in various academic fields [148]. In Italy, the country with the second-most contributions on the analyzed topic, representative authors such as Guzzetti F., Cuomo S., Cascini L., Sorbino G. and Crosta G.B. present studies focused on numerical modelling, the application of SPH and GEOtop-FS, run-out analysis and triggering factors in shallow landslides and debris flows [117][118][119][149,150]. Japan is the third country by scientific contribution, with authors such as Imaizumi F., Sassa K. and Wang G. highlighting the effects of landslides and shallow landslides as a consequence of deforestation, groundwater flow, earthquakes, rainfall and flow path [151][152][153][154][155]. Other countries contributing in this area can be observed in Figure 5.

Bibliometric Mapping Analysis

Bibliometric maps were constructed according to what is established in the methodology: only articles in English are considered, given the broad dominance of English across areas of knowledge [156,157].

Co-Occurrence Author Keyword Network

This type of analysis allows the study area (its history and evolution) and its possible trends to be visualized [158][159][160]. Figure 6 shows the co-occurrence network of author keywords, in which 25 nodes (each representing an author keyword with at least four co-occurrences) and four clusters (groupings of nodes of the same color) are observed [161]. The figure allows the intellectual structure of landslide research to be examined in greater detail. Cluster 1 (red color) shows studies of landslides caused by precipitation and pore pressure in the subsoil, due to the topography and the water flow caused by rainfall [94,115][162][163][164]. These studies were carried out based on: (i) post-failure behaviour in deposits of colluvial, weathered and pyroclastic origin [118]; (ii) simulation of the probability of occurrence in hydrographic basins using GEOtop-FS [117]; (iii) the quantification of morphology and hydrological conditions [165]; and (iv) an evaluation of susceptibility and slope stability for landslide prevention [166].
Other studies reflect the slope instability that can cause significant hazards, mainly influenced by the deposit type, rapid flows generated by seismic movements [167][168][169], large-scale deforestation [170], groundwater fluctuation and different triggering scenarios [132,171]. Studies in this cluster have led to improved mapping, understanding, interpretation and prediction of landslides, such as the movement direction through the hydraulic gradient [172], the influence of rainfall and soil saturation [125,173], and continuous monitoring for preventive decisions on potentially hazardous landslides [174]. Cluster 2 (green color) focuses on landslides with non-Newtonian flow behavior, demonstrated through numerical modelling, geological study and their geodynamic behavior [121][175][176][177]. These movements and trajectories are influenced by different factors such as: (i) rheology and topography [139]; (ii) hydrometeorological events such as heavy rainfall [113,178]; (iii) soil saturation in gravelly and sandy materials [178]; (iv) the impact of pore pressure caused by earthquakes [155,179,180]; and (v) the frontal plowing phenomenon [140]. These landslides have a natural, rapid and irregular behavior with devastating dynamics. This cluster provides the scientific community with resources to understand flow-like landslides through numerical and 3D models [181], models based on smoothed particle hydrodynamics (SPH) [77][182][183][184], and the use of satellite images through methods such as InSAR [185][186][187]. These studies have allowed the modelling of submarine landslides [188,189] and of landslides in landfills caused by seismic action [182]. In addition, they facilitate the mapping of affected areas and the evaluation of hazard intensity for the planning of adequate risk management [190]. Cluster 3 (blue color) concerns landslides that can be generated by: (i) earth debris and intense accumulated rainfall [131,191], or contact with the main stream [116]; (ii) failures of landslide dams [192,193]; and (iii) the traction of material on a slope, liquefaction, or even temperature changes [105]. For their understanding, various experiments were carried out, such as the use of differential equations for the dynamics of the system [129], analysis of critical-state theory in the mobilization of debris flows due to increased basal pore pressure [194], and the generation of dynamic models to understand the evolution of the system [112]. For a further understanding of debris flow, maps supported by Geographic Information Systems (GIS) [195,196], geophysical studies [197] and statistical methods such as logistic regression (LR) [198,199] and Multivariate Adaptive Regression Splines (MARS) [200] were explored, allowing the formation or prevention of landslide dams [201][202][203] and of debris flows to be understood; the latter can also be generated by shallow landslides, which are identified through susceptibility mapping [124,204,205]. Cluster 4 (yellow color) covers topics spanning the other clusters, given the great diversity of landslide types and classifications [36]. Its studies focus on numerical simulations for the understanding and prediction of landslides [206][207][208], which allow an understanding of the effects of groundwater flow [209,210], the infiltration of rainfall [211,212] and wave propagation (tsunamis) due to the collapse of slopes into bodies of water [181,213]. Recently, scientific contributions regarding landslides have continued to appear.
For example, multiphase flow models now represent submarine landslides, with particular attention to particle type and size (rheology) [188]. Groundwater, and water percolating from heavy rainfall, is considered in Critical Rainfall Threshold (CRT) analyses, in monitoring systems based on video cameras, and in the generation of two-dimensional mathematical models using the finite difference method [214][215][216].

Co-Citation Analysis

Co-citation analysis is one of the most widely used methods in bibliometric analysis [41]. It allows the relationships between documents to be explored, revealing the knowledge base and the intellectual structure of a field of study [217,218]. Co-citation analysis counts the number of times two documents are co-cited by another, subsequent document [79]. When frequently cited together in other publications, documents show a close relationship, which suggests that they belong to the same field of research [219,220]. However, this relatedness does not imply that the ideas shared by the various authors coincide [221]. In this work, two co-citation methods are used, author co-citation analysis and journal co-citation analysis, which are presented below.
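As a minimal, hypothetical illustration of the co-citation count just described (the author names and tiny matrix are illustrative, not data from this study): if A is a 0/1 incidence matrix with one row per citing paper and one column per cited item (document, author or journal), then t(A) %*% A counts, for every pair of items, how many papers cite both.

A <- matrix(c(1, 1, 0,
              1, 0, 1,
              1, 1, 1),
            nrow = 3, byrow = TRUE,
            dimnames = list(paste0("paper", 1:3), c("Hungr", "Iverson", "Sassa")))
cocit <- crossprod(A)          # equivalent to t(A) %*% A
cocit["Hungr", "Iverson"]      # 2: two of the three papers cite both items

Networks built from counts of this kind are what VOSviewer normalizes and clusters when producing the author and journal co-citation maps discussed below.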
Author Co-Citation Analysis (ACA)

This analysis is an adaptation of the work of H. Small [79] made by White and Griffith [222] using the authors of the papers. ACA assumes that when two authors are frequently cited together in several papers, their fields of research are very likely similar [223]. This makes it possible to discover the co-citation groups of reference authors that make up the knowledge base of the intellectual structure studied [73,224]. Furthermore, it allows the discovery of the academic community linked to that knowledge base [225]. Figure 7 shows this author co-citation network. It was constructed with the VOSviewer software version 1.6.17, which uses a proprietary technique called VOS to group the units of analysis by similarity [74]. The nodes represent the authors' names, which may represent topics, schools of thought or specialties [226]. The structure presents six clusters, with 235 authors having more than 20 co-citations.

Cluster 1 (red color) consists of 60 authors. The studies in this cluster focus on the research area of shallow landslides and debris flows influenced by rainfall or hydrological triggers [227][228][229]. These authors include Guzzetti F. (157 co-citations), with studies related to precipitation and shallow landslides [113,230]; Crosta G.B. (128), on numerical modelling and debris flow [231,232]; and Godt J.W. (107), on map generation and the modelling of shallow landslides for landslide risk prevention and assessment [233,234]. Cluster 2 (green color) has 44 authors. This cluster contains studies focused on the internal mechanics of landslides and debris flows and the factors that affect the movement or detachment of material [235][236][237][238][239]; in addition, it considers run-out analysis of rock and soil slides [121,240,241]. These research topics are covered by authors such as Sassa K., Xu Q. and Wang G., with 131, 97 and 90 co-citations, respectively. Cluster 4 (yellow color) is distant from the rest of the clusters, located at the extreme right of Figure 7. This cluster comprises 37 authors, such as Masson D.G. (79 co-citations), whose studies address underwater landslides influenced by groundwater [251][252][253]. Grilli S.T. (49) and Hager W.H. (46) focus on the development of models and numerical simulations of the movement of underwater masses and the resulting tsunamis [254][255][256]. Cluster 5 (purple color) lies in the central part of the structure and has 32 authors, such as Hungr O. (259), who researches run-out analysis and the generation of models for risk assessment [257][258][259], and Iverson R.M. (248) and Reid M.E. (77), who focus on the study of debris flow and hydrological factors such as groundwater hydraulics [260][261][262].

Journal Co-Citation Analysis (JCA)

This analysis considers the relevance and similarity of journals in a field of study to reveal its intellectual structure [225,266]. JCA counts the number of times two journals are co-cited by another journal, revealing the various research fields that make up the intellectual structure [67,267]. Figure 8 shows this co-citation network of journals. The VOSviewer software version 1.6.17 is used to construct and visualize the connections between the various journals, represented by nodes. This network shows 69 journals with at least 20 co-citations, displayed in four clusters.

(Figure 8 caption: Visualization of the co-citation network, assigning a representative color to each cluster (topic) and node (journal), according to the structure built using the VOSviewer software version 1.6.17. The colors red, green, blue and yellow appear in order of importance.)

Cluster 1 (red color) consists of 20 journals with 1,239 citations, among which the following stand out: the "Journal of Geophysical Research" in the categories of Agricultural and Biological Sciences, Earth and Planetary Sciences, and Environmental Science; the "Journal of Fluid Mechanics" in Physics and Astronomy; and the "Journal of Hydraulic Engineering" in Environmental Science. The latter converge in the category of Engineering. Cluster 2 (green color) contains 20 journals and 3,526 citations, focusing mainly on the category of Earth and Planetary Sciences, with journals such as "Engineering Geology", "Geomorphology" and "Landslides". Cluster 3 (blue color) focuses on the Earth and Planetary Sciences category and consists of 17 journals with 622 citations, such as "Marine Geology", "Geological Society of America Bulletin" and "Geology". Cluster 4 (yellow color) has 12 journals and 834 citations, such as the "Canadian Geotechnical Journal", in the Engineering category, and "Environmental and Engineering Geoscience", which focuses on Environmental Sciences. These are intertwined with the "Geotechnique" journal in the Earth and Planetary Sciences category, reflecting the interconnection with the other clusters in Figure 8.

Discussion

This study shows a consistent increase in scientific research on landslides, thanks to the contribution of 64 countries spread over five continents (Figure 5), in 15 languages, mostly as scientific articles in English. During the 1990s, scientific production was in an introductory period, in which Iverson R.M., Crosta G. and other authors contributed to the scientific community the results of their analyses and studies (theoretical, laboratory and field) on the dynamic behavior of debris flows and landslides [101,105]. According to the Scopus database, this scientific production has experienced considerable growth since 2001 (representing 90.2% of publications).
In the decade 2001-2010, scientific research increased (Figure 3), prioritizing the updating of earlier studies such as the global rainfall threshold [113], the classification of landslides [109] and the generation of models [117,119], which in this period were essential for understanding and preventing landslides. Over the last decade (2011-2020), the increase in scientific production has been stable, improving upon the development and combination of the models generated in the previous period [125,126]. In this way, the analysis of landslides and of the dynamic behavior of debris flows, shallow landslides and their movement as flows was refined (Figure 6).

The analysis of the intellectual structure of this field of study is conducted through three scientific maps. First, in the analysis of the co-occurrence of author keywords, the application of geographic information systems (GIS) and numerical simulations are a means for the study and analysis of landslides, debris flows and flow-like landslides, e.g., [184,213]. The SPH (smoothed particle hydrodynamics) method is also part of this type of analysis, in conjunction with the implementation of the rheology of the sector, e.g., [149]. Numerical models are the most common method for analyzing the main issues in each cluster, focusing on modelling, erosion, slope stabilization and rainfall, among others, e.g., [174]. Secondly, the author co-citation analysis allows an observation of the interconnections among the various authors across the entire landslide field (Figure 7), which enjoys international collaboration mainly from countries in Asia, Europe and North America (Figure 5). One of the main topics of study is shallow landslides, which since 1988 have been analysed in terms of their propagation and transformation into debris flows [268]. This issue is related to the duration and intensity of rainfall analyzed by Guzzetti et al. (2008) [113]. The authors characteristic of this analysis, such as Sassa (green cluster), Hungr (purple cluster), Takahashi (sky-blue cluster) and Guzzetti (red cluster), among others (Figure 7), focus on the main hydrological, hydraulic, seismic and geomechanical factors causing shallow landslides and debris flows, and consequently on the development of numerical models for risk prevention and assessment [229,232,234,235,238,241,264,265,269]. These topics are related to the red and blue clusters in Figure 6. In addition, there are small groups isolated from those previously mentioned, which we detail below: (a) the group of Pastor, Cascini and Evans (blue cluster, Figure 7) analyzed issues related to landslide dams, erosion, and the susceptibility and stabilization of slopes with respect to debris flows (blue cluster, Figure 6) [244,250], through simulations [243,245] and mathematical models (e.g., smoothed particle hydrodynamics, SPH [119,245]). (b) The group of Masson, Grilli and Hager (yellow cluster, Figure 7) studies the action of groundwater and its influence on mass movement (underwater and on the surface), which can trigger the generation of tsunamis or the propagation of landslides as flows, and which can be analyzed using models and numerical simulations [251][254][255][256]. These topics are closely related to the green and yellow clusters (Figure 6).
Third, in the journal co-citation analysis (Figure 8), the red cluster has a broad reach relative to the rest of the clusters, spanning the categories of Engineering, Agricultural and Biological Sciences, Physics and Astronomy, Earth and Planetary Sciences, and Environmental Science. Another field of study is that of Earth and Planetary Sciences (green and blue clusters, Figure 8), focusing on the hydraulic and geotechnical properties of the material and its formation environment (geological and geomorphological) [270][271][272]. The green and blue clusters are intertwined with the yellow cluster (Earth and Planetary Sciences, Figure 8), which focuses on understanding landslides, improving assessment models, and classification [273][274][275]. The red cluster (Figure 8), given the diversity of landslide science it represents, instead focuses on the flow-like behavior of landslides and the engineering analysis of the mechanical and hydraulic characteristics of the material [276][277][278][279][280]. This line of study is related to the group of authors Masson, Grilli and Hager (yellow cluster, Figure 7). In this way, the entire intellectual structure and its topics of interest are analyzed, such as shallow landslides, debris flow, landslide and flow-like landslides (Figure 4), which cover the five classifications made by the USGS (fall, topple, slide, spread, and flow) (Figure 1) [36].

Conclusions

This work analyses the scientific production of the research field of landslides according to the classification adopted by the USGS. It allows an exploration and analysis of the intellectual structure of 632 publications from the Scopus database, a volume feasible for a bibliometric study. The performance analysis visualizes its constant evolution between 1952 and 2020 (Figure 3), with a significant increase in the last 20 years. Of these documents, 74% are scientific articles (Figure 4), the majority of which are in English. The scientific contribution is concentrated in 64 countries, led by China (Figure 5). Debris flow is a type of landslide generated by various causes, such as precipitation and the collapse of landslide dams. This field of study analyzes the material's hydraulic, geodynamic and geological properties in the face of hydrometeorological and seismic events, which are an essential part of the propagation of landslides with flow behavior and the subsequent generation of debris flows (Figure 6). Several authors present studies related to this subject, such as Guzzetti F., Crosta G.B., Godt J.W., Sassa K. and Wang G., among others (see Figure 7). The shallow landslide is an area of study supported since 1980 by Nel Caine and by Guzzetti et al. (2008), who analyze this type of landslide as a consequence of the duration and intensity of rains. This research area is in a period of growth. It links the material's hydrological processes and hydraulic conditions as the main triggering factors; the implementation of numerical models for slope stabilization and risk prevention therefore enhances its importance (Figure 6). In addition, the group of co-cited authors such as Guzzetti, Crosta and Godt (red cluster, Figure 7) analyze a large part of these landslides, which may be the basis for understanding the formation of debris flows and other types of landslide.
It is worth noting that the intellectual structure of this research field makes it possible to point out topics of interest that can increase scientific knowledge of this subject, such as:

• the analysis of the hydraulic properties and the circumstances under which landslides can be generated as flows;
• technical and geological analysis of topics related to submarine landslides, among which run-out analysis and the propagation of tsunamis due to landslides and earthquakes stand out, this being an evolving area of study.

We consider this study a contribution to the academic literature because: (i) it makes it possible to identify researchers working on specific topics in this field, which allows the establishment of collaboration networks; (ii) it presents the experiences validated by different authors, using study techniques and methods that enrich scientific knowledge; and (iii) it serves as a guide for novice researchers who wish to learn, in broad outline, this general structure of knowledge. Finally, there are some limitations to this work: (a) the classification of landslides is restricted to the contribution of the USGS; and (b) only the Scopus database was used, without considering other existing databases in the academic world such as the Web of Science or Dimensions. Considering these limitations, future research is envisaged using different databases and other classifications related to landslides.
Youth Sensitivity in a Pandemic: The Relationship Between Sensory Processing Sensitivity, Internalizing Problems, COVID-19 and Parenting

The personality trait sensory processing sensitivity (SPS) is an established risk factor for the development of internalizing problems. Highly sensitive adolescents react more strongly to environmental cues, including the parenting environment and stressful life events. The aim of the current study was to examine whether the perceived impact of COVID-19 mediates the link between SPS and internalizing problems. In addition, it was tested whether parenting style moderates the mediating effect of perceived COVID-19 impact between SPS and internalizing problems among adolescents. The study had a cross-sectional design and data were collected between April and July 2020, during the first lockdown in the Netherlands. Participants were 404 adolescents aged 9-18 years (Mage = 13.49). Questionnaires were administered online to assess SPS (Highly Sensitive Child Scale), parenting style (Parenting Style Inventory-II), internalizing problems (Patient Health Questionnaire-4) and COVID-19 pandemic impact (COVID-19 Impact Scale). The SPSS macro PROCESS was used to test the mediation model of perceived COVID-19 impact and the moderated mediation model with parenting style as a moderator. A relationship was found between SPS and internalizing problems which is partly mediated by the perceived COVID-19 impact. The hypothesized moderating effect of parenting style was not found. These findings provide insight into the effect the pandemic has had on highly sensitive adolescents. Further research is needed to develop and test interventions to support sensitive youth and thus possibly prevent the development of internalizing problems.

• The relationship between sensory processing sensitivity (SPS) and internalizing problems in adolescents is partly mediated by the perceived COVID-19 impact (a stressful life event).
• Parenting style has no moderating effect on the relationship between SPS and internalizing problems.
• Permissive parenting is related to fewer internalizing problems and less perceived negative COVID-19 impact.

In the first months of 2020, the SARS-CoV-2 virus (also known as the coronavirus or COVID-19) spread rapidly around the world, resulting in a once-in-a-century pandemic (Gates, 2020). In response to rising infection numbers, the Dutch government adopted a set of safety measures labelled an "intelligent lockdown", implemented as of March 16th, 2020. These measures included the closure of public places (e.g., schools, cafes, and museums), instructions to stay at home and keep social distance, and quarantine in the case of infection. The virus, as well as the intelligent lockdown, has led to many changes in people's daily lives, including the lives of adolescents, on a scale that is unprecedented in modern history, posing a risk to adolescents' development as well as their emotional and psychological wellbeing (Dawson & Golijani-Moghaddam, 2020; Prime et al., 2020). Preliminary research has already shown increased levels of stress, anxiety, and depressive symptoms in children and adolescents during the COVID-19 pandemic (Brooks et al., 2020; Dawson & Golijani-Moghaddam, 2020; Orgilés et al., 2020; Xie et al., 2020). Anxiety, depression, and stress can be summarized under the broader category of internalizing problems, defined as occurring within a person rather than being acted out externally in the environment (Graber & Sontag, 2009).
Internalizing problems during childhood and adolescence are predictive of multiple negative developmental outcomes such as peer victimization (Reijntjes et al., 2010), internalizing disorders (e.g., anxiety, depression), and poor health later in life (Essex et al., 2014; Essex et al., 2009; Herrenkohl et al., 2010). Given the severity of these possible negative developmental outcomes, it is important to determine risk factors predictive of internalizing problems during childhood and adolescence, particularly in times of crisis.

Research has identified the personality trait sensory processing sensitivity (SPS) as a risk factor for developing internalizing problems (Aron et al., 2012; Slagt et al., 2018). SPS is a relatively stable trait that reflects an individual's sensitivity to environmental influences, such as other people's expressed emotions, loud noises and pain, and the intensity of the individual's reactions in response. Higher levels of SPS are associated with a feeling of overstimulation in response to excessive demands. Although there is considerable overlap with the concept of temperamental reactivity, SPS draws on the literature on personality traits. A high score on the SPS trait is related to internalizing problems such as depressive or anxiety symptoms (Aron et al., 2005; Bakker & Moulding, 2012; Boterberg & Warreyn, 2016; Dal, 2016; Evers et al., 2008; Liss et al., 2008; Liss et al., 2005). In addition, a study by Dean et al. (2018) showed, in a sample of typically developing children, that a higher level of SPS was related to more externalizing and internalizing problems. Little is known about the mechanisms underlying the impact of SPS on adolescents' internalizing problems. The peculiar situation of the COVID-19 pandemic, which was experienced as stressful by some, may be a potential mediator in the relation between SPS and internalizing problems. In this study, we examine perceived COVID-19 impact as a mediator of the link between SPS and internalizing problems. According to Baron & Kenny (1986), a mediator is defined as a variable that changes as a result of the predictor and subsequently affects a third variable. Thus, the current study examines whether the COVID-19 impact adolescents perceive changes as a result of their level of SPS and subsequently affects the development of internalizing problems.
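The mediation logic just described can be made concrete with a small numerical sketch. This is purely illustrative R code with simulated stand-in scores and hypothetical variable names; it is an analogue of a simple mediation test, not the authors' SPSS PROCESS analysis.

set.seed(1)
n   <- 395
sps <- rnorm(n)                                              # stand-in for HSC scores
covid_impact  <- 0.4 * sps + rnorm(n)                        # stand-in for perceived COVID-19 impact
internalizing <- 0.3 * covid_impact + 0.2 * sps + rnorm(n)   # stand-in for PHQ-4 scores
d <- data.frame(sps, covid_impact, internalizing)

a <- coef(lm(covid_impact ~ sps, data = d))["sps"]                           # path a: SPS -> impact
b <- coef(lm(internalizing ~ covid_impact + sps, data = d))["covid_impact"]  # path b: impact -> problems
indirect <- a * b                                                            # mediated (indirect) effect

# Percentile bootstrap confidence interval for the indirect effect
boot_ind <- replicate(2000, {
  i  <- sample(n, replace = TRUE)
  ai <- coef(lm(covid_impact ~ sps, data = d[i, ]))["sps"]
  bi <- coef(lm(internalizing ~ covid_impact + sps, data = d[i, ]))["covid_impact"]
  ai * bi
})
quantile(boot_ind, c(0.025, 0.975))

Partial mediation, as reported in the abstract, corresponds to a non-zero indirect effect alongside a remaining direct effect of SPS on internalizing problems; a moderated mediation adds interaction terms with the moderator (here, parenting style) to these regressions.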
Recent research on the impact of the COVID-19 pandemic as a stressful life event reveals a rise in internalizing problems in youth (Brooks et al., 2020; Dawson & Golijani-Moghaddam, 2020). The relationship between stressful life events and internalizing problems has been well established (e.g., Graber & Sontag, 2009). A study on the mental health status of Chinese children reported higher-than-usual depression and anxiety symptoms during the lockdown (Xie et al., 2020). In a similar study in Italy and Spain, 85.7% of parents reported changes in their children's emotional state and behaviors during the quarantine (Orgilés et al., 2020). These findings thus strongly suggest that the COVID-19 impact is related to an increase in internalizing problems in youth across nations. Obviously, a stressful life event itself (here: the COVID-19 pandemic) does not increase because of an individual characteristic (here: SPS). However, a person's perception of the impact of a stressful life event can vary depending on individual characteristics. High-SPS individuals show a stronger emotional reactivity to the environmental context (Aron & Aron, 1997; Aron et al., 2012). Even though the COVID-19 pandemic is a stressful life event at the community level, the perceived impact of the situation varies greatly from person to person. Individuals with high SPS scores are more shaken than others by changes in their lives (Aron et al., 2012). In line with the trait-by-environment design of the diathesis-stress model (trait x environment), it can be expected that highly sensitive youth perceive a stronger negative impact of the COVID-19 pandemic, which leads to more internalizing problems. Earlier research has identified the perceived impact of stressful life events as a possible mediator between risk factors and internalizing problems (Luby et al., 2006). Stressful life events can be defined as occurrences in a person's life that change the usual activities and require considerable readjustment (Dohrenwend, 2006). A stressful life event can occur on a personal level, like the loss of a loved one, or affect an entire community, like an earthquake (Schwarzer & Luszczynska, 2012). The COVID-19 pandemic and the resulting lockdown situation constitute drastic changes in the daily lives of adolescents, including a sudden switch to online education, not being able to participate in sports, and deprivation of real-life peer interactions. The pandemic can therefore be categorized as a potential stressful life event for adolescents. Adolescents with a higher level of SPS may thus be more sensitive to stressful contexts, such as the lockdown during the COVID-19 situation, and may therefore experience more internalizing problems.

To obtain more insight into the relation between SPS and internalizing problems via the impact of COVID-19, highly sensitive individuals should be considered in the context of their environment. During the lockdown, adolescents had to spend a large amount of time at home under the care of, and in close proximity to, their parents. This makes parenting an even more influential and challenging environmental factor that may moderate the relation between SPS, perceived COVID-19 impact and internalizing problems in adolescents. Parenting practices interact with the personality of a child in predicting behavioral outcomes (Pluess & Belsky, 2010). According to the differential susceptibility model (Belsky & Pluess, 2016), certain susceptibility factors lead individuals to be more influenced by both positive and negative environmental stimuli compared to others without those traits (Rabinowitz & Drabick, 2017). SPS is such a susceptibility factor. Adolescents scoring high on the SPS personality trait appear more susceptible to environmental influences in a for-better-and-for-worse manner, resulting in worse developmental outcomes under negative circumstances but also better developmental outcomes in a supportive environment (Belsky & Pluess, 2016). Adolescents high in SPS respond to environmental cues with stronger emotional reactivity and are therefore more impacted by the influences of the parenting environment (Aron & Aron, 1997). Highly sensitive children experienced more negative affectivity, including depression, in the context of a less caring parental environment (Aron et al., 2005). The relationship between high SPS and internalizing problems is moderated by parental care, indicating that highly sensitive adolescents might be particularly impacted by uncaring parents (Liss et al., 2008). Earlier research has also found an interaction between SPS and negative parenting styles in predicting indices of psychopathology (Sadoughi et al., 2007).
These findings provide evidence that parenting can have a moderating effect on the relationship between SPS and internalizing problems in adolescents. A common way to measure parenting is through parenting dimensions and the parenting styles derived from them, of which responsiveness and demandingness are considered the main dimensions. Responsiveness and demandingness are linked to child well-being not just in isolation, but also in the way they interact to describe patterns of parenting (Power, 2013). The first to use parenting dimensions to categorize distinct parenting styles was Baumrind (1967, 1971). Building on that theory, Maccoby & Martin (1983) described parents in terms of their position on the two main parenting dimensions, responsiveness and demandingness. The four parenting styles emerging from this framework are (1) authoritative (high in responsiveness and demandingness), (2) authoritarian (low in responsiveness and high in demandingness), (3) permissive (high in responsiveness and low in demandingness), and (4) neglectful (low in responsiveness and demandingness; Power, 2013). Authoritative parenting is suggested to be optimal for developmental outcomes, whereas authoritarian, permissive, and neglectful parenting styles have been linked to poorer outcomes for children and adolescents (Newman et al., 2008; Pinquart, 2017). Thus, a parenting style characterized by high responsiveness and demandingness is considered optimal for a variety of adolescent developmental outcomes. Furthermore, parenting style may interact with SPS in predicting the COVID-19 impact perceived by adolescents. There is some evidence that parenting style can moderate the impact certain stressful life events have on children (Slone et al., 2012). The COVID-19 induced lockdown is a stressful life event that coincides with adolescents spending more time at home under the care of their parents. It is therefore expected that parenting style will affect the impact that the pandemic has on adolescents. While an authoritative parenting style can act as a protective factor, the other three parenting styles might worsen the perceived negative impact of this stressful life event among adolescents high in SPS. While earlier research established the relationship between SPS characteristics and internalizing problems in children and adolescents, the moderating effect of parenting styles has not yet been examined. Furthermore, as far as the authors are aware, no research has explored the above-mentioned constructs in the context of a stressful life event like the COVID-19 pandemic. The goal of the current study was to address this gap in the literature and investigate the relations among SPS, perceived COVID-19 impact, and internalizing problems in adolescents, as well as moderation by parenting styles. A positive relation between SPS and internalizing problems is expected, and this relation is hypothesized to be (partially) mediated via perceived COVID-19 impact. Furthermore, it is expected that an authoritative parenting style is protective with respect to the impact of SPS on internalizing problems as well as with respect to the complete mediation pathway. Data collection among adolescents during the lockdown period provides a unique possibility to examine the proposed moderated mediation model (Fig. 1). Participants The total sample consisted of 404 children and adolescents attending primary or secondary school.
The initial sample was reduced in two ways. First, all participants who were older than 18 were removed from the sample (n = 1). Then, participants with missing data were removed (n = 8). The final sample consisted of 395 participants aged 9 to 18 years (M = 13.49, SD = 2.15). Forty-six percent of the participants were male. Most youth (68.6%) attended secondary school at the time the data were collected (typical age range 11 to 18 years). The sample was predominantly Dutch (96.5%). Procedure Our study had a cross-sectional design and examined between-person differences. Adolescent self-report questionnaires from the first measurement wave of the Digital Family project were used. The Digital Family project is an ongoing Dutch longitudinal study primarily investigating youth digital media use in the context of the family. Participants were recruited through different channels, like social media and personal networks. In addition, schools that had successfully participated in a previous study were asked to include the recruitment information in their newsletter. The data collection was conducted in April-July 2020, which coincided with the "intelligent lockdown" implemented by the Dutch government to slow the exponential spread of the COVID-19 virus. This entailed the closure of all childcare institutions, schools, sports clubs, and the foodservice industry, as well as the stimulation of social distancing by staying home as much as possible. All participants who signed up for the study received an email with the link to an online questionnaire. Before starting the questionnaire, participants were presented with an informed consent form, which disclosed to them that the data would be used anonymously and that they could stop participating in the study at any time. Parents provided active consent for children aged <16 years. After accepting the informed consent form, participants could start completing the online questionnaire, which took about 25 min to complete. They were asked to answer the questions honestly. For completing the questionnaire, participants received a 5 Euro gift-voucher. Data collection was approved by the Ethics Committee of the Faculty of Social and Behavioral Science at Utrecht University (FETC20-192). Instruments Sensory Processing Sensitivity (SPS) was measured by a shortened version of the Highly Sensitive Child Scale (HSC; Pluess et al., 2018). The original scale used twelve items to determine the overall sensitivity, with three items specifically measuring the subcategory low sensory threshold (LST), four items measuring the subcategory aesthetic sensitivity (AES), and five items measuring the subcategory ease of excitation (EOE). The AES subcategory was excluded since recent research shows that AES correlates sufficiently neither with the other two subcategories nor with the negative outcomes associated with SPS (Ershova et al., 2018; Liss et al., 2008). Examples of items measuring the remaining two subcategories are 'Loud noises make me uncomfortable' (LST) or 'I don't like change' (EOE). Participants could answer on a 5-point Likert scale (1 = I don't agree at all, 5 = I completely agree). The variable SPS can be operationalized as a categorical as well as a continuous variable. When measured as a categorical variable, the group of participants is divided into the top 30% (i.e., highly sensitive group) and the bottom 70% (i.e., not highly sensitive group; Aron et al., 2012; Pluess et al., 2018).
As the SPS scores in the current study were normally distributed, SPS was operationalized as a continuous variable which is in line with the recommendations of Pluess et al. (2018). Notably, higher SPS scores indicate higher levels of sensory processing sensitivity. The Cronbach's alpha of the questionnaire in our study was 0.76, meeting the criteria of 0.7 for reliable internal consistency. Parenting style was measured using the Parenting Style Inventory-II (PSI-II; Darling and Toyokawa 1997). The scale was designed to assess the construct of parenting styles. Originally, the scale consisted of 36 items with twelve items for each parenting dimension, namely, autonomy-granting, demandingness, and responsiveness. In the current study, a shortened version of the questionnaire was used, with four items each to measure autonomygranting and demandingness, and three items to measure responsiveness. An example of an item measuring the dimension responsiveness was 'My parents (or caregivers) are there for me if I have a problem.' Participants could answer on a 5-point Likert scale (1 = I don't agree at all, 5 = I completely agree). The parenting dimensions were then used to compute a total score for each of the four parenting styles. For example, a high score for authoritarian parenting represents low responsiveness and autonomy granting and high demandingness. Cronbach's alpha for the authoritative, authoritarian, permissive and neglectful parenting styles were 0.71, 0.74, 0.71 and 0.75 respectively. Internalizing problems were measured using the Patient Health Questionnaire-4 (PHQ-4; Kroenke et al., 2009), an ultra-brief tool for identifying individuals with anxiety and/ or depression symptoms. The scale showed good psychometric properties regarding internal reliability as well as construct, factorial, criterion, and process validity despite its limited number of items (Kroenke et al., 2009). The PHQ is suitable for measuring internalizing problems among children and adolescents (López-Torres et al., 2019). The PHQ consisted of two core anxiety items and two core depression items. Examples for items were 'Over the last 2 weeks, how often have you been bothered by feeling nervous, anxious, or on edge?' (anxiety) or 'Over the last 2 weeks, how often have you been bothered by feeling down, depressed, or hopeless?' (depression). Participants answered on a 4-point Likert scale (1 = Not at all, 4 = Nearly every day), with higher scores indicating higher levels of internalizing problems. For the purpose of this study, the score of all four items were combined to form a total mean score, indicating the general level of internalizing problems the participants are experiencing. Cronbach's alpha of the total score was 0.71, meeting the criteria of 0.7 for internal consistency. The perceived COVID-19 impact was measured with items from a questionnaire constructed by researchers from the faculty of social sciences at Utrecht University. The COVID-19 questionnaire measured the impact of the COVID-19 pandemic on different areas of life like activity, school, sleep, or atmosphere in the home. Items were for example 'The COVID-19 crisis has led to more fighting in our family', 'I have problems sleeping because of the COVID-19 crisis' or 'I worry more about my schoolwork because of the COVID-19 crisis.' The original questionnaire included 11 items, of which 8 items were selected as most suitable to measure the perceived COVID-19 impact. 
The items were rated on a 5-point Likert scale ranging from 'completely disagree' (=1) to 'completely agree' (=5). The scores of all items were combined to form a total mean score, indicating the level of negative COVID-19 impact the participants were perceiving. The Cronbach's alpha of the questionnaire was 0.57, which is considered a poor internal consistency. However, as the internal consistency is not unacceptable, the questionnaire was included in the current study while keeping the low internal consistency in mind as a limiting factor. Data Analyses For conducting the data analyses, the statistical program SPSS with the PROCESS macro was used. The dataanalytic strategy included a stepwise approach: First, a Pearson correlation analysis was conducted to investigate the intercorrelations between SPS, internalizing problems, perceived COVID-19 impact, and the four different parenting styles. In the next step, the mediation model of the perceived COVID-19 impact was investigated using the SPSS macro PROCESS, Model 4 (Hayes, 2017). The moderating effect of parenting style on the relationship between SPS and internalizing problems was tested using the SPSS macro PROCESS, Model 1 (Hayes, 2017). Then the moderated mediation model of perceived COVID-19 impact and parenting style was investigated using the SPSS macro PROCESS, Model 7 (Hayes, 2017). The 95% biascorrected confidence intervals of the conditional direct and indirect effects were estimated via bootstrapping. The effects were considered significant when the confidence intervals do not include zero. Gender and age are included as control variables. Furthermore, the days in lockdown at the time of data collection was added as a control variable. The lockdown started mid-March 2020 and the data was collected from April until mid-July. In the course of this time, the acuteness of the situation as well as the sort of safety measures in place changed. For participants who filled in the questionnaire beginning of April the COVID-19 situation was still new and safety measures were very strict. Youth that participated at a later point had a chance to get used to the new circumstances but on the other hand were already exposed longer to the COVID-19 restrictions. Hence, prolonged exposure to COVID-restrictions and thus the time of data collection might have affected the perceived COVID-19 impact as reported by the participants. Descriptive Analyses The descriptive statistics and correlations between the variables of interest are presented in Table 1. The results showed that sensory processing sensitivity (SPS) was positively related to perceived COVID-19 impact and internalizing problems. These findings suggest that a high level of the SPS trait in adolescents is a potential risk factor for perceiving a stronger impact of the COVID-19 pandemic and more internalizing problems. In addition, authoritative and permissive parenting reported by the child were both negatively related to perceived COVID-19 impact and internalizing problems, whereas authoritarian parenting and neglectful parenting were positively related to those variables. Finally, the perceived COVID-19 impact was positively related to internalizing problems. Main Effect and Mediation via Perceived COVID-19 Impact Firstly, the main relationship between SPS and internalizing problems was tested. A regression analyses revealed that the total effect of SPS on internalizing problems in the absence of the mediator (perceived COVID-19 impact) was significant (β = 0.37, p < 0.001). 
This supports the hypothesis that there is a positive relationship between the SPS trait and internalizing problems in adolescents during the lockdown. Next, the mediation model describing the relationship of SPS with internalizing problems, and the indirect role of perceived COVID-19 impact in this relation (hypotheses 1 and 2), was examined. The analysis was conducted using the PROCESS macro, Model 4 (Hayes, 2017). The results showed a significant positive association between SPS and perceived COVID-19 impact (β = 0.284, p < 0.001). In the mediation model, the direct effect of SPS on internalizing problems and the indirect effect via perceived COVID-19 impact were both significant (β = 0.258, p < 0.001; β = 0.114, p < 0.001). The perceived COVID-19 impact partially mediated the association between the SPS trait and internalizing problems (Fig. 2). Parenting Style as a Moderator To examine the moderating effect of parenting style on the relationship between SPS and internalizing problems, the PROCESS macro, Model 1, was used (Hayes, 2017). The results of the analyses showed no significant moderating effect for any of the parenting styles (all ps > 0.05). Moderated Mediation of Parenting Style on SPS and Internalizing Problems via COVID-19 Impact Analysis of moderated mediation (perceived COVID-19 impact mediates the association between SPS and internalizing problems, which in turn is moderated by parenting style) was conducted using the PROCESS macro, Model 7 (Hayes, 2017). The results for the interaction effect of the different parenting styles are presented in Table 2. No significant moderating effect for any of the parenting styles was found. The results generated by PROCESS examining the moderated mediation as a whole are presented in Table 3. The moderated mediation model was not significant for any parenting style. Discussion Developmental theories like the diathesis-stress model and the differential susceptibility hypothesis support the notion that children and adolescents respond differently to environmental cues such as stressful life events and the parenting environment, depending on certain personality traits such as sensory processing sensitivity (SPS). The goal of our study was to investigate the relationship between SPS and internalizing problems in youth by testing whether the relationship is (partly) mediated by the perceived impact of a stressful life event like the COVID-19 pandemic. In addition, it was examined whether this relationship was moderated by parenting style. In accordance with the first hypothesis, the results of our study show a link between SPS and internalizing problems in adolescents. These findings replicate the results of previous research, which already established the relationship between SPS and internalizing problems, i.e., anxiety and depression (Aron et al., 2005; Bakker & Moulding, 2012; Boterberg & Warreyn, 2016; Dal, 2016; Evers et al., 2008; Liss et al., 2008; Liss et al., 2005). This can be seen as further evidence that high levels of SPS are a risk factor for developing internalizing problems. Furthermore, the results indicate that the relationship between SPS and internalizing problems is partly mediated by the perceived COVID-19 impact. Partial mediation means that the mediator (here: perceived COVID-19 impact) is only partly responsible for the relationship between SPS and internalizing problems.
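For readers who wish to reproduce this kind of analysis outside SPSS, the sketch below outlines a percentile-bootstrap estimate of the indirect effect, analogous to (though simpler than) the bias-corrected bootstrap used by PROCESS Model 4. The column names (sps, covid_impact, internalizing, age, gender, days_in_lockdown) are hypothetical placeholders, not the variable names of the Digital Family dataset, and the code is an illustrative sketch rather than the analysis script used in this study.

```python
# Minimal percentile-bootstrap mediation sketch (predictor -> mediator -> outcome),
# with covariates included as controls; column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df):
    # a-path: predictor -> mediator, controlling for covariates
    a = smf.ols("covid_impact ~ sps + age + gender + days_in_lockdown",
                data=df).fit().params["sps"]
    # b-path: mediator -> outcome, controlling for the predictor and covariates
    b = smf.ols("internalizing ~ covid_impact + sps + age + gender + days_in_lockdown",
                data=df).fit().params["covid_impact"]
    return a * b

def bootstrap_ci(df, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        resample = df.sample(n=len(df), replace=True,
                             random_state=int(rng.integers(0, 2**31 - 1)))
        estimates[i] = indirect_effect(resample)
    return np.percentile(estimates, [2.5, 97.5])

# Usage: lo, hi = bootstrap_ci(df); the indirect effect is treated as
# significant when the interval [lo, hi] does not include zero.
```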
Even though earlier research demonstrated that the perceived impact of stressful life events can act as a mediator between internalizing problems and associated risk factors (Luby et al., 2006), our study is the first to find this effect for the impact of COVID-19. This goes to show that while it is necessary to give attention to the immediate medical aspects and consequences of COVID-19, it is also imperative to consider the impact the pandemic has had and will continue to have in other areas of life, like the mental health of adolescents. The finding that the relationship between internalizing problems and SPS is partially and not fully mediated by perceived COVID-19 impact makes further research necessary to understand the precise relationship between SPS and internalizing problems. Future research should focus on identifying alternative mediators and moderators to better understand the pathways that result in problematic outcomes for highly sensitive children. Identifying alternative mediators can open up opportunities for preventive interventions targeting internalizing problems in children with high SPS traits. Looking at the parenting dimensions, we found that authoritarian and neglectful parenting correlated with worse child outcomes, namely stronger perceived COVID-19 impact and more internalizing problems. As would be expected, the opposite was true for the authoritative parenting style, which is broadly accepted to produce the best child outcomes (Pinquart, 2017). Surprisingly, we also found a negative link between a permissive parenting style and perceived COVID-19 impact as well as internalizing problems. This contrasts with previous studies, which found a positive link between permissive parenting and internalizing problems (Rose et al., 2018). An explanation could be that permissive parenting, which is characterized by low demandingness and high responsiveness, might produce better child outcomes in the unique context of a stressful life event like the COVID-19 pandemic. It is possible that when extraordinary circumstances demand increased flexibility from children and adolescents, the necessity of parental demandingness decreases. Another explanation could be that more permissive parents were not as strict with their children about following the lockdown requirements and for example allowed their children to still see their friends. Not following the rules could potentially buffer against mental health deterioration in the pandemic, albeit in an unrecommended and unsafe way. Contrary to our expectations, the link between SPS and internalizing problems was not moderated by parenting style. A moderation effect would have meant that the relationship between SPS and internalizing problems depends on a third variable, in this case the parenting style. Finding a moderation effect would have supported the differential susceptibility theory of SPS. According to this theory, individuals scoring high on the SPS personality trait are more susceptible to environmental influences in a for-betterand-for-worse manner, resulting in worse developmental outcomes under negative circumstances but also better developmental outcomes in a supportive environment (Belsky & Pluess, 2016). With regards to parenting, this theory was supported by highly sensitive children showing stronger positive outcomes in relation to positive parenting practices while also showing more negative outcomes as a result of negative parenting practices in earlier research (Liss et al., 2005;Slagt et al., 2018). 
There are two possible explanations why we did not find the interaction between parenting and sensitivity. The first explanation is that the moderation effect of parenting does exist but was not found due to methodological shortcomings. Our sample was homogeneous with almost all participants reporting medium to high scores (less than 2% scored lower than 3 and 55% scored 4 or higher on a scale from 1 to 5) on authoritative parenting. The generally positive parenting practices in the sample might have prevented finding a moderating effect. The second explanation is that SPS may interact with parenting practices, yet not with parenting style as such. Earlier research has established the moderating effect of parenting on SPS for certain aspects of parenting, like parental care, responsiveness, autonomy granting, positive interactions, and inductive discipline (Liss et al., 2005;Slagt et al., 2018). For other aspects of parenting, like parental overprotection, the interaction effect was not found (Liss et al., 2005). It is possible that parenting styles are an aspect of parenting that is too broad to interact with youth sensitivity. In recent years, research has gradually moved from using the global concept of parenting styles to a more specific approach; distinct parenting dimensions like psychological control and adolescent disclosure as well as models looking at domain-specific parenting are on the rise and potentially reflect a more naturalistic picture of the parenting situation (Smetana, 2017). Future research should therefore examine the moderating effect of specific parenting dimensions on the relationship between SPS and internalizing problems among adolescents. One of the strengths of our study is the large sample with around 400 participants. The sample size allowed for a precise estimation of effect sizes and provided sufficiently reliable results with sufficient precision and power. Another strength is that the data were collected during the lockdown, specifically when lock down restrictions were in place. Data collection during this unique period provided the opportunity to measure the impact of a sudden occurrence of a stressful life event on a larger population and thus testing of the differentially susceptibility model for high SPS children. The results of our research should be considered in light of several limitations. Firstly, the study had a crosssectional design. Subsequently, no causal claims can be made on the nature of the relationships between the variables. For example, the correlation between SPS and internalizing problems is in line with the concept of SPS as a risk factor for internalizing problems among adolescents, but a longitudinal study design is necessary to confirm the direction of effect. Secondly, our dataset does not include data on possible atypical development of our participants. As SPS might correlate with neurological issues in development, it would have been useful to include indices for atypical development as a control variable. Thirdly, the internal consistency of the questionnaire measuring the COVID-19 impact turned out to be poor (α = 0.57). Under normal circumstances, a questionnaire with such a restricted Cronbach's Alpha would be revised, as it can be a sign that the items are not measuring the same concept which would impact the reliability of the instrument of measurement. 
However, the fast pace at which the COVID-19 crisis developed, and the uniqueness of the variable perceived COVID-19 impact, did not allow the researchers either to rely on already tested questionnaires or to conduct the in-depth analysis that normally accompanies the development of a new questionnaire. As the internal consistency of the COVID-19 questionnaire was not unacceptable, we chose to use the questionnaire while keeping the low internal consistency in mind as a limiting factor. Lastly, all questionnaires were self-report measures, which carries the risk that the participating youth answered in a socially desirable manner. In conclusion, our study shows that high levels of the SPS trait are related to more internalizing problems among adolescents. This main effect is partly mediated by the perceived impact of a stressful life event, the COVID-19 crisis. Future research should focus on identifying more environmental factors that mediate or moderate the relationship between SPS and internalizing problems. By learning more about what environment highly sensitive adolescents need to thrive, we might be able to support sensitive youth and prevent the development of internalizing problems. Author Contributions All authors contributed to the study conception and design. Data collection was performed by S.B. and S.G. Analysis was planned and performed by S.B. and I.K. The first draft of the manuscript was written by S.B. and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. All authors agree to the authorship order and content of the manuscript. Compliance with Ethical Standards Conflict of Interest The authors declare no competing interests. Ethical Approval Approval was obtained from the ethics committee of Utrecht University. The procedures used in this study adhere to the tenets of the Declaration of Helsinki. Informed Consent Informed consent was obtained from all individual participants included in the study. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Evolving a puncture black hole with fixed mesh refinement We present an algorithm for treating mesh refinement interfaces in numerical relativity. We detail the behavior of the solution near such interfaces located in the strong field regions of dynamical black hole spacetimes, with particular attention to the convergence properties of the simulations. In our applications of this technique to the evolution of puncture initial data with vanishing shift, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult and wave extraction is meaningful. I. INTRODUCTION Numerical relativity, which comprises the solution of Einstein's equations on a computer, is an essential tool for understanding the behavior of strongly nonlinear dynamical gravitational fields. Current grid-based formulations of numerical relativity feature ∼ 17 or more coupled nonlinear partial differential equations that are solved using finite differences in 3 spatial dimensions (3-D) plus time. The physical systems described by these equations generally have a wide range of length and time scales, and realistic simulations are expected to require the use of some type of adaptive gridding in the spacetime domain. A primary example of the type of physical system to be studied using numerical relativity is the final merger of two inspiraling black holes, which is expected to be a strong source of gravitational radiation for ground-based detectors such as LIGO and VIRGO, as well as the spacebased LISA [1]. The individual black hole masses M 1 and M 2 set the scales for the binary in the source interaction region, and we can expect both spatial and temporal changes on these scales as the system evolves. The binary must be evolved for a time t ∼ 1000M , M ∼ M 1 + M 2 , starting from an orbital separation ∼ 10M to simulate its final few orbits followed by the plunge and ringdown. This orbital region is surrounded by the wave zone with features of scale ∼ 100M , where the outgoing signals take on a wave-like character and can be measured. Accomplishing realistic simulations of binary black hole mergers on even the most powerful computers clearly requires the use of variable mesh sizes over the spatial grid. Adaptive mesh refinement (AMR) was first applied in numerical relativity to study critical phenomena in scalar field collapse in 1-D [2]; several other related studies have also used AMR, most recently in 2-D [3]. AMR has also been used in 2-D to study the evolution of inhomogeneous cosmologies [4,5]. In the area of black hole evolution, AMR was first applied to a simulation of a Schwarzschild black hole [6]. Fixed mesh refinement (FMR) was used to evolve a short part of a (nonequal mass) binary black hole merger [7], an excised Schwarzschild black hole in an evolving gauge [8], and orbiting, equal mass black holes in a co-rotating gauge [9]. AMR has also been used to set binary black hole initial data [10,11]. The propagation of gravitational waves through spacetime has been carried out using AMR, first using a single 3-D model equation describing perturbations of a Schwarzschild black hole [12] and later in the 3-D Einstein equations [13]. 
Gravitational waves have also been propagated across fixed mesh refinement boundaries, with a focus on the interpolation conditions needed at the mesh boundaries to inhibit spurious reflected waves [14]. Realistic simulations of the final merger of binary black holes are likely to require a hierarchy of grids, using both FMR and AMR. The source region would have the finest grids, and would be surrounded by successively coarser grids, encompassing the orbital region and extending into the wave zone out to distances > 100M. Evolving dynamical gravitational fields using such a mesh refinement hierarchy poses a number of technical challenges. For example, the gravitational waves produced by the sources will originate as signals in the near zone and need to cross fixed mesh refinement boundaries to reach the wave zone. In addition, Coulombic-like signals that may vary with time but are not wavelike in character, such as are produced by the gravitational potential around black holes, can stretch across mesh boundaries. Inappropriate interpolation conditions at refinement boundaries can lead to spurious reflection of signals at these interfaces; cf. [14]. Additional complications can arise when the grid refinement is adaptive. In this paper, we use the evolution of a single Schwarzschild black hole with FMR as a numerical laboratory. We represent the black hole as a puncture without excision and use gauges with zero shift in which the solution undergoes significant evolution. This tests the ability of our code to handle dynamically changing spacetimes in the vicinity of mesh refinement boundaries. Using a hierarchy of fixed mesh refinements, we are able to resolve the strong field region near the puncture (and demonstrate the convergence of the solution in this region) while locating the outer boundary at > 100M. In Sec. II we describe our methodology, including the numerical implementation. The treatment of mesh refinement boundaries is discussed in Sec. III. Black hole evolutions with FMR are presented in Sec. IV; examples are given of evolutions using geodesic slicing, and 1 + log slicing with zero shift. We conclude with a summary in Sec. V. A. Basic Equations We use the BSSN form of the ADM equations [15,16]. These equations evolve the quantities

φ = (1/12) ln det(γ_ij),   (1a)
K = γ^{ij} K_ij,   (1b)
γ̃_ij = e^{-4φ} γ_ij,   (1c)
Ã_ij = e^{-4φ} (K_ij - (1/3) γ_ij K),   (1d)
Γ̃^i = γ̃^{jk} Γ̃^i_jk,   (1e)

written here in terms of the physical, spatial 3-metric γ_ij and extrinsic curvature K_ij [17], where all indices range from 1 to 3. In Eq. (1e), Γ̃^i_ab is the Christoffel symbol associated with the conformal metric γ̃_ij. These quantities evolve according to

dφ/dt = -(1/6) α K,   (2a)
dK/dt = -γ^{ij} ∇_i ∇_j α + α [Ã_ij Ã^{ij} + (1/3) K²],   (2b)
dγ̃_ij/dt = -2 α Ã_ij,   (2c)
dÃ_ij/dt = e^{-4φ} (-∇_i ∇_j α + α R_ij)^TF + α (K Ã_ij - 2 Ã_il Ã^l_j),   (2d)
dΓ̃^i/dt = -2 Ã^{ij} ∂_j α + 2 α [Γ̃^i_jk Ã^{jk} - (2/3) γ̃^{ij} ∂_j K + 6 Ã^{ij} ∂_j φ],   (2e)

where here and henceforth the indices of conformal quantities are raised with the conformal metric. The lapse α and shift β^i specify the gauge, the full derivative notation d/dt = ∂/∂t − L_β is a partial with respect to time minus a Lie derivative, and the notation "TF" indicates the trace-free part of the expression in parentheses. These quantities are analytically subject to the conditions

H ≡ R + K² − K_ij K^{ij} = 0,   (3a)
M^i ≡ ∇_j (K^{ij} − γ^{ij} K) = 0,   (3b)

known respectively as the Hamiltonian and momentum constraints. When evaluating Eqs. (3) we recompute the physical quantities from the evolved quantities using Eqs. (1). Note that the covariant derivatives of the lapse in Eqs. (2b) and (2d) are with respect to the physical metric, and are used here for compactness. In the code, this is computed according to

∇_i ∇_j α = ∂_i ∂_j α − Γ̃^k_ij ∂_k α − 2 (∂_i φ ∂_j α + ∂_j φ ∂_i α − γ̃_ij γ̃^{kl} ∂_k φ ∂_l α),   (4)

using only the conformal BSSN quantities, and the index of the covariant derivative is raised on the right hand side of Eq. (2b) with the physical metric. The Ricci tensor in Eq. (2d) is also with respect to the physical metric.
We compute it according to the decomposition R_ij = R̃_ij + R^φ_ij, where R̃_ij is the Ricci tensor built from the conformal metric γ̃_ij and R^φ_ij collects the remaining terms involving derivatives of the conformal exponent φ. The notation ∇̃_i denotes the covariant derivative associated with the conformal metric. There are many rules of thumb in the community regarding how to incorporate the constraints into the evolution equations, and, in particular, when to use the independently evolved Γ̃^i as opposed to recomputing the equivalent quantity from the evolved metric. We have made our choices manifest in the writing of the equations here; we largely follow the rules set out in [18]. B. Numerical Implementation For the spatial discretization of Eqs. (2), we take the data to be defined at the centers of the spatial grid cells and use standard O(∆x²) centered spatial differences [19]. To advance this system in time, we use the iterated Crank-Nicholson (ICN) method with 2 iterations [20], which gives O(∆t²) accuracy. We employ interpolated Sommerfeld outgoing wave conditions at the outer boundary [15] on all variables, except for the Γ̃^i, which are kept fixed at the outer boundary. Overall, the code is second-order convergent; specific examples of this are given in Sec. IV below. We explicitly enforce the algebraic constraints that Ã_ij is trace-free and that γ̃ = det(γ̃_ij) = 1 after each ICN iteration. We enforce the trace-free condition by replacing the evolved variable with Ã_ij − (1/3) γ̃_ij γ̃^{kl} Ã_kl, and we enforce the unit determinant condition by replacing the evolved metric with γ̃^{−1/3} γ̃_ij. Both of these constraints are enforced in all of the runs presented below. Since the Γ̃^i are evolved as independent quantities, Eq. (1e) acts as a further constraint on this system of equations. We monitor the behavior of this so-called Γ̃^i constraint along with the Hamiltonian and momentum constraints, Eqs. (3a) and (3b). We use the Paramesh package [21,22] to implement both mesh refinement and parallelization in our code. Paramesh works on logically Cartesian, or structured, grids and carries out the mesh refinement on grid blocks. When refinement is needed, the grid blocks needing refinement are bisected in each coordinate direction, similar to the technique of Ref. [23]. All grid blocks have the same logical structure, with n_x zones in the x-direction, and similarly for n_y and n_z. Thus, refinement of a 3-D block produces eight child blocks, each having n_x n_y n_z zones but with zone sizes in each direction a factor of two smaller than in the parent block. Refinement can continue as needed on the child blocks, with the restriction that the grid spacing can change only by a factor of two, or one refinement level, at any location in the spatial domain. Each grid block is surrounded by a number of guard cell layers that are used to calculate finite difference spatial derivatives near the block's boundary. These guard cells are filled using data from the interior cells of the given block and the adjacent block; see Sec. III. Paramesh can be used in applications requiring FMR, AMR, or a combination of these. The package takes care of creating the grid blocks, as well as building and maintaining the data structures needed to track the spatial relationships between blocks. Paramesh handles all interblock communications and keeps track of physical boundaries on which particular conditions are set, guaranteeing that the child blocks inherit this information from the parent blocks. In a parallel environment, Paramesh distributes the blocks among the available processors to achieve load balance, minimize inter-processor communications, and maximize block locality.
This scheme provides excellent computational scalability. Equipped with Paramesh, the scalability of our code has been tested for up to 256 processors and has demonstrated a consistently good scaling factor for both unigrid (uniform grid) and FMR runs. For unigrid runs, we started with a uniform Cartesian grid of a certain number of grid cells, a fixed number of timesteps, and a certain number of PEs (Processing Elements), and then increased the number of PEs to run a larger job while the number of grid cells per PE remained constant. In this situation, we expect that the total time taken to run the code, including the CPU time used by all of the PEs, should scale linearly with the size of the problem under perfect conditions. In reality, communication overhead makes the scalability less than perfect. We define the scaling factor to be the time expected with perfect scaling divided by the actual time taken. Using FMR, we ran the same simulations as in the unigrid case except that a quarter of the computational domain was covered by a mesh with twice the resolution. Despite the more complicated communication patterns, scalability in the FMR runs is comparable to that in the unigrid runs. The scaling factor of our code is 0.92 for unigrid runs and 0.90 for FMR runs. For the work described in this paper, we are using FMR. For simplicity, we use the same timestep, chosen for stability on the finest grid and with a Courant factor of 0.25, over the entire computational domain; cf. [24]. At the mesh refinement boundaries, we use a single layer of guard cells. Special attention is paid to the restriction (transfer of data from fine to coarse grids) and prolongation (coarse to fine) operations used to set the data in these guard cells, as discussed in the next section. III. TREATMENT OF REFINEMENT BOUNDARIES Careful treatment of guard cells at mesh refinement boundaries is needed to produce accurate and robust numerical simulations. The current version of our code uses a third order guard cell filling scheme that is now included with the standard Paramesh package. (In our terminology the "order of accuracy" refers to the order of errors in the grid spacing. Thus, third order accuracy for guard cell filling means that the guard cell values have errors of order ∆x³, where ∆x is the (fine) grid spacing; note that third order accurate guard cell filling was termed "quadratic" guard cell filling in Ref. [14]. Second order accuracy for the evolution code means that, after a finite evolution time, the field variables have errors of order ∆x².) This guard cell filling proceeds in three steps. The first step is a restriction operation in which interior fine grid cells are used to fill the interior grid cells of the underlying "parent" grid. The parent grid is a grid that covers the same domain as the fine grid but has twice the grid spacing. The restriction operation is depicted for the case of two spatial dimensions in the left panel of Fig. 1. The restriction proceeds as a succession of one-dimensional quadratic interpolations, and is accurate to third order in the grid spacing. Note that the 3-cell-wide fine grid stencil used for this step (nine black circles in the figure) cannot be centered on the parent cell (grey square). In each dimension the stencil includes two fine grid cells on one side of the parent cell and one fine grid cell on the other. The stencil is always positioned so that its center is shifted toward the center of the block (assumed in the figure to be toward the upper left).
This ensures that only interior fine grid points, and no fine grid guard cells, are used in this first step. For the second step, the fine grid guard cells are filled by prolongation from the parent grid. Before the prolongation, the parent grid gets its own guard cells (black squares in the right panel of Fig. 1) from the neighboring grids of the same refinement level, in this case from the coarse grid. The stencil used in the prolongation operation is shown in the right panel of Fig. 1. The prolongation operation proceeds as a succession of onedimensional quadratic interpolations, and is third order accurate. In this case, the parent grid stencil includes a layer of guard cells (black squares), as well as its own interior grid points (grey squares). At the end of this second step the fine grid guard cells are filled to third order accuracy. The third step in guard cell filling is "derivative match-ing" at the interface. 2 With derivative matching the coarse grid guard cell values are computed so that the first derivatives at the interface, as computed on the coarse grid, match the first derivatives at the interface as computed on the fine grid. The first derivative on the coarse grid is obtained from standard second order differencing using a guard cell and its neighbor across the interface. The first derivatives on the fine grid are computed using guard cells and their neighbors across the interface, appropriately averaged to align with the coarse grid cell centers. This third step fills the coarse grid guard cells to third order accuracy. An alternative to derivative matching, which we do not use, is to fill the coarse grid guard cells from the first layer of interior cells of the parent grid. However, we find that such a scheme leads to unacceptably large reflection and transmission errors for waves passing through the interface. These errors are suppressed by derivative matching. Why should third order guard cell filling be adequate to maintain overall second order accuracy? This is a nontrivial question, and there are certain subtleties that arise in our black hole evolutions. In Appendix A, we present a detailed error analysis for our guard cell filling algorithm based on simplified model equations for a scalar field in 1-D. This toy model shares many of the features of the full BSSN system and provides a useful guide to understanding the behavior of our black hole evolutions. We demonstrate that, with this algorithm, second spatial derivatives of the BSSN variables defined by Eqs. (1) acquire first order errors at grid points adjacent to mesh refinement boundaries. These first order errors show up as spikes in a convergence plot for quantities that depend on second spatial derivatives, such as the Hamiltonian constraint. The key result of this analysis, however, is a demonstration that the first order errors in second derivatives do not spoil the overall second order convergence of the evolved variables in Eqs. (1), in spite of the fact that second spatial derivatives appear on the right-hand sides of the evolution equations (2); cf. [25,26]. IV. BLACK HOLE EVOLUTIONS Black hole spacetimes are a particularly challenging subject for numerical study. Astrophysical applications will require that our FMR implementation perform robustly under the adverse conditions which arise in black hole simulations, such as strong time-dependent potentials and propagating signals that become gravitational waves. 
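To make the three-step interface treatment concrete, the following 1-D sketch illustrates quadratic restriction, quadratic prolongation, and derivative matching for cell-centered grids meeting at x = 0. It is a schematic Python illustration under simplifying assumptions (one spatial dimension, a single guard cell, an analytic stand-in field), not the Paramesh routines themselves.

```python
# Rough 1-D illustration of the three-step guard cell filling described above:
# third order (quadratic) restriction, quadratic prolongation, derivative matching.
import numpy as np

def quad_interp(x_stencil, f_stencil, x):
    """Second-degree Lagrange interpolation through three points (errors of O(h^3))."""
    (x0, x1, x2), (f0, f1, f2) = x_stencil, f_stencil
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return f0 * l0 + f1 * l1 + f2 * l2

# Cell-centered grids meeting at x = 0: fine cells (spacing h) on x < 0,
# coarse cells (spacing 2h) on x > 0.  u is an analytic stand-in field.
h = 0.1
x_fine   = -h / 2 - h * np.arange(4)       # fine interior centers, nearest the interface first
x_coarse = h + 2 * h * np.arange(4)        # coarse interior centers
u = np.cos

# Step 1: restriction -- fill the parent cell under the fine grid from three
# interior fine cells, with the stencil shifted toward the block interior.
x_parent = -h
parent_val = quad_interp(x_fine[:3], u(x_fine[:3]), x_parent)

# Step 2: prolongation -- fill the fine guard cell at +h/2 from the parent level,
# whose stencil also uses parent guard cells obtained from the coarse neighbor.
fine_guard = quad_interp((x_parent, x_coarse[0], x_coarse[1]),
                         (parent_val, u(x_coarse[0]), u(x_coarse[1])), h / 2)

# Step 3: derivative matching -- choose the coarse guard cell at -h so that the
# coarse one-sided derivative across x = 0 equals the fine-grid derivative there.
fine_slope   = (fine_guard - u(x_fine[0])) / h      # fine difference centered on the interface
coarse_guard = u(x_coarse[0]) - 2 * h * fine_slope  # enforces matching first derivatives
```

In three dimensions the same one-dimensional quadratic interpolation is applied successively in each coordinate direction, as described above, and the derivative matching is performed with fine-grid differences averaged to align with the coarse cell centers.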
In this section we demonstrate that our techniques perform convergently and accurately in the presence of strong field dynamics and singular "punctures" associated with black hole evolutions, and that these methods can be stable on the timescales required for interesting simulations. The puncture approach to black hole spacetimes generalizes the Brill-Lindquist [27] prescription of initial data for black holes at rest. In this approach, the spacetime is sliced in such a way as to avoid intersecting the black hole singularity, and the spatial slices are topologically isomorphic to R 3 minus one point, a puncture, for each hole. The punctures represent an inner asymptotic region of the slice which can be conformally transformed to data which are regular on R 3 . In this way a resting black hole of mass M located at r = 0 is expressed in isotropic spatial coordinates by γ ij = Ψ 4 BL δ ij , with conformal factor Ψ BL = 1 + M/2r and K ij = 0. A direct generalization of this expression for the conformal factor can be used to represent multiple black hole punctures, and data for spinning and moving black holes can be constructed according to the Bowen-York [28] prescription. A key characteristic which makes this representation appropriate for spacetime simulations is that with suitable conditions on the regularity of the lapse and shift, the evolution equations imply that time derivatives of the data at the puncture are regular everywhere despite the blow up in Ψ BL at the puncture. Numerically, we treat the punctures by a prescription similar to that given in [18]. In the BSSN formulation, this amounts to a splitting of the conformal factor exp(4φ) into a regular part, exp(4φ r ), and non-evolving singular part, exp(4φ s ), given by Ψ BL . The numerical grid is staggered to make sure the puncture does not fall directly on a grid point, and to avoid the large finite differencing error, the derivatives of φ s are specified analytically. We study two test problems, each representing a Schwarzschild black hole in a different coordinate system. Both problems test the performance of our FMR interfaces under the condition that strong-field spacetime features pass through the interfaces. The first case, described in Sec. IV A, is a black hole in geodesic coordinates, in which the data evolve as the slice quickly advances into the singularity. In Appendix B, we present an analytic solution for the development of this spacetime with which we can compare for a direct test of the simulation. Our next test, in Sec. IV B, uses a variant of the "1+log" slicing condition to define the lapse, α, with vanishing shift, β i = 0. This gauge choice allows the slice to avoid running into the singularity, but causes the black hole to appear to grow in coordinate space so that the horizon passes though our FMR interfaces. A. Geodesic Slicing We begin with the numerical evolution of a single puncture black hole using geodesic slicing. As explained in Appendix B, at t = πM the slice Σ π on which our data resides will reach the physical singularity 3 ; because we are not performing any excision, this sets the maximum duration of our evolution. Nevertheless, t ≈ 3M is long enough to test the relevant features of FMR evolution and provide us with a simple analytical solution. In these simulations we locate the puncture black hole at the origin. The spherical symmetry of the problem allows us to restrict the simulation to an octant do- main with symmetry boundary conditions, thereby saving memory and computational time. 
The outer boundaries are planes of constant x, y, or z at 128M each. As noted before, one of the important uses of FMR is to enable the outer boundary to be very far from the origin, at > ∼ 100M . In this case we can apply our exact solution as the outer boundary condition, though any numerical effects produced at this distant boundary are completely irrelevant for the most interesting strong-field region. In this test we are mainly interested in how the FMR boundaries behave near the puncture and under strong gravitational fields. Even with the outer boundary far away we can, by applying multiple nested refinement regions, highly resolve the region near the puncture, as is required to demonstrate numerical convergence. To achieve the desired resolution near the puncture, we use 8 cubical refinement levels, locating the refinement boundaries at the planes 64M , 32M , 16M , 8M , 4M , 2M and 1M in the x-, y-, and z-directions. To test convergence we will examine the results of three runs with identical FMR grid structures, but different resolutions. The lowest resolution run has gridpoints ∆x f = ∆y f = ∆z f = M/16 apart in the finest refinement region near the puncture. The medium resolution run has double the resolution of the first run in each refinement region. The highest resolution run has twice the resolution of the medium resolution run, for a maximum resolution of ∆x f = M/64 near the puncture. The memory demand and computational load per timestep for the low, medium, and high resolution runs are similar to unigrid runs of 32 3 , 64 3 and 128 3 gridpoints. Since the data in our simulations are defined at the centers of the spatial grid cells (see Fig. 1), we must interpolate when extracting data on cuts through the simulation volume. We use cubic interpolation, which is accurate to order ∆x 4 , to insure that the interpolation errors are smaller than the largest differencing errors of order ∆x 2 expected in the simulations; cf. the discussion on postprocessing in [29]. When interpolating at a location near a refinement boundary, we adjust the stencil so that the interpolation involves only data points at the same level of refinement while still maintaining order ∆x 4 errors. In Fig. 2 we plot the conformal metric componentγ xx for the highest resolution run along the x-axis at times t = 0.5, 1.0, 1.5, 2.0, and 2.5M . Note that in these coordinates the event horizon is at r = 0.5M at t = 0 and moving outward toward larger values of the radial coordinate. By t = πM , the singularity is at coordinate position r = 0.5M , so the mesh refinement interface at x = 1M is truly in the strong field regime. Because the slice will hit the singularity at coordinate position x = 0.5M , the metric grows sharply there as the simulation time advances. In the present context though, we are not so much interested in the field values of this well-studied spacetime, as in the simulation errors, that is, the differences between the analytical solution presented in Appendix B and the numerical results. These differences allow us to directly measure the errors in our numerical simulation. At late times, these errors are, not surprisingly, dominated by finite differencing errors in the vicinity of the developing singularity. The plots in Fig. 3 compare these errors along the xaxis at t = 2.5M for the three different resolution runs described above, demonstrating the convergence ofγ xx , A xx , the Hamiltonian constraint, and the x-component of the momentum constraint. 
In each panel, the solid line shows the errors for the high resolution run. The errors for the medium (dashed line) and low (dotted line) resolution runs have been divided by 4 and 16, respectively. That the curves shown lie nearly atop one another is an indication of second order convergence, i.e. that the lowest order error term depends quadratically on the gridspacing, ∆x. That the remaining difference between the adjusted curves near x = 0.5M seems also to decrease quadratically is an indication that the next significant error term is of order ∆x 4 . We achieve convergence to the analytic solution everywhere, from very near x = 0, through the peak region which is approaching the singularity, and in the weak field region. Animations of these results can be found in the APS auxiliary archive EPAPS; see Fig. 3A and the associated animation file in Ref. [30]. Our particular interest is in the region near the refinement interfaces. Fig. 4 shows a close-up ofγ xx near the refinement boundary at x = 2M . In this figure we have included the values of the guardcells used for defining finite-difference stencils near the interface. Again, the curves lie nearly atop one another, indicating second order convergence. A similar close-up of the Hamiltonian constraint H requires a little more explanation. Eq. (3a) involves up to second derivatives of the BSSN data, implemented by finite differencing. sequence of values computed at the nearest point approaching the interface as ∆x → 0 approaches zero at only first order. This is as expected according to the discussion in Appendix A (cf. Fig. 12) for a derived quantity involving second derivatives. We have specifically verified that, as withγ xx , all BSSN variables converge to second order at the refinement boundaries. We also examined the simulation data along cuts away from the x-axis and have found them to be qualitatively similar to those on the axis. In particular, plots and animations of the errors along the line y = z = 0.25M can be found in the EPAPS supplement; see Fig. 3B and the associated animation file in Ref. [30]. This particular 1-D cut is instructive since it includes the strong field region yet has no particular symmetric relation to the solution. The fact that the errors along this line are qualitatively similar to those along the x-axis gives us confidence that the results we display in Fig. 3 are not subject to accidental cancellations due to octant symmetry boundary conditions that might produce artificially small errors. We have also examined the L 1 and L 2 norms of the errors in basic variables and constraints to assess the overall properties of the simulation. Representative results are shown in Fig. 6, where the top panel displays the convergence behavior of the L 2 norm of of the error inγ xx and the bottom panel the convergence of the L 2 norm of H. The errors for the medium (dashed line) and low (dotted line) resolution runs have been divided by 4 and 16, respectively. These curves lie nearly atop the errors for the high resolution run (solid line), indicating the second order convergence of these error norms; see Appendix C and Eqs. (C3) and (C4). B. 1 + log slicing Having rigorously tested the code against an analytic solution, we now use a different coordinate condition to study a longer-lived run with nontrivial, nonlinear dynamical behavior in the region of FMR interface boundaries. 
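The rescaling used in these comparisons is simple to reproduce. The sketch below, in Python, assumes hypothetical arrays err_low, err_med, and err_high holding the errors of the three runs (grid spacings 4h, 2h, and h) sampled at common physical points; it illustrates the bookkeeping only and is not code from the production evolution.

```python
# Convergence bookkeeping for a three-resolution comparison against an exact solution.
import numpy as np

def rescaled_curves(err_low, err_med, err_high):
    # For clean second order convergence the three returned curves should overlay:
    # errors scale as (grid spacing)^2, so divide the low and medium resolution
    # errors by 16 and 4, respectively.
    return err_low / 16.0, err_med / 4.0, err_high

def observed_order(err_low, err_med, err_high):
    """Estimate the convergence order from discrete L2 (RMS) norms of the errors."""
    n_low  = np.sqrt(np.mean(err_low**2))
    n_med  = np.sqrt(np.mean(err_med**2))
    n_high = np.sqrt(np.mean(err_high**2))
    # Each halving of the grid spacing should reduce the norm by 2**order.
    return np.log2(n_low / n_med), np.log2(n_med / n_high)
```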
For this purpose, we again use zero shift but with a modified "1+log" slicing condition given by where insertion of the factor Ψ BL = 1 + M/2r, originally recommended by [18], has proven to enhance convergence near the puncture in our simulations. For the numerical experiments with 1+log slicing, the grid structure, including the locations of the mesh refinement interfaces, is the same as in the geodesic slicing case. We carry out three runs, with low, medium and high resolution defined as before. A 1+log evolution serves as an excellent numerical experiment to test the robustness of our mesh refinement interfaces. The 1+log family has been well-studied in unigrid runs in the past, so the generic behavior of this coordinate system is known and provides a general context for comparison with our mesh refinement results. Because the 1+log slicing is singularity avoiding, in contrast to the geodesic slicing case, simulations in a 1+log gauge are known to last ∼ 30M −40M , giving us an opportunity to study the properties of our mesh refinement interfaces in longer duration runs. Finally, as shown by Fig. 7, as the lapse (right panel of the figure) collapses around the singularity, a strong gradient region in the metric (left panel of the figure) moves outward, passing through mesh refinement boundaries in the process. According to unigrid runs already in the literature (e.g., [18]) choosing an appropriate shift, such as the Gamma-driver shift, would cause the evolution to freeze, preventing catastrophic growth in the metric functions and confining the strong field behavior to the region r < 10M . This also increases the stable evolution time of the simulations. For our purposes here, however, we choose to let the strong gradient region move outward because we specifically wish to study how well the mesh refinement interfaces handle a strong dynamical potential on timescales t > 10M . We consider this an important test, since such phenomena may develop near refinement boundaries in the course of realistic astrophysical simulations of multiple black holes. Having made this choice, we expect to see exactly what appears in Fig. 7. The metric functionγ xx (left panel) grows due to well-understood grid stretching related to the collapse of the lapse (right panel) and the fact that grid points are falling into the black hole. The peak of the metric simultaneously moves to larger coordinate position. We expect, therefore, that at some point certain regions of the simulations will no longer exhibit second order convergence because the gradients in the metric simply grow too large, because the peak of the metric moves into a region of lower refinement that cannot resolve the gradients already present in the metric at that point, or because of a combination of the two. The simulations in this gauge, nonetheless, remain second order convergent long enough for us to study the effects of the strong potential passing through the innermost mesh refinement interfaces. Because we do not have an analytic solution for the 1+log case to use in our convergence tests, we show threepoint convergence plots instead. Specifically, for a given field f , we plot (f low − f med )/4 using a dashed line and f med − f high using a solid line. Since the three different resolutions "low", "medium", and "high" are related to each other by factors of two, the two lines in each panel should overlay exactly for perfect second order convergence. Fig. 
8 shows such a three-point convergence plot forγ xx andà xx for a 1-D cut along the x-axis. The left panels, showing data from t = 8M , demonstrate that the metric and other variables are second order convergent everywhere at that time. Overall, we continue to see second order convergence in the evolved variables, constraints, and norms until t ∼ 10M . The convergent behavior starts to break down around t ∼ 10M due to difficulties with resolving the sharp feature in the metric. In the region 1M ≤ x ≤ 2M , between the first and second FMR boundaries, the peak itself grows sharply and the coarser grid is not sufficient to provide the resolution needed for convergent behavior. For 2M ≤ x ≤ 4M , the grid is again coarsened by a factor of 2 and is not able to resolve adequately the steep gradient on the leading edge of the metric peak. A snapshot at t = 16M is shown in the right panels of Fig. 8; by this time, the peak of the metric has passed through two refinement interfaces (at x = 1M and x = 2M ). The time development of these errors, and in particular their departure from second order convergence, can be seen in the animations available in the EPAPS supplement; see Fig. 8A and the associated animation file in Ref. [30]. Throughout the duration of the runs the region x > ∼ 5 does remain second order convergent, even though the grid is further coarsened by factors of two at x = 8M , 16M , 32M , and 64M , since all the fields change very slowly as they approach the asymptotically flat regime. The simulations will continue to run stably past this point (to approximately t ≈ 35M ), but the resolution in the regions to the right of the interface at x = 1M is not sufficient to produce convergent results, as was expected. The Hamiltonian and momentum constraints along the x-axis are shown in Fig. 9. Three curves are plotted in each panel. The errors for the highest resolution run are given by the solid line. The errors for the medium (dashed line) and low (dotted line) resolution runs have been divided by factors of 4 and 16, respectively. The constraints are second order convergent in the bulk for times t < ∼ 10M , when the resolution is sufficient to handle the growing feature in the metric (left panels). As expected, H exhibits first order convergent spikes at mesh refinement interfaces; cf. Fig. 5 and Appendix A. For t > ∼ 10M , as the peak of the fields propagates into the coarser grid regions past x = 2M , the lowest resolutions are not sufficient to resolve the rising slope of the metric, and, like the evolved variables (Fig. 8), the constraints no longer demonstrate second order convergence. The right panels of Fig. 9 show the constraints at t = 16M , right after the peak of the metric passes through the refinement interface at x = 2M . See Fig. 9A and the associated animation file in Ref. [30] for animations of these data. The behavior of the simulations at locations away from the x-axis is qualitatively similar to that shown in Figs. 8 and 9. Plots and animations of the errors along the line y = z = 0.25M are available in the EPAPS supplement; see Figs. 8B and 9B and their associated animation files in Ref. [30]. We have also examined the L 1 and L 2 norms of the errors to assess the overall behavior of these runs, and display representative results in Fig. 10. The L 2 norms of the errors in the basic variablesγ xx andà xx are shown in the left top and bottom panels, respectively, using 3-point convergence plots. 
The dashed lines show the difference between the low and medium resolution results divided by 4, and the solid lines show the difference between the medium and high resolution results, demonstrating the overall second order convergence of these simulations at early times. The L_1 norm of H is displayed in the top right panel, where the solid line gives the errors for the high resolution run. The errors for the medium (dashed line) and low (dotted line) resolution runs have been divided by factors of 4 and 16, respectively, to show second order convergence, as expected from Eq. (C7). In the lower right panel the L_2 norm of H is shown, with the solid line giving the results for the high resolution run. As discussed in Appendix C, the errors for the medium (dashed line) and low (dotted line) resolution runs have been divided by factors of 2^{3/2} and 4^{3/2} = 8 to account for the effects of significant first order convergent errors in H at the mesh refinement boundaries, in addition to the second order convergent errors in the bulk; see Eq. (C8). One final feature of these simulations, the high frequency noise near the origin seen in the right panels of Fig. 9, requires some explanation. First of all, it is not related to the presence of the refinement boundaries; in particular, we have reproduced it in unigrid runs and with an independently-written, 1-D (spherically symmetric) code. Higher resolution exacerbates this problem: both the frequency and amplitude of the noise increase with resolution. We have found the location of this noise to be independent of resolution and the number and positions of FMR boundaries. This feature, which we call the "point-two M problem," originates around r ∼ 0.2M. It becomes most evident at times t > 10M, first appearing in the lapse and K, which are directly coupled, and then eventually mixing into all of the extrinsic curvature variables. For the duration of the evolutions the noise remains within the region 0.0M ≲ x ≲ 0.5M. Outside this region, all basic variables demonstrate satisfactory second order convergence, including at refinement boundaries, up to times t ∼ 10M. Having chosen a generally accepted gauge, and having focused on effects of the mesh refinement interfaces in this work, we have not fully investigated the cause of, nor possible remedies for, this apparent pathology. We note it here, however, as an interesting topic for future investigation.

V. SUMMARY

This paper demonstrates that fixed mesh refinement boundaries can be located in the strong field region of a dynamical black hole spacetime when the interface conditions are handled properly. This result was verified through simulation of a Schwarzschild black hole in geodesic coordinates, for which we have an analytic solution for comparison, and through simulations of Schwarzschild in a variation of the 1+log (singularity avoiding) slicing with zero shift. Mesh refinement technology, therefore, is a viable way to use computational resources more efficiently and to simulate the very large spatial domains needed to compute the dynamics of the source interactions and allow extraction of the resulting gravitational waveforms. Our method for handling the interface conditions, based in part on the Paramesh infrastructure, is detailed. For these simulations we find that, in handling the interface condition between FMR levels, third order guard cell filling is sufficient for overall second order accuracy in the simulations.
By nesting several levels of mesh refinement regions, we are able to resolve the puncture convergently while simultaneously pushing the outer boundary of our domain to 128M and keeping the computational problem size modest. We estimate that for only a 12% increase in the computational size of the problem, we could push the outer boundary to 256M; moving the outer boundary out even farther will be possible for production runs on larger machines. Combined with our earlier results showing that gravitational waves pass through such FMR interfaces without significant reflections [14], we have now studied, in detail, the effects of FMR interfaces on the two primary features, waves and time-varying strong potentials, of astrophysically interesting spacetimes. In this paper, we have evolved single black holes using gauges with zero shift in order to produce test problems in which strong-field spacetime features with steep gradients pass through mesh refinement interfaces. In more realistic, astrophysical simulations of multiple black holes, we expect to use non-zero shift prescriptions. While a shift vector will allow us to control certain aspects of the dynamics, we still expect some strong, time-varying signals to propagate across mesh refinement boundaries. We are currently implementing non-zero shift conditions into our FMR evolutions and will report on this work in a separate publication.

APPENDIX A: ERROR ANALYSIS OF GUARD CELL FILLING SCHEME

To help pave the way for understanding the behavior of our black hole evolutions near mesh refinement boundaries, we provide here a detailed analysis of a toy model for a scalar field in one spatial dimension, using the same third order guard cell filling algorithm detailed in Sec. III. The model equations are

ψ̇ = π ,    π̇ = ψ″ ,    (A1)

where the dot denotes a time derivative and primes denote space derivatives. These equations can be solved numerically using the same twice-iterated Crank-Nicholson algorithm used to evolve our black hole spacetimes. The fields at timestep n + 1 are given in terms of the fields at timestep n by Eq. (A2), where D_2 is a finite difference operator approximating the second spatial derivative, and D_4 = (D_2)^2. Consider for the moment a uniform spatial grid. If D_2 is the usual second order accurate centered difference operator, the dominant source of error for ψ^{n+1}_j comes from the term proportional to ∆t^3. This term has the wrong numerical coefficient as compared to the Taylor series expansion of the exact solution. The dominant sources of error for π^{n+1}_j come from the term proportional to ∆t^3, which also has the wrong numerical coefficient, and from the second order error in D_2 ψ^n_j. For a uniform grid the dominant error in D_2 ψ^n_j is D_4 ψ^n_j ∆x^2/12, so the leading errors for a single timestep are given in Eq. (A3). For ∆t ∼ ∆x, each of these one-time-step errors is proportional to ∆x^3. If we evolve the initial data to a finite time T, the O(∆x^3) errors accumulate over N = T/∆t timesteps, resulting in second order errors. (This is a simplification: the dominant error after N timesteps includes other terms of order ∆x^2 in addition to the product of N and the one-time-step error. These other terms include, for example, the product of an order N^3 coefficient and a one-time-step error of order ∆x^5.) Thus, the basic variables ψ and π are second order convergent on a uniform grid.
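As a concrete illustration of the uniform-grid case, the following is a minimal Python sketch of the toy system (A1) evolved with an iterated Crank-Nicholson step and a second order centered D_2 operator. It is meant only to mirror the error counting above, not our production BSSN code; the simple treatment of the outer grid points, and the use of two iterations in the update, are assumptions made for the sketch.

import numpy as np

def d2(f, dx):
    """Second order accurate centered second derivative; the endpoints are
    simply copied from their neighbors, which is adequate for a short toy
    evolution in which the pulse stays away from the domain edges."""
    out = np.empty_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    out[0], out[-1] = out[1], out[-2]
    return out

def icn_step(psi, pi, dt, dx, n_iter=2):
    """One iterated Crank-Nicholson step for psi_t = pi, pi_t = psi_xx
    (two iterations shown; the production code's iteration count may differ)."""
    psi_new, pi_new = psi.copy(), pi.copy()
    for _ in range(n_iter):
        psi_half = 0.5 * (psi + psi_new)   # average of old and current guess
        pi_half = 0.5 * (pi + pi_new)
        psi_new = psi + dt * pi_half
        pi_new = pi + dt * d2(psi_half, dx)
    return psi_new, pi_new

# Evolve a Gaussian pulse on a uniform grid with Courant factor 1/3.
x = np.linspace(-60.0, 60.0, 769)
dx = x[1] - x[0]
dt = dx / 3.0
psi = np.exp(-x**2 / 25.0)
pi = np.zeros_like(x)
for _ in range(200):
    psi, pi = icn_step(psi, pi, dt, dx)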
On a non-uniform grid, guard cell filling introduces errors of order ∆x^3 in ψ at grid points adjacent to the boundary. This leads to errors of order ∆x in D_2 ψ^n_j and 1/∆x in D_4 ψ^n_j. From Eq. (A2b) we see that in one timestep π can acquire errors of order ∆x^2. The concern is that these errors might accumulate over N = T/∆t timesteps to yield first order errors. This, in fact, does not happen. Simple numerical experiments show that ψ and π are second order convergent on a non-uniform grid with third order guard cell filling. We can understand this result with the following heuristic reasoning. The numerical algorithm of Eq. (A2) approximates, as does any mathematically sound numerical scheme, the exact solution of the scalar field equations (Eq. (A1)), in which the field π propagates along the light cone. The "bulk" errors displayed in Eq. (A3b) accumulate along the past light cone to produce an overall error of order N ∆x^3 ∼ ∆x^2 at each spacetime point. Errors in guard cell filling, which occur at a fixed spatial location, do not accumulate over multiple timesteps since the past light cone of a given spacetime point will cross the interface (typically) no more than once. The characteristic fields for the system (A1) are π ± ψ′, so that ψ′, like π, propagates along the light cone. As a result, the value of ψ at a given spacetime point is determined by data from the interior of the past light cone. From Eq. (A2a) we see that the one-time-step errors for ψ due to guard cell filling are order ∆x^3. These errors can accumulate over N timesteps to yield errors of order N ∆x^3 ∼ ∆x^2. The derivatives ψ′ and ψ″ are computed at finite time T by evolving the ψ, π system for T/∆t timesteps and then taking the centered, second order accurate numerical derivatives of ψ. Numerical experiments show that ψ′ and ψ″, defined in this way, are second order convergent on a non-uniform grid with third order guard cell filling. (The error in ψ″ is fairly noisy, but the overall envelope containing this noise is second order convergent.) Continuing with our heuristic discussion, we can understand the second order convergence of ψ″ as follows. Let (ψ^n_j)_err ≈ E^n_j ∆x^2 denote the error in ψ at grid point n, j, where the coefficient E^n_j is independent of ∆x. Some of this error is due to guard cell filling at the mesh refinement interface and some is due to the accumulation of "bulk" errors (A3). Now, the second derivative of ψ, computed as D_2 ψ^n_j = (ψ^n_{j+1} − 2ψ^n_j + ψ^n_{j−1})/∆x^2, will contain errors of the form (D_2 ψ^n_j)_err = E^n_{j+1} − 2E^n_j + E^n_{j−1}. Since the bulk errors are smooth, the bulk contribution to E^n_{j+1} − 2E^n_j + E^n_{j−1} will scale as ∆x^2. It is also the case that the errors due to guard cell filling are smooth. This is because the value of ψ at any given point is determined by the interior of the past light cone, so its error includes an accumulation of guard cell filling errors along the history of the mesh refinement interface. In the limit of high resolution this accumulation of error approaches the same value at neighboring grid points j − 1, j, and j + 1. In other words, the guard cell filling contribution to E^n_{j+1} − 2E^n_j + E^n_{j−1} approaches zero as ∆x → 0. Evidently, the guard cell filling contribution, like the bulk contribution, scales like ∆x^2. The discussion above indicates that we can evolve the scalar field system of Eq. (A1) for a finite time on a grid with mesh refinement, numerically compute the second derivative of ψ, and find that ψ″ is second order accurate.
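For reference, the sketch below shows one way a "third order" guard cell fill can be realized: quadratic (3-point) Lagrange interpolation from the neighboring coarse grid, whose interpolation error is O(∆x^3), consistent with the error counting above. The actual prolongation stencil used by the Paramesh-based code is not reproduced in this excerpt, so this is an illustrative assumption rather than the production algorithm.

import numpy as np

def fill_fine_guardcells(coarse_x, coarse_f, guard_x):
    """Fill fine-grid guard cells by quadratic (3-point) Lagrange interpolation
    from the neighboring coarse grid, giving interpolation errors of O(dx^3)."""
    coarse_x = np.asarray(coarse_x, dtype=float)
    coarse_f = np.asarray(coarse_f, dtype=float)
    guard_x = np.asarray(guard_x, dtype=float)
    guard_f = np.empty_like(guard_x)
    for n, xg in enumerate(guard_x):
        j = np.searchsorted(coarse_x, xg)          # index of enclosing coarse cell
        j = int(np.clip(j, 1, len(coarse_x) - 2))
        xs, fs = coarse_x[j - 1:j + 2], coarse_f[j - 1:j + 2]
        val = 0.0
        for a in range(3):                         # Lagrange basis polynomials
            w = 1.0
            for b in range(3):
                if a != b:
                    w *= (xg - xs[b]) / (xs[a] - xs[b])
            val += w * fs[a]
        guard_f[n] = val
    return guard_f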
Without shift, the BSSN equations (2c) and (2d) are similar to the scalar field equations, with γ̃_ij playing the role of ψ and −Ã_ij playing the role of π. This feature was one of the original motivations behind the BSSN system. Note that the term analogous to ψ″ in the π̇ equation is the term γ̃^{lm} γ̃_{ij,lm} contained in the trace-free part of the Ricci tensor, which appears on the right-hand side of Eq. (2d). Obviously there are many other terms that appear on the right-hand side of the dÃ_ij/dt equation. We can model the effect of these terms by including a fixed function on the right-hand side of the π̇ equation:

ψ̇ = π ,    π̇ = ψ″ − χ″ .    (A4)

We have written the fixed function as the second derivative of χ. For simplicity we choose χ to depend on x only, the most relevant dependence for our consideration of behavior across spatial resolution interfaces. The general solution of this system is then

ψ(t, x) = ψ̃(t, x) + χ(x) ,    π(t, x) = π̃(t, x) ,    (A5)

where ψ̃, π̃ is a solution of the homogeneous wave equation (Eq. (A1)). The extended model system of Eq. (A4) can be solved numerically with the discretization

ψ^{n+1}_j = ψ^n_j + ∆t π^n_j + . . .    (A6a)
π^{n+1}_j = π^n_j + ∆t (D_2 ψ^n_j − D_2 χ_j) + . . .    (A6b)

The higher order terms in ∆t, not shown here, come from the iterations in our iterated Crank-Nicholson algorithm. It is important to recognize that the χ″ term is expressed as the numerical second derivative of χ_j and not as the discretization of the analytical second derivative, (χ″)_j. The reason for this choice is that D_2 χ_j mimics the effect of the extra terms on the right-hand side of Eq. (2d) which, in our BSSN code, depend on the discrete first and second derivatives of the BSSN variables φ and Γ̃^i. From the discussion of the wave equation (Eq. (A1)) we can anticipate the results of numerical experiments with the model system (Eq. (A4)) on a non-uniform grid. For arbitrary initial data ψ^0_j, π^0_j, the numerical solution is given by

ψ^n_j = ψ̃^n_j + χ_j    (A7a)
π^n_j = π̃^n_j    (A7b)

where ψ̃^n_j, π̃^n_j is the numerical solution of the homogeneous wave equation (Eq. (A1)) with initial data ψ^0_j − χ_j, π^0_j. The order of convergence for ψ^n_j is determined by how rapidly, as ∆x → 0, the numerical solution in Eq. (A7a) approaches the exact solution ψ(t, x) = ψ̃(t, x) + χ(x). Since χ_j is simply the projection of the analytic function χ(x) onto the numerical grid, the term χ_j in the numerical solution (Eq. (A7a)) does not contribute any error. We have already determined that on a non-uniform grid ψ̃^n_j approaches ψ̃(t, x) with second order accuracy. Thus, we expect ψ^n_j to be second order convergent. What about derivatives of ψ? The order of convergence for D_1 ψ^n_j is found by comparing the discrete derivative D_1 ψ^n_j = D_1 ψ̃^n_j + D_1 χ_j to the analytic solution ψ′ = ψ̃′ + χ′. Again, as we have discussed, D_1 ψ̃^n_j approaches ψ̃′ with second order errors. It is also easy to see that the numerical derivative D_1 χ_j approaches χ′ with second order accuracy. Away from grid interfaces this is obviously true, assuming that D_1 is the standard second order accurate centered difference operator. For points adjacent to a grid interface, guard cell values for ψ are filled with third order errors. These errors lead to second order errors in D_1 ψ^n_j. Overall then, we expect second order convergence for D_1 ψ^n_j. The expected convergence rates for ψ and ψ′ are confirmed by the results shown in Fig. 11.
For these numerical tests, we chose χ(x) = exp((x − 50)/10) and initial data

ψ(0, x) = 100 e^{−(x+10)^2/400} + e^{(x−50)/10}    (A8a)
π(0, x) = (1/2) (x + 10) e^{−(x+10)^2/400} .    (A8b)

Each set of curves shows the errors at three different resolutions, ∆x = 5/16, 5/32, and 5/64, where ∆x is the fine grid spacing. The evolution time is 20.83, corresponding to 200, 400, or 800 timesteps (depending on the resolution) and a Courant factor of 1/3. The order of convergence for the second derivative of ψ is determined from a comparison of D_2 ψ^n_j = D_2 ψ̃^n_j + D_2 χ_j and the analytic solution ψ″ = ψ̃″ + χ″. We have seen that D_2 ψ̃^n_j approaches ψ̃″ with second order accuracy. The situation for D_2 χ_j, however, is somewhat different. Away from any grid interface D_2 χ_j will approach χ″ with second order accuracy, assuming D_2 is the standard second order accurate finite difference operator. But for points adjacent to the interface, and only those points, guard cell filling errors of order ∆x^3 in χ will lead to first order errors in D_2 χ_j. Thus, we expect to find second order convergence for D_2 ψ^n_j at all points except those points adjacent to the interface. Points adjacent to the interface should be first order convergent. Fig. 12 shows the results of our convergence test for ψ″. The spikes at the interface (x = 0) appear because the two grid points adjacent to the interface are only first order convergent. Elsewhere, the plot shows second order convergence. The behavior demonstrated in Fig. 12 also occurs in the BSSN system when we examine the convergence plot for the Hamiltonian constraint. In graphing the Hamiltonian constraint H, we are comparing a combination of grid functions that includes second derivatives of the BSSN variables to the exact analytical solution for H, namely, zero. We therefore expect spikes to appear at interfaces in the convergence plot for the Hamiltonian constraint, and indeed they do (see, for example, Figs. 3 and 9). We wish to emphasize that the lack of second order convergence for second spatial derivatives at grid points adjacent to the interfaces is not due to any error in our code, or a shortcoming of the numerical algorithm. Since the undifferentiated variables are second order convergent everywhere, we can always assure second order convergence of their derivatives by using suitable finite difference stencils. For example, in computing D_2 ψ^n_j from ψ^n_j we can use a second order accurate one-sided operator D_2 that avoids using guard cell values altogether. With such a choice the spikes in Fig. 12 disappear, and D_2 ψ^n_j is everywhere second order convergent. In our BSSN code, it is most convenient to compute the Hamiltonian constraint using the same centered difference operator D_2 that we use for the evolution equations. As a consequence, spikes appear at the grid interfaces in the convergence plots (Figs. 3 and 9).

APPENDIX B: ANALYTIC SOLUTION FOR GEODESICALLY SLICED SCHWARZSCHILD

In a numerical simulation, geodesic coordinates are obtained by using unit lapse and vanishing shift. This implies that the grid points will follow geodesic trajectories through the physical spacetime. We present here a physical derivation of the Schwarzschild spacetime metric in this well-known coordinate system, based on those geodesics; an alternate derivation is available in Refs. [6,7,31]. The Schwarzschild geometry in standard coordinates is given by

ds^2 = −g_TT dT^2 + g_RR dR^2 + R^2 dΩ^2 ,    (B1)

where g_TT = g_RR^{−1} = (1 − 2M/R).
To express this metric in geodesic coordinates, consider a spatial Cauchy surface Σ 0 in a 4-manifold M and a congruence of radial geodesics crossing Σ 0 . Let the affine parameter τ for each geodesic be zero at Σ 0 . Considering subsequent slices of constant proper time, we can set a global time τ which we use to define a new foliation Σ τ of M. Each geodesic in the congruence is labeled by the coordinates of its initial "starting" point in Σ 0 . The radial position ρ of the starting point in Σ 0 can thus be promoted to a new radial coordinate on M to pair with the time coordinate τ . Now we will derive the metric components in this τρ coordinate system. The affine parameter τ induces the normalized vector n a = (∂/∂τ ) a tangent to the geodesic, implying that the lapse g τ τ = −1. Assuming the geodesics begin at rest so that n a is normal to Σ 0 implies that g τ ρ = 0 initially. Furthermore, the geodesic equation n a ∇ a n b = 0 requires that g τ ρ,τ = 0. Thus g τ ρ (the shift) must remain zero. A straightforward transformation from Eq. (B1) for the remaining metric coefficient yields The term in the denominator is the energy defined for geodesics on this spacetime, and it is conserved along the geodesics: n a ∇ a (n b ξ b ) = 0, where ξ b = (∂/∂T ) b is the timelike Killing field. On Σ 0 one can evaluate this energy as −g 0 T T , where g 0 ab = g ab | T =0 . This gives A similar application of conservation of energy in n a n a = −1 yields [6,7]: This expression provides an implicit definition for R = R(ρ, τ ), which is easily inverted numerically to high precision. To perform numerical evolutions the geodesic coordinates have a drawback: the physical singularity is already present on the initial slice (τ = 0) at ρ = 0. We can avoid this problem by going to isotropic coordinates (r, θ, φ) by means of the transformation ρ = r (1 + M/2r) 2 . We see that ρ → ∞ both as r → 0 and as r → ∞. For real r the minimum value of ρ is ρ = 2M (the horizon) at r = M/2, now the surface closest to the physical singularity on the initial slice. Substituting ρ = 2M into Eq. (B4) we see that geodesics originating on this surface reach the physical singularity, R = 0, at time τ = πM , defining the maximum temporal extent of our coordinate system. Returning to the metric, the transformation to isotropic coordinates gives us g ρρ = ∂R ∂ρ Expressions for the extrinsic curvature, which have not previously appeared in the literature, can be derived in a similar manner. As we know from the ADM formalism [17,32], the extrinsic curvature can be viewed as the rate of change of the spatial metric when the lapse is unity and the shift is zero. This gives To evaluate the partial derivatives in Eqs. (B6) and (B8), we note that if we have a function f = f (u, v, w) = 0 defining u as an implicit function of v and w, we can use the chain rule and the implicit function theorem to show that There are no off-diagonal terms. Observe that we have only partial derivatives of f that can be obtained analytically from Eq. (B4), and easily evaluated numerically. APPENDIX C: DEFINITION OF Ln NORMS AND SCALING PROPERTIES For a function f defined on a uniform grid ∆x = ∆y = ∆z ≡ h, we take the L n norm of the function to be where f jkm is the value of the function at grid point (j, k, m); cf. [33]. If the function is defined on a nonuniform grid with ℓ refinement levels, the L n norm becomes where h i is the cell size on the i th grid. 
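The per-level sum in that norm is simple to implement; the Python fragment below is an illustrative, volume-normalized version for a grid with several refinement levels. The overall normalization convention (whether or not one divides by the total volume) is not fixed by the excerpt above, so the normalized form used here is an assumption; it does not affect the convergence scalings discussed next.

import numpy as np

def ln_norm(errors_by_level, h_by_level, n=2, normalize=True):
    """Volume-weighted L_n norm of an error defined on a grid with several
    refinement levels.

    errors_by_level : list of flat arrays, one per refinement level
    h_by_level      : list of cell sizes h_i, one per refinement level
    """
    num, vol = 0.0, 0.0
    for err, h in zip(errors_by_level, h_by_level):
        err = np.asarray(err, dtype=float)
        num += np.sum(np.abs(err) ** n) * h**3     # sum of |f|^n dV on this level
        vol += err.size * h**3                     # volume covered by this level
    if normalize:
        num /= vol
    return num ** (1.0 / n)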
In our work, the function f denotes an error, either derived from comparison with an analytic solution (for example, the Hamiltonian constraint for all our runs, and the basic variables with geodesic slicing) or from comparison with a run at a different resolution as part of a three-point convergence test (for the basic variables with 1 + log slicing). It is useful to work out the scaling behavior expected when error norms from runs with different resolutions are compared. Recall that, for the runs presented in this paper, h_low = 2h_med = 4h_high, and the errors in basic variables such as γ̃_pq and Ã_pq are expected to scale as f ∼ h^2 everywhere. Let N be the characteristic number of zones along one dimension of a simulation volume, so that h ∼ 1/N. We focus on the L_1 and L_2 norms, which are the ones generally used to examine errors in numerical relativity. The resulting scalings, Eqs. (C3) and (C4), show that both the L_1 and L_2 norms should exhibit second order convergence in this case. Note that these expressions are valid not only for unigrid runs but also for our FMR simulations, since the refinement structure of the grid is the same in all these runs. The situation regarding the L_1 and L_2 norms of the Hamiltonian constraint H is somewhat more complicated. As we have shown in Sec. IV and Appendix A, H has errors that scale as f ∼ h on refinement boundaries and as f ∼ h^2 in the bulk. For the runs with geodesic slicing, the errors in H in the bulk near the puncture dominate over those at the refinement boundaries; see Fig. 3. Since these errors show second order convergence, f ∼ h^2, we expect that both the L_1 and L_2 norms will also scale as ∼ h^2, as in Eqs. (C3) and (C4). However, in the case of 1 + log slicing, the first order convergent errors on the refinement boundaries play a larger role. Since h ≪ 1, these boundary errors dominate the L_2 norm, leading to the scalings for the norms of H in 1 + log slicing given in Eqs. (C7) and (C8).
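The counting behind the quoted rescaling factors can be sketched directly. The estimate below is not a reproduction of the numbered equations, which do not appear in this excerpt, but a back-of-the-envelope version under the stated error model: errors of size c_1 h on the roughly N^2 points adjacent to refinement interfaces, c_2 h^2 on the roughly N^3 bulk points, h ∼ L/N with the domain size L held fixed, and volume-normalized norms with V ∼ L^3.

% Mixed first/second order error model for the Hamiltonian constraint H
\begin{align*}
  L_1(H) &\sim \frac{1}{V}\Big[ N^3 h^3\, c_2 h^2 + N^2 h^3\, c_1 h \Big]
          \sim c_2 h^2 + \frac{c_1}{L}\, h^2 \;\sim\; h^2 ,\\[4pt]
  \big[L_2(H)\big]^2 &\sim \frac{1}{V}\Big[ N^3 h^3\, (c_2 h^2)^2 + N^2 h^3\, (c_1 h)^2 \Big]
          \sim c_2^2 h^4 + \frac{c_1^2}{L}\, h^3 \;\sim\; h^3 \qquad (h \ll 1).
\end{align*}

Thus the L_1 norm of H remains second order convergent, while L_2(H) ∼ h^{3/2}: halving h reduces it by 2^{3/2}, and reducing h by a factor of four reduces it by 4^{3/2} = 8, consistent with the rescaling factors applied in the figures.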
2019-04-14T02:22:34.805Z
2004-03-11T00:00:00.000
{ "year": 2004, "sha1": "6457f869fade2b683b44520b8b36682d52eebc22", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/0403048", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d3566d394b46b923daab9eaeb5b1d96b3fc2b0e3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119061354
pes2o/s2orc
v3-fos-license
K2-287b: an Eccentric Warm Saturn transiting a G-dwarf We report the discovery of K2-287b, a Saturn mass planet orbiting a G-dwarf with a period of $P \approx 15$ days. First uncovered as a candidate using K2 campaign 15 data, follow-up photometry and spectroscopy were used to determine a mass of $M_P = 0.317 \pm 0.026$ $M_J$, radius $R_P = 0.833 \pm 0.013$ $R_J$, period $P = 14.893291 \pm 0.000025$ days and eccentricity $e = 0.476 \pm 0.026$. The host star is a metal-rich $V=11.410 \pm 0.129$ mag G dwarf for which we estimate a mass $M_* = 1.056$ $M_\odot$, radius $R_* = 1.07 \pm 0.01$ $R_\odot$, metallicity [Fe/H] = $0.20 \pm 0.05$ and $T_{eff} = 5673 \pm 75$ K. This warm eccentric planet with a time-averaged equilibrium temperature of $T_{eq} \approx 800$ K adds to the small sample of giant planets orbiting nearby stars whose structure is not expected to be affected by stellar irradiation. Follow-up studies on the K2-287 system could help in constraining theories of migration of planets in close-in orbits. INTRODUCTION Corresponding author: Andrés Jordán ajordan@astro.puc.cl Giant extrasolar planets that orbit their host stars at distances shorter than ≈ 1 AU but farther away than the hot-Jupiter pile-up at ≈ 0.1 AU, are termed "warm" giants. They have been efficiently discovered by radial velocity (RV) surveys (e.g., Hébrard et al. 2016;Jenkins et al. 2017), and have a wide distribution for their eccentricities, with a median of ≈ 0.25. The origin for these eccentricities is a topic of active research because the migration of planets through interactions with the protoplanetary disc predicts circular orbits (Dunhill et al. 2013), while planet-planet scattering after disc dispersal at typical warm giant orbital distances should generate usually planet collisions rather than high eccentricity excitations (Petrovich et al. 2014). Transiting giants are key for constraining theories of orbital evolution of exoplanets. Besides providing the true mass of the planet, follow-up observations can be carried out to constrain the sky-projected spin-orbit angle (obliquity) of the system, which is a tracer of the migration history of the planet (e.g., Zhou et al. 2015;Esposito et al. 2017;Mancini et al. 2018). While the obliquity for hot giant (P < 10 d) systems can be affected by strong tidal interactions (Triaud et al. 2013;Dawson 2014), the periastra of warm giants are large enough that significant changes in the spin of the outer layers of the star are avoided, and thus the primordial obliquity produced by the migration mechanism should be conserved. Unfortunately, the number of known transiting warm giants around nearby stars is still very low. In addition to the scaling of the transit probability as a −1 , the photometric detection of planets with P > 10 days requires a high duty cycle, which puts strong limitations on the ability of ground-based wide-angle photometric surveys (e.g., Bakos et al. 2004;Pollacco et al. 2006;Bakos et al. 2013) to discover warm giants. From the total of ≈ 250 transiting giant planets detected from the ground, only 5 have orbital periods longer than 10 d (Kovács et al. 2010;Howard et al. 2012;Lendl et al. 2014;Brahm et al. 2016a;Hellier et al. 2017). On the other hand, the Kepler and CoRoT space missions found dozens of warm giants (e.g. Deeg et al. 2010;Bonomo et al. 2010;Dawson et al. 2012;Borsato et al. 2014), but orbiting mostly faint stars, for which detailed follow-up observations are very challenging. 
Due to their relatively low equilibrium temperatures (T eq < 1000 K), transiting warm giants are important objects for characterizing the internal structure of extrasolar giant planets since their atmospheres are not subject to the yet unknown mechanisms that inflate the radii of typical hot Jupiters (for a review see Fortney & Nettelmann 2010). For warm giants, standard models of planetary structure can be used to infer their internal composition from mass and radii measurements (e.g., Thorngren et al. 2016). In this work we present the discovery of an eccentric warm giant planet orbiting a bright star, having physical parameters similar to those of Saturn. This discovery was made in the context of the K2CL collaboration, which has discovered a number of planetary systems using K2 data (Brahm et al. 2016b;Espinoza et al. 2017;Jones et al. 2017;Giles et al. 2018;Soto et al. 2018;Brahm et al. 2018a,b). Observations of campaign 15 (field centered at RA=15:34:28 and DEC=-20:04:44) of the K2 mission (Howell et al. 2014) took place between August 23 and November 20 of 2017. The data of K2 campaign 15 was released on March 2018. We followed the steps described in previous K2CL discoveries to process the light curves and identify transiting planet candidates. Briefly, the K2 light curves for Campaign 15 were detrended using our implementation of the EVEREST algorithm (Luger et al. 2016), and a Box-Least-Squares (BLS; Kovács et al. 2002) algorithm was used to find candidate boxshaped signals. The candidates that showed power above the noise level were then visually inspected to reject evident eclipsing binary systems and/or variable stars. We identified 23 candidates in this field. Among those candidates, EPIC 249451861 stood out as a high priority candidate for follow-up due to its relative long period, deep flat-bottomed transits, and bright host star (V = 11.4 mag). The detrended light curves of the six transits observed for EPIC 249451861 by K2 are displayed in Figure 1. Spectroscopy We obtained 52 R=48000 spectra between March and July of 2018 using the FEROS spectrograph (Kaufer et al. 1999) mounted on the 2.2 MPG telescope in La Silla observatory. Each spectrum achieved a signal-tonoise ratio of ≈ 90 per spectral resolution element. The instrumental drift was determined via comparison with a simultaneous fiber illuminated with a ThAr+Ne lamp. We obtained additionally 25 R=115000 spectra between March and August of 2018 using the HARPS spectrograph (Mayor et al. 2003). Typical signal-to-noise ratio for these spectra ranged between 30 and 50 per spectral resolution element. Both FEROS and HARPS data were processed with the CERES suite of echelle pipelines (Brahm et al. 2017a), which produce radial velocities and bisector spans in addition to reduced spectra. Radial velocities and bisector spans are presented in Table 3 with their corresponding uncertainties, and the radial velocities are displayed as a function of time in Figure 2. No large amplitude variations were identified which could be associated with eclipsing binary scenarios for the EPIC 249451861 system and no additional stellar components were evident in the spectra. The radial velocities present a time correlated variation in phase with the photometric ephemeris, with an amplitude consistent with the one expected to be produced by a giant planet. We find no correlation between the radial velocities and the bisector spans (95% confidence intervals for the Pearson coefficient are [−0.19, 0.21], see Figure 3). 
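The text does not spell out how that 95% confidence interval on the Pearson coefficient was obtained; a simple and commonly used choice is a pairwise bootstrap, sketched below in Python. The function and variable names are illustrative and are not taken from the CERES pipeline or any code used in the paper.

import numpy as np
from scipy.stats import pearsonr

def pearson_bootstrap_ci(rv, bis, n_boot=10000, level=0.95, seed=42):
    """Bootstrap confidence interval for the Pearson correlation between
    radial velocities and bisector spans (no correlation is expected for a
    genuine planetary signal)."""
    rng = np.random.default_rng(seed)
    rv, bis = np.asarray(rv, dtype=float), np.asarray(bis, dtype=float)
    r_obs = pearsonr(rv, bis)[0]
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(rv), len(rv))    # resample (RV, BIS) pairs
        boot[i] = pearsonr(rv[idx], bis[idx])[0]
    lo, hi = np.percentile(boot, [50 * (1 - level), 50 * (1 + level)])
    return r_obs, (lo, hi)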
Ground-based photometry On July 14 of 2018 we observed the primary transit of EPIC 249451861 with the Chilean-Hungarian Automated Telescope (CHAT), installed at Las Campanas Observatory, Chile. CHAT is a newly commissioned 0.7m telescope, built by members of the HATSouth (Bakos et al. 2013) team, and dedicated to the followup of transiting exoplanets. A more detailed account of the CHAT facility will be published at a future date (Jordán et al 2018, in prep 1 ). Observations were obtained in the Sloan i' band and the adopted exposure time was of 53 s per image, resulting in a peak pixel flux for EPIC 249451861 of ≈ 45000 ADU during the whole sequence. The observations covered a fraction of the bottom part of the transit and the egress (see Figure 5). The same event was also monitored by one telescope of the Las Cumbres Observatory 1m network (Brown et al. 2013) at Cerro Tololo Inter-American Observatory, Chile. Observations were obtained with the Sinistro camera with 2mm of defocus in the Sloan i band. The adopted exposure time for the 88 observations taken was 60 s, and reduced images were obtained with the standard Las Cumbres Observatory pipeline (BANZAI pipeline). The light curves for CHAT and the Las Cumbres 1m telescope were produced from the reduced images using a dedicated pipeline (Espinoza et al 2018, in prep). The light curves were detrended by describing the systematic trends as a Gaussian Process with an expo-nential squared kernel depending on time, airmass and centroid position and whose parameters are estimated simultaneously with those of the transit. A photometric jitter term is also included; this parameter is passed on as a fixed parameter in the final global analysis that determines the planetary parameters ( § 3.2). GAIA DR2 Observations of EPIC 249451861 by GAIA were reported in DR2 (Gaia Collaboration et al. 2016. From GAIA DR2, EPIC 249451861 has a parallax of 6.29 ± 0.05 mas, an effective temperature of T eff = 4994 ± 80 K and a radius of R = 1.18 ± 0.04 R . We used the observed parallax for EPIC 249451861 measured by GAIA for estimating a more precise value of R by combining it with the atmospheric parameters obtained from the spectra as described in § 3. Two additional sources to EPIC 249451861 are identified by GAIA inside the adopted K2 aperture (≈ 12 ). However, both stars are too faint (∆G > 7.8 mag) to produce any significant effect on the planetary and stellar parameters found in § 3. The radial velocity variations in-phase with the transit signal, which are caused by EPIC 249451861, confirm that the transit is not caused by a blended stellar eclipsing binary on one of the companions. Stellar parameters As in previous K2CL discoveries we estimated the atmospheric parameters of the host star by comparing the co-added high resolution spectrum to a grid of synthetic models through the ZASPE code (Brahm et al. 2017b). In particular, for EPIC 249451861 we used the co-added FEROS spectra, because they provide the higher signalto-noise ratio spectra, and because the synthetic grid of models used by ZASPE was empirically calibrated using FEROS spectra of standard stars. Briefly, ZASPE performs an iterative search of the optimal model through scatter plot using data from our spectroscopic observations of EPIC 249451861. We find that the data is consistent with no correlation. χ 2 minimization on the spectral zones that are most sensitive to changes in the atmospheric parameters. 
The models with specific values of atmospheric parameters are generated via tri-linear interpolation of a precomputed grid generated using the ATLAS9 models (Castelli & Kurucz 2004). The interpolated model is then degraded to match the spectrograph resolution by convolving it with a Gaussian kernel that includes the instrumental resolution of the observed spectrum and an assumed macroturbulence value given by the relation presented in Valenti & Fischer (2005). The spectrum is also convolved with a rotational kernel that depends on v sin i, which is considered as a free parameter. The uncertainties in the estimated parameters are obtained from Monte Carlo simulations that consider that the principal source of error comes from the systematic mismatch between the optimal model and the data, which in turn arises from poorly constrained parameters of the atomic transitions and possible deviations from solar abundances. We obtained the stellar atmospheric parameters for EPIC 249451861 summarized in Table 1, including T_eff = 5673 ± 75 K and [Fe/H] = 0.20 ± 0.05. The T_eff value obtained with ZASPE is significantly different from that reported by GAIA DR2, but is consistent with that of the K2 input catalog (Huber et al. 2016). The stellar radius is computed from the GAIA parallax measurement, the available photometry, and the atmospheric parameters. As in Brahm et al. (2018b), we used a BT-Settl-CIFIST spectral energy distribution model (Baraffe et al. 2015) with the atmospheric parameters derived with ZASPE to generate a set of synthetic magnitudes at the distance computed from the GAIA parallax. These magnitudes are compared to those presented in Table 1 for a given value of R∗. We also consider an extinction coefficient A_V in our modeling, which affects the synthetic magnitudes through the prescription of Cardelli et al. (1989). We explore the parameter space for R∗ and A_V using the emcee package (Foreman-Mackey et al. 2013), with uniform priors on both parameters. We found that EPIC 249451861 has a radius of R∗ = 1.083 ± 0.008 R⊙ and a reddening of A_V = 0.54 ± 0.02 mag, which is consistent with what is reported by GAIA DR2. Finally, the stellar mass and evolutionary stage for EPIC 249451861 are obtained by comparing the estimate of R∗ and the spectroscopic T_eff with the predictions of the Yonsei-Yale evolutionary models (Yi et al. 2001). The mass and age of EPIC 249451861 are M∗ = 1.036 ± 0.033 M⊙ and 5.6 ± 1.6 Gyr (see Figure 4), similar to those of the Sun.

Figure 4. Yonsei-Yale isochrones for the metallicity of EPIC 249451861 in the T_eff-R∗ plane. From left to right the isochrones correspond to 1, 3, 5, 7 and 9 Gyr. The position of EPIC 249451861 is at the center of the blue shaded region, which marks the 3σ confidence region for T_eff and R∗.

The stellar parameters we adopted for EPIC 249451861 are summarized in Table 1.

Global modeling

In order to determine the orbital and transit parameters of the EPIC 249451861b system we performed a joint analysis of the detrended K2 photometry, the follow-up photometry, and the radial velocities. As in previous planet discoveries of the K2CL collaboration, we used the exonailer code, which is described in detail in Espinoza et al. (2016). Briefly, we model the transit light curves using the batman package (Kreidberg 2015), taking into account the effect on the transit shape produced by the long integration time of the long-cadence K2 data (Kipping 2010).
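For illustration, a transit model of the kind just described can be generated with the public batman package in a few lines, including the supersampling needed for the roughly 30 minute K2 long-cadence integration and a quadratic limb-darkening law expressed through the (q1, q2) parametrization of Kipping (2013). This is a sketch assuming batman is installed; the numerical values below are placeholders for illustration (only the period and eccentricity are taken from this paper), not the fitted parameters of the system.

import numpy as np
import batman

# Illustrative (not fitted) parameters for a warm Saturn on an eccentric orbit.
q1, q2 = 0.4, 0.3                           # Kipping (2013) sampling parameters
u1 = 2.0 * np.sqrt(q1) * q2                 # map (q1, q2) -> quadratic coefficients
u2 = np.sqrt(q1) * (1.0 - 2.0 * q2)

params = batman.TransitParams()
params.t0 = 0.0                             # mid-transit time [days]
params.per = 14.893291                      # orbital period [days]
params.rp = 0.08                            # Rp/R* (placeholder)
params.a = 24.0                             # a/R*  (placeholder)
params.inc = 88.5                           # inclination [deg] (placeholder)
params.ecc = 0.476
params.w = 90.0                             # argument of periastron [deg] (placeholder)
params.limb_dark = "quadratic"
params.u = [u1, u2]

t = np.linspace(-0.3, 0.3, 1000)
# Supersample to account for the ~29.4 min K2 long-cadence integration time.
m = batman.TransitModel(params, t, supersample_factor=7, exp_time=29.4 / 60.0 / 24.0)
flux = m.light_curve(params)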
To avoid systematic biases in the determination of the transit parameters we considered the limb-darkening coefficients as additional free parameters in the transit modeling (Espinoza & Jordán 2015), with the complexity of the limb-darkening law chosen following the criteria presented in Espinoza & Jordán (2016). In our case, we select the quadratic limb-darkening law, whose coefficients were fit using the uninformative sampling technique of Kipping (2013). We also include a photometric jitter parameter for the K2 data, which allows us to estimate the level of stellar noise in the light curve. The radial velocities are modeled with the radvel package (Fulton et al. 2018), where we considered systemic velocity and jitter factors for the data of each spectrograph. We use the stellar density estimated in our stellar modeling as an extra "data point" in our global fit, as described in Brahm et al. (2018a). Briefly, there is a Gaussian term in the likelihood that compares the stellar density implied by a/R∗ and P through Newton's version of Kepler's third law with ρ∗ and σ_ρ∗, the mean stellar density and its standard deviation, respectively, derived from our stellar analysis. In essence, because the period P is tightly constrained by the observed periodic transits, this extra term puts a strong constraint on a/R∗, which in turn helps to extract information about the eccentricity e and argument of periastron ω from the duration of the transit. Resulting planet parameters are set out in Table 2, the best-fit orbit solution in Figures 2 and 6, and the best-fit light curves in Figure 5.

DISCUSSION

By combining data from the Kepler K2 mission and ground based photometry and spectroscopy, we have confirmed the planetary nature of a P = 14.9 d candidate around the V = 11.4 mag G-type star EPIC 249451861. We found that the physical parameters of EPIC 249451861b (M_P = 0.315 ± 0.027 M_J, R_P = 0.847 ± 0.013 R_J) are consistent with those of Saturn. The non-inflated structure of EPIC 249451861b is expected given its relatively low time-averaged equilibrium temperature of T_eq = 808 ± 8 K. In Figure 7 the mass and radius of EPIC 249451861b are compared to those for the full population of transiting planets with parameters measured to a precision of 20% or better. Two other transiting planets, orbiting fainter stars, that share similar structural properties to EPIC 249451861b are HAT-P-38b (Sato et al. 2012) and HATS-20b (Bhatti et al. 2016), which have equilibrium temperatures that are higher but relatively close to the T_eq ≈ 1000 K limit below which the inflation mechanism of hot Jupiters does not play a significant role (Kovács et al. 2010; Demory & Seager 2011). By using the simple planet structural models of Fortney et al. (2007) we find that the observed properties of EPIC 249451861b are consistent with having a solid core of M_c = 31 ± 4 M⊕. However, models that consider the presence of solid material in the envelope of the planet are required to obtain a more reliable estimate for the heavy element content of EPIC 249451861b (e.g., Thorngren et al. 2016). The numerous radial velocity measurements obtained for the EPIC 249451861 system allow us to constrain the eccentricity of the planet to be e = 0.478 ± 0.025. Even though EPIC 249451861b is among the most eccentric extrasolar planets to have a period shorter than 50 days, its periastron distance is not small enough to cause a significant migration by tidal interactions throughout the main sequence lifetime of the host star.
Specifically, by using the equations of Jackson et al. (2009), we find that in the absence of external sources of gravitational interaction, EPIC 249451861b should have had an eccentricity of e ≈ 0.65 and a semimajor axis of a ≈ 0.15 AU when the system was 0.1 Gyr old. Under the same assumptions, we expect that EPIC 249451861b would be engulfed by its host star at an age of ≈ 12 Gyr, before being able to reach full circularization at a distance of a ≈ 0.1 AU. These orbital properties of EPIC 249451861b, and those of the majority of eccentric warm giants, are not easy to explain. If EPIC 249451861b was formed in situ (Huang et al. 2016) at 0.15 AU or migrated to this position via interactions with the protoplanetary disc (Lin & Ida 1997), its eccentricity could have been excited by the influence of another massive object in the system after disc dispersal. However, planet-planet scattering (Ford & Rasio 2008) at these close-in orbits generally produces planet collisions rather than eccentricity excitation (Petrovich et al. 2014). An alternative proposition for the existence of these eccentric systems is that they are subject to secular gravitational interactions produced by another distant planet or star in the system (Rasio & Ford 1996), with the planet experiencing long term cyclic variations in its eccentricity and spin-orbit angle. In this scenario, the planet migrates by tidal interactions only during the high eccentricity stages, but it is usually found with moderate eccentricities. Further observations of the EPIC 249451861 system could help support this mechanism as responsible for its relatively high eccentricity, particularly given that Petrovich & Tremaine (2016) conclude that high-eccentricity migration excited by an outer planetary companion can account for most of the warm giants with e > 0.4. Specifically, long term radial velocity monitoring and the search for transit timing variations could be used to detect the relatively close companions to migrating warm Jupiters proposed by Dong et al. (2014). Future astrometric searches with GAIA could also be used to find companions and infer the mutual inclination between both orbits, which is predicted to be high (Anderson & Lai 2017). Finally, it is worth noting that an important fraction of the transiting warm giants amenable to detailed characterization (J < 11 mag) have been discovered in the last couple of years thanks to the K2 mission (see Figure 8). The combination of relatively long observing campaigns per field and the increased number of fields monitored has allowed the discovery and dynamical characterization of several warm giant planets with data from the K2 mission (see Figure 8; Sinukoff et al. 2016; Smith et al. 2017; Barragán et al. 2017; Shporer et al. 2017; Brahm et al. 2018a; Yu et al. 2018; Brahm et al. 2018b; Johnson et al. 2018). While not particularly designed to discover warm giants, the TESS mission (Ricker et al. 2015) is expected to discover ≈ 120 additional warm giants with R_P > 4 R⊕ and an incident flux F < 150 F⊕, where F⊕ is the flux incident on the Earth. This work made use of CERES (Brahm et al. 2017a; Jordán et al. 2014), ZASPE (Brahm et al. 2017b, 2015), and radvel (Fulton et al. 2018).
2018-12-21T13:37:40.000Z
2018-09-24T00:00:00.000
{ "year": 2019, "sha1": "1ed1a72f150cccbe6e10227bdc05528f6f0d174d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1809.08879", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1ed1a72f150cccbe6e10227bdc05528f6f0d174d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
266896272
pes2o/s2orc
v3-fos-license
effects of cohesion analysis-Based instructions (caBi) on english as Foreign language (teFl) Students’ reading Comprehension and Self-Efficacy This study examined effects of CABI on grade 11 EFL students’ reading comprehension and self-efficacy using reading self-efficacy as mediating variable at Azezo Secondary School, Gondar City Administration. The study employed a quasi-experimental design with quantitative approach. Thus, intact groups were selected using simple random sampling technique, and one out of these groups was assigned as an experimental group (n=52) while the other as a control group (=52) by lot; both of them were selected using lot method. Then, the experimental group was taught using CABI, whereas the control one was taught using conventional way of teaching reading. In order to collect the data, reading comprehension test and reading self-efficacy questionnaire were used as data collecting instruments. The data were analyzed using chi square, independent samples t-test and SEM. The results of the study indicated that there was a direct significant difference (B = 1.781, CR = 5.844 (>±1.96), p < .05) between the control and experimental group students’ reading comprehension posttest scores in favor of the experimental group participants. Besides, the result showed that there is significant effect (β = .533, CR = 4.630 (>±1.96), p < .05) of CABI on experimental group students’ reading self-efficacy beliefs posttest score in comparison to the control one. In the same vein, the result (β = .825, CR = 3.855(>±1.96), p < .05) unveil that CABI had positive effect on experimental group participants’ reading comprehension posttest score through reading self-efficacy compared to the control group students. Hence, the instruction has a positive effect on reading comprehension both directly and indirectly (through reading self-efficacy). This implies that reading self-efficacy partially mediates the causal relationship between CABI and reading comprehension. introduction Reading is a fundamental skill in helping EFL learners to attain a high level of academic success since knowledge acquisition is possible mostly through it.As a result, learners should struggle to discover effective ways of reading to developing their reading skills that will empower them to be successful in their academic undertakings (Staller & Grabe, 2002;Koda, 2004Koda, , 2005;;Nuttall, 2005).In spite of its significance, it is common that EFL learners in the Ethiopian context face problems in comprehending reading texts effectively as it is complex processes that involve continual extraction and incremental integration of text information.Different research conducted on this area. For instance, studies which were carried out on Early Grade Reading Assessment in 2010, 2014 and 2018 gauged that the achievements of students is directly related to their reading ability.These studies revealed that most of the students did not meet the Minimum Learning Competencies (MLC) of the MoE in terms of literacy.Besides, empirical studies consistently revealed that secondary and preparatory students lacked basic reading skills in the target language, in fact, in a deteriorating trend (MoE, 2018).According to these studies, the strategy used in teaching reading is one among the problems students encountered. 
In line with this, as to the researchers' experiential knowledge of teaching reading skills both at preparatory school, and university level, it has been realized that learners often fail in comprehending texts effectively.Besides, many EFL teachers have complained that most of their students were poor readers, (Moges, 2011;Desta, 2013;Abera, 2014;Abatyihun, 2018;Berhanu, 2019;and Tafere, 2019).This implies that students' reading ability is below what is expected of them.Therefore, one of the reasons, probably, would be because learners' are not taught through analysis of cohesive devices which are used in the text. Likewise, local studies with descriptive survey design, such as, Mulu (2007) and Abera (2014) found out that lack of understanding of linguistic elements and the absence of CABI are among the difficulties that impede EFL learners' reading comprehension achievement.Hence, these might be among the cause for students to perform poorly in reading activities. In addition to these studies, the researchers did preliminary study, and conducted classroom observation on Grade 11 students at Azezo Secondary School, which was selected using simple random sampling technique.In this school, there were 12 sections in 2020/21 academic year.Among these sections, the researchers observed three classes using simple random sampling technique.Before students went through the reading text, they had been instructed to discuss the title of the passage, and some general reading activities printed in the textbook to predict what the text would be about.Then, they went through the reading text.Finally, they were told by their teacher to do the reading comprehension activities.This means, CABI is not practiced in reading classes. However, contemporary literature show that successful reading comprehension necessitates a set of linguistic knowledge, and the skills which can help to utilize the knowledge for analyzing textual meaning (Schmidt, 1994and Grabe, 2005, 2009).In this respect, one of the impeding problems in reading comprehension is that second or foreign language learners usually focus on the mere analysis of single words or sentences, whereas reading has a communicative function and the whole text and context must be regarded cumulatively which is possible by interpreting discourse semantics created through cohesion (McCarthy, 1991;Matthiessen & Slade, 2002& Wenquan, 2009).Thus, it can be argued that learners should be aware of practicing reading through CABI so that they might be able to comprehend a given text effectively. On the other hand, self-efficacy, the belief that students can complete a specific learning task effectively, is also of vital importance for students studying English as a foreign language.This is because self-efficacy determines how learners feel, think, motivate themselves and behave (Bandura, 1994).Therefore, efficacy is one of the concerns to foreign language learners on which students should put necessary efforts to complete a task persistently.In this regard, scholars, such as Wigfield & Guthrie (1997) and Wang & Guthrie (2004) agreed that students' efficacy belief is crucial factor in influencing their reading comprehension.As a result, it appears indispensable to examine various reading strategies that may enhance learners' self-efficacy beliefs on reading comprehension.With respect to this, studies, such as Guthrie et al. (2009); Guthrie et al. 
(2013); Piercey (2013) indicated that students' perceived self-efficacy is vital for their reading comprehension achievement.In other words, students' reading comprehension is determined by their level of self-efficacy.However, this concern has not been tested empirically in related to the practice of CABI instruction in EFL context.In the same vein, as to the researchers' teaching experiences, most of grade 11 students considered the reading passages difficult to be understood.This might be due to the fact that learners' level of efficacy on their reading ability could have been low.Furthermore, few local empirical studies were conducted on learners' efficacy beliefs, but their objective is not in line with this study.For example, Niguse (2019) examined effects of integrated reading-and-writing practices on grade 11 students' self-efficacy about their reading comprehension, and about their summary writing employing quasi-experimental study with pretest-posttest design.Zelalem (2019) did a study employing the same design, but aiming at examining effects of formative assessment practices on first year university students' self-efficacy in writing a composition.Hence, the former found that the treatment had a positive effect on the respective skills; whereas, the latter showed that the treatment had negative effect on writing a composition.However, these studies did not examine whether the reading efficacy of EFL students can be enhanced through CABI.Thus, teaching reading through analysis of cohesive devices is expected to fill the gap so that timely remedial measures can be taken. In addition to this, reading self-efficacy is also used as a mediating variable between causal relationship of CABI and reading comprehension.Studies, which used reading self-efficacy as mediating variable between dependent and independent variables, showed inconsistent results.For instance, Shehzad et al. (2018) conducted a study on the relationship of self-efficacy sources and metacognitive reading strategies using reading self-efficacy beliefs as mediating variable.Similarly, the study done by Shehzad et al. (2019) aimed at identifying the association between self-efficacy sources and reading comprehension employing reading self-efficacy beliefs as a mediating variable.The result indicated that reading self-efficacy beliefs significantly mediated self-efficacy sources and reading strategies and comprehension respectively.Moreover, Endris (2017) examined effects of rhetoric structure instruction on grade six first language (Amharic) students' reading comprehension and mediating role of reading self-efficacy between the strategy and reading comprehension.In addition, Lau (2009) conducted a research to examine the mediating role of reading self-efficacy between students' reading instruction and reading amount of junior and senior secondary school students in a Chinese educational context.The result of these studies shows that reading self-efficacy did not mediate rhetoric structure-based instruction and reading comprehension, and reading instruction and reading amount accordingly.Therefore, because the result of the aforementioned studies is inconsistent, this study intends to examine if reading self-efficacy mediates causal relationship between CABI and reading comprehension. When it comes to effect of CABI on students' reading comprehension, there are some global studies conducted using quasi-experimental with pretest and posttest design.For example, Aidonlou et al. 
(2012); Saljooghian (2012) and Ali and Shakoori (2014) examined the effect of CABI on students' reading comprehension.To this end, the results of these studies showed that CABI had a significant effect on learners' level of reading comprehension.Conversely, other studies, such as Al-Surmi (2011) and Wilawan (2011) revealed that students in the experimental group did not outperform the control one. Understandably, the findings of the aforementioned studies are inconclusive.Along with this, studies done by Al-Surmi (2011) and Wilawan (2011) do not appear consistent with recent reading theories which assert that the analysis of cohesive features of the reading text is vital for improving EFL learners' level of comprehension of texts (Celce-Murcia and Olshtain, 2000;Kintsch, 2005).In this regard, it is more appropriate to examine if analysis of cohesive features of the text does have effect on EFL students' reading comprehension. More specifically, in Ethiopian context, as far as the researchers' reading is concerned, there is a research article done by Hawa (2020) aimed at examining effects of discourse analysis-informed instruction on developing grade 10 learners' reading comprehension in Woldia Millennium Secondary School.This study employed quasi-experimental design with quantitative approach.The findings of this study revealed that discourse analysisinformed reading instruction significantly improved high school students' reading skills.However, in the first place, this study did not examine whether students' reading efficacy beliefs would be enhanced as a result of using discourse analysis instruction.Moreover, the aforementioned study did not examine the mediating role of reading self-efficacy between the causal relationship of independent and dependent variables in which this study filled as a gap. On the other hand, Niguse (2019); Abdurahman (2019) & Meseret (2019) conducted their dissertation papers.For instance, Abdurahman (2019) did a quasi-experimental design study aiming at investigating effects of extensive reading on grade 8 learners' reading comprehension and attitudes.In the same way, Niguse (2019) conducted a study with the same research design to examine effects of integrated reading-and-writing practices on grade 11 learners' performance and self-efficacy of reading comprehension and summary writing.In addition, Meseret (2019) conducted a quasi-experimental one group time series design study to determine effects of discourse markers use instruction on second year English language university students argumentative essay writing in process writing approach.The results of the above studies showed that the treatment had significant effect on the respective skills. However, none of the aforementioned studies examined if learners' reading comprehension and their reading self-efficacy would be impacted as a result of practicing reading through CABI, and the mediating role of reading self-efficacy between causal relationship of the independent and dependent variables.Concerning this research area, Wu (2017) recommends that rigorous studies need to be conducted on implementing the instruction in reading classes.Hence, it is very vital and inspirational for the researchers to focus on the issue to fill the existing gap of the studies. 
research Hypothesis The following alternative research hypotheses are formulated: H1: CABI improves students' reading comprehension. H1: CABI increases students' perceived efficacy about their reading comprehension. H1: CABI enhances students' reading comprehension through reading self-efficacy. Methods and techniques This study employed a quasi-experimental design because the natural setting of the school does not allow random assignment, so students had to participate as intact groups (Pallant, 2010; Creswell, 2012; Tabachnick & Fidell, 2013). The study likewise employed a quantitative research approach because the data were collected through quantitative sources (a questionnaire and a reading comprehension test). The quantitative data were collected before and after the treatment to examine whether CABI has an effect on learners' reading comprehension and on their self-efficacy. This approach also allows the researchers to examine the mediating role of reading self-efficacy in the causal relationship between CABI and reading comprehension. research Participants and Sampling techniques In Gondar city administration, there are seven public secondary schools. Among them, Azezo Secondary School was selected using a simple random sampling technique, since this technique gives each school an equal chance of being selected. As this is a quasi-experimental study, it constituted two groups (control and experimental). In a quasi-experimental study, randomization of existing sections (intact groups) is preferred over individual randomization, because randomly assigning students to the two groups would disrupt existing classroom learning (Creswell, 2012). As a result, among the 12 sections, two intact groups (sections 9 and 12) were selected using simple random sampling. After the two sections were selected, one was assigned as the experimental group (n = 52) and the other as the control group (n = 52) using group randomization. Data collection instruments In order to achieve the main objective of the study, a reading comprehension test and a reading self-efficacy questionnaire were employed to examine learners' level of comprehension and their reading efficacy beliefs, respectively. For the reading comprehension test, two equivalent versions were adapted from literature sources by the researchers and administered before and after the intervention. Each test contains 30 multiple-choice items designed to assess students' skills in making inferences, resolving references, identifying the main ideas of paragraphs, and predicting the content of the reading text. The items were designed around the three main levels of comprehension: literal, interpretive (inferential), and critical. The questionnaire was prepared to determine whether there was a significant improvement in the reading self-efficacy of the experimental and control groups. It was adapted from Henk et al. (2012) and Kosar et al.
(2022). The questionnaire contains 34 objective-type items scaled from (6) = definitely true of me, (5) = mostly true of me, (4) = a little bit true of me, (3) = a little bit not true of me, (2) = mostly not true of me, to (1) = definitely not true of me. The questionnaire was then translated into the students' first language (Amharic), and the equivalence of the two versions (English and Amharic) was checked by two colleagues who hold PhD degrees in TEFL. Furthermore, the validity and reliability of the instruments were checked. For validity, advisors and colleagues commented on the tests and the questionnaire, and the researchers then made the necessary deletions, modifications, and additions. As for reliability, the researchers checked the reliability of the reading comprehension tests: the Pearson correlation between the two equivalent forms was found to be .75, indicating high reliability between the two versions. The internal consistency of the Amharic version of the reading self-efficacy questionnaire was also checked; the coefficient was .941, indicating that the questionnaire is highly reliable. More importantly, the KMO test and confirmatory factor analysis (CFA) were also computed. The former was used to check how well suited the data were for CFA, and the latter was used to indicate the relevance of the variables in explaining the construct. Prior to CFA, the KMO measure was computed as .80, and Bartlett's Test of Sphericity read x² = 1910.155, df = 561, p = .001 (p < .05), indicating adequate sampling at the chosen significance threshold; the analysis was then run on the reading self-efficacy questionnaire. The CFA loading coefficients were greater than 0.4 for nearly every item, which is acceptable. In line with this, the composite reliability (CR) of the questionnaire items was computed as .924, indicating that the items are reliable. The AVE and discriminant validity were also checked, and the results were 0.728 and 0.853 respectively, showing that the questionnaire was valid.
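As an illustration of the reliability checks described above, the short sketch below shows how an alternate-forms coefficient (the Pearson correlation between two test versions) and an internal-consistency coefficient (Cronbach's alpha) can be computed in Python. The score arrays are hypothetical placeholders rather than the study's data, and the study itself used statistical packages such as SPSS, so this is only a minimal sketch of the same calculations.

```python
import numpy as np
from scipy import stats

# Hypothetical scores: 10 students on the two equivalent 30-item test forms
form_a = np.array([21, 18, 25, 14, 22, 19, 27, 16, 20, 23])
form_b = np.array([19, 17, 26, 15, 21, 20, 25, 18, 19, 24])

# Alternate-forms reliability: Pearson correlation between the two versions
r, p = stats.pearsonr(form_a, form_b)
print(f"correlation between forms: r = {r:.2f} (p = {p:.3f})")

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of ratings."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 6-point Likert ratings: 8 respondents x 5 questionnaire items
ratings = np.array([
    [5, 6, 5, 4, 5],
    [4, 4, 5, 4, 4],
    [6, 5, 6, 6, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [6, 6, 5, 6, 6],
    [4, 4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")
```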
the treatment and Data collection Procedures Prior to the treatment, experimental group students were informed about the purpose of the study.In addition, they were informed on how to manage photocopied materials during reading sessions.The treatment was given in the regular classroom, but only in reading lessons.As initial stage, lecture was provided for three periods on basic concepts of grammatical cohesion (reference, substitution, ellipsis and conjunction), and lexical cohesion (reiteration: simple repetition, complex repetition, simple paraphrase, complex paraphrase, homonym and superordinate, and collocation: ordered set, activity-related and elaborative collocation) in order to make students aware of cohesive devices.Each period lasts for 42 minutes.Then, in every reading session, the experimental group participants were familiarized with some types of cohesive devices, and got instruction through identification, analysis and use of cohesive devices in both grammatical and lexical cohesion.In other words, they were exposed to ranges of reading comprehension activities which were designed from students' reading texts by focusing on the target cohesive devices and become obtainable with explicit examples.The instruction was given for two months (8 periods of reading session) during the regular class.Likewise, the control group students received the same load of instruction and practiced the reading passages for the same duration of time as the experimental group students.Nevertheless, the strategies used were different since the control group participants followed the usual instruction of teaching reading.To this end, they were assigned to read the passages and encouraged to do reading comprehension activities that are written in the textbook. In the case of data collection procedure, the researchers explained the purpose of the study to the school director and obtained an approval from him.The director recommended one English language teacher who could help the researchers in facilitating in the process of selecting the participants of the study.The researchers also gained consent from him to participate in the process.In doing so, the participants for the study were selected randomly, and requested to take pretests (reading comprehension and reading selfefficacy questionnaire) to gain baseline data.Then, the experimenter teacher was provided with both theoretical and practical training for four sessions (four hours) about the theory of CABI, and about how it could be practically implemented in reading classes.After that, the experimenter teacher implemented the instruction on the experimental group students in reading classes for eight weeks.In the meantime, the researcher observed the lessons in the experimental group to check effectiveness of its implementation.After the intervention was completed, posttests were administered for both groups in order to see effects of CABI in comparison with the control group reading comprehension and their self-efficacy about their reading comprehension. 
Data analysis Methods In this study, the data were analyzed by using inferential (independent samples t-test) statistics to check mean differences of the control and experimental group students' in their age, reading comprehension scores and their self-efficacy beliefs.Besides, chisquare test was used to check whether there was gender proportion in the control and experimental group students.To this end, the data were computed using the Statistical Package for Social Sciences (SPSS) version 23.Then, applying dummy coded (control group as 0, but experimental group as 1), SEM analysis utilizing AMOS, v. 20 was used to examine whether CABI had a direct effect on students' reading comprehension, and on their reading self-efficacy.Besides, it was also used to confirm whether reading self-efficacy play a mediating role in the causal relationship between CABI and reading comprehension.Likewise, bootstrapping (95% bias-corrected confidence interval) with 5,000 samples substitution was used to examine whether or not CABI had a significant effect on students' reading comprehension through reading self-efficacy.Prior to testing hypotheses, test assumptions such as normality, homogeneity, Levene's test, and linearity were checked and satisfied.Therefore, before performing the data analysis, evaluation of the general assumptions of parametric tests were made.Moreover, the data gained through the instruments fits the proposed model of this study (see Fig. 1). ethical considerations Ethical issues were considered in this study.A consent letter was gained from the department of English language and literature.Then, the researchers obtained permission from the selected school.After obtaining permission from the school, the researchers obtained consent from the participants. In this regard, the participants were assured that their participation in the study was voluntary and the information they provided would be confidential.The students were informed that they were participating in a research project.So, the aims, significance, and nature of the activities that they were completing as their course component was also elucidated.Moreover, both the control and experimental groups were treated to practice reading texts incorporated in their textbook.What makes them different in practicing reading, therefore, is the approach used; that is, the control group was treated using conventional way of teaching reading, while the experimental one was treated using CABI. Furthermore, prior to examining the mean differences between the control and experimental group participants' posttest scores, experimental group students suggested about the strategy employed.Thus, since participants' appraisal of the use of CABI was encouraging, it was also used with the control group participants with the same amount of load that the experimental students received. 
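As a rough illustration of the bootstrapped mediation analysis described in the data analysis methods above, the sketch below estimates the indirect effect of group membership on reading comprehension through reading self-efficacy using ordinary least squares and a percentile bootstrap. It is not a reproduction of the AMOS structural model: the data are simulated placeholders, the variable names are invented for the example, and a simple percentile interval is used where the study reports a bias-corrected one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated placeholder data: 104 students, dummy-coded group (0 = control, 1 = experimental)
n = 104
group = np.repeat([0, 1], n // 2)
rse = 3.5 + 0.5 * group + rng.normal(0, 0.6, n)              # reading self-efficacy (mediator)
rc = 9.0 + 1.5 * group + 0.8 * rse + rng.normal(0, 1.5, n)   # reading comprehension (outcome)

def indirect_effect(group, rse, rc):
    """Product-of-coefficients estimate of the indirect effect (a * b)."""
    # a-path: mediator regressed on group
    a = np.polyfit(group, rse, 1)[0]
    # b-path: outcome regressed on group and mediator; b is the mediator's coefficient
    X = np.column_stack([np.ones_like(group, dtype=float), group, rse])
    coef, *_ = np.linalg.lstsq(X, rc, rcond=None)
    b = coef[2]
    return a * b

point = indirect_effect(group, rse, rc)

# Percentile bootstrap with 5,000 resamples (the study reports a bias-corrected interval)
boot = []
for _ in range(5000):
    idx = rng.choice(n, size=n, replace=True)
    boot.append(indirect_effect(group[idx], rse[idx], rc[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
print("mediation supported" if lo > 0 or hi < 0 else "CI includes zero")
```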
results and Discussions This section presents discussions, analysis and findings of the study based on the research hypotheses.As discussed earlier, reading comprehension and self-efficacy tests (pretests and posttests) were administered in order to measure the students' reading comprehension, and their efficacy beliefs on reading.The pretests were computed to confirm whether the two groups had the same background in their reading comprehension, and in their self-efficacy scores or not by using independent samples t-test.On the other hand, the participants' reading comprehension posttest were computed to confirm whether there was significant difference between the control and experimental groups in their scores or not by using the same statistics. Therefore, since it was checked that there was a significant difference between the two groups in terms of their reading comprehension posttest scores in support of the experimental one, SEM analysis was computed to determine whether there was direct, indirect and both (direct-indirect) effects of CABI on students' reading comprehension or not.As depicted in Table 1, the mean score of the control and the experimental groups on the pretest is 9.500 and 9.923 respectively.Moreover, the pretest scores (t = 0.996, df = 102, p = 0.322 (>.05)) indicate that there was no significant difference in reading comprehension between the control and experimental groups before the treatment.Thus, it can be inferred that the two groups were not significantly different in their level of reading comprehension before the treatment. Similarly the result of the same Table revealed that the mean scores of the control and the experimental groups on reading self-efficacy are 4.287 and 4.049 respectively.The results indicate that the two groups have almost similar score in their reading self-efficacy beliefs before the intervention.In a similar vein, the pretest scores (t = 1.608, df = 102, p = 0.111 (>.05)) show that there was no significant difference between the control and experimental groups efficacy beliefs on reading before the treatment. 
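A minimal sketch of the baseline-equivalence check described above is given below; the pretest score vectors are placeholders invented for the example, not the study's raw data.

```python
from scipy import stats

# Placeholder pretest scores for the two intact groups
control_pre      = [9, 10, 8, 11, 9, 10, 9, 12, 8, 10]
experimental_pre = [10, 9, 11, 9, 10, 12, 8, 10, 9, 11]

# Independent samples t-test for baseline equivalence at alpha = .05
t_stat, p_value = stats.ttest_ind(control_pre, experimental_pre)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("groups comparable at baseline" if p_value > 0.05 else "groups differ at baseline")
```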
Furthermore, the gender and age distributions of the control and experimental group students were compared to determine whether they might influence the reading comprehension and self-efficacy posttest scores. Participants' gender proportion was checked using a chi-square test. The result (x² = 5.38, df = 1, p = .019) was significant, implying that this difference might influence the posttest results; gender was therefore entered as a covariate with the reading comprehension and self-efficacy posttest scores in the path analysis. The age comparison between the groups was carried out using an independent samples t-test. The result (t = 0.727, df = 108, p = 0.469) indicates that there was no significant difference between the control and experimental groups in terms of age, so participants' age was not used as a covariate in the SEM analysis of the posttest results. As can be seen in Table 2, the posttest reading comprehension mean scores of the control and experimental groups are 9.865 and 12.827, respectively, showing a difference between the two groups. The test of the posttest reading comprehension means (t = 6.578, df = 102, p = .001 < .05) indicates a statistically significant difference between the control and experimental groups in favor of the latter. The difference appears to have resulted from the CABI intervention implemented with the experimental group participants. Likewise, as can be seen in the same Table, the posttest mean scores of the control and experimental groups in self-efficacy on reading comprehension are 4.141 and 4.684, respectively, indicating a difference in favor of the experimental group. Similarly, the test of the posttest reading self-efficacy means (t = 4.259, df = 102, p = .001 < .05) confirmed a statistically significant difference between the two groups, again in favor of the experimental one. In general, since a significant difference between the groups in posttest reading comprehension in favor of the experimental group had been established, SEM analysis was computed using AMOS to determine whether CABI had direct, indirect, or both direct and indirect effects on students' reading comprehension. The result is reported in Figure 1 and Table 3. effect of CABI on reading comprehension As shown in Table 3, the difference in mean scores between the control and experimental groups on the reading comprehension posttest was statistically significant: the regression coefficient and critical ratio were 1.781 and 5.844 (> ±1.96), respectively, with p = .000, which is less than .05. This confirms a significant difference in the reading comprehension posttest means in favor of the experimental group. The result also shows that the confidence interval does not include zero (lower bound 1.181, upper bound 2.445), again confirming a significant difference in favor of the experimental group. In other words, there was a positive direct effect of CABI at the 95% confidence level.
The difference appears to have resulted due to cohesion analysisbased intervention conducted on experimental group participants.Therefore, the instruction has significant direct effect on experimental group students' reading comprehension posttest score.This implies that the instruction helped the students to improve their reading comprehension skills. Moreover, the result of this study supports the conclusions made from different studies, such as Bahrami's (1992); Akbarian's (1998); Degand's et al. (1999); Pérez and Macia's (2002) and Innajih's (2007) in that interpreting discourse semantics can enhance EFL learners' achievement in reading comprehension tasks, and it can play vital role in developing EFL learners' reading skills. Contrary to the recent and the aforementioned research findings, different studies revealed that analysis of cohesive devices did not have effect on students' reading comprehension.For instance, Al-Surmi (2011) and Wilawan (2011) reported that there were no significant differences between the control and the experimental groups students' reading comprehension scores.The findings of this study are not consistent not only with the findings of the current study, but also with the theoretical stance of different literature that argues about the theoretical relationship of variables of this study. For instance, in support of the findings of the current study, contemporary literature, such as Xu and Zhang (2015); Cui (2017); Rong (2017); Wu (2017); Pang (2019) and Fu (2020) state that interpreting connections of discourse semantics which is created through cohesion, improve EFL learners comprehension skills.In other words, analyzing grammatical and lexical devices of the reading passages helps students develop their level of reading comprehension. Effects of CABI on Reading Self-Efficacy As it is depicted in Table 3, the result shows that the control and experimental groups' efficacy beliefs mean score difference on their posttest was significant.In the Table , the result indicates that the critical ratio was found to be 4.630 (> ±1.96), and that the regression weight was calculated as .533with a significant level of p = .000(p < .05).The result, indicated in the table, also shows that there is no zero between the lower level (.288) and upper level (.758) confidence intervals. This implies that there was a significant difference in reading self-efficacy between the experimental and control groups, favoring the experimental one.This means, there was significant effect of CABI on experimental group participants' efficacy beliefs on their reading comprehension in comparison to the control group students at 5% confidence interval threshold. The result of this study supports the finding of the study done by Niguse (2019), who reported that integrated reading-and-writing instruction has significant effect on the students' reading self-efficacy.In a similar vein, this study also supports Balc's (2017), whose report showed that implementation of learning-style based activities have significant effect on students' reading comprehension skills, and their self-efficacy perceptions in EFL classes.From this, one can make the argument that use of effective reading strategies can promote students' reading self-efficacy. 
Similarly, the result of this study is consistent with different quasi-experimental design studies (Naseri & Zaferanieh, 2012& Tavakoli & Koosha, 2016) that confirmed experimental group participants show greater achievement in reading self-efficacy than students in the control one as a result of different strategies used in reading classes.This evidences that the use of effective reading strategies improve EFL students' reading self-efficacy.Besides, the finding of this study is also in agreement with the studies (Li & Wang, 2010& Shang, 2010) that unveiled there is a significant positive relationship between the use of reading strategies and perceptions of reading self-efficacy on reading.This implies that effective use of reading strategy enables EFL learners to be efficacious on their reading comprehension so that they are likely to develop their skills effectively. Furthermore, the finding of this study is in agreement with the literature, which view selfefficacy as determinant predictor of success and achievement.For instance, students with high level of self-efficacy can make greater efforts in practicing the required task, and they are more persistent than students with low self-efficacy (Bandura, 1994;Pejares, 2000& Schunk, 2003).This implies that self-efficacy influences students' emotional reactions on performing the required tasks.With respect to sociocognitive theory, learners' reading efficacy beliefs play a crucial role in learners' level of reading comprehension.In other words, according to this theory, students' reading self-efficacy influences their level of comprehension (Usher & Pajares, 2008& Guthrie et al., 2013).This means, learners with a high level of reading self-efficacy prefer to perform more challenging tasks, and they spend more time, and they make greater efforts to comprehend the text. On the other hand, in contrast to the result of this study, Yoğurtçu (2013) found that there was no significant relationship between students' reading comprehension, and their selfefficacy for low self-efficacious students.However, reading comprehension self-efficacy was associated with reading comprehension skills for high self-efficacious students.In connection to this, Asadi (2014) found that there was no significant connection between EFL students' reading comprehension, and their self-efficacy beliefs on reading. The inconsistency between the finding of the recent study and the findings of aforementioned studies concerning reading self-efficacy might be because of students' low level of background knowledge on their reading self-efficacy belief.As students are engaged in learning the English language in a foreign context, the students, who try to interpret the meaning of a text seeks gradual process in order to develop their comprehension skills, and their self-confidence.In other words, since they may develop their reading selfefficacy beliefs when they understand the text very effectively, the practices may require gradual process so as to let students to be more efficacious in comprehending the text. To this end, students may rate themselves as they are not efficacious in comprehending the reading text. 
Effect of CABI on Reading Comprehension through Reading Self-Efficacy In Table 3 above, the AMOS output shows the mean score difference between the control and experimental groups on the reading comprehension posttest through reading self-efficacy, that is, the indirect effect of CABI on students' reading comprehension. The Table shows that this difference was significant: the critical ratio was 3.855 (> ±1.96) and the regression weight was .825, with a significance level of p = .000 (p < .05). The confidence interval does not include zero (lower bound .447, upper bound 1.309), confirming a significant indirect difference in reading comprehension between the two groups in favor of the experimental one. In other words, there was a significant indirect effect of CABI on the experimental group participants' reading comprehension posttest compared with the control group at the 95% confidence level. Furthermore, since the strategy had a significant effect on reading comprehension both directly and indirectly (through reading self-efficacy), it can be concluded that reading self-efficacy partially mediated the relationship between CABI and reading comprehension. Accordingly, Shehzad et al. (2018) conducted a study on the relationship between self-efficacy sources and metacognitive reading strategies using reading self-efficacy beliefs as a mediating variable. The result revealed a significant relationship between self-efficacy sources and reading self-efficacy beliefs, and also between reading self-efficacy beliefs and metacognitive reading strategies. Similarly, Waleed et al. (2019) aimed at identifying the association between Bandura's four hypothesized self-efficacy sources and reading comprehension, employing reading self-efficacy beliefs as a mediating variable; the result indicates that reading self-efficacy beliefs mediated self-efficacy sources and reading comprehension. Other studies (2019), Pohl et al. (2020) and Rogowska et al. (2022) likewise indicate that self-efficacy acted as a mediating variable. This implies that reading self-efficacy can play a mediating role in the causal relationship between different variables and the reading strategies that can be employed in EFL classes. Thus, it seems possible to make the case that reading self-efficacy can be used as a predictor variable and that it mediates between the practices employed in reading classes and the development of students' reading comprehension.
This appears consistent with the theory claiming that the interest in self-efficacy could be attributed to the consistent claims by Bandura that judgments of capability a person brings to a specific task are strong predictors of the performance that results from that task, and it mediates the other determinants of that performance (Adeyemo, 2007).Moreover, the effect of the treatment will not have direct effect only on the outcome, but there will also be indirect effect.As a result, researchers (Imai et al., 2010) must rely on an additional assumption that the outcome may also be obtained indirectly that is through mediating variables.More importantly, social cognitive theory states that selfefficacy is a multidimensional construct that varies according to the domain of demands (Zimmerman, 2000), and it must be evaluated at a level that is specific to the outcome domain (Bandura, 1986;Pajares, 1996).In other words, since reading comprehension and self-efficacy are correlated each other, the kind of strategy that can be implemented in reading classes may promote students' level of comprehension as a result of their level of efficacy on their reading comprehension.On the basis of the results and discussions of the study, the next chapter deals with summary, conclusions and implications of the study. On the other hand, there are quasi-experimental design studies whose results are inconsistent with the result of the current study.For instance, Endris (2017) conducted PhD work entitled "effects of rhetoric structure instruction on grade six Amharic first language students' reading comprehension, mediating role of reading self-efficacy and motivation between the strategy and reading comprehension."The result of this study shows that reading self-efficacy did not mediate rhetoric structure-based instruction and reading comprehension.In addition, the study conducted by Lau (2009) in Chinese context indicates that self-efficacy did not mediate reading instruction and reading amount of junior and senior secondary school students.This might happen due to the fact that either the instructions did not suit the needs of the students or the students may not have explicit knowledge about the tenet of the strategies that these studies used. 
conclusions The findings of this study indicate that the implementation of CABI to develop grade 11 EFL students' reading comprehension and self-efficacy beliefs is encouraging. The strategy enhanced their level of comprehension and their self-efficacy beliefs about comprehending the text. It can therefore be deduced that interpreting discourse semantics, which arises through cohesion, cannot be ignored, because the meaning of a text depends on the cohesive devices used in it. In other words, to develop students' reading comprehension, the analysis of semantic links needs to be taken into account. Moreover, based on the findings of this study, it is possible to generalize that acquainting students with a strategy that helps them analyze the discourse features of reading passages enables them to develop their comprehension and their self-efficacy beliefs about reading. Likewise, the causal relationship between CABI and reading comprehension was found to be mediated by reading self-efficacy. The findings show that, beyond its direct effect, the strategy also has a significant effect on reading comprehension through reading self-efficacy, that is, indirectly. Therefore, it can be concluded that reading self-efficacy partially mediates the relationship between the strategy and reading comprehension. From this, one can argue that reading self-efficacy, as one of the affective variables, plays a vital role in mediating between the reading strategies used in EFL classes and students' reading comprehension. recommendations Based on the findings of this study, the researchers suggest that curriculum designers should incorporate procedures that enable students to analyze the grammatical and lexical devices of a text. They should also provide explicit procedures for cohesion analysis instruction in the students' reading texts, so that learners have the opportunity to identify, interpret, and analyze semantic links and thereby develop their reading skills and their efficacy beliefs about reading comprehension. In line with this, the findings imply that EFL teachers and students need to focus on how to identify, interpret, and analyze the cohesive devices of a text to foster reading comprehension and efficacy beliefs about reading. Since reading texts are made cohesive through grammatical and lexical devices, instruction should follow explicit procedures for analyzing those devices. This implies that pedagogical practice in grade 11 EFL reading classes should focus on the analysis of the semantic links of the text to develop students' comprehension skills and their efficacy beliefs about their reading comprehension.
Besides, implication of the findings of this study leads to recommend that researchers need to replicate this study adapting the data collection instruments and the training manual to various EFL contexts.Firstly, since this study might not represent all the secondary schools at large, future research should be done on more student participants in more secondary schools using the same materials conducted with this study.Then, other research works should examine effects of CABI on other aspects of language learning, such as speaking and listening in order to determine its effects on EFL setting at large.Thirdly, future research should investigate effects of the strategy on students' attitude on developing their level of reading comprehension, and the mediating role of it between the strategy and reading comprehension.Finally, future research needs to investigate teachers' practices, beliefs and challenges they face in fostering cohesive analysis-based instructions in reading classes. acknowledgments We are very much indebted to express our heartfelt gratitude to our colleagues who gave us valuable comments and suggestions for the betterment of this study.We are also grateful to Azezo Secondary School directors, English language teachers and students who participated in this study. Table 1 : Independent Samples Test: t-test Statistics for the Control and Experimental Groups on Reading Comprehension and Self-Efficacy Pretest Score Table 2 : Independent Samples Test: t-test Statistics for the Control and Experimental Groups on Reading Comprehension and Self-Efficacy Posttest Score Table 3 : Unstandardized Relative Regression Weights of Variables in the Structural Model CABI = Cohesion Analysis-Based Instruction; RC= Reading Comprehension; RSE= Reading Self-Efficacy
2024-01-10T16:14:44.578Z
2023-12-18T00:00:00.000
{ "year": 2023, "sha1": "c3af7d5ebb2e55601dec2c5acac3ec7480abfae5", "oa_license": "CCBYNC", "oa_url": "https://www.ajol.info/index.php/erjssh/article/view/261200/246566", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "846a008bdfd69bbe05d8dfe024db669cd86c89df", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
256805831
pes2o/s2orc
v3-fos-license
Analysis of the Results of the Pedagogical Experiment on the Integrated Analysis of the Average and Dispersions : Pedagogical scientists often need to process the results of a pedagogical experiment. However, not every scientist (especially in humanitarianism) has appropriate mathematical training, so statistical data processing is a problem for him. Scientists-pedagogues in Ukraine use various statistical methods to process the results of a pedagogical experiment and face the problem of cumbersome calculations and the accuracy of assessments. Therefore, we developed a method that is based on the correct mathematical apparatus, simplifies the processing of empirical data, and allows us to draw qualitative conclusions without the explicit use of mathematical apparatus. To simplify the statistical analysis of the results of the pedagogical experiment and the interpretation of the obtained data, the authors suggest using a spreadsheet and analyzing the data according to Student's and Fisher's criteria (comparing the average sample and its variance) and controlling intermediate indicators of the results of the pedagogical experiment. The method developed by the authors has an advantage compared to other methods: it is enough to analyze the pair "mean and variance" for the sample to conclude the significance of the differences in the control and experimental groups. The method has a simple implementation since almost every researcher has a spreadsheet processor on his computer. The method does not require a thorough knowledge of the statistics course. The method guarantees more reasonable conclusions (two criteria are used at once), which is important when conducting a pedagogical experiment. Literature Review Mathematical statistics has a significant number of criteria designed to test a variety of statistical hypotheses.As a rule, hypotheses concern either the law of data distribution in the sample (normal, binomial, etc.) or the numerical characteristics of the sample (means, variance, correlation, etc.) [15], which describes the use of criteria for the normal distribution law. There are also several non-parametric methods for statistical evaluation of results (Wilcoxon-Mann-Whitney test, Spearman test, etc.) [6]. The most developed theory is to test hypotheses about the numerical characteristics of the normal distribution law.Verification of averages in two samples is carried out using the t-test (Student) [5]. Pearson's, Student's, and Fisher's criteria are used to compare/contrast distributions in two samples (usually control and experimental groups) [12]. When evaluating production, medical, pedagogical, and other management decisions, hypotheses about the numerical characteristics of the sample of control and experimental groups are usually used.In scientific and pedagogical research, the average scores are often compared or the significance of changes in learning outcomes is investigated, for which the Student's criterion of assessment of averages or non-parametric criterion of signs is used [5,6].Sometimes, the authors stop at the analysis of the percentage distribution of individual characteristics of students [2,7,10] and do not conduct a statistical analysis of the results. The development and dissemination of digital technologies have become an effective lever in creating effective and convenient tools for statistical analysis of pedagogical research.O. Spirin, T. Novitska, and A. Yatsyshyn emphasize the importance of using electronic library databases in such research [21]. 
General approaches to evaluating the effectiveness of pedagogical research using information and digital technologies were studied by S. Novitsky.He analyzed the effectiveness of empirical and theoretical approaches in the context of monitoring and analysis of scientific and educational work [13]. M. Shyshkina analyzes the tools of computerization of statistical and analytical research [20].W. Rogers, H. Morris-Matthews, J. Romig, and E. Bettini propose to improve the method of observation in the educational process using information technology, in particular, to prove its reliability and validation [19]. Methods of empirical illustration of pedagogical research and methodological review of digital resources for their implementation are offered by A. Katrin [1]. Often scientists who do not have a deep mathematical education are guided by works where examples explain the peculiarities of testing the results of a pedagogical experiment on one or another criterion.Among such works we consider the most common:  work [4], which outlines a significant number of statistical methods that should be used in psychological and pedagogical research.The authors of G. Glass, J. Stanley, set the following goal: first, to teach them to understand research reports in scientific papers, provided they are familiar with the problem being studied; second, to learn to plan research and analyze the results using a reference book.Scientists considered the first goal to be the main one [4].Indeed, it is impossible to carry out own research, and the main thingis to understand the results of the received statistical characteristics.The book is a handy textbook for training future teachers in the non-humanitarian field of training;  work [4], which provides brief theoretical information on the use of the most common statistical criteria based on non-parametric methods of sample evaluation.The authors note that any presentation of a general theory of statistical hypothesis testing inevitably involves very serious mathematical training, which is not possessed by most research educators, and therefore consider a large number of typical studies and statistical criteria that should be used;  work [6], which describes typical cases of using statistical methods in pedagogical research.The author gives "recipes" for the use of statistical methods in typical cases of analysis of experimental data on the results of pedagogical research, provides an algorithm for selecting statistical criteria, and methods for determining statistical similarities and differences in characteristics of the studied objects.The work is designed for teachers-researchers, primarily for graduate students and applicants [6];  manual [5] contains methodological advice for young scientists on the organization of pedagogical experiments, the algorithm of their implementation, as well as offers options for statistical analysis of the results of pedagogical experiments. manual [15], which describes the use of criteria for the normal distribution law;  online resources for automation of calculations, for example, to determine the coefficient of linear pairwise correlation Pearson [9,17]. 
It should be noted that with the development of information technology, many computer programs have appeared in which developers provide tools to support statistical calculations. Among them are Statistica, Maple (the stat subpackage), GeoGebra, MS Excel (statistical functions and the Analysis package), etc. With the correct use of these commands (computer tools), it becomes possible to process cumbersome mathematical formulas quickly and to simplify calculations (for example, [3]). What then matters is not so much the ability to write a formula as an understanding of the hypotheses and the correct perception and interpretation of the result. Thus, under certain conditions, researchers with a humanities background can draw conclusions without complex mathematical calculations. The purpose of our study is to propose an approach (method) to the statistical analysis of the results of a pedagogical experiment that requires knowledge of only an introductory statistics course (understanding of sample, sample size, population, mean, variance, and standard deviation) and that can serve as the basis for qualitative analysis and reasoned conclusions. Background Here is a typical algorithm for testing a statistical hypothesis: (1) At the first stage, samples are formed for a certain indicator (observation). (2) The null hypothesis H0 and the alternative hypothesis H1 are formulated. The null hypothesis is tested using a specially selected random variable whose exact or approximate distribution is known in advance. The statistical criterion Kn is chosen. To test the null hypothesis against the sample data, the observed (empirical) value of the criterion is calculated at a given significance level (for the pedagogical sciences, a confidence probability of 0.95, i.e. a significance level of 0.05, is accepted). The critical and empirical values are compared. If the empirical value of the criterion falls into the critical region (Fig. 1), hypothesis H0 is rejected in favor of the alternative hypothesis H1. Hypothesis testing solves, first of all, the problem of comparing sample numerical characteristics (means, variances) with corresponding specified values, and of comparing the numerical characteristics of two or more samples with each other (testing the hypothesis that the samples belong to one population). Testing statistical hypotheses about the equality of means is based, for example, on the algorithm of Student's test, and about the equality of variances on the algorithm of Fisher's test. These criteria are built into most specialized computer programs, which automates the calculation of both the critical value of the criterion and the empirical value of the statistic for that criterion. As an example, we give calculations in the MS Excel spreadsheet, which is installed on almost every computer (Fig. 2). The researcher only needs to compare the critical and empirical values to conclude whether the null hypothesis is accepted or rejected.
For the example shown in the figure (Fig. 2), we have: to compare the means, compare the empirical value of the statistic t = 0.13 (cell I11) with the critical value t = 2.09 (cell I15) and conclude that the sample data give reason to accept the null hypothesis (0.13 < 2.09; the statistical difference of the means is equal to zero); to compare the variances, compare the empirical value of the statistic F = 0.80 (cell E9) with the critical value F = 0.46 (cell E11) and conclude that the sample data give reason to reject the null hypothesis (0.80 > 0.46) in favor of the alternative (the variances are not equal, and the difference is statistically significant). In a pedagogical study, it is advisable to compare the average values and deviations at different stages of the pedagogical experiment in order to adjust the proposed teaching methods. Methodology To achieve the result, the following were used: theoretical analysis of scientific records, revealing the theoretical foundations of statistical analysis in pedagogical research; content analysis of Internet resources, identifying resources for the automation of statistical observations; and empirical methods (a pedagogical experiment), demonstrating the processing of empirical data for a fragment of the methodology. In pedagogical research, experimental verification of the effectiveness of the author's methodological system (model, approach, etc.) involves the use of the normal distribution law. However, comparing experimental data in the control and experimental groups requires not only checking the normality of their distribution but also a more detailed analysis. This concerns not only the mean value of the data (the mean, or mathematical expectation) but also the variance, which characterizes the spread of the sample values around the mean (the standard deviation); if the variance is additionally taken into account, it becomes possible to deepen the analysis of the data obtained and to draw better-grounded conclusions about the stability of the studied parameters and their prediction. Therefore, our approach to the statistical analysis of the results of a pedagogical experiment is based on two positions: 1) comprehensive accounting of changes in the means and variances of both samples: the idea is to analyze the samples simultaneously according to the Student and Fisher criteria, compare the means and variances, and draw appropriate conclusions about the effectiveness of the author's model or approach; 2) tracking the intermediate results of the pedagogical experiment for timely adjustment of the developed methodology or approach: the idea is that, first, the pair "control group and experimental group at the clarifying stage of the experiment" is monitored and the methodology is improved (or not), and then, on the new pair "control group and experimental group at the formative stage of the experiment", the author's method is finally confirmed. To study the results of the pedagogical experiment, the following algorithm of actions is proposed, which simplifies the analysis while still relying on the mathematical apparatus and yielding well-grounded conclusions. (1) At the beginning of the pedagogical experiment, we form the groups: KG (mean for KG, variance for KG) as the control group, and EG-1 (mean for EG-1, variance for EG-1) as the experimental group at the refinement (clarifying) stage of the experiment. Later we form the group EG-2 (mean for EG-2, variance for EG-2) as the experimental group at the formative stage of the experiment.
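For readers who want to reproduce critical values like those quoted from Fig. 2 without Excel, the snippet below obtains them from scipy. The sample size is not stated in this excerpt; n = 20 is an assumption made here only because the quoted critical values (t = 2.09, F = 0.46) are consistent with 20 observations per sample.

```python
from scipy import stats

alpha = 0.05
n = 20            # assumed sample size; the critical values quoted in the text match n = 20
df = n - 1        # degrees of freedom for a paired t-test and for each sample's variance

# Two-tailed critical value for Student's t (compare with the empirical t from the data)
t_crit = stats.t.ppf(1 - alpha / 2, df)

# One-tailed critical values for Fisher's F (lower and upper bounds of the acceptance region)
f_crit_lower = stats.f.ppf(alpha, df, df)
f_crit_upper = stats.f.ppf(1 - alpha, df, df)

print(f"t critical (two-tailed): {t_crit:.2f}")        # ~2.09, as in Fig. 2
print(f"F critical (lower tail): {f_crit_lower:.2f}")  # ~0.46, as in Fig. 2
print(f"F critical (upper tail): {f_crit_upper:.2f}")
```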
(2) The groups are selected so that at the start of the experiment they are homogeneous in composition with respect to the indicator under study (for example, the results of tests or surveys). Such verification is carried out in advance, for example, using the chi-square criterion [6]. (3) Set the significance level (usually 5%, i.e. 0.05). (4) Record the results of the studied indicator for each of the groups (i.e. form the samples). (5) Using computer tools (in this case MS Excel), determine the results of applying the statistical criteria for estimating the means (Student's criterion) and the variances (Fisher's criterion). (6) Construct a table (Table 1): if the hypothesis H0 is accepted, enter the value 0 in the table (non-significant discrepancy or deviation in the results); if the alternative hypothesis H1 is accepted, enter the value 1 in the table (significant discrepancy or deviation in the results). Fill in the cells highlighted in blue. Table 1. Significance of the difference between the mean and variance for a given indicator ("1" — significant deviation, "0" — non-significant deviation). (7) Carry out a qualitative analysis of the results, i.e. compare the corresponding pairs "evaluation of samples by Student's test" (one of columns 1-3 of Table 1) and "evaluation of samples by Fisher's test" (one of columns 4-6 of Table 1), which can occur in the following variants: a. "0 (for averages) and 0 (for variances)" means that the averages are statistically the same and the variances are statistically the same, which leads to the conclusion that the developed methodology (or approach) had no impact on the studied indicator; b. "0 (for averages) and 1 (for variances)" means that the averages are statistically the same and the variances are statistically different, which leads to the conclusion that the developed methodology (or approach) did not shift the studied indicator but changed its spread. If the variance is greater in the experimental group, the reasons should be clarified and the methodology should be improved or abandoned; c. "1 (for averages) and 0 (for variances)" means that the averages are statistically different and the variances are statistically the same, which leads to the conclusion that the developed methodology (or approach) had a statistically significant impact on the studied indicator. The direction (dynamics) of the change in the averages must be assessed for a correct conclusion about the effectiveness, or conversely the negative impact, of the implemented methodology; d. "1 (for averages) and 1 (for variances)" means that the averages are statistically different and the variances are statistically different, which leads to the conclusion of significant differences between the experimental and control groups; this usually speaks in favor of the experimental group and demonstrates a positive impact of the proposed methodology (or approach) on the studied indicator. It is important to assess not only the dynamics of the averages (whether they increase or decrease) but also to compare the variances: if the variance decreases during the experiment, this further indicates the quality of the impact of the developed methodology (or approach); if the variance increases, it means that, along with the change in the mean, the scatter of the scores around the mean grows, i.e. in the experimental group the perception of (and response to) the chosen methodology is ambiguous, which also calls for some adjustment of the latter.
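To make steps (5)-(7) concrete, the sketch below wraps them into a single helper that runs Student's t-test and a two-sided Fisher F-test on two samples, records the 0/1 pair for Table 1, and returns the corresponding interpretation (variants a-d). This is an illustrative Python implementation rather than the authors' Excel workbook, and it takes the equivalent p-value route instead of comparing empirical and critical values; the sample marks at the end are placeholders.

```python
import numpy as np
from scipy import stats

def evaluate_pair(sample_a, sample_b, alpha=0.05):
    """Return the (averages, variances) 0/1 flags and an interpretation for two samples."""
    a, b = np.asarray(sample_a, float), np.asarray(sample_b, float)

    # Student's t-test for equality of means
    _, p_mean = stats.ttest_ind(a, b)
    mean_flag = int(p_mean < alpha)

    # Fisher's F-test for equality of variances (two-sided, from the variance ratio)
    f = a.var(ddof=1) / b.var(ddof=1)
    p_one_sided = stats.f.sf(f, len(a) - 1, len(b) - 1)
    p_var = 2 * min(p_one_sided, 1 - p_one_sided)
    var_flag = int(p_var < alpha)

    interpretation = {
        (0, 0): "a. no impact: averages and variances are statistically the same",
        (0, 1): "b. averages the same, variances differ: inspect the spread of results",
        (1, 0): "c. averages differ, variances the same: assess the direction of the shift",
        (1, 1): "d. averages and variances differ: assess both the shift and the spread",
    }[(mean_flag, var_flag)]
    return mean_flag, var_flag, interpretation

# Placeholder marks on a 5-point scale (not the study's data)
kg   = [3, 3, 4, 2, 3, 4, 3, 2, 3, 4]
eg_2 = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]
print(evaluate_pair(eg_2, kg))
```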
The results' accuracy and reliability are based on using the correct mathematical apparatus and a clear interpretation of the results of applying the Student and Fisher criteria.Experimental data are collected and entered into the tables by the experimenter.Calculations are carried out automatically.Pairs are interpreted by the experimenter according to the specified rule-algorithm. Results We describe the use of the above algorithm in the example of processing the results of one of the stages of the pedagogical experiment. Mathematics education is the basis of the system of professional training of future specialists in computer engineering in technical higher educational institutions.Therefore, the effectiveness of the author's system of teaching higher mathematics to future computer engineers was tested. For the pedagogical experiment the control group was selected -KG (, 2 ), the group EG-1 (, 2 )clarifying experiment and the group EG-2 (, 2 )a formative experiment. Table 2. Significance of differences in indicators of numerical characteristics of knowledge formation in groups K, E, and F ("1"significant deviation, "0"non-significant deviation) Among others, the "Knowledge" indicator was studied based on final tests in various sections of higher mathematics.The results of the control works formed samples that were compared among themselves according to the algorithm.We evaluated each control work in 5 points.The average and dispersions for each sample of EG-2, EG-1, and KG were entered in the corresponding cells (in particular, these are columns with numbers 1.1, 2.1, 3.1 for averages, and 1.2, 2.2, 3.2 for variances).The results of the comparison of pairs of groups according to the Student and Fisher criteria were entered in the right part of the table (blue fields).Columns 4-6 show the results of the evaluation of pairs of samples according to the Student's criterion.Columns 7-9 show the results of the estimation of pairs according to Fisher's test.(Table 2) To understand the interpretation of the data, let's comment on the first line of the table corresponding to the course "Linear Algebra" (highlighted in bold). Students (KG, EG-1, EG-2) wrote a test paper for which they received certain marks.Means and variances were determined for each group: for example, for the KG group, the mean was 3.08, and the variance was 0.96.After that, the Student's test is applied to each of the pairs of groups (in MS Excel) and a conclusion is made about the statistical similarity or difference of the means.For the pair EG-2 and EG-1, the criterion gave statistical similarity, so for this pair we put "0" (column 4), and for the pair EG-2 and KG, the criterion confirmed a statistical discrepancy, so we put "1" (column 5).Similarly, we apply Fisher's criterion (calculated automatically in MS Excel).For the pair of EG-2 and EG-1, the criterion again gave statistical similarity, therefore, for this pair we put "0" (column 7), and for the pair of EG-2 and KG, the criterion confirmed a statistical discrepancy, so we put "1". 
To interpret the data, we compile the corresponding pairs of "averages and variances" based on the results of statistical analysis: -For groups EG-2 and EG-1 we have a pair of "0-0" (column 4 and column 7).This means that there was no shift in learning outcomes for these samples; -For groups EG-2 and KG we have a pair "1-1" (column 5 and column 8).This means that a shift in learning outcomes for these samples has occurred.Therefore, means and variances should be compared.The average in EG-2 is 4.06 (column 1.1).It is greater than the average in KG (3.08, column 3.1).This means that the influence of the method on educational achievements according to the analysis of averages should be considered positive.But this conclusion will be more valid if it is confirmed that the spread of grades for control works in the experimental group is smaller (that is, all grades are closer to the average than in the KG).Let's compare the variances of these samples.In the EG-2 group, the variance is smaller (0.79, column 1.2) than in the KG group (0.96, column 3.2).We conclude that the impact of the methodology was positive, and the learning results in the EG-2 group are higher and more reliable, as they have less dispersion (the average is higher and the variance is smaller). The analysis of the significance of the differences between the means and variances between the groups shows that the proposed system of teaching mathematical disciplines significantly increases the level of knowledge acquired by students.This is evidenced by the data in columns 5 and 8.The indicator indicates a significant difference in students' knowledge of all topics in favor of the proposed system.At the same time, the statistical average for all topics was higher in the experimental group of the formative stage. After statistical processing of the results of the refinement phase of the experiment, we saw that although the test results in the experimental group EG-1 were higher than the results of the control group CG, the difference was not consistently significant (columns 6 and 9).This meant that this element in the training system did not affect the results we wanted.The method of lectures and practical classes on vector algebra, analytical geometry, definite integral, series theory, functions of a complex variable, and operational calculus was improved (changed) (in particular, author's textbooks were introduced, which provided independent processing of higher mathematics sections by students). The results of statistical analysis after the formative experiment confirmed a positive shift in the results of knowledge acquisition after adjusting the methodology of classes, as evidenced by the data in columns 4 and 7 and 6 and 9. Analyzing the obtained results of the significance of the difference in the indicators of numerical characteristics, we see that in all parameters the difference between the proposed system of teaching mathematical disciplines and traditional is significant.The difference between the values in the experimental group of shaping (EG-2) and refining (EG-1) experiments indicates that the decision to test it with a refining experiment and then conduct a shaper was correct. 
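As a small check of the kind of summary figures quoted above, the snippet below re-runs the comparison of the Linear Algebra posttest means for EG-2 and KG with scipy's summary-statistics t-test. The group size is not reported in this excerpt, so n = 30 per group is an assumption made purely for the example; the study's own tests were computed from the raw scores, so the printed values are only indicative.

```python
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n = 30  # assumed group size; not reported in this excerpt

# Linear Algebra posttest summary figures quoted in the text
eg2_mean, eg2_var = 4.06, 0.79   # experimental group (formative stage)
kg_mean,  kg_var  = 3.08, 0.96   # control group

t_stat, p_value = ttest_ind_from_stats(
    mean1=eg2_mean, std1=sqrt(eg2_var), nobs1=n,
    mean2=kg_mean,  std2=sqrt(kg_var),  nobs2=n,
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("averages differ significantly" if p_value < 0.05 else "no significant difference in averages")

# A variance comparison (Fisher's criterion) could be added the same way
# (F = eg2_var / kg_var with n - 1 degrees of freedom per group), but its verdict
# is sensitive to the assumed n, so it is not reproduced from the summary figures here.
```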
Discussion

The features of the processes of teaching and education are not easy to study and reveal. Pedagogical processes are ambiguous: their results depend on the simultaneous influence of many factors, and it is enough to change the influence of one factor for the results of the process to differ significantly. Pedagogical processes are also characterized by uniqueness. While a researcher in the natural sciences (in chemistry or physics) can repeat an experiment several times using the same materials and creating the same conditions, the teacher-researcher has no such opportunity: a repeated study takes place under different working conditions and, as a result, yields different results. That is why a "pure" experiment in pedagogy is impossible. Given this circumstance, teachers should draw their conclusions carefully and thoughtfully, understanding the relativity of the conditions under which these conclusions were obtained. Multiple observations (repetition of the experiment) allow conclusions to be formulated in a generalized form and the most characteristic trends to be identified [11]. It is this thesis that led to the intermediate pedagogical experiment, which is the basis of our methodology of statistical analysis.

The idea of the proposed algorithm of statistical analysis appeared in 2008 [14] during the development and improvement of the model of professional training of future specialists in technical specialties. Its further use can be found in the works [8,16,18].

Fig. 2. Example of calculations in MS Excel

Statistical analysis of two data samples (columns A and B) can be performed using the optional Analysis ToolPak (Data tab), which includes the F-Test Two-Sample for Variances and the t-Test: Paired Two Sample for Means tools. These tools produce tables of sample characteristics:
- for Fisher's F-test: the mean, the variance, the sample (observation) size, the number of degrees of freedom (df), the empirical value of the statistic (F), and the critical value of the statistic (F critical);
- for Student's t-test: the mean, the variance, the sample (observation) size, the correlation coefficient for the data set (Pearson correlation), the hypothesized difference between the means of the two samples, the number of degrees of freedom (df), the empirical value of the statistic (t statistic), and the critical value of the statistic (t critical, for the one-tailed and two-tailed cases).
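The same sample characteristics and "0"/"1" flags can be reproduced outside MS Excel. The following is a minimal sketch in Python with SciPy, assuming a significance level of 0.05, a two-sided convention for the F-test, and purely hypothetical 5-point grade samples; it is offered only as an equivalent computation for illustration, not as the procedure used in the experiment, which relied on the Excel tools described above.

```python
import numpy as np
from scipy import stats

ALPHA = 0.05

def compare_samples(a, b, alpha=ALPHA):
    """Return (t_flag, f_flag): 1 = significant difference, 0 = not significant."""
    a, b = np.asarray(a, float), np.asarray(b, float)

    # Student's t-test for the difference of means (two-sided, equal variances).
    t_stat, t_p = stats.ttest_ind(a, b)
    t_flag = int(t_p < alpha)

    # Fisher's F-test for the ratio of variances (two-sided convention).
    f_stat = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    f_p = 2 * min(stats.f.cdf(f_stat, dfa, dfb), stats.f.sf(f_stat, dfa, dfb))
    f_flag = int(f_p < alpha)

    return t_flag, f_flag

# Hypothetical grade samples on a 5-point scale for EG-2 and KG:
eg2 = [5, 4, 4, 5, 3, 4, 4, 5, 4, 3]
kg  = [3, 3, 4, 2, 3, 4, 3, 3, 2, 4]
print(compare_samples(eg2, kg))   # -> (1, 0): means differ, variances do not
```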
OBSERVE YOUR CHILD'S ABLUTION; THEY COULD HAVE OBSESSIVE-COMPULSIVE DISORDERS

Introduction

Obsessive-compulsive disorder (OCD) is a neuropsychiatric disorder characterized by recurrent intrusive, distressing thoughts and repetitive behaviours or mental rituals performed to reduce anxiety. The lifetime prevalence of OCD is 2.3%, and it can occur in people of all ages, including children and adolescents. The mean age of onset is 19.5 years, and a subset of patients, mostly males, have an early onset before 10 years of age. The lifetime risk of developing OCD is higher in females, who typically develop the disorder in adolescence [1]. Persons with OCD usually present with obsessions involving various themes, namely contamination, repeated doubts, need for symmetry and exactness, or taboo thoughts of a sexual, religious, or aggressive nature. The most common compulsions are checking, washing, hoarding, and counting [1].

Children with early-onset OCD are often underdiagnosed for multiple reasons, such as children hiding the symptoms from their parents and parents being unaware of the symptoms [2,3]. These symptoms can affect their daily activities, including the performance of worship. The aim of this article is to highlight excessive practice in performing wudhu/ablution among children as a possible initial sign of OCD.

Discussion

Wudhu is a ritual purification performed by washing parts of the body. It is done following commandments that require the body parts to be washed in a determined order: washing the face, washing the forearms, wiping the head, and washing the feet. The ablution is usually done to purify oneself for the rituals that follow, e.g. the daily prayers. As with most Islamic rituals, Muslims try their best to follow what Prophet Muhammad (PBUH) practised, and this is called following the Sunnah. In ablution, besides the compulsory order of washing, there are several sunnah practices that are carried out, for example washing each body part thoroughly three times. Muslim children are taught about ablution (wudhu) by their parents at home before they perform the five daily prayers (salat). At school, these are taught during Islamic Education classes, and the practices are observed by the teachers (Ustaz) to ensure they are performed as closely as possible to the Islamic guidelines. OCD is diagnosed when the obsessions and/or compulsions cause marked distress, are recognized at least at some point as being excessive and/or irrational, are time consuming, and significantly interfere with the person's functioning [4].
In the case of a Muslim school-age child with OCD, one can observe the obsessiveness and compulsion in their practice of wudhu. For them, each washing in wudhu (the face, arms, head and feet) has to be repeated to ensure that it abides by the sunnah and, at the same time, is performed correctly so that their prayer will be accepted. Typically, therefore, they spend long periods in the bathroom performing ablution and, as a result, delay their other activities, for example arriving late to school. Being overly meticulous in their ablution, some of them even wet their school uniforms while preparing for the afternoon prayer (Zuhr), which they usually perform at school. Sometimes they repeat the whole ablution process after having thoughts that the prior attempt was inadequate. This religious OCD symptom is known as scrupulosity and typically involves the perception of committing a sin where there is none, with a focus on minor details of one's religious practice [5].

It is a challenge for psychiatrists to treat children through a religious approach [5] because of differences in cognitive development. Therapists need good knowledge and understanding of the steps and rationale of performing wudhu and salat, e.g. the compulsory and the sunnah parts of each practice, and of how to adjust these practices therapeutically, for example how to help the children understand that they need to perform each washing in wudhu only once (compulsory) rather than three times (sunnah). It is important to be able to discuss the rationale for choosing between the two procedures so that the children do not feel guilty if they do not follow the sunnah of the Prophet Muhammad (PBUH). Perhaps more emphasis should be placed on how the Prophet took the easier of two choices and avoided wasting water during ablution, to instil a better understanding of the issue. Likewise, other behavioural techniques such as breathing exercises, thought-stopping, and response prevention should be emphasised, because these can help the children stay calm and prevent them from reinforcing the compulsive behaviour. Figure 1 is an algorithm for the approach to treating ablution-related OCD rituals.

Children at this age (12 years of age and above) are at the formal operational stage of Piaget's cognitive development [6], where they begin to think abstractly. They conceptualise and create logical structures that explain their physical experiences. They need a clear explanation, with similar or additional information, to complement what their Ustaz has demonstrated in their Islamic Education classes. If we fail to explain matters to the level they expect, they may continue to search other sources, such as the internet and other unreliable sources, which may worsen the intrusive thought of incomplete wudhu and hence make them feel that their salat is invalid. These vicious cycles of obsessional thought must be prevented. Compared with adults, the management of OCD in children is relatively easier because children with OCD experience fewer intrusive thoughts and less distress [7]. If we can identify the early signs and symptoms of OCD, early intervention that suits the children's cognitive abilities can be initiated, preventing the condition from worsening. The involvement of the family of young children with OCD in therapy is vital to ensure that the children practise what is taught in Cognitive Behavioural Therapy (CBT) as well as compliance with pharmacotherapy [3].
Conclusion

Children should be monitored when performing ablution, because overemphasis on washing each body part three times can be an initial symptom of OCD. Parents and teachers should have excellent knowledge and understanding of the compulsory and sunnah practices of ablution so that they can help children understand the difference between the two in a way appropriate to the children's cognitive developmental stage.
Foramen Magnum Variant With Elongation of the Anterior Notch

Morphological variations of the foramen magnum (FM) have been demonstrated to have different shapes and sizes according to sex, age, and ethnicity. In this report, an ancient Roman skull was found to have a unique anterior notching, further specified as an anterior elongation of the FM. To our knowledge, this feature has not been previously reported. The FM is one of the most challenging neurosurgical regions due to both its deep location and its proximity to vital structures. Therefore, physicians and surgeons must account for FM anatomical variations in order to properly diagnose craniocervical pathology, interpret radiological images, and optimize surgical outcomes. In this case report, we describe the possible embryology and clinical importance of an apparently rare FM variant.

Introduction

The foramen magnum (FM) is the largest foramen of the skull, located in the posterior cranial fossa of the occipital bone. It marks the transition of the central nervous system between the spinal cord and the brain. The spinal cord, vertebral arteries, anterior and posterior spinal arteries, meninges, and spinal roots of the accessory nerve pass through it. The craniocervical junction (CCJ) develops embryologically when the notochord induces neuroectodermal differentiation. The paraxial mesoderm contributes to the formation of bone and muscle in this region. The basion, the median point of the anterior margin of the FM, is derived from the proatlas. The elongation of the clivus and anterior FM results from lateral sutural growth. The bone around the FM results from descent of the occiput and growth of the petro-occipital and sphenopetrosal junctions. The FM size and area are determined by the endochondral parts of the basiocciput, exocciput, and supraocciput and the interoccipital synchondroses between them [1,2]. The FM is surrounded by the pars lateralis, pars squama, and pars basilaris components of the occipital bone. A unique FM with anterior notching and elongation was found in an ancient Roman skull. This case demonstrates the varied morphologies of the FM. Its location is important, as it is near the fourth ventricle, the lower cranial nerves (IX-XII) and upper cervical nerves (C1-C2), and the medulla oblongata [3]. Surgeons should understand the FM and the surrounding region in order to prevent injury to structures passing through it and to avoid complications during procedures.

Case Presentation

A unique FM with anterior notching was found in an ancient Roman skull (Figures 1, 2).

FIGURE 1: Skull base of an infant noting the parts of the occipital bone. Note the developing parts of the occipital bone surrounding the foramen magnum.

FIGURE 2: A unique foramen magnum (FM) with anterior notching (white arrow) in an ancient Roman skull (left). For comparison, the normal FM without such an anterior notch (black arrow) is shown (right). Note the extension of the FM beyond the borders of the occipital condyles, which is out of the ordinary.

The skull was that of an adult male. From an inferior view, the anterior notch of the FM was significantly elongated. The width of the FM was normal at about 27 mm, but the length was considerably longer than normal at approximately 40 mm (normal approximately 30 mm) [1]. Additionally, this variant anterior part of the FM extended anterior to the occipital condyles, almost to the level of the hypoglossal canal. No other anatomical variants were noted at the skull base in this specimen.
The left and right occipital condyles were within normal limits in regard to position and size. The dimensions of the clivus were found to be within normal limits. The adjacent supraocciput and basiocciput were found to be within normal limits. The adjacent jugular foramina did not exhibit any anatomical variations. Discussion We presented a very unusual case where the FM extended beyond the position of the occipital condyles with an elongated anterior segment. Normally, the FM does not extend beyond the borders of the occipital condyles. The shape of the FM has many named variants, such as rhomboid, circle, heart, pear, and hexagon, although these names are inconsistent between studies. The FM may also be asymmetrical, or there may be a different degree of protrusion of the occipital condyles into it [4]. Aragão et al. described the most predominant morphological types of FM as pear-shaped, rounded, and tetragonal [5]. Samara et al. concluded that an irregularly shaped FM was most predominant in analysis of a Jordanian population [6]. However, the main type varies significantly in the literature due to inconsistent shape labels. Aside from interstudy discrepancies in FM shape labeling, variations in FM morphology are often due to sexual dimorphism (size) and different ethnic groups (shape). Zdilla et al. described males as having a longer sagittal length and larger FM area than females, but no significant difference in shape. The shape of the East Asian FM was closest to the average FM shape, followed by Europeans and Bengalis. The African, Bengali, and Malayan populations tended to have a more elongated FM shape [2]. Another determinant of variation in the shape of the FM is age. Lang described five different shapes of the FM: two semicircles, elongated circle, egg-shaped, rhomboid, and rounded. They compared them based on prevalence in two age groups: adults and children [7]. It was reported that the most common shape in the adult age group was two semicircles (41.2%), and the most common shape in the children age group was rhomboid (31.6%). These results illustrate the importance in considering age when considering the FM. The variation in FM shape occurs postnatally. During the fetal period, the characteristic shape of the FM is a long oval shape, likely due to the 5.4% faster growth speed in the sagittal direction compared to the transverse direction between the seventh month in utero and birth [2]. During postnatal growth, the FM shape becomes more variable. Between birth and sixth months of age, growth of the FM in the transverse direction is 7.6% greater than growth in the longitudinal direction [2]. Also occurring at birth, the ventral part of the FM widens due to an increased growth rate of the anterior interoccipital synchondroses, causing the ventral FM to invaginate more deeply into the basiocciput [8]. The elongation of the anterior notch of the FM seen in the present case may result from such growth. Variations in the shape and size of the FM are also relevant in forensic and anthropological studies as this structure's dimensions are used to determine the sex, age, ethnicity, stature, and other important identifying information of an individual. Conditions such as achondroplasia (small FM) and Chiari malformations (specifically Chiari type II malformations) (often large FM) demonstrate the variation in FM size [8]. In achondroplasia, the reduced size and shape restrict CSF egress through the FM, leading to potential hydrocephalus. 
In contrast, in Chiari malformations, the cerebellar vermis protrudes through the enlarged FM. However, this protrusion is thought to precede the formation of the FM. Significant elongation of the ventral FM would efface the posterior pharynx [4]. The variation in the shape of the FM is important to consider for neurosurgeons. Avci et al. described that with an ovoid FM, it is more challenging for a surgeon to access its anterior portion [9]. The degree of protrusion of the occipital condyles into the FM is also a factor. In the ovoid type of FM, more extensive bony removal of the occipital condyles may be required, whereas condylectomies with shorter occipital condyles may cause occipitocervical instability. Thus, the anterior elongation of the FM could affect surgical access to the anterior FM such as for resection of meningiomas or access to other structures of the CCJ such as in odontoidectomy. Additionally, such a variant might give surgeons more working space to resect tumors of this region without removing the condyles [10].
Dysfunctional mitochondria, disrupted levels of reactive oxygen species, and autophagy in B cells from common variable immunodeficiency patients

Introduction: Common Variable Immunodeficiency (CVID) patients are characterized by hypogammaglobulinemia and poor response to vaccination due to deficient generation of memory and antibody-secreting B cells. B lymphocytes are essential for the development of humoral immune responses, and mitochondrial function, reactive oxygen species (ROS) production, and autophagy are crucial for determining B-cell fate. However, the role of these basic cell functions in the differentiation of human B cells remains poorly investigated.

Methods: We used flow cytometry to evaluate mitochondrial function, ROS production, and autophagy processes in human naïve and memory B-cell subpopulations in unstimulated and stimulated PBMC cultures. We aimed to determine whether any alterations in these processes could impact B-cell fate and contribute to the lack of B-cell differentiation observed in CVID patients.

Results: We describe how naïve CD19+CD27- and memory CD19+CD27+ B-cell subpopulations from healthy controls differ in their dependence on these processes for their homeostasis, and demonstrate that different stimuli exert a preferential, cell type-dependent effect. The evaluation of mitochondrial function, ROS production, and autophagy in naïve and memory B cells from CVID patients disclosed subpopulation-specific alterations. Dysfunctional mitochondria and autophagy were more prominent in unstimulated CVID CD19+CD27- and CD19+CD27+ B cells than in their healthy counterparts. Although naïve CD19+CD27- B cells from CVID patients had higher basal ROS levels than controls, their ROS increase after stimulation was lower, suggesting a disruption in ROS homeostasis. On the other hand, memory CD19+CD27+ B cells from CVID patients had both lower basal ROS levels and diminished ROS production after stimulation with anti-B-cell receptor (BCR) and IL-21.

Conclusion: The failure in ROS cell signalling could impair CVID naïve B-cell activation and differentiation to memory B cells. Decreased levels of ROS in CVID memory CD19+CD27+ B cells, which negatively correlate with their in vitro cell death and autophagy, could be detrimental and lead to their previously demonstrated premature death. The final consequence would be the failure to generate a functional B-cell compartment in CVID patients.

Introduction

Common Variable Immunodeficiency (CVID) is the commonest symptomatic primary humoral immunodeficiency, characterized by hypogammaglobulinemia and poor response to vaccination (1). Patients benefit from substitutive gammaglobulin therapy. Apart from recurrent infections, some patients present with noninfectious complications including autoimmune, autoinflammatory, and lymphoproliferative disorders requiring immunosuppressive or other treatments different from gammaglobulin (2)(3)(4). Monogenic mutations in genes related to inborn errors of immunity (IEI) have been described in approximately 30% of the patients (5).
The common finding of abnormal late B-cell differentiation to memory and antibody-secreting cells (ASCs) found in CVID patients has provided the basis for several classifications that rely on the presence or absence of different subpopulations of memory B cells (6)(7)(8).The generation of memory B cells and ASC is essential for the development of humoral immune responses.For this to occur, mature B lymphocytes must receive signals provided by antigens through B-cell receptor (BCR) and T-cell help.T-cell cooperation is established through direct contact between T-cell membrane molecules and their corresponding B-cell ligands or through the secretion of cytokines that stimulate receptors located on the B-cell surface.One of the most important cytokines for B-cell differentiation is IL-21.In certain circumstances, stimulation through toll-like receptor (TLR) can substitute for T-cell help, driving B-cell differentiation (9)(10)(11).These stimuli also influence the apoptosis/survival balance that preserves B-cell homeostasis; specific requirements have been shown to be dependent on the B-cell maturation and activation status (12,13). Immune cell metabolism provides energy and a source of biomolecules but also determines immune-cell fate.Mitochondrial metabolic pathways play essential roles in cell activity, differentiation, stress, and aging and are important for Bcell activation and plasma cell generation (14)(15)(16)(17).Oxidative stress is associated with the intracellular production of reactive oxygen species (ROS), mainly generated in the inner membrane of the mitochondria by the electron transport chain (18).Although ROS play important roles in the regulation of cell signaling and homeostasis (19)(20)(21), excessive ROS can lead to cellular damage by oxidation of proteins, lipids, nucleic acids, and organelles.Impairment of mitochondrial function leads to depolarization, reduction in membrane potential, changes in membrane permeability, and, ultimately, apoptosis (22). Autophagy, an evolutionary conserved lysosomal degradation process, protects cells from oxidative stress by removing damaged cellular components such as mitochondria (23,24).Its role in memory and plasma B-cell development has been assessed only in mouse models (25).There are lines of evidence suggesting a reciprocal interplay between mitochondrial function and autophagy, but the molecular mechanisms underlying autophagy response to oxidative stress are largely unknown (26). We have previously found that an increased susceptibility to activation-induced apoptosis could be the cause of memory B-cell loss in a subgroup of CVID patients with a more compromised memory B-cell compartment (27).We have also demonstrated a disbalance in mitochondrial apoptosis regulation in this subgroup of patients (28). The aim of this study was to examine the interplay of mitochondrial function, ROS production, and autophagy processes in healthy human naïve and memory B-cell subpopulations and to determine whether any alterations in these processes could potentially impact B-cell fate and contribute to the lack of B-cell differentiation observed in CVID patients. 
Patients CVID patients were selected according to diagnostic criteria established by the International Union of Immunological Societies scientific group for primary immunodeficiency diseases (29, 30).The study included 25 patients diagnosed with CVID, comprising 17 women and eight men, with ages ranging from 31 to 78 and 18 to 74, respectively.Only patients with more than 1% of B cells were included in the study.Patients were categorized into two groups according to the European consensus classification for CVID (EUROclass) (8): i) CVID patients with <2% of IgD − CD27 + (switched memory phenotype) B cells or smB− (n = 13) and (ii) patients with >2% of IgD − CD27 + (n = 12) or smB+.All patients received substitutive gammaglobulin therapy every 21-28 days and were free from infection at the time of the study.Peripheral blood samples were collected before gammaglobulin replacement. Table 1 provides a summary of the patients' age, gender, and percentages of B-cell populations.Additionally, 25 age and sexmatched healthy blood donors were included as controls (Supplementary Table 1).The study was conducted in accordance with the ethical principles outlined in the 1975 Declaration of Helsinki, and it was approved by CEIC-IB (Balearic Islands Clinical Research Ethics Committee; IB 4322/20).Written informed consent was obtained from all subjects prior to their participation in the study. PBMCs were cultured (1 × 10 6 cell/mL) in 24-well flat bottom plates (for ROS and autophagy evaluation) or 96-well plates (for mitochondrial evaluation) and incubated in the absence (unstimulated) or presence of specific B-cell stimulus combinations: F(ab)2 goat anti-human IgA + IgG + IgM (anti- Flow cytometry Peripheral blood lymphocyte populations, cell death, mitochondrial membrane potential (MMP), mitochondrial mass (MM), ROS production, and autophagy levels were analyzed by flow cytometry using a BD FACSLyric (Becton Dickinson, Franklin Lakes, NJ, USA) cytometer and FlowJo v10 software for data analysis. Peripheral blood lymphocyte populations A surface staining protocol was applied to evaluate B-cell subpopulations and phenotypically classify CVID patients.Briefly, 50 µL of peripheral whole blood was incubated with a combination of fluorochrome-conjugated monoclonal antibodies (5 µL of each antibody) for 20 min at room temperature (25°C).Red blood cells were lysed for 10 min with 2 mL of FACS Lysing solution (Becton Dickinson) and washed with phosphate-buffered saline (PBS) before flow cytometry analysis.The combination of the following monoclonal antibodies was used to determine the distribution of Bcell subpopulations-anti-CD19-PECy7, anti-CD45-V500, anti-CD27-APC, anti-IgD-V450, and anti-CD21-BV605-all from Becton Dickinson. Mitochondrial function Mitochondrial function was assessed using MitoTracker probes (MitoTracker Green and MitoTracker Deep Red from Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions.Briefly, 2 × 10 5 harvested cells were stained with Live/Dead Aqua Dead Cell Stain (Invitrogen Molecular Probes, Eugene, OR, USA) for 20 min at room temperature (RT) followed by anti-CD19-PE-Cy7 and anti-CD27-BV421 (both from BD Biosciences, San Jose, CA, USA) staining for 15 min at RT.Then, cells were washed with PBS and stained with MitoTracker Green and MitoTracker Deep Red for the evaluation of mitochondrial mass and mitochondrial membrane potential, respectively, for 30 min at 37°C. 
Reduction of mitochondrial membrane potential is a hallmark of mitochondrial dysfunction. For this reason, viable (Live/Dead-) naïve (CD19+CD27-) and memory (CD19+CD27+) B cells (Figure 1Ai) containing mitochondria with low membrane potential, identified as MitoTracker Deep Red low and MitoTracker Green+ (Figure 1Aii), were considered B cells with dysfunctional mitochondria (CDM). The fold increase in the percentage of CDM induced by each single stimulus relative to the unstimulated sample was expressed as a ratio: (single stimulus % CDM)/(unstimulated % CDM).

The ROS fold increase induced by each single stimulus relative to the unstimulated sample was expressed as a ratio: (single stimulus % CellROX+ cells)/(unstimulated % CellROX+ cells).

Briefly, 5 x 10^5 harvested cells were stained with Live/Dead Aqua Dead Cell Stain (Invitrogen Molecular Probes) for 20 min at 4°C. Cells were washed with cold PBS and stained with anti-CD19-APC and anti-CD27-BV605 (both from BD Biosciences) for 20 min at 4°C. After surface staining, cells were washed with 1X Assay Buffer and stained with 1X Autophagy Reagent B for selective permeabilization. Cells were washed immediately, stained with 1X Anti-LC3-II-FITC for 30 min at 4°C, and finally fixed with 2% formaldehyde solution (Merck, Darmstadt, Germany) for 10 min at RT. The geometric mean fluorescence intensity (MFI) of Anti-LC3-II-FITC was used to evaluate autophagy levels in previously gated naïve (CD19+CD27-) and memory (CD19+CD27+) viable (Live/Dead-) B cells. Basal autophagy refers to autophagy measured in 24-h unstimulated cultured cells. Autophagic flux refers to autophagy measured in 24-h unstimulated or stimulated cells in the presence of bafilomycin A1 (bafA1). Bafilomycin promotes the accumulation of autophagosomes by inhibiting autophagosome breakdown, thus favoring the detection of LC3-II (Figure 3A). The autophagic flux fold increase induced by each single stimulus relative to the unstimulated sample was expressed as a ratio: (single stimulus Anti-LC3-II MFI)/(unstimulated Anti-LC3-II MFI).

The Mann-Whitney test was used to compare differences between two independent data sets related to a single experimental condition (e.g., CD19+CD27- vs. CD19+CD27+ unstimulated control B cells, or control vs. CVID single B-cell subpopulations stimulated with anti-BCR). The Wilcoxon test was used to compare differences between two paired groups of treatments (basal and post-stimulation conditions, or each single stimulus with its combination with other stimuli). Correlation between variables was measured using Pearson's correlation coefficient. A p-value less than 0.05 was considered statistically significant. Principal component analysis (PCA) was performed to evaluate relationships between the experimental variables studied and the patients' B-cell deficiency according to the EUROclass classification. To this end, in the PCA, the categorical B-cell deficiency variable was converted to numerical data as follows: healthy controls = 0, smB+ CVID patients = 1, and smB- CVID patients = 2. Variables were considered significantly loaded in the PCA when the loading value was above or below ±0.5.
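As an illustration of this analysis step, the sketch below shows how such a PCA could be set up in Python (scikit-learn). The column names, the random placeholder data, and the exact loading convention (components scaled by the square root of the explained variance) are assumptions made for the example and are not taken from the study.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data frame: one row per subject, columns for the metabolic
# variables, in vitro cell death, and the numerically coded B-cell deficiency
# (healthy control = 0, smB+ = 1, smB- = 2), as described above.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dysfunctional_mitochondria": rng.random(50),
    "ros_production":             rng.random(50),
    "autophagy":                  rng.random(50),
    "cell_death":                 rng.random(50),
    "b_cell_deficiency":          rng.integers(0, 3, 50),
})

X = StandardScaler().fit_transform(df.values)   # put variables on a common scale
pca = PCA(n_components=2).fit(X)

# Loadings of each variable on PC1 and PC2, flagged when |loading| > 0.5,
# mirroring the cut-off used above.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
for name, (pc1, pc2) in zip(df.columns, loadings):
    flag = "significant" if max(abs(pc1), abs(pc2)) > 0.5 else "-"
    print(f"{name:28s} PC1={pc1:+.2f} PC2={pc2:+.2f} {flag}")
```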
Results

3.1 Cells with dysfunctional mitochondria in naïve and memory B cells from healthy controls and CVID patients

We evaluated the percentage of naïve (CD19+CD27-) and memory (CD19+CD27+) viable B cells with dysfunctional mitochondria from healthy controls and CVID patients. Representative data of a CVID patient and a paired healthy control are depicted in Figure 1Aii. Unstimulated CD19+CD27- control B cells, in contrast to CVID B cells, exhibited a higher percentage of dysfunctional mitochondria than CD19+CD27+ control B cells (p < 0.01) (Figures 1Bi, ii). Next, we compared the percentage of B-cell subpopulations with dysfunctional mitochondria between healthy controls and CVID patients. We detected higher basal percentages of CD19+CD27- B cells with dysfunctional mitochondria in CVID patients compared to controls (p < 0.01) (Figure 1C).

Unlike what was observed in healthy controls, anti-CD40 and CpG-ODN did not increase the percentage of CD19+CD27- CVID B cells with dysfunctional mitochondria (Figures 4Ai, ii). After anti-BCR and anti-CD40 stimulation, there was a statistically lower fold increase only in the percentage of CD19+CD27+ CVID B cells with dysfunctional mitochondria compared to healthy controls (p < 0.05 and p < 0.01, respectively) (Figures 4Bi, ii). Despite this, the final result was a generalized higher level of cells with dysfunctional mitochondria in both CVID subpopulations after stimulation (Figures 4Ai-iv).

FIGURE 4 B-cell subpopulations with dysfunctional mitochondria in healthy controls and common variable immunodeficiency (CVID) patients after stimulation. (A) Percentages of naïve CD19+CD27- (i) and (ii) and memory CD19+CD27+ (iii) and (iv) B cells with dysfunctional mitochondria in healthy controls (i) and (iii) and CVID patients (ii) and (iv) after 24 h of culture without or with stimulation with anti-BCR, anti-CD40, CpG-ODN, anti-CD40+IL-21, or anti-BCR+CpG-ODN. (B) Fold increase in the percentage of B-cell subpopulations with dysfunctional mitochondria induced by each single stimulus or combination, relative to the unstimulated sample, in (i) naïve CD19+CD27- and (ii) memory CD19+CD27+ B cells from healthy controls and CVID patients. Each dot represents an individual. Green dots (naïve CD19+CD27-) and purple dots (memory CD19+CD27+); empty dots (healthy controls) and filled dots (CVID patients). Black horizontal lines illustrate the median of the group.

ROS levels and production in naïve and memory B cells from healthy controls and CVID patients

Next, we evaluated ROS production in naïve (CD19+CD27-) and memory (CD19+CD27+) viable B cells from healthy controls and CVID patients (Figures 2Ai, ii).

Autophagy levels and autophagic flux in naïve and memory B cells from healthy controls and CVID patients

In naïve (CD19+CD27-) and memory (CD19+CD27+) B-cell subsets from healthy controls and CVID patients, we evaluated basal autophagy levels and autophagic flux as described in the Autophagy section of the Materials and Methods (Figures 3Ai, ii). Unstimulated CD19+CD27- B cells from healthy controls displayed levels of autophagic flux similar to those of CD19+CD27+ control B cells (Figure 3Bi). Unlike healthy controls, CD19+CD27+ B cells from CVID patients exhibited higher autophagic flux compared to CD19+CD27- CVID B cells (p < 0.0001) (Figure 3Bii).
The abovementioned results were similar in CVID patients regarding the induction of autophagic flux and the differential behavior of CD19 + CD27 − and CD19 + CD27 + control B cells in response to distinct stimuli (Figures 6Ai-iv).However, when we compared the fold increase in autophagy between CVID patients and healthy control B cells, we found a significantly higher fold increase in autophagic flux in CD19 + CD27 − CVID B cells stimulated with anti-CD40 (p < 0.05) (Figure 6Bi).No differences were found for CD19 + CD27 + CVID B cells (Figure 6Bii). Principal component analysis of the experimental variables, in vitro cell death, and B-cell deficiency Next, we aimed to explore, by PCA, the potential relationships between the studied experimental variables related to B-cell metabolism and in vitro basal cell death (expressed as a percentage of SYTOX + cells) and the degree of B-cell deficiency in our cohort of CVID patients. The PCA performed in CD19 + CD27 − naïve B cells (Figure 7A) showed two principal components characterized by ROS production and autophagy [clustered in principal component 1 (PC1)] and mitochondrial dysfunction [clustered in principal component 2 (PC2)].In this PCA, the only remarkable result observed was a moderate positive correlation between ROS production and autophagy, which formed medium acute angles (Figure 7A).As expected, the contribution of CD19 + CD27 − naïve B cells in vitro basal cell death was irrelevant in the prediction model calculation (loading values of 0.273 and −0.042 in PC1 and PC2, respectively; Figure 7A).The B-cell deficiency degree showed a moderate positive correlation with mitochondrial dysfunction and autophagy; however, its loading value was very low in PC1 and slightly exceeded the proposed cut-off in PC2 (0.195 and 0.525, respectively, in Figure 7A). Next, the PCA performed with data from CD19 + CD27 + memory B cells (Figure 7B) yielded two principal components clearly shaped by ROS production and mitochondrial dysfunction in PC1 and autophagy in PC2.In this PCA, autophagy and mitochondrial function were apparently unrelated to them (nearly 90°angles formed) and, interestingly, showed a high negative correlation with ROS production (great obtuse angles formed).This was in accordance with the lower levels of ROS production and higher levels of autophagy previously demonstrated in CD19 + CD27 + memory B cells from CVID patients (Figures 2C, 3C, D, respectively).Moreover, CD19 + CD27 + memory B cells in vitro basal cell death and B-cell deficiency degree clustered in PC1 (with ROS production and mitochondrial function) with high significant loading values (0.827 and 0.614, respectively, Figure 7B).Interestingly, the PCA in CD19 + CD27 + memory B cells demonstrated a well-defined negative correlation between ROS production and either the in vitro basal cell death or the B-cell deficiency degree, given the nearly 180°angle that they formed (Figure 7B). Discussion ROS production, mitochondrial function, and autophagy are crucial for determining B-cell fate (31) and have been widely studied in mice.However, the interplay between these processes remains poorly investigated in human B cells.CVID patients present with deficient generation of memory B cells and antibody-secreting cells that are essential for the development of humoral immune responses. 
B A Principal component analysis (PCA) of experimental variables related to B-cell metabolism, "in vitro" B-cell death, and the degree of B-cell deficiency in our cohort of common variable immunodeficiency (CVID) patients.Loading plot of the first two principal components (PC1 and PC2) in naïve CD19 + CD27 − (A) and memory CD19 + CD27 + (B) B-cell subpopulations.M., R., and (A) refer to dysfunctional mitochondria (green), reactive oxygen species (ROS) production (red), and autophagy (blue) variables, respectively. The present study aimed to evaluate the interaction of mitochondrial function (mitochondrial membrane potential and mitochondrial mass), ROS production, and autophagy in naïve CD19 + CD27 − and memory CD19 + CD27 + B-cell subsets from healthy controls.Considering our previous results showing an imbalance in mitochondrial apoptosis regulation in memory B cells from CVID patients and an increased susceptibility to activationinduced apoptosis (27,28), we also studied if alterations in these processes in CVID B-cell subpopulations could influence their fate and be the cause of their lack of differentiation in CVID patients. Memory B cells differ from naïve B cells in important aspects including a lower threshold for activation, greater proliferative capacity, or survival period (32)(33)(34)(35).Consistently, we found that healthy controls CD19 + CD27 − and CD19 + CD27 + B cells differentially depend on these studied processes for homeostasis.In unstimulated cell cultures, we found higher amounts of cells with dysfunctional mitochondria in CD19 + CD27 − than in CD19 + CD27 + B cells that, at the same time, displayed strongly lower ROS levels, despite showing similar autophagic flux. ROS production plays a fundamental cellular dual role.At low levels, ROS act as second messengers essential in signal transduction.However, at high levels, ROS can cause organelle oxidative damage, particularly in mitochondria.ROS production increases when B cells are activated, playing a role as second messengers during B-cell activation and differentiation (31).In response to BCR stimulation, ROS production occurs in two waves: an early increase, within minutes upon stimulation, and a second wave of "mitochondrial" ROS production occurring at a later point (6-24 h) (36, 37).Wheeler et al. described that this late-phase ROS production is crucial for mouse spleen B-cell activation and survival.We evaluated this second and essential wave by studying healthy control B-cell subpopulation ROS production after 24-h stimulation.As expected, CD19 + CD27 − B cells increased ROS production when stimulated with all single stimuli and their combinations, while CD19 + CD27 + B cells, according to their lower threshold of activation, showed a moderate increase in ROS production (34,35). 
Mitochondria are the main source of ROS in the cell, and high ROS levels have been related to mitochondrial dysfunction (38).Surprisingly, there was a higher percentage of healthy CD19 + CD27 − B cells with dysfunctional mitochondria than healthy CD19 + CD27 + , contrasting with their lower ROS levels.We found that different stimuli exerted a different effect on these processes.CpG-ODN was the stimulus that induced a higher increase of cells with dysfunctional mitochondria and ROS production in both healthy CD19 + CD27 − and CD19 + CD27 + B cells, supporting that mitochondria are the main source of ROS production in our model.We also observed that the effect of the stimuli was dependent on the cell type.Anti-BCR induced ROS production but decreased the percentage of cells with dysfunctional mitochondria in healthy control CD19 + CD27 − cells, whereas it increased dysfunctional mitochondria, not increasing ROS production, in CD19 + CD27 + B cells. Autophagy is a cytoprotective pathway that protects cells from stressed conditions.However, disruption of autophagic mechanisms or excessive stress-induced autophagy, particularly by oxidative stress, may lead to "autophagic cell death" (39,40).Memory B cells, as naïve B cells, are quiescent before antigen activation and their differentiation into antibody-secreting cells.Accordingly, resting healthy naïve and memory B cells had similar autophagy levels and autophagic flux.BCR stimulation and B-cell activation promote autophagy (16,23).Consequently, anti-BCR and anti-BCR+IL-21 were the stimuli that induced higher autophagy levels in both CD19 + CD27 − and CD19 + CD27 + healthy B cells.CpG-ODN reduced autophagy in CD19 + CD27 − and had no effect on CD19 + CD27 + B cells, confirming the specific action of stimuli and the different cell type responses. When CVID patients were studied, we found that unstimulated CD19 + CD27 − CVID B cells exhibited a higher percentage of cells with dysfunctional mitochondria, lower ROS levels, but in a different way than healthy cells, and less autophagy compared to CD19 + CD27 + CVID B cells.Moreover, dysfunctional mitochondria, ROS production, and autophagy were higher in both unstimulated and stimulated CD19 + CD27 − CVID B cells than in their healthy counterparts, which was significant with certain stimuli.Concerning CD19 + CD27 + CVID B cells, there was a lower ROS production and higher levels of dysfunctional mitochondria and autophagy than their healthy counterparts. B cells from CVID patients displayed a ROS dysregulation compared to their healthy counterparts.Naïve B cells from CVID patients had higher basal ROS levels than controls, but interestingly, the ROS fold increase after stimulation was lower and never reached the same levels achieved by control B cells.Although ROS are required for B-cell activation and maturation, excessive ROS lead to oxidative stress (21,22).Therefore, in CD19 + CD27 − CVID B cells, the combination of high levels of basal ROS with their lower increase after stimulation suggests a ROS dysregulation that can result in dampened cell signaling.This could be detrimental to CVID naïve B-cell activation and differentiation to memory B cells. 
Additionally, CVID patients' CD19 + CD27 + B-cell compartment had lower ROS basal levels and even a significant decrease in ROS production after anti-BCR+IL-21 stimulation, compared to healthy controls.In keeping with this, in a previous study, we demonstrated that CD19 + CD27 + CVID B cells failed to upregulate anti-apoptotic Bcl-2 and Bcl-XL after anti-BCR+IL-21 stimulation (28).Interestingly, the dysfunction of ROS generation has been correlated with the reduction of memory B-cell compartment in chronic granulomatous disease (41).The decreased levels of ROS found in CD19 + CD27 + B cells could be detrimental, leading to their premature death.We previously reported an increased CVID basal level of Bax and Bim that correlated with low viability and high Caspase-3 activation only in CVID memory CD19 + CD27 + B cells (28).In keeping with these, we found a negative correlation between ROS production and in vitro cellular death in unstimulated CD19 + CD27 + CVID B cells that were not found in their healthy counterpart (Supplementary Figures 1Ci, ii), indicating that dysregulation of ROS balance contributes to the fact that peripheral CVID memory B cells are prompted to die from apoptosis. As previously mentioned, autophagy plays a crucial role in B-cell activation, and BCR stimulation promotes autophagy and triggers apoptosis if B cells are not co-stimulated with a second signal (16,23).CVID B cells exhibited higher basal autophagy levels in both CD19 + CD27 − and CD19 + CD27 + B subpopulations than their healthy counterparts; moreover, CD19 + CD27 + CVID B cells also showed higher autophagic flux.CVID B cells also exhibited higher autophagic flux levels than their healthy counterparts when stimulated with most of the stimuli.Hence, excessive autophagy reported in CVID memory B cells could reflect the loss of cellular homeostasis and be detrimental to their survival. PCA allowed us to link the studied metabolic experimental variables to in vitro B-cell death and B-cell deficiency.The analysis showed a different interaction of the evaluated metabolic variables in naïve CD19 + CD27 − and memory CD19 + CD27 + B cells.In naïve B cells, ROS production and autophagy clustered in the same component with a slightly positive correlation.Moreover, in vitro naïve cell death and B-cell deficiency degree variables had no relevance in the PCA model.Even though naïve CVID B cells presented an alteration in ROS levels and production that could condition their differentiation, the PCA suggested that the production of ROS and the autophagy process compensate each other, which leads to cell survival.In fact, there was no higher in vitro cell death in CVID naïve B cells compared to healthy controls (28) (Supplementary Figure 1Bi). However, we found that increased CVID memory B-cell death negatively correlated with ROS production (Supplementary Figures 1Bii, Cii).In PCA of memory B cells, ROS production, and autophagy were clearly "opposed" (they were placed in different components and formed open angles).Moreover, in vitro memory B-cell death and B-cell deficiency degree variables were relevant in the model (high loading), positively correlating with autophagy and negatively correlating with ROS production.The analysis indicates that, in this case, ROS level reduction occurs at the expense of uncontrolled autophagy that, finally, induces cell death. 
As a limitation of this work, we should mention the lack of wholeexome/genome sequencing data of our cohort of patients.It would be interesting to extend the study to a larger patient sample and try to correlate these findings with possible underlying mutations. In summary, we have found that naïve and memory healthy B cells differentially depend on mitochondrial function, ROS production, and autophagy for their integrity and function.CVID B-cell subpopulations show a loss of cellular homeostasis.An excessive autophagic flux, higher levels of dysfunctional mitochondria, and an alteration in ROS levels could affect the differentiation of CVID naïve into memory B cells while conditioning a higher susceptibility of CVID memory B cells to premature death.The final consequence is a failure in the generation of a functional B-cell compartment in CVID patients. 1 B FIGURE 1 B-cell subpopulations with dysfunctional mitochondria in healthy controls and common variable immunodeficiency (CVID) patients.(A) Dot plots of B-cell subpopulations with dysfunctional mitochondria from a representative control and CVID patient.(i) Gating strategy: viable B cells were selected based on forward and side scatter, non-expression of Live/Dead marker, and CD19 + expression.B-cell subpopulations were identified as naïve CD19 + CD27 − (green color) and memory CD19 + CD27 + (purple color).(ii) Cells with dysfunctional mitochondria were selected as MitoTracker Deep Red low and MitoTracker Green + from healthy control (left panels) and CVID patients (right panels).(B) Comparison between the percentages of naïve CD19 + CD27 − (green dots) and memory CD19 + CD27 + (purple dots) B cells with dysfunctional mitochondria in (i) healthy controls and (ii) CVID patients after 24 h of culture without or with stimulation with anti-BCR, anti-CD40, CpG-ODN, or the combinations anti-CD40+IL-21 and anti-BCR +CpG-ODN.(C) Percentages of unstimulated naïve CD19 + CD27 − (green dots) and memory CD19 + CD27 + (purple dots) B cells with dysfunctional mitochondria from healthy controls (empty dots) and CVID patients (filled dots).(B, C) Each dot represents an individual; black horizontal lines illustrate the median of the group.Mann-Whitney test p-values: **p < 0.01. 
FIGURE 2 FIGURE 2 Reactive oxygen species (ROS) in B-cell subpopulations from healthy controls and common variable immunodeficiency (CVID) patients.(A) Dot plots with histograms representing ROS production in B-cell subpopulations from a representative control and CVID patient.(i) Gating strategy: viable B cells were selected based on forward and side scatter, non-expression of SYTOX marker, and CD19 + expression.B-cell subpopulations were identified as naïve CD19 + CD27 − (green color) and memory CD19 + CD27 + (purple color).(ii) ROS production was identified as the percentage of cells positive for CellROX Deep Red probe (% CellROX + cells).Histograms show the percentage (upper right corner) of ROS-producing cells from healthy control (left histograms) and CVID patients (right histograms).(B) Comparison between the percentages of CellROX + naïve CD19 + CD27 − (green dots) and memory CD19 + CD27 + (purple dots) B cells from (i) healthy controls and (i) CVID patients after 24 h of culture without or with stimulation with anti-BCR, anti-CD40, CpG-ODN, anti-BCR+IL-21, anti-CD40+IL-21, anti-BCR+CpG-ODN, or anti-BCR+anti-CD40.(C) Percentages of unstimulated CellROX + naïve CD19 + CD27 − (green dots) and memory CD19 + CD27 + (purple dots) from healthy controls (empty dots) and CVID patients (filled dots).(B, C) Each dot represents an individual; black horizontal lines illustrate the median of the group.Mann-Whitney test p-values: ****p < 0.0001. TABLE 1 Age, gender, immunoglobulin levels, B-cell subpopulations, and classification of the CVID patient cohort.
PERFORMING ETHNOGRAPHY AND ETHNICITY
An Early Documentation of Finnish Immigrants in Nordiska museet

This article discusses the first project of the Nordic Museum in Stockholm, Sweden, dealing with immigrants. It was carried out between 1972 and 1990, and it produced material based on interviews, participant observation, photographs and other written and visual sources. The article first examines why and how this extensive research project was carried out and then discusses the documentation project as performance. The project was an early attempt to document the contemporary lives of people through fieldwork, although the original aim of this pioneering project was merely to create and preserve ethnic identity by documenting "authentic" Finnish characteristics. Thus, it is a good example of changing paradigms in ethnological research.

The Nordic Museum (Nordiska museet) in Stockholm provides a Nordic perspective on the question of performance and ethnographic praxis raised by Dwight Conquergood. In his Rethinking Ethnography (1991: 190) he asks, "What are the methodological implications of thinking about fieldwork as the collaborative performance of an enabling fiction between observer and observed, knower and known?" He also wonders how thinking about fieldwork as performance differs from thinking about fieldwork as the collection of data? As the reading of texts? He further asks how the performance model shapes the conduct of fieldwork? "The relationship with the people? The choices made in the field and the positionality of the researcher?" For a performance, there is always a starting point, one or more proto-performances. Proto-performances can be found in the performing arts, rituals and sports, but also in many occupations such as those of the lawyer, doctor and policeman. Ethnologists as professionals are not clearly distinguished by special clothes or insignia, but nevertheless they also surely perform their jobs: they too have prescribed tones of voice and professional vocabularies, and their conduct is likewise marked by the visible exercise of authority. The informants can also be seen as performers: they are given a role by their researchers, and they are observed in that role. Richard Schechner argues that identifying what is emphasized and what is omitted is important for understanding both the performance process and the social world that contains and is also shaped by particular performances. Any behaviour, event, action, or thing can be studied "as" performance (Schechner 2006: 40, 208, 226).

This article describes the Nordic Museum's first project dealing with immigrants to Sweden and discusses why and how this extensive research project was carried out.* The performers in this research were people of Finnish origin living in Sweden, as well as returnees and potential immigrants living in Finland. The researchers of the museum were interested in their informants' performance in everyday life: the informants were expected to perform "Finnishness", using a certain Finnish grammar and vocabulary - a Finnish choreography - designed by the researchers (cf. Kirshenblatt-Gimblett 1991: 397; Schechner 2006: 19). A touch of Swedish influence was also anticipated as a result of immigrant experience. I am not concerned here with traditional museum material - artefacts that were first incorporated in the museum's existing collections and later exhibited in the museum, conserved and possibly used as source materials for research or teaching (Hein 2000: 4-5). Rather, this article is concerned
with documentary collections that have resulted from interviews and participant observation.Such documents are, however, also artefacts of ethnography (Kirshenblatt-Gimblett 1991: 394). The Nordic Museum's pioneer project experimented with old and new approaches in trying to change the ethnological research agenda.However, even though the research was intended to focus on identity, ethnicity and culture, the approach was still a traditional one that focused on materiality.Materiality back then was seen more or less as cultural traits and symbolic objects, not as practice, as Maja Povrzanoviç Frykman suggests (Povrzanoviç-Frykman 2008: 18). The Migrationen finland-Sverige Project The Nordic Museum's annual report for the year 1974 states that a research project on Finnish immigrants entitled "Finland-Sweden after the Second World War" had been initiated with funds provided by the foundation Riksbankens Jubileumsfond. 1 The project was, according to the report, to be carried out in cooperation with the Department of Geography of the University of Umeå in Sweden and the Department of Ethnology of the University of Jyväskylä in Finland.Its primary objective was "to examine the assimilation and ethnic identity of Finnish immigrants, in other words the extent to which the Finns had adapted to Swedish society."Research had, in the course of the year, been carried out in Virsbo in the municipality of Surahammar and Upplands Väsby near Stockholm, in other words "in a small mill community and on the outskirts of a city" (Nordiska museet under år 1974(Nordiska museet under år 1975: 160): 160).There were at the time of the documentation thousands of Finns living in Virsbo and Upplands Väsby.The project continued the following year.According to the annual report for 1975, the ethnological part of the study was completed as the result of fieldwork and compilation carried out during that year.The fieldwork had continued in Upplands Väsby and had begun in the Finnish districts of Karstula, Närpes (Fin.Närpiö), Nokia and Borgå (Fin.Porvoo).Finnish-speaking Karstula and Swedish-speaking Närpes had been chosen because emigration from these localities to Sweden in the 1950s and 1960s had been particularly marked.Finnish-speaking Nokia and Swedish-speaking Borgå had in turn been chosen because their populations included many returnees from Sweden.In Karstula and Närpes, the project sought to determine the factors influencing the decision of emigrants to return, and to obtain interview material concerning the time before the decision to emigrate was made.The aim was then to monitor the history of the migrant families for the next five to ten years.In the case of the returnees, the project was interested in the Finns' assimilation into Swedish society and their re-adjustment to a Finnish environment.The researchers conducting the interviews published surveys of the research localities.The collection of material was also expanded to take in Finnish children who had been evacuated to Sweden during the Second World War and Finns who migrated to the province of Värmland in Sweden.The experiences of these evacuees might, it was thought, be of significance in subsequent decisions to emigrate.Värmland was chosen because the researchers were interested in whether the Finns' identity differed in regions where there had been Finnish settlements for centuries.The publication of the research results also continued.In 1990, 18 years after the funding application had been submitted, the final report, När finländarna kom [When 
the Finns Came], was published.It combined both ethnological and geo- graphical perspectives, with an emphasis on the latter (Häggström, Borgegård & Rosengren 1990). Why Was the Material Collected? In 1975, Sweden changed its official immigration policy from one of assimilation to one of integration.Whereas the policy had previously aimed to assimilate immigrants into Swedish society, it now sought to permit -and even encourage -immigrants to preserve aspects of their prior culture.To what extent the change in immigration policy affected the implementation of the Nordic Museum's project is not known, but it may be assumed that the ongoing socio-political debate had a positive effect on the decision to provide funding.After all, the ethnologists in the museum's employ would make excellent detectives in ascertaining the special characteristics of Sweden's immigrant groups -characteristics that were possibly worth encouraging. In an article published in Fataburen in 1972, Göran Rosander, the initiator and leader of the project, pointed out that in the collections of the Nordic Museum there were no artefacts belonging to the Roma or to the immigrant labourers arriving in Sweden after the Second World War from Finland, Hungary, Yugoslavia, Greece and Turkey.According to Rosander, these groups were just as much members of the Swedish people (Swe.svenska folket) as the Swedes themselves, and their lives needed to be documented.He stated, "Museum pieces derived from the latter group should concentrate on festive customs, clothing, religion and perhaps toys and household effects" (Rosander 1972: 166).Since Finns were by far the biggest immigrant group in Sweden and constituted a group that could be characterised as a minority, 2 it was natural to begin the documentation with them.Nonetheless, according to Barbro Klein, it was not until the mid-1990s that the Swedish government enjoined all cultural institutions, including those in what is now known as the heritage sector, to take into consideration the fact that the country was now "multicultural".The Agenda Cultural Heritage programme, as the government called its project, was then expanded to embrace an idea of cultural diversity that included gender, generation, social class, disability and sexual orientation in addition to ethnic diversity.The broadened notions of "cultural diversity" and "cultural heritage" have become official ideologies and governmental responsibilities, and perhaps also bridges to integration (Klein 2006: 9). 
Government identity policy was, however, not the only reason for the Nordic Museum's extensive project. The collection of material on Finnish immigrants would at the same time address the challenge of investigating contemporary life. Whereas the main emphasis in museum documentation and acquisition had previously been on the past, the focus was now on the present day. When the Nordic Museum arranged a conference under the heading The Possibilities of Charting Modern Life in 1967, however, no consensus had been reached even on the definition of "modern life". For some it meant the 1870s, for others the 1960s (Silvén 2004: 152). There was severe pressure to expand museum documentation towards the present day. Almost next door, the ethnologists of the University of Stockholm had already partly changed their focus: Knut Weibust had conducted fieldwork in Portugal for his maritime ethnological study entitled The Crew as a Social System in 1958, and Mats Rehnberg had shifted the focus to rather more contemporary times in his 1965 Ph.D. dissertation on lighting candles on graves (Daun 1993: 333-334). Åke Daun's study (1969) dealing with the closure of a sawmill in Båtskärsnäs in northern Sweden close to the Finnish border broke new ground and in a way started a new era in Nordic ethnological research (Löfgren 1996: 54). The actual paradigm shift took place in the 1970s, when altogether three Ph.D. dissertations in ethnology focusing on modern life in some form were submitted in Sweden, among them Åke Daun's study dealing with a suburb (1974).

The documentation of modern life at the Nordic Museum began with Samdok (a system of documentation of contemporary society) slightly later, officially in 1977. The above-mentioned Göran Rosander was again a key figure (Rosander 1980). In 1972, in the same article in which he proposed his idea to document the everyday life of post-war immigrants, he pointed out several shortcomings in the Nordic Museum's collection policy (Rosander 1972: 166). In that article, Rosander gives credit to Professor John Granlund, who a few years earlier had outlined his programme for documenting contemporary society by establishing research stations in different locations in Sweden where ethnology students would carry out fieldwork using modern anthropological methods. Granlund ends his article by saying: "It is time to update 1800s ethnology classics and their research questions. How far were they just pseudo problems in functional formulations? To what extent did they become open problems to which research should be directed? We have a responsibility to this research continuity" (Granlund 1967: 255).
It is worth remembering, however, that the Nordic Museum had actually been sending out agents, equipped with notebooks and cameras, to document Swedish life in the country ever since the 1930s, and that contemporary life had also been recorded by the Nordic Museum on a small scale since the 1950s (Nilsson 1999: 98;Rosengren 2006: 104-105).Thus, the most important point about the change was not the period of time examined but what was studied.Eva Silvén crystallizes the idea behind the change when she writes that the aim of ethnological research oriented towards modern life was to foster an understanding of the times in which we live.The objects of ethnological research were no longer flails, folk costumes, watermills or tools for slash and burn, but people and society.The new winds of change blowing from the anthropological research communities were accompanied by new practices in museum archiving.Material obtained in the field was no longer chopped up thematically and topographically; instead, from 1965 onwards, it was examined holistically.The goal of museum acquisition was very clear: "The aim was no longer objective description but the way the people themselves understood and defined their own reality, and how society looked from their perspective" (Silvén 2004: 156-160, 181).Instead of mountains of artefacts, museums wanted narratives and photographs.Alongside the artefact-oriented museum there now emerged the narrative-oriented museum (Hein 2000: 7). A couple of decades later, Annette Rosengren, one of the interviewers on the Migrationen Finland-Sverige project, was, however, more critical of the way the migration project had been conducted.According to her, the most important thing was not so much analysis of the material but getting it ordered and written up in the archive.The purpose of the interviews and photographs was, she said, rather to create a context for the museum's artefact collections.The primary objective of the interviews was to counterbalance the superficial picture of contemporary society given by the media, which tended to seek out its unusual aspects (Rosengren 2006: 105). How was the Material Collected? The ethnographical project of the Nordic Museum consisted mainly of interviews conducted in people's homes and participant observation (Rosengren 2006: 105).Officials were also interviewed, and various events were documented.The project did not even include a collection of objects.I have not yet come across any research plan or even any interview forms, so any conclusions as to the questions asked can be drawn solely from the material collected.In short, we know the interviewees' replies but not the questions.There are plenty of answers: the Nordic Museum's archives contain 60 folders of written-up interviews, photographs, ground and layout plans, brochures, press cuttings, school essays and other material. 
3 The interview transcripts, in accordance with 1970s practice, rely mostly on hand-written notes and were not taped.After the interview, the interviewer wrote a report and a transcription of his/her notes (Tyrfelt 1977: 2).The interviews were generally conducted in the interviewee's mother tongue, that is, either Finnish or Swedish.The Finnish transcriptions have been translated into Swedish.Birger Grape, a speaker of Meänkieli, a dialect of Finnish spoken along Sweden's northern border with Finland, conducted his interviews in Finnish and made transcriptions into Swedish.He best understood the nuances of the interview language and thus frequently added the Finnish expression in brackets after the Swedish translation.He also stresses in his interview report that the interviewee was, if necessary, asked to repeat the same thing several times so that something particularly important could be recorded word for word on paper.Among the interview transcriptions made by Grape in Virsbo are some questions translated into Finnish to which a specific reply was sought in the interview. The interview reports describe in detail how the interview was arranged and in what conditions it was conducted.Reading them enables us to step into the shoes of the museum researcher of the mid-1970s, to see how she or he performed his or her role as an ethnologist with a new research agenda.We can read how Birger Grape, for example, rang a doorbell in Upplands Väsby on December 10, 1974, and the door was opened by the family's 10-year-old daughter.She fetched her mother, who, according to the interview report, gave a friendly smile as she said hello; she is reported as having curlers in her hair.There on the doorstep an interview was fixed for a day in the near future.In his reports, Birger Grape repeatedly complains that it is difficult to ask personal questions right at the start of an interview.Documenting modern life in a brand-new suburb was a new world for the museum researcher. An account by Annette Rosengren gives us some inkling of the interview context and the interviewer's attempt to describe the world of the interviewee -a Finnish immigrant in Sweden -as precisely as possible: On Wednesday October 9, 1974, I came to conduct a second interview.It was a little past 7 when I arrived, and I apologised for being late.It came out that they had been expecting me at 6 and wondered why I had not come.There had been some misunderstanding.But everything was OK, and we were able to continue our interview where we left off the previous Friday.The evening was a repeat of the previous one, and at the end we had coffee and home-baked bread.We had fun, and I took a few photos.During the evening, Alfons [the interviewee] went out to buy the evening paper.He put on a jacket that was on the rack for outdoor clothes in the hall, a blue track-suit top made in Finland.His brown trousers were made in Finland. 
The people for the interviews conducted in Finland were chosen because they had either lived in Sweden at some point, were intending to emigrate there or were of a suitable age for emigration. They had been found with the help of the parish office and employment office officials or by the snowball method. The names of people and places in the transcriptions made by Swedish researchers in Finland are accurately recorded; at least there are no obvious errors in the way they are written. The interviews were autobiographical and covered the same themes with each interviewee. The interviews made in Sweden were equally precise and observed the same research ethical code as today: the interviewees were assured that the material would be used solely for research purposes, that their anonymity was guaranteed and that no photographs would be taken without permission. The interviewers also debated questions of research ethics in their reports. This often amounted to no more than a note that the interviewee was given a packet of coffee by way of thanks after the interview and sent a Christmas card and a free ticket for the Nordic Museum - in other words, the project did not wish simply to use people; it also contacted them later on. In some instances, however, the interviewers have reflected on the significance of the information obtained and on problems of preserving anonymity. Although there were hundreds of Finns living in the research localities, their networks were very dense (cf. Gradén, this volume), so maintaining confidentiality was a challenge.

In Upplands Väsby, the interviews were made in collaboration with the geographers who were involved in the project. People were selected from the geographers' material to give ethnological interviews. Some of the interviewees were found by the snowball method. In both Upplands Väsby and Virsbo, those chosen for interviews were people who had moved to Sweden either between 1958 and 1963 or between 1970 and 1972. In addition to interviewees of Finnish origin, native Swedes were interviewed in Virsbo. The latter were designated as the "control group". The aim of interviewing them was probably to filter out "Swedish" traits in order to find the truly "Finnish" characteristics in the material. In Värmland, the Nordic Museum researchers sought interviewees who had moved to Sweden immediately after the Second World War, in the period between 1945 and 1955.

In addition to the interviews, the material includes photos of the interviewees' homes and living environments. There are also ground plans of the homes with inventories of furniture and in many cases layout plans. Both verbal and visual descriptions were made of the research localities. There are also some surprises, such as some fine photos of the former lockup at Lovisa police station. This documentation well reflects the fieldwork environment: a researcher from the Nordic Museum called on the Lovisa authorities in order to find the names of people to interview. On hearing that the Swedish visitor worked at a museum, the police wanted to show him their lockup. The researcher was very keen to photograph it, even though it did not directly tie in with the subject of the project.
The Swedish researchers acted according to the same logic elsewhere as well: they documented everything they could.This obsessive approach to fieldwork is similar to Konstantin Stanislavski's method acting, where the behaviour onstage is based on ordinary life, and the actor "disappears into the role" (Schechner 2006: 176, 179).In Virsbo, for example, the researchers noticed people milling around a kiosk in the evenings, and they asked the kiosk keeper how the growing number of Finns was reflected in the everyday life of the kiosk.The everyday lives of Virsbo people -in the bank, post office, local shop, library, restaurant, school, factory, church, dance hall, sports contests, the Finns' Mother's Day celebrations and trade union meetings -were also recorded in photos.Not even the sauna, washroom and changing room were out of bounds to the photographer.Orvar Löfgren has described this passion for documenting as follows: "Sometimes, while gazing out of the window of a train and seeing some functionalist villa or cottage flash past in the landscape, I remember being fascinated by the idea of knocking on the door and getting yet another new perspective on Swedish everyday life" (Löfgren 1996: 53).We can then read in the memoirs of Åke Daun how he became so immersed in life at Båtskärnäs that he even changed his outward appearance.Snuff was the only thing he drew the line at (Daun 2003: 83). Early ethnological studies of both workers and immigrant communities dealt mainly with men; in this project, too, men were the norm and women exceptions.Couples were interviewed together, but the husband was always entered as the main interviewee.This even applied to cases where the wife had more to say than her husband.Children were not interviewed, but they would sometimes act as interpreters in interviews.In the case of women and children, the picture of Finns in Sweden is fortunately made clearer by photographs.There are, for example, some photos in the material taken in Upplands Väsby in late autumn 1974.The seven-storey blocks of flats in the centre of Upplands Väsby that nowadays dominate the landscape, and which at the time were the homes of many Finns, had just been built when the photos were taken.The photos taken in the yards, playgrounds and car parks supplement those taken in the homes.The documentation of Virsbo Bruk, the biggest employer of Finns in Virsbo, in turn shows women employed in the metal works.Dressed in overalls, they differ little from the men to look at, but the title Fru (Mrs) in the photo captions indicates that they were women.At least in the photos, men and women did similar work, married couples often working side by side. 
"The family has no National Costume or Knife" Richard Schechner argues: "One asks performance questions of events: How is an event deployed in space and disclosed in time?What special clothes or objects are put to use?What roles are played and how are these different, if at all, from who the performers usually are?How are the events controlled, distributed, received and evaluated?"(Schechner 2006: 49).It could be argued that in the fieldwork described above there were actually various layers of performance, intended or not, taking place within the fieldwork encounter.The ethnologists performed their ethnological function, and they expected their interviewees to perform certain "Finnicisms".We can see from the interviews done in Sweden that the ethnologists tried to bring out any particularly Finnish traits of the Finns living in Sweden.This is clearly evident from a statement that is common in the transcriptions: "The family has no national costume or puukko [sheath knife]."One person was offended at being asked about the tango, a particu-larly popular dance in Finland.She said she was sick of the Swedes saying, when she mentioned she was going to Finland for a visit, that of course she would be listening to tangos; next, no doubt, they would start talking about knives.The returnee migrants in Finland, for their part, were asked whether they had learned any new food customs in Sweden.For example, a vocational student who had previously lived in Sweden was asked whether he had learnt any new recipes while living in Sweden.At least, this may be deduced from the relatively laconic note: "Veli-Matti has not learnt any new recipes in Sweden."On another level of performance, it is noteworthy that the interviewers also paid special attention to the home interiors.If there was a täkänä (a woven wall cloth) or a rya rug hanging on the wall of the living room, this was mentioned as a special Finnish feature.Spinning wheels, horse collars, churns and flails brought from Finland and used to decorate the home were carefully photographed.The researchers also picked on various symbols of Finnishness, Finnish design (for example, vases and tableware), the Finnish flag and blue-and-white (the colours of the flag) in general.Interviewees were asked which of the items in their homes were from Finland.If an item did not look particularly Finnish, this was mentioned.For example, three blue china plates on the dining room wall of one family received the verdict: "Do not look particularly Finnish."For some reason, the researchers were always eager to report it if the interviewee had a copper coffee pot as an ornament; these seemed to be common.It is not known whether the museum was perhaps planning to acquire some copper pots or was simply seeking links with an agrarian background.While interviewing a woman who had been in Sweden as an evacuee during the war, Annette Rosengren almost apologized for categorizing the interviewee as Finnish in her research report because the woman spoke Swedish with no Finnish accent whatsoever. 
Finnish immigrants in Sweden also voluntarily performed "Finnishness" on certain occasions by wearing national costumes on formal occasions and happily showing off their ethnic textiles.There is, for example, one photo in the material taken at a Finnish Culture Day event held in 1974 for which the caption reads: "Finnish girls in national costume acting as ushers in the foyer."Handicrafts constituted a special category of their own in the competitions held on Finnish Culture Days.A photo taken of this section shows both rya rugs and täkänä wall cloths -and a Finnish woman dressed in national costume displaying textiles.Judging from a photo taken of the shop window, the local store sold Iittala vases and sauna requisites.The performance of "Finnishness" culminated in the evening: the caption of one photo taken at a Culture Day dance says that no one was very drunk and there were no fights -both drunkenness and aggression being stereotypically associated with Finns.It then goes on to say that some shouting in Finnish could nevertheless be heard in the course of the evening. Birthdays, anniversaries, marriages, funerals and the like were of particular interest to the researchers.The Finnish interviewees living in Sweden were asked where they wished to be buried.Other church customs also interested the interviewers.Did the interviewee go to church?Did their children go to Sunday school?And what sort of Bible did the family possess, the Finnish or the Swedish version?The interviewers also asked about citizenship, and especially whether the male interviewees had changed citizenship in order to avoid Finnish military service.Detailed questions were asked about how the annual festivals were celebrated.With regard to May Day, the researchers were especially interested to know whether the interviewees customarily lit bonfires.No questions were asked about the workers' May Day, though the custom of celebrating this day as a festival of the workers came out in the replies.Under the heading of "folklore", the interviewees were asked about Finnish sisu (meaning grit or determination), heavy drinking and violence.They were also asked about co-habitation and homosexuality under this rubric.It was assumed that Swedes were more liberal than Finns. A Peep into the Archives Fieldwork material reflects the life of the researchers just as much as that of the community they study.I now wish to create an overall picture of the corpus as a whole: I have read every third set of interview materials in each of the folders to be found in the archive of the Nordic Museum in autumn 2006. 4The material is so vast, amounting to dozens and even hundreds of interviews, that rather than being a cross section the survey barely scratches the surface. The interviewer usually began by briefly running through the interviewee's life history before going on to ask questions about housing, education, emigration to Sweden, plans to stay in Sweden, language skills, leisure activities and hobbies, annual festivals, food and stimulants.The majority of the questions are about annual festivals and food: bread, pies, casseroles, oven-cooked dishes, soups, meat, sausage and fish dishes, blood foods, cheeses, porridges, gruels, various types of flour, beverages with meals, vegetables, fruit, mushrooms, spices, cakes and buns.The same terms recur from one interview to another, especially in the case of annual festivals and food. 
Annette Rosengren, who was one of the researchers from the Nordic Museum doing fieldwork in western Finland, had met a shop assistant aged about 30 in the shop one morning, and she agreed to an interview. The shop assistant had evidently mentioned in the course of the conversation that her excavator-driver husband had spent five months working in Stockholm when he came out of the army. The couple were interviewed together. Keywords have been added in the margins of the transcription, again in accordance with ethnological tradition in the early 1970s. These words referred to the following categories: biography + occupation, dwelling, children and marriage, plans for the future, language, social network, special customs, Christmas, Christmas parties, St. Lucia's Day, New Year, Shrove Tuesday, våffeldagen (Waffle Day), Easter, May Day, Mother's Day, Whitsunday, Ascension Day, Midsummer, All Saints' Day, Independence Day, birthday, name day, wedding day, leisure time, holiday trips, summer cottage, Sundays, reading habits, courses, societies, firewood, berries and mushrooms, sport, hobbies and dances, the cinema, the theatre, restaurants and bingo.

A Finnish ethnology student interviewed a 25-year-old man working at a paper mill in southern Finland whose name had been obtained from the employment office. The young man's working career was probably typical of the early 1970s: compulsory military service, work at a paper mill in Sweden, back to Finland after a couple of years, back to Sweden and another paper mill after a year, then to the Saab-Scania automotive works in another Swedish location the following year, a rubber factory in Finland the year after that, then after a couple of years there, a paper mill in another town in Finland, and from there fairly quickly back to live in the town where he was born and where he had started his journey.

Annika Tyrfelt from the Nordic Museum interviewed a couple of which the husband had been born in southern Finland and the wife in Ingria, a region surrounding St. Petersburg. Both were born just before the outbreak of the Winter War between Finland and Russia in 1939. The husband was an electric fitter and the wife a housewife. They had just moved back to Finland after nearly ten years spent in Sweden. The man had been an evacuee in Sweden during the war, so the decision to immigrate to Sweden later had, he said, been easy. The reason why they had come back to Finland was that after 1972 it was not possible for foreigners to receive bank loans in Sweden. They consequently had to give up their dream of buying a house of their own, and they moved back to Finland. The reason for their return emerged when the interviewer asked whether they dreamt of buying a summer cottage. The man replied that "only the bosses can afford a summer cottage; workers dream of a house of their own" and told her the reason for their return. At the time of the interview, they were living in a flat, and the interviewer did not go back to the house theme - possibly because there was no question about this on the interview form and the interviewers felt that they had to stick to the script (Schechner 2006: 145).
With the wisdom of hindsight, we might say that in this case today's ethnologist would have thrown the interview form away and let the interviewee speak, thereby allowing for a more in-depth analysis of the connection between being an evacuee and the decision to emigrate, and of the position of workers and Finns in Sweden.Instead, the 1970s ethnologist stated in her report that she was sorry the interviewee had wandered from the topic and that, when she prepared to photograph the home and draw a ground plan of it, the interviewees had picked the children's toys up off the living room floor despite her request not to.On the other hand, the interviews did, after all, take place in the homes of the interviewees, and even in their role as informants, they had the right to perform in the way they wanted to.Having a messy living room was not something they wanted to perform. When interviewing return migrants, the interviewees were still first asked to give a brief life history before going on to questions about family, language, emigration to Sweden, return to Finland, social networks in both Sweden and Finland, reading habits, purchase of a car, other consumer goods bought in Sweden, differences between Finland and Sweden in interior-decoration styles, annual festivals, food, differences and expectations in both Finland and Sweden, and leisure activities.The last of these topics covered questions about holiday trips abroad, summer cottages, entertainment, socializing with friends and television. The Nordic Museum has seven folders of interview materials from Upplands Väsby, amounting to dozens of interviews.Birger Grape from the Nordic Museum interviewed a man of about 30 and his wife, who was five years younger than him, in Upplands Väsby in December 1974.The man had just been on a course for caretakers but was unemployed at the time of the interview.The wife was a day nursery supervisor.They had two children.The couple had moved to Sweden in 1970, and had lived in two towns before moving to Upplands Väsby a year ago.The keywords in the interview transcription again summarize the course of the interview: context, people, environment, dress, annual festivals, food, contacts with Finland, leisure, study and culture, symbols, contrasts, folklore, opinions and values.The interviewer contacted the couple again a year or two later.He phoned them at home and asked them whether they had made use of their right to vote in Sweden's municipal elections on September 19, 1976.This was a new right, which the interviewees had used.Other interviewees were asked the same thing; doubtless, the museum was already at this stage keen to answer the call of integration policy. 
In October 1974, Annette Rosengren interviewed a family of Swedish-speaking Finns living in a terraced house in Upplands Väsby: a fitter of about 40, his wife of about the same age employed as an evening supervisor, and their two school-aged sons.The couple had migrated to Sweden in 1961.Some years earlier, the wife's brother, both parents and her sister and family had migrated to the same town along with many other Swedish-speaking Finns from Ostrobothnia in western Finland.The "Ostrobothnian traits" are marked in the material; the family's circle of acquaintances consisted of Ostrobothnians in their new home municipality, Upplands Väsby.They attributed many of their habits, such as stinginess and reserve, to the fact that they were "Ostrobothnian".They had had nothing to do with Finnish-speakers, possibly because they could not speak Finnish.Nor did they much like talking to Swedes because they spoke a different dialect of Swedish. The following year, in November 1975, Annette Rosengren interviewed a Finnish-born auxiliary nurse living in Värmland, who had been in Sweden for over 20 years, since she was 13.The keywords in the transcription are familiar from the interviews made in Virsbo and Upplands Väsby.There is, however, one difference: the interviewee was asked what her attitude was to "the Forest Finns", 5 and whether she in fact knew anything about them.The interviewee, oblivious to the meaning of the term, replied that she had even cared for elderly Finns from Finnish villages! Creating and Preserving an Identity A fish only notices it is a fish when a fisherman lifts it out of the water, says the Norwegian anthropologist Thomas Hylland Eriksen in his book Rötter och Fötter (Eriksen 2004: 95).By this he means that people often discover or become aware of their ethnic identity only when they feel that their status is insecure or even threatened.Ethnic identity is created and expressed in various situations when people belonging to different groups come together.Fredrik Barth and his school demonstrated in the 1960s that rifts between groups derive not so much from cultural differences as from the assumption that such differences exist.Ethnic identities are created through discourse about differences with both insiders and outsiders, and identities become crystallized as fixed antitheses where once there were just grey zones and nebulous transitions, says Thomas Hylland Eriksen in a discussion of the research of his fellow Norwegian scholars (Eriksen 2004: 86). 
The Migrationen Finland-Sverige research project of the Nordic Museum was like a colouring book in which the aim was to identify the grey areas of Finnishness and tinge them blue and white, the colours of the Finnish flag. In order to penetrate to the heart of the Finnish ethos, the researchers on the project interviewed Finnish-speaking and Swedish-speaking Finnish immigrants in a suburb of Stockholm and in a small factory community in Central Sweden, returnees from both Finnish and Swedish language groups, war-time evacuees and Finnish immigrants in an area with a long tradition of Finnish settlement. In order to bring out contrasts they also interviewed native Swedes living in the same areas and Finns who had no experience of living in Sweden and certainly no intention of emigrating there. The interviews concentrated on annual festivals and food customs. These were areas that most clearly revealed group boundaries and differences between Finns and Swedes. This objective was made very clear in a paper read by Göran Rosander, the leader of the project, at the seminar The Documentation of Immigrant Cultures held at the Nordic Museum in 1979. According to him, the aim of the museum's documentation of the lives of immigrants was to create and preserve an ethnic identity (Magnusson 2006: 134-135).

Even though there were attempts to follow new trends of ethnology in the project, there was much of the "old ethnology" in it. Finnish culture was seen as if it was in a box, as a collection of characteristics, customs and objects. Ethnicity, culture and national identity were treated almost as something people were born with. The traditional approach had traces of the "Cartographic Method" that Tim Tangherlini discusses in his article elsewhere in this volume of Ethnologia Europaea. Yet the project acknowledged the fact that a new Sweden was emerging and that museums and ethnology as a discipline had to face the challenge it presented. In fact, it was not only a question of a new Sweden, but of a new Nordic Space of migration and hybrid cultures as well. The new Nordic landscape of migration, adaptation and integration was documented like the old one, and the new and old paradigms co-existed in the project.

The project did not include a collection of objects or plans for an exhibition, just texts, drawings and photographs. Nevertheless, the material is so extensive that it alone would serve as the basis for almost any museum exhibition focusing on Finland in the 1970s. For Swedish researchers, it is a peephole into the physical and social environment of the early 1970s. This is undoubtedly one reason why the folders are neatly archived for researchers to use.

A number of research reports on the material have been published, but as far as I know, none of them has ever covered the entire corpus. To some extent, the reasons are no doubt connected with the ethics of research: the interviewees can still be recognized from their photos and narratives. In other respects, however, the material does not, to my mind, pose any ethical problems: the Nordic Museum has not greedily appropriated any material that really belongs elsewhere or the presentation of which would be unethical. Nor can the material ever, for reasons of research ethics, be made openly accessible to all (see Henning 2006: 151-152). Should the material be widely disseminated, the promise of anonymity might cause distress to people for whom the figures in the photos are not just women, men and children but friends and even loved ones, complete with their names and personal histories (see Clifford 1991: 120-232). The contemporary urban Finn might further be annoyed or amused by the way the "typical" or "authentic" Finn is presented, the man with his sheath knife and the woman at her spinning wheel (cf. Bendix 1997: 7; Lionnet 2004: 93). However, the Finnish interviewees do not come across in the material as comic figures any more than their Swedish interviewers - if anything, rather the contrary is true.
A similar project today would begin with a different premise.The Finnish interviewee now living in Sweden would not be asked whether he possessed a sheath knife or a national costume, because the questions would now be directed more at processes, cultural encounters and the construction of ethnic identity and the virtual community.Today's ethnologist does not understand ethnicity as something people have but rather as something they do (Pripp 2002: 20).In the Migrationen Finland-Sverige project, the elements that determined the Finnish immigrant identity were chosen by the museum researchers, not by the Finnish immigrants themselves.Had the interviewees been given an opportunity to talk freely about their everyday lives, about what they did, then the picture of the typical Finnish immigrant in Sweden would undoubtedly have been different.The fact that the data would -of course -nowadays be collected using a different research strategy does not to my mind lessen the value of the extensive material in the Nordic Museum's archive.The greatest merit of a large corpus of material is ultimately the unique information contained in the folders.Once the information supplied by the interview reports and transcriptions has been combined with the photos taken in the interviewees' homes and at work together with the ground plans and detailed furniture inventories, we almost have the feeling that we have been personally sitting in an interviewee's living room asking questions after an exhausting bike ride.People we have never met grow familiar as we read the interviews.Here are facts, events large and small: human fates at turning points in history and decisions taken at different stages of people's lives, all together. Richard Schechner states that anything and everything can be studied "as" performance (Schechner 2006: 1).This article studies performance in two ways, as performing ethnicity and performing ethnography.Those who performed ethnicity were Finnish immigrants who were living in Sweden or who had returned to Finland after living there for some years.Why were they willing to perform Finnishness even though they sometimes were critical and felt like animals in a zoo, supposed to act in a certain way?The interviewees were promised that their interviews and photos would be preserved for future generations.It appears from the transcriptions that this was important to many of the interviewees; they wanted the Finns to have a visible place in the history of Sweden (Grele 2005: 44) and thus they were willing to "perform Finnishness".Traditions are known to be important to immigrants everywhere in the world: repeated expressions and performances, images of the past are "stored" in bodily memories such as gestures, lullabies and food traditions, and passed on to following generations (Klein 2006: 10), and in this case, to museum archives. 
Richard Schechner also writes that performances are actions, and behaviour is the object of performance studies (Schechner 2006: 1). This article has shown that it is fruitful to analyze ethnologists as performers of ethnography. Because every possible detail was documented in the fieldwork notes, one can actually follow the actions of the researchers quite well as they described what they did and also what they were thinking. Schechner also argues (2006: 30) that performances exist only as actions, interactions, and relationships. Several phases in the performance of ethnography have been traversed in producing this article: the first phase took place when the ethnologists of the 1970s conducted their fieldwork; the next phase when the museum curators decided on and carried out the filing of the material in the archives of the Nordic Museum; and the last when I, as an ethnologist, myself performed ethnography and analyzed the fieldwork material produced by my colleagues from the past. "The struggle to write history, to represent events, is an ongoing performative process full of opinion and other subjectivities", concludes Richard Schechner (2006: 257).

Notes

* This article is a part of my Academy of Finland-funded projects Happy Days? The Everyday Life and Nostalgia of the Extended 1950s (2004-2007) and Dimensions of Sway: The Meaning of Social Networks for Finnish Immigrants in Sweden (project number 137923).

1 Riksbankens Jubileumsfond is a Swedish foundation with the goal of promoting and supporting research in the Humanities and Social Sciences.

2 Finnish has been defined as a minority language in Sweden since 2000. See SOU 2005:40.

3 From here on, the material referred to is Migrationen Finland-Sverige, sign. KU 10583, located in the archive of the Nordic Museum unless otherwise stated.

4 Folders 6, 16, 21, 25, 27, 29, 42, 49, 52, 54 and 55 could not be found in the archives in October 2010.

5 The "Forest Finns" is the name given to Finnish immigrants to Sweden and Norway mainly in the late sixteenth and early seventeenth centuries. Many of them settled in the province of Värmland.
2019-05-09T13:12:08.213Z
2010-07-01T00:00:00.000
{ "year": 2010, "sha1": "827f12d82af421e7cd16fda2a527cb0ceb991a17", "oa_license": "CCBY", "oa_url": "https://doi.org/10.16995/ee.1068", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "795c8771680075c6b5512aa60809b9ddd351fdba", "s2fieldsofstudy": [ "History", "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
214582007
pes2o/s2orc
v3-fos-license
The everyday lives of in- and outpatients when beginning therapy: The importance of values-consistent behavior

Background/Objective The manifestation of functional impairment in patients' daily lives and its interference with things they value is poorly understood. If values are compromised in patients, as theory suggests, social contexts (and the lack thereof) are especially important, though this is currently unexplored. We therefore examined whether daily values-consistent behavior was associated with the importance of a value and whether it involved social or non-social activity. Method Using Event Sampling Methodology, we examined daily values-consistent behavior in 57 transdiagnostic inpatients and 43 transdiagnostic outpatients at the beginning of treatment. Patients' values-consistent behavior, its importance, and its (social vs non-social) context were sampled six times per day during a one-week intensive longitudinal examination. Results Across both groups, the probability of subsequent values-consistent behavior increased if (1) it was judged as more important by the patient or (2) it was embedded in a social context. The probability of reporting values-consistent behavior was higher for outpatients than inpatients. Conclusions Clinicians are encouraged to examine the values of their patients more closely and to especially monitor important and/or social values. Incorporating these into clinical work might increase patients' values-consistent behavior, which can play a role in reducing suffering.

One criterion common to all DSM categories is that symptoms must cause a clinically significant impairment in functioning (American Psychiatric Association, 2000). However, functioning tends to be measured on an abstracted level (e.g., through assessing general working ability or satisfaction with working capacity; Trompenaars, Masthoff, Van Heck, Hodiamont, & De Vries, 2005). Information about how daily routines are implemented or stymied is usually measured retrospectively, while information assessed in a real-time fashion in participants' natural environment is largely missing. As a result, little systematic knowledge exists about the daily lives of patients as they present for treatment (Wersebe, Lieb, Meyer, Hofer, & Gloster, 2018). Patients' everyday lives are assumed to be distinguishable from those of individuals without a diagnosis. The omnipresence of impairment in functioning across all DSM categories merits investigating a broad swath of diagnoses. For example, patients diagnosed with obsessive-compulsive disorder spend a substantial amount of time engaging in obsessions and compulsions (e.g., hand washing, ordering, checking), and patients diagnosed with depression often feel worthless or guilty, which contributes to impairment in social, occupational, or other important areas of functioning (Kupferberg, Bicks, & Hasler, 2016). Another example is patients diagnosed with agoraphobia, who avoid places or situations from which escape might be difficult or embarrassing or in which help may not be available (American Psychiatric Association, 2000), thereby restricting their travel possibilities. Whereas symptoms capture part of the impairment, they do not inform about factors that exacerbate the functional impairment, nor do they indicate when and how patients are able to successfully navigate daily life. Investigating patients' everyday life also has clinical implications.
Daily life is impacted by adverse life events (such as the death of a loved one or romantic breakups), which have been related to more depressive symptoms (Keller & Nesse, 2006). For example, a divorce can lead to social bonds being lost. Loss of social bonds, in turn, affects daily life and, in more severe cases, also daily functioning (Keller & Nesse, 2006). Therefore, regardless of whether stressors occur daily or as major life events, actively engaging in values may have a pivotal effect on subsequent suffering. However, perceiving something as important and acting or behaving in the direction of that value are two different things. In order to properly assess such behaviors, it is important to capture both the activities patients value and whether they actually engage in such activities. Behaviors that are connected to goals and values are positively associated with social functioning (McCracken, Chilcot, & Norton, 2015). In patients there is an observable discrepancy between values and behavior (Čolić et al., 2020; Hoyer, Colić, Grübler, & Gloster, 2019). In the Acceptance and Commitment Therapy (ACT) literature, such a discrepancy has been shown to contribute to lower levels of well-being (Gloster et al., 2015; Hayes, Luoma, Bond, Masuda, & Lillis, 2006). Increasing values-consistent behavior (i.e., behavior that is consistent with one's values) precedes reductions in suffering in outpatients with panic disorder. However, which factors are associated with increased behavior connected to goals and values remains an open question. Current instruments attempting to capture the congruence between values and behavior correspond to a very specific time point in life (Ivanoff, Jang, Smyth, & Linehan, 1994), or collect data in a retrospective fashion (Wilson, Sandoz, Kitchens, & Roberts, 2010). This raises concerns regarding biases introduced by retrospective recall, while the question of what is important to patients in their everyday life, and whether there is a difference between in- and outpatients, remains open.

When investigating patients' daily lives, it is important to capture the context in which they are acting. One of the most important contexts for humans is the social context (e.g., with a close friend or family member, in a group of strangers, alone, etc.; e.g., Rubin & Stuart, 2018). The social context is important for our health and well-being. For instance, social interaction had a motivating effect on participants, who were then more likely to continue exercising (Nielsen et al., 2014). The social context is especially important to examine in inpatient treatment, as it likely differs from outpatient treatment. Inpatients usually stay in the hospital for at least one night, are more dependent on nursing care (Campos Andrade, Lima, Pereira, Fornara, & Bonaiuto, 2013), and are potentially in contact with fellow patients. Outpatients are less dependent on medical and nursing care, have less contact with it, and spend less time in the health care setting. A hospital's social environment likely has different relevance for inpatients and outpatients (Campos Andrade et al., 2013). It is thus essential to consider the treatment setting to account for the differing social contexts the patients are in. While all patients already live in a particular daily social context, inpatients in particular may form a new social context specific to their treatment, whereas outpatients largely remain in the social context of their daily life.
More research is needed to better understand the mechanisms that influence a patient and their social context. To answer the questions of what in- and outpatients value in their everyday life, what significance daily social interactions have, and what increases the probability that things people value translate into actual values-consistent behavior, it is necessary to understand patients' behavior in their natural environment, as opposed to in the laboratory or by asking them to think back across several months and estimate an average (Myin-Germeys et al., 2018). Event Sampling Methodology (ESM) allows precisely this examination. The present paper's aim is to investigate the everyday life of in- and outpatients and the importance of daily behaviors and, more specifically, whether daily social (i.e., with other people) or non-social (i.e., without other people) behaviors impacted their values-consistent behavior. For the sake of clarity and brevity, we will henceforth use the term "consistent behavior" when referring to "values-consistent behavior". Towards this aim, we explored four research questions. First, in- and outpatients would report different probabilities of engagement in life areas (e.g., work, hobby, relaxing, etc.) important to them (research question 1). Second, in- and outpatients would report different probabilities of consistent behavior (research question 2). Third, patients would show consistent behavior more frequently the more important the value domain was to them (research question 3a), and this would differ between in- and outpatients (research question 3b). Fourth, patients would show consistent behavior more frequently if the valued domain was social (research question 4a), and this would differ between in- and outpatients (research question 4b).

Method

Participants

Participants (inpatients, n = 57; outpatients, n = 43) were recruited from two specialized clinics (inpatient and outpatient) from ongoing intake procedures. The mean age across the whole sample was 34.45 years (SD = 11.88, range: 18 to 65 years), and 48% of the participants were female. The mean age of the inpatients was 33.51 years (SD = 10.82, range: 18 to 65 years), and 42.11% of them were female. The mean age of the outpatients was 35.80 years (SD = 13.14, range: 18 to 64 years), and 55.81% of them were female. Participants represent a subset of patients recruited for a larger ongoing study on transdiagnostic, treatment non-responding patients (see Villanueva, Meyer, Rinner et al., 2019). Inclusion criteria were: minimum 18 years of age, sufficient ability to speak German, presenting for therapy and being able to attend sessions, and signing an informed consent statement. Exclusion criteria were acute suicidal intent, acute substance dependency, active mania, previous experience with ACT, and inability to read or complete assessments. Otherwise all diagnoses were included (Villanueva, Meyer, Rinner et al., 2019). Participants presented with the following disorders: affective disorders (35.45%), phobias and other anxiety disorders (37.79%), obsessive-compulsive disorders (13.30%), somatoform disorders (6.43%), impulse control disorders (3.97%), and attention deficit hyperactivity disorder (0.94%). When participants entered the clinic, medication was optimized when necessary, as determined by the attending physician in consideration of patient preference.
Instruments and procedure

This study reports on a seven-day phase of Event Sampling Methodology (ESM) from an overarching clinical trial. Participants completed informed consent procedures during the first week of treatment, before data collection. During this first week, participants entered a seven-day phase of ESM, for which they carried a study-issued smartphone. They kept the smartphone for seven days, after which they handed it back to the study personnel. The study was approved by the Ethics Committee of northwestern and central Switzerland (Ethikkommission Nordwest- und Zentralschweiz; EKNZ): project 2165/13. For more details on the exact procedure, please see Villanueva, Meyer, Rinner et al. (2019).

Event Sampling Methodology (ESM)

Understanding participants' social behavior requires collecting data in participants' natural environment. Implementing ESM through the use of a smartphone allows the examination of patients' daily life, including the assessment of moods, thoughts, symptoms or behaviors, and environmental and social contexts, all of which change over time. Thus, ecologically valid data can be collected in a real-time fashion while capturing dynamic changes of variables. Because human memory is subject to recall bias, ESM also reduces the effect of this bias through real-time data collection (Gloster et al., 2008; Myin-Germeys et al., 2018; Rinner et al., 2019).

Assessment

All participants completed the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID; Wittchen, Wunderlich, Gruschwitz, & Zaudig, 1997) to determine diagnostic status at the beginning of treatment. We used the SCID-I (current diagnosis), which has moderate to excellent values for reliability and validity (DeFife & Westen, 2012; Lobbestael, Leurgans, & Arntz, 2011). Diagnoses were also rated on the Anxiety Disorders Interview Schedule (ADIS) severity rating scale (Brown, DiNardo, & Barlow, 1994). The diagnosis with the highest severity score was defined as the primary diagnosis.

ESM data were collected six times a day using signal-contingent sampling on the smartphone, every three hours (e.g., 8 am, 11 am, 2 pm, 5 pm, 8 pm, and 11 pm). ESM data collection was adjusted to individual daily parameters of the patients (e.g., waking time, fixed breaks at work, etc.). Participants responded to items on the smartphone with regard to multiple aspects of their behavior. First, they were asked about their plans and intentions ("What is the most important thing you are going to do in the next three hours?") and asked to categorize it into one of the following value domains: working/studying, commute, media usage, interacting with family, interacting with others, being alone/bored, household, hobby (except physical activity), physical activity, eating/drinking, or enjoying/relaxing. Participants could choose only one domain; choosing none or more than one was not possible. Second, in the next questionnaire three hours later, they were asked about their past behavior ("What was most important to you in the last three hours?") and asked to categorize it into the same previously mentioned domains. This item was not included in the morning questionnaire. The degree to which the planned and past behavior occurred in the same domain was the basis for the categorization of consistent vs. inconsistent behavior.
For example, assuming the implementation of ESM at 8 am, 11 am, 2 pm, 5 pm, 8 pm, and 11 pm, each questionnaire was paired with the following questionnaire to compare the domains in which the planned and past behavior had occurred (e.g., 8 am was compared to 11 am, 11 am was compared to 2 pm, etc.). Consequently, only the 8 am questionnaire was not comparable to a preceding questionnaire, and the 11 pm questionnaire was not comparable to a following questionnaire, because in both cases patients were assumed to be asleep. Third, they were asked about the importance of the past valued behavior: "To what degree did you really want to spend your time like this?" and "To what degree does this behavior correspond to the way you want to live your life?", both on a scale from 0-100 (not at all to very much). Further, some behavior happens in a social context (i.e., in interaction with other people) and some behavior happens outside of a social context (i.e., without interaction with other people). We subsequently dichotomized value domains into "social domains" vs. "non-social domains" to investigate patients' consistent behavior in social vs. non-social contexts. Social domains included working/studying, interacting with family, interacting with others, and eating/drinking. Non-social domains included the remaining domains, i.e., commute, media usage, being alone/bored, household, hobby (except physical activity), physical activity, and enjoying/relaxing. Examples listed by patients included the following: therapy or working in the laboratory (working/studying), going to the clinic or going home (commute), watching TV or listening to music (media usage), talking to one's brother or playing with one's son (interacting with family), arguing for one's rights or making small talk over breakfast (interacting with others), waiting or feeling lonely (being alone/bored), tidying up or grocery shopping (household), reading or playing an instrument (hobby except physical activity), going jogging or going for a walk (physical activity), eating dinner or drinking tea (eating/drinking), and sleeping or lazing around (enjoying/relaxing).

Statistical analysis

Data collected from ESM studies are repeated measures with interdependent observations nested within individuals. Data were included in the analyses if a participant answered more than 50% of the smartphone reminders. Twenty-two participants completed less than 50% of ESM time points and were therefore removed from the data set. In consideration of the structure of the data, binomial Generalized Linear Mixed Models (GLMMs) were implemented for all research questions. For research question 1 (i.e., in- and outpatients would report different frequencies of engagement in life areas important to them), a GLMM was set up for each individual domain, resulting in 11 models, with treatment setting as the predictor. The outcome for research questions 3a and 4a was defined as consistent behavior, while the predictors were the importance of the domain (research question 3a: patients would show consistent behavior more frequently the more important the value domain was to them) or the social vs. non-social context of the domain (research question 4a: patients would show consistent behavior more frequently if the value domain was social). Treatment setting was included in these models as an additional predictor, but not as an interaction term (research question 2).
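To make the data structure and model specification concrete, the sketch below simulates an ESM-like data set, derives the consistent-behavior indicator by pairing each prompt's reported past behavior with the plan stated at the preceding prompt, and fits a binomial mixed model with a random intercept per participant. This is a minimal illustration only, not the authors' analysis code: the paper does not report which software was used, the variable names and simulated data are hypothetical, and statsmodels' variational BinomialBayesMixedGLM is used here simply as one readily available way to fit such a model in Python.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)

# Hypothetical domain labels, loosely following the paper's eleven value domains.
DOMAINS = ["working/studying", "commute", "media", "family", "others", "alone/bored",
           "household", "hobby", "physical activity", "eating/drinking", "relaxing"]
SOCIAL = {"working/studying", "family", "others", "eating/drinking"}

# Simulate six prompts per day over seven days for 40 in- and outpatients.
rows = []
for subject in range(40):
    inpatient = int(subject < 20)                      # treatment setting (assumed coding)
    for day in range(7):
        for prompt in range(6):
            rows.append(dict(subject=subject, inpatient=inpatient, day=day, prompt=prompt,
                             planned=rng.choice(DOMAINS),      # plan for the next 3 hours
                             past=rng.choice(DOMAINS),         # what was most important
                             importance=rng.uniform(0, 100)))  # 0-100 importance rating
esm = pd.DataFrame(rows)

# Pair each prompt's reported past behavior with the plan given three hours earlier;
# the first prompt of each day has no preceding plan and is therefore dropped.
esm["planned_prev"] = esm.groupby(["subject", "day"])["planned"].shift(1)
paired = esm.dropna(subset=["planned_prev"]).copy()
paired["consistent"] = (paired["past"] == paired["planned_prev"]).astype(int)
paired["social"] = paired["past"].isin(SOCIAL).astype(int)

# Binomial GLMM: consistent behavior predicted by importance, social context, and
# treatment setting, with a random intercept per participant (variational Bayes fit).
model = BinomialBayesMixedGLM.from_formula(
    "consistent ~ importance + social + inpatient",
    vc_formulas={"subject": "0 + C(subject)"},
    data=paired,
)
print(model.fit_vb().summary())
```

Because the simulated responses are random, the estimated coefficients carry no substantive meaning; the point is the long-format structure (one row per paired prompt, nested within participants) and the mapping of importance, social context, and treatment setting onto fixed effects alongside a subject-level random intercept.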
Interaction effects between importance of the domain and treatment setting (research question 3b: there would be differences between in- and outpatients with respect to the relationship between consistent behavior and the importance of the domain) and between the social or non-social context of the domain and treatment setting (research question 4b: there would be differences between in- and outpatients with respect to the relationship between consistent behavior and the social or non-social context of the domain) were calculated in separate models. GLMMs contained a random intercept to account for the dependency among repeated measures. Results Results for research question 1 can be found in Table 1; they indicated that inpatients reported interacting with others and physical activity with significantly higher probability than outpatients. Outpatients reported working/studying and media usage significantly more often than inpatients. Enjoying/relaxing was rated as marginally more important for inpatients, and household was rated as marginally more important for outpatients. Results for research question 3a indicated that more consistent behavior was shown if the domain was judged as more important. Further, outpatients generally reported behaving more consistently than inpatients, regardless of importance (research question 2). Research question 3b showed that the interaction between importance and treatment setting (inpatients) was significant. This suggests that although the probability of consistent behavior increased with the importance of the domain in both groups, it did so even more strongly for the inpatients. Results for research questions 2, 3a, and 3b can be found in Table 2 and Figure 1. Research question 4a examined whether the patients' consistent behavior was related to the (social vs. non-social) context of the domain. Research question 4b investigated whether the patients' consistent behavior was related to the treatment setting, or to the interaction between social vs. non-social domains and treatment setting. Results for research question 4a indicated that more consistent behavior was shown if the domain was social. Results for research question 4b suggest a significant interaction between the context of the domain and treatment setting (outpatients). This suggests that although the probability of consistent behavior increased if the domain was social in both groups, it did so even more strongly for the outpatients. Results for research questions 4a and 4b can be found in Table 2 and Figure 2. Discussion This study examined the everyday life of in- and outpatients. More specifically, we examined whether the importance participants attached to an activity, and the (social or non-social) context of an activity, impacted the extent to which they exhibited values-consistent behavior. The results suggest three main findings. First, in- and outpatients differed in the value domains they reported in their daily lives. Second, more consistent behavior was shown in both groups the more important the domain was to the patients. Outpatients generally showed higher levels of consistent behavior than the inpatients. However, at higher levels of importance of a domain, the probability of consistent behavior increased significantly for the inpatients. Third, the context of the domain (social vs. non-social) proved to be important: the probability of consistent behavior was higher in social than in non-social domains. This was especially important for outpatients: if the domain was social, the probability of consistent behavior increased significantly for the outpatients.
Value domains and treatment setting Several reasons may account for inpatients reporting interacting with others, exercise, and (marginally) relaxing and enjoying their time as important more often than outpatients. While this might reflect their real values, it might also be a function of their social context. First, inpatients experience social isolation and low social support (Ferguson et al., 2005). Thus, the possibility of interacting with others regularly in the clinic may become an essential part of their daily life. Note that inpatients reported specific importance for interacting with others, and not with family. Inpatients living in the same clinic usually spend the majority of the day together. Our result reflects that this time spent together is indeed important for inpatients, even though it does not always seem to be. Alternatively, it may reflect the change in social interactions experienced when patients check in to an inpatient hospital. Second, the fact that inpatients attached more importance to exercising and enjoying/relaxing than outpatients might point to an increased awareness of the need for self-care. The self-care that inpatients have neglected may include exercise or enjoying/relaxing. Being pulled out of one's usual environment and placed into a new daily environment, as in an inpatient setting, may also provide patients with more opportunities to practice self-care. Alternatively, inpatients may simply not have had as many opportunities to engage in domains that outpatients considered important. This may be especially relevant for working/studying. Outpatients, on the other hand, valued working/studying, media usage, and (marginally) household tasks more often than the inpatients. That outpatients valued working/studying more than inpatients is not altogether surprising, since these patients usually continue to work while in psychotherapy, whereas inpatients do not. Yet it may carry significance: possibly, attaching a strong value to one's work, school, or studies prevents outpatients from getting worse. It could be that engaging in something for more than 40 hours a week without valuing it is the type of problem that might tip the balance from presenting for outpatient to presenting for inpatient treatment. Further, outpatients valued using media (such as TV or the internet) more often than inpatients. There may be several reasons for this: first, 24.56% of our outpatients were diagnosed with an anxiety disorder. There is a positive association between media use and anxiety (Vannucci, Flannery, & Ohannessian, 2017), and patients suffering from social anxiety disorder or major depressive disorder engage significantly more often in social interactions via their phones than a control group (Villanueva, Meyer, Miché et al., 2019). Thus, the frequent reporting of media usage might partly reflect the patients with an anxiety disorder in our outpatient sample. Second, outpatients might be using the internet to stay in touch with others. If outpatients have many stressors in their lives (e.g., running from A to B because of work, school, or studies, running errands, doing chores), using technology might facilitate social contact, both for social and practical purposes (Baecker, Sellen, Crosskey, Boscart, & Barbosa Neves, 2014). For inpatients, this need might arise less, either because of a strong focus on oneself and one's disorder or because of social isolation.
Household tasks might have been important for outpatients because they felt these tasks needed to be done or because they derived satisfaction from getting things done. Considering the present results, clinicians might want to examine patients' values and value domains and incorporate them into clinical work. Working on patients' personal and deeply held values might increase their motivation for therapy and help them lead a more fulfilling life (Hayes et al., 2006). Being consistent when things get important In this study, outpatients generally reported behaving more consistently than inpatients (regardless of importance). For inpatients, increased consistent behavior was related to an increase in the importance of the domain. One reason for these relationships might be that, possibly due to more severe symptoms, inpatients focus more strongly on some behaviors, which might not include values-consistent ones. More severe symptoms might in fact hinder patients from even knowing what is important to them, let alone behaving consistently with their values. Clinicians might want to consider investigating patients' values and identifying those that are most important, especially with inpatients. Increasing valued behaviors has been shown to precede reduction in suffering. Attempts to increase values-consistent behavior could focus on the most important values first, to reduce suffering more efficiently. Being consistent when things get social Consistent with our expectations, social domains were associated with more consistent behavior across both groups. For the outpatients in particular, social domains were associated with increased consistent behavior. This is consistent with previous cross-sectional research, which found patients' valued behaviors in social domains to be judged as more important and more valued than in non-social domains (Wersebe et al., 2017). The present result, based on fine-grained ESM data collected every three hours, extends this finding into patients' everyday lives. The replicability of the importance of social domains across data sets and data collection methods suggests a salient target for research and therapy. There may be several reasons for the positive association between consistent behavior and social domains found in outpatients: first, outpatients tend to have more social contact than inpatients (Ferguson et al., 2005), and therefore more opportunities to experience social domains as important. Due to possibly less severe symptoms, they might also have more opportunities to behave consistently with their values. Second, in order to be considered a functioning individual in today's society, some participation in social life is usually expected. Thus, social desirability (i.e., a tendency to respond in a way that corresponds with current social norms and standards; Perinelli & Gremigni, 2016) might render social domains more important to outpatients. Third, outpatients might be better able to differentiate what is important to them than inpatients. Additionally, as an outpatient, one may also simply have more capacity for social matters. Clinicians might want to examine patients' values and find the ones that are embedded in a social context. Initially focusing on social domains can possibly increase values-consistent behavior in outpatients, which in turn might help reduce suffering. Our results further underscore the importance of group therapy.
Group therapy has been shown to be an effective approach for treatment, with patients reporting satisfaction with the treatment (e.g., Weck, Gropalis, Hiller, & Bleichhardt, 2015) and treatment effects persisting or improving over a 12-month follow-up (Weck et al., 2015). Our results suggest that social value domains were associated with more behavior that is consistent with what one values, and it is possible that this association may underlie treatment satisfaction and the persistence of treatment effects. Our results also suggest transdiagnostic relevance, similar to unwanted mental intrusions, which were shown to be of importance cross-culturally and transdiagnostically (Pascual-Vera et al., 2019). This makes the group setting an even more effective approach, since it can possibly be implemented across different diagnoses. Limitations The present study had four main limitations. First, ESM is a self-report measure and as such relies on participants' reports rather than on observations of their behavior. However, it is considered the current gold standard for data collection in people's daily life and, due to the fine-grained information captured, a more accurate measure of real-life behavior than questionnaires alone (Myin-Germeys et al., 2018). Second, categorizing value domains into social vs. non-social is complex, because some domains might be social in some cases and non-social in others. For instance, working can happen in either a social or a non-social context, depending on the job itself, the participant's position within a company or institution, and the company or institution itself (e.g., somebody who works predominantly alone in a library vs. somebody who works predominantly in interaction with others, such as a waiter in a restaurant). Eating/drinking, hobby, physical activity, and enjoying/relaxing might happen more often in the presence of other people for some, while others prefer to do these things alone, so that for them these domains are more non-social. Future research might consider adding items so that participants categorize behaviors as social or non-social themselves, and items investigating what factors determine whether behavior happens in a social or non-social context. Nonetheless, because the categorization of a valued behavior into one of the eleven general categories was done by the patients themselves, we can still depict the experience of patients in their everyday naturalistic environment more accurately than if we had categorized the behaviors for them. Third, participants reported on what was important to them and what would be important to them. Yet we could not verify that they actually did what they reported. To verify whether consistent behavior was really carried out, future research must establish a verification process that respects participants' personal privacy. Fourth, although the overarching study collected variables with the intent of examining values, behavioral consistency, and social context in a transdiagnostic group of patients, the patients were not randomized across the exploratory research questions. As such, appropriate caution should be exercised in interpreting the results. Conclusion This study provides new insights into the everyday life of in- and outpatients, their values, how important daily social interactions are to them, and what contributes to values-consistent behavior.
To our knowledge, this is the first study to investigate these aspects in a transdiagnostic sample of in- and outpatients using state-of-the-art ESM. Clinical implications of this study include a closer examination of patients' values: especially important and social domains might merit special consideration by the clinician. Focusing on these in clinical work might increase patients' values-consistent behavior, which might be followed by a reduction in suffering and enable patients to lead a more fulfilling life. Overall, this study adds to the current knowledge of how the daily life of in- and outpatients might contribute to mechanisms that maintain or alleviate their suffering.
2020-02-27T09:31:19.720Z
2020-02-24T00:00:00.000
{ "year": 2020, "sha1": "ba67498c5fca1b41ed317d1b8be3f63ca2dc2752", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijchp.2020.02.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b59a7bbefb2ecba313dda851941e6de1267ea87f", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
250607735
pes2o/s2orc
v3-fos-license
Rare B decays and Lepton Flavour Non Universality tests at LHCb Summary of some of the most recent measurements performed by LHCb on rare decays and lepton flavour universality tests. I. INTRODUCTION The analysis of decays of hadrons containing charm or beauty quarks increases our understanding of the fundamental constituents of matter and their interactions. Although the Standard Model (SM) has been proven to predict the interactions among particles with great accuracy, there are still some unanswered questions for which physics beyond the Standard Model (BSM) may come into play. II. RARE DECAYS One way to search for New Physics is through the analysis of electroweak decays that have low branching fractions in the SM or are even forbidden. These decays, also known as rare decays, might receive contributions from new particles, such as dark matter candidates, altering their branching fractions with respect to the SM predictions. The goal then is to measure the branching fractions of such decays and check whether they deviate from the SM predictions. Two such rare processes are the B_s^0 → µ^+µ^− and B^0 → µ^+µ^− decays, which are of special interest in probing the SM. In the computation of their branching fractions, the decay amplitude can be factorized into hadronic and leptonic parts, allowing for a clean theoretical prediction. Because the two final-state particles are muons, their signature in the detector is also very clean. These properties make the B_s^0 → µ^+µ^− and B^0 → µ^+µ^− processes very sensitive to BSM physics. The branching fractions of the B_s^0 → µ^+µ^− and B^0 → µ^+µ^− decays have been measured by LHCb using the full Run 1 and Run 2 data [1,2], corresponding to a total integrated luminosity of 9 fb^-1. A pair of oppositely charged muons forming a good-quality vertex displaced from the interaction region is selected as the signal candidate. The branching fraction for each decay is determined from maximum likelihood fits to the dimuon invariant mass. The fit is performed simultaneously in bins of a BDT classifier to increase the sensitivity of the data sample. The mass distribution of selected candidates, together with the total model and the fit components overlaid, is shown for BDT > 0.5 in Figure 1. From the fit, a branching fraction measurement for B_s^0 → µ^+µ^− of (3.09 +0.46 −0.43 +0.15 −0.11) × 10^-9 is obtained. In the same analysis, a search for B_s^0 → µ^+µ^−γ decays considering only initial-state radiation and m(µ^+µ^−) > 4.9 GeV/c^2 has also been performed. Since no significant excess is found, the upper limit B(B_s^0 → µ^+µ^−γ) < 2.0 × 10^-9 at 95% confidence level is obtained. Another interesting property of the B_s^0 → µ^+µ^− decay is its average decay time as measured in an experiment, known as the effective lifetime. This observable depends on the decay width asymmetry between the heavy and light B_s^0 mass eigenstates and on the A^µµ_∆Γs parameter, which is equal to unity in the SM. By measuring the B_s^0 → µ^+µ^− effective lifetime, A^µµ_∆Γs can be evaluated, and the way each mass eigenstate contributes to the decay can be inferred. The latest LHCb result using the full Run 1 and Run 2 data [1,2] yields τ(µ^+µ^−) = (2.07 ± 0.29 ± 0.03) ps, which is consistent with the lifetime of the heavy mass eigenstate, as predicted by the SM, at the level of 1.5 standard deviations. III. LEPTON FLAVOUR NON UNIVERSALITY In the SM, the electroweak coupling is universal for all the leptons.
Differences in the decay rates involving the three lepton species are only expected to arise due to the different masses of the charged leptons. This accidental symmetry of the SM, known as lepton flavour universality (LFU), might be violated in BSM scenarios. Lepton flavour universality can be tested in many different decays by evaluating the ratio between the branching fractions of decays involving different leptons. These ratios are well predicted in the SM since the associated QCD uncertainties largely cancel, making them a good probe of BSM models. In the case of the b → sl^+l^− transition, LFU is studied through the branching fraction ratio of decays involving muons and electrons, R_{H_s} = B(X_b → H_s µ^+µ^−) / B(X_b → H_s e^+e^−), where X_b is a hadron containing a b quark and H_s is a hadron with one s quark. Although theoretically very clean, the measurement of these ratios is experimentally challenging due to the detection asymmetry between muons and electrons at LHCb. Muons are easy to reconstruct and trigger on because they are the only particles reaching the muon chambers, leaving a characteristic signature in the detector. In the case of electrons, the reconstruction and triggering processes are more complex, mainly due to the bremsstrahlung emission undergone by electrons. If the bremsstrahlung takes place after the trajectory of the electron has been deflected by the magnet, the emitted photon lands in the same calorimeter cell as the electron, allowing for a recovery of the photon in the reconstruction of the electron energy. However, if the emission occurs before the magnet, the trajectory of the electron is deflected after the emission, so that the photon and the electron land in different calorimeter cells. In such a case, a procedure to partially recover the energy of the photon is applied, worsening the momentum resolution. To reduce the systematics associated with the different detection of muons and electrons, the LFU tests are performed at LHCb using a double ratio. This approach is used to measure the R_K ratio at LHCb, an analysis performed using the entire Run 1 and Run 2 dataset. The B^+ → J/ψ(→ l^+l^−)K^+ decay is used as the normalization mode, and R_K is determined as R_K = [B(B^+ → K^+µ^+µ^−)/B(B^+ → J/ψ(→ µ^+µ^−)K^+)] / [B(B^+ → K^+e^+e^−)/B(B^+ → J/ψ(→ e^+e^−)K^+)]. The rare mode is selected by requiring 1.1 < q^2_{ll} < 6.0 GeV^2/c^4 to reject contributions from the J/ψ resonant mode and other excited states (ψ(2S) and ψ(3770)) at high q^2, and from the φ(1020) at low q^2. The single ratio r_{J/ψ} = B(B^+ → J/ψ(→ µ^+µ^−)K^+)/B(B^+ → J/ψ(→ e^+e^−)K^+) is measured to validate the detection efficiencies. The measured value is consistent with unity, as predicted by the SM, and no significant trend is observed in a number of kinematic regions. Since the r_{J/ψ} ratio does not benefit from the cancellation of systematic uncertainties due to the different detection of muons and electrons, this result demonstrates the large control over the relative efficiencies for electrons and muons. The efficiencies and the yields from the resonant modes, obtained from maximum likelihood fits to data samples, are used as input variables to the simultaneous fit of the rare modes, which are shown in Figures 2 and 3. The resulting measurement is R_K = 0.846 +0.042 −0.039 +0.013 −0.012, showing a discrepancy of 3.1 standard deviations with respect to the SM [3]. The isospin partner of the B^+ → K^+l^+l^− decay, the B^0 → K_S^0 l^+l^− decay, is also measured at LHCb, resulting in the R_{K_S^0} ratio.
A similar analysis strategy to that for R_K is followed, using the B^0 → J/ψ(→ l^+l^−)K_S^0 decay as the normalization channel for the double ratio. The K_S^0 is reconstructed as K_S^0 → π^+π^−, lowering the efficiencies and the precision with respect to the R_K measurement. The result obtained using Run 1 and Run 2 data is R_{K_S^0} = 0.66 +0.20 −0.14 +0.02 −0.04, found to be 1.5 standard deviations below the SM prediction [4]. In addition, the R_{K^{*+}} ratio, with B^+ → K^{*+}l^+l^− as the rare modes, has also been measured by LHCb using the full dataset available. The same analysis strategy as for R_K is followed. The K^{*+} is reconstructed as K^{*+} → K_S^0 π^+ with K_S^0 → π^+π^−. A wider q^2 range of 0.045 < q^2_{ll} < 6.0 GeV^2/c^4 is used to include the enhancement of the branching fraction at low q^2 caused by the photon pole. A measurement of R_{K^{*+}} = 0.70 +0.18 −0.13 +0.03 −0.04 is obtained, found to be 1.4 standard deviations below the SM prediction of unity [4]. LFU tests in the baryonic sector offer information complementary to the meson sector due to the spin 1/2 of the initial state. Moreover, the form factors involved in baryonic transitions are different from those in the mesonic decays presented so far, probing BSM models in different scenarios. The latest measurement in this respect performed by LHCb corresponds to the observation of the Λ_b^0 → Λ_c^+ τ^− ν̄_τ decay using Run 1 data [5]. The tau candidates are reconstructed in the decays τ^− → π^−π^+π^− ν_τ and τ^− → π^−π^+π^−π^0 ν_τ from a π^−π^+π^− combination, as the neutral pion is not reconstructed. The Λ_c^+ candidate is reconstructed as Λ_c^+ → pK^−π^+. The main source of background, caused by Λ_b^0 → Λ_c^+ π^−π^+π^− X decays, is reduced by requiring the tau vertex to be downstream of and displaced from the Λ_c^+ vertex. The contributions of double-charm processes such as Λ_b^0 → Λ_c^+ D_s^− are controlled with a Boosted Decision Tree (BDT) that uses the information of the π^−π^+π^− system. The branching fraction of the Λ_b^0 → Λ_c^+ τ^− ν̄_τ decay is determined from a 3-dimensional template fit to the pseudo-decay time of the τ, the q^2 of the τν system, and the BDT output. From the fit, the branching fraction is found to be B(Λ_b^0 → Λ_c^+ τ^− ν̄_τ) = (1.50 ± 0.16 ± 0.25 ± 0.23)%, where the first uncertainty is statistical, the second systematic, and the third due to the external branching fraction of Λ_b^0 → Λ_c^+ 3π, used as the normalization channel. LFU can be tested with the R_{Λ_c^+} ratio, defined as R_{Λ_c^+} = B(Λ_b^0 → Λ_c^+ τ^− ν̄_τ)/B(Λ_b^0 → Λ_c^+ µ^− ν̄_µ). Using the known value for B(Λ_b^0 → Λ_c^+ µ^− ν̄_µ) [6], R_{Λ_c^+} is found to be R_{Λ_c^+} = 0.242 ± 0.026 ± 0.040 ± 0.059, where the first uncertainty is statistical, the second systematic, and the third due to the external branching fraction. This result is in agreement with the SM prediction of 0.324 ± 0.004 [7]. IV. SUMMARY Rare processes involving the b → sl^+l^− transition are sensitive to New Physics. The latest LHCb measurements of the branching fraction and effective lifetime of the B_s^0 → µ^+µ^− decay agree with SM predictions. Since no significant excess is found for B_s^0 → µ^+µ^−γ and B^0 → µ^+µ^−, upper limits are set on their branching fractions, which are consistent with the SM predictions. Lepton flavour universality tests are also an excellent probe in the search for New Physics. The latest result obtained by LHCb for the R_K ratio is three standard deviations below the SM prediction, suggesting a deficit in the muon mode.
Although in agreement with the SM, the measurements of R_{K_S^0} and R_{K^{*+}} also point in the same direction. Studies in the baryonic sector provide a complementary check of LFU due to the different form factors involved. The latest result obtained for R_{Λ_c^+}, in agreement with the SM, shows that the baryonic sector is a promising place to test LFU and to search for physics beyond the Standard Model.
2022-07-18T01:15:22.249Z
2022-07-15T00:00:00.000
{ "year": 2022, "sha1": "b6348440fb0e6c6b503c6fffb2250b907653174f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b6348440fb0e6c6b503c6fffb2250b907653174f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
159402287
pes2o/s2orc
v3-fos-license
Making managerial policy in the neoliberal moment By the 1990s, a consensus was emerging in British medicine about the need for new instruments of professional management and clinical regulation. In the four decades after the 1950s, professional, political, and public anxieties about standards of medical practice had grown inexorably. Critiques of variations and evidence in medical care had joined with concerns about cost and professional accountability to produce a ‘crisis’ over quality. Locally, some practitioners responded by intensifying projects for structured care, creating more precise protocols and undertaking institutional audits. Nationally, elite professional bodies and leading specialists produced guidelines to inform local developments, and sought to establish national datasets and audit systems. Through these changes, previously informal measures regulating clinical activity became explicit, and the rhythms and content of care became subject to new forms of structure and review. The Conservative governments of the 1980s and 1990s had also become interested in guidelines and medical audit. Motivated by historic drives to control costs and increase efficiency in the health service – as well as by neoliberal critiques of state and economy – the Thatcher and Major administrations substantially remade the dynamics of the NHS. Inspired by the novel concept of internal markets in public services, extensive reforms converted health authorities from planning and management bodies into state-funded healthcare purchasers, and transformed hospitals and community care agencies into trusts that secured finance through procuring contracts from purchasing bodies. In primary care, larger general practices were encouraged to operate as purchasers of certain hospital services (such as outpatient care), and new GP contracts introduced enhanced pay-for-performance elements. Guidelines and medical audit were to play important roles in the new system. Although remaining under the control of professional bodies, these instruments would enhance professional accountability and provide standards against which care could be measured before payments were made. The earliest moves in this direction were made in relation to chronic disease, with diabetes a prominent target. Long-term conditions were costly problems, and better management promised to improve prevention of expensive sequelae. Furthermore, as diseases that crossed institutional lines, intervention enabled governments to tackle the thorny GP-hospital divide. Measures to confront these problems were included in the 1990 GP contract (and subsequent revisions), as well as forming the basis of reviews into clinical standards in the mid-1990s. By the late 1990s, diabetes had become one of the first conditions to be subject to a National Service Framework (NSF). Given the contrast between professional and government political projects, this chapter explores how management of professional labour became government policy during the 1980s and 1990s. Despite disparities in aims, specialists and elite professional bodies found common ground with government and state agencies over the production of guidelines and audit structures. All parties saw benefits in co-operation and actively sought collaboration. Reliant upon medical professionals to construct new tools, government often acted through financial support for local programmes, supplemented by assistance for projects undertaken in national bodies. 
These efforts, moreover, were cultivated by key specialists and professional organisations, who sought resources and authority to develop new instruments. The creation of managerial policy, in other words, was co-constructed. Furthermore, this chapter stresses that elite bodies and leading specialists were crucial to initiating and connecting local, national, and international efforts to manage diabetes and its doctors. Personnel overlaps ensured strong consensus over the nature of reform across different scales of policy creation and service delivery. It was through mobile and influential figures, then, that government and professional projects were aligned enough for management of professional labour to become policy. Of course, the actual and intended effects of policy could be subverted by either government or profession, and the efforts of both sides could be mediated in practice. Nonetheless, their co-operation secured the basis for managerial policy and set the stage for more extensive future reform. Finally, this chapter suggests that the policy networks surrounding diabetes noted in Chapter 4 were essential in establishing the condition at the forefront of new managerial policies. As a costly, cross-sectoral problem, diabetes - and chronic disease more broadly - provided an important entry point for promoting managerial technologies in the health service. In part, such intervention was facilitated by the institutional and technological groundwork laid in earlier decades. However, governments were also concerned about the possible financial and political costs of intervening in certain areas, meaning that state-professional relations were not always smooth. It was here that the strength of the diabetes policy community became important. Through a vocal lay-professional organisation, interested civil servants, persuasive specialist advocates, and international pressure (especially from the WHO), diabetes was established as an important subject for novel managerial technologies. Diabetes thus provides a lens through which to view managerial policy, not only because of how it was conceived as a possible model for change, but also because of the ways in which it became an object of political interest. Managing British medicine before 1979 The 1980s and early 1990s were a period of radical innovation in British health policy. 3 During these years, Conservative administrations significantly altered the institutional configuration and dynamics of British healthcare, transforming the role of health authorities and central government in delivering health services. Neoliberal analyses of professionals, bureaucracy, state, and economy provided a broad underpinning for much reform. However, the Thatcher and Major governments were also motivated by a long-held desire of the British state to control NHS costs, and later initiatives built upon developments that took place before the 1980s. Parliament and the Treasury had placed constant pressure on NHS budgets since 1948. Initial hopes that expenditure would decline as national health improved were dashed very quickly. Governments tried numerous strategies over the post-war period to control costs, ranging from the introduction of charges (most notably for prescriptions in the 1950s) to the application of innovative budgetary rules, such as the Labour government's cash-limited budgeting of the mid-1970s.
4 Maintaining satisfactory levels of provision, however, required considerable resources, and efficiency savings could stretch only so far. Civil servants, politicians, think-tanks, and professional advisory bodies had noted the problematic connection between resource use and clinical decision-making soon after the creation of the NHS. However, the Ministry (and later Department) of Health felt unable to directly intervene in clinical judgement, given the poor quality of information available and the strength of anxiety that interference would generate backlash from both the public and the profession alike. 5 Instead, until the 1980s, state bodies and central professional advisory agencies sought to confront the issue of costs through improved service monitoring systems, and by encouraging clinicians to use institutional and comparative data to reform their own practices. These efforts began in the early 1950s. During these years, the CHSC sponsored the King's Fund and Nuffield Provincial Hospitals Trust to research alternative accounting systems within hospitals. New schemes linked costs with activity, enabling administrators to compare expenditure longitudinally and between institutions and to highlight possible areas for efficiency. Although they were trialled in various hospitals, implementation costs and administrative concerns about clinician interest resulted in less effective compromises being adopted. 6 Likewise, efforts to control prescribing costs in the 1950s were predicated upon exhortations about 'excessive prescribing' from the Chief Medical Officer in the Ministry of Health and on statistical analyses sent to GPs of their prescribing costs relative to other practitioners. 7 It was hoped that GPs would reflect on this information and alter their practices if their supposed deviations from common practice resulted in greater expenditure. The increasing costs of the drug bill indicate that such efforts did not achieve their ultimate objectives. 8 Similar, if more complex, techniques were applied to the problem of clinically driven costs in the 1960s and 1970s. The Ministry of Health's Hospital Plan, launched in 1962, loosely practised budget planning, linking finance to specified outcomes and producing national bed norms per population. 9 The Plan itself emerged during a decade within which programme planning, budgeting, and review became more common within Whitehall. 10 Similarly, during the mid-to late 1960s the Ministry developed a new hospital information system, in which hospitals attached data sheets to each inpatient case file and sent 'returns' to RHBs for statistical analysis. Although it experienced problems of timeliness and accuracy, through this Hospital Activity Analysis 'for the first time it was possible, in theory, for consultants to relate the use of resources to the characteristics of their patients, their diagnoses, and their treatments' . 11 This new data system was entangled with attempts to incorporate clinicians into management structures, such as Cogwheel divisions or the consensus management groups upon which the 1974 NHS reforms were built. Improved monitoring was also needed for Labour's turn to priority-setting and planning in the 1970s. 12 Information provision and professional self-management sat at the heart of all such activities, with the hope that self-regulation could bring expenditure under greater control. 
13 However, the connection between professional autonomy and NHS expenditure disturbed successive Conservative administrations for reasons beyond traditional anxieties about public finances. Rather, the Party, and leading ministers, were increasingly influenced by neoliberal critiques of self-interested welfare professionals and state over-extension during the 1980s and 1990s. Such thinking shaped government policy, with reform packages inflected by debates about the efficiency of markets and the political importance of enterprise. In terms of the NHS, Conservative administrations developed attempts to promote professional self-management, but connected health policy with a broader remaking of the state. Before examining how these critiques informed health policy after 1979, it is worth returning to the often vexed question of 'neoliberalism', which we began to explore in Chapter 4. 14 Although not gaining political currency in Britain for decades, neoliberal critiques first emerged in the 1930s and 1940s. 15 At this time, a small number of economists and political philosophers reacted against what they saw as a crisis of liberalism, in which liberal governments created mechanisms for securing individual freedom (from disease or old age) by collectivising social risks. 16 Faced with post-war planning and destructive totalitarian regimes, neoliberal theorists sought to rethink liberalism, and recast state interventions in social and economic realms as a risk to the individualised self-determination supposedly at the heart of Western civilisation. 17 Markets in such analyses represented not only the most efficient means for allocating resources, but also a political bulwark. Economic freedom and competition provided the basis for all liberty, and state encroachment here would inevitably result in political authoritarianism. 18 Moreover, in simple economic terms, thinkers such as Friedrich Hayek suggested that central planning and bureaucracy stunted creativity and spontaneous order, and crucially lacked the means to create and process all the information required for efficient production and consumption. 19 Prices, by contrast, provided the signal for individuals to make their own choices, and inequality of outcome rewarded people who made the right decisions (or incentivised improvement, if they made the wrong ones). 20 In this sense, the role of the state was to establish the infrastructure for economic competition between private agents, and to provide limited, non-redistributive, social welfare (i.e. that which did not interfere with the rewards and pricing central to market competition). 21 As Foucault suggests, for early neoliberal theorists the market and its governance requirements thus constituted the raison d'être and limit of the state, but markets were not to be laissez-faire. 22 Unlike proponents of classical liberalism, neoliberal thinkers did not see the market as a natural phenomenon. From their perspective, states would be required to establish frameworks for economic activity, and to constantly monitor and intervene to guarantee competition (for instance, to prevent unfair practices) and manage the environment required for enterprise (e.g. by supporting education). This work would be ongoing as capital consistently produced newer and newer circumstances and arrangements. 
23 Over the post-war decades, neoliberal critiques of state and economy were promoted by international networks of economists, political scientists and philosophers through think-tanks, business organisations, journalism, and academia. 24 Core ideas and languages changed over this period. At the height of the Cold War, figureheads like the American economist Milton Friedman amplified a rhetoric of laissez-faire and economic primacy, even if this looked very different from eighteenthand nineteenth-century variants. 25 Likewise, post-war neoliberal thinkers moved economic analyses into new realms. 26 Paramount in the British case were critiques of areas previously seen as distinct from private enterprise: state bureaucracy and professionally delivered welfare. According to such analyses, state employees and welfare professionals were not altruistic or service-oriented so much as self-interested and unaccountable. 27 Slowly, neoliberal critique concerning the efficiency of markets, the moral and political importance of competition, and the regulative role and limits of the state seeped into British political discourse, and Conservative politicians in particular engaged earnestly with these ideas from the 1970s onwards. 28 Arguments about the degenerative effects of the state on British life were central to crisis narratives around supposed political consensus, providing the platform for the 1979 Conservative election victory. 29 Into the 1980s and 1990s, neoliberalism was but one ideological framework within which the Conservative Party developed its thinking. 30 All government policy was subject to the dynamics of British politics, from the electorate's attachment to redistributive welfare (embodied in the NHS) to ministerial individuality and constraints imposed by previous policy decisions. 31 Indeed, a leaked government think-tank paper in 1982 proposed remaking health services on an insurance basis. It provoked such political backlash that future policy groups consistently dismissed the idea, and the Prime Minister felt it necessary to insist that the NHS was 'safe in our hands' . 32 Nonetheless, over the 1980s and 1990s, neoliberalism as a rationality for organising the state slowly gained influence, even if providing only a set of analytical principles rather than a dominant grand plan. 33 Thus the Conservative governments of 1979-97 reformed union rights and social security, arguing that such changes would enhance labour market functionality and restore the political bulwark of market choice and democratic decision-making. 34 They denationalised firms and industries to reduce public ownership and promote market competition, establishing the state in a monitoring and regulatory role. 35 Such ideas even entered into the provision of welfare services. In housing, alongside the 'right to buy' council house scheme, the Thatcher administrations repositioned the central state as a distributor of public funds and local councils as 'strategic enablers' for alternative providers. Compulsory competitive tendering was introduced for delivering new projects, and even non-profit housing associations had to compete for fixed grants on new builds and raise private finance for social housing projects. 36 In healthcare, neoliberal reforms built upon earlier impulses and tools that had been introduced to control costs and assist planning. 
Furthering practices of oversight and priority-setting developed under Labour governments, between 1982 and 1983 Conservative administrations instituted a review of financial auditing of NHS bodies, created a host of performance indicators (to enable cross-authority comparisons on resource use), and introduced annual performance reviews of health authorities. 37 As noted, parliamentary scrutiny committees provided a cross-party prompt in this direction. 38 The first major change in policy, though one intended to support reviews and efficiency measures, was the introduction of general management into the NHS. Consensus management teams were replaced by individual managers at each level of the health service, ensuring 'responsibility drawn together in one person, at different levels of the organisation, for planning, implementation and control of performance' . 39 The reforms followed a six-month review of the NHS conducted by a small team of senior civil servants and business leaders, led by Roy (later Sir Roy) Griffiths, then managing director of Sainsbury's supermarkets. 40 The resulting report, whilst very respectful of clinicians and the NHS, echoed public choice theorists by suggesting that it 'cannot be said too often that the National Health Service is about delivering services to people. It is not about organising systems for their own sake. ' The solution to inwardly focused professionals and bureaucrats emerged from the argument that there were 'clear similarities between NHS management and business management' . Thus the NHS could be subject to the same sorts of management roles and strategies as certain forms of profit-oriented organisation: managers would be concerned with 'levels of service, quality of product, meeting budgets, cost improvement, productivity, motivating and rewarding staff, research and development, and the long-term viability of the undertaking' , all of which, 'in the private sector … would normally be carefully monitored against pre-determined standards and objectives' . 41 As well as undertaking 'real output measurement, against clearly stated management objectives and budgets', managers were charged with enrolling clinicians more effectively into management (and doctors were invited to become managers themselves). 42 Doctors -whose decisions allegedly 'dictate[d] the use of all resources' -would help to set priorities, establish measurements of output in terms of patient care, and 'accept the management responsibility that comes with clinical freedom' . 43 Expressing ideas of what scholars would call 'new public management', Griffiths thus suggested that incorporating accounting and managerial techniques from 'private'-sector bodies would help reorient the NHS to efficient 'public' service. 44 Following Griffiths, the government embarked on a broader remaking of the health service in line with neoliberal and new public management analyses. 45 The Conservatives laid out the initial direction of travel in 1986 and 1987, with consultative and programme papers for reforms to primary healthcare. 46 Here the government discussed targetbased, pay-for-performance elements of GP work, as well as strategies to reduce resource-wasting variations in care. 47 Under the new plans, Family Practitioner Committees, previously administrative agencies, would be reconstituted as managerial bodies. 
Through specified performance indicators and annual reports from GPs, the committees (later renamed Family Health Services Authorities, FHSAs) could assess the quality and level of primary care provision. Furthermore, the committees would monitor variations in practice standards and care (e.g. differences in referrals) and be empowered to obtain independent professional advice to improve activity. 48 These reforms were enforced, despite considerable resistance, in the 1990 GP contract. 49 Complementary changes to the dynamics of the NHS were introduced by the 1989 White Paper Working for Patients and the resultant 1990 NHS and Community Care Act. 50 Through these documents, the third Thatcher administration (1987-90) made alterations to the roles and funding of health authorities, hospitals, GPs, and the Department of Health, all structured by the belief that 'the Government's main task must be to set a national framework of objectives and priorities' . In turn, authorities and managers -although 'remaining accountable to the centre' -'must then be allowed to get on with the task of managing' . 51 Under the new reforms, RHAs survived, but were expected to concentrate on the core managerial tasks of 'setting performance criteria', 'monitoring' and 'evaluating' performance in line with government objectives. 52 They retained responsibility for numerous operational roles (such as blood transfusion services), but were expected to delegate as much responsibility to districts as possible. In turn, District Health Authorities were to delegate as many operational functions to units (hospitals) as was feasible, whilst 'ensuring that the health needs of the[ir] populations … are met' . 53 Crucially, under the new arrangements District Health Authorities became purchasers of services as well as management bodies. RHAs received central funds, transferring money to districts according to assessed needs. 54 District authorities subsequently 'purchased' care for their patients from hospital 'providers', which might either be directly managed (through devolved management budgets) or exist as independent trusts (initially restricted to hospitals of over 250 beds), or might operate in another district (where superior services or rates might be offered) or work in the 'private sector' . 55 Management of activity came either through a mixture of standard-setting and performance review (in directly provided hospitals) or from contracting (in the case of trusts or private providers). Finally, GPs were also offered the opportunities to become 'fundholders' . If large enough (initially having at least 11,000 registered patients), a practice could apply to receive money for a defined range of non-acute services, and then 'purchase' care directly for patients from hospitals within or without its district. 56 As well as being a provider with a contract and payment for performance, it would also become a purchaser of services from secondary care institutions. These reforms had a range of aims, not least to enable finance to flow with patients through the service and to depute operational -and thus political -accountability to non-governmental parts of the state. 
57 Moreover, whilst undoubtedly challenging pre-existing relationships and laying foundations for expanded private involvement in service delivery, the government's rejection of charges and insurance options meant that the reforms respected two significant principles of the NHS: central funding by taxation and universal access -a core of the supposed 'post-war consensus' -remained intact. 58 Underpinning these changes, however, was an analysis consonant with contemporary neoliberal values. The introduction of stricter monitoring and accountability practices would prevent state-employed professionals from empire-building and direct their energies to meeting 'legitimate' objectives, namely providing service within available budgets. Some of this surveillance was to be undertaken at the institutional level, through reviews of practices and health authorities. However, self-review would also be performed by doctors through mandated clinical audit, with the aim of 'learning lessons', and potentially identifying and correcting costly variations in care. 59 Regulated competition, now incorporated into state services, was also intended to ensure the most efficient use of state resources and maximise quality within a given capacity. The 'best' hospitals would supposedly attract funding from GPs and District Health Authorities, whilst contracting and fixed budgeting for institutions would encourage innovation and clinical efficiency. 60 Finally, the intended efficiency savings would facilitate reduced taxation and lower 'inflationary spending', freeing capital for politically and economically desirable entrepreneurial activity. Notably, the state in this vision was not 'rolled back' . Instead, through the use of contracting, targets, review, and financial deputation, the central state had the potential to extend government influence further into individual units, and from here to third-sector and private providers. 61 Such trends were also manifest in areas like education, with the 1988 Education Reform Act enabling individual schools to opt out of local authority management and funding, but only at the cost of a national curriculum, results tables, competition for places, targetsetting, and external audit. 62 The reforms of the 1980s and early 1990s supported the management of medical labour in numerous ways, with legislative documents referring to clinical protocol, guidelines, and audit. Conceptually, professional management fitted neatly with the managerialism of neoliberal analyses. As with performance management and dispersed statecraft, guidelines disaggregated and codified the tasks of clinical workers, and audits subjected work and outcomes to supposedly objective measurement against pre-stated, quantified performance indicators. 63 Of course, although clear conceptual cross-overs exist, there is nothing inherently neoliberal about establishing guidelines or setting and auditing targets. Socialist regimes and social democratic planning operate through similar practices. 64 Nonetheless, within the neoliberal-inspired reforms of the early 1990s, the management of professional labour became tied to projects to introduce accountability practices for bureaucracies and to foster competition and market activity in public life. In terms of medicine, guidelines and auditing not only provided potential tools for judging professional work and reducing costly variations in care. 
They also provided mechanisms through which contracting could take place, and new providers be brought into contact with state finance. For the Department of Health, then, promoting managerial technologies in medicine could serve multiple purposes and smooth the implementation of broader projects to remake the state and its major services. The effective implementation of government reforms relied upon co-operation from medical professionals. Politically, the response from the major professional bodies, individual doctors, and their allies within Parliament and the media was overwhelmingly critical. Many critics argued that the government sought to 'destabilise the NHS and replace it with a commercial' alternative. 65 Yet opposition on structural elements of reform masked support for elements of professional management. For instance, one contributor to the BMJ suggested that 'the notion of health care being bought and sold as a market commodity' raised 'fundamental questions about the possible lack of safety nets within a restructured health service' . Nonetheless, the author continued by declaring that 'the need for greater accountability is incontestable' . 66 Likewise, the well-known socialist GP Julian Tudor Hart strongly criticised the government's proposals for potentially distorting good medical practice, but his critiques of 'paying for means rather than ends' in the GP contract did not condemn targets or incentivised work per se. 67 For all the rancour, the third Thatcher administration and first Major administration (1990-92) passed legislation and imposed contracts, and thought turned to how to make the best of the new dynamics. More importantly, a cross-over of interest in the management of professional labour enabled elite professional bodies, specialist practitioners and researchers, the Department of Health, and the NHS to construct a consensus around managerial policy. Neoliberal critique may have brought government to the table, but pre-existing professional interests in management were essential to making managerial policy. Managing diabetes and its professionals under British neoliberalism Diabetes management was heavily influenced by the NHS reforms. Before the internal market, financial stringency was a challenge and potential opportunity for innovators. We noted in Chapter 4 how neoliberal politics interacted with the management of retinopathy. Policy support for statistical indication linked to cost reduction, moreover, provided opportunities for reformers and planners in other areas. In Manchester, for instance, pioneers of new diabetes centres -institutions dedicated to more patient-centred, multi-disciplinary care than outpatient clinics -used political drives for audit and reduced inpatient costs to their advantage. Through statistical analyses and the promise of savings, innovators garnered political support for organisational change. 68 Likewise, physicians in the South-East Thames Region formed a diabetes group to facilitate the construction of a strategic plan. The group used Hospital Activity Analysis data and questionnaires to calculate the costs and activities associated with diabetes care, making the case for better forward projections and expanded staffing. Once again, they justified such activity on the grounds of reduced costs. 69 Diabetes management, however, was also tied into neoliberal reforms and concerns evoked in government discussion of 'quality' . 
70 As noted in the Introduction, diabetes care formed a central plank in the 1990 GP contract, which had been designed to improve the management of chronic disease. The contract built on pre-existing models of GP miniclinics in diabetes and other conditions (see Chapter 2) and reinforced professional interest in systematic, managed care (Chapter 3). Moreover, the contract was predicated on the rationale that improved clinical practice could achieve public health aims of secondary prevention of long-term sequelae (Chapter 1). Unlike professionally designed schemes, however, the government contract attached financial incentives to practice-based disease management. In exchange for payment, GPs engaged in performance management relationships with FHSAs. Practitioners would develop protocols with fellow professionals, and the relevant FHSA would assess care against agreed criteria to determine financial recompense. The new arrangements, therefore, reflected the mix of projects supporting managed medicine. On the one hand, cognisant of the conflict over contemporaneous organisational change, the government left protocol and audit as the responsibility of local professionals. Doctors assumed control of developing and managing new tools, which the state encouraged through funding and providing platforms for exchange. 71 On the other hand, the government tried to connect management of clinical labour with performance management structures designed to promote public health and efficiency savings. Despite potential conflict, and practitioners' anxieties that incentives might produce adverse effects, these interests formed the basis of managerial policies, and diabetes provided a key area of intervention. 72 Government interest in diabetes thus partially derived from the condition's financial and humanitarian costs and its place within a broader landscape of worrisome chronic diseases. However, political focus on diabetes (and other chronic conditions) also underlined how government support for professional management gravitated towards conditions in which the infrastructure and momentum for managed practice had previously been established. As noted in earlier chapters, practitioners had experimented with local protocol for systematic care since the 1970s, whilst elite specialists and professional bodies had been producing national guidelines and undertaking audit of diabetes management for over a decade. The BDA and Royal Colleges had thus repositioned themselves as guarantors of quality structured care, and sought to produce guidance to inform local practice. Moreover, the infrastructure for professional co-operation was also already in place. The RCP and BDA, for instance, had developed close connections, auditing national provision of staffing and facilities of diabetes during the mid-1980s. 73 With both agencies interested in clinical audit, the Department of Health was able to facilitate ongoing developments, for instance funding a joint BDA-Royal College working group exploring routine audit of process and outcome in the early 1990s. 74 Similarly, the Department could also use funding to local centres of innovation - such as Manchester - as a means to further develop managerial tools. 75 Indeed, the link between protocol and payment in the 1990 contract undoubtedly reflected the pre-existing 'good sense' surrounding diabetes treatment and built such developments into the performance-related system.
International trends also accelerated the creation of managerial structures, and opportunities for professional-state co-operation, in British diabetes care. As noted in Chapter 5, the St Vincent Declaration of 1989 was integral here. The Declaration set out basic quantified targets for the care and prevention of diabetes to be applied across national contexts, and resulted from a conference of leading specialists, researchers, and civil servants held under the aegis of the European regions of the International Diabetes Federation and the WHO. 76 The British government signed the Declaration, which generated new national infrastructure. For instance, in 1992, the BDA and the health departments of England, Wales, Scotland, and Northern Ireland formed a joint St Vincent Taskforce to develop the auditing and care arrangements necessary to meet the proposed targets. The group comprised medical and nursing professionals, as well as healthcare purchasers, providers, and patient representatives. Some leading professionals even hoped that the Taskforce would be able to assist health authorities in their contracting duties, as purchasing bodies lacked relevant expertise. Once again, funding such work in diabetes was simpler than in other areas of care because of the infrastructure for co-operation already in place. The management of diabetes care, however, could also provide something of a model for the management of other conditions and areas of healthcare. Such a sentiment was expressed in the second report of the Clinical Standards Advisory Group (CSAG), published in 1994. 77 CSAG was a multi-disciplinary, statutory body with a rotating membership composed of nominees from the Royal Colleges and other leading professional bodies. 78 It was charged with making investigations into, and providing recommendations to government on, standards of care within given subjects. Created during intense conflict over NHS reforms, it was declared by politicians, members of the profession, and policy analysts to be an attempt to broker peace between professional bodies and the government. 79 Although potentially indicating the government's acceptance that the profession should set, monitor, and control its own standards, the Group's name and purpose also indicated a broad consensus over the need for more active surveillance and management of professional labour. The Group's second report was on diabetes care, and was researched and written by a specially chosen Diabetes Committee whose broad membership included both medical and nursing specialists as well as generalists, such as administrative officers, a nationally prominent GP, directors of public health, and leading figures within the RCP. 80 The Committee followed the remit laid out in Parliament: to 'advise on standards of clinical care for people with diabetes' , work which would entail 'reviews of existing statements of clinical standards, of the standards specified in NHS contracts, and of arrangements for auditing the delivery of services to contracted standards, in a representative sample of NHS districts and boards' . 81 Thus, upon creation, the Committee formed a sub-group (complete with co-opted members) to review nineteen existing international, American, and British standards, and to construct its own standards document. 
82 This document served as a benchmark for multi-disciplinary groups which then visited providers, purchasers, clinical teams, GPs, and 'consumer representatives' to assess provision in eleven health districts of different sizes, locations, and reputations. From these visits, the Committee produced site reports, and the parent body published the Committee's own standards document and anonymised findings, along with its recommendations and the government's response, in a final report. The relationship between diabetes and neoliberal healthcare reforms was visible, firstly, in the very terms of reference for the Committee. The Secretary of State for Health requested analysis of standards within contracts, as well as of the infrastructure in place for auditing contracts against those standards. Such demands were perhaps a reaction to broader concerns that integrated chronic disease care might have been disrupted by the 1990s reforms, and to the impenetrable 'wall' erected 'between the purchasing and provider role of the District Health Authority'. 83 In fact, worries about lost contracting expertise were so great that the NHS Executive in England commissioned guidance on needs assessment by one of the CSAG authors, and endorsed 'a small number of existing clinical guidelines' on diabetes in order to help purchasers draw up contracts. 84 (And such decisions, once again, marked points of convergence between professional visions of self-management and performance management of the health service.) Secondly, connections between NHS reforms, diabetes, and professional management can be seen in how the exercise itself acted as a local and national review of care. The Committee produced its own standards document, and the Group's findings influenced care in at least some of the locales visited (see below). Finally, the report itself articulated the possible attraction of diabetes as a conduit for further managerial developments. 'This study has shown', the authors noted, 'that standards of care can be assessed against a consensus document.' 'Our approach', they went on, 'would appear to be a useful model for assessing provision of care for other diseases of public health importance.' 85 It was a sentiment whose importance was amplified by the mixture of specialists and generalists on the Diabetes Committee, confirming the novelty of such managerial approaches at a systemic level as well as their applicability elsewhere. Enrolling the neoliberal state and creating managerial consensus in diabetes care Although there were areas of cross-over between elite professional endeavours to construct non-punitive technologies of medical management and neoliberal state programmes for professional and health authority performance management, these projects were by no means in complete alignment. Moreover, despite diabetes care providing an attractive proposition for the state to pursue its managerial interests, professionals themselves were central in promoting and co-constructing managerial instruments and policy around the condition. This is notable in the histories of both the St Vincent Declaration and the CSAG report on diabetes, as well as the creation of a later NSF for diabetes in Britain. The creation of the St Vincent Declaration was, for instance, pointedly political. The event owed much to the work of, amongst others (including British epidemiologist Hilary King), Professor Harry Keen, an internationally renowned British diabetologist.
Keen felt that an international initiative to improve diabetes care -one backed by the WHO -would pressure national governments into more concertedly addressing the growing challenge of diabetes at a clinical and public health level. 86 Furthermore, this political orientation was embodied in the form of the Declaration. Although the contents of the Declaration had been left to experts, and despite precedent in the WHO 'Health for All' initiative in 1979, there had been debate during drafting as to whether target-setting was appropriate (especially in the absence of baseline data) and what particular targets should be chosen. 87 However, targets were adopted specifically because those involved feared the Declaration would be toothless without them. In the event, the conference adopted a mixture of quantified outcome targets (for instance, halving the rate of gangrene amputations in five years) and specific process and structure objectives, such as establishing 'monitoring and control systems using state of the art information technology for quality assurance' . 88 Those who worked on the Declaration and its subsequent projects felt that it probably did not affect practice at the point of individual exchanges between clinical teams and patients. Crucially, though, the Declaration did provide political tools and momentum with which bodies like the BDA could lobby government, and through which individual practitioners could encourage local doctors and NHS authorities to take up auditing and guideline practices. 89 Moreover, professional lobbying was central to convincing the government to support the Declaration and to create subsidiary working groups. Interviewees who worked in relation to St Vincent, for instance, recalled civil servants' hesitancy about signing the Declaration. They noted departmental concern about 'special pleading', the idea that if the Minister for Health agreed to specific programmes for diabetes then the government would be open to similar claims for a host of conditions. Eventually, after concerted pressure, the UK did sign, creating a path for the creation of various groups for guideline and audit development schemes. 90 Post-war policy networks also secured political support for the CSAG review of diabetes standards. The review emerged, in part, from the fate of diabetes within the Major government's public health initiative, The Health of the Nation. This programme continued the work of Working for Patients in laying out a role for the state as provider of a 'strategic framework' for public health, based on managerial principles of calculated target-setting and continuous performance assessment. 91 The centre would develop objectives, and, freed from the burden of delivering services day-to-day, health authorities could use contracts to achieve them. 92 Initial consultation produced sixteen areas for possible intervention, including diabetes. Reflecting a growing faith in guidelines and auditing, the suggested diabetes targets included 'the proportion of GP practices within a FHSA area who follow protocols agreed locally between hospital clinicians and primary care staff ' . 93 Despite the BDA submitting persuasive arguments for diabetes, the subsequent White Paper adopted fourteen quantified targets for five key areas: coronary heart disease and stroke, cancer, mental illness, HIV/AIDS and sexual health, and accidents. 
94 The Major government suggested that these five areas met three key criteria, being areas of considerable premature death and avoidable ill-health, in possession of known effective interventions, and amenable to target-setting and monitoring. 95 Critics of the programme have suggested, by contrast, that alongside being causes of considerable NHS expenditure, the subjects chosen also contained historic trends favourable to future improvement for which the government might take credit. 96 Regardless of the reasoning, diabetes was omitted from the programme. However, interviewees who had close connection with the BDA suggested that the CSAG review of diabetes services was something of a 'sop' for the omission of diabetes from The Health of the Nation. The government was aware of needing to offer a concession, and influential figures within the CSAG parent group had colleagues' interests in mind when pushing for diabetes as an area of standards investigation. 97 The Group agreed, and the Diabetes Committee then pulled together leading figures in the field of diabetes management to drive the work forward. 98 In this sense, rather than professionals being enrolled into state projects, specialists and elite professional bodies used the state to engage in activities that fitted their own priorities, or at least to co-operate with the state in a way that would better manage British medicine and its populations. 99 As subsequent projects remained predicated upon professional expertise, participants believed that their work would improve care and empower professionals to manage their own practice, not only in ways that facilitated quality-assurance mechanisms, but also in ways with resource implications that conflicted with state concerns about costs. One site review from the CSAG report, for instance, provided the grounds for local institutions to hire a consultant diabetologist where previously one had not been in place. 100 Equally, as indicated above, one interviewee involved in policy work recalled how reports like the CSAG's provided a means for the BDA to make the case for further government or health authority activity, with changes probably increasing short-term financial costs. 101 As well as on professional and state co-operation, managerial policy for diabetes care also depended on the ways in which specialists moved between different bodies to produce a broad consensus on the core elements of 'quality' care. The existence of such consensus in diabetes care could be seen within the CSAG report, which suggested that 'within [the standards documents reviewed] there is a large measure of agreement on what constitutes care of acceptable quality' . 102 This overlap made it easier for the Diabetes Committee's sub-group to compile its own standards document, one which was wide-ranging in its focus but contained common elements discussed in Chapter 5, including lists of tests to be performed at medical and annual review, reflections on possible audit measures, recommendations for quantified performance indicators for patients, and discussion of the need for guidelines, registers, and recall-mechanisms. 103 In part, the commonality between extant standards documents reflected the broader 'good sense' about quality diabetes care discussed in earlier chapters. Yet this good sense -and its embodiment in the documentation of various agencies -was the product of elite practitioners and academics moving between bodies that produced standards and guidelines. 
Members of the CSAG, for instance, were involved in shaping the St Vincent Declaration and pioneered its subsequent work on audit. They also helped produce Royal College and BDA guidelines on diabetes management, worked on NHS Executive projects, and operated on many of the guideline committees formed and funded by the Department of Health. 104 Influential figures were also connected through training and research with other major figures in the field, such as Harry Keen, John Nabbarro, or Robert Tattersall. 105 Specific proposals and documents, in other words, emerged out of both broader political contexts and welldefined intellectual and policy communities. Moving between different levels of the health services, and different arenas of discussion and governance, helped these figures to align recommendations of local and regional NHS authorities, elite professional bodies, international organisations, and lay-professional and state-sponsored agencies. They thus provided sufficient agreement for managerial recommendations and infrastructures to emerge, and mediated potentially conflicting agendas. 106 Using government funding and activity, certain elite specialists and professional bodies helped set national standards and, through their production of tools for management, sat at the forefront of quality regulation and governance. At the same time, through its resources and support, the government sought to use this repositioning to its own advantage, encouraging professional management in ways that furthered neoliberal drives for accountability and financial control. Undoubtedly, there were tensions and conflicts. Governments did not always support the findings of committees. They could refuse proposals that had resource implications or required direct government intervention in service provision. Equally, different aims and political realities could undermine government efforts to impose forms of performance management or contain costs. Despite these conflicts, though, co-operation between state agencies and elite professionals laid the foundation for future political and structural transformations and the creation of more managed medical labour. Conclusion: NSFs and the making of managerial policy The structure of the NHS came under further scrutiny after the mid-1990s. The election of a Labour government in 1997 ended eighteen years of Conservative government and brought new analyses of the service to the fore. The Blair administrations ended fundholding and internal markets, but kept the division between purchasers and providers and enhanced primary-care influence over the service. New policy established the Primary Care Group -which brought together GPs and other primary healthcare providers in an area as budget managers -as the fulcrum of the service, and softened mechanisms of competition in favour of co-operation and long-term contracting. The new government also encouraged mixed-sector capital projects to increase hospital capacity. 107 Despite such changes, both Conservative and Labour governments from the early 1990s onwards retained an emphasis on guideline, audit, and healthcare management. Structurally, the Royal Colleges, elite specialist bodies, and ad hoc statutory groups had to share their role in producing guidance and undertaking review with new state agencies that reflected the growing rhetoric around Evidence-Based Medicine. 
During the late 1990s, independent Evidence-Based Medicine organisations like the Cochrane Centre were joined by state-sponsored agencies such as the National Institute for Clinical Excellence (NICE), Commission for Health Improvement, and National Audit Office. 108 New agencies could disrupt existing expert networks. For instance, one interviewee disliked the pressure for targets emerging from these agencies. A disagreement over the standardising drives of NICE meant that the interviewee was not involved in NICE guideline production work, despite great experience in this area. 109 Nonetheless, the emphasis of these agencies remained on providing guidance and undertaking review to ensure that local systems were set up to inform 'best practice'. 110 In terms of diabetes, the continuing political and professional support for managing medicine can be seen from the creation of an NSF for diabetes between 2001 and 2002. Once again, diabetes was at the forefront of managerial policy, with the diabetes NSF one of five initial frameworks designed to set national standards for care and provide strategic advice on achieving such standards. 111 The diabetes framework built on a belief in managerial technologies as central to 'driv[ing] up quality and tackl[ing] variations in care', although, marking a slight break with earlier standards, it was also oriented towards patient experience and empowerment. 112 The framework itself laid out twelve objectives for the NHS and discussed their implications for service providers and doctors, alongside providing a plan for how these objectives could be met. 113 It also found support in new contract arrangements for GPs established under the QOF in 2004, through which complex financial incentives were developed for diabetes management (and chronic disease management more broadly) and payment was closely related to process and outcome assessment. 114 As of 2018, both the QOF and the NSF are still in use, the QOF in formal contracting, the NSF indirectly, providing the basis for Diabetes UK's policy work. 115 Although the NSF appeared a striking innovation, interviewees involved in its creation underlined the importance of previous political work on diabetes to its construction, praising the policy networks, conceptual frameworks, and techniques developed over preceding decades. They recalled, for instance, the work of leading figures like Harry Keen and George Alberti (then President of the RCP of London), and lobbying from agencies like Diabetes UK. 116 Through slow concerted pressure and more light-touch conversations with ministers, civil servants, and the Chief Medical Officer, these actors were able to gain political momentum that was maintained by consecutive Ministers for Health. 117 Figures at the heart of this work and close to the External Reference Group that compiled the standards and delivery documents recalled using the intelligence and documents accumulated through the political efforts of the previous decade. 118 Indeed, the NSF itself directly made reference to 'build[ing] upon the vision of the St Vincent Declaration'. 119 The developments laid out in this chapter, and those preceding it, provided the groundwork for approaches to diabetes - and British medicine more broadly - that have lasted through to the present day.
Although, if focusing on contemporary infrastructure, we might suggest that the rise of managerial medicine was 'incomplete' by the mid-1990s, the principles, practices, and techniques of medicine that we have traced throughout the post-war period had nonetheless become the foundation for policy and professional practice. By the end of the 1990s, new actors were making managerial policy. None, however, questioned the idea that the structure and review of medical rhythms, decision-making, and outcomes were essential to guaranteeing quality care. By the start of the present century, the management of medical practice was an increasingly naturalised feature of the health services. Diabetes care, moreover, had been at the forefront of such developments.
Leverage Behaviour in the G ‐ 7 Countries and the Influence of The study addresses the capital structure readjustment process by comparing some theoretical predictions with statistical evidence from international data. Orthodox theories based on debt‐ratio mean reversion are challenged by testing the hypothesis of debt‐ratio target irrelevance and the proposition that institutional factors influence leverage behaviour. The results provide evidence for the dependence of market value debt‐ratio on stock returns in all the G‐7 countries. Ample corporate issuing is not used to counteract the effects of stock returns on capital structure. Firm specific characteristics supported by orthodox theories are found most applicable to the US, UK and Japan. The book‐value debt ratio shows relative dependence on past values, although this is found to be less the case in the Anglo‐American countries than in continental Europe. These results indicate that corporate management is not interested in market‐based debt‐ratio targets, book values may be of greater concern. JEL flokkun: G32 Lykil hugtök: Capital structure; debt ratio; target adjustment; corporate governance; The paper is based on a part of my M.S. thesis.I am indebted to Jan Bartholdy at the Aarhus School of Business and Gylfi Magnússon at the University of Iceland for their excellent supervision and Jesper Damkier at the Center for Analytical Finance, Aarhus School of Business, for his valuable support.Further thanks go to Ásgeir Jónsson, Jeffrey Cosser and an anonymous reader for helpful comments. Introduction The capital structure literature exhibits a number of elaborate theories predicting firm leverage behaviour, of which most are consistent with the presence of a debtratio readjustment process.The trade-off theory has been justified on the grounds of numerous individual economic and sociological factors which give rise to an optimal debt-ratio target.The pecking-order theory indicates that debt-ratio mean reversion takes place over time under plausible conditions.In fact, until recently, the randomness of capital structure was not a topic of much concern due to the convincing empirical evidence in favour of a target.However, the development of theoretical models and their application on ever more detailed sets of data in recent years leave many questions unanswered.This paper briefly reviews the literature and applies an international dataset to detect the factors that tend to dominate the debt ratio of publicly traded firms in the G-7 countries.The study draws on a few papers which have recently attracted attention by introducing new dimensions and controversies into the literature of capital structure.Welch (2004) challenges the static trade-off theory by finding evidence in the US contradicting the widespread assumption that the debt ratio readjusts to specific targets when offset by external forces.He introduces the "implied debt ratio" to observe the relevance of stock returns in time-contingent debt-ratio movements.He shows how stock returns tend to push market value debt ratios away from their previous levels and how lively debt and equity issuing activity is not managed to counteract such deviations.Secondly, Rajan and Zingales (1995) and others find some inconsistencies between theoretical prediction and leverage across countries due to differences in institutional arrangements between countries.Thirdly, part of the controversies stem from the behavioural finance literature.Baker and Wurgler (2002) cast doubt on the existence of a 
pecking order of financing by maintaining that market timing and equity issuing is a prominent method of financing, although capital markets are generally considered efficient.These findings are supported by Graham and Harvey (2001) and also Frank and Goyal (2003), who detect lively equity issuing in the US capital markets.This paper will touch on these three controversial topics more or less.As the paper is focused on cross-country comparisons, there are certain institutional aspects to bear in mind.Orthodox theories are limited by being "subject" to the Anglo-American marketoriented environment.Conflicts between stakeholders are particularly relevant in such a setup, due to dispersed ownership and the asymmetry of information transmission in financial markets.Where those features are dampened or blurred by a different financial infrastructure, theoretical predictions might become biased.By referring to the corporate governance literature, the G-7 countries should provide an interesting comparison with a representative spread containing both market-oriented and bank-oriented countries 2 . 2 The market-oriented countries, the UK, US and Canada, are characterized by large developed financial markets, whereas France, Italy and Germany rely more on bank financing.(Japan can be fitted to both groups depending on criteria).The corporate governance literature distinguishes between countries in such a way by referring to To summarize the capital structure issues tackled by this paper, they can be presented as the following hypotheses for testing: (i) The hypothesis of target irrelevance.The debt ratio shows no tendency to readjust to prior values or a fixed target because it is dominated by the influence of stock returns.Any debt ratio is as good as any other.(ii) The ample issuing hypothesis.Corporate issuing activity is sufficient to counteract the influence that stock returns have on the debt ratio, but management chooses not to use it.(iii) The stock-return hypothesis.Amongst firm characteristics, stock returns dominate other profitability measurements in terms of significance and effects on leverage behaviour.(iv) The target-irrelevance model hypothesis.The implied debt ratio is the only relevant debt-ratio determinant because it is driven by stock returns and the actual debt ratio is allowed to fluctuate accordingly, i.e. in a one-to-one relationship.(v) The corporate-governance hypothesis.Cross-country institutional differences have implications for the theoretical predictions of capital structure.The trade-off model shows greater resistance against the target-irrelevance model within the market-oriented countries and Japan.(vi) The ranking hypothesis.The economic impact of the implied debt ratio should increase when narrowing the debt definition towards long-term debt.This is because short-term debt is considered to facilitate more counteracting potentials than long-term debt.Debt-to-capital book value is least prone to stockreturn influence by definition and nature.(vii) The book-value hypothesis.The book-value debt ratio of the market-oriented countries shows less target persistence and more stock-return dependence.Accounting statements in the market-oriented countries are important signalling documents and are better connected to market values than those in the bank-oriented countries.Further, if equity issuing is dependent on market timing, there should be stronger correlation between the book and market values of the debt ratios. 
The results imply that there is considerable corporate issuing amongst publicly traded firms, but it is not devoted to debt-target adjustments in any of the G-7 countries.The theoretical factors provided by conventional theories to explain debtratio movements, e.g.firm characteristics, are influential but become trivial in comparison to the significance of stock returns.Some institutional effects are noted, both influencing the method of financing and the behaviour of book-value debt ratio. Motives for capital structure management In recent years, conflicts between capital structure theories have progressed, both with respect to the existence of a target equilibrium and the debt-ratio adjustment process.This section will briefly address these issues in the light of conventional and recent theories provided by the literature. institutional arrangements and their effects on shareholder conflicts through ownership and control. Capital structure management with a target A number of theories in modern capital-structure literature suggest the existence of an optimal debt-ratio target, derived from weighing benefits of debt against costs.Together they give rise to the trade-off theory.Firstly, Ross (1977) suggests tradeoffs between benefits and costs of debt derived from agency costs of asymmetric information.The debt ratio serves the purpose of signalling information to the financial markets about the debt capacity of the firm, i. e. the firm's true value, in a manner introduced by Spence (1973).Leyland and Pyle (1977) use same setup to show how the entrepreneur signals the true worth of his project, but by means of opposite signs, i.e. increasing equity ratio instead of debt value.The second type of trade-off stems from agency costs of shareholder and manager conflicts as introduced by Jensen and Meckling (1976) and Jensen (1986).Managers are considered to act in their own interest by consuming perquisites and ploughing idle cash into mature businesses for empire building, resulting in "organisational inefficiencies".Here debt would trim the firm of idle cash and place constraints on perks.Thirdly, there are trade-offs from agency costs of shareholder and creditor conflicts, e.g.under-investment and other shareholder temptations to fool creditors by "playing games" 3 .Finally, in addition to the agency costs, trade-off theories are further supported by weighing the benefits of interest tax shields against bankruptcy costs of financial distress.Although the role of taxes is thoroughly underpinned by the trade-off theory, empirical work does not seem to bear it out convincingly. 
4 Capital structure management without a target Two capital-structure theories of interest challenge the trade-off theory.A crucial ingredient of the pecking-order theory advocated by Myers and Majluf (1984) is insiders' preoccupation with the stock price and a resulting interaction between insiders and the stock market.Due to asymmetric information, financing decisions can be ordered according to their effects on the stock price through interpretation of signals.The ranking or pecking order favours internal financing through reinvested earnings, as this has the most favourable effect on the stock price.However, new debt issues are of secondary importance and equity issue is the least attractive method, as external financing is considered to exert negative effects on stock price.What is more, the pecking-order theory explains why the tax shield, as a motivation for debt accumulation, as discussed earlier, is of secondary importance.The financial slack is used to pay off debt, whereas deficits call for issuance of the safest security first and equity financing as a last resort.Hence, there is no target adjustment in the short run because leverage is correlated with the cash flow.Yet, if investments are lumpy and positively serially correlated and the business cycle systematic, there will be strings of years bringing financial deficits and surpluses.Under such conditions, 3 Myers (1977) describes the hazards of under-investment.Brealy and Myers (2003, p. 505) mention a number of motives for playing games, e.g."risk shifting", "no added equity", "cash in and run", "bait and switch" and "playing for time". 4For example Miller (1977) and Harris and Raviv (1991) discuss tax effects on these lines.A tax variable was tried in several regression specifications to explain leverage, but without success (see Section 6.1).debt ratios will not follow a random walk but rather mean-reversion, and the targetadjustment model will consequently turn out to be appropriate, even though no target exists. The other theory of interest is a recent study which presents capital structure as a random variable, somewhat in line with the interpretations of Modigliani and Miller (1958).Welch (2004) shows how the market value debt-ratio fluctuates in accordance with stock returns, as if the preferred target were stock-price dependent, i.e. 
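To see how this can happen, the following minimal simulation sketch (not taken from the paper; the deficit process and parameter values are invented purely for illustration) shows a firm that simply borrows to cover a cyclical financing deficit and retires debt in surplus years. Fitting the target-adjustment regression to the resulting series yields a positive estimated adjustment speed even though the firm never targets any particular debt ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pecking_order(T=300, amplitude=0.10, noise=0.005, period=10):
    """Firm finances a cyclical deficit with debt and retires debt in
    surplus years; assets are held fixed so the debt ratio is driven
    purely by cumulative deficits. No debt-ratio target is imposed."""
    debt, assets = 0.30, 1.00
    ratios = []
    for t in range(T):
        deficit = amplitude * np.sin(2 * np.pi * t / period) + noise * rng.standard_normal()
        debt = max(debt + deficit, 0.0)   # pecking order: debt absorbs the deficit
        ratios.append(debt / assets)
    return np.array(ratios)

D = simulate_pecking_order()
# Fit the target-adjustment form  dD_t = a + gamma * (mean(D) - D_{t-1}) + e_t
dD = np.diff(D)
X = np.column_stack([np.ones_like(dD), D.mean() - D[:-1]])
a, gamma = np.linalg.lstsq(X, dD, rcond=None)[0]
print(f"estimated adjustment speed gamma = {gamma:.2f}")
# gamma comes out positive, i.e. apparent mean reversion, although the
# simulated firm never aimed at any particular debt ratio.
```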
no particular fixed target exists.However, some limited debt-ratio reversion is found to take place towards previous historical debt-ratio values over long periods.A number of theoretical suggestions are presented.Firstly, the apparent lack of a target optimum corresponds with the pecking-order predictions.If there is correlation between cash balance and stock price, the readjustment inertia could be explained by the pecking order.However, stock price is more often psychological and random, whereas flow of funds and financial slack tends to be serially correlated (Myers (2003)).Secondly, asymmetric behaviour on the upside and downside of stock-price movements could result in random movements of debt ratio over time.On the upswing, management may refrain from issuing rebalancing debt due to entrenchment, whilst during the downswing, management avoids issuing rebalancing equity since it is regarded as undervalued.Thirdly, readjustment inertia could be provoked by high transaction costs of refinancing, in the wake of continuous debt-ratio fluctuations.However, this is doubtful, as refinancing is relatively cheap.According to Graham and Harvey (2001), transaction costs are not viewed as obstacles by firms. Fourthly, their survey indicates that "financial flexibility" and "credit rating" are the primary debt-policy factors in US firms.Such motives and many more found in their survey are inconsistent with active target readjustment and could account for random debt ratio.Finally, inefficient markets and irrational or behavioural factors might overrule target adjustments and provoke the observed adjustment inertia.In that sense, many factors can be contributing at the same time, e.g. market timing (Baker and Wurgler 2002) and irrational behaviour (Benartzi and Thaler 2001). The target-adjustment process Having reviewed theoretical propositions for the time-contingent movements of the debt ratio, the next step is to see how they fit the target-adjustment model: (1) ∆Dit = α + γ(Dit * -Di,t-1) + εit Shyam-Sunder and Myers (1999) explain within this framework how the management of a firm (i), can maintain the firm's capital structure at the optimal debt-ratio level at all times (Dit * ).Should a random shock, e.g. 
an unexpected stock-price movement, occur and push the capital structure (D_it) away from the optimum, the management can ensure readjustment. The speed of readjustment is reflected by the coefficient "γ". The optimal debt level is, however, unobservable. If it is assumed to be dependent on lagged values, the debt will have a mean-reverting behaviour: it will tend to bounce back to a mean value. Instantaneous target adjustment, if γ = 1, is an interesting special case of equation (1):

(2) $D_{it} = D^{*}_{it} + \varepsilon_{it}$

This result hinges on the assumption that the adjustment is instantaneous, the intercept being zero and the random term being white noise. Restrictions on the adjustment coefficient have implications. If the adjustment coefficient is negative, firms will move away from target equilibrium. If 0 < γ < 1, then the adjustment process is gradual, indicating the influence of adjustment costs. Such costs are weighed against the cost of deviating from the target optimum, giving rise to an optimal speed of adjustment. The trade-off theory translates quite neatly to an empirical model of cross-sectional analysis through the target-adjustment model. At the optimal debt-ratio level (D*_it), the advantages of borrowing and costs of financial distress would be balanced off at the margin. With reference to equation (2), firm "i" should be expected to operate, on average, at or close to D*_it at time t. The trade-off theory specifies different optimal levels, dependent on different firm-specific characteristics. In that way it can be used cross-sectionally to explain leverage behaviour as a function of observable firm characteristics or proxies (Z_it), these being approximations for theoretical factors:

(3) $D^{*}_{it} = \beta Z_{it}$

By inserting (3) into (2) we get:

(4) $D_{it} = \beta Z_{it} + \varepsilon_{it}$

Thus, the trade-off theory predicts, in addition to the mean-reversion debt-ratio behaviour over time, that there is a cross-sectional relationship between debt ratios and those factors that affect the costs and benefits of leverage. It predicts that firms with a lot of taxable profits, little growth and investment opportunities but a lot of tangible assets will prefer relatively high debt ratios. Accordingly, the debt ratio is presumed to correlate with profits, tax rates, tangible assets and business risk. These factors, and many more provided by the literature, are represented as firm-specific characteristics (Z_it) in equation (4). Shyam-Sunder and Myers (1999) challenge the target hypothesis implied by the trade-off theory within this framework by detecting some mean reversion as a result of the pecking order. In comparison, Welch (2004) proposes little reversion: stock returns automatically work the debt ratio up or down, through the market value of equity (in the debt ratio's denominator), because the management does not counteract its influence on the capital structure. The debt ratio, fully influenced under such conditions, is defined as the "implied debt ratio" (D^I) and the influence is of primary importance. The target-irrelevance theory centres on rejecting a causal relationship between the debt ratio and firm-specific characteristics. The trade-off theory's interpretation of such a causal relationship is regarded as a type-II error, i.e.
it is perceived to mistakenly "pick up" the effects of other, non-trade-off factors through firm characteristics 5 . The error is generated by the correlation of firm characteristics with the stock-return-induced debt ratio, and the firm characteristics are thus wrongly valued as determinants of the debt ratio. The fact is that whilst the trade-off theory assumes stable and targeted debt ratios for each individual firm, based on firm characteristics (βZ_it), they are in fact randomly dependent on the stock-return-induced debt ratio (i.e. the implied debt ratio, βD^I_it). Debt-ratio dependence on the implied debt ratio, fitted within (1), results in:

(5) $\Delta D_{it} = \alpha + \gamma(\beta D^{I}_{it} - D_{i,t-1}) + \varepsilon_{it}$; where $D^{*}_{it} = \beta D^{I}_{it}$

(6) $D_{it} = \alpha + \beta D^{I}_{it} + \varepsilon_{it}$; a deduced form if γ = 1

The target-adjustment model can be empirically estimated and used to test for the nature of debt-ratio readjustments by solving (5) as shown by (7):

(7) $D_{it} = \alpha + \gamma\beta D^{I}_{it} + (1-\gamma)D_{i,t-1} + \varepsilon_{it}$

If γ = 0, the debt ratio will be stable and fixed, on average, around a historical mean, but if β = γ = 1, the debt ratio will spontaneously adjust to a randomly driven equity term in the debt ratio and no mean reversion will emerge. Note that both the trade-off model (4) and the stock-return-induced debt-ratio model (6) suggest that γ = 1. Yet they tell completely different stories, and nesting one model within the other compares their relative importance by collecting the sum of (4) and (6):

(8) $D_{it} = \alpha + \beta_{1} Z_{it} + \beta_{2} D^{I}_{it} + \varepsilon_{it}$

A formal framework for analysis

The foundations for empirical testing need to be elaborated. The structure is based on Welch (2004), with some variants, and is to be applied to an international dataset. The main concern is to see whether actual debt ratios behave as though firms, on average, readjust their debt ratios to previous levels or to a static target. If this is not the case, attention is drawn to the effects of stock prices and whether the capital structure is allowed to fluctuate accordingly. Thus the factors of interest can be incorporated within the target-adjustment model (1) and presented in the following estimation equation, as suggested in (7):

(9) $D_{t+k} = \beta_{1} + \beta_{2} D_{t} + \beta_{3} D^{I}_{t,t+k} + \varepsilon_{t+k}$

The term D_t is the actual firm debt ratio at time t, defined as the book value of debt (d_t) divided by the book value of debt plus the market value of equity (e_t):

(10) $D_{t} = d_{t} / (d_{t} + e_{t})$

The term D^I_t,t+k is the implied debt ratio, which is relevant when no corporate issuing takes place over the time period from t to t+k. In other words, d and e keep their fixed values from time t whilst the sole varying factor over the period t+k in D^I_t,t+k is the stock return (x):

(11) $D^{I}_{t,t+k} = d_{t} / (e_{t}(1 + x_{t,t+k}) + d_{t})$

5 … evidence for the existence of a target. They find the pecking order at work and argue that, in view of the low power of the trade-off model, it usually attracts statistically more attention than it deserves.

There are two competing hypotheses nested within (9): a perfect debt-ratio readjustment, over the time period t+k in the wake of deviating stock-price effects, is supported by β2 = 1 and β3 = 0. On the other hand, there can be a total lack of readjustment, with β2 = 0 and β3 = 1. The intention of including the constant β1 in equation (9) is to capture the effects of a constant, non-changing target debt ratio over the time period t+k. If the target debt ratio were constant, the scenario would change to β1 = 1, β2 = 0 and β3 = 0.
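As a concrete illustration of equations (9)-(11), the Python sketch below builds the actual and implied debt ratios from a firm-year panel and runs one cross-sectional regression of the nested kind. It is an illustrative reconstruction rather than the author's code, and the column names ('firm', 'year', 'debt', 'mkt_equity', 'ret') are placeholders, not Datastream or Worldscope field codes.

```python
import pandas as pd
import statsmodels.api as sm

def add_debt_ratios(panel: pd.DataFrame, k: int = 1) -> pd.DataFrame:
    """Add D_t, the k-year-ahead actual ratio D_{t+k}, and the implied
    ratio D^I_{t,t+k} (equation 11) to a firm-year panel. 'ret' is
    assumed to be the stock return earned during year t."""
    df = panel.sort_values(["firm", "year"]).copy()
    df["D"] = df["debt"] / (df["debt"] + df["mkt_equity"])            # equation (10)
    # gross stock return from t to t+k, built from annual returns
    df["cumret"] = df.groupby("firm")["ret"].transform(lambda r: (1.0 + r).cumprod())
    grp = df.groupby("firm")
    df["x_fwd"] = grp["cumret"].shift(-k) / df["cumret"] - 1.0
    df["D_impl"] = df["debt"] / (df["mkt_equity"] * (1.0 + df["x_fwd"]) + df["debt"])
    df["D_fwd"] = grp["D"].shift(-k)                                   # actual ratio at t+k
    return df

def cross_section(df_year: pd.DataFrame) -> pd.Series:
    """One cross-sectional regression D_{t+k} = b1 + b2*D_t + b3*D^I + e.
    b2 near 1 and b3 near 0 means full readjustment; b2 near 0 and b3
    near 1 is the target-irrelevance outcome."""
    data = df_year.dropna(subset=["D_fwd", "D", "D_impl"])
    X = sm.add_constant(data[["D", "D_impl"]])
    return sm.OLS(data["D_fwd"], X).fit().params
```

Averaging such year-by-year coefficient estimates over the sample period, and comparing the weight on the past debt ratio against the weight on the implied debt ratio, is what underlies the readjustment evidence reported below.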
The corporate issuing activity which generates the dynamics of the capital structure and underlies equation ( 9) is represented by debt changes and equity changes over the time period t+k.The debt amount changes over time with new debt issues, debt retirements, coupon payments and debt-value changes, and can be represented by Δdt,t+k (total debt net issue): (12) dt+k = dt + Δdt,t+k In a similar manner, the amount of corporate equity issuing activity can be represented by Δet,t+k (net equity issuing), incorporating equity changes driven by new equity issues net of equity repurchases 6 : Hence, the dynamic underlying equation ( 9) becomes: The structure of analysis is naturally incorporated within the target-adjustment model and as a result some restrictions will be imposed on the variable coefficients.By feeding the details of equations ( 9)-( 14) into the target-adjustment model (1), we obtain: Note that D * t+k is assumed to be partially dependent on D I , being β3 D I t,t+k.By setting β3 = 1, D I becomes the target and the instantaneous adjustment process (γ=1) produces a random outcome within the target-adjustment model: In other words, the random behavior of the defined target, along with speedy adjustments, does not imply mean reversion or readjustment to a historical mean as characterised by the trade-off model.On the contrary, the opposite is detected, a randomly fluctuating debt ratio, with no relation to the historic mean.This phenomenon should be observed if management is perfectly satisfied with the 6 Although equations ( 12) and ( 13) are not of relevance in the estimation procedures to follow, they are enlightening as descriptive statistics and when contrasting corporate issuing activity with the stock-return-induced equity for counteracting potentials.continuous variation of the market value of debt ratio and is only concerned with other aspects of financing. The trade-off theory assumes that firms operate at or near their target optimum.This is why the theory predicts cross-sectional debt-ratio correlation with firm-specific characteristics.The target-readjustment implications of the trade-off theory can be challenged again in a different way from (9) by taking account of the firm-specific characteristics.By nesting D I amongst the trade-off-influencing firm-specific factors, its relative importance can be compared with the other elements of the trade-off model: By setting equation ( 21) equal to the target D * t+k, and plugging that into the target adjustment model ( 15), we obtain: As can be seen, if β3 = 1, the coefficient of D I is restricted in this specification by γ. Further, the target-adjustment model restricts the importance of the trade-off proxies by the adjustment speed (γβ2Zt+k).This specification needs β3 = γ = 1 to support a hypothesis of a random target. The Data Thomson's DataStream was the main provider of statistical information, most importantly by supplying the Worldscope balance sheet data.An advantage of this choice was a relatively broad selection of time-series data stemming from a large number of firms in a single data source.Another advantage was the comparative quality of the data supplied with regard to differences in accounting standards.Furthermore, as Welch (2004) and Rajan and Zingales (1995) used Compustat, trying a different data source for the same purpose was considered a more interesting challenge. 
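The issuing decomposition around equations (12) and (13) can be backed out from the same kind of panel: the k-year change in debt gives the total net debt issue, while net equity issue is the part of the change in market equity not explained by the stock return. A rough sketch, again with hypothetical column names and normalising by firm market value as in the descriptive statistics, might look as follows.

```python
import pandas as pd

def issuing_activity(panel: pd.DataFrame, k: int = 5) -> pd.DataFrame:
    """Net debt issue (equation 12) and net equity issue (the Delta-e term
    described around equation 13) over a k-year horizon, normalised by
    firm market value at t. Column names are illustrative placeholders."""
    df = panel.sort_values(["firm", "year"]).copy()
    df["cumret"] = df.groupby("firm")["ret"].transform(lambda r: (1.0 + r).cumprod())
    grp = df.groupby("firm")
    x = grp["cumret"].shift(-k) / df["cumret"] - 1.0          # stock return t -> t+k
    d_fut = grp["debt"].shift(-k)
    e_fut = grp["mkt_equity"].shift(-k)
    mv = df["debt"] + df["mkt_equity"]                        # firm market value at t
    df["net_debt_issue"] = (d_fut - df["debt"]) / mv
    df["net_equity_issue"] = (e_fut - df["mkt_equity"] * (1.0 + x)) / mv
    df["total_net_issue"] = df["net_debt_issue"] + df["net_equity_issue"]
    df["return_induced_equity"] = (df["mkt_equity"] * x) / mv
    return df
```

Comparing the dispersion of total net issuing with the return-induced equity growth is essentially the counteracting-potential check mentioned in footnote 6: if issuing is of the same order of magnitude as return-induced equity growth, firms could in principle have offset stock-return shocks to leverage.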
The data sample period spans 24 years, from 1980 to 2003, and contains all publicly traded non-financial firms in the G-7 countries.The number of sample firms increased steadily over the sample period and the contrast between market-and bank-oriented countries is clearly reflected by the contrast of sample size.In the year 2002 the former group consisted of the US, with 5,797 sample firms and the UK, with 1,334, and the second consisted of France, with 661 sample firms, Italy, with 194 and Germany with 619.Technically Japan can be ranked with the former group when considering the size of the stock market, with 3,104 firms.Canada provided 841 firms.Different corporate governance systems are reflected through these figures and indeed in the number of firm-years, which will be referred to in the statistical tables. The accounting definition of debt may vary between the G-7 countries, and therefore we must rely somewhat on Datastream's data handling for comparability 7 .The selection of the relevant leverage measurement depends on what we want to interpret and data availability.The literature offers a number of options but a widely-accepted definition is that of debt to capital.The data source allowed three broad ways of measuring debt without sacrificing many observations, involving combinations of long-term debt, short-term debt and current liabilities.This study uses the ratio of long-term debt plus current liabilities to market value of capital. Both the numerator and denominator of the ratio can be exchanged for other alternative proxies incorporating the marginal benefits and costs of leverage.However, the chosen definition can be justified by being well accepted in the literature and results will be tested by other definitions for robustness. Target irrelevance vs. target adjustment The data will first be used to investigate the target irrelevance and ample issuing hypothesis (i) and (ii). Descriptive statistics Table 1 displays some descriptive statistics.The rows present normalized means of debt and equity financing, with the respective standard deviation stated below each mean 8 .Starting at the top of the table, we find the means and medians of the actual and implied debt ratios for the one-year horizon.The implied debt ratio does not deviate far from the actual debt ratio; however, in all cases except for that of Japan, the implied ratio deviates to lower values for the five-year horizon.Perhaps a consequence of low stock returns in Japan might explain why Japanese firms are the most levered in the G-7 countries.Another feature characterizing the Japanese firms is the low degree of standard deviation of the implied debt ratio.In fact, debt and equity issuing activity is also exceptionally low in Japan.The normalized net debt issue, on average, is only about 6% for the 5-year horizon and the total net issuing 8.6%, compared to a value generally around 30% and 50% respectively for the other countries.Germany is closest to the Japanese case as regards both issuing activity and low levels of return-induced equity growth, although there is a considerable difference in the debt-ratio levels.This applies both to the one-year and the five-year horizons.Whether a result of similar bank relations, economic climate or sheer coincidence, the uniformity stands out.The low degree of issuing over the sample period for the two countries could reflect constraints in the wake of economic stagnation, low investment levels and disappointing prospects (low returns) confronting banks. 
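For concreteness, the two helpers below compute the chosen leverage measure and the trimmed, market-value-normalised means used in Table 1 (footnote 8). The balance-sheet column names are generic placeholders rather than Datastream item codes.

```python
import pandas as pd

def market_debt_ratio(df: pd.DataFrame) -> pd.Series:
    """(Long-term debt + current liabilities) / (that debt + market equity),
    the debt-to-capital measure used throughout the study."""
    debt = df["lt_debt"] + df["curr_liab"]
    return debt / (debt + df["mkt_equity"])

def trimmed_normalised_mean(flow: pd.Series, mkt_value: pd.Series,
                            lo: float = 0.01, hi: float = 0.99) -> float:
    """Mean of a flow scaled by firm market value, after trimming outliers
    outside the given percentiles (1st/99th for the one-year horizon,
    5th/95th for the five-year horizon in Table 1)."""
    x = (flow / mkt_value).dropna()
    lo_v, hi_v = x.quantile(lo), x.quantile(hi)
    return x[(x >= lo_v) & (x <= hi_v)].mean()
```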
7 Rajan and Zingales (1995) suggest methods to improve the comparative quality of the Compustat data, which seem to be consistent with the data definitions of Datastream. 8 The means are normalized by firm market values and trimmed of outliers between the 1 st and 99 th percentile for the one-year case and 5 th and 95 th percentile for the five-year case.Apart from these two cases there is a general resemblance among the G-7 countries with respect to issuing and total return.First, the stock-return is around 6% of firm market value for the one-year horizon, with the market-oriented economies showing greater returns (around 8%) compared to the bank-oriented ones (3%).This value generally reaches 20% to 45% for the five-year horizon, again reflecting roughly the same pattern between the two groups.Secondly, debt issuing, which is by definition net of gross issuing and retirement of debt, seems in most cases to centre around 4% for the one-year horizon and roughly 35% for the five-year horizon.Thirdly, net equity issuing is generally of less importance and exhibits greater variation between the countries, being most relevant for the market-oriented economies and least for the bank-oriented economies.Finally, in terms of collective net issuing activity, the G-7 countries present a mean value of about 7% of firm market value for the one-year horizon and 50% for the five-year horizon, with Canada above average and Japan and Germany below.Again, the countries can be ordered into the same groups on the basis of total corporate issuing. Issuing activity can be described both in terms of mean levels and standard deviation. Stock returns and issuing reach the same mean levels, and also show, to some extent, corresponding variability.This tells us, for example, that the stock-induced equity growth heterogeneity for the five-year horizon in the US, 103.2%, is not far from the managerial-activity-induced heterogeneity of 122.2%.Thus, there should be sufficient issuing activity in the US to counteract any stock-return fluctuations affecting the capital structure.The mean and standard deviation of total issuing activity of all the G-7 countries follow the stock-return-induced equity growth fairly closely, with notably greater standard deviation in the market-oriented economies.This can be seen as evidence for the ample-issuing hypothesis (ii) 9 . We can conclude that although debt issuing activity does not seem to rank in any particular way, both equity issuing and total issuing activity seem to be of more importance in the market-oriented economies than in the bank-oriented countries.This corresponds both to the capital market size and also the stock returns of the G-7 countries.In other words, higher stock returns in the market-oriented countries might reflect economic progress and higher growth rates and therefore necessitate external financing, supported by larger financial markets.Furthermore, there is evidence of ample corporate issuing.9 This hypothesis was also supported when corporate issuing was estimated by computing "internal debt ratio" variables, e. g.: D c t,t+k = (d t + ∆d t,t+k ) / (d t + ∆d t,t+k + e t + ∆e t,t+k ); for total issue (D c ), etc.These were compared with the implied debt ratio (D I ) in terms of correlation with the actual debt ratio (D) by regressing D i t,t+k on D t, for i = I, e, d and c as suggested by Welch (2004). Testing for target irrelevance vs. target readjustment The target-adjustment process and the estimation of equation ( 9), i.e. 
results from the Fama-MacBeth regressions explaining future actual debt ratios (Dt+k) 10 , are displayed in Table 2.In general terms, similar characteristics are found for most of the G-7 countries, which supports the irrelevance hypothesis (i) confirmed by low constants (α) and low past debt-ratio (D) coefficients.Firms are reluctant to revert to their original actual debt ratio as reflected by the modest increase in α and D coefficients across the time horizon of five years.Firms allow their debt ratios to fluctuate with stock returns as reflected by the implied debt-ratio (D I ) coefficients.This tendency is greater for the relatively short horizon, 1-3 years (around 70-90% correlation) but becomes weaker for longer horizons, 3-5 years (around 70% and below), as can be noted from a drop in the D I coefficients.Accordingly, the actual debt ratio (D) gains significance and value with longer horizons, but only to a very limited degree.Furthermore, in competition with the constant, D also loses economic significance. The smaller D coefficient reflects a smaller desire on the part of firms to revert to their starting debt ratios than a tendency of firms to prevent debt ratios from wandering too far away from a fixed constant. Table 2 reflects the same characteristic for most of the G-7 countries.However, two idiosyncratic features appear.First, the UK and the US have the highest constants for all horizons, representing the relative importance of a static debt ratio.They increase from a value around 5% with regard to the one-year horizon up to one of around 15% in the five-year case.Second, the two countries show a particularly low and insignificant coefficient for the past debt ratio for all horizons (although incrementally increasing).In comparison, the two bank-oriented countries, Japan and Germany, display considerable readjustment during the one-year horizon but their features align with those of the other continental European countries for the three-year and five-year horizons.This relatively larger D coefficient (β2) for these countries could be a sign of a greater degree of debt readjustment represented by low-cost access to bank loans through bank relations.However, this explanation is not convincing, as the characteristic disappears over longer horizons.Moreover, Japan and Germany represent the only cases where the importance of past debt ratio and readjustment loses significance over extended horizons.Other countries, on the other hand, show some limited readjustment tendencies.For Japan and Germany, this could, on the one hand, reflect a shift in the optimal debt-ratio target in the interim.On the other hand, the high value of β2 is conceivably observed by a lower variation in actual debt ratios compared to the variation in implied debt ratios.With respect to the relatively low level of return and issuing dynamics presented for Japan and Germany in Table 1, the latter possibility is more appealing.The Canadian firms seem more in line with those of Japan and Germany, so any explanation regarding readjustment motives is difficult on the basis of institutional characteristics. 10 The method is based on repeated cross-sectional regressions over the continuous time interval of the sample period.The coefficients presented are the means of those collected from each regression. The target irrelevance model vs. 
firm-specific characteristics The framework of analysis has until now been limited to two variables.This constraint will now be relaxed by adding alternative variables to test the stock-return hypothesis (iii), the target-irrelevance model (iv) and the corporate-governance hypothesis (v). Estimation with added explanatory variables Attempts made to distinguish between theoretical factors influencing debt ratio in empirical work have not proved fruitful. 11Therefore the customary methodology has focused on nested models, explaining leverage behaviour by using a variety of variables that can be justified on the grounds of any theory.Most variables are represented by trade-off theory firm-specific characteristics.In comparison to earlier cross-country studies, the specification to be presented uses larger samples, a longer observation period and a different methodology.The Fama-MacBeth cross-sectional regression method is applied to variables selected from DataStream with respect to those recommended by prior literature and their general availability over the sample period (see appendix).The estimation equation below distinguishes between flow variables V i t,t+k collected over the time period t to t+k and stock variables V j t: Table 3 displays results which show how dominating the effects of the implied debt ratio are.Several specifications were tried, with and without the implied debt ratio, and all revealed similar features.Excluding the implied debt ratio, stock returns on their own were economically and statistically more influential than profits and indeed any other variable.The implied debt ratio however, supersedes the stock returns variable in terms of importance when introduced into the structure.It absorbs some of the significant features of both the flow and stock variables and improves the explanatory power on all occasions by 3-30 percentage points 12 .The market-oriented countries and Japan turn out to have more significance in terms of valid coefficients than other countries. 11 See e.g.Shyam-Sunder and Myers (1999) and Frank and Goyal (2003). 12 The improvement was smallest in the case of Germany (44% variation explained), and greatest in the case of the US (59% explained).This is especially relevant for the US and Japanese samples, which are by far the largest in terms of firm-years and numbers of regressions, followed by the UK.In contrast, the continental European countries have merely one or two significant variables.This seems to support the corporate-governance implications of hypothesis (v), that capital structure theory seems more relevant where institutions encourage dispersed ownership and active stock markets. 13 It can be concluded that evidence has been found supporting hypotheses (iii), (iv) and (v).Stock returns absorb most influence from the profitability variables (iii).Most countries have a ΔD I coefficient of β2 ≈ 1 and a target-adjustment coefficient of γ ≈1, indicating a one-to-one relationship between ΔD I and debt-ratio change, supporting the target-irrelevance model (iv).The firm characteristics proxies of the conventional theories have their main strongholds in the market-oriented countries (including Japan), thus supporting (v).The low ΔD I coefficients of Japan, German and Canada are explained by the significant past debt ratio that is now included in the structure.However, as is reflected in Table 2, the coefficient values may be expected to align with those of the other countries for extended horizons. 
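A generic sketch of the Fama-MacBeth procedure described in footnote 10 is given below; it is not the author's code, and the regressors in the commented example (profitability, tangibility, size) merely stand in for whichever DataStream proxies were actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fama_macbeth(panel: pd.DataFrame, y: str, xs: list,
                 year_col: str = "year") -> pd.DataFrame:
    """Fama-MacBeth estimation: one cross-sectional OLS per year, then the
    time-series mean of each coefficient with a simple t-statistic."""
    yearly = []
    for _, cs in panel.groupby(year_col):
        cs = cs.dropna(subset=[y] + xs)
        if len(cs) <= len(xs) + 1:          # skip years with too few firms
            continue
        X = sm.add_constant(cs[xs])
        yearly.append(sm.OLS(cs[y], X).fit().params)
    coefs = pd.DataFrame(yearly)
    mean = coefs.mean()
    se = coefs.std(ddof=1) / np.sqrt(len(coefs))
    return pd.DataFrame({"mean_coef": mean, "t_stat": mean / se})

# Hypothetical call mixing the past and implied debt ratios with stock and
# flow proxies, in the spirit of the specification summarised in Table 3:
# fama_macbeth(panel, "D_fwd", ["D", "D_impl", "profitability", "tangibility", "log_size"])
```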
Extended horizon and alternative dependent variables

Apart from running regression (23) on longer horizons, an interesting extension of the framework is to change the definition of the dependent variable. The definition of the debt ratio has not been an issue so far, but questions arise regarding its relevance for the observed estimates. It has throughout been defined as the sum of long-term debt and current liabilities, but, as discussed earlier, several other definitions can be presented which explain capital structure adequately. Hypothesis (vi), the ranking hypothesis, will be examined simultaneously. Table 4 displays the normalized coefficients for four different definitions of the dependent variable and two time horizons.^14 Three general results are noticeable. First, the five-year effect of stock returns on debt-ratio changes is greater than the one-year effect. Second, the stock-return dependence of the debt ratio is reduced when the debt-ratio definition becomes narrower and more concentrated, reaching a low for the book-value ratio. Bearing in mind that short-term financing products should provide the easiest means of counteracting short-term shocks to the preferred market-value debt ratio, one would expect the broad definition to be the least sensitive to stock returns. The results are contrary to hypothesis (vi). Third, the correlation of the book-value debt ratio with stock returns is lower than that of any of the market-value debt-ratio definitions. Yet the market-oriented countries emerge with greater stock-return significance and higher coefficients for the one-year case. These three features also appear in Table 5.

13 Various other specifications were tried, both to explain the debt ratio and debt-ratio changes, all of which produced similar effects. Following Welch (2004), a non-linear specification was tried: … + Σ^M_j (β_j V^j_t + γ_j V^j_t ΔD^I_{t,t+k}) + ε_{t+k}, where ΔD^I_{t,t+k} = D^I_{t,t+k} − D_t. Although a few variables did not reach the same statistical significance as in Welch (2004), the estimated equation had similar explanatory power, the variables had the same signs and, most importantly, the implied debt ratio had a similar measured economic influence (8.54% vs. 7.38%). However, apart from the UK and US cases, ΔD^I_{t,t+k} tended to be insignificant. 14 Here the term "normalized" means multiplying the coefficient by one standard deviation of the variable to obtain the magnitude of influence.
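Footnote 14's normalization can be made concrete with a small sketch: each estimated coefficient is scaled by one standard deviation of its regressor so that the table entries express comparable magnitudes of influence. This is an illustrative calculation only; the variable names are hypothetical.

```python
# Scale each coefficient by one standard deviation of its regressor so that an
# entry reads as the effect (in %) of a one-standard-deviation move in that
# variable, as in Tables 4 and 5. Names are hypothetical.
import pandas as pd

def normalize_coefficients(coefs: pd.Series, data: pd.DataFrame) -> pd.Series:
    stds = data[coefs.index].std()
    return 100.0 * coefs * stds  # expressed in percentages

# Hypothetical usage: normalized = normalize_coefficients(coefs.drop("const"), panel)
```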
Table 5 presents a collection of coefficient estimates for the past debt ratio in regression (9) across three time horizons and four debt-ratio definitions.^15 Three features stand out. First, there seems to be an increasing tendency towards debt-ratio readjustment as the time horizon is extended, except in the cases of Germany and Japan. Second, in many cases the long-term debt ratio seems to show more readjustment tendencies than other types of debt, which supports the findings in Table 4 but is at odds with hypothesis (vi). Third, the book-value debt ratio shows the most persistence towards past debt ratios, and does so, crucially, in the bank-oriented countries for all three time horizons, again supporting the outcome in Table 4. In other words, the book-value debt ratio tends not to fluctuate in accordance with stock returns in the short run (one year) but does so increasingly over longer periods (five years). This is not surprising, as improved returns are bound to affect the book-value debt ratio in the long term. However, over such periods the debt-ratio target might have shifted away from past debt levels. These features could be interpreted as reflecting management's preoccupation with accounting values and book targets of debt ratios, rather than with the market-value debt ratio.

Table 5 shows how, for extended horizons, the tendency to revert to the old book-value debt ratio generally decreases somewhat and the influence of stock returns improves to some degree. It seems that in the bank-oriented countries, past debt ratios are more useful in explaining future book-value debt ratios than is the case elsewhere. In contrast, firms in the market-oriented countries seem to let their book values adjust in accordance with stock-price behaviour. This is consistent with the book-value hypothesis (vii), and effectively means that accounting practices in each country matter. Wald (1999) finds support for this and claims that German and Japanese accounting rules adhere more closely to historically based valuation.^16 Another factor is market timing, if this is practised in the market-oriented countries. If so, the book value of equity will correlate with the stock price, and the book-value debt ratio will respond more to stock returns and the implied debt ratio.

15 Convergence over time is denoted with "+" and divergence with "-".
16 Evidence for this was also found when estimating correlations between debt ratios and their past values, which proved to be higher for the bank-oriented countries.

Leverage behaviour is driven by similar factors in all the G-7 countries. However, the conventional theories find stronger support in the market-oriented countries and Japan, through a more persistent influence of firm-specific characteristics on the debt ratio. Stock returns have dominant effects on capital structure, which management generally chooses not to counteract. The implied debt ratio absorbs crucial influence from the trade-off theory proxies, most severely in the bank-oriented countries. The book-value debt ratios, on the other hand, show dependence on past debt-ratio values. This characteristic is stronger in the bank-oriented countries, possibly due to weaker links between accounting practices and market valuation and less market influence on the timing of equity issuing. It is proposed, accordingly, that corporate management is more concerned with book-value debt-ratio targets than with their market-value counterparts. The motive underlying such a preference is presumed to be the stable nature of accounting values and their signalling value. Second, it is argued that the market-value debt ratio loses meaning and credibility as a target indicator under severe stock-price fluctuations. Correspondingly, management might favour keeping the market-value debt ratio within confidence intervals or restricting it to a flexible target.

The results presented provide evidence for a fluctuating debt ratio, whether targeted or not, and are in line with those presented by Welch (2004), Graham and Harvey (2001) and, partially, Myers (2001). The volatile nature of the market-value debt ratio might discredit it as a practical benchmark for business purposes and reduce its value as an optimising target. In practice, management seems to allow the market-value debt ratio to fluctuate but might, all the same, keep a watchful eye on its development over time. Consequently, firms dealing with financing decisions are ready to tolerate a floating debt ratio within certain confidence limits, acting only when it drifts to extreme boundaries. This observation gives rise to a conditional floating debt-ratio target and a target zone. The observed tendency of the long-run debt ratio to revert to prior levels could be explained on such grounds. Further, target zones leave space for management to promote financial priorities other than debt-ratio targeting, as is suggested by the survey results of Graham and Harvey (2001).

Variable definitions:
Tangibility: property, plant and equipment (02501) divided by assets (02999).
Log assets: total assets (02999) adjusted to 2003 levels using the CPI.
Log relative market capitalization: market value of equity (08001) divided by the price level of the relevant stock market index.
Market-to-book ratio: market value of equity divided by the book value of equity, filtered between the 5% and 95% distribution interval.

Table 1. Selected Descriptive Statistics in %: Cross-Country Comparison.
Table 3. Regressions Explaining Debt Ratio Changes (D_{t+k} − D_t): Adding Variables.
Table 4. Effects of Stock Returns (D^I_{t,t+k} − D_t) on Debt Ratio Changes (D_{t+k} − D_t) in %, where D_t = d_t/(d_t + e_t); the four debt-ratio definitions correspond to liabilities, long- and short-term debt, debt, and the book-value debt ratio, respectively. Two asterisks reflect 95% statistical significance; a third asterisk reflects a dominating economic influence on the debt ratio. The coefficients are normalized on the variable standard deviation and presented in percentages.

Table 5. Debt Ratio Target Readjustment in % Over Different Horizons.
2018-12-17T19:36:39.198Z
2005-06-15T00:00:00.000
{ "year": 2005, "sha1": "9ea9a44370f9a3580949939aa50206f340226880", "oa_license": "CCBY", "oa_url": "http://www.efnahagsmal.is/article/download/a.2005.3.1.2/pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9ea9a44370f9a3580949939aa50206f340226880", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [ "Business" ] }
9001850
pes2o/s2orc
v3-fos-license
Integrative genomics analysis of chromosome 5p gain in cervical cancer reveals target over-expressed genes, including Drosha Background Copy number gains and amplifications are characteristic feature of cervical cancer (CC) genomes for which the underlying mechanisms are unclear. These changes may possess oncogenic properties by deregulating tumor-related genes. Gain of short arm of chromosome 5 (5p) is the most frequent karyotypic change in CC. Methods To examine the role of 5p gain, we performed a combination of single nucleotide polymorphism (SNP) array, fluorescence in situ hybridization (FISH), and gene expression analyses on invasive cancer and in various stages of CC progression. Results The SNP and FISH analyses revealed copy number increase (CNI) of 5p in 63% of invasive CC, which arises at later stages of precancerous lesions in CC development. We integrated chromosome 5 genomic copy number and gene expression data to identify key target over expressed genes as a consequence of 5p gain. One of the candidates identified was Drosha (RNASEN), a gene that is required in the first step of microRNA (miRNA) processing in the nucleus. Other 5p genes identified as targets of CNI play a role in DNA repair and cell cycle regulation (BASP1, TARS, PAIP1, BRD9, RAD1, SKP2, and POLS), signal transduction (OSMR), and mitochondrial oxidative phosphorylation (NNT, SDHA, and NDUFS6), suggesting that disruption of pathways involving these genes may contribute to CC progression. Conclusion Taken together, we demonstrate the power of integrating genomics data with expression data in deciphering tumor-related targets of CNI. Identification of 5p gene targets in CC denotes an important step towards biomarker development and forms a framework for testing as molecular therapeutic targets. Background The short arm of chromosome 5 (5p) frequently undergoes nonrandom changes in cervical cancer (CC) by exhibiting both copy number increase and deletions. Gain of 5p due to frequent appearance of isochromosome 5p in squamous cell carcinoma has been documented by karyotypic and chromosomal comparative genomic hybridization analyses [1][2][3][4]. Paradoxically, 5p also exhibits frequent loss of heterozygosity, which occurs early in the development of CC [5,6]. These findings suggest the presence of important proliferation-regulating genes on chromosome 5p involved in malignant progression of cervical epithelium. Despite the successful use of pap-smear screening programs in early detection and treatment of CC, this tumor remains a major cause of cancer deaths in women worldwide [7]. CC progresses by distinct morphological changes from normal epithelium to carcinoma through low-grade squamous intraepithelial lesions (LSIL) and high-grade SILs (HSIL). Currently, no biological or genetic markers are available to predict which precancerous lesions progress to invasive CC. Although infection of high-risk human papillomavirus (HPV) is recognized as an essential initiating event in cervical tumorigenesis, this alone is not sufficient for the progression to invasive cancer [8]. In spite of the recent progress in molecular aspects of CC, the genetic basis of progression of precursor SILs to invasive cancer in the multi-step progression of CC remains poorly understood [9]. Therefore identification of other "genetic hits" in CC is important in understanding its biology. Chromosomal gain and amplification is a common cellular mechanism of gene activation in tumorigenesis [10]. 
The aim of the present study was to examine the contribution of chromosome 5 copy number alterations (CNA) in CC tumorigenesis and identify copy number driven gene expression changes. We performed single nucleotide polymorphism (SNP) array and fluorescence in situ hybridization (FISH) analysis on invasive cancer and identified 5p CNI in a high frequency of primary tumors and cell lines. To unravel the consequence of 5p CNI on transcription, we utilized Affymetrix U133A gene expression array and identified a number of over expressed genes on 5p, which include RNASEN, POLS, OSMR, and RAD1 genes. These data, thus, suggest that transcriptional activation of multiple genes on 5p plays a role as driver genes in the progression of CC. Tumor specimens and cervical cancer cell lines A total of 219 specimens were utilized in the present study in various investigations. These include 9 cell lines, 148 primary tumors, 42 pap smears, and 20 normal cervical tissues. The cell lines (HT-3, ME-180, CaSki, MS751, C-4I, C-33A, SW756, HeLa, and SiHa) were obtained from American Type Culture Collection (ATCC, Manassas, VA) and grown in tissue culture as per the supplier's specifications. Twenty age-matched normal cervical tissues from hysterectomy specimens obtained from Columbia University Medical Center (CUMC), New York, were used as controls after enrichment for epithelial cells by microdissection. Cytologic specimens were collected using the ThinPrep Test Kit (Cytc Corporation, Marlborough, MA). After visualization of the cervical os the ectocervix was sampled with a spatula and endocervical cells obtained with a brush rotated three hundred sixty degrees. Exfoliated cells were immediately placed in PreservCyt Solution (Cytc Corporation, Marlborough, MA) for routine processing by a cytopathologist. Pap smears were collected from normal and precancerous lesions by simultaneous preparation of slides from the same spatula for both cytology and FISH. FISH slides were immediately fixed in 3:1 methanol and acetic acid, and stored at 4°C until hybridization. A total of 42 pap smears with the diagnosis rendered by a cytopathologist as normal/squamous metaplasia/ASCUS (N = 10), LSIL (N = 13) or HSIL (N = 19) obtained from CUMC were used for FISH analysis. The diagnosis of all HSILs was also confirmed by a biopsy. Of the 148 primary tumors, 93 were obtained as frozen tissues and 55 specimens as formalin-fixed paraffin-embedded tissues. All primary invasive cancer specimens were obtained from patients evaluated at CUMC, Instituto Nacional de Cancerologia (Bogota, Colombia) [11], and the Department of Gynecology of Campus Benjamin Franklin, Charité-Universitätsmedizin Berlin (Germany) with appropriate informed consent and approval of protocols by institutional review boards. All primary tumors were diagnosed as squamous cell carcinoma (SCC) except five that were diagnosed as adenocarcinoma (AC). Clinical information such as age, stage and size of the tumor, follow-up data after initial diagnosis and treatment was collected from the review of institutional medical records. Tissues were frozen at -80°C immediately after resection and were embedded with tissue freeze medium (OTC) before microdissection. All primary tumor specimens were determined to contain at least 60% tumor by examination of hematoxylin and eosin (H&E) staining of adjacent sections. High molecular weight DNA and total RNA from tumor, normal tissues, and cell lines was isolated by standard methods. 
The integrity of all RNA preparations was tested by running formaldehyde gels and samples that showed evidence of degradation were excluded from the study. Microarray analysis The Affymetrix 250 K NspI SNP chip was utilized for copy number analysis as per the manufacturer's protocol. Briefly, 250 ng of genomic DNA was digested with NspI, generic linkers were added followed by PCR amplification, end-labeling, and fragmentation following standard protocols. Hybridization, washing, acquisition of raw data using GeneChip Operating Software (GCOS), and generation of .CEL files was performed by the Affymetrix Core facility at our institute. We utilized 79 CC cases (9 cell lines and 70 primary tumors enriched for tumor cells by microdissection) and 7 microdissected normal cervical squamous epithelial samples as controls to serve as reference for copy number analysis. SNP data of test samples and normal cervical epithelial specimens were loaded to dChip to calculate signal intensity values using the perfect match/mismatch (PM/MM) difference model followed by normalization of signals within chip and between chips using model-based expression [12,13]. DNA copy number gains were obtained as determined by dChip using analysis of signal intensity values based on the Hidden Markov Model. Arrays with > 93% call rates were included in the analysis as per Affymetrix manual. Copy number data was obtained for chromosome 5 using Cyto-Band information files from the dChip website [14]. Both the raw copy number and log 2 ratio (Signal/mean signal of normal samples at each SNP) were computed to estimate copy number changes in chromosome view. Copy numbers < 1.5 were considered as deletion, 2.5 or more as gain in the raw copy number view. All the original data files were submitted to Gene Expression Omnibus (GEO Accession number: GSE10092). We utilized Affymetrix U133A oligonucleotide microarray (Santa Clara, CA) containing 14,500 probe sets for gene expression analysis. RNA isolated from 30 CC cases (21 primary tumors enriched for tumor cells by microdissection and 9 cell lines) and 20 microdisssected normal cervical squamous epithelial cells were utilized for expression studies. Biotinylated cRNA preparation and hybridization of arrays was performed by the standard protocols supplied by the manufacturer. Arrays were subsequently developed and scanned to obtain quantitative gene expression levels. Expression values for the genes were determined using the Affymetrix GeneChip Operating Software (GCOS) and the Global Scaling option, which allows a number of experiments to be normalized to one target intensity to account for the differences in global chip intensity. The .CEL files obtained from the GCOS software were processed and normalized by dChip algorithm as described above. An average percent present call of 54% was obtained among all samples, which is expected for high quality RNA as per the manufacturer. Arrays were normalized at PM/MM probe level and a median intensity array from normal as the baseline array using invariant set normalization [12,13]. Followed by normalization, model based expression values were calculated using PM/MM data view to fit the model for all probe sets. All original data files were deposited to GEO (Accession number: GSE9750). To obtain a list of differentially expressed gene signatures, we compared all normal with all tumor samples using the criteria of 1.75-fold change between the group means at 90% confidence interval and a significance level of P < 0.05. 
All negative expression values for each probe set were truncated to 1 before calculating fold changes, and probe sets with a present call in < 10% of samples in each group were excluded. A list of differentially expressed genes identified on chromosome 5 was used in all subsequent supervised analyses, applying the same criteria between various groups to obtain relevant gene signatures.

Fluorescence in situ hybridization (FISH) and HPV typing

FISH was performed by standard methods on frozen tissue sections fixed in 3:1 methanol:acetic acid, tissue microarrays prepared from paraffin-embedded tissues, and pap smears fixed in 3:1 methanol:acetic acid. A dual-color locus-specific probe set containing spectrum orange-labeled EGR1 (mapping to 5q31) and spectrum green-labeled D5S23/D5S721 (mapping to 5p15.2) was obtained from Vysis (Downers Grove, IL). Hybridization signals on 100-500 interphase cells on DAPI-counterstained slides were scored on a Nikon Eclipse epi-fluorescence microscope equipped with Applied Imaging CytoVision software (San Jose, CA). Scoring of FISH signals on frozen and paraffin-embedded tissue sections was restricted to tumor cells, based on the identification of areas of tumor on adjacent H&E sections by the pathologist (MM). FISH signal scoring on Pap smear slides was restricted to large and atypical epithelial cells. Presence of signals suggestive of gain in at least 3% of cells was considered positive, and the results were correlated with parallel cytomorphologic findings. Human papillomavirus types were identified as described earlier [15].

Identification of 5p gain as the most frequent genomic alteration in invasive CC

Affymetrix 250 K NspI SNP array analysis was performed on a panel of 79 CC cases (70 primary tumors and 9 cell lines) to identify genome-wide copy number alterations (CNA) (unpublished data). The dataset of chromosome 5 CNA from this analysis was utilized in the present study. CNA of chromosome 5 was found in 42 (53.2%) CC cases. Of these, 5p exhibited copy number gains in 34 (43%) cases, while no detectable copy number losses were found on this chromosomal arm (Figure 1A). Gain of 5p was the most commonly affected region in the CC genome (see Additional file 1). On the other hand, gain of the long arm of chromosome 5 (5q) was rare, with only 3 (3.8%) tumors showing CNI. However, copy number losses on 5q were found in 25 (31.6%) tumors. Of these, 17 had concurrent 5p gains and 5q losses, while the remaining 8 showed only 5q deletion (Figure 1B). Among the tumors that exhibited 5p CNI, the entire 5p was gained and no minimal region of duplication or amplification could be delineated. Similarly, deletions on 5q spanned large regions, often the entire chromosomal arm, and no consensus minimal deletion could be identified (Figure 1B). These data demonstrate that chromosome 5p is a frequent target of CNI in CC, while accompanying deletions on 5q were found less frequently. To identify the clinical significance, we evaluated the association of chromosome 5 CNA with pathologic features such as histology, age, stage and size of the tumor, treatment outcome, and HPV type by univariate analyses and found no significant associations. These data thus suggest that chromosome 5 CNA is a critical genetic alteration that may occur early in the development of CC.

Figure 1. Identification of chromosome 5p genomic alterations in cervical cancer.
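The expression-filtering criteria described in the Microarray analysis section above (a 1.75-fold change between group means, P < 0.05, truncation of negative values to 1, and exclusion of probe sets with present calls in fewer than 10% of samples in each group) can be sketched as follows. This is a simplified illustration, not the authors' dChip-based procedure; the array layout and names are assumptions.

```python
# Illustrative re-implementation of the expression-filtering criteria: truncate
# low values to 1, require >= 1.75-fold change between tumor and normal group
# means, a t-test P < 0.05, and a "present" call in >= 10% of samples per group.
import numpy as np
from scipy import stats

def filter_differentially_expressed(expr_tumor, expr_normal,
                                    present_tumor, present_normal,
                                    fold=1.75, alpha=0.05, min_present=0.10):
    """expr_* are (probe sets x samples) arrays; present_* are boolean arrays."""
    expr_tumor = np.maximum(expr_tumor, 1.0)    # truncate negative/low values to 1
    expr_normal = np.maximum(expr_normal, 1.0)

    mean_t, mean_n = expr_tumor.mean(axis=1), expr_normal.mean(axis=1)
    fold_change = np.maximum(mean_t / mean_n, mean_n / mean_t)

    _, pvals = stats.ttest_ind(expr_tumor, expr_normal, axis=1)

    enough_present = ((present_tumor.mean(axis=1) >= min_present) &
                      (present_normal.mean(axis=1) >= min_present))

    return (fold_change >= fold) & (pvals < alpha) & enough_present
```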
FISH validation of 5p gain in CC To validate the 5p gain identified by SNP array, we performed FISH analysis using a cocktail of two probes containing spectrum green-labeled 5p15.2 locus and spectrum orange-labeled 5q31 region on 101 CC cases. These include an independent panel of 55 tumors on a paraffin-embedded tissue microarray and an additional 46 tumors as frozen sections or pap smears. The latter include 23 tumors studied by SNP array (see Additional file 2). A total of 64 (63%) tumors showed an evidence for increased copies (3 or more) of 5p ( Figure 1C-E). An average of 4.4 copies (range 3-11) of 5p15 signals were found among the 64 cases that exhibited gain, while only 2.6 copies (range: 1-8) of the 5q31 region were present (Figure 1C-E). These data, thus, suggest that the 5p CNI is independent of ploidy of the tumor and support the SNP data showing the gain of 5p and associated loss of 5q. All the tumors that exhibited 5p gain by SNP array also showed gain by FISH. For example, the tumors T-207, T-218, and T-1981 showed simultaneous high copy numbers of 5p and loss of 5q by SNP array analysis. The FISH results on the same tumors are in complete agreement with the SNP data ( Figure 1). These results, thus, validate the SNP data and establish that 5p CNI as the most frequent genetic alteration in CC. Chromosome 5p gain is a late genetic event in CC progression CC progresses through distinct morphological changes during the transition from normal epithelium to carcinoma through low-and high-grade SILs. To identify the earliest stage in CC development in which the 5p CNI occur, we used a FISH assay on 42 consecutively ascertained pap smears simultaneously diagnosed by cytology as normal, squamous metaplasia or with atypical cells of undetermined significance (ASCUS) (N = 10), LSIL (N = 14) and HSIL (N = 19). Five of 19 (26.3%) HSILs showed four or more copies (range 4-7) of 5p ( Figure 1F). Of these, three HSILs exhibited tetrasomy 5 while 2 others showed evidence of 5p gain (5-7 copies vs. 3-4 copies of 5q) ( Figure 1F). No evidence of gain of 5p was found in any specimens diagnosed as LSIL, normal, squamous metaplasia or ASCUS. Thus, these data suggest that 5p gain is a relatively late event in the progression of CC. The biological behavior of HSILs varies where only a small proportion progresses to invasive cancer if left untreated [16][17][18]. Cytologic characterization alone doesn't permit the identification of HSILs at risk for progression from those that regress or persist. Because of this, all HSILs are currently treated by surgical excision or with an ablative therapy. Identification of genetic signatures defining the subset of high-risk HSILs could alter the treatment strategies. Chromosome 5p gain may serve as such a genetic marker in predicting the progression of HSILs. Identification of transcriptional targets of 5p gain, including Drosha, in CC We have shown 5p CNI as the most frequent genomic alteration in CC by combined SNP (see Additional file 1) and FISH analyses. We hypothesize that the increased 5p dosage may result in deregulation of genes that may confer oncogenic properties to its host cell. To identify such transcriptional targets on 5p, we utilized gene expression profiling data on Affymetrix U133A array analysis of 20 normal squamous epithelial samples (age range, 27-64 yr; Mean ± SD, 46.9 ± 7.6) and 30 CC cases (21 primary tumors; age range 28-70 yr; Mean ± SD, 48.3 ± 11.3; and 9 cell lines). 
Initial identification of differentially expressed gene signatures on chromosome 5 in CC was obtained by comparing all probe sets on chromosome 5 present on the U133A array between tumors and normal samples that exhibited significant (P < 0.05) differences, using the criteria described in materials and methods. This algorithm identified 122 non-redundant probe sets with significant differences in expression levels in tumors compared to normal. This unique CC chromosome 5 gene signature, which distinguishes normal from tumor, includes 26 probe sets with down-regulated expression and 96 probe sets with increased expression (see Additional file 3). We anticipate that this differentially expressed gene data set will be useful in identifying target genes of CNI of 5p and loss of 5q in CC. Therefore, we focused our attention on this gene dataset in all subsequent supervised analyses of gene expression. Although a similar type of analysis identified a down-regulated gene signature on 5q in invasive CC, no specific signature associated with 5q deletion could be identified (see Additional file 5). An analysis performed to identify the down-regulated genes on 5q, using all probes on chromosome 5q on the U133A array, showed a total of 17 down-regulated genes (EGR1, PITX1, MAST4, GALNT2, ATP10B, DUSP1, HBEGF, RMND5B, HMGCR, CAST, CLTB, GX3, SPINK5, LOC653314, CXCL14, ISL1, and PIK3R1) in CC compared to normal cervical epithelium. Thus, these data suggest that the 5p gain is a critical genetic change in CC and that the genes identified as a consequence of 5p gain may be important in its tumorigenesis. These data further suggest that the 5q loss may have little consequence for CC biology and may represent a bystander genetic alteration associated with 5p gain.

Figure 2. Supervised analysis of over-expressed genes identified as a consequence of gain of chromosome 5p in cervical cancer. Significantly differentially expressed genes were identified by filtering all the over-expressed genes on chromosome 5p between tumors that showed gain of 5p and tumors without 5p gain. In the matrix, each row represents the gene expression relative to the group mean and each column represents a sample (shown on top). T represents primary tumor; CL represents cell line. The dendrogram on the left shows unsupervised clustering of genes differentially expressed between tumors with and without gain. The names of genes are shown on the right. The scale bar (-2 to +2) at the bottom represents the level of expression, with intensities of blue representing decreased and red increased expression. The groups within tumors shown at the top represent no gain of chromosome 5p (I) and 5p gain (II).

Discussion

We provide multiple levels of evidence to support that genomic gain of chromosome 5p is an important genetic target in CC development. First, our SNP analysis identified 5p gain as the most frequent genetic alteration in invasive CC (see Additional file 1). By FISH, we confirmed this finding using an independent cohort of CC specimens and showed that the gain is seen only in high-grade SILs. Several previous studies have identified recurrent gain of 5p in many types of human cancers [19], including CC [1][2][3][4][20][21][22]. Gain of 5p also appears to arise at later passages in HPV-immortalized cervical keratinocytes, and its acquisition confers the ability to invade collagen in tissue culture [23].
This close recapitulation of 5p gain in the later stages of an in vitro model and in clinical specimens from CC patients provides strong evidence that this change occurs late in development and may play a role in invasion. Of these genes, RNASEN (Drosha) over expression was identified in all tumors with 5p gain ascertained by SNP analysis (Figure 2). This finding suggests that RNASEN is one of the critical targets conferred by 5p CNI that may play a major role in tumor progression. Drosha executes the initial step in microRNA (miRNA) processing by cleaving pri-miRNA to release pre-miRNA. Drosha is also involved in pre-rRNA processing, with specificity for double-stranded RNA [25]. Drosha over expression was shown to regulate proliferation and predicts poor prognosis in esophageal cancer [26]. Drosha copy number gain and over expression were shown to influence global miRNA profiles in CC [27]. miRNAs play critical roles in various biological processes, including cancer, where miRNA fingerprinting can distinguish tumors of different lineages [28]. Although the role of Drosha over expression in cancer is not well studied, a number of possibilities exist. Over-expressed Drosha may more efficiently process pri-miRNAs, resulting in increased levels of mature miRNAs, and the resulting miRNAs may affect the transcription of several mRNAs that in turn affect the production of other pri-miRNAs [29]. In the context of its role in miRNA processing, our data suggest that Drosha over expression due to 5p gain is likely an important mechanism in later stages of CC progression. Previous studies have shown that the oncostatin M receptor (OSMR) gene is gained and over expressed in CC, which is associated with adverse clinical outcome [30,31]. Oncostatin M (OSM) is a cytokine related to the IL-6 family of cytokines, and its biological activity is mediated through the receptor complex. Upon ligand binding, OSMR can activate signaling pathways implicated in cancer, such as STAT and PI3K/AKT, and mediates inhibition of tumor growth [32]. The angiogenic factor VEGF is induced upon OSM stimulation in cervical cancer cell lines, suggesting that OSMR over expression contributes to CC tumorigenesis [31]. Our expression analysis also showed a number of genes with functions related to nucleic acid binding, DNA repair, and the mitotic cell cycle (BASP1, TARS, PAIP1, BRD9, RAD1, SKP2, and POLS). Of these, the S-phase kinase-associated protein 2 (SKP2) plays a critical role in coordinating the G1/S transition and cell cycle progression, forms a substrate recognition subunit of the SCF ubiquitin-protein ligase complex, and inhibits the tumor suppressor function of FOXO1. Over expression of SKP2 has been found in many tumor types, consistent with the role of an oncogene, and is associated with poor clinical outcome [33]. RAD1 is a component of the 9-1-1 cell-cycle checkpoint response complex that plays a major role in DNA repair [34]. However, its role in cancer is not well understood. Three nuclear genes (NNT, SDHA, and NDUFS6) encoding mitochondrial proteins that play a role in oxidative phosphorylation (OxPhos) were also over expressed as a consequence of 5p gain. The mitochondrial OxPhos system plays a key role in energy production, the generation of free radicals, and apoptosis, hallmark features of cancer cells [35].
Since tumor cells display an enhanced biosynthesis capacity, a key feature of the metabolic transformation of tumor cells that supports growth and proliferation, the mitochondrial OxPhos system may stimulate signaling pathways critical in tumor progression. Although nothing is known about these genes in cancer, it remains to be determined whether one or more of them act individually or synergistically as oncogenes in regulating the metabolic transformation in CC. Since genetic activation of therapy targets such as ABL, C-KIT, Her2/neu, and EGFR has been successfully demonstrated to be essential for treatment response [36], our finding of 5p gene targets such as RNASEN, SKP2, and OSMR emphasizes the need for functional analysis and for dissecting the signaling cascades involving these genes in order ultimately to obtain the therapeutic targets needed for cure and prevention of this devastating cancer.

Figure 3. Relative expression of differentially expressed genes identified as a consequence of 5p gain, in relation to GAPDH, in normal samples and in tumors with and without 5p gain. Genes are shown in the top left corner of each panel.

Conclusion

In summary, we integrated multiple genomic data to identify 5p gain as the most recurrent chromosomal alteration, occurring at high-grade precancerous lesions in the development of CC. We identified the target over-expressed genes associated with 5p gain, which play a role in miRNA processing, signal transduction, DNA repair and the mitotic cycle, and oxidative phosphorylation, suggesting a functional role for this chromosomal region in the progression of CC. Thus, the genes identified here will form a basis for functional testing of 5p gain, and their expression levels can be used as biomarkers to identify patients with aggressive disease. Further studies in the context of 5p gain will allow deciphering of critical gene targets to develop molecular-based therapies for CC.
2016-05-10T16:20:09.210Z
2008-06-17T00:00:00.000
{ "year": 2008, "sha1": "7a6acbb96e5d831232a16df31439e4e6a21d4d0b", "oa_license": "CCBY", "oa_url": "https://molecular-cancer.biomedcentral.com/track/pdf/10.1186/1476-4598-7-58", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a6acbb96e5d831232a16df31439e4e6a21d4d0b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
245602278
pes2o/s2orc
v3-fos-license
The Cognitive-Enhancing Outcomes of Caffeine and L-theanine: A Systematic Review Attention-deficit hyperactivity disorder (ADHD) affects multiple cognitive domains, including impaired attention, hyperactivity, and increased impulsivity. According to the CDC, 9.4% of children between 2 and 17 years old have been diagnosed with ADHD. Neurotransmitters such as noradrenaline and dopamine have been suggested as crucial players in the pathophysiology of ADHD and are often targets of modern medication. Adenosine receptors types A1 and A2a in the brain are inhibited by caffeine: a stimulant known to augment attention by increasing cholinergic and dopaminergic transmission. The cognitive function of attention is also enhanced by the amino acid: L-theanine. The mechanism of action is that it behaves like a glutamate reuptake inhibitor while also acting in the hippocampus as a competitive low-affinity glutamate receptor antagonist. It’s also shown to have a neuroprotective effect by its action on the gamma aminobutyric acid (GABA)-A receptors. Our systematic review investigates the literature and clinical trials on the cognitive-enhancing effects of caffeine and L-theanine. Assessment of Biases For clinical trials, we detected biases using the Cochrane collaboration's risk of bias tool [8]. For observational studies, we used the Risk Of Bias In Non-randomized Studies--of Interventions (ROBINS-I ) tool [9]. Figure 1 uses the PRISMA flowchart to exhibit study results. Results These clinical trials measured one or more of the following parameters: cognition, reaction times, concentration, and/or others measure including headaches, tiredness, or alertness. Specifically, the tests used by one or more of these studies include NIH Cognition Toolbox, stop-signal reaction time (used to check control of inhibition), total cognition composite, d-prime in the Go/NoGo task, fMRI responses, mean recognition visual reaction time (RVRT), amplitudes of the mean peak-to-peak N2-P300 event-related potential (ERP), "headache" ratings, "tired" ratings, "alert" ratings, digit vigilance reaction time, correct serial seven subtractions, rapid visual information processing (RVIP) accuracy, and word recognition reaction time. The 2020 study from Kahathuduwa et al. explored acute outcomes of L-theanine, caffeine, and their combined effects on maintained attention, control on inhibition, and general cognition in the five boys diagnosed with ADHD [10]. Improvements by the L-theanine-caffeine combination were shown on impairments related to ADHD, meaning it may be a potential therapeutic consideration. Total cognition composite was improved with L-theanine in the NIH Cognition Toolbox (p = 0.040) vs placebo. Inhibitory control was worsened by caffeine, and L-theanine, separately, as suggested by longer reaction times seen in the stop-signal intervention (p = 0.031 and p = 0.053, respectively). Whereas, improvement in cognition was seen in the total cognition composite (p = 0.041), and in the Go/NoGo task (p = 0.033). Improvements in control of inhibition (p = 0.080) was also apparent. The combination was also associated with decreased task-related reactivity in the default mode network of the brain in the region associated with mindwandering, which meant decreased distractibility and improved concentration. The 2018 study by Kahathuduwa et al. investigated the outcomes of 200 mg of L-theanine, 160 mg of caffeine, a fusion of the two, and distilled water in a four-way crossover study design using nine healthy adult men [11]. 
A visual color stimulus discrimination task was performed by the subjects, and an fMRI scan was performed for 20 minutes, beginning 60 minutes after administering L-theanine, caffeine, or their combination. The fMRI results confirmed a decrease in mind-wandering by showing fewer responses to distractor stimuli in regions of the brain where visual attention is regulated. It was also observed that L-theanine decreases GABA levels and caffeine increases glutamate levels. This leads to the visible patterns of blood oxygenation level dependent (BOLD) responses that were seen with L-theanine alone and when combined with caffeine. The 2017 study by Kahathuduwa et al. investigated the impact of 200 mg of L-theanine, 160 mg of caffeine, the combination of both, a single cup of black tea, and distilled water in a five-way crossover trial with 20 healthy adult males [12]. Participants took a dose of L-theanine analogous to drinking eight cups of black tea; these effects are comparable to those of caffeine. Several measurements were assessed, as mentioned in Table 2, which demonstrated an improvement in the subjects' cognitive and neurophysiological measures of selective attention after taking the combined product, meaning that the additive outcomes from the L-theanine-caffeine fusion improve attention at high doses. The 2008 study by Haskell et al. investigated the effects of L-theanine and caffeine, individually and in combination, on acute cognitive and mood outcomes in participants [13]. They found higher ratings for headache and fewer correct serial seven subtractions in subjects administered L-theanine. Participants provided caffeine were noted to have quicker digit vigilance reaction times, better accuracy for rapid visual information processing (RVIP), and greater reports of mental fatigue. Quicker simple reaction time, quicker working memory (in terms of numbers) reaction time, and better accuracy of sentence verification were recorded for participants taking the L-theanine and caffeine fusion. Participants reported a reduction in "headache" and "tired" ratings, while "alert" ratings were increased. Moreover, a significantly positive interaction on delayed word recognition reaction time was noted with the combination. The 2021 study by Baba et al. evaluated the usefulness of continuous matcha intake (which contains L-theanine), caffeine, their combination, and placebo under stress conditions [14]. Mild stress was induced acutely using the Uchida-Kraepelin test (UKT), while cognitive function was evaluated using the Cognitrax. Attention was improved during and after stress loading with a single dose of caffeine. The caffeine content in matcha was reported to be the likely cause of the lower reaction times seen in the Cognitrax. However, participants continuously taking matcha completed increased amounts of work, like the caffeine group, although the caffeine group achieved this with a single dose.
Ingestion of the combination of matcha with caffeine improves work performance and attention under psychological stress versus with caffeine on its own. Limitations The 2020 study by Kahathuduwa et al. has some limitations [10]. First, it used a small sample size of only five male children with ADHD [10]. This constrains the variance of outcome measures and generalizability to all children. Second, while neuroimaging findings may have been minimally affected, the results of the Go/NoGo test, the stop-signal assignment, and the NIH Cognition Toolbox, which rely on scoring by a person involved in the study, may be influenced by bias due to unblinding [10]. Third, participants were not provided standardized food/beverages or instructions to abstain from food before each visit where they were tested [10]. Their hunger level could impact the outcomes of maintained attention, impulsive behaviors, and results seen on fMRI [15]. These limitations suggest that the preliminary evidence presented needs more power by using a larger sample size in future clinical trials. The 2018 study by Kahathuduwa et al. was underpowered in sample size making it difficult to capture enough changes in BOLD fMRI responses from certain regions of the brain [11]. Second, only male subjects were selected in this study's sample in order to avoid interferences from the natural changes seen in the menstrual cycle on reaction times in female subjects, thereby limiting the generalizability of this study's findings. Third, it is well understood that caffeine affects cerebral circulation. However, cerebral circulation in human brains is not as well understood with L-theanine. Changes in blood oxygen levels in certain regions of the brain determine responses noted in BOLD fMRI. In this study, it is unclear whether the recorded observations were from true neural responses, or vascular responses which do not depend on neural activation changes. The 2017 study by Kahathuduwa et al. had technical limitations [12]. Early and late stages of processing related to attention are components of ERP, unlike EEG frequency components. Due to the technical limitations regarding baseline corrections for ERP waveforms, the amplitudes of the individual components could not be measured reliably by the study. Instead, N2-P300 peak-to-peak amplitude was measured by taking the difference between amplitudes of the N2 and the P300 peaks seen on EEG. Another limitation was the duration of reaction time tests, meaning the study was unable to evaluate the performance of the effect that treatment had on sustained attention over longer periods. The 2008 study by Haskell et al. was limited by a lack of understanding of the mechanisms of action underlying their reported findings [13]. Whether directly or indirectly, caffeine and L-theanine have been shown to affect several neurotransmitter systems including dopamine, serotonin, glutamate, and GABA. However, at the time of their publication, the study claims, the effects of L-theanine and caffeine in combination had not been studied on the level of receptors. The 2021 study by Baba et al. had limitations including the participants' ages, nationality, and quality of stress [14]. The observed effects were limited to participants that were Japanese. The study selected individuals between the ages of 50 to 69 years old, who drank green tea habitually. Furthermore, the study showed a positive anti-stress effect, but only for continuous calculations of single digits to assess attention and work performance. 
No quantifiable biomarkers were measured, nor qualitative assessments on how stressed a participant was were performed in this study. Table 3 shows the bias risk tool analysis of each study. The 2020 study from Kahathuduwa et al. showed that participants taking L-theanine demonstrated improvements in total cognition composite as seen using the NIH Cognition Toolbox (p = 0.040) compared to placebo [10]. The NIH Cognition Toolbox includes seven cognitive function tests: a flanker inhibitory control and attention test, a picture sequence memory test, a list sorting working memory test, a picture vocabulary test, an oral reading recognition test, dimensional change card sort test, and a pattern comparison processing speed test. On their own, caffeine and L-theanine showed deteriorating control on inhibition: There was an augmented reactive time for the stop-signal test for caffeine (p = 0.031) and for L-theanine (p = 0.053). However, improvements were noted with inhibitory control (p = 0.080); overall cognition composite (p = 0.041); and d-prime in the Go/NoGo assignment (p = 0.033) with subjects on the L-theanine-caffeine combination. This suggests that the combination enhances the user's attention, cognition, and inhibitory control, perhaps via synergistic effects. Improvement in these parameters is essential for patients with ADHD, though a larger sample size is needed to increase the power of this study and prove statistical significance. The 2018 study by Kahathuduwa et al. used fMRI on subjects to observe responses in visual color stimulus discrimination tasks [11]. L-theanine and L-theanine-caffeine fusion resulted in quicker target reactions versus placebo (difference of 27.8 ms (p = 0.018) and 26.7 ms (p = 0.037), respectively). Distractor stimuli in parts of the cerebrum where visual attention is affected showed decreased fMRI responses in participants taking L-theanine. Their results imply that the combination decreases mind-wandering, perhaps by increasing neural resources related to attention toward target stimulus, and lowering neural resources toward distractions. This would explain why the combination helps increase the user's attention on a given task. However, the study also needs a larger sample size, ideally including both men and women. Furthermore, caffeine's effect on cerebral circulation needs to be taken into account before making generalizations about the combination's perceived favorable effects. The 2017 study by Kahathuduwa et al. showed mean recognition visual reaction time (RVRT) was significantly improved by L-theanine (p = 0.019), caffeine (p = 0.043), and L-theanine-caffeine combination (p = 0.001), but not by tea (p = 0.429) or placebo (p = 0.822) [12]. The ERP is a time-locked measure of the electrical activity of the cerebral surface representing a distinct phase of cortical processing. Two components of the ERP which bear special importance to stimulus evaluation, selective attention, and conscious discrimination in humans are the P300 positivity and N200 negativity. Amplitudes of the mean peak-to-peak N2-P300 ERP were significantly larger when elicited with L-theanine (p = 0.001) and caffeine (p = 0.001) versus placebo; whereas a significantly larger mean N2-P300 amplitude was measured with Ltheanine-caffeine combination compared to placebo (p < 0.001), L-theanine (p = 0.029), or caffeine (p = 0.005). This means visual and motor conduction improved significantly with the combination. 
However, this approach was limited by its duration of only 10 target trials in each reaction time test. Therefore, the study could not evaluate the effect that treatment had on the performance of sustained attention over a longer period. Further investigation is still needed on the length of sustained attention when taking the L-theanine-caffeine combination. The 2008 study by Haskell et al. showed higher ratings for headache and fewer correct serial seven subtractions in subjects taking L-theanine alone [13], whereas improved RVIP accuracy, faster digit vigilance reaction times, and attenuated increases in (self-reported) mental fatigue were noted in subjects taking caffeine alone. Quicker simple reaction time, quicker working memory (in terms of numbers) reaction time, and better accuracy of sentence verification were features of the L-theanine-caffeine combination. Ratings of "headache" and tiredness were reduced, while ratings of alertness were increased. A significantly positive interaction on the reaction time of delayed word recognition was also measured in participants taking the fusion of L-theanine and caffeine. Although promising, research is still needed to further understand the neurochemical effects of L-theanine and caffeine in combination at the receptor level to better explain these findings. The 2021 study by Baba et al. showed improved attention both during and after stress loading with a single dose of caffeine [14]. Mild stress was induced using the Uchida-Kraepelin test (UKT), while cognitive function was evaluated using the Cognitrax. The Cognitrax is an assessment procedure that uses reliable computerized neuropsychological tests to evaluate the neurocognitive status of patients, covering a range of mental processes from simple motor performance and attention to memory and processing speed. The Cognitrax showed reduced reaction times in participants who took a single dose of matcha. Matcha's caffeine content may have been the underlying cause of this observation. Participants who continuously took L-theanine (by ingestion of matcha) completed greater amounts of work while the UKT was administered, similar to those on a single dose of caffeine. The researchers concluded that attention and work performance improved for participants experiencing induced psychological stress when taking matcha in combination with caffeine versus caffeine alone. However, the results may be skewed, as many participants already had a habit of drinking green tea. Furthermore, since no quantifiable biomarkers were measured, nor were qualitative assessments of participants' perceived stress performed, further investigation of the combination's effect on stress, performance, attention, memory, and overall cognition is advised.

Conclusions

Caffeine and L-theanine are natural compounds found primarily in coffee and tea, respectively. The combination has shown improvement in short-term sustained attention and overall cognition. Reversed task-related mind-wandering and improved inhibitory control were also seen among boys with ADHD, while improvements under mild acute stress and an increased amount of work completed were noted in a population of men and women aged 50-69 in Japan. After reviewing the studies, we found that the combination shows favorable clinical significance in the domains of attention, memory, cognition, and hyperactivity. Overall, we conclude that the combination of L-theanine and caffeine is likely a safe and effective cognitive enhancer.
Further research is still needed to explain the aforementioned limitations. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-01-01T16:04:29.506Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "1c9ff07ce606cab7621d87de85ddffebe9ab3911", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/72733-the-cognitive-enhancing-outcomes-of-caffeine-and-l-theanine-a-systematic-review.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "538cfd9b54d9be070ae2bef3e8e992b53702e472", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
220526124
pes2o/s2orc
v3-fos-license
Politics of Prevention: Reflections From the COVID-19 Pandemic

The COVID-19 pandemic from a prevention science perspective, including research topics, is discussed. Political considerations that influence prevention activities, with examples from the pandemic and from more typical prevention initiatives in schools and communities, are presented. The definitions of prevention science and prevention interventions are delineated, and a brief summary of prevention history is given. The relationship between health disparities and COVID-19 is discussed. Two theoretical perspectives that may help to inform the effectiveness of COVID-19 prevention measures, the health belief model and the theory of reasoned action and planned behavior, are summarized. This article emphasizes the importance of adapting prevention applications to the intended recipients, especially ethnic and cultural groups. The need to strengthen prevention training in graduate education, and strategies to reform that education to meet accreditation and licensing standards, are suggested.

Daily news reports, social media messages, press conferences, and other sources provide information and opinions about the coronavirus pandemic that has swept across the world like a major natural disaster. However, unlike natural disasters, the novel virus, and COVID-19, the disease it causes, know no boundaries, and at this writing neither a vaccine nor a therapy has been developed to control the virus. Furthermore, despite months of study by expert specialists across the global scientific landscape, much is unknown about the virus, although, hopefully, more will be understood by the time this article is published. Thus, as of now, prevention is the cornerstone and main strategy to control and mitigate the spread of the virus. Although COVID-19 research has been initiated among social scientists, the research projects this author has seen focus on the important psychosocial effects of the virus, such as managing anxiety and stress and providing psychological support. This author, appreciating that his sources are limited, has yet to see a social science research project that studies the effectiveness of recommended prevention interventions or other virus prevention initiatives from a psychosocial perspective. The National Institute of Mental Health (2020) recently published its strategic plan for research with prevention and cure as one of its major goals. It is timely to mount interdisciplinary research projects that address the psycho-social-behavioral aspects of COVID-19 prevention recommendations and other initiatives. Therefore, it is appropriate that the inaugural issue of the Journal of Prevention and Health Promotion (JPHP) includes a paper that speaks to this historic global pandemic, which relies primarily on prevention science and prevention interventions to reduce illness and death caused by the virus.

Prevention is an interdisciplinary science, with contributions from many specialties. However, my primary area of training and specialization is prevention psychology. Therefore, I am writing this article from a prevention psychology perspective and recognize that other specialists may offer differing and complementary perspectives. The article is organized in five sections. Initially, distinctions between prevention science and prevention interventions are reviewed, along with a brief history of prevention.
This discussion is followed by the influence of political considerations on prevention interventions, whether smaller scale interventions or major global interventions recommended to contain COVID-19. In the health disparities, prevention, and COVID-19 section, U.S. population health and economic disparities exposed by the pandemic and their influence on COVID-19 prevention recommendations are highlighted. The section prevention applications: understanding the audience provides guidance for the development of prevention applications. In this section, two theories are summarized: health belief model (HBM; Hochbaum, 1958;Rosenstock, 1974) and theory of reasoned action and planned behavior (TRAPB; Ajzen, 1991;Fishbein, 1967). They are presented as examples of theories with long histories studying prevention interventions and relevant within a COVID-19 prevention context. The future directions: implementing a prevention agenda for applied psychology section of the article offers suggestions for prevention research related to the pandemic, and recommendations for training in prevention science including multidisciplinary education in applied psychology. Throughout the article, examples as they apply to COVID-19 prevention interventions are discussed, as well as prevention projects that might be implemented within local institutions and communities. Prevention Science and Prevention Interventions Prevention science is an interdisciplinary specialization that draws expertise from multiple disciplines, including psychology, social work, medicine, public health, economics, and public policy. The Society for Prevention Research states that the major goal of prevention science "is to improve public health by identifying malleable risk and protective factors, assessing the efficacy and effectiveness of preventive interventions and identifying optimal means for dissemination and diffusion" (Society for Prevention Research, 2011, p. 3). This goal encompasses a broad range of human ecology across the life span and, within various environments, whether they be schools, communities, or nations, to maximize health and well-being. Prevention science is the foundation for the development of prevention interventions. Early on, Caplan (1964) developed a now classic framework to categorize prevention interventions. Caplan called prevention interventions (a) primary (to prevent a disease or illness and suitable for everyone, such as mass media vaccination messages), (b) secondary (delivered to those at risk, such as teen sex education programs), and (c) tertiary (to reduce the impact of an existing problem, e.g., rehabilitation programs for stroke victims). Caplan's framework was initially designed for public health or medical preventive interventions, such as childhood vaccinations, although the framework has been regularly applied to social, emotional, and behavioral interventions. However, in the context of behavioral health, primary prevention may not be a goal as preferred behaviors may change at different periods of a person's life. For example, a school-based prevention intervention goal might be to reduce teen pregnancy or delay alcohol use through psychoeducational interventions, but these will change as the adolescent matures into adulthood. As a follow-up to Caplan (1964), Gordon (1987) presented a continuum of prevention interventions that he labeled (a) universal, (b) selective, (c) indicated. Universal interventions, like primary prevention, are for everyone within a population or targeted group. 
Selective and indicated interventions (like secondary prevention) are designed for those at lesser or greater levels of risk in relation to the problem or disorder. Gordon did not believe that tertiary interventions belonged within a prevention intervention classification scheme because the problem had already occurred. Gordon's intervention classification was adopted by the Institute of Medicine's Committee on Prevention of Mental Disorders (Mrazek & Haggerty, 1994). More than 20 years ago, Romano and Hage (2000) expanded on earlier categories of prevention interventions presented by Caplan (1964) and Gordon (1987) to include the promotion of individual protective attitudes, behaviors and skills (protective factors), and systemic and advocacy interventions to promote health and well-being. Others have also expanded prevention interventions to include promotion of protective factors (Conyne, 2004;Cowen, 2000;National Research Council and Institute of Medicine, 2009) and advocacy for systemic interventions that promote community health (Pieterse et al., 2013;Prilleltensky, 2001). In terms of individuals and communities, promotion of protective interventions might include, for example, strengthening family-based services, offering affordable and quality child care services, providing community parent education programs, conducting workshops on job-seeking strategies, and promoting increased community adolescent recreational opportunities. Numerous examples have been implemented in schools for many years, including social-emotional learning programs designed to foster healthy peer relationships, self-awareness, and enhance self-esteem. Since mid-March 2020, U.S. public health professionals have strongly recommended practices to protect citizens from COVID-19. Very quickly, most citizens know about the potentially lifesaving behaviors, for example, stay at home and maintain social distance when outside the home, frequent handwashing, and masks in public. These behaviors of lifestyle rapidly became very common for most people across the globe. It is ironic that given the tremendous advances in medicine and other fields during the last 100 years, as of now, these protective preventive interventions are the best tools to contain the spread of the virus. Studies of COVID-19 preventive interventions offer rich potential to prevention scientists, researching topics such as effectiveness of recommended behaviors, compliance across different demographic groups, and effectiveness of varying media messages. Systemic prevention interventions that enhance personal, social, and physical well-being across institutions, communities, and larger entities, such as cities, states, or countries, have been advocated across many different problem areas (American Psychological Association [APA], 2014). For example, tobacco use and secondhand exposure is a major health hazard. As a result, amid much controversy, many communities across the United States and beyond prohibit the use of tobacco products in bars, restaurants, and other public places, such as outdoor recreational areas. To reduce addiction risk among teenagers and young adults, communities have also enacted preventive legislation by increasing to 21 years the legal age to purchase tobacco products. Another systemic intervention example is the restrictions on the marketing and purchasing of vaping products and e-cigarettes as communities have moved quickly to control advertising and purchases. 
The Centers for Disease Control and Prevention (CDC, n.d-b) has put forth strong recommendations against their use, considering them unsafe for youth, young adults, pregnant women, and adults who are not using tobacco products. Furthermore, although they may have some benefits to help tobacco users stop using tobacco, the health risks are unknown as is their ability to assist in smoking cessation. As such, e-cigarettes and vaping have been heavily regulated or banned in many countries and in several U.S. states (CDC, 2019; Global Center for Good Governance in Tobacco Control, 2019). Several years ago, South Korea initiated a country-wide initiative to prevent internet addiction (Cho, 2017). The systemic intervention includes several components delivered across the population, including addiction prevention education in schools, training internet addiction counselors, and comprehensive social media campaigns. In the United States, an ongoing and contentious battle on gun control and gun availability has been waged over many years (Spitzer, 2016), and the American Public Health Association calls gun violence an epidemic (Benjamin, 2015). Many scholars and prevention specialists argue that stricter gun-control measures save lives, whereas opponent objections are based on the second amendment of the U.S. Constitution (right to bear arms) and restriction of individual freedoms. In the United States, and other countries, many systemic prevention strategies are recommended and, in some cases, required, in attempts to mitigate the spread of COVID-19. Several states instituted "stay at home" policies and other recommendations. However, these measures have resulted in a severe economic depression across the country. The economic consequences have created a vigorous debate about the necessity for the prevention recommendations in parts of the United States. Although legislation has provided some financial compensation for businesses, and unemployment benefits for employees, the effects of the economic decline are devastating for many in the United States. The debate is a reminder that political considerations are very important to address when designing prevention interventions. Politics of Prevention Political considerations can influence the level of support for preventive actions. Therefore, it is important that prevention specialists consider the political dynamics that may surround a prevention intervention proposal, whether on a small scale as in one school, or a large school district or community. Although prevention specialists will be excited about an intervention they wish to implement, they must be cognizant of the political dynamics that surround an intervention. Therefore, it is necessary for prevention specialists to carefully assess sources of support for and resistance to an intervention. An intervention that is well supported in one locale or group may lack support in another group or setting. Careful attention to communicating with key stakeholders at the earliest stages of a prevention project is critical. As the COVID-19 pandemic has unfolded, preventive recommendations to reduce the virus spread have exposed major differences among stakeholders, regions, and political beliefs. The differences include social distancing and face mask use recommendations and timelines to open businesses, gatherings for religious purposes, and recreational areas. The core controversies center around economic issues, citizen health and well-being, and individual freedom versus the common good. 
Specialists from fields such as medicine and public health, and government officials debate the urgency and actions needed. The differences have become more disparate as the pandemic has evolved. Some become impatient with prevention recommendations as they impinge on personal freedoms and reduce sources of financial and social support and pleasure. Of course, political disagreements surrounding the prevention of the COVID-19 virus are much greater and immediate threats to health and well-being compared with more typical prevention applications that specialists offer in schools, communities, and workplaces. However, knowing about and considering differences among stakeholders are critically important for the success and sustainability of a prevention project. As an example, instructive for this discussion with relevance to prevention and psychology, is the process to gain APA approval of the Guidelines for Prevention in Psychology (APA, 2014). The Guidelines were approved by APA Council after about 5 years of development by a Guidelines Task Force of APA members. Although there were obstacles during the journey to approval, one is especially important in the context of this article. Guidelines drafts were reviewed by APA Committees and Boards as well as stakeholders within the public domain (e.g., state boards of psychology). One of the major concerns of APA governance bodies during the review process was the inclusion of phrases and terms such as "social action" and "advocacy." According to APA governance at the time, guidelines are not designed to promote a social agenda. Thus, to proceed with the approval process, the Task Force made concessions to remove these terms from the title and body of the article. Interestingly, APA has a very active advocacy initiative within its structure, reporting regularly to the membership about its work with policy makers on topics such as promoting social justice and human rights, reducing health disparities, addressing violence prevention, and encouraging members to do likewise. Perhaps APA only objected to the inclusion of the terms in guideline development at the time of approval, and the policy has now changed. However, at the time, the Guidelines Task Force was surprised by the APA position, because much prevention activity is focused on advocacy and social justice Kenny & Hage, 2009;Romano, 2015). Although the Guidelines were eventually approved, APA concerns about terminology and language were unexpected and caused significant delays in eventual approval. Just about everyone agrees that "prevention is better than cure." However, prevention specialists, especially those newer to the field, would be wise to consider differences among recipients and stakeholders. The implementation of prevention projects will often be supported or resisted in ways that mirror the larger population in which the prevention project is implemented. Furthermore, as seen with COVID-19 prevention recommendations, recipients and stakeholders may lose patience with prevention interventions as outcome evaluations do not yield immediate results. Although other types of evaluations (e.g., formative) are useful, stakeholders (e.g., community leaders, political figures) may expect an intervention to correct a problem rapidly. 
However, as seen with the hurried attention to develop a COVID-19 vaccine, infectious disease scientists remind us that development will take considerable time, require collaboration across the scientific community, and incur considerable costs before its effectiveness and safety can be established (Corey et al., 2020). Of course, developing a vaccine for a worldwide pandemic does not compare with local psychosocial prevention interventions, but the development, effectiveness, and sustainability of an intervention is, nevertheless, demanding and time consuming. In an APA convention presentation (Romano, 2013), I discussed three issues, not mutually exclusive, that are likely to lend controversy to prevention interventions, even though, at the outset, all might agree that the prevention idea is good. The issues are (a) values, (b) morality, and (c) economics. First, understanding individual and community values related to potential prevention interventions is important. A value-related issue is differences between the needs of the individual and needs of the community. What is good for the community may not be supported by individuals. In highly individualistic cultures such as much of the United States, collectivistic beliefs will create controversy. In the COVID-19 pandemic, recommendations to practice social distance, stay-at-home, and face masks, as measures to protect community health, have been resisted and angrily protested in U.S. cities. The issue is complex due to differing values between individuals, communities, and regions of the United States. Furthermore, due to work requirements and socioeconomic levels, some do not have the luxury of staying at home (e.g., health care providers, grocery store employees). Brown (2020) comments that stay-at-home and social distance recommendations are choices available to wealthier members of society, less so for members of lower socioeconomic groups. In collectivistic societies, with values and behaviors associated with community benefits, rather than individuals, citizens are more accepting of country-wide policies that have the potential to reduce community spread of COVID-19. Drawing comparisons between countries is difficult, due to factors such as enforcement of preventive regulations, availability of virus testing, methods of reporting, and population density. However, a few examples are illustrative. As of May 16, 2020, the United States had 4,526 COVID-19 cases and 269 deaths per 1 million population, whereas South Korea had 215 COVID-19 cases and five deaths per 1 million population, Singapore had 4,681 COVID-19 cases and four deaths per 1 million population, and Malaysia had 213 COVID-19 cases and 3 deaths per 1 million population (Worldometer, 2020). All three Asian countries, with a tradition of collectivism, have much lower death rates compared with the United States. Although Singapore's incidence rate is like the United States, the other two countries have much lower incidence rates compared with the United States. Values are also related to the use of contact tracing, a prevention strategy used by public health professionals to mitigate spread of community disease. Contact tracing is a process of contacting individuals who have been in close contact with someone who tested positive for the virus to recommend selfquarantine. Contact tracing is used in different countries and the United States. 
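As a rough, hypothetical sketch of the contact-tracing logic described above (not drawn from any system cited in this article), the example below walks a contact log and flags people seen within a fixed look-back window of a positive case; the record fields, the 14-day window, and the find_exposed_contacts helper are all assumptions made for illustration.

```python
from datetime import date, timedelta
from typing import List, NamedTuple, Set

class Contact(NamedTuple):
    person: str          # person who was near the index case
    index_case: str      # person who later tested positive
    contact_date: date   # date of the close contact

# Hypothetical exposure window: look back 14 days from the positive test
EXPOSURE_WINDOW = timedelta(days=14)

def find_exposed_contacts(log: List[Contact], positive_person: str,
                          test_date: date) -> Set[str]:
    """Return people who had close contact with the positive case within the
    exposure window and should be advised to self-quarantine."""
    earliest = test_date - EXPOSURE_WINDOW
    return {
        c.person
        for c in log
        if c.index_case == positive_person and earliest <= c.contact_date <= test_date
    }

# Toy usage with made-up data
log = [
    Contact("Ana", "Ben", date(2020, 5, 10)),
    Contact("Chi", "Ben", date(2020, 4, 1)),   # too long ago to count
]
print(find_exposed_contacts(log, "Ben", date(2020, 5, 15)))   # {'Ana'}
```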
However, the strategy offers disadvantages, including training of public health personnel who are not familiar with contact tracing, costs, reluctance of people to accept information when notified that they have been exposed to the virus, and resistance of citizens to submit to government surveillance (Temple, 2020). The last disadvantage will be especially prominent if widespread surveillance is conducted via cell phone apps. Citizens in more individualistic countries are more likely to resist what they perceive as threats to freedom and privacy, and governmental interference. Singapore has been using contact tracing via cell phone apps since March 2020, perhaps one reason for the country's low COVID-19 death rate. The United Kingdom is developing a similar cell phone plan as a strategy to more quickly reduce virus spread and open the country to increased freedom of movement (Chowdbury et al., 2020). Values influencing prevention interventions were also revealed in the debate about cigarette smoking. In some locales, tobacco use is prohibited in closed spaces, and some cities also prohibit tobacco use in outdoor areas. Tobacco use regulations vary across U.S. communities. Similarly, in the context of schools, differing values among educators about the amount of time children are excused from academic classes to participate in social-emotional learning activities requires discussion. Prevention specialists need to work with educators and parents to balance academic instruction with proposed psychosocial prevention activities to reduce resistance to the intervention. Methods to resolve differences will be different based on school subjects, student grade level, school administrators, and parental preferences. The second issue to consider in prevention intervention planning is morality. An example from the COVID-19 pandemic is the issue of attendance at religious ceremonies and events when stay-at-home and social distancing orders are in place. Some argue that during this time of distress and need for community, it is especially important that people congregate with members of their faith community. Others contend that following the stay-at-home recommendation is the more moral position to stay healthy and minimize the virus spread. In a school-based example, some parents will accept and deem important prevention programs that teach sex education to develop healthy sexual behavior, reduce teen pregnancy, and promote respect and acceptance of different sexual identities. Other parents will disagree, stating that this type of education is best left to parents and the family. Also, bully prevention programs in schools generally receive strong support. A component of such programs to indirectly reduce bullying behaviors might include promotion of social groups and increased mental health support of students who are more likely to be bullied (e.g., lesbian, gay, bisexual, transgender, and queer [LGBTQ] students, special needs students). The need for such interventions is best explained to parents and stakeholders who may not be fully aware of the importance of the intervention in a comprehensive bully prevention program. The third issue that merits discussion is the economics of prevention. Finances may be a more acceptable form of resistance and used to camouflage other reasons for resisting, "this is a good idea, but we just can't afford it." 
This argument has been used in the COVID-19 pandemic as local and national leaders debate the importance of relaxing stay-at-home recommendations to support local businesses and community economies. Similarly, communities in the United States have outlawed the sale of electronic vaping devices to anyone below 21 years. Cities have instituted such laws based on the potential harmful effects of vaping and the danger of nicotine addiction, especially in the brain development of adolescents. However, stores that sell these products may lose business, similar to bans on selling tobacco products to adolescents and young adults. Another economic issue relates to the mental health of youth and young adults. Specifically, the need for mental health services for children, adolescents, and postsecondary students is growing rapidly, and resources to serve students in educational institutions are inadequate (Hunt & Eisenberg, 2010; Kaffenberger & O'Rouke-Trigiani, 2013; Oswalt et al., 2020). The units that house school counselors, school social workers, college and university counselors, and psychologists are often understaffed in educational institutions. Mental health professionals are heavily engaged in crisis-intervention work, which leaves less time for prevention activity. Data showing school counselor shortages have been presented for many years by the American School Counselor Association (ASCA). ASCA recommends a ratio of one school counselor to 250 students, whereas the mean ratio across the United States is 455 students to each counselor, with a range across the states from a low of 202 to 1 to a high of 905 to 1 (Bray, 2019). Different reasons across the states can account for such large discrepancies, but insufficient funding to support mental health professionals in schools and higher education usually revolves around limited public education funding and differing educational priorities (Mitchell et al., 2017). Recent advocacy for increased student mental health support occurred in the St. Paul, Minnesota, school district when teachers went on strike in March 2020. This was the first district strike in 74 years. One of the main grievances of the educators was lack of student mental health support personnel. The strike ended just before the schools closed due to the pandemic, but not before the district agreed to increased funding for student mental health personnel. Funding decisions and values are intertwined, as values dictate spending, whether in personal finances or within a large unit or system. Funds are dispersed based on values, and funding will dictate the strength and scope of prevention initiatives. A disadvantage of many prevention interventions is that immediate results are not usually realized. Therefore, prevention leaders must keep stakeholders engaged in the project through regular reporting of progress and evaluation processes. A final example that relates to values and funding is the suspension in fall of 2017 of the National Registry of Evidence-Based Programs and Practices (NREBPP) by the U.S. government. In January 2018, NREBPP was no longer funded by the U.S. government. NREBPP was a Substance Abuse and Mental Health Services Administration (SAMHSA) program that had evaluated prevention programs across topics and age groups since 1997. Despite objections to the closure of NREBPP from different sectors of the country, federal health officials stated that NREBPP had a flawed system of evaluating programs, and a new system would replace it.
The new system, also sponsored by SAMHSA, is called the Evidence-Based Practices (EBP) Resource Center. However, Green-Hennessy (2018) stated that NREBPP had a long history, and the system had been strengthened over the years, and rather than replace NREBPP, the money could have been better spent to eliminate weaknesses or flaws in NREBPP. Perhaps there were other motivations for replacing NREBPP, but its demise was shocking to prevention specialists as NREBPP was an important resource. Hopefully, the EBP Resource Center is sufficiently improved compared with NREBPP to justify the funds to create it. Health Disparities, Prevention, and COVID-19 As the COVID-19 pandemic spreads across the United States, vast differences in incidence and death rates within population groups are observed. Although the data are incomplete as most jurisdictions have not reported data by race and ethnicity at this writing, what has been reported is alarming and distressing. For example, news outlets report that African Americans in some of the largest cities account for many more virus incidences and deaths, disproportionate to their numbers in the population. Data from Chicago show that, although people who are Black make up 30% of the city's population, they account for 68% of the city's COVID-19 fatalities and 58% of the virus cases. Similar data were found in Milwaukee, where people who are Black make up 26% of the city's population, but account for 81% of deaths. Michigan and Louisiana show similar disproportionate data (Cineas, 2020; Johnson & Buford, 2020). Similarly, the CDC (n.d-a) reports New York City data showing virus death rates substantially higher for people who are Black/African Americans and Hispanic/Latinx persons compared with people who are White and people who are Asian. As of mid-April 2020, data show the death rate for people who are Black at 92.3/100,000, Hispanic/Latinx persons at 74.3/100,000, people who are White at 45.2/100,000, and people who are Asian at 34.5/100,000. The devastating impact of the virus on the Navajo Nation populations was reported by Silverman et al. (2020), showing that the Navajo Nation had the highest per capita cases of COVID-19 in the United States at 2,304/100,000, surpassing New York City at 1,806/100,000. Multiple reasons account for these disparities, including the U.S. history of racism against ethnic minorities that leads to discrimination, low socioeconomic status, inadequate or lack of health care, limited English language proficiency, immigration status, housing in confined spaces, and homelessness. Furthermore, the pandemic's universal prevention recommendations are difficult or impractical to follow for many. Frontline employees (e.g., health care personnel, factory workers, grocery store employees) have work responsibilities that cannot be conducted from a distance, and they are often lower paid. Thus, they do not have the luxury of following stay-at-home recommendations (Brown, 2020). The COVID-19 pandemic has shed a bright light on health care inequities and disparities in the United States. Health disparities have been a focus of scholars and U.S. officials for some time. The U.S. Office of Disease Prevention and Health Promotion (n.d.) notes that groups within the United States experience health disparities that contribute to poor health and limit their ability to achieve maximum health. Groups include those based on race and ethnicity, sex, sexual identity, age, disability, socioeconomic status, and geographic location.
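One simple way to express the disproportionality cited above is the ratio of a group's share of deaths (or cases) to its share of the population. The sketch below recomputes that ratio for the Chicago and Milwaukee figures quoted in the text; the term "disparity ratio" and the helper name are this illustration's own, not the article's.

```python
def disparity_ratio(share_of_outcome: float, share_of_population: float) -> float:
    """Ratio > 1 means the group bears a disproportionate share of the outcome."""
    return share_of_outcome / share_of_population

# Figures cited in the text, expressed as fractions
chicago_deaths   = disparity_ratio(0.68, 0.30)   # ~2.3x their population share
chicago_cases    = disparity_ratio(0.58, 0.30)   # ~1.9x
milwaukee_deaths = disparity_ratio(0.81, 0.26)   # ~3.1x

for label, value in [("Chicago deaths", chicago_deaths),
                     ("Chicago cases", chicago_cases),
                     ("Milwaukee deaths", milwaukee_deaths)]:
    print(f"{label}: {value:.1f}x population share")
```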
Research from different scholarly perspectives has examined health disparities, including differences between rural and urban areas (James et al., 2017), impact of racial oppression on health outcomes (Gale et al., 2020), public policy solutions to address disparities (Assari, 2018), health care experiences of transgender binary and nonbinary university students (Goldberg et al., 2019), and access to integrated health care (Buki & Selem, 2012;Tucker et al., 2019). In addition to spotlighting health inequities, COVID-19 has also exposed extreme xenophobia, racial harassment, and discrimination primarily against Asian populations. A few U.S. leaders may have fueled this behavior by referring to the virus as the "Chinese virus," which some may interpret as people of Chinese ancestry spreading the virus, although leaders have denied the accusation. Although face masks have become more regularly used as the virus has spread across the country, some Asians feel stigmatized by using them, and thus, putting their health at risk (Zhou et al., 2020). The social, emotional, psychological, and behavioral components of preventing COVID-19 illness and deaths are important areas of study for prevention scientists. However, regardless of whether prevention interventions are large or small, to maximize positive outcomes, the interventions must be culturally relevant and prevention specialists culturally competent, partnering with population groups receiving the prevention intervention (Reese &Vera, 2007). The next section will further expand on this topic. Prevention Applications: Understanding the Audience The above discussion provides examples on how differing values, morality, funding, and ethnic and socioeconomic disparities can influence prevention initiatives, whether they be worldwide and very dangerous pandemics such as COVID-19 or local prevention applications. This section will summarize suggestions to assist prevention personnel as they develop prevention projects, and present them to stakeholders, including policy makers, community groups, and project recipients. It is understood that each stakeholder group may have different opinions about a prevention project, and they are likely influenced by their values, questions of morality, and funding considerations. Therefore, the prevention specialist must be willing to dialogue with members from each of the stakeholder groups prior to initiating an intervention. Some of the dialogue may be informal or in formal group meetings. Prevention activities that seem quite important and necessary to the prevention specialist may not be so for others who will have control over the implementation process, ongoing activities, evaluation, and sustainability of the intervention. The setting for a prevention intervention can vary from a relatively small institution (e.g., schools) to larger community settings, or, as with the pandemic, a global initiative. In the United States, pandemic media coverage is primarily focused on the United States, but there are implications for other nations in terms of working together to prevent virus spread. For example, nations are restricting air and sea travel across borders, and nations are collaborating on sharing medical supplies and working to develop a therapy and vaccine. However, some of the issues have been contentious and opinions vary on the importance of collaboration across nations and among political leaders. 
The United States and other nations are operating in unchartered waters with respect to COVID-19 decision making, as the last global pandemic occurred in 1918, when population size, health industries, communication systems, and world dynamics were very different. Countries determine to what extent they will collaborate, either through global organizations, such as World Health Organization, or within regions. Decisions will be driven by values, beliefs, trust, and importance attached to collaborate versus going it alone. Within the United States, several adjoining states have formed collaborations to share knowledge and strengthen the impact of their prevention measures. Similarly, prevention initiatives on the local level are likely to be successful and sustainable if local leaders, recipients, and beneficiaries of the prevention initiative are consulted from the very beginning of the project. One way to begin the dialogue is the formation of an advisory group. This group is best composed of members who have technical expertise about the project, represent the cultural and demographic characteristics of the community (or school), and are political stakeholders in the community. It is important that one or two coleaders of the group are invested in the success of the project but who have not initiated the project. The advisory group can then begin to discuss the project in relation to community needs and how best to meet the need. In developing prevention activities, it is recommended to consider not only behavior that needs to be prevented (e.g., school bullying) but also behaviors that are promoted to serve as protections for individuals and the larger community (e.g., respectful and inclusive school environment). Comprehensive prevention projects are best designed to stop or decrease problem behaviors by reducing risk factors, promoting protective factors, and addressing community (school) wide interventions that reduce risks and support protections. Thus, a robust prevention project will emphasize activities that are individual or small group oriented, as well as systemic interventions designed to reduce risks and promote protections across the system whether a school, school district, city, or other entity. Major COVID-19 prevention recommendations to prevent spread of the disease include stay-at-home, frequent handwashing, maintain social distance, and wear face masks to reduce risk and increase protection for self and others. The guidelines are followed and enforced in varying degrees of consistency within the United States and globally. Citizens decide the best behavior for themselves and the community, not unlike other prevention recommendations (e.g., seasonal flu shot, refrain from tobacco use). Although it took many years for some jurisdictions to approve legislation to restrict cigarette smoking in public places, for example, the highly contagious coronavirus does not allow the luxury of time, and citizens are dependent on public health and political leaders to offer prevention recommendations for the good of society. However, as with other types of prevention recommendations, individuals have freedom of choice to follow them in most countries. Most prevention specialists will have more modest and less immediate goals compared with stopping a global pandemic. 
There is a long history of prevention and promotion interventions across institutions and communities, such as preventing sexual harassment and abuse on college campuses, reducing gun violence in communities, promoting social-emotional learning in children and youth, ending illegal drug use and inappropriate use of legal drugs across the life cycle, and preventing suicide (Vera, 2013). These problem behaviors are traumatic and potentially deadly. Fortunately, there are examples of prevention programs to reduce or eliminate problems within a given context. SAMHSA's EBP Resource Center, cited above, is one resource to search for prevention initiatives that have been reviewed and evaluated. However, it is recommended that prevention activities be adjusted or adapted to a location and population, as one set of activities and evaluation tools successful in one locale may not be effective in another context (Romano & Israelashvili, 2020). This recommendation was observed in prevention projects that were developed in different countries, but prevention scientists and specialists adapted the previously developed prevention activities to meet the needs and requirements of their own region or country (Israelashvili & Romano, 2017). Prevention is an interdisciplinary science, but it is not atheoretical. Prevention activities are best grounded in a theoretical framework that will support the intervention activities and the evaluation process. Some of the more commonly taught theories of psychotherapy for clinical use have formed a theoretical basis for prevention interventions (e.g., cognitive-behavioral; Christensen et al., 2010; Montgomery et al., 2009). Motivational interviewing, with person-centered theory as foundational, has also been used in a variety of prevention interventions (e.g., Strait et al., 2012). The transtheoretical model of behavior change has a long history of use within a prevention framework, especially in interventions that address behavioral changes to improve health outcomes (e.g., Prochaska et al., 2009). In the following sections, two theoretical perspectives (i.e., health belief model [HBM] and theory of reasoned action and planned behavior [TRAPB]) will be summarized. These were chosen because of their long history within prevention science, and readers may not be familiar with them. Health Belief Model (HBM) HBM was developed within the U.S. Public Health Service in the 1950s to help understand reasons for people not participating in tuberculosis screenings to prevent the illness and promote early disease detection (Hochbaum, 1958; Rosenstock, 1974). The prevention goals for COVID-19 are similar in terms of prevention and disease identification. The HBM researchers found that a person's beliefs about a disease and the need for screening helped to differentiate those who participated in the screening and those who did not. HBM can be applied to COVID-19 and people's willingness to use prevention measures. According to HBM, four personal health beliefs are predictive of whether a person is likely to adhere to prevention recommendations and participate in screenings. They are (a) perceived susceptibility to the disease, (b) perceived severity of contracting the disease, (c) perceived benefits of participating in the prevention measures, and (d) perceived barriers and disadvantages to participating in prevention activities (Romano, 2015). Much research has been conducted to validate HBM variables in diverse populations in the United States and other countries.
Examples of the research projects include willingness of low-income African American women to participate in cancer screenings and promoting behaviors that reduce sexual risks (Champion & Sugg Skinner, 2008). As applied to preventing COVID-19, HBM offers explanations for behaviors. For example, young adults on Southern beaches likely perceive themselves as less susceptible to the virus, compared with older adults. However, as knowledge about the virus has increased, young and middle-aged adults have also been victims of the disease, although not as severely as older persons. Those who understand and accept the benefits of pandemic prevention recommendations compared with the disadvantages will more likely use them. According to the HBM framework, delivering targeted pandemic prevention information to subgroups of citizens based on the four HBM beliefs promises to yield more favorable compliance outcomes. HBM has value as a theoretical framework for more typical prevention projects, especially those related to preventing behaviors that impair health. For example, HBM can be helpful to understand behaviors that place adolescents at risk of sexually transmitted infections, pregnancy, and drug and alcohol use. The four components of HBM can give prevention personnel a framework to better understand resistance to following prevention messages and participating in prevention activities. However, it is important to assess the health beliefs of the group receiving the intervention prior to developing prevention messages and activities. Theory of Reasoned Action and Planned Behavior (TRAPB) Theory of reasoned action (TRA) has a long history, dating back to Fishbein (1967), who developed the theoretical framework to better understand the relationship between personal beliefs, attitudes, and behavior. Several years later, Ajzen (1991) added planned behavior (PB) as an extension of TRA to address the amount of control that individuals believe they have over one or more behaviors. TRAPB is more complex than HBM, as TRAPB addresses several variables that can influence participation in a health promotion or prevention campaign. TRAPB posits that intentions to carry out a desired behavior are more likely to be followed if the individual's attitudes, the social norms of those important to the person, and perceived personal control support the desired behavior. The relationships of these variables can be presented symbolically as: behavior ~ intentions ~ (attitudes + norms + control) (Montaño & Kasprzyk, 2008; Romano, 2015). A major component of the theory is a process called elicitation research. The process involves conducting group interviews of a similar but different sample of future intervention participants to ascertain personal beliefs, attitudes, behavioral intentions, social norms, and perceived control over the desired behavior. Once elicitation data are collected, they will inform intervention activities and messages. The theory is widely used. According to a review of 82 theories used in designing and evaluating interventions to change health-related behaviors informed by social scientists, TRAPB was the second most frequently used theory behind the transtheoretical model of behavior change (Davis et al., 2015). According to Fishbein (2000, as cited in Montaño & Kasprzyk, 2008), the theoretical constructs of the theory have been studied in more than 50 high- and low-income countries. With respect to the COVID-19 prevention recommendations, TRAPB can help explain people's willingness to follow recommendations, as sketched below.
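As a minimal sketch of how the symbolic relation above might be operationalized, intention can be scored as a weighted combination of attitude, subjective norms, and perceived control, with the weights ideally estimated from elicitation data. The 0 to 1 scales, the equal default weights, and the function names below are invented for this example rather than taken from the TRAPB literature.

```python
from dataclasses import dataclass

@dataclass
class TrapbScores:
    attitude: float   # 0 (very negative) to 1 (very positive) toward the behavior
    norms: float      # 0 to 1: how strongly important others support the behavior
    control: float    # 0 to 1: perceived personal control over the behavior

def intention(scores: TrapbScores,
              w_attitude: float = 1/3,
              w_norms: float = 1/3,
              w_control: float = 1/3) -> float:
    """Weighted-sum stand-in for 'intentions ~ attitudes + norms + control'.
    In practice the weights would be estimated from elicitation research."""
    return (w_attitude * scores.attitude
            + w_norms * scores.norms
            + w_control * scores.control)

# Toy example: positive attitude toward mask wearing, weak social support,
# high perceived control (masks are cheap and widely available).
mask_wearing = TrapbScores(attitude=0.8, norms=0.4, control=0.9)
print(f"Predicted intention score: {intention(mask_wearing):.2f}")  # ~0.70
```

In a real study the weights would come from regressing reported intentions on elicited attitude, norm, and control scores for the specific behavior and subgroup of interest.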
For example, does a person's attitude about a prevention recommendation lead to increased use? Do others important to the person follow the prevention guidelines and does the person believe they have control over the behavior? With respect to preventing virus spread, most people have personal control over the CDC prevention recommendations, unless employment requirements reduce their assessment of personal control. Also, their intention to follow recommendations is a function of their attitudes toward the behavior and the level of perceived social support to follow the recommendations. For example, in the United States, some leaders are less likely to follow some of the recommendations, resulting in poor modeling and weakening social support for them. Romano and Netland (2008) describe a hypothetical example of TRAPB. In their example, the authors show how TRAPB and elicitation research are used to reduce physical aggression among sixth-grade boys. Through elicitation research, prevention personnel learn about differences between subgroups of all sixth-grade boys in the school, as it cannot be assumed that all sixth-grade boys (or any group) will have similar beliefs, social support, and perceived personal control to carry out intended behaviors. Without collecting subgroup information about these variables beforehand, differences between subgroups are unknown. Elicitation research provides a process to adjust or better align prevention activities with TRAPB variables important to subgroups, leading to better outcomes. Of course, other theoretical frameworks to guide prevention projects can be considered by prevention specialists. For example, Conyne (2004) has summarized several prevention strategies, including self-competency facilitation, community organizing and systems intervention, and redesign of the physical environment. If a project is based on a theoretical model, project goals, design, activities, and evaluation methods will help to explain outcomes, and hopefully lead to sustainability as future changes to improve the intervention are made based on the theoretical model. Future Directions: Implementing a Prevention Agenda for Applied Psychology Prevention scientists and applied prevention specialists are experiencing a global epidemic of historic proportions. Prevention is the main strategy to prevent the spread of COVID-19. However, despite overwhelming news coverage and mass media reports, little, if any, coverage is presented on the role of behavioral science expertise in helping to control the pandemic. There are many behavioral science specialists devoted to assisting others in this time of crisis. This activity is highly valued and understandable given the emotional impact of the pandemic. In addition, remediation and crisis-intervention education is prominent within the helping professions. Furthermore, the public's perception of applied psychology and other helping professions is to fix problems, rather than prevent them. However, prevention science can be instrumental in assisting in multiple ways during this epidemic. For example, prevention specialists from across disciplines and in research teams are well positioned to study prevention-based research questions. Hopefully, some of the research has begun, and the National Institute of Mental Health (NIMH) prevention research agenda cited above will encourage development of future research projects. 
A few research questions to consider are as follows: (a) Do the major media messages of social distancing, handwashing, and mask wearing serve all segments of the U.S. population equally? (b) How might these messages be perceived within different ethnic, cultural, and socioeconomic groups? (c) What types of media are most effective to reach diverse population groups? (d) How might the health beliefs of different groups influence their adherence to preventive actions? (e) How do attitudes, beliefs, and sense of personal control influence adherence to prevention recommendations? (f) What social influences are most effective to promote the use of prevention recommendations within groups, whether they be family, government officials, or others within personal networks? (f) How does compliance with prevention recommendations compare across nations? These are a few of the questions that can be examined utilizing the expertise of prevention social scientists. It is critically important that professionals from diverse specialties such as psychology, public health, medicine, social work, public policy, and economics work in collaboration in efforts to contain the spread of COVID-19 through preventive measures. As with other specialties, applied psychology must continue to emphasize and encourage the role of prevention within the profession. For example, in counseling psychology, much has been accomplished, including the publication of this inaugural journal issue. However, much more needs to be accomplished during the next decade, and, hopefully, a more robust recognition of the importance of prevention psychology in the public domain and policy decisions will occur. Prevention Training, Accreditation, Licensing The advancement and prominence of prevention psychology, along with prevention science in other social science disciplines, will require adjustments in training strategies to meet accreditation and licensing requirements. Unfortunately, prevention education is seriously lacking in much of applied psychology, although some progress has been made in the last decade (see Hage et al., 2007;Romano, 2015). As Conyne et al. (2008) discuss, there are multiple ways to provide prevention training within graduate education and postgraduate training. One key component to prevention education is encouraging student coursework outside the major area of study. Applied psychology programs are encouraged to make it more possible for students to enroll in courses in fields such as public health, medicine, social work, public policy, and economics. Furthermore, field work, practicum, and internship experiences could also give attention to training experiences in prevention science. This model of multidisciplinary education can also be more widely applied in other disciplines. However, graduate programs in applied psychology are already packed with courses to meet accreditation and licensing requirements, but the APA accreditation process may offer some enlightenment. APA is reviewing accreditation standards for the newly developed master's program in health services psychology (MPHSP; Grus, 2019). My cursory review of the proposed accreditation standards for MPHSP found them lacking in prevention content. Accreditation standards for this program, like doctoral programs, are categorized into broad psychological content areas, and graduate programs usually offer specific courses to meet the standards. 
Because prevention science education is relevant to multiple content areas (e.g., social, affective, cognitive, behavioral), prevention education can be infused across multiple courses instead of one or more stand-alone courses. This strategy would reduce expansion of the curriculum. If graduate programs show that specific courses or multiple courses that include prevention content meet accreditation and licensing board standards, infusion of prevention education is possible. However, such changes require faculty with interest, expertise, and commitment to prevention science, and students who desire such education. The COVID-19 pandemic has highlighted the importance of prevention to reduce disease and death. Although it is hoped that COVID-19 is a once-ina-lifetime pandemic, there will be other epidemics that risk health, hopefully on a smaller scale, and the expertise of prevention scientists from the behavioral sciences will be sought. However, apart from health-related epidemics, prevention science must continue to provide guidance and expertise related to major social problems (e.g., bullying and social violence, poor school achievement, drug and alcohol addiction, racial stereotyping, and sexual harassment). This article highlighted the role of prevention science in COVID-19 while providing examples and applications across schools and communities. As a final comment, counseling psychology is commended and congratulated for producing this inaugural JPHP, only the second journal sponsored by the Society of Counseling Psychology (APA Division 17) in its 75-year history. The journal is an important outlet to disseminate prevention research and scholarship by scientists and practitioners from different disciplines and specialties. It took several years to launch JPHP, and now the inaugural issue is published during a massive and deadly global pandemic in which prevention is central to containment of the virus. Appropriately, JPHP is being launched at a momentous time in the history of the world. Declaration of Conflicting Interests The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author received no financial support for the research, authorship, and/or publication of this article.
2020-07-15T13:06:21.849Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "27cb4131231e8785b6cd245057fc41e28c059c66", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7358972", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "d1ba5afd1e82a3d760c85912e78518c7f892b28d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
207889531
pes2o/s2orc
v3-fos-license
Preparation and Characterization of Flame-Retarded Poly(butylene terephthalate)/Poly(ethylene terephthalate) Blends: Effect of Content and Type of Flame Retardant A flame retardant named TAD was synthesized by the reaction of 9,10-Dihydro-9-oxa-10-phosphaphenanthrene-10-oxide and triallyl isocyanurate at first. Then, novel flameretarded materials based on PBT and PET resin were formulated via melt blending with TAD, expandable graphite (EG), and a mixture of both. The effect of flame retardant type and TAD content on the flame behavior of PBT/PET blend was carefully investigated. TAD contributed towards higher LOI value and better UL-94 performance than EG. However, the best V-0 rating in the UL-94 test was achieved by the incorporation of TAD/EG mixture into the resin matrix. TAD/EG combination exhibited clear synergistic effect on both reducing the flaming intensity and increasing the residual char layer, as confirmed by cone calorimeter tests and TGA results. SEM images combined with XPS analysis revealed that expansion and migration of EG locked the P-containing radicals from decomposing TAD into the condensed phase, which led to the formation of compact and continuous char layers. All the results in our studies demonstrate that incorporation of TAD with a charring agent EG is an effective and promising technique to develop flame-retarded PBT/PET material, which has high potential for applications in the areas of electronic devices, household products, and automotive parts. Introduction Poly(butylene terephthalate) (PBT) and poly(ethylene terephthalate) (PET) are commonly considered as two of the most important engineering polyesters in industry [1,2]. PBT shows a wide range of applications in electronic and automotive products due to its rapid crystallization rate, solvent resistance, good dimensional stability, excellent electrical properties, and good processability [3,4]. Unlike PBT, which is primarily used in injection molding parts, the main application areas of PET are fibers, films, and containers for packaging [5][6][7]. PET has higher heat-deflection temperature and stiffness compared to PBT, while PBT demonstrates advantages in crystallization rate, processing, and dimensional stability [8,9]. Blending two or more polymers has been shown as a simple, effective, and low-cost approach to obtain a novel composite with integrated and potential enhanced properties without clearly sacrificing their advantages [10][11][12]. Thus, development of PBT/PET blends has attracted a significant amount of attention from researchers and industry [13][14][15][16]. Blending PBT with PET achieves a product with high electrical insulation properties and good mechanical properties due to the synergistic effect of these two polyesters in the crystallization process [17]. Besides, the crystallization reaction in the gas phase [25,26]. Moreover, to further increase the flame retardant efficiency of DOPO, research efforts have been devoted to design and synthesize novel DOPO derivative by combining DOPO with other flame-retardant agents. Especially, after reacting with triazine-based flame retardants, such as melamine cyanurate (MCA) and melamine polyphosphate (MPP), the flame-retardant effect of DOPO in the gas phase can be significantly enhanced owing to the inert, incombustible Ncontaining gas released by triazine-based flame retardants under heating [27,28]. Recently, an emerging flame retardant named TAD was synthesized by Tang et al. 
[29,30] by combining DOPO with triallyl isocyanurate (TAIC), exhibiting high flame-retardant efficiency in epoxy resin. This TAD was found to act mainly in the gas phase, with an additional slight charring effect, which might be less effective for anti-dripping of PBT/PET blends during combustion. On the other hand, expandable graphite (EG), as an economical and well-performing charring additive, is widely applied to develop a number of fire-retardant applications [31][32][33]. It is an intercalated graphite compound whereby oxidants such as sulfuric acid and potassium permanganate are inserted between the carbon layers of graphite. When exposed to a heat source, EG can expand in the perpendicular direction and generate a vermicular structured layer to protect the matrix from heat flux penetrating inside and retard the further decomposition of the polymer chain [34]. However, if EG is used as the only fire retardant in the polymer, its efficiency is low and limited [35]. Consequently, a new fire retardant system based on the combination of TAD and EG was designed in this article in order to overcome the disadvantages of these two fire retardants and obtain an enhanced synergistic anti-flaming effect. Hence, the aim of this work is to develop novel materials with excellent fire retardant performance based on the PBT/PET blends and the fire retardant system of TAD and EG. First, the flame retardant TAD was carefully synthesized. Next, fire retardant materials were formulated by blending TAD or TAD/EG with a toughened PBT/PET blend reported in our previous work [9]. To the best of our knowledge, research work about introducing this novel flame retardant system based on TAD and EG to improve the anti-flaming properties of polyesters is rarely reported. The effect of TAD loading and addition of EG on the flame-retardant performance, thermal properties, and flaming behavior of flame-retarded PBT/PET blends was investigated using different techniques, such as limited oxygen index (LOI), UL-94 vertical burning test, cone calorimeter tests, and thermogravimetric analysis (TGA). Simultaneously, the flame-retardant mechanism of TAD and the TAD/EG combination on the PBT/PET blend was also explored by scanning electronic microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Synthesis of TAD The TAD was prepared using a one-step synthesis method, as previously described [29]. DOPO (324 g, 1.50 mol) was first melted at 145 °C with mechanical stirring in a three-neck flask. Then, TAIC (124.5 g, 0.50 mol) was introduced into the melted DOPO at an addition rate of 12.45 g per 5 min. The reaction system was then heated to 155 °C with mechanical stirring for another 2 h. The final TAD powder was collected from the cooled and ground reacted product at room temperature. The reaction routine is shown in Figure 1.
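As a quick arithmetic check on the charge quantities above, the stated masses correspond to roughly a 3:1 molar ratio of DOPO to TAIC, i.e., about one DOPO per allyl group of TAIC. The sketch below reproduces that calculation; the approximate molar masses (about 216 g/mol for DOPO and 249 g/mol for TAIC) are assumptions of this illustration and are not stated in the paper.

```python
# Approximate molar masses (g/mol); assumed for this illustration
M_DOPO = 216.2   # 9,10-dihydro-9-oxa-10-phosphaphenanthrene-10-oxide
M_TAIC = 249.3   # triallyl isocyanurate

mass_dopo = 324.0    # g, as charged
mass_taic = 124.5    # g, as charged

mol_dopo = mass_dopo / M_DOPO    # ~1.50 mol
mol_taic = mass_taic / M_TAIC    # ~0.50 mol
ratio = mol_dopo / mol_taic      # ~3.0, i.e., one DOPO per allyl group

# TAIC was added at 12.45 g per 5 min, so charging the full 124.5 g takes:
addition_time_min = mass_taic / 12.45 * 5   # 50 min

print(f"DOPO:TAIC molar ratio ~ {ratio:.2f}")
print(f"TAIC addition time ~ {addition_time_min:.0f} min")
```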
Preparation of Flame-Retardant PBT/PET Blend All the PBT and PET pellets, POE-g-GMA, nucleating agent, and fire retardants were dried in a ventilated oven before processing to avoid possible moisture-induced degradation reactions. The detailed formulations of the different samples are summarized in Table 1. In the resin matrix, the PBT/PET/GPOE/Surlyn 8920 ratio was fixed at 40/60/20/0.3. The different combinations of fire retardants were mixed evenly with the matrix resins and other additives before extrusion. Then, the mixture was fed into a corotating twin-screw extruder (TSE-35A, Nanjing Ruiya Co., Ltd., China). The length-to-diameter ratio of the screw was 48, the screw diameter was 35 mm, and the temperature profile of the barrel was 40-160-180-200-220-230-240-250-245 °C from the hopper to the die. The extruded rods were dried at 80 °C for 6 h and then hot pressed (10 MPa, 5 min, 250 °C) to obtain suitable testing bars for further characterization. Characterization The limited oxygen index was tested on a HC-2C oxygen index meter (Nanjing Shangyuan Analysis Instrument Company, China) according to ISO 4589-1984; the specimens used for the test were 130 mm × 6.5 mm × 3 mm in dimension. The UL-94 vertical burning tests were performed on a CZF-2 instrument (Nanjing Jiangning Analytical Instrument Factory, China). The dimensions of the samples were 130 mm × 13 mm × 3 mm. The thermal combustion properties of the samples were measured with a cone calorimeter (FTT, East Grinstead, UK) as per ISO 5660 at an external heat flux of 50 kW/m². The dimensions of the samples were 100 mm × 100 mm × 3 mm.
The thermogravimetric analysis was performed on a STA409 PC/PG instrument (Netzsch, Bavaria, Germany). Samples of 2 to 3 mg were heated from room temperature to 600 °C at a rate of 20 °C/min under a nitrogen atmosphere. Scanning electron microscopy (SEM) was performed on an S-3400N instrument (Hitachi, Tokyo, Japan) to observe the surface morphology of the char layers formed on the specimens after cone calorimeter testing. Elemental analysis of the residual char from samples after cone calorimeter testing was performed on an AXIS UltraDLD X-ray photoelectron spectrometer (Kratos, Kyoto, Japan). The residual chars were thoroughly ground and mixed before analysis. The tensile, flexural, and impact properties of all samples were tested on a Universal Testing Machine (MTS, Eden Prairie, MN, USA). At least 5 specimens of each formulation were tested, and the average value was calculated. Flame-Retardant Performance The LOI and UL-94 vertical tests were performed to determine the flame performance of the PBT/PET blend and the flame-retarded PBT/PET materials, and the results are summarized in Table 1. Without the addition of fire retardants, the resin matrix displayed an extremely low LOI value of 22.0. In addition, the test bar burned continuously, accompanied by flaming dripping, during the UL-94 test. After the addition of only 4 wt% TAD, the LOI value of the PBT/PET blend increased significantly to 25.2, while the UL-94 classification remained no rating (NR). The LOI values of the samples gradually increased with further addition of TAD to the PBT/PET blends. The UL-94 performance of the PBT/PET blend was also enhanced with increasing TAD content. Similar findings were also observed for the epoxy (EP) thermosets containing TAD. When the TAD content reached 12 wt%, the LOI value of the sample reached 28.4 and the sample passed the UL-94 V-1 rating. When the TAD loading was increased to 16 wt%, the LOI value showed a slight decrease, but the UL-94 rating remained unchanged. The blend with only 12 wt% EG had an LOI value of 25.8, which is lower than that of the sample containing only 12 wt% TAD, indicating that at an equivalent loading TAD had a clearer effect on increasing the LOI value of the PBT/PET blend. Notably, after the incorporation of 6 wt% TAD and 6 wt% EG into the matrix resin, the blend reached the highest LOI value of 29.2 and passed the UL-94 V-0 rating. These results imply that the TAD and EG mixture exerts a more significant flame-retardant effect on the PBT/PET blend than neat TAD. Cone Calorimeter Test The cone calorimeter test was employed to characterize the thermal combustion behavior of the PBT/PET blend and the flame-retarded PBT/PET blends. Figure 2 displays the heat release rate (HRR) curves of the PBT/PET blend and the different flame-retarded PBT/PET materials. Table 2 summarizes selected characteristic parameters obtained from the cone calorimeter test, such as the peak heat release rate (PHRR), total heat release (THR), average effective heat of combustion (mean-EHC), average CO2 yield (mean-CO2Y), and total smoke release (TSR). The PBT/PET blend without flame retardant had an extremely high PHRR value of 1087.7 kW/m². After incorporation of TAD, the PHRR value gradually decreased with increasing TAD content in the blend. The parameter PHRR is usually employed to assess the flammability of materials; thus, the results prove that TAD can effectively inhibit the combustion intensity of the PBT/PET blend, which is also in agreement with the LOI results.
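PHRR is simply the maximum of the HRR trace recorded by the cone calorimeter, and THR is its time integral. A minimal sketch of how these parameters can be derived from an exported HRR time series is given below; the file name and column layout are illustrative assumptions, not the instrument's actual output format.

```python
import numpy as np

# Minimal sketch: deriving PHRR and THR from a cone-calorimeter HRR trace.
# Assumes a hypothetical two-column export with time in seconds and HRR in kW/m^2;
# the file name and column order are illustrative, not taken from the paper.
t, hrr = np.loadtxt("cone_hrr_TAD6EG6.csv", delimiter=",", unpack=True)

phrr = hrr.max()                           # peak heat release rate, kW/m^2
thr = np.trapz(hrr, t) / 1000.0            # total heat release, MJ/m^2 (kW*s = kJ)
mean_hrr = np.trapz(hrr, t) / (t[-1] - t[0])

print(f"PHRR = {phrr:.1f} kW/m^2, THR = {thr:.1f} MJ/m^2, mean HRR = {mean_hrr:.1f} kW/m^2")
```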
The mean-EHC values of the PBT/PET blends containing TAD were also reduced compared to the blend without TAD, indicating that the amount of fuel was decreased. The reduction of mean-EHC is due to the quenching effect of the decomposed TAD fragments, which terminated the free-radical chain reaction of combustion and decreased the amount of fuel. THR is commonly used to evaluate the fire safety of materials in a real fire. As shown in Table 2, TAD contributed to lower THR values of the PBT/PET blends, especially at high loadings, implying that the fire-retardant effect of TAD is more prominent at high loadings, which is in agreement with the UL-94 results. Compared with TAD12, the sample TAD6EG6 containing the TAD/EG mixture showed a similar THR value but significantly lower PHRR and mean-EHC values, which may partly explain why TAD6EG6 passed the UL-94 V-0 rating while TAD12 only reached V-1. Regarding sample EG12 with 12 wt% EG, the HRR curve is shifted slightly to the left compared with the curves of the TAD-containing samples, indicating that EG is readily activated by fire [36]. It is interesting that EG12 exhibits lower PHRR and THR than TAD12, while it shows worse UL-94 performance. This is because EG cannot, like TAD, eliminate the free radicals formed during combustion or release inert gas to dilute the flammable gases, resulting in flaming times longer than 30 s (NR rating in UL-94). The mean-CO2Y value also decreased after the addition of TAD to the PBT/PET blend, which demonstrates that the resin matrix combusted less completely than the PBT/PET blend without fire retardants. This is strong evidence that TAD can effectively hinder the combustion of volatiles in the gas phase during a fire, resulting in less CO2. Furthermore, the TSR value of the PBT/PET blends gradually increased with TAD loading, indicating the formation of more residual char instead of fuel during combustion. Notably, the PBT/PET blend with the mixture of TAD and EG (TAD6EG6) had an even lower mean-CO2Y value but a higher TSR value than the blends with only TAD (TAD12) or only EG (EG12), suggesting that TAD and EG possessed a clear synergistic effect on both inhibiting the burning intensity and promoting char formation. Thermal Stability The TGA curves of TAD0, TAD12, TAD16, TAD6EG6, EG12, and the fire retardants TAD and EG under a nitrogen atmosphere are shown in Figure 3, and some typical data are collected in Table 3. The parameter T5% refers to the temperature at which the weight loss is 5%. The char residue (%) is the unburnt residue at 600 °C. The TGA curves of all the fire-retarded materials showed a similar shape, differing only in the solid residue at 600 °C. As per Table 3, there was no significant difference in T5% between the neat resin matrix and the fire-retardant composites, indicating that neither TAD nor EG exerted its flame resistance by inducing decomposition of the resin matrix. The char residue of the composites with TAD was clearly higher than that of the neat resin matrix. This is because the phosphaphenanthrene group of TAD decomposes to phosphoric acid or polyphosphoric acid compounds, which promote the resin matrix to form more char residue during combustion. Comparing samples TAD12 and TAD6EG6, the combination of TAD and EG resulted in more char residue than TAD or EG individually, which reveals that the combination of TAD and EG had a better charring effect on the PBT/PET blend. In order to determine whether interactions between TAD and EG occurred, the theoretical char residue was calculated from the experimental TGA data of TAD0, TAD, and EG, assuming no interactions. For TAD6EG6 (6 wt% TAD and 6 wt% EG in 88 wt% matrix resin), the theoretical char residue (C) was calculated according to the equation below: C = 0.88 × P + 0.06 × T + 0.06 × E (1) where P, T, and E are the char residues of the PBT/PET blend without flame retardants, pure TAD, and neat EG, respectively. The result, 10.9%, is much lower than the experimental value for TAD6EG6, confirming the synergistic flame-retardant effect between TAD and EG within the PBT/PET matrix, which consequently increases the char residue.
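A minimal sketch of this additivity check follows. It assumes the simple mass-weighted mixing rule written above; the individual residue values are placeholders standing in for the Table 3 entries (which are not reproduced here), and only the mixing rule and the 10.9% theoretical result are taken from the text.

```python
# Sketch of the additivity check for the TAD6EG6 char residue (Eq. 1).
# The individual residues below are placeholders standing in for the Table 3
# values (blend without retardant, pure TAD, neat EG); only the mixing rule and
# the reported 10.9% theoretical result are taken from the text.

w_matrix, w_tad, w_eg = 0.88, 0.06, 0.06      # mass fractions in TAD6EG6

residue_matrix = 8.0     # % at 600 C, placeholder for P
residue_tad = 10.0       # % at 600 C, placeholder for T
residue_eg = 85.0        # % at 600 C, placeholder for E
residue_measured = 16.0  # % at 600 C, placeholder for the TAD6EG6 experiment

residue_theoretical = (w_matrix * residue_matrix
                       + w_tad * residue_tad
                       + w_eg * residue_eg)

print(f"theoretical char residue: {residue_theoretical:.1f} %")
print(f"measured char residue:    {residue_measured:.1f} %")
if residue_measured > residue_theoretical:
    print("measured > theoretical -> consistent with a synergistic charring effect")
```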
Figure 4 shows the digital images of the PBT/PET blend and the flame-retarded PBT/PET materials after the cone calorimeter test. The PBT/PET blend without anti-flame additive left only a small amount of thin char after burning, suggesting a weak char-forming ability (Figure 4A). After incorporation of TAD into the composite, more residual char was formed (Figure 4B), corresponding to the TGA results. However, this char, especially in the middle part, was fluffy and not compact. A similar morphology was also observed for TAD-containing epoxy thermosets, which can be attributed to the increased gas release under the action of TAD during combustion. As shown in Figure 4C, the combination of TAD and EG produced a denser and more compact char than neat TAD, demonstrating the synergistic effect between TAD and EG on char formation in the PBT/PET blend. The microscopic morphologies of these residues were further characterized by SEM, and the results are shown in Figure 5. Large, open holes were clearly observed on the residual char of the neat resin matrix (Figure 5A), which were due to the volatilization of flammable gases during combustion. For the composite with 12 wt% TAD, a loose structure is clearly seen in Figure 5B. Such a porous structure can neither prevent the exchange of fuel and oxygen nor protect the matrix from the flame efficiently. Thus, this morphology, combined with the cone calorimeter data, demonstrates that the flame-retardant effect of TAD in the gas phase was stronger than that in the condensed phase. Figure 5C shows a compact and continuous surface of the char layer from the composite with 6 wt% TAD and 6 wt% EG, implying that TAD and EG interacted in the condensed phase and formed a well-sealed char layer, which protects the matrix from the penetration of heat flux and retards further decomposition of the resin matrix. This may explain the better UL-94 rating of sample TAD6EG6 compared to sample TAD12.
Flame-Retardant Mechanism To further investigate the synergistic flame-retardant effect of TAD and EG, XPS was used to analyze the elemental composition of the residual chars from the cone calorimeter tests, and the results are summarized in Table 4. The relative C content of the residual char from the PBT/PET blend containing both TAD and EG was higher than that of the composite with only TAD, owing to the thermal stability of EG at high temperatures. Besides, the combination of flame retardants left more P in the char residue: about 0.25% P per 1 wt% TAD was retained in the char of TAD6EG6, whereas only about 0.18% P per 1 wt% TAD was retained in the char of TAD12. The remaining N content of TAD6EG6 was 0.14% N per 1 wt% TAD, slightly lower than that of TAD12 (0.16% N per 1 wt% TAD). According to previous research [24,29,37,38], the flame-retardant effect of TAD is due to its decomposition products during combustion: (i) P-containing free radicals, which quench the free radicals from the degraded resin matrix and terminate the chain reaction in the gas phase, and (ii) incombustible N-containing gases, which dilute the flammable gases released from the resin matrix. TAD acts only weakly in the condensed phase because most of the phosphorus is released to the gas phase and only minor charring activity remains [30]. Hence, the mechanism of the synergistic effect of TAD and EG can be summarized as follows. During combustion, EG initially expanded and migrated over the resin matrix and locked more P-containing fragments from decomposed TAD into the condensed phase, forming a compact and continuous char layer with a significantly enhanced barrier effect. Although the amount of P-containing fragments in the gas phase decreased, the release of inert N-containing gas was not markedly impaired. Consequently, the overall flame-retardant performance of the combined TAD and EG in the PBT/PET blend was better than that of neat TAD.
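The comparison above rests on normalizing the XPS phosphorus and nitrogen contents of the char by the TAD loading of each sample. A minimal sketch of that normalization is shown below; the raw atomic percentages are illustrative stand-ins for the Table 4 entries, chosen only so that the normalized values match those quoted in the text.

```python
# Normalizing char-residue P and N contents (XPS) by TAD loading, as in the text.
# The raw atomic percentages are illustrative placeholders for Table 4; only the
# normalized values quoted in the text (e.g. ~0.25% P per 1 wt% TAD for TAD6EG6)
# come from the paper.

samples = {
    # sample: (TAD loading in wt%, P content in char %, N content in char %)
    "TAD12":   (12.0, 2.2, 1.9),   # placeholder raw values
    "TAD6EG6": (6.0,  1.5, 0.85),  # placeholder raw values
}

for name, (tad_wtpct, p_char, n_char) in samples.items():
    p_per_tad = p_char / tad_wtpct
    n_per_tad = n_char / tad_wtpct
    print(f"{name}: {p_per_tad:.2f}% P and {n_per_tad:.2f}% N per 1 wt% TAD")
```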
Mechanical Properties To study the influence of the flame retardants on the mechanical properties of the PBT/PET blend, mechanical testing was performed. The results are summarized in Table 5. Without flame retardants, the PBT/PET blend developed in our previous work shows high impact strength with a good balance between toughness and stiffness. The bending and tensile properties of the PBT/PET blend are slightly enhanced by adding 4 wt% TAD, because TAD acts as a filler at low loading. After incorporation of 12 wt% TAD, a clear decrease in stiffness was observed, possibly owing to poor interfacial interaction between TAD and the polymer matrix or to aggregation of the fire retardant. In addition, adding EG decreases the flexural and tensile performance further compared to TAD at the same loading of 12 wt%. The bending modulus, bending strength, and tensile strength of the PBT/PET blend with the combination of TAD and EG were in between those of the TAD- and EG-based PBT/PET materials. A clear degradation of impact properties was observed after adding the flame retardants. Notably, with 6 wt% EG and 6 wt% TAD in the PBT/PET blend, the notched impact strength is 15.1 kJ/m², only 28.5% of the value of the PBT/PET blend without flame retardants. However, considering its excellent anti-flaming performance, TAD6EG6, with overall acceptable mechanical properties, still has high potential for applications in electronic devices, household products, and automotive parts. Conclusions Novel flame-retardant materials were successfully developed based on the PBT/PET blend and TAD or the TAD/EG combination. The effects of TAD loading and EG addition on the flame-retardant properties, flame behavior, and thermal stability of the resulting materials were explored. TAD contributed to a higher LOI value than EG. However, the PBT/PET blend with the EG/TAD combination exhibited better UL-94 performance than that with only TAD. Cone calorimeter tests combined with TGA confirmed that the TAD/EG combination possessed a clear synergistic effect on both inhibiting the burning intensity and promoting char formation. SEM images and XPS analysis revealed that the synergistic flame-retardant effect arose because the expansion and migration of EG over the resin matrix locked more P-containing fragments from decomposed TAD into the condensed phase, forming a compact and continuous char layer with a significantly enhanced barrier effect, without much loss of the quenching effect of TAD in the gas phase. All these results clearly demonstrate that the incorporation of TAD together with the charring agent EG is an effective and promising method to enhance the anti-flame properties of the PBT/PET blend, although some degradation of mechanical properties relative to the neat PBT/PET blend was observed. The resultant material has high potential for applications in electronic devices, household products, and automotive parts.
Fibroblast growth factor receptor 4 induced resistance to radiation therapy in colorectal cancer In colorectal cancer (CRC), fibroblast growth factor receptor 4 (FGFR4) is upregulated and acts as an oncogene. This study investigated the impact of this receptor on the response to neoadjuvant radiotherapy by analyzing its levels in rectal tumors of patients with different responses to the therapy. Cellular mechanisms of FGFR4-induced radioresistance were analyzed by silencing or over-expressing FGFR4 in CRC cell line models. Our findings showed that the FGFR4 staining score was significantly higher in pre-treatment biopsies of non-responsive than responsive patients. Similarly, high expression of FGFR4 inhibited radiation response in cell line models. Silencing or inhibition of FGFR4 resulted in a reduction of RAD51 levels and decreased survival in radioresistant HT29 cells. Increased RAD51 expression rescued cells in the siFGFR4 group. In radiosensitive SW480 and DLD1 cells, enforced expression of FGFR4 stabilized RAD51 protein levels, resulting in enhanced clearance of γ-H2AX foci and increased cell survival in the mismatch repair (MMR)-proficient SW480 cells. MMR-deficient DLD1 cells are defective in homologous recombination repair and no FGFR4-induced radioresistance was observed. Based on our results, FGFR4 may serve as a predictive marker to select CRC patients with MMR-proficient tumors who may benefit from pre-operative radiotherapy. INTRODUCTION Despite technical and therapeutic improvements in recent years, colorectal cancer (CRC) remains one of the most deadly cancers worldwide, in both men and women. Radiotherapy is an integral part of the management strategies for colorectal cancer, especially as a neoadjuvant treatment for locally advanced stage II and III rectal cancer. However, the efficiency of radiotherapy in the treatment of rectal cancer varies significantly between different patients [1]. The mechanistic basis for this intrinsic resistance may be found in differences in DNA repair and/or survival processes [2]. In response to radiation-induced double strand breaks (DSBs), the histone variant H2AX is rapidly phosphorylated as the first step in recruiting DNA repair proteins [3] - most importantly RAD51, the central catalyst of error-free homologous recombination (HR) repair [4]. RAD51-dependent HR repair significantly contributes to cell survival and induces cellular resistance to ionizing radiation [5,6]. The fibroblast growth factor receptor (FGFR) family is a class of receptor tyrosine kinases (RTKs) that includes four highly conserved receptors (FGFR1-4) [7]. FGFRs are known to play crucial roles in tumor cell proliferation, angiogenesis, migration and survival [8], and are overexpressed or over-activated in many human cancers [9][10][11][12][13]. Increased FGFR expression and/or activity has also been reported to play a role in treatment resistance towards both conventional and EGFR-targeting strategies [14][15][16]. With regard to radiation therapy, inhibition of FGFR1 was found to increase radiation-induced cell killing of mesothelioma cells [17], and targeting FGFR3 enhanced radiation response in squamous cell carcinomas [18]. In rectal cancer patients, Li et al. [19] showed a correlation between high FGFR2 expression and poor therapeutic response to neoadjuvant chemoradiation. By contrast, restoration of FGFR2 enhanced radiosensitivity of prostate cancer cells by increasing apoptosis [20].
FGFR4 was found to be up-regulated in about 25% of all CRC cases and showed oncogenic potential in cell line models of CRC [13]. FGFR4 expression was found to be upregulated in apoptosis-resistant clones after exposure to DNA-damaging agents [21]. Furthermore, FGFR4 silencing resulted in decreased activity of pro-survival signaling, expression of the anti-apoptotic proteins, and showed synergistic interaction with 5-fluorouracil (5-FU) and oxaliplatin in colon cancer cell lines [22]. Here, we investigated for the first time the role of FGFR4 in the resistance of colorectal cancer cells to radiotherapy, and the possible mechanisms of interaction with the DNA damage response machinery (DDR). Our findings indicate that targeting FGFR4 induces radiosensitization that is associated with the attenuation of DSB repair by RAD51-mediated homologous recombination. FGFR4 correlates with poor clinical outcome in neoadjuvant chemoradiation-treated rectal cancer patients For 43 patients who received neoadjuvant therapy, pre-treatment biopsies were available for analysis. The patients were 28% female and 72% male and their median age was 68 years ( Table 1). The majority suffered from locally advanced tumors (40/43 patients; 93%) with affected lymph nodes (30/43 patients; 69.7%). The neoadjuvant treatment caused a reduction of tumor size in 18 patients (41.8%) and a decrease of node involvement in 17 patients (39.5%). Complete remission (stage 0) was observed in 4 (9.3%) cases. Pathological response was determined based on the presence of viable tumor cells in the tissue specimen after surgery [23]. Sections obtained from both the pre-treatment biopsies and the surgical specimens were stained to determine FGFR4 and RAD51 protein levels. Representative examples of negative, weak, moderate or strong staining are shown in Figure 1A and 1B. Positive staining was observed in 39/43 (90.7%) cases for FGFR4 and 29/43 (69.8%) cases for RAD51 ( Figure 1C and 1D). When patients were grouped according to FGFR4 staining intensity, no significant association was observed between FGFR4 expression and gender or age. Analysis with regard to the pre-treatment or post-treatment tumor stage revealed a tentative association with FGFR4 levels that did not achieve statistical significance (Table 2). Downstaging was achieved in 9 patients of the low-FGFR4 group and 7 patients of the high FGFR4 group (p > 0.05). 3 of the 4 patients who showed complete clinical response (post-treatment stage 0) were in the low-FGFR4 group. When local response was assessed by the number of viable tumor cells in the surgical specimens, a significant correlation was found: moderate to high expression of FGFR4 was observed in 78.3% of the weakly or nonresponsive cases, but in only 21.7% of responsive patients ( Table 2; p = 0.03 by χ 2 -test). Also FGFR4 expression was significantly lower in patients showing complete or strong response as compared to weakly or non-responsive patients ( Figure 2A; p = 0.04). No statistically significant difference was observed for RAD51 staining ( Figure 2B). In addition, FGFR4 and RAD51 were analyzed in surgical specimens of non-responsive patients whose tumors were surgically resected after the neoadjuvant treatment. In these tumors a strong co-expression was observed for FGFR4 and RAD51 ( Figure 2C and 2D). 
FGFR4 is upregulated in radioresistant HT29 cells in correlation with homologous recombination-regulating proteins To establish an in vitro model for the analysis of the underlying cellular mechanisms, we evaluated the radiosensitivity of CRC cells using clonogenic survival assays (Supplementary Figure 1A). HT29 cells were significantly less radiosensitive than both SW480 and DLD1 cells, reflected by a higher radiation ED50: 4.42 ± 0.13 Gy for HT29 as compared to 2.6 ± 0.07 Gy for SW480 (p < 0.0001) and 2.52 ± 0.12 Gy for DLD1 (p < 0.0001). We investigated FGFR4 expression in these cell lines and found that the radioresistant HT29 cells showed 42% (p < 0.01) and 85.6% (p < 0.0001) higher expression than SW480 and DLD1 cells, respectively, as measured by qPCR (Supplementary Figure 1B). The efficiency of homologous recombination repair in the 3 cell lines was determined by an HR reporter assay using a GFP-based reporter construct [24]; the efficiency was highest in HT29 cells and lowest in DLD1, significantly and positively correlating with FGFR4 expression (r = 0.9, p < 0.05; Supplementary Figure 2). 24 h after exposure of HT29 cells to γ-rays, FGFR4 mRNA was increased in a dose-dependent manner (Figure 3A) and was 1.6-fold higher than the mock-irradiated control after a 6 Gy dose (p < 0.05). We also assessed the expression levels of the HR-related proteins RAD51, BRCA1 and BRCA2 in response to radiation in HT29 cells (Figure 3B-3D). Similar to FGFR4, mRNA levels of these genes were dose-dependently upregulated by radiation, reaching increases of 1.92-fold (p < 0.01), 2.24-fold (p < 0.05) and 2.86-fold (p < 0.01) compared to non-irradiated cells for RAD51, BRCA1 and BRCA2, respectively. The cell cycle profile of irradiated HT29 cultures showed a 2.1-fold (p < 0.001) increase of the G2/M fraction 24 h after a single 6 Gy dose, as compared to mock-irradiated cells (Figure 3E). The G2/M arrest was further confirmed by detection of cdc2 carrying a deactivating phosphorylation at Tyr15 (Figure 3F) at 6, 12 and 24 h after IR. In addition, cyclin B levels were increased, while the phosphorylation of histone H3 at Ser-10, a crucial event for the onset of mitosis, was found to drop early after irradiation until complete inhibition at 24 h post irradiation. HT29 cells are radiosensitized by RAD51 depletion To assess the role of RAD51 in the radioresistance of HT29 cells, we performed immunofluorescence staining to observe the localization of RAD51 before and after irradiation with 6 Gy (Figure 4A). In the control cells, RAD51 appeared to be abundant and was localized not only in the nucleus but also perinuclearly. 24 h after exposure to 6 Gy of γ-rays, damage foci were visible when stained for γ-H2AX, and RAD51 was recruited to these repair foci. We also investigated the regulation of RAD51 at the protein level by western blotting (Figure 4B) and observed a transient increase of RAD51 after 24 h, followed by a steady return to control levels at 48 and 72 hours. At these later time points, unresolved damage became apparent through an increase of γ-H2AX in the cells (Figure 4B). Knockdown of RAD51 was achieved using siRNA oligonucleotides that efficiently depleted RAD51 expression (Figure 4C). This resulted in higher persistence of γ-H2AX (Figure 4B) and in a significant decrease of survival (Figure 4D, p < 0.0001).
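The radiation ED50 values quoted in these experiments summarize fitted clonogenic dose-response curves. The sketch below illustrates one common way of obtaining such a value, fitting a linear-quadratic survival model and solving for the dose at which the fitted surviving fraction falls to 0.5; the dose points and surviving fractions are illustrative placeholders, not the study's raw data, and the paper does not state which model it used.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Sketch: fitting a linear-quadratic model SF(D) = exp(-(alpha*D + beta*D^2)) to
# clonogenic surviving fractions and solving for ED50 (dose giving SF = 0.5).
# Doses and surviving fractions below are illustrative placeholders, not the
# study's raw data; the fitted model is an assumption.

doses = np.array([0.0, 2.0, 4.0, 6.0])   # Gy
sf = np.array([1.0, 0.75, 0.55, 0.30])   # surviving fraction (placeholder)

def lq_model(d, alpha, beta):
    return np.exp(-(alpha * d + beta * d ** 2))

(alpha, beta), _ = curve_fit(lq_model, doses, sf, p0=(0.1, 0.01))

# ED50: dose at which the fitted surviving fraction drops to 0.5
ed50 = brentq(lambda d: lq_model(d, alpha, beta) - 0.5, 0.01, 20.0)
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, ED50 = {ed50:.2f} Gy")
```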
FGFR4 silencing radiosensitized HT29 cells via attenuation of DSB repair by HR To investigate the role of FGFR4 in the radioresistance of HT29 cells, two different strategies were followed. First, we used siRNA-induced FGFR4 silencing (Figure 5A), which caused a significant decrease of colony-forming survival (p < 0.01) after irradiation (Figure 5B). This is represented by a shift in the dose-response curve and a lower radiation ED50 (3.83 ± 0.18 Gy) as compared to scrambled controls (4.6 ± 0.09 Gy). Secondly, we used the FGFR inhibitor PD173074 to block FGFR4-dependent signaling. PD173074 (2 µM) was applied 3 h before irradiation, the treatment was continued after irradiation, and this resulted in a significant reduction of the surviving fraction (p < 0.01) as well as a 15.3% decrease of the radiation ED50 (3.81 ± 0.16 Gy vs. 4.51 ± 0.09 Gy for control) (Figure 5C). Phosphorylation of FGFR4 was effectively prevented by the drug (Figure 5D). With regard to RAD51, both FGFR4 depletion and signaling blockade resulted in an accelerated decrease of RAD51 protein levels as determined by western blotting (Figure 6A and 6B). This indicates that the effect of FGFR4 on radiation response is mediated through the regulation of this repair protein. Overexpression of RAD51 controlled by a CMV promoter increased RAD51 levels in HT29 cells (Figure 6C) and also abolished the decrease of cell survival induced by FGFR4 knockdown (p < 0.05; Figure 6D). Furthermore, irradiation of FGFR4-silenced HT29 cells resulted in significantly higher γ-H2AX-foci accumulation (p < 0.05; Figure 7A and 7B). Increased FGFR4 expression increased survival of SW480 cells but not of the mismatch repair-deficient DLD1 cells after irradiation To answer the question whether FGFR4 overexpression conveys radioresistance to sensitive cells, FGFR4-overexpressing SW480 and DLD1 cells were obtained. DLD1 cells were used as a model of mismatch repair (MMR)-deficient cells (microsatellite-instable, MSI), while SW480 is a microsatellite-stable (MSS) cell line. Increased FGFR4 expression significantly improved survival of SW480 cells (Figure 8A, p < 0.05) and resulted in a 41% increase of the radiation ED50 (3.01 ± 0.06 Gy vs. 2.13 ± 0.27 Gy for pcDNA3). On the other hand, FGFR4-overexpressing DLD1 cells did not show an increased surviving fraction (Figure 8B). RAD51 protein levels were stabilized by FGFR4 overexpression in both cell lines (Figure 8C and 8D). The functional activity of DSB repair appeared fundamentally different, however. In SW480 cells, FGFR4 induced clearance of DNA breaks after irradiation, resulting in a significant decrease of persisting nuclear γ-H2AX foci (Figure 8E, p < 0.01). In DLD1 cells the persisting radiation-induced γ-H2AX foci were not reduced (Figure 8F). This was further confirmed by the significant enhancement of HR-repair capacity upon increased FGFR4 expression, which was exclusively observed in SW480 cells (Figure 8G, p = 0.0002) but not in DLD1 cells (Figure 8H, p = 0.6). DISCUSSION Overexpression of FGFR4 was observed in several cancers and has been reported to be associated with aggressive tumors and poor prognosis in breast cancer [25], squamous cell carcinoma [26], ovarian cancer [11], non-small cell lung cancer [27], gastric cancer [28], as well as colorectal cancer [13,22]. It has also been reported to be associated with therapy response [21,22].
Ionizing radiation is known to induce cell killing through the induction of DNA damage, with double strand breaks (DSBs) being the most fatal. To cope with that, cells have evolved several repair mechanisms, the most important being the error-prone non-homologous end joining (NHEJ) and the error-free homologous recombination (HR). Cancer cells were found to become resistant to radiation by increasing the activity of DNA repair proteins involved in the HR repair machinery [6,29]. Our work now reports that FGFR4 enhanced the resistance of human colorectal cancer cells to radiation therapy by upregulating RAD51 and consequently increasing HR capacity. The results demonstrate that high FGFR4 expression in the tumor correlated with poor response to radiotherapy in 43 patients who underwent neoadjuvant treatment for rectal cancer. Specifically, 3 of the 4 patients who achieved complete clinical response showed only low FGFR4 levels, and 78.3% of the specimens with high FGFR4-positive staining were obtained from patients that did not favourably respond to radiotherapy (Table 2), suggesting a predictive value of FGFR4 levels in pre-treatment biopsy specimens. Moreover, the FGFR4 score was shown to be significantly higher in partially and non-responsive patients as compared to those who strongly responded to the neoadjuvant chemoradiation regimen (Figure 2A). For RAD51, we observed a trend towards higher protein levels in biopsies from non-responders, which was, however, not statistically significant. This may be due to the small cohort we analyzed, and the difference may become significant in a higher-powered study. A published report by Tennstedt et al. [30] using a cohort of 1213 CRC patients actually did identify RAD51 as a marker for poor prognosis. However, the endpoint studied was overall survival, while we only assessed the immediate response to neoadjuvant treatment. The fact that our cohort consisted of only rectal cancer patients probably is not critical, as Tennstedt et al. did not see differences between the complete cohort and a rectum-only subcohort [30]. Interestingly, we also observed strong co-staining of FGFR4 and RAD51 in surgical specimens of patients who had not responded to neoadjuvant radiotherapy (Figure 2C and 2D), indicating that specifically those tumor cells that expressed high FGFR4 and upregulated RAD51 had survived the radiation treatment. (Figure 2 legend: FGFR4 (A) and RAD51 (B) staining intensity in pre-treatment biopsies, scored for responders and non-responders according to the immunoreactive scoring (IRS) described in the materials and methods; individual values are shown together with the mean intensity score ± SEM, *p < 0.05, t-test. Representative staining of FGFR4 (C) and RAD51 (D) in a resected rectal tumor of a patient who did not respond to the neoadjuvant chemoradiotherapy regimen; scale bar = 100 μm.) Hence, FGFR4 overexpression may predict neoadjuvant radiotherapy response, serving as an indicator to select CRC patients who could potentially benefit from neoadjuvant radiotherapy. On the cellular level, we have demonstrated that repair of radiation damage was dependent on RAD51-mediated homologous recombination in the radioresistant HT29 cells. After irradiation, HT29 cells underwent a transient G2/M arrest and transcriptionally upregulated the HR-associated genes RAD51, BRCA1 and BRCA2. RAD51 protein was increased as compared to control cells and was recruited to γ-H2AX-positive damage foci in the nuclei of irradiated cells (Figure 4).
This process is known to be restricted to the G2 phase of the cell cycle, where G2 arrest allows time to repair the damage [31]. In our study, the IR-induced G2 arrest was shown by FACS analysis (Figure 3E), and further demonstrated by an increase in the deactivating Tyr15 phosphorylation of cdc2 (CDK1), increased levels of cyclin B, and decreased phosphorylation of histone H3 (Figure 3F). In addition to halting the cell cycle, this may result in diminished CDK-mediated phosphorylation of BRCA2 - a modification that inhibits HR by impairing the interaction of BRCA2 with RAD51 [32]. In spite of the optimal conditions for HR that were observed in the radiation-resistant HT29 cells, DSB repair is incomplete, so that residual damage accumulated 2-3 days after a 6 Gy dose of γ-irradiation (Figure 4B). After siRNA-mediated RAD51 silencing, the accumulation of residual γ-H2AX increased over time, accompanied by a significant reduction in colony formation capacity after irradiation (Figure 4D, p < 0.0001). This confirmed that RAD51 is a crucial promoter of survival in the radioresistant CRC cells. Previous studies have reported the involvement of tyrosine kinase receptors such as the epidermal growth factor receptor (EGFR), insulin-like growth factor type 1 receptor (IGF-1R), and hepatocyte growth factor receptor (c-Met) in radiation-induced DNA damage repair by homologous recombination through the regulation of RAD51 [33][34][35][36]. We now introduce FGFR4 as a new candidate receptor capable of mediating radioresistance of CRC cells. FGFR4 expression correlated with HR-repair capacity in our CRC cell models (Supplementary Figure 2). However, overexpression of FGFR4 did not affect the baseline expression of RAD51 in either SW480 or DLD1 cells (unpublished observation). Rather, the radiation-induced expression of the protein was enhanced (Figure 8). FGFR4 was also found to be upregulated in a dose-dependent manner after irradiation, in correlation with the HR-regulating proteins RAD51, BRCA1 and BRCA2 (Figure 3). Silencing of FGFR4 by siRNA-mediated knockdown or inhibition of the FGFR4 kinase significantly lowered RAD51 protein levels and radiosensitized HT29 cells (Figure 5), demonstrating FGFR4-mediated regulation of RAD51 in these cells. Increased RAD51 expression successfully rescued FGFR4-silenced HT29 cells (Figure 6), confirming that RAD51 regulation mediated the FGFR4-induced radioresistance. However, overexpression of FGFR4 only increased cell survival in the MMR-competent cell line SW480, but not in the MMR-deficient cell line DLD1 (Figure 8). This is in agreement with several studies indicating the involvement of the mismatch repair system in radiation-induced DSB repair. In MMR-deficient CRC cell lines, high sensitivity to γ-irradiation as a result of impaired NHEJ as well as defective HR repair has been reported by others [37,38]. It has been demonstrated that the recruitment of RAD51 to the damage sites is delayed in MSH2-deficient cells [39], like DLD1. Also, loss of MSH2 may influence the NHEJ pathway at the step of pairing of terminal DNA tails, as reported [40]. Moreover, expression of MLH1 was found to be induced by irradiation, and its loss resulted in increased cell cycle progression plus increased radiation-induced chromosomal translocations [41]. Finally, a significant negative correlation has been observed between RAD51 expression and the loss of the MMR proteins, MSH and MLH [30].
Our results demonstrate that similar levels of RAD51 protein caused a significant increase of γ-H2AX-foci clearance capability in FGFR4-overexpressing SW480 cells but not in FGFR4-overexpressing DLD1 cells (Figure 8C and 8F). As persistence of γ-H2AX foci marks delayed repair and correlates with radiosensitivity [42][43][44], the lack of γ-H2AX-foci clearance in DLD1 cultures demonstrated the functional inefficiency of RAD51-dependent HR repair in the MMR-deficient cells. This was further proven by using a fluorescence-based homologous recombination repair construct, which showed a significant increase of the repair capacity of FGFR4-SW480 cells, but not of FGFR4-DLD1 cells (Figure 8G and 8H). In view of our results as well as the mechanistic data discussed above, the upregulation of RAD51 after irradiation in tumors lacking mismatch repair proteins reported by Tennstedt et al. [30] may be the result of a compensatory reaction to the impairment of HR. In summary, our data suggest that overexpression of FGFR4 induced radioresistance by promoting resolution of radiation-induced strand breaks and tumor cell survival exclusively in the mismatch repair-proficient CRC cells, but not in the mismatch repair-deficient ones. Thus, we define a new role for FGFR4 as a regulator of radiation-induced DSB repair in colorectal cancer, making it a candidate predictive marker that identifies those patients who may best profit from neoadjuvant chemoradiation. It may also be a candidate target for innovative combination therapies to increase radiation response. Cell lines The human colorectal cancer cell lines SW480 and HT29 were obtained from the American Type Culture Collection. DLD1 was obtained from the European Culture Collections. The cell lines were cultured in minimal essential medium containing 10% FCS (Sigma-Aldrich, St. Louis, USA) under standard tissue culture conditions (5% CO2 at 37 °C). All the cell lines were authenticated by Eurofins (Vienna, Austria). Ionizing radiation and in vitro radiosensitivity assay Cells were irradiated with different doses of γ-radiation (2, 4 and 6 Gy) using a Co-60 radiotherapy unit (Theratron 760, Theratronics, Ottawa, Canada). The surviving fraction of cells was determined by the clonogenic assay and calculated relative to the non-irradiated mock control [45]. Homologous recombination repair assay The analysis of homologous recombination-mediated DSB repair was performed using a chromosomally integrated fluorescent reporter construct, kindly provided by Dr. Andrei Seluanov, as previously described [24]. The assay is based on the restoration of a functional GFP gene after repair of an I-SceI-induced DSB within the GFP-Pem1 gene, which is designed to be repaired exclusively by HR. The GFP signal measured by FACS correlates with the HR repair capability of the cells. DsRed was used as an indicator of the transfection efficiency. RNA isolation and quantitative real-time PCR assay Total cellular RNA was isolated using Trifast reagent (PeqLab, Germany) according to the manufacturer's instructions, and the mRNA was reverse transcribed into cDNA. Reverse transcription products were amplified using a TaqMan-based assay on the ABI 7500 Fast real-time PCR system (Applied Biosystems, Foster City, California, USA), as previously described [46].
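Relative expression values such as the fold changes quoted in the results are commonly derived from qPCR Ct values with the 2^(-ΔΔCt) method. The sketch below illustrates this calculation; the Ct values and the use of a reference gene are illustrative assumptions, since the paper does not state its normalization scheme here.

```python
# Sketch of relative qPCR quantification by the 2^(-ddCt) method, a common way to
# obtain fold changes such as those quoted for FGFR4 and the HR genes.
# Ct values and the reference gene are illustrative placeholders; the exact
# normalization used in the study is not specified here, so this is an assumption.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a target gene in irradiated vs. mock-irradiated cells, normalized to a reference gene
fc = fold_change(ct_target_treated=24.3, ct_ref_treated=18.0,
                 ct_target_control=25.0, ct_ref_control=18.0)
print(f"fold change vs. control: {fc:.2f}")   # ~1.6-fold with these placeholder Cts
```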
Establishment of a stable FGFR4-expressing cell line Stable overexpression of FGFR4 was achieved by transfection of SW480 and DLD1 cells with a plasmid expressing wild-type FGFR4 using TransFectin reagent (Bio-Rad, USA), followed by selection of over-expressors with geneticin (G418, PAA, Pasching, Austria), as described previously [13]. Control cells received pcDNA3 vector DNA (Invitrogen). Overexpression of RAD51 RAD51 expression was increased in HT29 cells plated in 6-well plates using a pCMV6-XL4 vector expressing RAD51 (ID SC309019, Origene, USA), introduced into the cells with SilentFect (Bio-Rad). Protein isolation and western blotting Protein was extracted using HEPES lysis buffer supplemented with a protease inhibitor cocktail (Complete, Roche, Germany) and phosphatase inhibitors. The protein concentration was determined using the Bradford assay (Bio-Rad, Germany). Proteins were analyzed by western blotting. The antibodies used are listed in Supplementary Table 1. Detection was performed using ECL Western Blot Detection Reagents (GE Healthcare). Flow cytometry For cell cycle analysis, cells were harvested at the indicated time points after irradiation. Nuclei were isolated, stained with propidium iodide and analyzed using a FACS-Calibur (BD, Franklin Lakes, NJ, USA), as described previously [47]. Immunofluorescence Cells were seeded onto coverslips and fixed using Histofix 4% (Sigma). Fixed cells were permeabilized using 0.2% Triton X100 in PBS and incubated with a p-H2AX (Ser139) rabbit monoclonal antibody (Cell Signaling) and/or a RAD51 mouse polyclonal antibody (Abnova) (see Supplementary Table 1). After secondary labeling with Alexa 488-conjugated goat anti-rabbit and/or TRITC-conjugated goat anti-mouse antibodies, slides were washed 3 times in PBS. Coverslips were mounted using DAPI-containing Vectashield®, sealed in polyurethane and stored at 4 °C in the dark. Confocal fluorescence images were obtained using a Zeiss LSM 700 confocal microscope (Carl Zeiss, Germany) with a 63× objective. Patients and clinical samples Biopsy specimens were collected retrospectively from 43 patients with rectal cancer who received neoadjuvant chemoradiation treatment at the General Hospital of Vienna during the years 2012-2014. The patients gave their informed consent, and biopsies were taken during colonoscopic examination before preoperative radiotherapy. Tumor specimens were also collected at surgery. The study protocol was approved by the ethics committee of the Medical University of Vienna. All patients received a neoadjuvant regimen of Xeloda® (capecitabine) plus a total radiation dose of 50 Gy. The response to radiotherapy was determined by histopathological examination of surgically resected specimens and classified according to the amount of viable tumor cells in the resected tissue, as described by Dworak et al. [23]. Specifically: 0 - no regression; 1 - dominant tumor mass with few signs of fibrosis; 2 - dominantly fibrotic material with few tumor cells or groups; 3 - very few tumor cells in fibrotic tissue; 4 - complete response, no tumor cells, only fibrotic mass. Immunohistochemistry FGFR4 staining was carried out according to a standard immunohistochemistry (IHC) protocol [48] using the polyclonal rabbit anti-FGFR4 antibody C-16 (Santa Cruz, CA) or a mouse polyclonal anti-RAD51 antibody (Abnova) (see Supplementary Table 1). The stained slides were scanned with a Panoramic Midi automated slide scanner (3DHISTECH, Hungary).
Quantification of positive cells and staining intensity in the FGFR4- and RAD51-stained biopsies and tumor tissue samples was done using Definiens' TissueMap® software. Statistical analysis Unless otherwise stated, results are presented as mean values ± SEM for three replicate experiments. Data were analyzed by Student's t-test using GraphPad Prism software (GraphPad, San Diego, CA, USA). Alternatively, one-way ANOVA or Pearson's chi-square test was used to analyze the association between FGFR4 expression and clinicopathologic parameters. A p-value of < 0.05 was regarded as significant (*p < 0.05, **p < 0.01, ***p < 0.001).
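A minimal sketch of the kind of association test described above (Pearson's chi-square of FGFR4 staining category against treatment response) is shown below. The contingency counts are hypothetical placeholders, since the per-cell counts behind the percentages reported in the results are not given here.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sketch of a Pearson chi-square test for FGFR4 staining (low vs. moderate/high)
# against treatment response (responder vs. non-responder). The counts below are
# hypothetical placeholders, not the study's actual contingency table.

table = np.array([
    # responders, non-responders
    [12, 5],    # low FGFR4 (placeholder counts)
    [ 6, 20],   # moderate/high FGFR4 (placeholder counts)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```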
The effect of an exercise program in pregnancy on vitamin D status among healthy, pregnant Norwegian women: a randomized controlled trial Background Vitamin D insufficiency is common in pregnant women worldwide. Regular prenatal exercise is considered beneficial for maternal and fetal health. There is a knowledge gap regarding the impact of prenatal exercise on maternal vitamin D levels. The objective of this study was to investigate whether a prenatal exercise program influenced serum levels of total, free and bioavailable 25-hydroxyvitamin D (25(OH)D) and related parameters. This is a post hoc analysis of a randomized controlled trial with gestational diabetes as the primary outcome. Methods Healthy, pregnant women from two Norwegian cities (Trondheim and Stavanger) were randomly assigned to a 12-week moderate-intensity exercise program (Borg perceived rating scale 13–14) or standard prenatal care. The intervention group (n = 429) underwent exercise at least three times weekly; one supervised group training and two home based sessions. The controls (n = 426) received standard prenatal care, and exercising was not denied. Training diaries and group training was used to promote compliance and evaluate adherence. Serum levels of 25(OH)D, parathyroid hormone, calcium, phosphate, magnesium and vitamin D-binding protein were measured before (18–22 weeks′ gestation) and after the intervention (32–36 weeks′ gestation). Free and bioavailable 25(OH)D concentrations were calculated. Regression analysis of covariance (ANCOVA) was applied to assess the effect of the training regime on each substance with pre-intervention levels as covariates. In a second model, we also adjusted for study site and sampling month. Intention-to-treat principle was used. Results A total of 724 women completed the study. No between-group difference in serum 25(OH)D and related parameters was identified by ANCOVA using baseline serum levels as covariates. The second model revealed a between-group difference in levels of 25(OH)D (1.9, 95% CI 0.0 to 3.8 nmol/L; p = 0.048), free 25(OH)D (0.55, 95% CI 0.10 to 0.99 pmol/L; p = 0.017) and bioavailable 25(OH)D (0.15 95% CI 0.01 to 0.29 nmol/L; p = 0.036). No serious adverse events related to regular exercise were seen. Conclusion This study, a post hoc analysis, indicates that exercise may affect vitamin D status positively, and emphasizes that women with uncomplicated pregnancies should be encouraged to perform regular exercise. Trial registration ClinicalTrials.gov: NCT00476567, registered May 22, 2007. Electronic supplementary material The online version of this article (10.1186/s12884-019-2220-z) contains supplementary material, which is available to authorized users. Vitamin D affects muscle directly by binding of 1,25(OH) 2 D to the vitamin D receptor (VDR) and indirectly through the calcium and phosphate balance [8]. Physical activity is reported to increase 25(OH)D levels, however, this has been proposed to be attributed to solar ultraviolet B (UV-B) radiation [9][10][11]. Yet, a positive association has also been observed between indoor physical activity and 25(OH)D levels [10,11]. Data from intervention studies on the effects of long-term exercise on vitamin D status are scarce [9,12]. The American College of Obstetrics and Gynecologists (ACOG) recommends women with uncomplicated pregnancies to exercise on moderate intensity for at least 20-30 min most days of the week [13]. The impact of prenatal exercise on vitamin D has, however, been little explored [12,14,15]. 
Therefore, based on a randomized controlled trial (RCT) of 855 pregnant women designed to investigate health effects of exercise, we performed a post hoc analysis to explore a potential relation between regular exercise in pregnancy and the vitamin D endocrine system. Study design and participants The authors of this study conducted a two-armed, two-center RCT in which the health effects of a 12-week exercise program during pregnancy were compared with standard prenatal care [16]. Gestational diabetes was the primary outcome [16]. Information was collected that enabled a post hoc analysis to assess the effects of regular exercise on vitamin D levels and related parameters. Between April 2007 and June 2009 in Trondheim, and between October 2007 and January 2009 in Stavanger, pregnant women attending the routine 18-week ultrasound examination were enrolled [16]. Eligible women were healthy Caucasians, aged 18 years or older, with a singleton live fetus. In accordance with ACOG, exclusion criteria were pregnancy complications, high risk for preterm delivery, or diseases that could hinder participation [13,16]. Women living far from the hospital were excluded (Additional file 1, study protocol). Clinical data and blood samples were collected before and after the intervention (gestational weeks 18-22 and 32-36, respectively). The study was approved by the Regional Committee for Medical and Health Research Ethics (REK 4.2007.81) and performed in accordance with the Declaration of Helsinki. The trial is registered at ClinicalTrials.gov (NCT 00476567). Randomization and masking The women received information about the study and gave informed written consent [16]. Concealed randomization in blocks of 30 was performed using a digital computer technique. The personnel involved in the exercise program and outcome assessment had no influence on the allocation [16]. Masking of participants and study investigators to group allocation was not possible. Intervention procedures The intervention group was provided a 12-week standardized exercise program, including both aerobic and strength training (20-36 weeks' gestation), in line with ACOG and the Norwegian National Report on Physical Activity and Health [13,16]. Group exercise sessions of 60 min, led by a physiotherapist, were offered once a week. Additionally, the women were encouraged to exercise at home at least twice weekly [16]. The controls received standard prenatal care and customary information from a midwife or general practitioner, and exercising was not denied. Both groups received written recommendations on diet, pelvic floor muscle exercises and pregnancy-related lumbopelvic pain [16]. In both groups, questionnaires were used before and after the intervention to assess physical activity. Exercise intensity was measured by the Borg rating of perceived exertion (RPE) scale (score range 6-20), with moderate intensity corresponding to a score of 13-14, in accordance with ACOG [13,16]. Training diaries and group training were used to promote compliance and evaluate adherence, which was defined as exercising 3 days weekly or more at moderate intensity. A self-administered, optically mark-readable Food Frequency Questionnaire containing around 180 food items was used before and after the intervention to obtain information about vitamin D and calcium intake [17]. Serum analyses and calculation of free and bioavailable 25(OH)D Fasting blood samples were drawn before and after the intervention [18]. 
The following analyses were performed at Trondheim University Hospital: 25(OH)D and parathyroid hormone (PTH) by electrochemiluminescence immunoassay (ECLIA), calcium by a colorimetric method, and phosphate, magnesium, albumin and creatinine by photometric methods. All assays were delivered by Roche Diagnostics Ltd., Switzerland. Total calcium was corrected for albumin concentration. Vitamin D-binding protein (DBP) was analyzed at the Hormone Laboratory, Oslo University Hospital by an in-house competitive radioimmunoassay with GC-globulin (Sigma-Aldrich Corp, St. Louis, MO, USA) and polyclonal anti-GC-globulin antibodies (DakoCytomation, Glostrup, Denmark). Reference range, limit of detection and coefficient of analytical variation (CV) for the different analyses are presented in Additional file 2. An equation developed by Bikle et al. was applied for determination of free 25(OH)D [5]. Bioavailable 25(OH)D was calculated as the sum of albumin-bound and free 25(OH)D (Additional file 3) [6]. Outcomes The main outcome was the effects of exercise in pregnancy on total, free and bioavailable 25(OH)D. Secondary outcomes were effects on PTH, total and corrected calcium, magnesium, phosphate, and DBP. Statistical analyses The analysis was performed according to the intention-to-treat (ITT) principle, and the approach to handling missing data was complete case analysis. SPSS statistics Version 24.0 (Armonk, NY: IBM Corp) and Stata version 13 (StataCorp LP, College Station, TX, USA) were applied. The power calculation and sample size estimation were done for the primary outcome, gestational diabetes [16]. Few experimental studies have investigated the effects of exercise on young women, and so far, no RCTs have addressed the vitamin D response to an exercise program in pregnancy [15,19]. The power calculation in the present study was based on a study exploring the effect of short-time exercise on 25-hydroxyvitamin D in young women [19]. In the present study, a sample size of 772 (386 in the intervention group and 386 in the control group) conferred 80% power with two-sided p = 0.05, to detect a between-group difference of 5 nmol/L in 25(OH)D levels. Regression analysis of covariance (ANCOVA) was used to assess the effect of the training regime on each substance, with pre-intervention levels as covariates. In a second model, we also adjusted for study site and sampling month. We performed a sensitivity analysis using a mixed-effects model with random slope for 25(OH)D. The estimates were similar, and therefore only estimates from ANCOVA are presented. Participants A total of 875 pregnant women were assessed for eligibility [16]. Twenty women were excluded, and 855 were randomized into either the intervention or control groups ( Fig. 1). A total of 724 women (85%) completed the study. Loss to follow-up was 15%: 46 of 429 (11%) in the intervention group, and 86 of 426 (20%) in the control group. No serious adverse events related to regular exercise were seen, and nobody withdrew due to adverse events. Participant characteristics are presented in Table 1, and baseline serum levels of vitamin D and related parameters in Table 2. Pre-pregnancy body mass index (BMI) was 23.0 in the intervention group and 23.3 among the controls. According to classification by the World Health Organization, both groups had normal BMI [20]. Any minor differences in baseline characteristics between the groups were within the expected limits for random allocation. 
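To make the analysis of covariance described above concrete, here is a minimal sketch in Python of the two models (post-intervention 25(OH)D regressed on group with the baseline level as covariate, then additionally adjusted for study site and sampling month). The data frame and column names are hypothetical placeholders, not the trial's actual analysis code or data.

```python
# Minimal sketch of the two ANCOVA models described above (hypothetical column
# names and file; not the trial's actual analysis code).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns, one row per woman:
#   vitd_pre, vitd_post : serum 25(OH)D before/after the intervention (nmol/L)
#   group               : "exercise" or "control"
#   site                : "Trondheim" or "Stavanger"
#   month               : calendar month of the post-intervention blood sample
df = pd.read_csv("vitd_trial.csv")

# Model 1: post-intervention level adjusted for the baseline level only.
m1 = smf.ols("vitd_post ~ vitd_pre + C(group)", data=df).fit()

# Model 2: additionally adjusted for study site and sampling month.
m2 = smf.ols("vitd_post ~ vitd_pre + C(group) + C(site) + C(month)", data=df).fit()

# The coefficient on the group term is the adjusted between-group difference,
# analogous to the reported 1.9 nmol/L (95% CI 0.0 to 3.8) for total 25(OH)D.
print(m1.params, m1.conf_int(), sep="\n")
print(m2.params, m2.conf_int(), sep="\n")
```

The same structure applies to each secondary outcome (PTH, calcium, phosphate, magnesium, DBP), fitted one substance at a time with its own baseline value as covariate.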
In the intervention group, 214 (50%) participants adhered to the exercise program. After adjusting for baseline concentrations of each substance, the ITT analysis showed no significant effect of the exercise program on levels of total, free and bioavailable 25(OH)D and related substances. In a second model we additionally adjusted for study site and sampling month, and revealed a significant between-group difference in serum levels of 25(OH)D (1.9, 95% confidence interval (CI) 0.0 to 3.8 nmol/L; p = 0.048), free 25(OH)D (0.55, 95% CI 0.10 to 0.99 pmol/L; p = 0.017) and bioavailable 25(OH)D (0.15, 95% CI 0.01 to 0.29 nmol/L; p = 0.036). PTH, corrected calcium, phosphate, magnesium, DBP and albumin did not differ between groups. Both statistical models showed similar effect estimates, but the 95% CI was narrower in adjusted model. The results are presented in Table 3 and Fig. 2. Discussion To the best of our knowledge, this is the first study to assess the effects of an exercise program during pregnancy on circulating vitamin D levels. After adjustment for relevant covariates, we observed higher levels of total, free and bioavailable 25(OH)D in the exercise group. The study design, the large sample size and the statistical modeling reduced the risk of bias. Regular exercise during pregnancy is recommended due to health benefits. This is supported by our data which suggest that exercise in pregnancy may affect vitamin D status positively. The effects of prenatal exercise on vitamin D and related parameters We observed a between-group difference in 25(OH)D levels of 2 nmol/L after long-term exercise during pregnancy. Several studies have explored the impact of exercise on vitamin D in the non-pregnant state [9,12,19,21]. Levels of 25(OH)D increased by 21.5 nmol/L (8.6 ng/mL) in elderly individuals executing a 8-week aerobic program in combination with antioxidant supplementation [22]. The influence of seasonal variation was, however, uncertain, due to lack of controls. In contrast, no effects on 25(OH)D levels were seen among non-pregnant Finnish women after a 12 months impact exercise program [21]. A high proportion of lost to follow-up and low compliance may have affected the results [21]. Data on the effects of short-term exercise on vitamin D are diverging [12,19]. A rise in 25(OH)D was reported after a bicycling endurance exercise session among young, healthy Japanese men and women [19]. After 24 h, mean 25(OH)D level among women was 5.3 nmol/L higher than at baseline [19]. In the current study, the between-group difference in 25(OH)D was modest compared with the Japanese study. The discrepancy may be due to differences in exercise type and length, mean basal levels of 25(OH)D, analytical methods and the pregnant state [9,19,21]. Moreover, ITT analysis was performed in the present RCT, and it is reasonable the large number of noncompliant women (50% in the intervention group) contributed to diluted results. A novelty of the present study lies in the assessment of free and bioavailable 25(OH)D. It has been proposed that these are important biomarkers for vitamin D status in the pregnant state since the DBP concentration alters [5,6,23]. A proportionally larger between-group difference was observed in free 25(OH)D compared to total concentration. In accordance with observations in the non-pregnant state, DBP levels were unaffected by exercise [19]. We observed no effects on PTH, calcium, phosphate, magnesium and albumin levels. 
Previous reports concerning the impact of exercise on PTH and calcium are conflicting, and research on pregnant women is lacking [12,19,21,22]. Mechanisms for exercise-induced changes in the vitamin D endocrine system There may be several mechanisms for the exercise-induced rise reported both in 25(OH)D and 1,25(OH)2D levels; however, they are not fully understood, and it is unknown if they differ between the pregnant and non-pregnant state [12,19]. Firstly, 25(OH)D may be mobilized from skeletal muscle during exercise. Muscle cells contain a reservoir of 25(OH)D, and can accrue and return the vitamin to the extracellular space [24]. Regular exercise may increase muscle mass, thus providing a larger pool of 25(OH)D, which can be mobilized. This may be beneficial during pregnancy as the substantial rise in 1,25(OH)2D concentration is dependent on sufficient 25(OH)D [4]. A rise in circulating 1,25(OH)2D is suggested to block the muscle cells' ability to store 25(OH)D, thereby facilitating release [24]. Accordingly, more circulating 25(OH)D is available for 1,25(OH)2D synthesis [24]. The effects of vitamin D are mediated through 1,25(OH)2D, exerting genomic and nongenomic actions via VDR in muscle cells [8,25,26]. Increased VDR expression was found in rodents after a bout of resistance exercise, but not after endurance exercise [25]. Additionally, intramuscular expression of cytochrome P450 27B1, the enzyme converting 25(OH)D to 1,25(OH)2D, was higher in rats performing resistance exercise compared with controls [1,25]. Sixteen weeks of vitamin D supplementation has also been shown to enhance VDR gene expression in skeletal muscle in older women [26]. We did not obtain muscle biopsies from our participants, and therefore expression of VDR and cytochrome P450 27B1 could not be assessed. Adipose tissue is another potential source for the exercise-induced rise in vitamin D [27]. It is claimed that the vitamin is stored and sequestered in adipose tissue, leading to less availability [27]. This suggests that obese people exhibit a more modest response in 25(OH)D due to exercise; however, this needs to be confirmed. A weight-loss program in overweight and obese women resulted in higher serum 25(OH)D, indicating that a reduction in fat mass increases its availability [27]. Our participants had a normal pre-pregnancy BMI, which may imply a more pronounced vitamin D response compared to obese women. Finally, synthesis and release of 25(OH)D from the liver could be increased due to exercise. However, data on this topic are lacking. A rat study, addressing the effects of long-term exercise, showed that degradation of 25(OH)D may be reduced [28]. Higher 24,25(OH)2D levels were observed among immobilized rats compared to the exercise group and controls, implying that physical activity prevents catabolism of 25(OH)D [28]. It is unknown if this translates to humans, thereby contributing to higher vitamin D status by exercising. Measurement of 24,25(OH)2D levels is warranted in future studies. (Notes to Tables 1 and 2: continuous variables are given as means ± standard deviations (SD) and categorical variables as numbers (n) with percentages (%); one or two values are missing per group for some variables.) 
Some studies have shown an increment in circulating 1,25(OH)2D after exercise [12,22]. This could be attributed to a temporary decrease in ionized calcium, as well as phosphate, followed by a rise in PTH, which stimulates 1,25(OH)2D production [11,12]. We observed no between-group difference in total and corrected calcium, phosphate, magnesium and PTH levels. Due to rapid feedback mechanisms, transient changes in calcium and PTH levels may be difficult to detect in long-term exercise studies [11,12,21]. PTH secretion during exercise could also be stimulated by catecholamines and acidosis [12]. The heterogeneous results regarding PTH may be attributed to differences in exercise type, intensity and duration, in addition to physical fitness [12,21]. Furthermore, the exercise-induced PTH response is dependent on the resting level [12,21]. There is a knowledge gap concerning the impact of exercise on prenatal PTH secretion. During pregnancy, PTH declines or remains stable, and other regulators including PTH-related protein (PTHrP) may account for most of the circulating 1,25(OH)2D [1][2][3]. Further studies are needed to fully understand the complex relationship between exercise in pregnancy and alterations in the maternal vitamin D endocrine system. Clinical implications Developmental origins of health and disease have gained increased attention, and maternal hypovitaminosis D during fetal life is suggested to be of significance for the risk of CVD and osteoporosis in the offspring [4,7,29]. Vitamin D has direct effects on skeletal muscle, and deficiency has been associated with atrophy of type II (fast-twitch) fibers [8]. This is reflected in negative health effects such as myopathy and muscle weakness [8,26]. Moreover, low vitamin D is associated with fatty infiltration of the skeletal muscle independent of BMI among women, and muscle adiposity is suggested to affect muscle strength [8,30]. In line with this, vitamin D supplementation has been shown to improve muscle strength, physical performance and balance, and to reduce falls [8,30]. Vitamin D is an important regulator of calcium homeostasis and bone metabolism [1,21]. Weight-bearing exercise has positive effects on bone mineral density (BMD) among non-pregnant adults [31,32]. Low 25(OH)D levels among pregnant women have been associated with reduced peak bone mass in the offspring [33]. Maternal bone turnover is reported to increase during pregnancy, and a modest decline in BMD may appear [1]. A recent trial showed that BMD loss during pregnancy was smaller among physically active compared to sedentary women [34]. However, the effects of exercise on bone remodeling in pregnancy have not been elucidated. CVD accounts for half of all deaths among European women [35]. Vitamin D is important for cardiovascular function, and deficiency is negatively associated with CVD [7,10,11]. Vitamin D and physical activity seem to modify CVD risk, and to have synergistic beneficial effects [36]. (Table 3 caption: Estimates of the intervention effects after a prenatal exercise programme among Caucasian pregnant women; full model = simple model + study site and sampling month.) Vitamin D deficiency in pregnancy may influence blood pressure and the renin-angiotensin system in the offspring, resulting in increased CVD risk [7]. Given the high prevalence of vitamin D insufficiency in pregnancy worldwide, preventive strategies to avoid adverse health effects in mother and offspring are needed. 
The increase in vitamin D levels due to regular indoor physical activity, described in a previous observational study, was estimated to be as effective as around 5 μg vitamin D supplement intake daily [10]. Exercise intensity and volume Regular physical activity potentially improves cardiorespiratory fitness, usually measured in maximal oxygen uptake (VO 2 max) or metabolic equivalent tasks (METs) [37]. VO 2 max does not change during pregnancy, resting heart rate (HR) is increased, and maximal HR is slightly lower compared with post-partum [37,38]. Moderate intensity exercise has been classified as corresponding to moderate 3-6 METs [37,38]. This has been questioned, and it has been argued the intensity should be relative to the woman's own maximal aerobic capacity [38]. In the present study, the exercise intensity was relative to the capacity of each participant as the Borg RPE scale was used. Although pregnant women also can monitor exercise intensity by HR, the ACOG's guidelines, advocates the usage of the Borg RPE scale [13,38]. Strengths and limitations The major strengths of the present study are the large sample size, ITT analysis, statistical modeling and the standardized procedures for sampling. However, the study has limitations: The study is a post hoc analysis of a RCT that was not designed to answer the specific research question. Although post hoc analysis is prone to data dredging bias when performing multiple unplanned analyses, we analyzed available serum from the original RCT to get necessary data to the current study [39]. Furthermore, multiple comparisons are a weakness with the post hoc analysis and increase the risk for the associations observed to be due to chance alone [40,41]. Hence, the results should be interpreted with caution. The participants were well-educated Caucasian women with low-risk pregnancies, which may affect the generalizability. Serum 25(OH)D was analyzed by ECLIA (Roche), although liquid chromatography-tandem mass spectrometry (LC-MS/MS) is considered to be the gold standard [42]. Calculated free 25(OH)D may give an overestimation compared to direct measurement of free 25(OH)D [43]. We did not obtain data on individual sunlight exposure, and thus UV-B could be claimed to contribute to the observed differences in vitamin D levels. However, the study design, including a large number of participants, and the statistical modeling reduces the risk for bias. The loss to follow-up was 15% in the present trial. Generally around 5% loss to follow-up in a RCT is acceptable, whereas more than 20% may be a serious threat against the validity [44]. Based on the fact that in trials with lifestyle interventions, drop-out rates less than 20% are rarely achieved, Altman has proposed that one must consider the circumstances when assessing if a trial is good [45]. All in all, we cannot rule out that the attrition bias has weakened the internal validity in the present study. Only 50% in the intervention group adhered to the intervention. This is in agreement with previous studies reporting a decrease in regular exercise during pregnancy [46,47]. In a recent Norwegian cohort study (n = 3482) only 15% of the pregnant women in second trimester followed the recommendations from ACOG [48]. A RCT from Brazil reported that high compliance is challenging in studies exploring the effects of prenatal exercise among pregnant women. The study investigated the effect of prenatal exercise program three times weekly for 16 weeks and reached an adherence of only 40% [49]. 
A future RCT, investigating the effects of regular prenatal exercise on vitamin D levels as a pre-defined primary outcome, and with a more complete adherence is warranted. Conclusions Regular exercise during pregnancy is recommended due to positive health effects. This is the first RCT investigating effects of long-term exercise on vitamin D and related parameters during pregnancy. Our data indicate that exercise may affect vitamin D status positively and emphasize that women with uncomplicated pregnancies should be encouraged to perform regular exercise. However, this is a post hoc analysis and the results need confirmation in future RCTs.
BLACK SEA RAPANA VENOSA – A PROMISING SOURCE OF ESSENTIAL LIPIDS Background: A diet rich in seafood has been linked to a variety of health benefits. While worldwide overfishing results in declining fish stocks, a growing demand for alternative sources of marine lipids is expected. Rapana venosa (veined Rapa whelk) has become valuable seafood with nutritional and economic importance in the Black Sea region. Purpose: The aim of the present study was to provide knowledge about biologically active lipids in Black Sea Rapana venosa, harvested in the region of Varna. Material/Methods: Lipid classes were separated and purified by column and thin-layer chromatography. The saponifiable lipid fraction was derivatized into fatty acid methyl esters (FAMEs) and analysed by gas chromatography–mass spectrometry (GC-MS). Non-saponifiable lipids were identified by high pressure liquid chromatography coupled with UV/Vis and fluorescence detectors (HPLC-UV-FL). Results: Rapana venosa was characterized by low lipid content (0.50 g.100g-1 ww) with beneficial PUFA/SFA and n-6/n-3 ratios and a high content of vitamin D3 and astaxanthin. Lipids consisted mainly of polar lipids. Polyunsaturated fatty acids represented more than 50% of total fatty acids, the most abundant being from the omega-3 series. The sum of EPA and DHA accounted for 40.8% of total fatty acids. Lipid quality indices indicated good anti-atherogenic and anti-thrombogenic properties (AI and TI < 1) of rapana meat. Conclusions: The study revealed that Rapana venosa from the Black Sea is a good source of high quality marine lipids and presents a high potential for developing functional foods and/or dietary supplements with beneficial health effects. INTRODUCTION Seafood consumption has been linked to a variety of health benefits, which has led to intensive research over the past three decades. Seafood is a rich source of polyunsaturated fatty acids (PUFA), phospholipids, carotenoids, vitamins (vitamin D and B12), various micronutrients and essential amino acids. Marine organisms could provide the necessary intake of very long-chain omega-3 (VLC n-3) PUFA, which have a protective role in cardiovascular health. Moreover, omega-3 (n-3) PUFAs from seafood (fish, crustaceans, mollusks) are considered more effective than those of land-crop origin [1]. Marine oils rich in fatty acids bound to phospholipids (shellfish, crustaceans, algae) have many advantages compared to fish oils, since they are much more stable to oxidation. In addition, dietary phospholipids act as natural emulsifiers, which facilitate and improve the digestion of nutrients in the intestine [2]. Most of the research in the literature discusses the composition of fish, crustaceans and cephalopods, yet information on the nutritive value of shellfish is generally scattered. Marine mollusks are the second major phylum of marine invertebrates. The veined Rapa whelk (Rapana venosa) is a gastropod of marine origin that is recognized as one of the worst invasive species worldwide [3]. Nowadays, the veined Rapa whelk has become valuable seafood with nutritional and economic importance in the Black Sea region. Over the past decade, there has been considerable fishing, particularly in Bulgaria and Turkey, as it is widely consumed in East Asian cuisine [4]. Studies have reported that Rapana venosa consumption improves the lipid profiles and antioxidant capacities in the serum of rats fed on an atherogenic diet [5]. 
To our knowledge, there are limited studies on the lipid composition, fatty acid composition, fat-soluble vitamins and carotenoids, and lipid quality indexes of Rapana venosa from the Bulgarian Black Sea coast. The aim of the present study is to provide new information on the lipid content, lipid classes, fatty acid profiles, fat-soluble vitamins, carotenoids (beta-carotene, astaxanthin) and cholesterol content of Rapana venosa meat. Sampling Live samples of Rapana venosa were purchased from a local enterprise for fish and seafood processing near Varna, Bulgaria, in the spring of 2017. Animals were transported to the laboratory in wet tissue towels in an ice box. They were washed and processed immediately. Lipid extraction, separation and purification Total lipids were extracted by the method of Bligh and Dyer (1959) [6]. They were subsequently separated into neutral lipids (NL) and phospholipids (PL) by column chromatography using a glass column (10 mm dia × 20 cm) packed with a slurry of activated silicic acid (70 to 230 mesh; Merck, Darmstadt, Germany) in chloroform. The fraction containing NL was eluted with chloroform, and PL with methanol. The amounts of total lipids and lipid classes were determined gravimetrically. The purity of each fraction was tested by thin-layer chromatography using Silica gel F254 plates (thickness = 0.25 mm; Merck, Darmstadt, Germany). Fatty acid derivatization and analysis Lipid fractions were methylated using 2% H2SO4 in anhydrous methanol and n-hexane [7]. Fatty acid compositions of TL, NL and PL were determined by gas chromatography with mass spectrometry (GC/MS) of the corresponding fatty acid methyl esters (FAME). Chromatographic separation was performed on a Thermo Scientific FOCUS Gas Chromatograph with a TR-5 MS capillary column (30 m, 0.25 mm i.d.). For identification and quantification of FAME peaks, authentic standards (SUPELCO FAME Mix C4-C24) were used. Fat-soluble vitamins and carotenoids analysis Retinol, cholecalciferol, cholesterol, astaxanthin and β-carotene were extracted from tissue by alkaline hydrolysis and simultaneously analyzed by high performance liquid chromatography as previously described [8]. Statistical analysis Student's t-test was employed to estimate the significance of values. Statistical significance was indicated at p<0.05. 
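To illustrate the quantification and statistics steps just described, here is a minimal sketch: normalising GC-MS peak areas of identified FAME peaks to percentages of total fatty acids, and applying Student's t-test to two groups of replicate values. All numbers are invented for illustration and are not data from this study.

```python
# Minimal sketch: FAME peak areas -> % of total fatty acids, plus Student's t-test.
# All numbers below are invented for illustration; they are not data from this study.
from scipy import stats

def fa_percentages(peak_areas):
    """Normalise identified FAME peak areas to percent of total fatty acids."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

sample = {"16:0": 1.8e6, "18:1n-9": 1.1e6, "20:5n-3 (EPA)": 2.4e6, "22:6n-3 (DHA)": 2.9e6}
print(fa_percentages(sample))

# Student's t-test (alpha = 0.05) comparing, e.g., EPA% in two sets of replicates.
group_a = [17.5, 18.1, 17.9]
group_b = [20.2, 19.8, 20.5]
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, significant at p<0.05: {p_val < 0.05}")
```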
RESULTS AND DISCUSSION Lipids and fatty acid composition Spring samples of Rapana venosa (April 2017) showed low lipid content: 0.50 g.100g-1 ww. Polar lipids (phospholipids, PL) predominated, accounting for 63% of total lipids, while the neutral lipid fraction was 30%. The results for the fatty acid composition of total lipids and lipid classes, as well as nutrition quality indices, are listed in Table 1. The values for FA in the present study are reported both as a percentage of total fatty acids and as mg.100g-1 edible portion, since expressing the values only as percentages could be inaccurate for estimating the nutritive content. The FA profile of total lipids and lipid fractions presented a similar distribution: PUFA>SFA>MUFA. It is well known that animals can synthesize SFA and MUFA de novo. In addition, the World Health Organization (WHO) recommended the replacement of high SFA intake with PUFA or MUFA, preferably of seafood origin [9]. Thus, information on alternative sources of unsaturated FAs, especially phospholipid PUFAs, is very important for consumers and pharmacists. Although the FA composition of marine mollusks depends on environmental factors, such as temperature, salinity, pollution and diet, most studies have reported the same pattern (PUFA>SFA>MUFA) for Rapana venosa lipids from the Black Sea [10][11]. Rapana venosa meat contains only 0.122 g SFA per 100 g edible portion, and thus can be classified as a low-saturated-fat food (containing less than 1.5 g per 100 g) [12]. PUFAs accounted for more than 50% of fatty acids in all lipid fractions. One hundred grams of rapana meat contained 246.8 mg of PUFA, two-thirds of them in the form of polar lipids. This is important, since phospholipids act as natural emulsifiers, easing digestion and absorption of nutrients in the gastrointestinal tract. Rapana venosa lipids are rich in very long-chain PUFA, in particular eicosapentaenoic acid (EPA, 20:5n-3) and docosahexaenoic acid (DHA, 22:6n-3). These PUFAs can reduce platelet adhesion and aggregation and have blood-pressure-lowering properties, thus positively influencing cardiovascular disease (CVD). DHA plays structural and functional roles in brain and retina tissues; therefore, DHA consumption is important to ensure optimum neural and visual functions [13]. EPA and DHA represented 40% of TFA, or almost 70% of PUFA, in TL. From this point of view, Rapana venosa is a very good source of these two fatty acids, as more than 70% of these FAs have a phospholipid origin, which significantly increases their bioavailability. The sum of EPA and DHA found in this study was 170.1 mg per 100 g EP. In the past decades, the Black Sea sprat (Sprattus sprattus L.) and freshwater rainbow trout (Oncorhynchus mykiss W.) have been the most consumed fish in our country. According to previous studies, 100 g EP of Black Sea sprat delivers between 620 mg and 780 mg EPA and DHA [14], while 100 g EP of rainbow trout provides 660-790 mg EPA+DHA [15]. Although fish is considered the main source of EPA and DHA, rapana meal consumption could contribute to an enhanced intake of these biologically active fatty acids, while nevertheless maintaining low saturated fat levels (only 29.3% of TFA). Moreover, 75% of omega-3 fatty acids in rapana tissues are bound to phospholipids, which facilitates and increases their absorption and bioavailability. For that reason, the inclusion of Black Sea molluscs in the diet may be beneficial to the resident population, increasing the intake of essential omega-3 fatty acids [16]. 
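The nutritional ratios and lipid quality indices reported in Table 1 and discussed below (n-6/n-3, PUFA/SFA, AI, TI and h/H) can be computed directly from a fatty acid profile. The sketch below uses commonly cited Ulbricht & Southgate-style formulas for the atherogenicity (AI) and thrombogenicity (TI) indices and a simplified hypocholesterolemic/hypercholesterolemic (h/H) ratio; the exact formula variants and the example percentages are illustrative assumptions, not values taken from this study's Table 1.

```python
# Sketch of common lipid quality indices (Ulbricht & Southgate-style AI and TI,
# plus a simplified h/H ratio). Formula variants exist in the literature; these
# are assumed forms, and the fatty-acid percentages below are illustrative only.
def atherogenicity(fa):
    num = fa.get("12:0", 0) + 4 * fa.get("14:0", 0) + fa.get("16:0", 0)
    den = fa["MUFA"] + fa["n6"] + fa["n3"]
    return num / den

def thrombogenicity(fa):
    num = fa.get("14:0", 0) + fa.get("16:0", 0) + fa.get("18:0", 0)
    den = 0.5 * fa["MUFA"] + 0.5 * fa["n6"] + 3 * fa["n3"] + fa["n3"] / fa["n6"]
    return num / den

def h_over_H(fa):
    # hypocholesterolemic (18:1n-9 + PUFA) over hypercholesterolemic (14:0 + 16:0) FAs
    return (fa.get("18:1n9", 0) + fa["n6"] + fa["n3"]) / (fa.get("14:0", 0) + fa.get("16:0", 0))

# Illustrative composition (% of total fatty acids).
example = {"12:0": 0.1, "14:0": 2.0, "16:0": 15.0, "18:0": 7.0,
           "18:1n9": 10.0, "MUFA": 17.0, "n6": 9.0, "n3": 43.0}
sfa = example["12:0"] + example["14:0"] + example["16:0"] + example["18:0"]

print(f"AI  = {atherogenicity(example):.2f}")
print(f"TI  = {thrombogenicity(example):.2f}")
print(f"h/H = {h_over_H(example):.2f}")
print(f"PUFA/SFA = {(example['n6'] + example['n3']) / sfa:.2f}")
print(f"n-6/n-3  = {example['n6'] / example['n3']:.2f}")
```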
In the past decades, a higher intake of SFA and n-6 PUFAs has been a typical dietary pattern in European countries, which results in high and unsafe n-6/n-3 FA ratios. The WHO recommends that the n-6/n-3 ratio should not exceed 10 in a diet, and its decrease in the human diet is essential to help prevent coronary heart disease by reducing plasma lipids. Moreover, in all lipid classes n-3 PUFAs remained dominant, especially in PL (155.2 mg.100g-1 EP), with low and beneficial n-6/n-3 ratios (below 0.23). The PUFA/SFA ratio is another indicator used in nutrition quality assessment, regarded as a measure of the tendency of the diet to affect the incidence of CVD [13]. In this study, the PUFA/SFA ratio was found to be lower than 4.0 (Table 1) in all lipid fractions, which is within the recommendations of the Department of Health (1994) [17]. Calculated lipid quality indices are used to measure the ability of the Rapana lipids to reduce blood lipids (AI), platelet activity (TI) and the functional effect of long-chain PUFAs on cholesterol metabolism (h/H). In this study, low AI and TI and high h/H levels were found in both PL and NL fractions (Table 1), which can classify Rapana edible tissue as beneficial for human consumption. In addition, the Rapana venosa lipid fractions analyzed in the present study showed more favorable index values compared to those reported by Prato et al. [13] for commercial scallop species from the Ionian Sea. Fat-soluble vitamins and carotenoids The results obtained for vitamins A and D3, the carotenoids beta-carotene and astaxanthin, and cholesterol are presented in Table 2. Carotenoids are important metabolites, essential for the normal growth, metabolism and reproductive cycle of mollusks. They exhibit high antioxidant activities. The Rapana venosa species is able to synthesize and accumulate astaxanthin from beta-carotene by an oxidative metabolic pathway [18]. In a previous study [19], autumn samples of Rapana venosa presented significantly higher amounts of carotenoids. The vitamin D3 content in this study accounted for 18.3 µg per 100 g of rapana meat, which supplies more than 100% of the recommended daily intake [20]. Hence, Rapana venosa from the Black Sea can be regarded as a good source of this vitamin, while presenting a low cholesterol content (19.8 mg per 100 g). CONCLUSION The present study reveals that Rapana venosa from the Black Sea is characterized by high amounts of marine bioactive lipids: long-chain PUFA (EPA and DHA), carotenoids with antioxidant properties (astaxanthin) and vitamin D3. This Black Sea gastropod contained very low amounts of SFA but high omega-3 PUFAs, and could help implement dietary recommendations to replace high SFA intake with PUFA or MUFA, if possible of seafood origin. Although oily fish is considered the main source of omega-3 PUFA and vitamin D3, rapana meal consumption could contribute to an enhanced intake of these nutrients, whilst maintaining low saturated fat and cholesterol levels. Moreover, the inclusion of Black Sea molluscs in the diet may be beneficial to the resident population, increasing the intake of essential omega-3 fatty acids. ACKNOWLEDGMENTS The authors would like to thank the National Science Fund of Bulgaria for the financial support. The study is a part of project DM09/2 from 15 Dec 2016, "Seasonal variations in lipid profile and thermal stress effect on the lipid composition of Black Sea Mytilus galloprovincialis and Rapana venosa". 
CONFLICT OF INTEREST Authors declare no conflict of interest.
Table 1. Fatty acid composition of lipid classes and nutrition quality indices of Rapana venosa from the Black Sea coast.
Table 2. Fat-soluble vitamins and carotenoids content of Rapana venosa from the Black Sea coast.
17. Department of Health. Nutritional aspects of cardiovascular disease: Health and social subjects (Report No 46). HMSO, London; 1994. 187 p.
18. Borodina AV, Maoka T, Soldatov AA. [Composition and content of carotenoids in body of the Black Sea gastropod mollusc Rapana venosa (Valenciennes, 1846)] [in Russian]. Zh Evol Biokhim Fiziol.
Inferences on the Relations Between Central Black Hole Mass and Total Galaxy Stellar Mass in the high-redshift Universe At the highest redshifts, z>6, several tens of luminous quasars have been detected. The search for fainter AGN, in deep X-ray surveys, has proven less successful, with few candidates to date. An extrapolation of the relationship between black hole (BH) and bulge mass would predict that the sample of z>6 galaxies host relatively massive BHs (>1e6 Msun), if one assumes that total stellar mass is a good proxy for bulge mass. At least a few of these BHs should be luminous enough to be detectable in the 4Ms CDFS. The relation between BH and stellar mass defined by local moderate-luminosity AGN in low-mass galaxies, however, has a normalization that is lower by approximately an order of magnitude compared to the BH-bulge mass relation. We explore how this scaling changes the interpretation of AGN in the high-z Universe. Despite large uncertainties, driven by those in the stellar mass function, and in the extrapolation of local relations, one can explain the current non-detection of moderate-luminosity AGN in Lyman Break Galaxies if galaxies below 1e11 Msun are characterized by the low-normalization scaling, and, even more so, if their Eddington ratio is also typical of moderate-luminosity AGN rather than luminous quasars. AGN being missed by X-ray searches due to obscuration or instrinsic X-ray weakness also remain a possibility. INTRODUCTION The frontier of high redshift galaxies and quasars has now reached a relatively large sample. Hundreds of Lyman Break Galaxies (LBGs) with colors consistent with z > 6 have been detected in deep fields (e.g., Finkelstein 2015, and references therein), and tens of luminous quasars are known at z > 6 (e.g., Fan 2012, and references therein). The population of fainter active galactic nuclei (AGN) is still elusive. Partly, current surveys are not deep enough to detect them directly, and, partly, X-ray stacking of LBGs has led to no signal detected (Willott 2011;Fiore et al. 2012;Cowie et al. 2012;Treister et al. 2013). Searches for point sources in deep X-ray fields has also led to inconclusive results (Giallongo et al. 2015;Weigel et al. 2015;Cappelluti et al. 2015). The X-ray non-detections have been used to estimate an upper limit on the black hole (BH) mass density at z > 6 through an analog of Soltan's argument (Soltan 1982), and on the luminosity a putative AGN can have in these galaxies (Treister et al. 2011(Treister et al. , 2013. With some assumptions on the Eddington ratio, this can be translated into an upper limit on the BH mass. The apparent result is that, if LBGs host BHs, they are accreting at low rate, or are less massive than expected on the basis of extrapolations of the correlation between BH mass and bulge mass at z = 0 (Marconi & Hunt 2003;Häring & Rix 2004;Kormendy & Ho 2013). However, it is far from clear if high-redshift LBGs have well developed bulges. 1 Hubble Fellow Reines & Volonteri (2015, RV15 thereafter) have studied the relation between BH mass and total stellar mass for nearby galaxies (z < 0.055), including both galaxies with quiescent and active BHs. For the latter, the BH mass estimate is based on reverberation mapping or single-epoch virial estimates, the same technique used at higher redshift. Likewise, their stellar mass measurements rely on mass-to-light ratios, as done on higher redshift samples. 
Therefore they adopted the same methods used for mass measurements at higher redshift, where detailed information on stellar kinematics and bulge properties is not available. They found that the relation between BH mass and total stellar mass for moderate-luminosity AGN, predominantly hosted by lower-mass galaxies, has a normalization that is approximately an order of magnitude lower than BH-bulge mass relations largely constrained at high mass. In this paper we assess whether the lower normalization identified for the low-mass galaxies, typically lacking strong bulges, can explain the lack of an X-ray detection in the stack of LBGs. We couple galaxy stellar mass functions (MFs) with BH-stellar mass relations, and estimate the redshift evolution of the BH mass density and the BH MF. We also take a complementary approach of coupling AGN luminosity functions at z = 6 with an empirical Eddington ratio distribution, derived from the high-luminosity end of the luminosity function, to determine the BHMF. We start by paraphrasing some text from a paper by Schulze & Wisotzki (2011). We adopt the following convention: BH masses are given by µ = log M_BH, the stellar mass is s, with s = log M_*, and the luminosity l = log L_AGN. Given a galaxy MF, Φ_*(s), and a function g(µ | s) which gives the probability of finding a BH of mass µ in a galaxy of mass s, the BHMF becomes Φ_MBH,GAL(µ) = ∫ g(µ | s) Φ_*(s) ds. (1) The integral of the BHMF then gives the mass density in BHs. Similarly, the integral of the galaxy MF gives the stellar mass density. The function g(µ | s) is based on the empirical correlation between µ and s, µ = γ + αs, with log-normal intrinsic scatter σ, i.e., g(µ | s) = (2πσ²)^(-1/2) exp[-(µ - γ - αs)²/(2σ²)]. (2) A similar approach links the AGN luminosity function, Φ_AGN, to the BHMF, through f(λ), the probability distribution of the logarithmic Eddington ratio λ, recalling that l = 38.11 + λ + µ, and a duty cycle, D: Φ_AGN(l) = D ∫ f(l - 38.11 - µ) Φ_MBH(µ) dµ. (3) We consider here the Φ_MBH,AGN(µ) at z = 6 derived by Willott et al. (2010a), starting from the quasar luminosity function by Willott et al. (2010b), with f(λ), fitted on the sample of z ∼ 6 quasars with estimated BH mass, described by a lognormal distribution with mean λ = log(0.6), σ = 0.3, and D = 0.75. Additionally, a fraction of AGN are obscured, and they are missed by observations. We include a luminosity-dependent correction for obscuration based on Ueda et al. (2014). Note that Ueda et al. (2014) limit their redshift evolution to z ∼ 2. They found that the fraction of obscured quasars increases with redshift, but, conservatively, we keep the z = 2 value even at higher redshift. Galaxy mass functions Several different measurements and analytical fits to the galaxy stellar MF can be found in the literature. Many of them are summarized in Behroozi et al. (2013) and Madau & Dickinson (2014), where differences and uncertainties are discussed (see Fig. 11 in Madau & Dickinson 2014). We will further discuss this in section 3. We start from the galaxy MF of Ilbert et al. (2013). We use their best fit parameters for the full sample, and the fit for "quiescent" galaxies as a proxy for elliptical galaxies. At z > 4 we consider four galaxy MFs: González et al. (2011), plus the correction for nebular lines proposed by Stark et al. (2013), Duncan et al. (2014) and Grazian et al. (2015), all converted to a Chabrier initial MF for consistency with RV15. The stellar mass density for the various MFs obtained by integration for stellar masses > 10^8 M_⊙ is shown in Fig. 1. In the following we will use as a reference the MF by Grazian et al. 
(2015) as "middle ground", and discuss how results change using other MFs. We adopt three different functional forms for the scaling between BH mass and galaxy stellar mass. The first is a simple linear scaling, so that the BH mass is 2 × 10 −3 the stellar mass: BH-stellar mass relationships as often done in the literature, by extrapolating the BH-bulge mass relation of Marconi & Hunt (2003) and Häring & Rix (2004). This is our "vanilla" model. We also include the two total stellar mass relationships found by RV15 for ellipticals and bulges, typically with high stellar masses: and for moderate-luminosity AGNs, typically in lowermass host galaxies: "HighMass" and "LowMass" fits hereafter. Both these relationships have an intrinsic scatter ∼ 0.5 dex. In what follows we will adopt a scatter of 0.5 dex for all scalings as a reference and then discuss the effect of a tighter or broader scatter. We perform a Monte Carlo experiment with 50,000 draws for each BH or galaxy mass unless otherwise stated. 3. RESULTS 3.1. Evolution of BH mass density We start by looking at an integral quantity ρ BH , the BH mass density versus redshift, integrating Φ MBH,GAL from µ = 5 to µ = 9. For reference, at z = 0 we show the mass density obtained by Shankar (2013). At z > 0, the main constraints come from Soltan's argument, where Grazian et al. (2015) at z > 4. Red circles: LowMass fit. Purple diamonds: HighMass fit. Green stars: LowMass fit below 10 11 M ⊙ and the HighMass fit above. Orange triangles: fixed BH-stellar mass ratio of 2 × 10 −3 , as often done in the literature, by extrapolating the BH-bulge mass relation of Marconi & Hunt (2003) and Haring & Rix (2004). Pink squares: HighMass fit and MF of Ilbert et al. (2013) for quiescent galaxies only. In all cases we include all BHs, quiescent and active. Grey hatched region: Soltan's argument. Vertical grey line: z = 0 BH mass density. The limits at z > 6 are derived from searches for AGN in stacked high-z galaxies or from the integrated X-ray background. These limits do not include Compton Thick AGN and require a BH to be active at some level, typically > 10 42 − 10 43 erg s −1 in the soft or hard X-ray band. the AGN luminosity function is integrated over time, from t max to t(z), and rescaled by a (fixed) radiative efficiency, ǫ, to obtain the density of mass accreted on BHs as a function of redshift: We adopt as a reference the estimate by Merloni (2016) at z < 4, including contributions of unobscured AGN, Compton-thin and Compton-thick AGN, and ǫ = 0.1, and show also the cases with ǫ = 0.06 and ǫ = 0.3. At z > 6 we report all the current upper limits, derived either on deep X-ray observations (Willott 2011;Cowie et al. 2012;Fiore et al. 2012;Treister et al. 2013) or from the integrated X-ray background (Salvaterra et al. 2012). These upper limits do not include Compton Thick AGN, so that in reality there may be a fraction of BHs not accounted for. We also stress that Soltan's argument estimates the mass density accreted in luminous phases throughout cosmic time up to z. The total mass density can be higher when accounting for non-radiative BH growth, e.g. via mergers, radiatively inefficient accretion or heavily obscured accretion episodes, and when includ-ing inactive BHs. The integral of Φ MBH,GAL instead provides the total mass density in BHs, irrespective of the luminosity. In Fig. 2 we summarize the main results on the redshift evolution of the BH mass density. At z < 1, there is a general consensus: taking the full MF of Ilbert et al. 
(2013), and assuming the vanilla fit, or including quiescent galaxies only and fit HighMass give similar results. The reason is that, while the mass in galaxies locked in quiescent galaxies is about half of the total stellar mass density, the BH mass locked in elliptical galaxies dominates the full population because BHs represent a higher fraction of their stellar mass. Using the LowMass fit only, instead, leads to an underestimate of the total BH mass density. Results become more interesting at higher redshift. Firstly, the fraction of stellar mass in quiescent galaxies drops significantly. Therefore, even considering that BHs represent a larger fraction of the stellar mass in ellipticals, the global contribution to the BH mass density falls. Therefore, if BHs require a bulge component, BHs represent a higher fraction of the stellar mass of the bulge at increasing redshift. Secondly, for the full population, the mass density in BHs is always above the limits imposed by lack of X-ray detections in stacking of high-z galaxies, except for the LowMass fit , i.e., for the other fits to hold, X-ray limits imply most of the BH mass density was not accreted in a luminous phase. Increasing the scatter only increases the BH mass density (Lauer et al. 2007;Somerville 2009;Volonteri & Stark 2011). Even reducing the scatter to zero, however, the vanilla or HighMass fits overestimate ρ BH,acc given by the observational constraints at highz. BHs represent a smaller fraction of the stellar mass of the galaxy at higher redshift, and/or local moderateluminosity AGN are good proxies for the BH-to-host relationship at high-z. A combination of the LowMass and HighMass fit ("hybrid"), using s = 11 as dividing line (RV15), provides a reasonable evolution of the mass density at all redshifts, with only a slight tension with most upper limits at z > 6. For the same µ − s relation of Eq. 6, the uncertainty given by the unknowns in the MF amount to ∼1 dex, with the MF by Duncan et al. (2014) requiring the strongest (negative) evolution in the µ − s relation to accommodate observational upper limits. In summary, the choice of the scaling relation has clear consequences for the derived BH mass density. At low redshift these are less marked, since massive galaxies contribute significantly to the mass density. At high redshift, since massive galaxies are largely absent, the contribution from low mass galaxies is more important. Connection to the quasar population We focus here on what the scaling relations imply for the z ∼ 6 luminous quasars, at the high-mass end of the BHMF, µ > 8. In Fig. 3 we compare Φ MBH,GAL to Φ MBH,AGN at z = 6. With the vanilla and hybrid fits, Φ MBH,GAL > Φ MBH,AGN (see also the discussion in Willott et al. 2010a;Volonteri & Stark 2011), requiring, e.g., a lower duty cycle or occupation fraction. For the LowMass fit, Φ MBH,GAL is in good agreement with Φ MBH,AGN at µ > 9. The masses of BHs powering the most luminous quasars, however, are estimated to be above the z = 0 scaling (assuming that the total dynamical mass corresponds to the stellar mass, Wang et al. 2013). To mimic their luminosity/flux limit, we associate a luminosity to BHs in Φ MBH,GAL through f (λ), adopting the functional form and parameters given in section 2. 
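A minimal numerical sketch of this luminosity assignment is given below: galaxy stellar masses are drawn from a toy distribution, BH masses are assigned through a µ-s relation with log-normal scatter, and luminosities follow from the log-normal Eddington-ratio distribution via l = 38.11 + λ + µ. The toy stellar-mass distribution, the luminosity threshold and the selection window are placeholders chosen for illustration, not the paper's fitted ingredients; the relation coefficients are set, as an example, to LowMass-like values.

```python
# Toy Monte Carlo: assign BH masses and AGN luminosities to a galaxy population.
# Parameter choices below are illustrative placeholders, not the paper's fits.
import numpy as np

rng = np.random.default_rng(0)
n_draw = 50_000

# 1) Galaxy stellar masses, s = log10(M*/Msun): a toy distribution between
#    10^8 and 10^11 Msun whose density falls with mass (a stand-in for a
#    Schechter-like mass function).
s = 8.0 + 3.0 * rng.power(0.5, n_draw)

# 2) BH masses from a mu-s relation with 0.5 dex log-normal scatter.
#    Coefficients of the form mu = gamma + alpha*(s - 11); values here follow
#    the LowMass fit as an example.
gamma, alpha, scatter = 7.45, 1.05, 0.5
mu = gamma + alpha * (s - 11.0) + rng.normal(0.0, scatter, n_draw)

# 3) Log Eddington ratio from a log-normal distribution (mean log(0.6), sigma 0.3),
#    and bolometric luminosity via l = 38.11 + lambda + mu.
lam = rng.normal(np.log10(0.6), 0.3, n_draw)
l_bol = 38.11 + lam + mu

# 4) Fraction of ~10^9 Msun hosts whose AGN exceeds a toy bolometric threshold,
#    modulated by a duty cycle. Obscuration and the bolometric correction to
#    hard X-rays would further reduce the detectable fraction.
duty_cycle = 0.75
sel = (s > 8.9) & (s < 9.1)
frac = duty_cycle * np.mean(l_bol[sel] > 44.0)
print(f"fraction of s ~ 9 hosts above the toy threshold: {frac:.3f}")
```

Applying a luminosity cut to such a mock population and re-fitting the µ-s relation of the surviving objects illustrates the selection effect discussed next.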
If, for the LowMass fit, we select only BHs with bolometric luminosity > 10^46 erg s^-1, similar to currently detected z ∼ 6 quasars, this subset of the population, at s < 12, is described by an apparent scaling between BH mass and galaxy stellar mass that is shallower and has a higher normalization than the scaling describing the full underlying population. This is a consequence of selection effects (Lauer et al. 2007; Volonteri & Stark 2011): at relatively low galaxy mass, only BHs above the mean of the intrinsic scaling can reach very high luminosity. BHs powering luminous quasars are more likely to lie above the intrinsic relation, which is recovered when lowering the luminosity threshold. 3.3. Implications for detecting AGN in LBGs X-ray stacking gives more direct upper limits on the luminosity of a putative AGN in LBGs, with typical stellar masses of ∼10^9 M_⊙. According to Treister et al. (2013), at z = 6 the upper limit on the luminosity in the hard X-ray band is 1.6 × 10^42 erg s^-1. We show in Fig. 4 the fraction of galaxies hosting an AGN detectable above a given X-ray luminosity as a function of galaxy stellar mass, where we convert from bolometric luminosity to hard X-ray luminosity using the bolometric corrections of Marconi et al. (2004). We adopt again a mean λ = log(0.6), D = 0.75 and a correction for obscuration. This Eddington ratio, however, was estimated on luminous quasars, and it is higher than the typical value for "normal" AGN. The same applies to the duty cycle (e.g., Schulze & Wisotzki 2010). The absorbed fraction is also very conservative. We also assume that all galaxies host a BH. While today it is not clear how many galaxies with s ∼ 9 have BHs (Reines et al. 2013), an LBG with s ∼ 9 represented a massive galaxy at z ∼ 6, and it is expected that such massive galaxies have been seeded with a BH by that time (Volonteri 2010). Statistically, the fraction of galaxies with mass ∼10^9 M_⊙ hosting an unobscured AGN with L_X > 1.6 × 10^42 erg s^-1 is only ∼0.01 using the LowMass fit. Treister et al. (2013) stack 223 galaxies, and find no detection. Therefore the predicted luminosities are only slightly higher than the upper limit in the stack. If we select only BHs above this luminosity threshold, we can convert the mass function into an expected number of AGN in the 4Ms CDFS, covering about 10^-6 of the sky area. Between z = 6 and z = 7 we expect 2.22 +0.79 -1.75 AGN with L_X > 1.6 × 10^42 erg s^-1 for the LowMass fit. The vanilla fit gives 4.18 +0.59 -1.46. The accretion properties derived from luminous quasars are significantly different from those of the local Seyferts defining the LowMass fit, making the estimates above conservative. The median Eddington ratio for the local AGN sample is around a factor of 10 lower (using the median L_bol and M_BH from the RV15 sample). With the LowMass fit, assuming a mean Eddington ratio of 0.06 in the lognormal distribution of Eddington ratios for BHs in galaxies with s < 11, the fraction of AGN at a given luminosity decreases (Fig. 4), and between z = 6 and z = 7 we expect 0.15 +0.60 -0.15 AGN with L_X > 1.6 × 10^42 erg s^-1 in the 4Ms CDFS. For reference, the vanilla fit predicts 1.86 +0.71 -1.77 AGN. These results are based on the galaxy MF by Grazian et al. (2015). For the galaxy MF predicting the largest number of galaxies, thus the most difficult to reconcile with a low number of BHs and AGN, Duncan et al. 
(2014), at L X > 1.6 × 10 42 erg s −1 we find 3.40 +0.84 −3.40 sources in the 4Ms CDFS area for the Low-Mass fit; 5.58 +0.66 2.33 for the vanilla fit; in all cases adoptingλ = log(0.6), making these upper limits. Assumingλ = log(0.06) at s < 11, the numbers decrease to 0.65 +1.19 −0.65 and 3.06 +0.80 −3.06 . 4. CONCLUSIONS In this paper, we have drawn inferences on highredshift BHs and their relation to their hosts. We have tested whether the relation between BH and galaxy stellar mass found by RV15 for local AGN (z < 0.055), can explain the lack of an X-ray detection in the stack of LBGs, because of the low normalization with respect to the BH-bulge mass relation characterizing bulgedominated quiescent galaxies. We convolve galaxy stellar MFs with BH-stellar mass relations, and estimate the redshift evolution of the BH mass density and the BHMF at z = 6. We stress the speculative nature of this paper. It is very hard to draw firm, robust conclusions given the uncertainties on the observables. Despite the uncertainties, we can highlight some trends, and explain the current non-detection of moderate-luminosity AGN in LBGs using scaling relations for BH masses and AGN luminosities derived on observational samples. The main results can be summarized as follows: • The fraction of stellar mass in quiescent galaxies drops significantly with redshift. If BHs require a bulge component, the ratio between BH and bulge mass must evolve positively with increasing redshift, in the sense that BHs represent a higher fraction of the stellar mass of the bulge. • The total mass density in BHs is always above the limits imposed by lack of X-ray detections in stacking of high-z galaxies, except for the LowMass fit. Local moderate-luminosity AGN are good proxies for high-z galaxies, and/or BHs represent a smaller fraction of the total stellar mass of the galaxy at high-z. • Using the BH-stellar mass scaling derived from local AGN hosted by low-mass galaxies jointly with, very conservatively, the accretion properties derived only from luminous quasars (Willott et al. 2010a) is close to explaining the paucity of AGN in LBGs. Moderate-luminosity AGN have lower Eddington ratios than luminous quasars, which makes the scarcity of AGN in LBGs even more reasonable. • If the BH-stellar mass scaling at high-z corresponds to today's BH-bulge mass, the lack of AGN in LBGs favors lower Eddington ratios for their BHs. We have shown that using the empirical scaling between BH and galaxy mass, determined on local AGN hosted by relatively low-mass galaxies, can explain the few, if any, moderate-luminosity AGN at z > 6. One possibility is also that such AGN are intrinsically X-ray weak (Luo et al. 2014), or that obscuration is more important than currently thought. Treister et al. (2013) also suggest alternative possibilities for such a low space density derived from the X-ray observations, among them a low BH occupation fraction at these redshift, a low AGN duty cycle, and/or BH growth through mergers. Getting firmer constraints on the mass of the host galaxies of the current sample of luminous quasars, and pushing at the same time for detections of AGN, e.g., using alternative techniques such as line ratios in the ultraviolet (Feltre et al. 2016) on the existing sample of LBGs would greatly help in understanding the link between BHs and galaxies at early times.
Entanglement and Nonlocality in Many-Body Systems: a primer Current understanding of correlations and quantum phase transitions in many-body systems has significantly improved thanks to the recent intensive studies of their entanglement properties. In contrast, much less is known about the role of quantum non-locality in these systems. On the one hand, standard,"theorist- and experimentalist-friendly"many-body observables involve correlations among only few (one, two, rarely three...) particles. On the other hand, most of the available multipartite Bell inequalities involve correlations among many particles. Such correlations are notoriously hard to access theoretically, and even harder experimentally. Typically, there is no Bell inequality for many-body systems built only from low-order correlation functions. Recently, however, it has been shown in [J. Tura et al., Science 344, 1256 (2014)] that multipartite Bell inequalities constructed only from two-body correlation functions are strong enough to reveal non-locality in some many-body states, in particular those relevant for nuclear and atomic physics. The purpose of this lecture is to provide an overview of the problem of quantum correlations in many-body systems - from entanglement to nonlocality - and the methods for their characterization. -Introduction Nonlocality is a property of correlations that goes beyond the paradigm of local realism [1,2,3,4,5,6]. According to the celebrated theorem by J.S. Bell [3], correlations among the results of measurements of local observables performed on some entangled states do not admit a local hidden-variable (LHV) model (cf. [7] for a review on LHV models). In other words, these correlations cannot be described by observers who have access only to correlated classical variables. In such instances, the observed quantum correlations are named nonlocal and we talk about quantum nonlocality, or Bell nonlocality. This can be detected by means of the so-called Bell inequalities [3] -the celebrated example of such is the famous Clauser-Horne-Shimony-Holt inequality (CHSH) [8]. In general, Bell inequalities are inequalities formulated in terms of linear combinations of the probabilities observed when performing the local measurements on composite systems, and their violation signals nonlocality. Quantum, or Bell nonlocality is interesting for at least three reasons: • It is a resource for quantum communication, secure key distribution [9,10,11], or certified quantum randomness generation [12,13,14]. Hence, it is one of the most important elements of the future quantum technologies. • Its characterization is a challenging complex and difficult problem, proved to be, depending on formulation, NP-complete or NP-hard ( [20,21]; see also [5,6] and references therein). Quantum-mechanical states that violate Bell inequalities are necessarily entangled and cannot be represented as mixtures of projections on simple product states [22] (for a review on entanglement see [23]); the opposite does not have to be true. Already in 1991 Gisin proved [24] that any pure state of two parties violates a Bell's inequality. This result was extended to an arbitrary number of parties by Popescu and Rohlich [25]. But, Werner in the seminal paper from 1989 [22] constructed examples of mixed bipartite states that admit a LHV model for local projective measurements, and nevertheless are entangled. This result was then generalized by Barrett to arbitrary generalized measurements [26]. 
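As a quick numerical illustration of the CHSH inequality mentioned above, the following sketch (added here for illustration, not part of the original lecture; it uses plain NumPy and conventionally chosen measurement settings) evaluates the CHSH expression on a two-qubit maximally entangled state and recovers the Tsirelson value 2√2, above the classical bound of 2.

```python
import numpy as np

# Pauli observables
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Maximally entangled two-qubit state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi)

# Standard optimal settings: Alice measures Z, X; Bob measures (Z +/- X)/sqrt(2)
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def E(a, b):
    """Correlator <A_x B_y> = tr(rho A_x (x) B_y)."""
    return float(np.real(np.trace(rho @ np.kron(a, b))))

chsh = E(A[0], B[0]) + E(A[0], B[1]) + E(A[1], B[0]) - E(A[1], B[1])
print(chsh)  # ~2.828 = 2*sqrt(2) > 2, so the state is Bell-nonlocal
```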
Very recently, it has been shown that entanglement and nonlocality are inequivalent for any number of parties ( [27] and references therein). On the other hand, entanglement, despite being a weaker property of quantum states than nonlocality, has proven to be very useful to characterize properties of many-body systems, and the nature of quantum phase transitions (QPT) [28]. For instance, focusing on lattice spin models described by local Hamiltonians, the following properties are true (for a review see [29,30]): • The reduced density matrix for two spins typically exhibits entanglement for short separations of the spins only, even at criticality; still entanglement measures show signatures of QPTs [31,32]; • By performing optimized measurements on the rest of the system, one can concentrate the entanglement in the chosen two spins. One obtains in this way localizable entanglement [33,34], whose entanglement length diverges when the standard correlation length diverges, i.e., at standard QPTs; • For non-critical systems, ground states (GSs) and low energy states exhibit the, so-called, area laws: the von Neuman (or Rényi) entropy of the reduced density matrix of a block of size R scales as the size of the boundary of the block, ∂R; at criticality logarithmic divergence occurs frequently [35] (for a review see [36,37]). These results are very well established in 1D, while there are plenty of open questions in 2D and higher dimensions; • GSs and low energy states can be efficiently described by the, so called, matrix product states, or more generally tensor network states (cf. [38]); • Topological order (at least for gapped systems in 1D and 2D) exhibits itself in the properties of the, so called, entanglement spectrum, i.e. the spectrum of the logarithm of the reduced density matrix of a block R [39], and in 2D in the appearance of the, so called, topological entropy, i.e. negative constant correction to the area laws [40,41]. A natural question thus arises: Does non-locality play also an important role in characterization of correlations in many-body systems? Apart from its fundamental interest, so far the role of nonlocality in such systems has hardly been explored. As already mentioned, entanglement and nonlocality are known to be inequivalent quantum resources. In principle, a generic manybody state, say a ground state of a local Hamiltonian, is pure, entangled and, because all pure entangled states violate a Bell inequality [25], it is also nonlocal. However, this result is hardly verifiable in experiments, because the known Bell inequalities (see, e.g., [42,43,44,45,46]) usually involve products of observables of all parties. Unfortunately, measurements of such observables, although in principle possible [47,48], are technically extremely difficult; instead one has typically "easy" access to few-body correlations, say one-and two-body, in generic manybody systems. Thus, the physically relevant question concerning the nonlocality of many-body quantum states is whether its detection is possible using only two-body correlations. The plan of these lectures is the following: In Section 2 we present a crash course in entanglement theory, and talk about bipartite pure and mixed states, about entanglement criteria, and entanglement measures. Section 3 is devoted to the discussion of some aspects of entanglement in many-body systems. There we talk about the computational complexity of many-body problems, and relate it to entanglement of a generic state. 
We then explain area laws, and indicate why they give us hopes to find new efficient ways of solving many-body problems with new numerical tools. These new tools are provided by the tensor network states. Section 4 introduces the problem of non-locality in many-body systems; we use here the contemporary approach called device-independent quantum information theory that talks about properties of correlations between measurements only. Here we introduce the concept of classical correlations, quantum-mechanical correlations, and non-signalling correlations. CHSH inequality and its violations are shortly presented here. In Section 5 we enter into the problem of nonlocality detection in many-body systems based on Bell inequalities that involve only two-and one-body correlators. Here we explain the idea of permutationally invariant Bell inequalities. Finally, Section 6 discusses physical realizations of many-body non-locality with ionic and atomic models. These are promising systems in which the quantum violation of our Bell inequalities could be observed. -Crash course on entanglement In this section, we focus on bipartite composite systems and follow the presentation of Ref. [29]. We will define formally what entangled states are, and present one important criterion to discriminate entangled states from separable ones. However, before going into details, let us introduce the notation. In what follows we will be mostly concerned with bipartite scenarios, in which traditionally the main roles are played by two parties called Alice and Bob. Let H A denote the Hilbert space of Alice's physical system, and H B that of Bob's. Our considerations will be restricted to finite-dimensional Hilbert spaces, so we can set H A = m and H B = n . Thus, the joint physical system of Alice and Bob is described by the tensor product Hilbert space To give an illustrative example of an entangled state from H AB let us consider the maximally entangled states: where d = min{m, n} and {|i A } and {|i B } are some orthonormal bases (for instance the standard ones) in H A and H B , respectively. The reason why this state is called maximally entangled will become clear when we introduce entanglement measures. For pure states, the separability problem -the task of judging if a given quantum state is separable -is easy to handle using the concept of Schmidt decomposition which we introduce in the following theorem. Theorem 1. Every pure state |ψ AB ∈ H AB with m ≤ n admits the following decomposition called also the Schmidt decomposition, where the local vectors |e i and |f i form parts of orthonormal bases in H A and H B , respectively. Then, λ i are some positive numbers that satisfy r i=1 λ 2 i = 1, and r ≤ m. The proof of the Theorem 1 employs the singular value decomposition of the matrix describing the coefficients one gets by expanding the state in arbitrary orthonormal bases from Alice's and Bob's Hilbert spaces. The numbers λ i > 0 (i = 1, . . . , r) and r are called, respectively, the Schmidt coefficients and the Schmidt rank of |ψ AB . It is also worth noticing that {λ 2 i , |e i } and {λ 2 i , |f i } are eigensystems of the density matrices representing the first and second subsystem of |ψ AB and r is their rank. Now, one immediately realizes that Theorem 1 provides a very simple separability criterion for bipartite pure states: a state |ψ AB is separable if, and only if its Schmidt rank is one. 
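The Schmidt coefficients of Theorem 1 are simply the singular values of the m × n matrix of coefficients of |ψ⟩_AB, so the pure-state separability test can be run in a few lines. The sketch below (an added illustration, not taken from the original text) computes them for a product state and for the maximally entangled state (1).

```python
import numpy as np

def schmidt_coefficients(psi, m, n):
    """Schmidt coefficients of |psi> in C^m (x) C^n: the singular values of
    the m x n coefficient matrix obtained by reshaping the state vector."""
    return np.linalg.svd(psi.reshape(m, n), compute_uv=False)

# Product state |0>|+>  ->  Schmidt rank 1, i.e. separable
product = np.kron([1.0, 0.0], np.array([1.0, 1.0]) / np.sqrt(2))
# Maximally entangled state on C^2 (x) C^2  ->  two equal coefficients 1/sqrt(2)
max_ent = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

for psi in (product, max_ent):
    lam = schmidt_coefficients(psi, 2, 2)
    print(lam.round(4), "Schmidt rank =", int(np.sum(lam > 1e-12)))
```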
Moreover, this criterion is operational, i.e., to check if a given pure state is separable, it suffices to determine the rank r of one of its subsystems: if r = 1 (the corresponding subsystem is in a pure state) then |ψ AB is separable; otherwise it is entangled. Note that the maximally entangled state (1) is already written in the form (2), with r = d and all the Schmidt coefficients equal to 1/ √ d. 2. Bipartite mixed states: Separable and entangled states. -Let us now pass to the case of mixed states. Having learned the definition of separability for pure states, one could naively expect that mixed separable states are those taking the product form ρ A ⊗ ρ B . This intuition is, however, not entirely correct and one can argue that all convex combinations of such product states should also be called separable. This is why the separability problem for mixes states complicates considerably. In order to recall the definition of mixed separable states -first formalized by Werner in 1989 [22] -in more precise terms let us consider the following state preparation procedure. Imagine that in their distant laboratories, Alice and Bob can produce and manipulate any physical system. Apart from that they can also communicate using a classical channel (for instance a phone line), however, they are not allowed to communicate quantumly, meaning that Alice is not allowed to send any quantum particle to Bob and vice versa. These two capabilities, i.e., local operations (LO) and classical communication (CC), are frequently referred to as LOCC. Now, let us suppose that in their local laboratories Alice and Bob can prepare one of K different states |e i ∈ H A and |f i ∈ H B (i = 1, . . . , K), respectively. Let us then assume that in each round of the preparation scheme, Alice generates with probability p k an integer k (k = 1, . . . , K), which she later sends to Bob using the classical channel they share. Upon receiving k, Alice and Bob use their local devices to prepare the states |e k and |f k , respectively. The state that Alice and Bob share after repeating the above procedure many times is of the form which is the aforementioned convex combination of product states. This is also the most general state that can be prepared by means of LOCC provided that initially no other quantum state was shared by Alice and Bob. This gives us the formal definition of separability [22]. Definition 2. A mixed state ̺ AB acting on H AB is called separable if, and only if it admits the decomposition (3). Otherwise, it is called entangled. It then follows from this definition that entangled states cannot be prepared locally by two parties even if they are allowed to communicate over a classical channel. To prepare entangled states the physical systems must be brought together to interact( 1 ). Mathematically, a nonproduct unitary operator (i.e., not of the form U A ⊗ U B ) must necessarily act on the physical system to produce an entangled state from an initial separable one. Let us recall that the number of pure separable states K necessary to decompose any separable state into a convex combination of pure product states according to Eq. (3) is limited by the Carathéodory theorem as K ≤ (nm) 2 (see [23,49]). No better bound is known in general, however, for two-qubit (H AB = 2 ⊗ 2 ) and qubit-qutrit (H AB = 2 ⊗ 3 ) systems it was shown that K ≤ 4 [50] and K ≤ 6 [51], respectively. The question whether a given bipartite state is separable or not turns out to be very complicated (see, e.g., Refs. [23,53]). 
Although the general answer to the separability problem still eludes us, there has been significant progress in recent years, and we will review some such directions in the following paragraphs. 3. Entanglement criteria. -An operational necessary and sufficient criterion for detecting entanglement still does not exist (see, nevertheless, Ref. [59] for a non-operational one). However, over the years the whole variety of sufficient criteria allowing for detection of entanglement has been worked out. Below we review one of them, while for others the reader is referred to Ref. [53]. Note that, even if such an operation necessary and sufficient condition is missing, there are numerical checks of separability: one can test separability of a state using, for instance, semi-definite programming [54,55]. In general -without a restriction on dimensions -the separability problem belongs to the NP-hard class of computational complexity [56]. Partial transposition is an easy-to-apply necessary criterion based on the transposition map first recognized by Choi [57] and then independently formulated in the separability context by Peres [58]. ( 1 )Due to entanglement swapping [52], one must suitably enlarge the notion of preparation of entangled states. So, an entangled state between two particles can be prepared if and only if either the two particles (call them A and B) themselves come together to interact at a time in the past, or two other particles (call them C and D) do the same, with C having interacted beforehand with A and D with B. where {|i } and {|µ } are real bases in Alice and Bob Hilbert spaces, respectively, we have In an analogous way one defines partial transposition with respect to Bob's subsystem, denoted by ̺ TB AB . Although the partial transposition of ̺ AB depends upon the choice of the basis in which ̺ AB is written, its eigenvalues are basis independent. The applicability of the transposition map in the separability problem can be formalized by the following statement [58]. Proof. It follows from Definition 2 that by applying the partial transposition with respect to the first subsystem to a separable state ρ AB , one obtains where the second equality follows from the fact that A † = (A * ) T for all A. From the above one infers that ρ TA AB is a proper (and in particular separable) state, meaning that ρ TA AB ≥ 0. The same reasoning shows that ρ TB AB ≥ 0, which completes the proof. Due to the identity ̺ TB AB = (̺ TA AB ) T , and the fact that global transposition does not change eigenvalues, partial transpositions with respect to the A and B subsystems are equivalent from the point of view of the separability problem. In conclusion, we have a simple criterion, called partial transposition criterion, for detecting entanglement: if the spectrum of one of the partial transpositions of ̺ AB contains at least one negative eigenvalue then ̺ AB is entangled. As an example, let us apply the criterion to pure entangled states. If |ψ AB is entangled, it can be written as (2) with r > 1. Then, the eigenvalues of |ψ AB ψ AB | TA are λ 2 i (i = 1, . . . , r) and ±λ i λ j (i = j i, j = 1, . . . , r). So, an entangled |ψ AB of Schmidt rank r > 1 has partial transposition with r(r − 1)/2 negative eigenvalues violating the criterion stated in Theorem 2. Note that in systems of two qubits or a qubit and a qutrit the partial transposition criterion provides the necessary and sufficient condition for separability [59]. 
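To make the partial transposition criterion concrete, here is a small numerical sketch (an illustration added here, not from the lecture): it partially transposes a mixture of the singlet with white noise and reads off the minimal eigenvalue, which becomes negative, and hence witnesses entanglement, for mixing parameter p > 1/3.

```python
import numpy as np

def partial_transpose_A(rho, m, n):
    """Partial transposition over the first (m-dimensional) subsystem."""
    R = rho.reshape(m, n, m, n)            # indices (i, mu, j, nu)
    return R.transpose(2, 1, 0, 3).reshape(m * n, m * n)

# rho(p) = p |psi-><psi-| + (1 - p) I/4   (singlet mixed with white noise)
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
singlet = np.outer(psi_minus, psi_minus)

for p in (0.2, 0.5, 0.9):
    rho = p * singlet + (1 - p) * np.eye(4) / 4
    min_eig = np.linalg.eigvalsh(partial_transpose_A(rho, 2, 2)).min()
    print(p, round(min_eig, 4))   # negative for p > 1/3  =>  entangled
```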
This is no more true in higher dimensions, due to the existence of entangled states with positive partial transposition [49,60]. 2 . 4. Entanglement measures. -Although the separability criterion discussed above allows one to check whether a given state ρ AB is entangled, it does not tell us (at least not directly) how much entanglement it has. Such a quantification is necessary because entanglement is a resource in quantum information theory. There are several complementary ways to quantify entanglement of bipartite quantum states (see [61,62,63,64,65,66,67,68,69,70,23] and references therein) and in what follows we briefly discuss one of them. Let us now introduce the definition of entanglement measures (for a more detailed axiomatic description, and other properties of entanglement measures, the reader is encouraged to consult, e.g., [23,69,70]). The main ingredient in this definition is the monotonicity under LOCC operations. More precisely, if Λ denotes some LOCC operation, and E is our candidate for the entanglement measure, E has to satisfy i.e., it should not increase under LOCC operations. Another requirement says that E vanishes on separable states. At this point it is worth noticing that from the monotonicity under LOCC operations (7) it already follows that E is constant and minimal on separable states and also that it is invariant under unitary operations (see Ref. [23]). 2 . 5. Von Neumann entropy. -A "good" entanglement measure for a pure state |ψ AB is the von Neumann entropy of the density matrix describing one of its subsystems, say the first one which arises by tracing out Bob's subsystem of |ψ AB , i.e., ̺ A = Tr B |ψ AB ψ AB |. Recalling then that the von Neumann entropy of a density matrix ρ is defined through S(ρ) = −Tr(ρ log ρ), the following quantity was shown to be an entanglement measure [66]. Notice that for the maximally entangled states (1) one has E(|ψ (d) On the other hand, E is an entanglement measure only for pure states. Separable mixed states have classical correlations, and thus the non-zero entropy of the reduced density matrix. In the following we will concentrate on the entanglement properties of the ground states of many-body systems. There the von Neumann entropy of a density matrix reduced to some region R will play a fundamental role. 1. Computational complexity. -Let us start this discussion by considering simulations of quantum systems with classical computers. What can be simulated classically [30]? The systems that can be simulated classically are those to which we can apply efficient numerical methods, such as the quantum Monte Carlo method that works, for instance, very well for bosonic unfrustrated systems. Sometimes we may apply systematic perturbation theory, or even use exact diagonalization for small systems (say, for frustrated antiferromagnets consisting of 30-40 spins 1/2). There is a plethora of variational and related methods available, such as various mean field methods, density functional theory (DFT), dynamical mean field theory (DMFT), and methods employing tensor network states (TNS), such as Matrix-Product States (MPS), Projected-Entangled-Pair States (PEPS), Multi-scale Entanglement Renormalization Ansatz (MERA), etc. What is then computationally hard? Generic examples include fermionic models, frustrated systems, or disordered systems. 
While MPS techniques allow for efficient calculation of the ground states and also excited states in 1D, there are, even in 1D, no efficient algorithms to describe the out-of-equilibrium quantum dynamics. Why do we still have hopes to improve our classical simulation skills in the next future? This is connected with the recent developments of the tensor network states and observation that most of the states of physical interest, such as the ground states of local Hamiltonians, are non generic and fulfill the, so called, area laws. 2. Entanglement of a generic state. -Before we turn to the area laws for physically relevant states let us first consider a generic pure state in the Hilbert space in m ⊗ n (m ≤ n). Such a generic state (normalized) has the form where {|i |j } is the standard basis in m ⊗ n and the complex numbers α ij may be regarded as random variables distributed uniformly on a hypersphere, i.e., distributed according to the probability density with the only constraint being the normalization. As we shall see, such a generic state fulfills on average a "volume" rather than an area law. To this aim we introduce a somewhat more rigorous description, and we prove that on average, the entropy of one of subsystems of bipartite pure states in m ⊗ n (m ≤ n) is almost maximal for sufficiently large n. In other words, typical pure states in m ⊗ n are almost maximally entangled. This "typical behavior" of pure states happens to be completely atypical for ground states of local Hamiltonians with an energy gap between ground and first excited eigenstates. More precisely, one has the following theorem (see, e.g., Refs. [71,72,73,74,75,76,77]). Theorem 3. Let |ψ AB be a bipartite pure state from m ⊗ n (m ≤ n) drawn at random according to the Haar measure on the unitary group and ̺ A = tr B |ψ AB ψ AB | be its subsystem acting on m . Then, Notice that the above result can be estimated very easily by relaxing the normalization constraint in the distribution (10), and replacing it by a product of independent Gaussian distributions, P (α) = i,j (nm/π) exp[−nm|α ij | 2 ], with α ij = 0, and |α ij | 2 = 1/nm. According to the central limit theorem, the latter distribution tends for nm → ∞ to a Gaussian one for m i=1 n j=1 |α ij | 2 centered at 1 of width ≃ 1/ √ nm. One then straightforwardly obtains that tr̺ A = 1, and after a little more tedious calculation that tr̺ 2 A = (n + m)/nm, which agrees asymptotically with the above result for nm ≫ 1. -Area laws Generally speaking, area laws mean that, when we consider a large region R of a large system L in a pure state, some of the physical properties of R such as the von Neumann entropy of the reduced density matrix ρ R representing it will depend only on the boundary ∂R (cf. Fig. 1). 1. Quantum area laws in 1D. -Let us start with the simplest case of one-dimensional lattices, L = {1, . . . , N }. Let R be a subset of L consisting of n contiguous spins starting from the first site, i.e., R = {1, . . . , n} with n < N . In this case the boundary ∂R of the region R contains one spin for open boundary conditions, and two for periodic ones. Therefore, in this case the area law is extremely simple: The case of D = 1 seems to be quite well understood. In general, all local gapped systems (away from criticality) satisfy the above law, and there might be a logarithmic divergence of entanglement entropy when the system is critical. 
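For contrast with the area-law behaviour just described, the typicality statement of Theorem 3 is easy to check numerically. The short sketch below (an added illustration; it draws states from the Haar measure by normalizing complex Gaussian vectors) shows that the average entropy of the smaller subsystem of a random pure state in C^m ⊗ C^n is close to the maximal value log₂ m.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(dim):
    """Haar-random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def subsystem_entropy(psi, m, n):
    """Von Neumann entropy (base 2) of the m-dimensional subsystem."""
    lam = np.linalg.svd(psi.reshape(m, n), compute_uv=False)
    p = lam[lam > 1e-15] ** 2
    return float(-np.sum(p * np.log2(p)))

m, n = 8, 64
samples = [subsystem_entropy(random_pure_state(m * n), m, n) for _ in range(200)]
print(np.mean(samples), np.log2(m))   # average entropy ~2.9 bits vs. the maximum 3 bits
```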
To be more precise, let us recall the theorem by Hastings leading to the first of the above statements, followed by examples of critical systems showing a logarithmic divergence of the entropy with the size of R. Consider the nearest-neighbor interaction Hamiltonian where each H i,i+1 has a nontrivial support only on the sites i and i + 1. We assume also that the operator norm of all the terms in Eq. (13) are upper bounded by some positive constant J, i.e., H i,i+1 ≤ J for all i (i.e., we assume that the interaction strength between ith site and its nearest-neighbor is not greater that some constant). Under these assumptions, Hastings proved the following theorem [78]. Theorem 4. Let L be a one-dimensional lattice with N d-dimensional sites, and let H be a local Hamiltonian (13). Assuming that H has a unique ground state separated from the first excited states by the energy gap ∆E > 0, the entropy of any region R satisfies with c 0 denoting some constant of the order of unity and ξ = min{2v/∆E, ξ C }. Here, v denotes the sound velocity and is of the order of J, while ξ C is a length scale of order unity. Let us remark that both constants appearing in the above theorem come about from the Lieb-Robinson bound [79] (see also Ref. [80] for a recent simple proof of this bound). This theorem tells us that when the one-dimensional system with the local interaction defined by Eq. (13) is away from the criticality (∆E > 0), the entropy of R is bounded by some constant independent of |R|. One can naturally ask if there exist gapped systems with long-range interaction violating (12). This was answered in the affirmative in Ref. [81,82], which gave examples of onedimensional models with long-range interactions, nonzero energy gap, and scaling of entropy diverging logaritmically with n. The second question one can ask is about the behavior of the entropy when ∆E → 0 and the system becomes critical. Numerous analytical and numerical results show that usually one observes a logarithmic divergence of S(̺ R ) with the size of the region R (we refer the reader to recent reviews [37,83], and to the special issue of J. Phys. A devoted to this subject [36]). Concluding, let us mention that there is an extensive literature on the logarithmic scaling of the block entropy using conformal field theory methods (see Ref. [84] for a very good overview of these results). Quite generally, the block entropy at criticality scales as or, more in general for the Rényi entropy( 2 ) where c is called the central charge of the underlying conformal field theory, and a is the cutoff parameter (the lattice constant for lattice systems). Recently, these results were generalized in Ref. [85], where the authors derived the area laws only from the assumption of the exponential decay of correlations, and without any assumption about the gap. 2. Higher-dimensional systems. -The situation is much more complex in higher spatial dimensions (D > 1). The boundary ∂R of the general area law, Eq. (17), is no longer a simple one or two-element set and can have a rather complicated structure. Even if there are no general rules discovered so far, it is rather believed that holds for ground states of local gapped Hamiltonians. This intuition is supported by results showing that for quadratic quasifree fermionic and bosonic lattices the area law (17) holds [37]. Furthermore, for critical fermions the entropy of a cubic region R = {1, . . . 
, n} D is bounded as γ 1 n D−1 log 2 n ≤ S(̺ R ) ≤ γ 2 n D−1 (log 2 n) 2 with γ i (i = 1, 2) denoting some constants [86,87,88]. Notice that the proof of this relies on the fact that the logarithmic negativity( 3 ) upper bounds the von Neumann entropy, i.e., for any |ψ AB , the inequality S(̺ A(B) ) ≤ E N (|ψ AB ) holds. This in turn is a consequence of monotonicity of the Rényi entropy S α with respect to the order α, i.e., S α ≤ S α ′ for α ≥ α ′ . This is one of the numerous instances, where insights from quantum information help to deal with problems in many-body physics. Recently, Masanes [80] showed that in the ground state (and also low-energy eigenstates) the entropy of a region R (even a disjoint one) always scales at most as the size of |∂R| with some correction proportional to (log |R|) D -as long as the Hamiltonian H is of the local form where each H i has nontrivial support only on the nearest-neighbors of the ith site, and, as before, satisfies H i ≤ J for some J > 0. Thus, the behavior of entropy which is considered to be a violation of the area law, can in fact be treated as an area law itself. This is because in this case( 4 ) [|∂R|(log |R|) k ]/|R| → 0 for |R| → ∞ with some k > 0, meaning that this behaviour of entropy is still very different from the typical one that follows from Theorem 3. That is, putting m = d |R| and n = d |L\R| with |L| ≫ |R|, one has that S(̺ R )/|R| is arbitrarily close to log d for large |R|. More precisely, the following theorem was proven in Ref. [80]. Theorem 5. Let R be some arbitrary (even disjoint) region of L. Then, provided that certain "natural" bounds on correlation functions (polynomial decay with distance) and on the density of states (number of eigenstates of the Hamiltonian limited to R with energies smaller than e is ( 4 )It should be noticed that one can have much stronger condition for such scaling of entropy. To see this explicitly, say that R is a cubic region R = {1, . . . , n} D meaning that |∂R| = n D−1 and |R| = n D . Then since limn→∞[(log n)/n ǫ ] = 0 for any (even arbitrarily small) ǫ > 0, one easily checks that S(̺R)/|∂R| 1+ǫ → 0 for |∂R| → ∞. exponentially bounded by |R| γ(e−e0) , where γ is a constant, and e 0 is the lowest energy) hold, the entropy of the reduced density matrix ̺ R of the ground state of H satisfies where C collects the constants D, ξ, γ, J, η, and d. If R is a cubic region, the above statement simplifies, giving S(̺ R ) ≤ C|∂R| log |R| + O(|∂R|) with C being some constant. 2.1. Area laws for mutual information -classical and quantum Gibbs states. So far, we considered area laws only for ground states of local Hamiltonians. In addition, it would be very interesting to ask similar questions for nonzero temperatures. Here, however, one cannot rely on the entropy of a subsystem, as in the case of mixed states it is no longer an entanglement measure. Instead, one can use the quantum mutual information which measures the total amount of correlation in bipartite quantum systems [91]. It is defined as where ̺ AB is some bipartite state and ̺ A(B) stand for its subsystems. It should be noticed that for pure states the mutual information reduces to twice the amount of entanglement of the state. Recently, it was proven that thermal states ̺ β = e −βH /tr[e −βH ] with local Hamiltonians H obey an area law for mutual information. 
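As a toy check of the mutual-information bound quoted above, the sketch below (an added illustration with an arbitrarily chosen two-qubit Hamiltonian, not from the original text) computes I(A : B) = S(ρ_A) + S(ρ_B) − S(ρ_AB) for the thermal state of a single Ising bond and compares it with 2βh, where h is the strength of the boundary term and |∂A| = 1.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy (base 2) of a density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def reduced(rho, keep):
    """Reduced density matrix of one qubit of a two-qubit state."""
    R = rho.reshape(2, 2, 2, 2)
    return R.trace(axis1=1, axis2=3) if keep == 0 else R.trace(axis1=0, axis2=2)

# Thermal state of H = J sigma_z (x) sigma_z at inverse temperature beta
Z = np.diag([1.0, -1.0])
J, beta = 1.0, 1.0
H = J * np.kron(Z, Z)
w, V = np.linalg.eigh(H)
rho = (V * np.exp(-beta * w)) @ V.T
rho /= np.trace(rho)

I_AB = vn_entropy(reduced(rho, 0)) + vn_entropy(reduced(rho, 1)) - vn_entropy(rho)
print(I_AB, "<=", 2 * beta * J)   # ~0.47 <= 2, consistent with the area-law bound
```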
Interestingly, a similar conclusion was drawn for classical lattices, in which we have a classical spin with the configuration space d at each site, and instead of density matrices one deals with probability distributions. In the following we review these two results, starting from the classical case. To quantify correlations in classical systems, we use the classical mutual information, defined as in Eq. (20) with the von Neumann entropy substituted by the Shannon entropy H(X) = − x p(x) log 2 p(x), where p stands for a probability distribution characterizing random variable X. More precisely, let A and B = S \ A denote two subsystems of some classical physical system S. Then, let p(x A ) and p(x B ) be the marginals of the joint probability distribution p(x AB ) describing S (x a denotes the possible configurations of subsystems a = A, B, AB). The correlations between A and B are given by the classical mutual information We are now ready to recall the results of [92]. Let us now show that a similar conclusion can be drawn in the case of quantum thermal states [92], where the Markov property does not hold in general. A and B (L = A ∪ B). Thermal states (T > 0) of local Hamiltonians H obey the following area law Theorem 7. Let L be a lattice consisting of d-dimensional quantum systems divided into parts where H ∂ stands for interaction terms connecting these two regions. Let us notice that the right-hand side of Eq. (23) depends only on the boundary, and therefore it gives a scaling of mutual information similar to the classical case (22). Moreover, for the nearest-neighbor interaction, Eq. (23) simplifies to I(A : B) ≤ 2β h |∂A| with h denoting the largest eigenvalue of all terms of H crossing the boundary. 3. The world according to tensor networks. -Quantum many-body systems are, in general, difficult to describe: specifying an arbitrary state of a system with N two-level subsystems requires 2 N complex numbers. For a classical computer, this presents not only storage problems, but also computational ones, since simple operations like calculating the expectation value of an observable would require an exponential number of operations. However, we know that completely separable states can be described with about N parameters -indeed, they correspond to classical states. Therefore, what makes a quantum state difficult to describe are quantum correlations, or entanglement. We saw already that even if in general the entropy of a subsystem of an arbitrary state is proportional to the volume, there are some special states which obey an entropic area law. Intuitively, and given the close relation between entropy and information, we could expect that states that follow an area law can be described (at least approximately) with much less information than a general state. We also know that such low entanglement states are few, albeit interesting -we only need an efficient and practical way to describe and parametrize them ( 5 ). Consider a general pure state of a system with N d-level particles, When the state has no entanglement, then c i1i2...iN = c iN where all c's are scalars. The locality of the information (the set of coefficients c for each site is independent of the others) is key to the efficiency with which separable states can be represented. How can we keep this locality while adding complexity to the state, possibly in the form of correlations but only to nearest-neighbors? 
As we shall see, we can do this by using a tensor at each site of our lattice, with one index of the tensor for every physical neighbor of the site, and another index for the physical states of the particle. For example, in a one-dimensional chain we would assign a matrix ( 5 )Note, however, that an area law does not imply an efficient classical parametrization (see, e.g., Ref. [146]. for each state of each particle, and the full quantum state would be written as where A [k] i k stands for a matrix of dimensions D k × D k+1 . A useful way of understanding the motivation for this representation is to think of a valence bond picture [93]. Imagine that we replace every particle at the lattice by a pair (or more in higher dimensions) of particles of dimensions D that are in a maximally entangled state with their corresponding partners in a neighboring site (see Figure 2). Then, by applying a map from these virtual particles into the real ones, we obtain a state that is expressed as Eq. (25). One can show that any state |ψ ∈ ( d ) ⊗N can be written in this way with D = max m D m ≤ d N/2 . Furthermore, a matrix product state can always be found such that [94] • In fact, Λ [k] is a matrix whose diagonal components λ k n (n = 1, . . . , D k ) are the non-zero eigenvalues of the reduced density matrix obtained by tracing out the particles from k + 1 to N , i.e., the Schmidt coefficients of a bipartition of the system at site k. An MPS with these properties is said to be in its canonical form [95]. Therefore, Eq. (25) is a representation of all possible states -still cumbersome. It becomes an efficient representation when the virtual bond dimension D is small, in which case it is typically said that the state has an MPS representation. In higher dimensions we talk about PEPS [96]. When entanglement is small (but finite), most of the Schmidt coefficients are either zero or decay rapidly to zero [94]. Then, if |ψ contains little entanglement, we can obtain a very good approximation to it by truncating the matrices A to a rank D much smaller than the maximum allowed by the above theorem, d N/2 . In fact, one can demonstrate the following fact [95]. This Lemma is most powerful in the context of numerical simulations of quantum states: it gives a controllable handle on the precision of the approximation by MPS. In practical terms, for the representation to be efficient the Schmidt coefficients λ need to decay faster than polynomially. However, we can be more precise and give bounds on the error of the approximation in terms of entropies [97]: Lemma 1. For any pure state |ψ , there exists an MPS |ψ D with the bond dimension D such that The question now is when can we find systems with relevant states that can be written efficiently as a MPS; i.e. how broad is the simulability of quantum states by MPS. For example, one case of interest where we could expect the method to fail is near quantum critical points where correlations (and entanglement) are singular and might diverge. However, at least in 1D systems, the following fact remains true [95]. -Nonlocality in many body systems Let us now turn to nonlocality in many-body systems. We start by explaining what the concept of nonlocality means, using the contemporary language of device independent quantum information processing (DIQIP). 
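Before discussing nonlocality further, the following sketch (an added illustration, not part of the lecture) makes the matrix product construction of the preceding subsection explicit: it builds an MPS from a state vector by sequential SVDs with a cap on the bond dimension, and verifies that an N-qubit GHZ state is reproduced exactly with D = 2.

```python
import numpy as np

def to_mps(psi, d, N, Dmax):
    """Split an N-site state vector (local dimension d) into MPS tensors
    A[k] of shape (D_k, d, D_{k+1}) by sequential SVDs, keeping at most
    Dmax singular values at every bond."""
    tensors, M = [], psi.reshape(1, -1)
    for _ in range(N - 1):
        Dl = M.shape[0]
        U, s, Vh = np.linalg.svd(M.reshape(Dl * d, -1), full_matrices=False)
        keep = int(min(Dmax, np.sum(s > 1e-12)))
        tensors.append(U[:, :keep].reshape(Dl, d, keep))
        M = s[:keep, None] * Vh[:keep, :]
    tensors.append(M.reshape(M.shape[0], d, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS tensors back into a full state vector."""
    M = tensors[0]
    for A in tensors[1:]:
        M = np.tensordot(M, A, axes=(M.ndim - 1, 0))
    return M.reshape(-1)

# N-qubit GHZ state: exactly representable with bond dimension D = 2
N = 8
ghz = np.zeros(2 ** N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
mps = to_mps(ghz, d=2, N=N, Dmax=2)
print(abs(from_mps(mps) @ ghz))   # ~1.0: the truncated MPS is exact here
```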
Recent successful hacking attacks on quantum cryptographic devices stimulated a novel approach to quantum information theory in which protocols are defined independently of the inner working of the devices used in the implementation, hence the term DIQIP. 1. Probabilities and correlations -DIQIP approach. -The idea of DIQIP is at best explained with the graphical scheme presented on Fig. 3. We consider here the following scenario, usually referred to as the (n, m, d) scenario. Let us consider n spatially separated parties A 1 , . . . , A n and imagine that each of them possesses a black box with m buttons representing the measurement choices (or observables) and d lights representing the measurement outcomes. Now, in each round of the experiment every party is allowed to press one of the buttons causing one of the lights to shine. The only information accessible in such an experiment is contained in a set of (md) n conditional probabilities P (a 1 , . . . , a n |x 1 , . . . , x n ) of obtaining outputs a 1 , a 2 , . . . , a n , provided observables x 1 , x 2 , . . . , x n were measured. In what follows we enumerate the measurements and outcomes as x i = 1, . . . , m and a i = 0, . . . , d − 1, respectively. The set of all such probability distributions is convex as by mixing any two of them one obtains another probability distribution; in fact, it is a polytope. From the physical point of view (causality, special relativity) the probabilities must fulfil the non-signalling conditions, i.e., the choice of measurement by the k-th party, cannot be signalled to the others. Mathematically it means that for any k = 1, . . . , n, the following condition a k P (a 1 , a 2 , . . . , a k , . . . , a n |x 1 , x 2 , . . . , x k , . . . , x n ) = P (a 1 , a 2 , . . . , a k−1 , a k+1 . . . , a n |x 1 , x 2 , . . . , is fulfilled. In other words, the marginal probability distribution describing correlations seen by the n parties except the kth one is independent of x k . We call correlations satisfying the above constraints nonsignalling correlations. It is easy to see that they also form a polytope. Let us also notice that the above conditions together with normalization clearly reduce the number of independent probabilities. For instance, in the simplest (2, 2, 2) scenario there are eight independent probabilities out of sixteen and they can be chosen as P (0, 0|x 1 , x 2 ), P A (0|x 1 ), and P B (0|x 2 ) with x 1 , x 2 = 1, 2. The local or classical correlations are defined via the concept of a local hidden variable λ. Imagine that the only resource shared by the parties is some classical information λ (called also LHV) distributed among them with probability q λ . The correlations that the parties are able to establish in such case are of the form P (a 1 , . . . , a n |x 1 , . . . , x n ) = λ q λ D(a 1 |x 1 , λ) . . . D(a n |x n , λ), where D(a k |x k , λ) are deterministic probabilities, i.e., for any λ, D(a k |x k , λ) equals one for some outcome, and zero for all others. What is important in this expression is that measurements of different parties are independent, so that the probability is a product of terms corresponding to different parties. Classical correlations form a convex set which is also a polytope, denoted È (cf. Fig. 3). Its extremal points (or vertices) are the above form, i.e., n i=1 D(a i |x i , λ) with fixed λ. The famous theorem of John Bell states that the quantum-mechanical probabilities, which also form a convex set Q, may stick out of the classical polytope [3]. 
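It is instructive to see the classical polytope at work in the simplest (2, 2, 2) scenario: its vertices are deterministic assignments of ±1 outcomes to each setting, so the local bound of the CHSH expression can be found by brute force. The sketch below (an added illustration) does exactly that and returns 2, to be compared with the quantum value 2√2 computed earlier.

```python
import itertools

# Vertices of the classical polytope in the (2,2,2) scenario: each party
# deterministically assigns an outcome +/-1 to each of its two settings.
best = -4
for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4):
    chsh = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, chsh)
print(best)   # 2: the classical (local) bound of the CHSH inequality
```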
The quantum probabilities are given by the trace formula for the set of local measurements P (a 1 , . . . , a n |x 1 , . . . , x n ) = tr(ρM x1 a1 ⊗ · · · ⊗ M xn an ), where ρ is some n-partite state and M xi ai denote the measurement operators, meaning that M xi ai ≥ 0 for any a i , x i and i, and for any choice of the measurement x i and party i. As we do not impose any constraint on the local dimension, we can always choose the measurements to be projective, i.e., the measurement operators additionally satisfy M xi The concept of the Bell inequalities is explained in Fig. 4. Any hyperplane in the space of probabilities that separates the classical polytope from the rest is a Bell inequality: everything, which is above the upper horizontal dashed line is obviously nonlocal. But, the most useful are the tight Bell inequalities, which correspond to the facets of the classical polytope, i.e. its walls of maximal dimensions (lower horizontal dashed line). To be more illustrative, let us now present a particular example of a Bell inequality. To this end, let us consider the simplest (2, 2, 2) scenario consisting of two parties, each measuring a pair of two-outcome observables. The only nontrivial tight Bell inequality in this scenario -the CHSH Bell inequality [8] -can be written in the "probability" form as where ⊕ stands for addition modulo two. Let us notice that in the case when all measurements have only two outcomes, i.e., d = 2, correlations can be equivalently expressed via expectation values The advantage about the "correlator" picture is that the non-signalling conditions are already incorporated in it. On the other hand, the correlators must satisfy a set of inequalities corresponding to the non-negativity conditions of probabilities p(a 1 , . . . , a n |x 1 , . . . , x n ) ≥ 0. To illustrate the "correlator representation", let us consider again the simplest (2, 2, 2) scenario. The eight independent conditional probabilities fully describing correlations in this scenario are equivalent to eight expectation values M x2 with x 1 , x 2 = 1, 2. Also, the CHSH Bell inequality (33) can be rewritten in its "more standard" form as From now on we concentrate on the (n, 2, 2) scenario (two two-outcome measurements). The complexity of characterizing the corresponding classical polytope is enormous. It is fairly easy to see that the number of its vertices (extremal points) is equal to 2 to the number of all possible choice of parties and observables, i.e., 2 2n , so it grows exponentially with n. The dimension of the space of probabilities is the number of choices of measurements by each party, which is 2+1, since each party has at their disposal 2 observables or it may not measure anything, to the power n. One then has to subtract 1 from this result, since if all parties do not measure, the result is trivial. Clearly, the resulting dimension 3 n − 1 grows exponentially with the number of parties. It is then not surprising at all that the problem of characterization of the classical polytope is, depending on formulation, NP-complete or NP-hard. Already for few parties finding all Bell inequalities is an impossible task. 2. Detecting non-locality in many body systems with two-body correlators. -Clearly, if we want to find Bell inequalities for many-body systems, we need some simplifications. This was the idea behind the recent papers [98,99], which focus on Bell inequalities involving one-and two-body correlators. 
In what follows we will refer to such Bell inequalities as two-body Bell inequalities. Notice in passing that several criteria allowing for entanglement detection in manybody systems from such quantities are already known (see, e.g., Refs. [100,101,102,103,104]). Restricting the study to low-order correlations reduces the dimension of the space and, thus, may simplify the problem of finding non-trivial Bell inequalities. However, it is not as simple as it sounds. First, one wants these Bell inequalities to be valid for any number of parties which, due to the fact that the complexity of the set of classical correlations grows exponentially with n, usually appears to be a very difficult task. Second, one wants such Bell inequalities to be useful, that is, to be capable of revealing nonlocality in some physically interesting states. However, intuitively, most of the information about correlations in the system are contained in high-order correlators, i.e., those involving many observers, and so Bell inequalities based on them are expected to be better at detecting nonlocality. All this makes the task of finding Bell inequalities from two-body correlators extremely difficult. It should be stressed in passing that as proven in Refs. [105,106,107] all-partite correlations are not necessary to detect multipartite nonlocality. Recently, a positive answer to the above question has been given in Ref. [98] by proposing classes of Bell inequalities constructed from one-and two-body expectation values, and, more importantly, showing examples of physically relevant many-body quantum states (i.e. ground states or low energy states of physically relevant Hamiltonians) violating these inequalities. Notice that finding and classifying such states is an interesting task in itself, especially in a view of the fact that many genuinely entangled quantum many-body states have two-body reduced density matrices (or in other words covariance matrices) that correspond to two-body reduced density matrices of some separable state. This is the case of the so-called graph states, as demonstrated in Ref. [108]; obviously one cannot detect entanglement of such states with two-body correlators, not even mentioning nonlocality. Let us now briefly describe the way the two-body Bell inequalities were found in [98]. First, by neglecting correlators of order larger than two one projects the polytope onto much smaller one È 2 spanned by two-body and one-body correlations functions. In this way we have achieved a severe reduction of the dimension of the polytope: where for convenience we wrote them down using expectation values instead of probabilities; recall that in the case of all observables having two outcomes both representations are equivalent. For "interesting" n the inequalities (36) still contain too many coefficients; in fact, the dimension of the corresponding polytope grows quadratically with n (for, say, n = 100, dim È 2 = 20000, i.e., it is still too large). To further simplify the problem one can demand that the Bell inequalities under study obey some symmetries. In particular, in Refs. [98] and [99] Bell inequalities obeying permutational and translational invariance have been considered. In what follows we discuss in more detail the results of Ref. [98]. 3. Permutational Invariance. -Let us now restrict our attention to two-body Bell inequalities that are invariant under a permutation of any two parties. 
It is fairly easy to see that their general form reads with k, l = 0, 1 are the symmetrized one-and two-body expectation values, respectively. Geometrically, we have mapped the two-body polytope È 2 to a simpler one È S 2 whose elements are five-tuples (S 0 , S 1 , S 00 , S 01 , S 11 ) consisting of the symmetrized expectation values. Obviously, by doing this projection È 2 −→ È S 2 one is able to limit the dimension of the local polytope to 5 and, more importantly, this number is independent of the number of parties. Still, the number of vertices of the projected polytope is 2(n 2 + 1), i.e., it scales quadratically with n, so the characterization of all permutationally invariant two-body Bell inequalities is not trivial at all. 5 . 4. Symmetric two-body Bell inequalities: example. -Here we consider an exemplary Bell inequality belonging to the class (39): where we have substituted x = y = −σ = 1 and µ = 0. Now, to see whether this Bell inequality is violated by some quantum states, let us assume that all parties measure the same pair of observables M . 4 shows two plots of the ratio Q v /β C of the maximal quantum violation Q v of this inequality with the above settings and the classical bound β C = 2n. In the left plot we show the dependence on n. The relative violation remains significant (of order 1) for n of order of 10 4 , and seems to grow or to saturate at large n. In the right plot we show maximal violation as function of the angle θ that defines the second observable. Again, the maximal violation remains significant for a large set of angles close to the optimal one. 5 . 5. Many-body symmetric states. -The next question to answer is what are the state that violated the two-body Bell inequalities, and which states can be detected by measuring these inequalities? To this aim we considered the Lipkin-Meshkov-Glick Hamiltonian [109], which is commonly used in nuclear physics, and more recently in trapped atoms and trapped ions physics Its ground state is the famous Dicke state [110]: for n even it is a symetric combination of all states with exaclty n/2 zeros (spins down), |D n/2 n = S(|{0, n/2}, {1, n/2} ) But , for n odd, we have a doubly degenerate ground state |D [n/2] n or |D [n/2]+1 n , for which integer part of n/2 or integer part of n/2 plus one spin are down. It was shown in Ref. [98] that the nonlocality of these states can be revealed with the aid of the following Bell inequalities with α n = n(n − 1)(⌈n/2⌉ − n/2) = n β n , γ n = n(n − 1)/2, δ n = n/2, ε n = −1, and the classical bound is found to be β n C = (1/2)n(n − 1) ⌈(n + 2)/2⌉. Again the observables are taken as M The results are presented in Fig. 5 . 5. We see that (42) is violated by the ground state of the LMG Hamiltonian, and that the violation is not so large, but significant; this time it actually decreases slowly with n. At this point it is worth mentioning that the detection of nonlocality in this case can be realized by measuring the total spin components and their fluctuations: these quantities can be measured with a great precision in current experiments with cold atoms and ions, for instance using the spin polarization spectroscopy. Indeed, the considered Bell inequality requires measurements of S 0 = 2 S z , S 1 = 2 m · S , S 00 = 4 S 2 z − n, S 11 = 4 (m · S) 2 − n, and where m is a unit vector determining the spin direction in the second measurement. 
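For a small system, the symmetrized correlators entering these inequalities can be evaluated directly from the collective spin, as in the measurement scheme just described. The sketch below (an added illustration with measurements fixed along z only; the full inequality also requires the tilted direction m, and the function names and the choice n = 6 are purely illustrative) builds the half-filled Dicke state and evaluates S₀ = 2⟨S_z⟩ and S₀₀ = 4⟨S_z²⟩ − n.

```python
import numpy as np
from itertools import combinations
from math import comb

def dicke_state(n, k):
    """Symmetric n-qubit Dicke state: equal superposition of all
    computational basis states with exactly k spins flipped."""
    psi = np.zeros(2 ** n)
    for flipped in combinations(range(n), k):
        psi[sum(1 << i for i in flipped)] = 1.0
    return psi / np.sqrt(comb(n, k))

def total_sz_diagonal(n):
    """Diagonal of S_z = (1/2) sum_i sigma_z^(i) in the computational basis."""
    return np.array([0.5 * (n - 2 * bin(b).count("1")) for b in range(2 ** n)])

n = 6
psi = dicke_state(n, n // 2)
sz = total_sz_diagonal(n)
S0 = 2 * np.sum(sz * psi ** 2)            # sum_i <sigma_z^(i)>
S00 = 4 * np.sum(sz ** 2 * psi ** 2) - n  # sum_{i != j} <sigma_z^(i) sigma_z^(j)>
print(S0, S00)   # 0.0 and -n: strong anticorrelations in the half-filled Dicke state
```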
It is worth mentioning that in the second paper [99], the more complex case of translational invariance was also considered -to this aim the parties were enumerated as if they were located in a 1D chain with periodic boundary conditions (a 1D ring). Recall that in this case the general form of a Bell inequality is where S i (i = 0, 1) are defined as before and T 's are translationally invariant two-body correlators given by with k = 1, . . . , ⌊n/2⌋ for i = j and k = 1, . . . , n − 1 for i < j. The number of coefficients is now of order of 3n, so the problem becomes intractable for n large. We have, nevertheless found and classified all tight Bell inequalities for 3 and 4 parties, and provided some examples of five-party Bell inequalities that involve correlators between next neighbours only. -Conclusions Let us conclude by listing several experimental setups in which nonlocality in many-body systems may be tested using the two-body Bell inequalities: • Ultracold trapped atoms. Dicke states have been recently created in spinor Bose-Einstein condensates (BEC) of Rubidium F = 1 atoms, via the parametric process of spin changing collisions, in which two m F = 0 atoms collide to produce a m F = ±1 pair [111]. These recent experiments demonstrate the production of many thousands of neutral atoms entangled in their spin degrees of freedom. Dicke-like states can be produced in this way, with at least 28-particle genuine multi-party entanglement and a generalized squeezing parameter of 11.4(5) dB. Similarly, Rubidium atoms of pseudo-spin 1/2 in BEC may be employed to generate scalable squeezed states (for the early theory proposal see [112], for experiments see [113]). Very recently non-squeezed (non-Gaussian) entangled states of many atoms [114] were generated in this way. The number of atoms used in these experiments are of order of thousands and larger. So far, these numbers and experimental errors and imperfections are too large, while the corresponding fidelities too small to detect many body non-locality. In principle, however, it is possible to perform these experiments with mesoscopic atom numbers (say ≤ 100), controlling the atom number to a single atom level (see Ref. [115] for the resonant fluorescence detection of Rb 87 atoms in a MOT, and Refs. [116,117] for optically trapped spin 1/2 fermions). • Ultracold trapped ions. Ultracold trapped ions with internal pseudo-spin "talk" to each other via phonon excitations, and in some condition behave as spin chains with long range interaction. This was originally proposed in Ref. [118], using inhomogeneous magnetic fields, and in Ref. [119], employing appropriately designed laser-ion interactions. The pioneering experiments were reported in Refs. [120,121]. While in the early theory studies [119,122,123,124] spin interactions decaying with the third power of the distance were considered, it was experimentally demonstrated that management of phonon dispersion allows to achieve powers between 0.1 and 3 in the 2D arrays of traps [125]. Recent state of art represents the work on experimental realization of a quantum integer-spin chain with controllable interactions [126]. We have studied trapped ion systems in relation to long range SU (3) spin chains and quantum chaos [127], and trapped-ion quantum simulation of tunable-range Heisenberg chains [128]. 
In the latter paper we demonstrated that significant violation of the Bell inequalities, discussed in this lecture, is possible even for the ground states of the models with large, but finite interaction range. The experimental scheme is presented in Fig. 7. • Ultracold atoms in nanostructures. Yet another possibility concerns systems of ultracold atoms trapped in the vicinity of tapered fibers and optical crystals (band gap materials). The experimental progress in coupling of ultracold atomic gases to nanophotonic waveguides, initiated by Refs. [129,130,131], is very rapid (cf. [132]). Early theoretical studies concerned remarkable optical properties of these systems (cf. [133,134,135,136]). Ideas and proposals concerning realization of long range spin models were developed more recently, and mainly in Refs. see [137,138,139]. • Cold and ultracold atomic ensembles. Last, but not least, one should consider cold and ultracold ensembles (for an excellent review see [140]), in which, by employing quantum Faraday effect, one can reach unprecendented degrees of squeezing of the total atomic spin (cf. [141,142]), and unprecendented degrees of precision of quantum magnetometry (cf. [143]). Note that in many concrete realisations the many body Bell inequalities derived in this paper require precise measurements of the total spin components, and their quantum fluctuations. Quantum Faraday effect, or in other words spin polarization spectroscopy, seems to be a perfect method to achieve this goal; note that in principle it allows also to reach spatial resolution, and/or to measure spatial Fourier components of the total spin [144,145].
The Threshold Effect of FDI on Regional Innovation Capability— From the Perspective of Intellectual Property Protection As one of the important channels of technology spillover, foreign direct investment (FDI) has a significant impact on regional innovation capability, which is restricted by the intensity of intellectual property protection. In order to explore the relationship between these three factors, this paper constructs a nonlinear threshold regression model based on China’s provincial panel data from 2009 to 2018, and empirically analyzes the threshold effect of FDI on regional innovation capability with the intensity of intellectual property protection as the threshold variable. The results show that the impact of FDI on regional innovation capability has a significant single threshold effect of intellectual property protection intensity. Only when the intensity of intellectual property protection remains near the threshold value, can FDI promote regional innovation capability to the greatest extent. Introduction Throughout the history of human development, innovation is an inexhaustible driving force for the development of a country and a nation. In the decades since the reform and opening up, China has successfully achieved leapfrog development, the secret of which lies in technological innovation and industrial upgrading. In the context of economic globalization, innovation has increasingly become a key factor for a country to enhance its international competitiveness and prestige. Under the condition of open economy, modern economic growth theory holds the view that innovation comes from two ways, one is independent innovation which is brought by R & D, the other is introduction innovation and imitation innovation caused by FDI. Especially for developing countries, FDI has become an important channel for them to introduce advanced technology and enhance their innovation capability. One of the purposes for China to vigorously promote FDI is to rapidly improve China's innovation capability and realize technological progress through international technology spillover. However, in this process, the effect of FDI on China's innovation capability is affected by many factors, including the intellectual property protection system. In recent years, intellectual property protection has become an important topic, especially since China signed TRIPs, it has become more and more important to establish a scientific intellectual property protection system in China. Most of the existing studies are carried out from two aspects, one is the impact of FDI on regional innovation capability, the other is the impact of intellectual property protection intensity on the promotion of regional innovation capability by FDI. Scholars have different opinions on whether FDI can promote regional innovation capability. Tang et al. (2018) analyzed the impact of FDI on innovation by taking the service industry as the research object, and the results showed that FDI significantly promoted the innovation capability of regional enterprises through learning effect and competition effect [1] . There are also some scholars believe that the technology spillover effect of FDI is influenced by some factors. For example, the level of economic development, the level of financial development, the degree of foreign trade openness, the regional economic structure. However, some scholars believe that FDI can not improve the innovation capability of the host country. 
Haddad and Harrison (1993) believed that FDI could not promote regional technological progress, but inhibited its innovation [2] . Another kind of research focuses on the impact of intellectual property protection intensity on FDI technology spillover effect. Globerman et al. (2000) showed that FDI under the constraint of intellectual property protection system would significantly promote the occurrence of technology spillover effect [3] . However, Maskus (2000) believed that the relationship between the intensity of intellectual property protection and technological innovation was not a simple linear relationship, but an inverted "U" relationship [4] . Lerner (1995) found that in the condition of weak intellectual property protection system, strengthening the intensity was conducive to the regional innovation. But if the intellectual property protection intensity was too strong, the opposite result would appear [5] . In conclusion, most existing studies show that FDI has an impact on regional innovation capability, and this impact is related to the intensity of intellectual property protection. However, the existing researches have the following disadvantages: First, most of these conclusions are biased towards strict or loose intensity of intellectual property protection. Second, the existing studies pay less attention to the mechanism of FDI's impact on regional innovation capability. Third, some scholars point out the conclusions from the reality of foreign countries, but these conclusions are not based on the situation in China, so they can not explain the problems in China. These disadvantages provide inspiration for this paper. 2 Theoretical analysis and research hypothesis 2.1. The mechanism of FDI influencing regional innovation FDI is a common channel of international technology diffusion, which mainly improves regional technological innovation capability through competition effect, learning effect and linkage effect. First, the competition effect. The entry of transnational corporations often leads to the intensification of competition. On the one hand, in order to compete in the market and maintain market share, local enterprises must reduce production costs, improve efficiency and product quality through technological innovation. On the other hand, the competition between transnational corporations and local enterprises also includes the competition of resources. Transnational corporations will attract regional high-quality talents with more favorable treatment, which hinders the progress of independent innovation of local enterprises. Therefore, from the perspective of competition effect, it is still difficult to conclude the role that FDI plays in the innovation of local enterprises. Second, the learning effect. The subsidiaries of transnational corporations produce in the host country. In order to achieve the target task, they will employ local labors. If the employees leave after mastering the technology, they will bring positive learning effect to the local enterprises. The subsidiaries of transnational corporations also create technology spillover effect for the local enterprises, forming the FDI spillover effect with staff turnover as the carrier. In addition, the subsidiaries of transnational corporations can also realize learning spillover during business operations, or learn and imitate advanced technologies of transnational corporations through product exchanges, scientific research cooperation and other ways. 
In this process, a path of "learning -imitation -innovation" has been formed. Third, the linkage effect. When transnational corporations enter the host country, they will establish connections with the upstream and downstream industrial chains of local enterprises, and outsource some intermediate products to other companies. The subsidiaries purchase intermediate products and services from local suppliers resulting in backward linkage, and supply products and services for downstream enterprises resulting in forward linkage, which forms the vertical technology spillover. The backward linkage plays a positive role by improving product standards, while the forward linkage promotes the improvement of enterprise innovation capability by improving the quality of intermediate products and joint development. Based on these, this paper puts forward the following hypothesis 1: Hypothesis 1: FDI can promote regional innovation capability through competition effect, learning effect and linkage effect. The impact of the intensity of intellectual property protection on the innovation effect of FDI The intellectual property protection system is an authoritative legal protection system for scientific and technological innovation achievements. With the deepening of international opening pattern and the complex international trade environment, the trade frictions caused by intellectual property are increasing day by day, which urgently requires the improvement of the intellectual property protection system. On the one hand, strengthening the intensity of intellectual property protection can effectively protect the innovation achievements, reduce the free rider problem, solve the problem of technological externality, get rid of the concerns of investment enterprises, and facilitate the technical communication and business cooperation between transnational corporations and local enterprises. On the other hand, strengthening the intensity of intellectual property protection will increase the cost of imitation innovation of local enterprises, inhibit the enthusiasm of regional technological innovation, and form the monopoly position of foreign investors. However, if the intellectual property protection is too loose, the technology of foreign-invested enterprises will be easily imitated. They will lose their competitive advantage, which will weaken their investment willingness. Based on these, this paper puts forward the following hypothesis 2: Hypothesis 2: The impact of FDI on regional innovation capability is limited by the threshold effect of intellectual property protection intensity. Model and variable selection From the previous theoretical analysis, we can see that there may be a non-linear relationship between FDI and regional innovation capability under the influence of intellectual property protection intensity and other factors. Therefore, this paper uses the non-linear panel regression model proposed by Hansen (1999), namely "threshold regression model", and takes the threshold value as an unknown variable into the model to construct a piecewise function, so as to test the role of intellectual property protection intensity in the impact of FDI on regional innovation capability. 
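To make the threshold-estimation step concrete, the following minimal sketch (added by the editor for illustration, using synthetic data and illustrative variable names that mirror those defined below, not the paper's dataset or code) shows how a Hansen-type single-threshold fixed-effects model can be estimated by grid-searching the threshold value that minimizes the sum of squared residuals.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic provincial panel: 30 provinces x 10 years (placeholder for the real data)
df = pd.DataFrame({
    "province": np.repeat(np.arange(30), 10),
    "inno": rng.normal(size=300),            # explained variable
    "fdi": rng.normal(size=300),             # core explanatory variable
    "ipr": rng.uniform(0, 0.03, size=300),   # threshold variable
    "fin": rng.normal(size=300),             # one control, for illustration
})

def within(s):
    """Within (fixed-effects) transformation: demean by province."""
    return s - s.mean()

def ssr(gamma):
    X = pd.DataFrame({
        "fdi_low": df["fdi"] * (df["ipr"] <= gamma),    # regime 1: ipr <= gamma
        "fdi_high": df["fdi"] * (df["ipr"] > gamma),    # regime 2: ipr > gamma
        "fin": df["fin"],
    })
    Xd = X.groupby(df["province"]).transform(within).to_numpy()
    yd = df.groupby("province")["inno"].transform(within).to_numpy()
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return float(((yd - Xd @ beta) ** 2).sum())

grid = np.quantile(df["ipr"], np.linspace(0.05, 0.95, 50))   # candidate thresholds
gamma_hat = min(grid, key=ssr)
print("estimated threshold:", round(float(gamma_hat), 4))
```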
Referring to Hansen's (1999) "threshold regression model" and the existing literature, this paper introduces additional control variables and constructs a single-threshold panel model with the intensity of intellectual property protection as the threshold variable, regional innovation capability as the explained variable, and FDI as the core explanatory variable: inno_it = μ_i + β_1 fdi_it · I(ipr_it ≤ γ) + β_2 fdi_it · I(ipr_it > γ) + θ′X_it + ε_it, where i denotes the province, t denotes the year, μ_i is the province fixed effect, X_it is the vector of control variables, I(·) is the indicator function, which is 1 if the condition in parentheses is true and 0 otherwise, γ is the threshold value to be estimated, and ε_it is the random disturbance term. Considering the huge differences in development across Chinese regions, this paper uses provincial panel data for the empirical analysis. Because the Outline of the National Intellectual Property Strategy was promulgated in 2008 and the intensity of intellectual property protection in the various regions was subsequently greatly strengthened, the sample period is 2009 to 2018. 1. Explained variable. Regional innovation capability (inno). Although the ultimate result of innovation is to increase product sales revenue, patents more directly reflect the level of technological innovation activity and scientific research output in a region, and the data are more accurate. Therefore, this paper uses the number of granted patents in each province to measure regional innovation capability. 2. Explanatory variable. Foreign direct investment (fdi). This paper measures FDI by the actual amount of foreign direct investment utilized by each province of China over the years. 3. Threshold variable. The intensity of intellectual property protection (ipr). Following the method of Dai (2014), the intensity of intellectual property protection is calculated as the ratio of the number of intellectual property law-enforcement cases in region i in year t to the number of granted patents. 4. Control variables. (1) Economic development level. (2) Financial development level (fin): measured by the ratio of the loan balance of financial institutions in each province to regional GDP. (3) Degree of foreign trade openness (open): measured by the total imports and exports of each province over the years. (4) Government influence (gov): measured by the proportion of each province's fiscal expenditure on science and technology in its total fiscal expenditure over the years. (5) Human capital (hum): measured by average years of schooling, hum = 6·pri + 9·jun + 12·sen + 16·col, where pri, jun, sen, and col denote the proportions of the population aged 6 and above with primary school, junior high school, senior high school, and college-or-above education in each province, respectively. (6) Technological level (tec): measured by the turnover of technology markets in each province over the years. Empirical analysis. Stata is used to test whether the model has a threshold effect. For convenience of operation, the triple-threshold model is estimated first; the significance test results of the threshold effect are shown in Figure 1. With P = 0.7067, the result is clearly not significant under the triple-threshold specification, so the double-threshold test is then carried out.
The result is shown in the figure below, P=0.59, the result is not significant, indicating that there is no double threshold effect, and then the single threshold effect test is carried out. P=0.03, the single threshold effect is significant, so this paper selects the single threshold model for analysis. The single threshold effect is significant, which indicates that the impact of FDI on regional innovation capability will be different because of the differences in the intensity of intellectual property protection. The threshold value is estimated and the confidence interval is calculated. The threshold value is 0.0112, under the 95% confidence level, and the confidence interval of the threshold value is (0.0105, 0.0113). Fig. 4. Threshold value and confidence interval Draw the trend graph of single threshold function. It can be seen that the single threshold value 0.0112 is the lowest point in LR graph. At 95% confidence level, the values of threshold confidence interval are below the dotted line, which indicates that the threshold estimation is true and effective. Then the parameters of the single threshold model are estimated. In Figure 6, we focus on the regression coefficient of lnfdi. 0 indicates the impact of FDI on regional innovation capability when the threshold variable is lower than the threshold value. At this time, the coefficient of lnfdi is -0.02, and it does not pass the significance test, which indicates that FDI can not significantly promote regional innovation capability. 1 indicates the impact of FDI on regional innovation capability when the threshold variable is higher than the threshold value. The coefficient of lnfdi is -0.03 at this time, and it does not pass the significance test, which indicates that FDI can not significantly promote regional innovation capability. To sum up, it shows that only when the intensity of intellectual property protection remains at an appropriate level, that is, when the intensity of intellectual property protection is at the threshold level, FDI can significantly promote regional innovation capability. The reason for this phenomenon is that when the intensity of intellectual property protection is too low, the advanced technology of transnational corporations is easy to be imitated by local enterprises. So they will reduce the investment in the region, and reduce the possibility of technology spillover, which will hinder the improvement of regional innovation capability. When the level of intellectual property protection is too high, the advanced technology of transnational corporations is difficult to spread out, and the imitation cost of local enterprises is greatly increased, which will reduce their enthusiasm of innovation. Then observe the parameter estimation results of the controlled variables. The economic development level and the financial development level have a significant positive impact on the improvement of regional innovation capability. Although the degree of foreign trade openness, the government influence and the technological level can also have a positive impact on regional innovation, this effect is not significant. But the human capital has a negative impact on regional innovation capability, which is contrary to the expected results. Finally, this paper uses the number of patent applications instead of the number of granted patents to recalculate the intensity of intellectual property protection, and uses it as a threshold variable for robustness test. 
It is consistent with the previous results, indicating that the regression model is robust. Conclusions The results show that only when the intensity of intellectual property protection remains near the single threshold value, the promotion effect of FDI on regional innovation capability will be significant. At the same time, the economic development level and the financial development level have a significant positive impact on the improvement of regional innovation capability. These conclusions are of great significance for China to make more efficient use of FDI to promote regional innovation. Based on the results of this study, the following policy suggestions are put forward: First, give full play to the technology spillover effect of foreign direct investment. Second, formulate a reasonable intellectual property protection system. Third, build the perfect sci-tech finance system. Fourth, insist on the principle of selfreliance and self-improvement in science and technology, and create new advantages for development in an allround way.
2021-08-27T16:34:36.363Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "11d9679591a64c948180008e6bf5dbdf4ce5fe1a", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/51/e3sconf_eilcd2021_03023.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "368a01d3bf5065303092546f936fcb1bd72ab03c", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
119948502
pes2o/s2orc
v3-fos-license
Configuration with four chirped volume Bragg gratings in parallel combination for large dispersion applications Abstract. A type of configuration with four identical chirped volume Bragg gratings (CVBGs) in parallel combination was proposed to improve dispersion, and the maximum group delay of nanoseconds can be reached at an oblique incidence. The diffraction properties of CVBGs at oblique incidence were simulated with the transfer matrix method. The performance of this configuration on dispersion, including group delay dispersion and cubic dispersion, was well studied with detailed numerical simulation. With optimization of the CVBG structural parameters, including the grating thickness, spatial chirp rate, and refractive index modulation, the configuration can be applied in large dispersion applications. Introduction The dispersive element plays an important role in a chirp pulse amplification (CPA) system, 1 which has been widely used in high-peak-power ultrashort lasers. In petawatt highenergy laser facilities, diffraction grating pairs have been employed as the dispersion elements to stretch the laser pulse, avoid laser-induced damage to the laser media, and increase the light extraction efficiency. However, due to the low laser-induced threshold of diffraction gratings, the aperture of the gratings is required to be as large as 1m × 1m to reduce the power density. 2 In addition, the distance between the gratings has to be several meters to obtain enough group delay dispersion (GDD). Therefore, a dispersion element with a high laser-induced damage threshold and large dispersion is drawing much attention in high-power and large-dispersion applications. In the last decade, chirped volume Bragg gratings (CVBGs) fabricated in photo-thermo-refractive (PTR) glass 3 have attracted growing interest in ultrashort laser systems due to the high laser-induced damage threshold 4 and large dispersion. 5 In 2005, Liao et al. 3 used a 10-mm long CVBG to stretch a 1-ps pulse to a 100-ps duration, and compressed the stretched pulse back with the same CVBG. Thereafter, a variety of studies on CPA systems with CVBGs in PTR glass were reported. [5][6][7][8] Due to the diffraction properties, 9 the stretching or compression ratio of CVBGs is related to the grating thickness. For the chirped pulses with a certain bandwidth, a thick CVBG will result in a large stretching or compression ratio. To compress a pulse from nanoseconds to picoseconds, a CVBG with a thickness of about 100 mm is required. 10 However, the thickness of common CVBGs is only about several centimeters. 11 An alternative solution is to use multiple CVBGs in combination to achieve a larger dispersion. In the scheme of a series combination, CVBGs have different structural parameters, such as the grating thickness, grating period, and refractive index modulation. Each CVBG diffracts beams with wavelengths in different and adjacent bandwidths. A large bandwidth, therefore, can be obtained. 12 However, a disadvantage of the series combination is that the distance between two adjacent CVBGs may result in temporal waveform clipping of the compressed pulses. What is more, angular alignment is also required to avoid divergence of the beams diffracted by different CVBGs. To solve these problems, obtain higher dispersion and optimize the pulse stretching and compression, a configuration of four identical CVBGs in parallel combination is proposed in this paper. 
In the parallel combination configuration, each CVBG can diffract all wavelength components of the incident beam, and the stretching or compression ratio will increase as the total length of the CVBGs, that is, the number of CVBGs, increases. With oblique incidence and good alignment, the output beam has no spatial chirp and propagates in the incident direction. With the transfer matrix method 13 and the ray tracing method, 14 the dependence of the GDD and the cubic dispersion (CD) on the CVBG parameters of thickness, spatial chirp rate (SCR), and refractive index modulation is investigated. Diffraction Properties of Chirped Volume Bragg Gratings at Oblique Incidence The diffraction efficiency and spectral phase of the CVBG working at oblique incidence were simulated with the classical transfer matrix method. The transfer matrix method has been well studied and was successfully used to study the diffraction properties of CVBGs working at normal incidence in our earlier work. 13 The CVBG in the simulation here has a grating thickness of 3 cm and an SCR of 0.67 nm/cm, and the central grating period of the CVBG is 351 nm. The amplitude of the refractive index modulation was chosen to obtain 100% diffraction efficiency. In the numerical simulation with the transfer matrix method, the CVBG was divided into multiple uniform slabs in which the light beam propagation can be easily characterized with a transfer matrix. Each grating period of the CVBG was divided equally into 30 slabs. Working at normal incidence, the diffraction efficiency and spectral phase of the CVBG are plotted in Fig. 1 with black lines. The central Bragg wavelength is 1053 nm and the bandwidth is 6 nm. For incident beams with wavelengths outside the bandwidth, the diffraction efficiency decreases sharply. The spectral phase outside the CVBG bandwidth is constant or linear with respect to the angular frequency, and the dispersion area of the CVBG is limited to within the bandwidth. Inside the dispersion area, the spectral phase can be well fitted with a quadratic polynomial, which may be used in dispersion compensation applications. With the incident angle increasing from zero, as indicated in Fig. 1, the bandwidth and dispersion area move toward shorter wavelengths since the Bragg-matching condition is met at a shorter wavelength. According to the Bragg-matching condition, 15 the central Bragg wavelength of the CVBG varies with the incident angle as λ_B(θ) = 2Λ_0 √(n² − sin²θ), (1) where Λ_0 is the central grating period of the CVBG, n (about 1.49) is the refractive index of the PTR glass, and θ is the incident angle in air. With the SCR of the CVBG, the bandwidth can be obtained as Δλ ≈ 2nΔΛ, with ΔΛ = SCR × d, (2) where ΔΛ is the grating period variation along the CVBG thickness direction and d is the grating thickness of the CVBG. Configuration of Chirped Volume Bragg Gratings in Parallel Combination For a single CVBG with thickness d, the maximum time delay (MTD) over the bandwidth of the CVBG can be obtained with the approximate expression MTD ≈ 2nd/c, (3) where c is the light speed in vacuum. The thickness of a single CVBG is limited by grating fabrication. CVBGs with thicknesses of less than 40 mm are now available, which provides an MTD of about 400 ps. In larger dispersion applications, such as nanosecond CPA systems, an MTD of several nanoseconds may be required to stretch the pulse, increase the light extraction efficiency, and obtain a high output energy. To increase the dispersion, CVBGs can be combined to avoid a further increase in the grating thickness.
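The relations above can be checked numerically; the short sketch below (an editor-added illustration using the example parameters quoted in the text, not code from the paper) evaluates the central Bragg wavelength, bandwidth, and MTD for the 3-cm example grating.

```python
import numpy as np

n_glass = 1.49           # refractive index of PTR glass (approximate value from the text)
Lambda0 = 351e-9         # central grating period (m)
d = 0.03                 # grating thickness (m)
scr = 0.67e-9 / 1e-2     # spatial chirp rate: 0.67 nm per cm of thickness
c = 3.0e8                # speed of light in vacuum (m/s)

def bragg_wavelength(theta_deg):
    """Central Bragg wavelength for incidence angle theta measured in air, Eq. (1)."""
    s = np.sin(np.radians(theta_deg))
    return 2.0 * Lambda0 * np.sqrt(n_glass**2 - s**2)

delta_Lambda = scr * d                    # grating-period variation over the thickness
bandwidth = 2.0 * n_glass * delta_Lambda  # Eq. (2): ~6 nm for this grating
mtd = 2.0 * n_glass * d / c               # Eq. (3): maximum time delay of a single CVBG

print(bragg_wavelength(0.0) * 1e9)        # ~1046 nm with n = 1.49 (about 1053 nm in the text)
print(bandwidth * 1e9, mtd * 1e12)        # ~6 nm, ~300 ps
```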
CVBGs in series combination, as shown in Fig. 2(a), can improve the dispersion and obtain the required MTD. However, the distance between the combined CVBGs may result in slipping in the temporal waveform of compressed pulses. Differing from the CVBGs in a series combination, CVBGs in parallel combination can improve the dispersion in a low-cost and compact way, as shown in Fig. 2(b). The CVBGs in parallel combination have the same structural parameters, such as the grating thickness, the grating period, and the amplitude of the refractive index modulation. Working at oblique incidence, no other optics is required to avoid overlapping between the incident beam and the diffracted beam from the same grating. However, the configuration will result in spatial chirp, and the aperture of the diffracted beam from the CVBGs will expand horizontally. As shown in Fig. 2(b), for a well-collimated beam with no spatial chirp incident onto CVBG-I, the diffracted beam expands due to the oblique incidence. After being stretched or compressed by CVBG-II, the horizontal diameter of the beam doubles. As the number of CVBGs in parallel combination increases, the horizontal aperture of the output beam expands monotonically. CVBGs with a larger aperture will be required to diffract beams efficiently. What is more, this spatial chirp must be compensated to make the beam operation easy and efficient. In order to get rid of the spatial chirp resulting from the oblique incidence, a configuration with four CVBGs in parallel combination was proposed here, as shown in Fig. 2(c). The last two CVBGs (CVBG-III and CVBG-IV) will totally compensate the spatial chirp that results from the first two CVBGs (CVBG-I and CVBG-II). The configuration with four CVBGs in parallel combination will not change the beam propagation direction when the CVBGs are aligned well enough, and it can be used as a plug and play device. The maximum horizontal beam aperture occurs in the space between CVBG-II and CVBG-III. The aperture of CVBG-II and CVBG-III shall be larger than CVBG-I and CVBG-IV to efficiently diffract the expanded beam. The configuration with four CVBGs in parallel combination has two kinds of MTD: (a) the MTD from the four CVBGs and (b) the MTD induced by the structure. The MTD from the CVBGs originates from different diffraction depths in CVBGs, while the latter is due to different optical paths in air. As indicated in Fig. 2(c), the sign of the structure-induced MTD is always opposite to that of the MTD from the four CVBGs. Along with the increase in the incident angle, the structure-induced MTD increases and the dispersion of the configuration with four CVBGs in parallel combination decreases. As shown in Fig. 3, the ratio between the structure-induced MTD and the MTD from the four CVBGs increases as the incident angle increases, and the ratio reaches the maximum value of 23% when the incident angle is 45 deg. In order to avoid a decrease in the dispersion, the incident angle shall be chosen to be below 6 deg, and the ratio is less than 1%. Dispersion of Four Chirped Volume Bragg Gratings in Parallel Combination As indicated in the Bragg-matching condition, beams with different wavelengths will be diffracted at different depths in the CVBG, which is related to the dispersion of a CVBG. The dispersion of the configuration with four CVBGs in parallel combination, however, consists of the dispersion in gratings and the dispersion out of the gratings, as shown in Fig. 2(c). 
With the transfer matrix method, the dispersion resulting from the four CVBGs can be obtained. The dispersion out of the gratings can be easily calculated with the ray tracing method. In order to characterize the dispersion of the configuration, GDD and CD were employed here. GDD and CD are the coefficients of the quadratic and cubic terms in a Taylor expansion of the spectral phase, ϕ, which is expanded about the pulse center angular frequency, ω 0 , corresponding to λ 0 To obtain the GDD and CD of the configuration, the spectral phase was first calculated by a wave propagating between input and output ends of the configuration based on the transfer matrix method and the ray tracing method. In addition to the incident angle, the effects of SCR, grating thickness, and refractive index modulation on GDD and CD were studied to understand the dispersion of the configuration. Figure 4 shows the variation of GDD and CD with the grating thickness and the amplitude of the refractive index modulation. The SCR is 0.67 nm∕cm. At small amplitudes of the refractive index modulation, such as 100 ppm, the GDD and CD of the configuration are almost constant as the grating thickness increases, as shown in Fig. 4. As the amplitude of the refractive index modulation increases, the GDD and CD show closer relationships with the grating thickness. When the grating thickness is large enough, however, the GDD and CD converge to constants, 120 ps 2 and 0 ps 3 , respectively. Compared with the GDD, the CD shows a closer relationship with the amplitude of the refractive index modulation. Decreasing the refractive index modulation will apparently decrease the CD, which is consistent with the results obtained in the previous studies. 16 As indicated in Ref. 9, the product of the grating thickness and the amplitude of refractive index modulation should be large enough to obtain a high diffraction efficiency for the CVBGs. Thus, increasing the grating thickness will be even better than enlarging the amplitude of the refractive index modulation if one is considering the CD suppression. With the SCR increasing, the GDD and CD of the configuration show a monotonous decrease, as shown in Fig. 5. The grating thickness of a CVBG is 3 cm. Comparing Figs. 5(a) and 5(b) show that GDD is almost unaffected by the incident angle or the refractive index modulation. Thus, the SCR of the four CVBGs can be obtained directly with the required GDD in a certain application. As indicated in Fig. 5(a), the CD also shows no relationship with the incident angle. The incident angle affects nothing in the optimization of the GDD and CD. The CD decreases sharply as the SCR increases to 1 nm∕cm, and then it converges to a limit of 0 ps 3 . When the SCR is larger than 1 nm∕cm, the CDs of the configuration at different refractive index modulations are of the same value. It seems that due to the small amplitude of the refractive index modulation (≤ 1000 ppm), increasing the refractive index modulation will not strengthen the Fabry-Perot (FP) resonance effect in a CVBG or result in a noticeable CD. When the SCR is less than 1 nm∕cm, the noticeable CD at a large refractive index modulation may be caused by linear-chirp distortion. The bandwidth of CVBGs may deviate from Eq. (2) in this case. It is clear that much additional work is required before a complete understanding of the CD can be obtained. With the results obtained from Figs. 
4 and 5, a general understanding of the dispersion of the configuration can be reached, which will be beneficial for the configuration optimization in large dispersion applications. The GDD shows a close relationship to the SCR, and does not change along with the grating thickness, the refractive index modulation, or the incident angle. Thus, the SCR of CVBGs can be determined first in a certain application. To reduce the CD, a large grating thickness will be helpful. Increasing the grating thickness will also be beneficial for a high diffraction efficiency. In addition, the pulse spatial chirp resulting from the oblique incidence can be compensated completely when the four CVBGs are well aligned. Since the maximum spatial chirp occurs in the space between CVBG-II and CVBG-III, the aperture of the two CVBGs should be large enough to avoid beam clipping in space. In order to characterize the maximum spatial chirp inside the configuration, a frequency gradient 17 is used where Δf is the bandwidth of the pulse beam and Δx is the horizontal aperture increment of the pulse beam after CVBG-II and before CVBG-III. The approximation expression in the right-hand side of Eq. (6) was obtained when assuming that the chirp of CVBGs is linear and the higher-order (≥ 3) dispersion is ignored. When the spatial chirp grows, the FG decreases and a large CVBG aperture is required. As shown in Fig. 6(a), the FG decreases sharply as the incident angle initially increases, and when the incident angle is more than 5 deg, the rate of change decreases. Figure 6(b) shows that the frequency gradient increases linearly as the SCR increases. Since the incident angle is free to design in the optimization of the GDD and CD, decreasing the incident angle is more practical in order to obtain a large FG. Conclusions Prior work has documented the effectiveness of CVBGs as dispersive elements in CPA systems to stretch femtosecond pulses to picosecond or compress picosecond pulses to femtosecond. However, the thickness of the CVBGs used in these studies has been limited due to the fabrication system, and stretching pulses to nanosecond duration or compressing nanosecond pulses to picosecond with a single CVBG may not be practical. In this study, a configuration with four identical CVBGs in parallel combination was proposed to solve this problem. The CVBGs work at oblique incidence and no other optics is required to avoid overlapping between the incident beam and output beam. With the transfer matrix method and the ray tracing method, the GDD and CD of this configuration were studied to benefit the dispersion design with an optimization of CVBG structural parameters, including grating thickness, SCR, central grating period, and refractive of index modulation. These investigations broaden the applications of CVBGs, and nanosecond CPA systems are available with the configuration as a dispersive element. In addition, the configuration proposed here will not result in spatial chirp, and the propagation direction of the output beam will be in the incident direction when the four CVBGs are well aligned.
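As an editor-added illustration of how GDD and CD are read off from a computed spectral phase via the Taylor expansion used in this study, the sketch below fits a cubic polynomial to a synthetic phase curve (a stand-in for the transfer-matrix/ray-tracing output; the CD value is arbitrary) around the centre frequency.

```python
import numpy as np

# Work in "per picosecond" angular-frequency units so the fitted coefficients come out
# directly in ps^2 and ps^3 and the polynomial fit stays well conditioned.
gdd_true = 120.0      # ps^2 (the converged GDD value quoted in the text)
cd_true = 0.5         # ps^3 (arbitrary illustrative value)
x = np.linspace(-2.0, 2.0, 4001)                      # omega - omega_0, in rad/ps
phi = 0.5 * gdd_true * x**2 + cd_true / 6.0 * x**3    # synthetic spectral phase (rad)

# Fit phi = c0 + c1*x + (GDD/2)*x^2 + (CD/6)*x^3 about the centre frequency
c3, c2, c1, c0 = np.polyfit(x, phi, deg=3)
print("GDD =", 2.0 * c2, "ps^2")   # ~120
print("CD  =", 6.0 * c3, "ps^3")   # ~0.5
```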
2019-01-11T04:59:06.111Z
2015-05-01T00:00:00.000
{ "year": 2015, "sha1": "49bd219c16db00e9cacc0ef0e12963435c222e0a", "oa_license": "CCBY", "oa_url": "https://www.spiedigitallibrary.org/journals/Optical-Engineering/volume-54/issue-5/056105/Configuration-with-four-chirped-volume-Bragg-gratings-in-parallel-combination/10.1117/1.OE.54.5.056105.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7060fbaaab229e3f6d215919ee83c3030d59c26a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Engineering" ] }
221761218
pes2o/s2orc
v3-fos-license
Deep Collective Learning: Learning Optimal Inputs and Weights Jointly in Deep Neural Networks It is well observed that in deep learning and computer vision literature, visual data are always represented in a manually designed coding scheme (eg., RGB images are represented as integers ranging from 0 to 255 for each channel) when they are input to an end-to-end deep neural network (DNN) for any learning task. We boldly question whether the manually designed inputs are good for DNN training for different tasks and study whether the input to a DNN can be optimally learned end-to-end together with learning the weights of the DNN. In this paper, we propose the paradigm of {\em deep collective learning} which aims to learn the weights of DNNs and the inputs to DNNs simultaneously for given tasks. We note that collective learning has been implicitly but widely used in natural language processing while it has almost never been studied in computer vision. Consequently, we propose the lookup vision networks (Lookup-VNets) as a solution to deep collective learning in computer vision. This is achieved by associating each color in each channel with a vector in lookup tables. As learning inputs in computer vision has almost never been studied in the existing literature, we explore several aspects of this question through varieties of experiments on image classification tasks. Experimental results on four benchmark datasets, i.e., CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet (ILSVRC2012) have shown several surprising characteristics of Lookup-VNets and have demonstrated the advantages and promise of Lookup-VNets and deep collective learning. I. INTRODUCTION T HE advent of large datasets and computing resources has made deep neural networks (DNNs) as the most popular technology for varieties of applications [1], [2], [3], [4] in computer vision. One of the most used data types in deep vision models [5], [6], [7], [8], [9], [10] is images. It is observed that image pixels are represented by integers as they are almost always coded in discrete color spaces. For example, in the RGB space, the pixel values are 8-bit integers (0 to 255). The integers representing image pixels are first standardized (i.e., a manually designed linear function) and then used as the inputs to DNNs for any learning task. Thus, the data source of the inputs to DNNs is integers which are involved in the gradient calculation during the training process. The manually designed inputs have a strong assumption that the color changes in images cause linear value changes in the inputs. In lights of this, we boldly question whether the Xiang manually designed inputs are good for DNN training for different tasks although a DNN with a proper size theoretically can approximate any function [11], [12]. We study whether the inputs to DNNs can be learned automatically in computer vision. We take the images in the RGB color space as the examples in this paper, but the idea can be easily extended to the images (or videos) in other discrete color spaces. In standard DNNs for computer vision such as VGG [13] and ResNet [5], only the weights are learned with the RGB inputs during the training process. Based on the idea of learning, advanced efforts go beyond learning weights and propose to learn the activation function [14], [15], the pooling function [16], and the optimal regularizer [17] by parameterizing them instead of using manually designed functions. 
However, to the best of our knowledge, learning the inputs to DNNs in computer vision has never been explored in the existing literature. In the paper, we propose deep collective learning which aims to learn weights in DNNs and inputs to DNNs jointly instead of learning weights alone. Deep collective learning has been implicitly but successfully used in natural language processing (NLP) where a character or a word is associated with a learned vector [18], [19] as the input to a DNN [20], but it has not been studied in computer vision. In light of this, we propose lookup vision networks (Lookup-VNets) as a solution to deep collective learning in computer vision. As shown in Figure 1, a Lookup-VNet comprises a DNN and three lookup tables corresponding to the three GRB channels. The lookup tables are used to parameterize the inputs by associating each color in each channel with a vector. The pixel colors in images are used as indices to look up the three tables. The results are fed into the DNN as the inputs to generate the outputs. The lookup tables are not designed manually but learned jointly with the weights of the DNN. We propose two kinds of lookup tables for Lookup-VNets according to whether the pixel color space is compressed, i.e., full lookup tables and compressed lookup tables. Moreover, we introduce three kinds of table learning strategies, i.e., singletask and single-network learning, cross-network learning, and cross-task learning. Lookup-VNets possess several inherent advantages over the standard DNNs. First, Lookup-VNets enable DNNs to learn the optimal inputs end-to-end for given tasks. Second, from the perspective of image coding, Lookup-VNets can be used to learn the optimal image coding scheme automatically for a given criterion. For example, for the goal of image storage compression with the criterion of accuracy, experimental results show that the pixel color space can be compressed 4096 (16 3 ) times (from 256 × 256 × 256 colors to 16 × 16 × 16 colors) without accuracy dropping on CIFAR-10, which indicates that the pixel bits can be reduced from 24 (8×3) bits to 12 (4×3) bits under this setting. On the other hand, due to the vacancy of the existing literature on deep collective learning in computer vision, we explore various aspects of this question with Lookup-VNets, such as vector dimensions in lookup tables, pixel color space compression, and table learning strategies, through varieties of experiments on four benchmark datasets, i.e., CIFAR-10 [21], CIFAR-100 [21], Tiny ImageNet 1 , and Im-ageNet (ILSVRC2012) [22]. The experimental results show that Lookup-VNets are able to match the performances of the corresponding standard DNNs on CIFAR-10, CIFAR-100, and Tiny ImageNet while achieving better performances on the large-scale and challenging dataset ImageNet than those of the corresponding standard DNNs, which indicates the superiority of Lookup-VNets on large-scale and challenging datasets and the promise of deep collective learning in computer vision. We also observe several surprising characteristics of Lookup-VNets: (1) the vector dimensions in lookup tables have no influence on the test performance (generalization ability) of Lookup-VNets; (2) the commonly used color space can be compressed up to 4096 times without accuracy dropping on CIFAR-10, and 3375 times on CIFAR-100 and Tiny ImageNet. 
The main contributions of our work can be summarized as follows: • We have studied a new question on whether the inputs to deep vision networks can be optimally learned end-to-end and have proposed a new paradigm of deep collective learning which aims to learn weights in DNNs and inputs to DNNs simultaneously for given tasks. II. RELATED WORK In this part, we first review the literature on deep collective learning in NLP. Then we review the work on learning components of DNNs in computer vision. A. Deep Collective Learning in NLP Deep collective learning has been implicitly but successfully used in the area of NLP. These efforts in NLP mainly associate each character [23], subword unit [24], [25], or word [18], [26] with a high-dimensional vector in lookup tables. These vectors are learned for a task, which means the vectors that the lookup tables assign to the characters or words are not designed manually but discovered automatically in the training process of a neural network on a particular task. Learning lookup tables (embeddings) has been well studied in NLP, with a large number of well-known approaches including but not limited to [18], [26], [24], [25], [23], [27], [28]. These learned vectors in lookup tables are used as inputs to neural networks (e.g., LSTM [29]) [30], [31], [32] and are optimized in the training process of a neural network for a given task, which embodies the idea of deep collective learning. However, deep collective learning has never been studied in computer vision. We propose Lookup-VNets as a solution to this problem in computer vision, which aim to automatically learn representations for pixel colors to replace the fixed integer representations in the RGB space. B. Learning Components of DNNs From another perspective, Lookup-VNets can be considered as learning a component (i.e., the input) of a DNN. Thus, they are also related to the efforts on learning the components of DNNs. He et al. [33] and Agostinelli et al. [14] propose to learn the activation function end-to-end by parameterizing it. Lin et al. [34], Zhu et al. [35], and Sun et al. [36] propose to learn the pooling strategies based on the attention mechanism instead of using manually designed ones such as average pooling and max pooling. Streeter et al. [17] propose to learn the optimal regularizer instead of using manually fine-tuned ones. However, to the best of our knowledge, learning the input to a DNN has never been explored in the existing literature. III. OUR FRAMEWORK To illustrate the connection between standard DNNs and Lookup-VNets, we first review standard DNNs, which only learn the weights with RGB inputs during the training process without computing gradients with respect to the inputs. Then we present Lookup-VNets, which learn the weights and inputs jointly by associating each color with a vector. Specifically, we first introduce the full lookup tables and the compressed lookup tables in Lookup-VNets; then we present different learning strategies for lookup tables; finally we provide the additional space and computation costs. We restate that the images in the RGB color space are used as the examples in this paper, but the idea can be easily extended to images (or videos) in other discrete color spaces. A. Standard Deep Neural Networks Given the training data (X, Y), where X are the images and Y are the targets, the outputs of a DNN f with weights W are f(X, W), and the loss is written as ℓ(W) = L(f(X, W), Y), (1) where L(·) is any loss function, such as the mean square error or cross entropy.
In the training process, the loss function is minimized by a gradient-descent-based optimizer such as SGD or Adam [37]. W are updated iteratively based on the gradients on a mini-batch of training samples, and the update in step t can be simply expressed as W_{t+1} = W_t − λ ∇_W L(f(x_bat^t, W_t), y_bat^t), (2) where W_t are the values of W in step t, λ is the learning rate, and (x_bat^t, y_bat^t) are a mini-batch of training data sampled in step t. As seen from (2), only the weights are learned during the training process for a standard DNN. Lookup-VNets take a further step to update the inputs and weights simultaneously. B. Lookup-VNets Different from standard DNNs, Lookup-VNets learn the weights and inputs jointly instead of learning weights alone. As shown in Figure 1, a Lookup-VNet consists of a DNN and three lookup tables. The lookup tables associate each color in each channel with a vector. According to whether the color space is compressed, we introduce two kinds of lookup tables for Lookup-VNets, i.e., full lookup tables and compressed lookup tables. We also develop three different strategies for learning lookup tables, i.e., single-network and single-task learning, cross-network learning, and cross-task learning. 1) Full Lookup Tables: The images in the RGB space have three channels, i.e., the red (R) channel, the green (G) channel, and the blue (B) channel. Each channel has 256 colors represented by 8-bit integers (0 to 255), so the full RGB color space is 256 × 256 × 256. The full lookup tables keep the color space size constant. As shown in Figure 2, there are three full lookup tables corresponding to the three RGB channels, and the 256 colors in each channel are associated with 256 distinct vectors. Thus, the color space is still 256 × 256 × 256. The vector dimension in lookup tables is a hyperparameter, and how it influences the performances is explored in Section IV. 2) Compressed Lookup Tables: Full lookup tables map different colors to different vectors so that the color space is still 256×256×256. We question the necessity of the large color space and propose compressed lookup tables. The compressed lookup tables compress the color space with a compressing rate (CMP-Rate) c. As shown in Figure 3, every c colors in each channel are mapped to a number in the compressed lookup tables, so that there are 256/c colors in each channel. Therefore, the whole RGB color space is compressed about c³ times, i.e., from 256×256×256 to (256/c)×(256/c)×(256/c). It is worth noting that in the compressed tables, every c colors in a channel are mapped to a number, not a vector, for the goal of compression. We study how the CMP-Rate c affects the performances of Lookup-VNets in Section IV. An obvious advantage of compressed lookup tables is that they can be used to save image storage space, as each pixel is represented with fewer bits. 3) Single-Task and Single-Network Learning: In this part, we show how the lookup tables are learned in the single-task, single-network scenario. For an RGB image x with size m×n×3, where m, n, and 3 are the height, the width, and the channel number, respectively, we use the pixel colors in each channel as indices to look up each lookup table and obtain x′: x′ = lookup(x, T), (3) where lookup(x, T) denotes the result obtained by using the pixels in image x as indices to look up the tables T. The size of x′ is determined by the vector dimension in the tables. Suppose that the vector dimension is u (u is set to 1 in compressed lookup tables); then the size of x′ is m×n×3u.
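A minimal sketch of such a lookup layer (an editor-added illustration of the idea, not the authors' released code) can be written with one learnable embedding table per channel; the pixel integers index the tables, and the resulting m×n×3u tensor is fed to the downstream network in place of the raw RGB values.

```python
import torch
import torch.nn as nn

class LookupTables(nn.Module):
    """Three learnable lookup tables, one per RGB channel, trained jointly with the DNN."""
    def __init__(self, dim=1, cmp_rate=1):
        super().__init__()
        self.cmp_rate = cmp_rate
        n_entries = (256 + cmp_rate - 1) // cmp_rate     # compressed tables shrink the colour space
        self.tables = nn.ModuleList([nn.Embedding(n_entries, dim) for _ in range(3)])

    def forward(self, x):                                # x: uint8 image batch, shape (B, 3, H, W)
        idx = x.long() // self.cmp_rate                  # every cmp_rate colours share one entry
        feats = [self.tables[c](idx[:, c]) for c in range(3)]   # each: (B, H, W, dim)
        out = torch.cat(feats, dim=-1)                   # (B, H, W, 3*dim)
        return out.permute(0, 3, 1, 2)                   # (B, 3*dim, H, W) for a conv backbone

# Usage: the tables' parameters go into the same optimizer as the network weights,
# so inputs and weights are updated jointly during training.
lookup = LookupTables(dim=4)
first_conv = nn.Conv2d(12, 64, kernel_size=3, padding=1)    # 3*dim input channels
img = torch.randint(0, 256, (2, 3, 32, 32), dtype=torch.uint8)
print(first_conv(lookup(img)).shape)
```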
The reason is that each color in each channel of the RGB space is denoted by an integer, but each color in the lookup tables is represented by a vector with u values. x′ is used as the input to the DNN, so that the loss of the Lookup-VNet is ℓ(W, T) = L(f(X′, W), Y), (4) where X′ are the results of lookup(X, T). Note that in (4), X′ contain the parameters from the lookup tables T. The weights W and the lookup tables T are learned simultaneously during the whole training process through a gradient-descent-based optimizer: Θ_{t+1} = Θ_t − λ ∇_Θ L(f(x′_bat^t, W_t), y_bat^t), (5) where Θ = [W, T], i.e., the list of all parameters in W and T; Θ_t are the values of Θ in step t; x′_bat^t are the results of lookup(x_bat^t, T); and λ is the learning rate. 4) Cross-Network Learning: Besides learning lookup tables on a task with a single DNN, lookup tables can also be learned across two or more networks for a task. Suppose that there are two DNNs f and g with shared lookup tables T for a task with the training data (X, Y). We alternately optimize the loss functions of f and g on the task with a gradient-descent-based optimizer, which can be simply written as Θ_f^{t+1} = Θ_f^t − λ_f ∇_{Θ_f} L(f(x′_bat^t, W_f^t), y_bat^t), (6) Θ_g^{t+1} = Θ_g^t − λ_g ∇_{Θ_g} L(g(x′_bat^t, W_g^t), y_bat^t), (7) where Θ_f = [W_f, T] and Θ_g = [W_g, T]; W_f and W_g are the weights of f and g, respectively; Θ_f^t are the values of Θ_f in step t; and λ_f and λ_g are the learning rates for training f and g. By alternately executing (6) and (7), we learn the lookup tables across the two networks f and g. 5) Cross-Task Learning: To further explore learning inputs to DNNs, we introduce learning lookup tables across tasks. Intuitively, the lookup tables learned across two or more tasks are more robust than those learned on one task. Suppose that f and g are two DNNs with shared tables T for two tasks p and q with the training data (X_p, Y_p) and (X_q, Y_q), respectively. We alternately optimize the loss functions of f on task p and g on task q with gradient descent, which is written as Θ_f^{t+1} = Θ_f^t − λ_f ∇_{Θ_f} L(f(x′_{p,bat}^t, W_f^t), y_{p,bat}^t), (8) Θ_g^{t+1} = Θ_g^t − λ_g ∇_{Θ_g} L(g(x′_{q,bat}^t, W_g^t), y_{q,bat}^t), (9) where (x_{p,bat}^t, y_{p,bat}^t) and (x_{q,bat}^t, y_{q,bat}^t) are mini-batches sampled from tasks p and q in step t. By alternately executing (8) and (9), the lookup tables are learned across the two different tasks p and q. C. Additional Costs of Lookup-VNets Compared with Standard Deep Neural Networks In this part, we provide the additional space and computation costs of a Lookup-VNet compared with those of the corresponding standard DNN. 1) Additional Space Cost: When a standard DNN is converted to the corresponding Lookup-VNet with lookup tables of 1-dimension vectors, the whole network architecture remains the same. The only additional parameters are from the three lookup tables with 768 (256×3) parameters. When the vector dimension is greater than 1, only the first layer of the standard DNN needs to be changed. Without loss of generality, we assume that the first layer of a standard DNN is a convolutional layer with kernel size k × k and kernel number j. Then the parameter number in the first layer is k×k×3×j, where 3 is the channel number of the input image in the RGB space. For the corresponding Lookup-VNet, suppose that the vector dimension of the lookup tables is u; then the space cost for the three lookup tables is 256×3×u. As each color in each channel is mapped into a vector with dimension u, the input channel number is changed from 3 to 3u and the parameter number in the first layer is changed to k×k×3u×j. Therefore, the total additional parameter number is 256×3×u + k×k×3(u − 1)×j. The experiments suggest that the vector dimension has almost no influence on the performances of Lookup-VNets. Thus, the additional cost is almost ignorable as we can always take small vector dimensions.
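The additional-parameter expression above is easy to sanity-check numerically; the short editor-added sketch below evaluates 256×3×u + k×k×3(u − 1)×j for a first layer with 3×3 kernels and 64 filters, as in VGG-16's first convolutional layer.

```python
def extra_params(u, k=3, j=64):
    """Additional parameters of a Lookup-VNet over the standard DNN: three lookup
    tables plus the widened first convolution (kernel k x k, j output filters)."""
    return 256 * 3 * u + k * k * 3 * (u - 1) * j

print(extra_params(1))   # 768  -- matches the VGG-16 example in the text
print(extra_params(4))   # 8256 -- still tiny next to VGG-16's ~138.4 million weights
```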
For example, the total parameter number of the standard VGG-16 [13] is 138.4 million while the additional parameter number brought by the corresponding Lookup-VNet with vector dimension 1 is only 768 which is ignorable compared with 138.4 million. 2) Additional Computation Cost: The additional computation cost in the forward propagation is related to the input image size. Suppose that the input image size in the standard DNN is m×n×3 where m, n, and 3 are the height, the width, and the channel number, respectively, and assume that the computation cost for looking up tables equals to the number of query indices. Then the computation cost for looking up tables is m×n×3. Suppose both the vertical and horizontal strides in the first convolutional layer are s and the padding strategy is adopted. Then the additional computation cost in the first layer of the Lookup-VNet is m s × n s ×j ×(2k 2 ×3u+1)− m s × n s ×j ×(2k 2 ×3+1) floats. The total additional cost is m × n × 3 + m s × n s × j × (2k 2 × 3u + 1) − m s × n s × j × (2k 2 × 3 + 1) floats. Note that when the vector dimension u is set to 1, the only additional cost is m × n × 3 for looking up tables. IV. EXPERIMENTS In the section, we report a series of experiments conducted on image classification tasks to explore deep collective learning in computer vision with Lookup-VNets. The code will be released. Through these experiments, we intend to answer the following questions: CIFAR-10 is an image classification dataset with 10 classes, containing 50,000 training images and 10,000 test images with image size 32 × 32 in the RGB space. We follow the standard data augmentation on CIFAR datasets. During training time, we pad 4 pixels on each side of an image and randomly flip it horizontally. Then the image is randomly cropped to 32 × 32 size. During test time, we only evaluate the single view of an original 32 × 32 image without padding or cropping. CIFAR-100 comprises similar images to those in CIFAR-10, but has 100 classes. We adopt the same data augmentation strategy as that in CIFAR-10. Tiny ImageNet, i.e., a subeset of ImageNet, is an image classification dataset with 200 classes, containing 100,000 training images and 10,000 test images with size 64 × 64 in the RGB space. At training time, we pad 8 pixels on each side of an image and randomly flip it horizontally, then the image is randomly cropped to 64 × 64 size. At test time, we only evaluate the original image. ImageNet is a large-scale image classification dataset with 1000 classes, containing 1.28 million training images and 50,000 validation images with different sizes in the RGB space. On ImageNet, to reduce the CPU burden, we adopt a simpler data augmentation strategy than that in the models 2 http://tiny-imagenet.herokuapp.com/ pretrained by Facebook 3 . Specifically, we use a simple scale and aspect ratio augmentation strategy from [38]. Test images are resized so that the shorter side is set to 256, and then are cropped to size 224 × 224. Note that in Lookup-VNets, data preprocessing is not needed as the inputs are learned end-to-end . However, in standard DNNs, data preprocessing is necessary as their inputs are images which are represented by 8-bit integers ranging from 0 to 255. Therefore, to ensure the performances of standard DNNs (the baselines), we use data preprocessing for them. On CIFAR and Tiny-ImageNet datasets, we use the widely used data proprocessing strategy: each image is preprocessed by subtracting its mean and dividing it by its standard deviation. 
On ImageNet, each image is preprocessed by subtracting the mean of the whole training set and dividing by its standard deviation. In every case below, the experiments are repeated three times and we report the average test accuracy, as the variance is quite small.

On CIFAR-10 and CIFAR-100, we use the training hyperparameters from the original studies to train ResNet-20 and WRN-40-4. VGG-16 is trained for 250 epochs with mini-batch size 128 and SGD with momentum 0.9; the weight decay is set to 5e-4; the initial learning rate is set to 0.1 and divided by 2 after every 20 epochs. On Tiny ImageNet, the weight decay for ResNet-20, VGG-16, and WRN-40-4 is set to 1e-4, 5e-4, and 5e-4, respectively. The three models are trained for 120 epochs with mini-batch size 128 and SGD with momentum 0.9; the initial learning rate is 0.05 and is divided by 5 after every 30 epochs. On the large-scale ImageNet dataset, ResNet-18 and ResNet-34 are trained for 90 epochs with SGD with momentum 0.9; the mini-batch size is set to 128, and the learning rate is set to 0.05 and divided by 10 after every 30 epochs for both networks.

C. Performances of Lookup-VNets with Full Lookup Tables of Different Vector Dimensions

Lookup-VNets parameterize the inputs by associating each color with a vector in the lookup tables. To investigate the influence of the vector dimension on the performance of Lookup-VNets, extensive experiments with various vector dimensions are conducted on the four datasets. Specifically, we evaluate Lookup-VNets with different vector dimensions on CIFAR-10, CIFAR-100, and Tiny ImageNet, while on ImageNet we only test vector dimension 1 because of the large image size. Full lookup tables are initialized uniformly between -1 and 1. Table I and Table II summarize the performance of the standard DNNs and the corresponding Lookup-VNets with different vector dimensions on CIFAR (i.e., CIFAR-10 and CIFAR-100) and Tiny ImageNet, respectively. Surprisingly, Lookup-VNets with different vector dimensions achieve almost the same performance for a given network architecture. This indicates that the vector dimension has almost no influence on the performance (i.e., generalization ability) of Lookup-VNets, whereas the network architecture matters. Another striking observation is that these Lookup-VNets produce almost the same results as the corresponding standard DNNs with RGB inputs on the three datasets. We attribute this to the small amount of training data in these datasets, because a different phenomenon is observed on the large-scale and challenging ImageNet dataset, as shown in Table III. As seen from Table III, 1-dimension Lookup-VNets show a consistent performance improvement on ImageNet with ResNet-18 and ResNet-34, which indicates that Lookup-VNets have advantages over standard DNNs on large-scale and challenging datasets such as ImageNet. A possible reason is that when the size and complexity of the dataset are scaled up, the integers of the RGB space are no longer appropriate inputs for DNN training, whereas Lookup-VNets make the inputs more flexible and are able to learn the optimal inputs automatically. It is worth noting that the number of additional parameters introduced by 1-dimension Lookup-VNets is only 768, which is negligible compared with the 11.7 million parameters of ResNet-18 and the 21.8 million parameters of ResNet-34.
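As a quick check of these parameter counts, a few lines of arithmetic reproduce the numbers quoted above from the space-cost formula 256×3×u + k×k×3(u−1)×j. The 7×7/64 and 3×3/64 first-layer shapes used below are the standard ResNet-18 and VGG-16 stems; we assume these are the configurations involved, since the excerpt does not restate them.

```python
def extra_params(u, k, j):
    """Additional parameters of a Lookup-VNet over the standard DNN:
    256*3*u table entries plus the widened first convolutional layer."""
    return 256 * 3 * u + k * k * 3 * (u - 1) * j

# u = 1: only the three tables are added, 768 parameters for any architecture.
print(extra_params(u=1, k=7, j=64))   # 768

# u = 4 with a 7x7, 64-filter first layer (the usual ResNet-18 stem):
# 31,296 extra parameters, still negligible next to ResNet-18's 11.7M weights.
print(extra_params(u=4, k=7, j=64))   # 31296

# u = 4 with a 3x3, 64-filter first layer (the usual VGG-16 stem): 8,256.
print(extra_params(u=4, k=3, j=64))   # 8256
```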
D. Performances of Lookup-VNets with Different CMP-Rates

Empirical results have shown that the vector dimension in full lookup tables plays no role in the performance of Lookup-VNets, which calls into question the necessity of a large color space. We now explore whether compressing the color space influences the performance of Lookup-VNets. We compare the performance of the standard DNNs with that of the corresponding Lookup-VNets with various CMP-Rates on CIFAR-10, CIFAR-100, and Tiny ImageNet. Compressed lookup tables are initialized uniformly between -1 and 1 (a sketch of one possible implementation of such compressed tables is given at the end of this section). Figure 13, Figure 14, and Figure 15 present the results with different CMP-Rates on CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. We observe that the performance does not drop compared with that of the standard DNNs with RGB inputs as long as the compression rate is below a threshold. To our surprise, the color space can be compressed 4096 (16³) times on CIFAR-10, and 3375 (15³) times on CIFAR-100 and Tiny ImageNet, without any drop in accuracy.

These results indicate that Lookup-VNets can be used to learn the optimal image coding scheme for given tasks and goals, such as storage compression and accuracy. For example, when the color space can be compressed 4096 (16³) times without accuracy dropping, the number of bits per pixel can be reduced from 24 (8×3) to 12 (4×3) under this setting, so that the image storage space is halved. We also report experimental results on the large-scale ImageNet dataset with different CMP-Rates. As shown in Table V, when the color space is compressed 8 (2³), 1000 (10³), and 3375 (15³) times, Lookup-VNets still perform no worse than the standard DNNs, which indicates the promise of Lookup-VNets and the potential of deep collective learning in computer vision.

E. Performances of Lookup-VNets with Different Table Learning Strategies

In this part, we investigate how the table learning strategy influences the performance of Lookup-VNets. We treat the classification tasks on CIFAR-10, CIFAR-100, and Tiny ImageNet as three distinct tasks. For cross-network learning, we learn the lookup tables across ResNet-20 and VGG-16 for each of the three tasks. For cross-task learning, we use ResNet-20 and VGG-16 on CIFAR-10 and Tiny ImageNet to jointly learn the lookup tables. Table VI and Table VII report the results of the lookup tables learned across two networks and across two tasks, respectively. Compared with the results of the individually learned tables shown in Table I and Table II, we observe that learning across networks has almost no influence on the performance of Lookup-VNets, whereas learning across tasks is able to improve it.

F. Visualization

In this part, we visualize the lookup tables by rendering images such that larger values are shown with higher color intensity. Figure 16 shows eight CIFAR-10 images represented in the original RGB space, in 1-dimension full lookup tables, and in compressed lookup tables with CMP-Rate 5. The full and compressed lookup tables are learned with VGG-16 on the CIFAR-10 classification task. Suppose the human visual system prefers the images coded in RGB, as shown in the first row of Figure 16. We observe that the coding scheme the DNN favors for the CIFAR-10 classification task (the second and third rows of Figure 16) differs from what the human visual system prefers. This is reasonable, however, because the DNN, as an extremely complex function, may carry out a task from a perspective different from that of humans.
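Returning to the CMP-Rate experiments above: this excerpt does not spell out exactly how the color space is compressed, so the following sketch encodes only one plausible reading, namely uniform binning of the 256 per-channel values with bin width c, so that the three tables jointly shrink the RGB space by roughly c³. The class name and binning rule are our assumptions, given purely for illustration.

```python
import math
import torch
import torch.nn as nn

class CompressedLookupTables(nn.Module):
    """Per-channel lookup tables over a compressed color space.

    Assumed mechanism: pixel values in [0, 255] are uniformly binned with
    bin width `c`, so each channel keeps only ceil(256 / c) learnable rows.
    The binning actually used in the paper may differ."""
    def __init__(self, c=16, u=1):
        super().__init__()
        self.c = c
        rows = math.ceil(256 / c)
        self.tables = nn.Parameter(torch.empty(3, rows, u).uniform_(-1.0, 1.0))

    def forward(self, x_int):
        idx = x_int.long() // self.c                         # (B, 3, H, W) bin indices
        out = [self.tables[ch][idx[:, ch]].permute(0, 3, 1, 2) for ch in range(3)]
        return torch.cat(out, dim=1)                         # (B, 3u, H, W)

# With c = 16, each channel has 16 entries, i.e. 4 bits per channel instead of 8,
# matching the 24-bit -> 12-bit storage argument in the text.
tables = CompressedLookupTables(c=16, u=1)
x = torch.randint(0, 256, (2, 3, 32, 32))
print(tables(x).shape)   # torch.Size([2, 3, 32, 32])
```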
V. CONCLUSION AND FUTURE WORK

As visual data are almost always represented in a manually designed coding scheme when they are input to DNNs, we have explored whether the inputs to DNNs can be learned optimally end-to-end, and have proposed the paradigm of deep collective learning, which aims to learn the weights of DNNs and the inputs to DNNs simultaneously. Because deep collective learning has received little attention in computer vision, we have proposed Lookup-VNets as a solution. Lookup-VNets enable DNNs to learn the optimal inputs automatically for given tasks. From the perspective of image coding, Lookup-VNets can be viewed as learning the optimal image coding scheme automatically for given goals. In addition, we have explored various aspects of deep collective learning in computer vision with Lookup-VNets through extensive experiments on four benchmark datasets, i.e., CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet. The experiments have revealed several surprising characteristics of Lookup-VNets: (1) the vector dimension of the lookup tables has no influence on the test performance (generalization ability) of Lookup-VNets; (2) the commonly used color space can be compressed up to 4096 times without accuracy dropping on CIFAR-10, and 3375 times on CIFAR-100 and Tiny ImageNet. We also observe that Lookup-VNets match the performance of the standard DNNs on small datasets and achieve superior performance on large-scale and challenging datasets such as ImageNet, since large and complex datasets can take full advantage of the flexible, optimally learned inputs. Beyond the basic aspects of Lookup-VNets studied in this paper, various other aspects can be explored, one of which is designing an effective regularizer on the lookup tables. It is widely believed that the generalization ability of DNNs is tightly connected to their stability, which can be described through the partial derivatives of the output with respect to the input. In Lookup-VNets, the inputs to the DNN are determined by the learned lookup tables, so developing an appropriate regularizer on the lookup tables is a promising way to further improve the performance of Lookup-VNets. We leave this to future work.
Cognitive flexibility in children with Developmental Language Disorder: Drawing of nonexistent objects Cognitive flexibility is the ability to adapt thoughts and behaviors to new environments. Previous studies investigating cognitive flexibility in children with Developmental Language Disorder (DLD) present contradictory findings. In the current study, cognitive flexibility was assessed in 5-and 6-year-old preschoolers with DLD ( n = 23) and peers with typical development (TD; n = 50) using a nonexistent object drawing (NEOD) task. The children were asked to draw a nonexistent man and a nonexistent house. The children with DLD did not differ from their peers with TD on simple category changes, which were comprised of changes in the size or shape of parts of the object, change of the whole shape of the object, and deletion of parts of the object. Nevertheless, children with DLD made fewer more complex, high-level category changes, which included same-category insertions, position exchange of object ’ s parts, and cross-category insertions. The difference between DLD and TD on high-level category changes was related to differences between the two groups in verbal short-term memory and inhibition. Furthermore, children with DLD made no changes to their original drawings of an existing man and house more often than their peers with TD. It is concluded that children with DLD aged 5 – 6 years show less flexibility on the NEOD task than age-matched children with TD. This difference in cognitive flexibility may be related to lower levels of verbal short-term memory and inhibition ability of children with DLD, or to different use of these cognitive skills on the NEOD task. Introduction Children often need to adapt their thoughts and behaviors to changing situations in their everyday life.In order to do this, they need cognitive flexibility.Cognitive flexibility comprises one of the executive functions (Miyake & Friedman, 2012;Miyake et al., 2000).Executive functions are a set of general-purpose control processes that regulate a person's thoughts and behaviors and include inhibition and updating of the working memory contents, in addition to cognitive flexibility.Deficits in cognitive flexibility have been associated with several neurodevelopmental disorders, including Developmental Language Disorder (e.g., Farrant, Maybery, & Fletcher, 2012;Kapa, Plante, & Doubleday, 2017;Roello, Ferretti, Colonnello, Levi, & 2015).However, other studies found that cognitive flexibility is spared in children with Developmental Language Disorder (e.g., Henry et al., 2011;Im-Bolter, Johnson, & Pascual-Leone, 2006).Given the inconsistent findings across studies, additional research is needed to gain a clearer picture of cognitive flexibility in children with DLD.The current study is the first to study cognitive flexibility in children with Developmental Language Disorder using a nonexistent object drawing task (Karmiloff-Smith, 1990). 
Cognitive flexibility Cognitive flexibility is the human ability to adapt the cognitive processing strategies, responses and representations to new and unexpected conditions in the environment (Cañas, Quesada, Antolí, & Fajardo, 2003;Legare, Dale, Kim, & Deák, 2018).Cognitive flexibility is used for switching between tasks, changing perspectives and thinking outside the box (Diamond, 2013).Other terms for cognitive flexibility include attentional flexibility or set-shifting/task-switching.The term attentional flexibility is typically used in studies that focus on the processes required for shifting attention.Studies that use notions such as set-shifting or task-switching define cognitive flexibility by the task used to measure it, i.e. a set-shifting or task-switching task (Dajani & Uddin, 2015).Cognitive flexibility is associated with better reading abilities in childhood (de Abreu et al., 2014), higher resilience (Genet & Siemer, 2011) and levels of creativity in adulthood (Chen et al., 2015), and better quality of life in the elderly (Davis, Marra, Najafzadeh, & Liu-Ambrose, 2010). According to the well-known unity/diversity framework, cognitive flexibility is one of the executive functions (Miyake & Friedman, 2012;Miyake et al., 2000; note that Miyake and colleagues refer to the construct as 'shifting').Research has shown that children as young as 3-4 years old can already successfully shift between two simple response sets, provided that rules are placed in a story context (Hughes, 1998) or demands on inhibition are reduced (Rennie, Bull, & Diamond, 2004).Various studies reported significant growth in cognitive flexibility between the ages of 3 and 6 years (Deák, 2000;Zelazo, Müller, Frye, & Marcovitch, 2003;Zelazo, Frye, & Rapus, 1996).Other research indicates a sharp increase between ages 7 and 9 years (Dick, 2014).The development of cognitive flexibility appears relatively gradual compared to inhibition, which has been found to undergo a strikingly strong increase in the preschool years and less growth at later ages (Best & Miller, 2010). Most studies investigated children's cognitive flexibility using set-shifting tasks (Dajani & Uddin, 2015; Legare et al., 2018), which assess the ability to change previously learned behaviors if these are no longer relevant.Two tests are commonly used in research with kindergarteners, which is the population targeted in the current research: (1) Dimensional Change Card Sort (DCCS) in which cards are sorted using a simple rule, such as sorting cards based on color.After a number of items, the card sorting rule changes and children are asked to sort them based on shape.(2) Flexible Item Selection Task (FIST), which differs from the DCCS in that instead of telling children the rule explicitly, they need to generate it from a visual display.The DCCS and FIST are typically (computerized) tasks in which children are presented with a large number of items suited to their age, to which they respond as fast as possible.Outcomes are measured as accuracy.The DCCS also allows calculation of switch costs, which is the difference in response times between switch and non-switch trials. 
Cognitive flexibility has also been tested in an entirely different way, using a nonexistent object drawing (NEOD) task (Low, Goddard, & Melser, 2009;Spensley & Taylor, 1999a,b;Ten Eycke & Müller, 2018).When drawing, children typically use schemata based on sequentially ordered and practiced movements.In NEOD tasks, they are asked to "draw X", and then to draw "a nonexistent X, " such as an X that they invent, that they have never seen before, a strange X, an X with something funny or odd about it (Karmiloff-Smith, 1990;Spensley & Taylor, 1999a,b).To solve this task, children need cognitive flexibility to modify and alter the procedurally encoded schemata.NEOD tasks are particularly interesting because, unlike set-shifting tasks, they are production tasks where children's output provides information on how they solve tasks in which something new and unexpected is asked of them.When solving a NEOD task, children can make size, shape or deletion changes, change the location of elements, and add or insert elements.These different types of modifications can be classified into inter-representational or intra-representational flexibility and complexity (Berti & Freeman, 1997;Spensley & Taylor, 1999a,b;Zhi, Thomas, & Robinson, 1997).Inter-representational flexibility refers to cross-category insertions, that is, combining components of different categories in one drawing, such as a house with wings.Intra-representational flexibility is observed when children make changes within the components of a category, such as a man with two heads.Karmiloff-Smith (1990) concluded that different types of modifications are associated with different ages and development phases.She tested fifty-four children between the ages of 4 and 11 years, where each child produced six drawings.Children aged 4-6 years (n = 22) spontaneously made simple intra-representational changes: they modified size or shape, or deleted parts.Eight to 10-year-old children (n = 32) more often also made complex intra-representational changes: exchanging the position of elements, same-category insertions (such as a man with two heads), and inter-representational changes (cross-category insertions).The former three categories (changes of size, shape, deletion) were classified by Karmiloff-Smith as representing simpler low-level changes, while the latter three categories (element exchange, same-category insertion, cross-category insertion) represent more complex changes.In this study, Karmiloff-Smith also found that younger children drew their original schema again, and did not modify their drawing, more often than older children. 
Findings regarding cognitive flexibility are not unequivocal.Some studies show that the performance of children with DLD is lower than the performance of peers with typical development (TD) on nonverbal inhibition and working memory, but is similar on setshifting tasks (e.g., Im-Bolter et al., 2006;Henry et al., 2011).These studies investigated children who were 7-12 years old, with an average age of 10-11 years.Other studies that investigated younger preschool children found that children with TD outperformed children with DLD on a DCCS task (Farrant et al., 2012;Kapa et al., 2017) and on a FIST task (Roello et al., 2015).The discrepancy between these results may suggest that children with DLD have protracted cognitive flexibility development and/or later onset of cognitive flexibility development, which is reflected in lower scores at younger ages, but score similar to their peers at older ages.However, Dibbets, Bakker and Jolles (2006) investigated 6-year-old children and found no evidence for differences in task switching between children with DLD and TD.In addition, a meta-analysis of 22 studies (and 29 different samples) with children between ages 4 and 14 years did not point to age as a moderator of cognitive flexibility differences between DLD and TD (Pauls & Archibald, 2016).This meta-study revealed that children with DLD tend to perform lower than their peers with TD on tasks testing cognitive flexibility, but the size of the effect was small, while inhibition showed a moderate group effect.Pauls and Archibald suggested that the small group effect found for cognitive flexibility may, in fact, be driven by an inhibition deficit, as successful shifting from one task to another requires suppression of former tasks. There is, to the best of our knowledge, no previous research that used a NEOD task to investigate cognitive flexibility abilities of children with DLD.Both NEOD and set-shifting tasks measure children's ability to adapt cognitive processing strategies, responses and representations to new and unexpected conditions.However, these tasks differ considerably.First, unlike DCCS and FIST tasks, the NEOD task is a production task.Second, set-shifting tasks are relatively fixed, i.e. the rules and representations that children shift between are more or less given, contrary to the NEOD task in which the representation to which a child switches is open, and the only requirement is that it is nonexistent.Furthermore, set-shifting tasks focus on the attentional processes required for cognitive flexibility more than the NEOD task, whereas the NEOD task taps into those aspects of cognitive flexibility that have to do with imagination (Ten Eycke & Müller, 2018).Comparing DLD and TD on the NEOD task can thus provide more insight into how DLD and TD differ with respect to the broad construct of cognitive flexibility. 
The current study As part of the current research, we aimed to determine whether differences in cognitive flexibility will be found in children with DLD compared to age-matched children with TD by using a NEOD task.This task has the potential to identify a difference in cognitive flexibility between the two groups, and enables determining whether children with DLD solve the task by using a strategy that is more typical of younger children with TD.We did not collect data from younger children with TD and were therefore unable to make direct comparisons with younger children with TD.However, comparisons with observations in previous research may enable us to cautiously interpret the drawings of the children with DLD developmentally, that is, whether they reflect an earlier developmental phase.The research question that guided our study was: Do children with DLD and children with TD differ in cognitive flexibility as measured by a NEOD task? We expected indications of a protracted cognitive flexibility development for the children in our study, who were 5 to 6 years old, based on their performance on the NEOD task.Specifically, we hypothesized that: (a) Children with DLD are more likely to make no modifications or simple intra-representational modifications (low-level change categories) compared to their peers with TD, because such changes are linked to younger ages (Karmiloff-Smith, 1990).(b) Children with DLD are less likely to make complex intra-representational modifications (i.e., higher level within-category insertions) than their peers with TD.(c) Children with DLD are less likely to make inter-representational modifications (cross-category insertions) than their peers with TD, because such changes appear to reflect more advanced cognitive flexibility (Adi-Japha et al., 2010). Cognitive flexibility as measured using the NEOD task could be associated with other factors that are also likely to differ across DLD and TD, and that could create a confound.We considered four additional factors in our analyses in order to explore the robustness of the differences between TD and DLD on the NEOD task: basic drawing skills, nonverbal intelligence, verbal short-term memory (verbal STM) and inhibition.Children with DLD may score lower on basic drawing skills because this requires motor skills, and previous research has shown that these children have motor weaknesses (Diepeveen, van Dommelen, Oudesluys-Murphy, & Verkerk, 2018;Johnston & Weismer, 1983).To test for a possible motor mechanism related to the basic drawing level, we assessed visual-motor integration, and tested whether visual-motor integration contributed to group differences in basic drawing skills.We hypothesized that although visual motor skills contribute to group differences in basic drawing, they would not fully explain these differences, and that the basic drawing level (rather than visual motor skills) should thus be used as a motor predictor for explaining group differences in the NEOD.We therefore controlled for basic drawing skills when testing cognitive predictors for group differences in the NEOD change categories. We considered three cognitive measures as predictors for group differences in the NEOD task: nonverbal intelligence, verbal STM, E. Blom et al. and inhibition. 
Nonverbal intelligence describes thinking skills and problem-solving abilities that do not fundamentally require verbal language production and comprehension.It involves manipulating or problem-solving about visual information (Kuschner, 2013).Children with DLD score within the normal range on nonverbal intelligence, although they tend to score somewhat lower than TD controls in between-group comparisons (Gallinat & Spaulding, 2015).Nonverbal intelligence has been found to be related to cognitive flexibility (Arffa, 2007). Verbal STM is the ability to temporarily retain phonological information in mind and is typically impaired in children with DLD (Archibald & Gathercole, 2006;Weismer et al., 2000).Because of their verbal STM deficits, children with DLD may be less able to retain verbal instructions than their peers with TD, which may affect performance in nonverbal tasks with verbal instructions (Pauls & Archibald, 2016).For example, previous research showed that when verbal STM is controlled, differences in executive function tasks between children with TD and with DLD disappear (Lukács et al., 2016), and the same may be true for the NEOD task. Inhibitory control has been suggested to involve three processes: inhibition of a prepotent response, resistance to distracters, and resistance to interference from prior knowledge (Friedman & Miyake, 2004).Adaptation to a new environment and responding to new conditions is central to cognitive flexibility, and requires that previously engaged responses be inhibited (Dajani & Uddin, 2015).For example, in the NEOD task, children must inhibit typical objects drawing schemata they have just used in order to produce the unusual drawing of the same object.It is thus expected that inhibition of a prepotent response is involved in the NEOD task, and may impact any differences between the performance of children with DLD and with TD on the NEOD task, because children with DLD tend to have lower inhibition than children with TD (Pauls & Archibald, 2016). The following hypothesis was formulated regarding the cognitive predictors: (a) For the assessed cognitive variables, we predicted that the DLD group would perform lower than the TD group on verbal STM and inhibition.Furthermore, nonverbal intelligence, verbal STM and inhibition were expected to be associated with NEOD modifications, and better verbal STM and inhibition abilities were expected to be associated with a higher probability that the child would actually make a high-level modification (complex intra-representational and inter-representational modifications) and with a higher frequency of use of such high-level modification, as these high-level modifications are more pronounced in older ages and reflect more advanced cognitive flexibility. Study approval The study was approved by the Israeli Ministry of Education (287/8918/2015).Consent was obtained from the parents of the participating children. Participants The sample included 73 kindergarten children aged 5-6 years (M = 69.92months, SD = 3.49 months, see Table 1): 23 children with DLD and 50 children with TD.Participating children were from kindergartens in the same municipal area.The DLD sample included 12 boys and 11 girls.The TD sample consisted of 24 boys and 26 girls.In terms of gender, the groups were similarly composed (χ 2 (1) = 0.11, p = .74). 
Children with DLD were recruited from language kindergartens.Children are admitted to a language kindergarten based on significant primary language impairment, normal nonverbal intelligence, and sound adaptive behavior skills.Children are referred to a language kindergarten by a placement committee.A referral to the placement committee is made following a recommendation by a child neurologist or a clinical or educational psychologist and a diagnosis of a speech therapist (Ministry of Education, 2020).All children referred to the placement committee are administered the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III; Wechsler, 2002).Children are diagnosed as having a language disorder by a speech therapist based on a score of more than 1 SD below the mean in at least two language tests that are used in clinical practice in Israel and who have a normal performance IQ (Friedmann & Novogrodsky, 2004).The placement committee (comprised of the special education kindergarten district supervisor, municipal kindergarten psychologist, and special education therapists) tests eligibility based on standardized cognitive assessments, speech-language pathologist referral, and information from previous kindergarten teachers and caregivers that it receives.Children with TD were recruited from regular kindergartens.Kindergarten teachers identify children at risk for a developmental delay in the first 3 months of the school year (Ministry of Education, 2019).Children identified by their teacher as "at risk" in the TD group and for developmental delays other than language in the DLD group were not included in the study. All children participating in the current study were Hebrew-speaking monolinguals and scored within the normal range of the Raven test (Stand.Score ≥ 80, see Table 1).Language kindergarten children were identified as having DLD if they scored Z ≤ 1.25 SDs below the norm on the Goralnik Screening Test for Hebrew (Goralnik, 1995).The Goralnik test assesses children's abilities in Hebrew and includes subtests for vocabulary, sentence repetition, comprehension, oral expression, pronunciation, and story-telling (see below).Two of 25 language kindergarten children we assessed scored above this range and did not participate in the study.Table 1 presents the participating children's overall Z scores for normative data, but raw scores were used in the analyses. Measures Cognitive flexibility (NEOD task).The children were first asked to draw a man using an HB pencil or felt-tip pen (one color).They were then asked to draw "a man that does not exist" (Karmiloff-Smith, 1990).Several phrasings were used to enable the children to understand the task: "A man you invent, one you have never seen before, a strange man, with something funny or odd, something make-believe, pretend."This phrasing, following Spensley and Taylor (1999a,b), is more elaborate than the original phrasing used by Karmiloff-Smith (1990), and includes an explicit request to add something to the man.All children heard the same instructions.After drawing the nonexistent man, the children were asked to verbalize why such a man does not exist.To give generality to the data, the children were also asked to draw a house and a house that does not exist (Kasirer et al., 2020;Adi-Japha et al., 2010). 
Following the procedure developed by Karmiloff-Smith (1990), two independent raters scored the categories of changes compared with the original drawing as no change, deletion of elements, change in element shape or size, whole-shape changes, insertion of new (same-category) elements, position or orientation changes, and cross-category insertions.As in the original study, the categories were not mutually exclusive.Cohen's kappa coefficients for inter-rater agreement were greater than .90(p < .001) in each change category.Disagreements were settled by discussion with an additional experienced rater. In her study, Karmiloff-Smith (1990) found that changes made by older children (8-10 years old versus 4-6 years old) included more frequent insertions of new (same-category) elements, position or orientation changes, and cross-category insertions.These can thus be considered as reflecting a higher level of cognitive flexibility.For this reason, we grouped insertion of new (same-category) elements, position or orientation changes, and cross-category insertions into "high-level change categories".Deletion of elements, change in element shape or size, and whole-shape changes were combined into "low-level change categories".There were thus 6 possible change categories: 3 low-level change categories and 3 high-level change categories.The children could make multiple changes in one drawing and categories were not mutually exclusive. Basic drawing level.The complexity level of the original man/house drawing was scored on a scale of 1-4: 1 = non-recognizable object; 2 = a recognizable figure composed of two line objects (e.g., in the house drawing of a rectangle and a triangle above); 3 = a recognizable figure composed of three line objects, of which two are integrated (e.g., in the man a body, head and eyes within the head); 4 = a recognizable figure that includes more complex graphic formulas (e.g., figures composed by four or more line objects of which three are integrated, as in a house drawing with a rectangular house and a triangular roof, with a cross within a window within the house, or a 3-dimensional drawing, see Adi-Japha et al., 2010, Kellogg, 1970).The two independent raters who scored the drawings for the change categories also scored the two drawings for complexity.Weighted Cohen's kappa coefficient for an ordinal scale were .92(p < .001), on average, for the two drawings.Disagreements were settled by discussion with an additional experienced rater.The score of the two drawings was averaged. Verbal STM.The forward number recall test for children consists of predefined sets of random strings of numbers of increasing length (children: K-ABC; Kaufman & Kaufman, 1983) and tests verbal STM.Participants repeat the string in the same order.Testing continues until the participant makes two consecutive errors in a same-length set.This version of the K-ABC was adapted for Hebrew and has been normed in Israel (M = 10 and SD = 3 for each subtest).On average, test-retest and internal consistency reliabilities of the K-ABC subtests were reported as 0.85 and 0.62, respectively (Phizer, Shimborsky, Walf, & Hazani, 1995). 
Inhibition.The inhibition task resembled a go/no-go task.The stimuli were presented on a 16" laptop screen.The task involved 16 repeats of blocks of 3 stimuli (overall 48 stimuli) consisting of one of 6 aquarium animals (a yellow-blue striped fish, a green fish, a jellyfish, a starfish, a sea turtle, and an octopus) appearing in a random location on the screen for 3 s.The time between stimuli was 3-6 s, and the children had up to 3 s to respond.The children were asked to respond as rapidly and accurately as possible, using a computer key labeled in yellow.The children were instructed to tap the yellow key only when the yellow blue-striped fish appeared.They were told that other animals may appear as well, and that in that case they should not strike any key.The yellow-blue striped fish appeared with a probability of 2/3 and the other animals had equal probabilities of about 7% to appear.This task thus stresses inhibition of prepotent response.Each block included 2 repeats of the go stimuli and one no-go stimulus in a random order.This ensured that there were no more than two successive trials with a no-go stimulus (Howard & Okely, 2015).Prior to the task, the children practiced on 1 block.Accuracy (number of correctly responded blocks) was coded. A block was scored as accurate if the child correctly pressed the button for the (two) go stimuli and rejected the (one) no-go E. Blom et al. stimulus.We were unable to single the no-go stimulus, because the number of button presses to these stimuli was not recorded.However, as the prepotent response was a button press of the yellow button, it is likely that incorrect blocks were due to a button press of the yellow key to the (one) no-go stimulus in that block.It has been suggested that in go/no-go tasks the go trials index sustained attention while the no-go trials index actual inhibition processes (Ashley et al., 2019;Lewis et al., 2017;Willner et al., 2015).It may therefore be suggested that the current task is not a pure measure of inhibition, but rather a mix of inhibition and sustained attention. Visual-motor skills.The Beery-VMI is a standardized test (M = 100, SD = 15) that evaluates visual-motor integration skills (often associated with copying, e.g., Ogawa, Nagai, & Inui, 2010) for children aged 2 years to adult.Participants copy progressively difficult geometric shapes.The test is stopped after subjects fail to correctly copy three consecutive shapes.The final score is the number of correct shapes copied.Overall test-retest and inter-rater reliabilities were reported, .84-.88 and .93-.98, respectively (Beery, Buktenica & Beery, 2006). 
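Returning to the go/no-go protocol described above, the trial structure is concrete enough to sketch. The following Python fragment is our illustrative reconstruction of the sequence generation (stimulus names, field names, and timing values follow the description in the text); it is not the authors' experimental script.

```python
import random

GO = "yellow-blue striped fish"
NO_GO_ANIMALS = ["green fish", "jellyfish", "starfish", "sea turtle", "octopus"]

def make_go_nogo_sequence(n_blocks=16, seed=None):
    """Each block holds two go stimuli and one no-go stimulus in random order,
    giving 48 trials, a 2/3 go probability, and at most two consecutive
    no-go trials (only across block boundaries)."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_blocks):
        block = [GO, GO, rng.choice(NO_GO_ANIMALS)]
        rng.shuffle(block)
        for stim in block:
            trials.append({
                "stimulus": stim,
                "is_go": stim == GO,
                "stimulus_duration_s": 3.0,          # stimulus shown for 3 s
                "isi_s": rng.uniform(3.0, 6.0),      # 3-6 s between stimuli
            })
    return trials

sequence = make_go_nogo_sequence(seed=1)
print(len(sequence), sum(t["is_go"] for t in sequence))   # 48 trials, 32 go
```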
Language assessment. The Goralnik Screening Test for Hebrew (Goralnik, 1995) was administered in order to assess proficiency in Hebrew. The test includes subtests for vocabulary, sentence repetition, comprehension, oral expression, pronunciation, and story-telling. The Goralnik test was designed to screen monolingual Hebrew-speaking children aged 2;7-6;0 who are at risk for DLD. The scores are raw scores, with a total of 180 points. The Goralnik manual enables calculation of a standardized Z-score based on age-appropriate norms, used for identifying DLD (Goralnik, 1995; Altman, Armon-Lotem, Fichman, & Walters, 2016). Participating children identified with DLD scored ≤1.25 SD below the mean, whereas children with TD scored no more than 0.9 SD below the mean. The Goralnik test provides norms in 6-month intervals. The last norm is for children aged 67-72 months. We used this norm for children aged 73-78 months, as these children share the same educational level (kindergarten).

Procedures. The children were tested individually in a quiet room in the kindergarten by R.B. or N.S. The tests were administered in a fixed order and the children were tested in two separate sessions as part of a larger study (Adi-Japha, Berke, Shaya, & Julius, 2019). Background variables were tested before the flexibility task. The first session involved the language, verbal STM and inhibition assessments. The second session included the assessment of the visual-motor skills and the cognitive flexibility task.

Data analysis. The NEOD task has an ordinal scale for the basic drawing level and a nominal scale for the change categories. Language scores were negatively skewed. Non-parametric statistics were therefore preferred. We first checked how the two groups scored on background measures: language, nonverbal intelligence, basic drawing level, verbal STM, inhibition, and visual-motor skills. We expected that the children with DLD would perform lower on language than the children with TD, but similarly on nonverbal intelligence, conforming to the DLD profile. We also expected the children with DLD to perform lower than the children with TD on visual-motor skills, basic drawing level, verbal STM and inhibition. The Wilcoxon Z-test was used in order to establish group differences. The effect size r = Z/√N was used to estimate the magnitude of the effect (with r = .1 a small effect, .3 a medium effect, and .5 a large effect; Rosenthal, 1994, pp. 231-244; Pallant, 2007, p. 225).

The results of the drawing test were compared across TD and DLD in order to investigate whether and how DLD is associated with cognitive flexibility. A χ² test was applied to compare the number of children who made no changes across the two groups. We also indicated whether or not each child used a change category at least once across the two drawings (nonexistent man and house), resulting in a binary 0/1 variable per category. A χ² test was used to compare the number of children who used each category, and the number of children who used at least one of the low- and high-level change categories. Spearman correlations and hierarchical (linear as well as logistic) regression analyses were conducted to determine whether any of the differences between the TD and DLD groups were affected by the inclusion of background measures on their own or in combination with others, i.e.
basic drawing level, nonverbal intelligence, verbal STM, and inhibition.Three regression models were compared: Model 1 included basic drawing level as a predictor for determining whether the factor Group (TD, DLD) explained any variance beyond basic drawing level.Model 2 included basic drawing level and nonverbal intelligence as predictors in addition to Group.Model 3 included basic drawing level, nonverbal intelligence, and verbal STM or inhibition as predictors in addition to Group.Because verbal STM and inhibition were highly correlated (r = 0.67, p < .001),we tested them in two separate models (3A and 3B). Results The descriptive results in Table 1 show how the TD and DLD groups compare in terms of language, nonverbal intelligence, basic drawing level, verbal STM, inhibition, and visual-motor skills.The two groups did not differ in age and nonverbal intelligence. However, in addition to language scores, verbal STM and inhibition, the groups differed in their visual-motor skills and basic drawing level.Basic drawing skills and visual-motor skills were significantly correlated across the full sample (TD and DLD, r s (73) = 0.39, p = .001).Ordinal logistic regression applied to the sum of scores of the basic drawing level across the two drawings indicated that visual-motor skills significantly contributed to the variability in basic drawing level (B = 0.50, SD = 0.22, W 2 = 5.08, p = .024),but did not eliminate the contribution of the group variable (B = 1.58,SD = 0.60, W 2 = 7.20, p = 0.007). Table 2 presents the frequency (per child) of a specific change category across the two drawings (continuous variable).Use of change categories was scored per drawing, and children could score 0-2 for each change category across the two drawings.Table 2 also E. Blom et al. indicates the number of children who exhibited use of each category at least once across the two drawings (a binary 0/1 variable, for example 12 out of the 50 children with TD displayed same-category insertion at least once in their drawings).Because the frequency counts were low for several change categories, only the binary score (i.e., the number of children who exhibited that change category) was compared between groups. 
The binary score was compared between the two groups in order to study the hypotheses that children with DLD are more likely to make no modifications or simple intra-representational modifications (low-level change categories) compared to their peers; that they are less likely to make complex intra-representational modifications (exchanging the position of elements and same-category insertions) as well as inter-representational modifications (cross-category insertions) than their peers with TD (Table 2).As mentioned, change categories in Karmiloff-Smith's (1990) developmental study were grouped into low-and high-level change categories, where the former is typical of 4-6 year-olds, and the latter is more common in older children.Table 2 also addresses group difference in lowand high-level change categories.χ 2 tests comparing the number of children who used at least one of the low-and high-level change categories did not indicate differences for the low-level change categories, but did indicate group differences in the high-level change categories.Specifically, more children in the TD group used same-category insertions (a type of complex intra-representational modification) and cross-category changes (inter-representational modification) than children in the DLD group.The NEOD data in the current study further indicated that relatively more children in the DLD group (6/23 = 26.08%)made no change in their drawings of a man and a house than in the TD group (3/50 = 6% TD) (χ 2 (1) = 5.88, p = .015).It should be noted that the 6 children with DLD who made no change in their drawings did not differ from the other children in the DLD group in terms of language (Z = 0.11, p = .919)or nonverbal IQ scores (Z = 0.17, p = .878).Furthermore, the data indicated that in the TD group 13/50 children (26%) and in the DLD group 10/23 children (43.47%) made only low-level modifications (no significant difference, χ 2 (1) = 2.23, p = .135);in the TD group 29/50 children (58%) and in the DLD group 7/23 children (30.43%) made both low-and high-level changes (a higher proportion in the TD group, χ 2 (1) = 4.78, p = .029);and finally, in the TD group 5/50 children (10%) (and none of the children with DLD) made only high-level modifications to their drawings (χ 2 (1) = 5.33, p = .021). Due to the low frequency of specific change categories, the analysis of background predictors to performance on the NEOD task related only to high-vs.low-level change categories.Table 3 presents Spearman correlations between background measures and use of all change categories (overall, how well children solved the task), low-or high-level change categories for children with DLD and their peers with TD.There were no significant group differences in the level of correlations between background variables and overall use of change categories (tested using the Fisher r-to-z transformation).No significant group differences emerged for the correlation between background variables and low-or high-level changes.There was, however, a non-significant trend toward a stronger association between the inhibition score and the frequency of low-level changes in children with DLD compared to their peers with TD (Z = 1.81, p = .070).It should be noted that performance in the NEOD did not correlate with the standardized language score (r s (50) = .22,p = .127),r s (23) = .25,p = .249). 
The interpretation of the correlation of high-level changes in the DLD group is not clear due to the zero-clustered data with 16/23 children having the value 0 (Huson, 2007).To verify the association, we also conducted a Mann-Whitney test to compare the values of NEOD predictors between children with DLD who have (n = 7) and do not have high-level changes (n = 16).The results confirmed the pattern of significant associations in Table 3, with Z = 0.58, p = .624;Z = 1.52, p = .135;Z = 0.00, p = 1.00; and Z = 3.04, p = 0.010, for basic drawing level, non-verbal intelligence, verbal STM and inhibition, respectively. Table 4 (left hand side) shows the results of regression models testing whether any difference between the TD and DLD group in the number of high-level changes drawn by children were affected by the inclusion of background measures (i.e., basic drawing level, nonverbal intelligence, verbal STM, and inhibition) on their own or in combination with others.Linear regression models suggest that nonverbal intelligence contributed to the explained variance, and the addition of verbal STM or inhibition made a further contribution and resulted in the disappearance of the difference between the TD and DLD groups.Logistic regression (right hand side of Table 4) pertaining to the difference between the number of children in the TD and DLD groups who had (i.e., a binary variable = had/did not have) high-level change categories in their drawings was used to account for the non-linear distribution of the number of high-level changes drawn by children.Nevertheless, the same findings were found in the two analyses (Ghanamah, Eghbaria-Ghanamah, Karni, & Adi-Japha, 2020).It should be noted that the linear regression suggests a model for predicting the frequency of use of high-level change categories, where the R 2 improved from .206 with just Group as predictor to .320 in Model 3A (.364 in model 3B, Table 4).The logistic regression, however, suggests a model for predicting whether a child would use a high-level change category, with an improvement in model prediction represented by Cox and Snell R 2 from .126 with just Group as predictor to .276 in Model 3A (.370 in model 3B, Table 4).Chi square change statistics represent the significance of the step. Discussion Cognitive flexibility has been studied in the context of communication disorders.The goal of our study was to investigate cognitive flexibility in 5-to 6-year-old preschoolers with DLD using the NEOD task (Karmiloff-Smith, 1990;Low et al., 2009;Spensley & Taylor, 1999a,b;Ten Eycke & Müller, 2018), which is a drawing task that has not been used with children with DLD to date.In the NEOD task, the children were first asked to draw a man.After that, they were asked to draw a nonexistent man.The same procedure was followed for the object 'house'.The children could make no changes, or make modifications which could be low-level changes (change to the size/shape of the item, change of the whole shape, deletion of parts of the item) or high-level changes (same-category insertion, change to the location/orientation of the item, cross-category changes). Three main findings emerged from the study: (1) Children with DLD made no modifications more often than children with TD. 
(2) Children with DLD were less likely to make high-level changes.In particular, they made fewer inter-representational cross-category insertions and, to a lesser degree, fewer intra-representational same-category insertions.(3) This difference in high-level category changes between children with DLD and with TD disappeared when verbal STM or inhibition were added to the regression models. Not an isolated language impairment The weaker cognitive flexibility of children with DLD, signaled by a higher likelihood to make no changes and a lower likelihood to make high-level changes, is in line with the findings reported by Farrant et al. (2012) who found that 5-year-old children with DLD performed lower than age-matched peers with TD on a DCCS task, which also tests cognitive flexibility.The results of our study supplement this research by showing that cognitive flexibility weaknesses in kindergarten children with DLD are not only found in a set-shifting task, but also in a productive NEOD task.Studies with older children (aged 7-12 years, and 10-11 years on average) reported equal performance on set-shifting tasks across children with DLD and TD (Im-Bolter et al., 2006;Henry et al., 2012), suggesting that age impacts cognitive flexibility differences between children with DLD and TD.The pattern that emerged from our study is that the children with DLD nearly always made age-appropriate low-level changes and hardly ever high-level changes.They were as likely as their peers to make low-level changes, but less likely to make higher level changes.It is interesting to note that task success of the DLD group (74%), that is, the percentage of children who made changes, was lower than that of children with TD who were 5-months younger (91%) presented in the original study (Karmiloff-Smith, 1990; success of the children with TD who were 5-months older in the current study was 94%).It should be further noted that the phrasing used in the current study included an explicit request to add "something funny or odd," which may have somewhat changed the type of response (Low et al., 2009), where children with TD responded with a higher rate of insertions (same-as well as cross-category insertions) than in Karmiloff-Smith's (1990) study. That children with DLD show cognitive flexibility limitations is compatible with a growing body of research on domain-general executive function impairments in DLD (e.g., Ebert & Kohnert, 2011;Vugs et al., 2013;Pauls & Archibald, 2016;Vissers et al., 2015).In line with these studies as well as our predictions, the children with DLD performed lower than TD controls on inhibition, in addition to lower performance on verbal STM.The children with DLD in our study had moreover lower visual-motor skills and a lower basic drawing level than the children with TD, indicating weaknesses that extend to motor development (Diepeveen et al., 2018;Johnston & Weismer, 1983).These results demonstrate that although the language impairments of children with DLD are primary, these impairments are typically not isolated, and many children with DLD also have significant impairments outside the domain of language. 
The role of verbal STM and inhibition We investigated the effect of basic drawing level, nonverbal intelligence, verbal STM, and inhibition on the relation between cognitive flexibility and the presence of DLD in order to better understand the observed differences between DLD and TD.The effect of DLD on the use and likelihood of high-level category changes remained significant in the models that included basic drawing skills and nonverbal intelligence.Basic drawing level was not associated with high-level category changes, whereas nonverbal intelligence did show a significant association, confirming that high-level changes in the NEOD task are related to higher order cognition.The difference between the DLD and TD groups disappeared when verbal STM or inhibition were included in the regression model. A similar effect of verbal STM was found by Lukács et al. (2016), but only for verbal executive function tasks (in their study, children with TD and DLD performed equally on nonverbal executive function tasks).The impact of verbal STM suggests that verbal abilities impact children's performance in a NEOD task, and specifically children's ability to make high-level category changes.According to Vygotsky (1978), thought is mediated by language.In line with this view, children use private or inner speech to solve problems (Neuman, Leibowitz, & Schwarz, 2000;Damianova, Lucas, & Sullivan, 2012;Welsh, 1987).Deák (2003) described how language enhances and enables the expression of flexible cognition, and provides the potential for innovative conceptualization.As such, verbal mediation, and by implication verbal STM, may help children to arrive at more complex solutions of the NEOD task, and in particular to come up with cross-category insertions.In line with this suggestion, the association of verbal STM with high-level changes was significant only in the TD group, while for children with DLD, verbal STM and occurrences of high-level changes were not associated (r s = .00).Although this group difference in correlation level was not significant, the finding supports the role of verbal STM in the more complex solutions to the NEOD, and their lower frequency in children with DLD.The effect of verbal STM also fits within a general theory of executive functioning development, such as the integrative framework proposed by Garon and colleagues (2008; see Kapa et al., 2017, for an application to DLD).This framework holds that attention underlies executive function abilities and that developmental hierarchies could result in cascading effects.Verbal STM, like attention, is a basic and early available ability (Gathercole and Adams, 1993; Gathercole, Pickering, Ambridge, & Wearing, 2004) that comprises a foundation for later developing executive functions. 
Inhibition is related to the expression of cognitive flexibility in young children (Davidson et al., 2006).When inhibition was included in the regression models that explained the use of high-level changes, group differences were no longer significant.This suggests that group differences in the level of inhibitory control may explain differences in task performance and is in line with Pauls and Archibald's (2016) suggestion that an inhibitory control deficit may disable children with DLD to sufficiently clear former tasks (in our study: schemata) from the current focus of attention.This, in turn, may prevent them from focusing on the new task or coming up with a novel and more uncommon solution.It is possible that children with DLD and their peers with TD used somewhat different mechanisms when solving the NEOD.A strong association for inhibition with the use of change categories emerged for children with DLD, whereas in children with TD, a similar correlation level was found for nonverbal intelligence, verbal STM and inhibition with the use of change categories.In particular, the association of inhibition with low-level changes was stronger in children with DLD than in their peers with TD.Children with DLD may have relied on inhibition for solving the NEOD, forming mainly deletions to object parts.Use of inhibitory control did not suffice, however, for performing high-level changes, possibly because of its overall lower level in this group.A pattern of different mechanisms used to solve the NEOD task was also found when task performance was compared between high-functioning children with ASD and peers: the former relied mainly on executive functions for solving the task, whereas children with TD used additional cognitive processes (Ten Eycke & Müller, 2018). Limitations and future research The study has several limitations.First, the DLD sample size is relatively small.Moreover, hearing was not screened and DLD diagnoses were verified based on a screening language test.Although this test has been used in the literature (e.g., Altman et al., 2016), other language assessments may have yielded different findings.In particular, it may be suggested that children with DLD understood the task less well due to their poorer language skills.However, the findings of the current study do not support such an interpretation, because the children with DLD who did not modify their drawings had language scores similar to their peers, and the language composite score did not correlate with NEOD performance. Second, we did not test the children with other commonly-used measures of cognitive flexibility, such as set-shifting tasks.Although our findings are compatible with research that used the DCCS task with 5-year-old children with DLD (Farrant et al., 2012), it remains to be seen whether a NEOD task and a DCCS task measure the same underlying construct.Low correlations between different cognitive flexibility tasks suggest that the construct of cognitive flexibility is fractioned (Legare et al., 2018), and different tasks may tap into different subcomponents of cognitive flexibility. 
Finally, our measure of inhibition also measured sustained attention, as we were unable to single out the no-go stimulus, and reliability estimates for our task, such as the split-half test (Green et al., 2016), could not be calculated. Sustained attention is the ability to maintain focus on a task despite the absence of task events that are intrinsically arousing (Robertson, Manly, Andrade, Baddeley, & Yiend, 1997). It is a core and crucial ability that underlies children's performance in more formal, structured, experimenter-demand tasks, including tasks which test cognitive flexibility (Garon, Bryson, & Smith, 2008), and is often impaired in children with DLD (Ebert & Kohnert, 2011). Consequently, between-group differences in sustained attention could also have contributed to explaining the between-group differences on the NEOD task. It should be noted that high shared variance between the digit span task measuring verbal STM and the go/no-go task measuring inhibition could reflect a basic contribution of sustained attention to both tasks.

Our findings open avenues for future research. Future research could aim to tease apart effects of inhibitory control and sustained attention in order to determine which ability potentially underlies a cognitive flexibility deficit in children with DLD. Other research could focus on investigating relationships between cognitive flexibility and language development. We did not observe significant correlations between children's performance on the NEOD task and the language composite scores, but relations may exist for specific aspects of language. Whether or not impairments in the different developmental domains (e.g., verbal, nonverbal cognition, motor) are related, either directly or indirectly, is an issue that requires further research.

Conclusions
Preschoolers with DLD are less cognitively flexible than their peers with TD. The results of a drawing task in which children were asked to draw nonexistent objects demonstrated that 5- to 6-year-old children with DLD made no changes to their initial drawings more often, and were less likely to make more complex changes, than age-matched children with TD, pointing to lower cognitive flexibility. Nevertheless, the performance of these children was within age expectations as compared to the 4- to 6-year-olds studied in the original Karmiloff-Smith (1990) study. The difference between the two groups disappeared when verbal STM or inhibition were statistically controlled. These findings suggest that the lower cognitive flexibility of children with DLD is, at least in part, explained by their lower verbal STM and inhibition ability.

Fig. 1 shows typical examples of drawings made by the children with DLD and their peers with TD in the NEOD task. Fig. 1A-C shows examples of drawings with same-category insertions, and Fig. 1E and F shows examples of drawings with cross-category insertions. These drawings were made by children with TD. Fig. 1G-I are examples of drawings with deletions made by children with DLD. NEOD categories are not mutually exclusive; for example, Fig. 1D involves deletion of windows.

Fig. 1. Drawing examples. A-F, children with typical development (no language disorders); G-I, children with a developmental language disorder. A. "He has 3 hands, actually 6, because 3 and 3 are 6". B. "A man with 4 heads, 4 necks, many hands and belly buttons". C. "A man with huge knees (laughs), a belly and hands, 2 heads and 4 eyes". D. "The house has hair, there is a triangle inside this house, and it has legs". E. "It has wheels and ears, and there are people in the house (points to the window)". F. "The man flies, he has wings". G. "A house with a 'delete' line". H. "She has no hands". I. "The eye is one".

Table 1. Descriptive statistics of the TD and DLD groups.
Table 2. Frequency of changes by group and category for the nonexistent object drawing task. Note. χ2 tests compared the number of children who used each category.
Table 3. Correlations between background measures and dependent variables.
2021-07-11T06:16:39.366Z
2021-06-17T00:00:00.000
{ "year": 2021, "sha1": "b651bb81f8734a7fca6f2e76770dc8b004b39e9d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jcomdis.2021.106137", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ff064eb89d79f264fad48980b46502dde528902b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
20497779
pes2o/s2orc
v3-fos-license
Heeding the Message? Determinants of Risk Behaviours for West Nile Virus
Methods A telephone survey was administered to a random sample of adults (n=1650) living in the L6L and L6K Forward Sortation Areas of Oakville, Ontario, Canada. Results While close to 100% of survey respondents were aware of WNv and approximately 80% recalled receiving information from the public health department regarding the virus, levels of reported personal protective behaviours were relatively low. Through a multivariable modeling process, a range of determinants emerged to explain outcome levels. Discussion The message about public education in the face of emerging health threats is clear; that is, that public education is key. But we cannot end the public health presence there - public health researchers must evaluate the uptake of the message.

West Nile virus (WNv) emerged on the North American scene in 1999, causing an outbreak of meningoencephalitis in the New York City area which resulted in seven deaths. 1,2 Of the total 123 non-fatal cases detected in the USA in 1999-2001, the median age of patients was 65 years, with a range from 5 to 90 years; 60% were over 60 years of age; 63% were females. 3 With respect to fatalities (n=18), the median age was 75 years, with a range from 44 to 90 years; 90% were aged 60 years or above; and 44% were males. 3 Since this first North American outbreak, the incidence of WNv infection has been increasing annually. 4-6 As of 2005, there were 2,470 human cases of WNv infection reported to CDC for 2004. 5 In Canada, 1,335 human cases were reported in 2003. 6 In Canada, the increase in human cases has coincided with the spread of the disease westward across the country. In 2002, the Province of Ontario in central Canada contained over 90% of confirmed human cases; one year later, the majority of human cases were confirmed in the western Prairie provinces. 7 The virus is transmitted to humans by infected mosquitoes. Culex pipiens, an urban-dwelling mosquito, is an important vector that breeds in underground standing water found in city drains and catch basins. During a long, hot summer, these water sources become even richer in the rotting organic material that Culex needs for survival; concomitantly, these climatic conditions can also lead to a decline in mosquito predators (e.g., frogs). Public health professionals recommend various personal protective behaviours (PPBs) that either reduce the risk of mosquito bites or eliminate mosquito breeding sites. It is often difficult, however, to get populations to heed public health risk messages. We know, for example, the risks associated with tobacco consumption, yet over 21% of the Canadian adult population continues to smoke cigarettes on a daily basis (www.healthcanada.ca). This translates also to new emerging health risks, 8,9 including (re-)emerging infectious diseases such as West Nile virus. Adams et al. 10 report that of the 17 confirmed cases of WNv infection in Connecticut in 2002, only 3 reported having used any PPBs. 10 Risk communication, defined as "a science-based approach for communicating effectively in high concern situations" (11, p. 382), is key in these circumstances. The implementation of a household-based seroprevalence survey in Oakville, Ontario (Figure 1), where a large outbreak of West Nile virus occurred in the summer of 2002, allowed us to assess the uptake of risk behaviour messages disseminated by public health agencies.
Survey data are used to explore attitudes, risk perceptions, and prevention behaviours undertaken. Adams et al. 10 used survey data to explore knowledge, attitudes and behaviours around West Nile virus in Connecticut, where an outbreak had occurred in 2002. The majority (77%) of these respondents (n=1791) sometimes or always used at least one PPB, while only 15% never used any PPBs. These reported levels of risk reduction must be contextualized, however, by the fact that this area had a 15-year track record of public health messaging because of Lyme disease. In contrast, the present study was performed soon after the first introduction of WN virus into Canada, in a region where public health messaging about protection against insects was novel. Oakville is located in the region of Halton, characterized by the highest incidence of reported clinical West Nile virus infection in Ontario in 2002. Sixty cases (58 confirmed and 2 probable) occurred in a population of approximately 400,000, with onset during the months of August and September 2002 (Figure 2). A peak in dead crow sightings in Halton (600 per week) occurred five weeks before the peak in human cases. Within this region, the greatest spatial concentration of cases occurred in south Oakville, in the L6L and L6K forward sortation areas (FSAs; i.e., the first three digits of the postal code) (Figure 1). We hypothesized that, given a short duration of intense dissemination of the risk message by the public health department, there would be high levels of awareness of WN virus in the population as well as a relatively high level of uptake of the risk reduction message.

METHODS
The survey was conducted in March-April 2003. Households in the L6L and L6K Forward Sortation Areas (FSAs) were selected from a population of 30,467 (2001 census) using random digit dialing. Within households, one randomly selected adult (18+ years) was invited to participate. Given that pediatric neuroinvasive disease is rare, children were excluded from the study. 2 The average income for the population over 15 years of age was $42,827 (compared to $29,261 in Canada and $30,876 in Ontario). Thus, this is a middle- to high-income area. The survey (available from the authors upon request) consisted of questions related to socio-demographic information; information about exposures to mosquitoes, including home environment, potential water reservoirs, and exposures to birds; as well as PPBs. Research staff made home visits to obtain blood. Single serum samples were collected from March 23 to June 5, 2003 (note: specimen collections were interrupted from March 29 to April 16 because of the outbreak of severe acute respiratory syndrome (i.e., SARS)). Respondents were unaware of their serologic status at the time of the telephone interview, thus reducing the possibility of recall bias. The seroprevalence determined in this stage of the study was 3% in the general population (reported in more detail elsewhere). Initially, 1,500 individuals completed the survey, but not all consented to provide a blood sample. As a result, an additional 150 individuals were surveyed. Of the 1,650 total, 1,505 respondents consented to provide a blood sample. No statistically significant differences were found between the two groups on key demographic characteristics; the two groups were therefore pooled for subsequent analysis. Respondents had an average age of 55.6 years; 50% were female; and 93% had completed high school.
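The decision to pool the two recruitment waves rests on the kind of contingency-table comparison described above. The snippet below is a minimal sketch of such a check in Python; the counts are hypothetical and are not taken from the study.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: sex distribution in the initial wave (n=1,500)
# versus the supplementary wave (n=150) recruited to reach 1,650 respondents.
sex_table = [[745, 755],   # initial wave: female, male
             [ 76,  74]]   # supplementary wave: female, male
chi2, p, dof, expected = chi2_contingency(sex_table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a large p-value supports pooling the two waves
```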
In comparison with the population from which they were drawn, there were some discrepancies (Table I) vis-à-vis the sample. Those aged 18-24 (2%) and 25-44 (24%) were under-represented when compared to these age categories in the 2001 Census data (13% and 24%, respectively). Those aged 45-64 (41%) and 65 years and older (31%) were over-represented (30% and 34%, respectively). To evaluate the practice of protective behaviours, we conducted a univariate analysis using the chi-square test and Student's t-test to assess differences between those respondents who practised two or more such behaviours versus others. Similarly, a univariate analysis was conducted to determine differences between respondents who wore mosquito repellent always or sometimes when outdoors for 30 minutes or more and those who rarely or never wore mosquito repellent. The following variables were considered for analysis: checking/cleaning gutters, collections of water present on property, draining items that collect water, worried about WNv, worried more about WNv than pesticide use, gender, mean time spent outdoors at dusk or dawn, mean time spent outdoors total, highest level of formal education completed, and frequency of mosquitoes seen in the home. Multivariable analysis using logistic regression was performed using a backwards, stepwise method, initially selecting variables for inclusion in the model if p<0.20.

RESULTS
The majority (79%) of the 1,650 respondents lived in single-family homes and most of these (74%) were characterized by an open deck or unscreened porch. Further, while 1,507 (60%) of respondents reported having screens on doors and windows that lead to the outside, 394 (24%) of these reported tears in the screens. Three hundred and forty-seven (21%) respondents found mosquitoes in the home once per week or more during the period of reporting. Of respondents, 80% reported remembering receiving information in the summer of 2002 about how to avoid mosquito bites, and 73% reported that they obtained their information about West Nile virus from the media (e.g., ref. 13). Virtually all respondents (99%) were aware of WNv before the survey and that the disease is transmitted through mosquito bites. Approximately three quarters (78%) of respondents were somewhat or very worried about becoming sick with West Nile virus, compared with 59% who were very or somewhat worried about becoming sick from the pesticides used to kill mosquitoes. When asked what worried them more, 56% reported they were more worried about getting sick from West Nile virus, 22% more worried about health impacts of pesticide use, and 18% concerned about the health effects of both. Nearly two thirds of respondents (65%) rarely or never wore insect repellent when outdoors for 30 minutes or more, and half (50%) rarely or never wore long-sleeved shirts and/or long pants when out at dusk or dawn for 30 minutes or more. When remaining respondents who had responded negatively to the above noted questions were asked what else they did to avoid being bitten, over half (51%) reported they did nothing. Sixty-one percent of respondents practised two or more PPBs, including avoiding areas where mosquitoes are likely to be, avoiding going outdoors altogether, wearing long sleeves/long pants when outdoors, and using mosquito repellent when outdoors for 30 minutes or more. Results of the univariate analysis to assess characteristics of those respondents who practised two or more personal protective behaviours between July 1st, 2002 and September 30th, 2002 are shown in Table II.
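As a rough illustration of the modelling strategy described in the Methods above - univariate screening at p < 0.20 followed by backwards stepwise logistic regression - the elimination loop could be sketched as follows. This is not the authors' code: the column names are hypothetical, the variables are assumed to be numeric or dummy-coded, and the removal threshold of 0.05 is an assumption, since the paper specifies only the entry criterion.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backwards_stepwise_logit(df, outcome, candidates, p_remove=0.05):
    """Backwards elimination for a logistic model: repeatedly drop the least
    significant predictor until every remaining p-value falls below p_remove."""
    kept = list(candidates)
    model = None
    while kept:
        X = sm.add_constant(df[kept].astype(float))
        model = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_remove:
            break                      # all remaining predictors meet the threshold
        kept.remove(worst)
    return model, kept

# Illustrative usage with hypothetical survey columns; the candidate list would
# come from the univariate screen at p < 0.20.
# model, kept = backwards_stepwise_logit(
#     survey, "two_or_more_ppbs",
#     ["female", "worried_about_wnv", "hours_outdoors", "mosquitoes_in_home"])
# odds_ratios = np.exp(model.params.drop("const"))   # ORs for retained predictors
# conf_int = np.exp(model.conf_int().drop("const"))  # 95% CIs on the OR scale
```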
Results of the univariate analysis to assess the characteristics of those respondents who wore mosquito repellent when outdoors for 30 minutes or more between July 1st, 2002 and September 30th, 2002 are shown in Table III. Being female, having completed high school, frequency of mosquitoes seen in the home, time spent outdoors at dawn or dusk, total time spent outdoors, being worried about WN virus and being worried more about WN virus than getting sick from pesticide use were all associated at the a priori cut-point of 0.2 with use of mosquito repellent when outdoors for 30 minutes or more. Being female, frequency of mosquitoes in the home, time spent outdoors and being worried about West Nile virus were retained in the final multivariable model for use of mosquito repellent (Table III).

DISCUSSION AND CONCLUSION
Levels of awareness (99.9%) and worry (78%) about WNv in south Oakville in the summer of 2003 were both high in the wake of the outbreak experienced. And yet, the uptake of the public health message - which 80% of respondents reported receiving - was relatively modest. However, a little more than half (61%) of respondents did report undertaking two or more personal protective behaviours. The key determinants of PPBs that emerged from the multivariable analyses were: being female, being worried about West Nile virus, indoor exposure (mosquito repellent use model only) and outdoor exposure. With respect to the latter, the direction of relationship changes between the models. That is, for two or more PPBs, less time spent outdoors meant increased likelihood of use of two or more PPBs. While this might seem counterintuitive at first, it is consistent with the risk perception literature that indicates that familiarity with a risk decreases one's concern. 11,14 However, those who spend more time outdoors generally were more likely to report using mosquito repellent (Table III). It is difficult to find comparative data in the literature to determine whether or not this is a 'typical' response. A study in a similar community in Connecticut that had also experienced an outbreak in the previous summer 10 showed that 57% of respondents wore repellent on skin or clothes and 59% sometimes or always used at least two protective behaviours. With respect to the determinants of these behaviours, Adams et al. 10 discovered a similar picture: using insect repellent was significantly associated with being less than 50 years old, being worried about getting WNv, and spending time outdoors in the evening. Using 2+ PPBs was associated with being female and being worried about getting WNv. Herrington, 15 in a national US survey of 1,750 adults, found that the most robust predictor of behavioural action to prevent mosquito bites was worry about being bitten by an infected mosquito (OR 7.3; 95% CI = 4.3-12.2). Of our sample, 80% reported receiving information the previous summer about how to prevent WNv, and yet the uptake of the message was relatively modest. There are several potential explanations for this. First, the data are based on self-report and could be biased toward a socially-desirable response. Second, 73% of respondents reported receiving their WN virus information from the media, yet the media has been criticized for lack of accurate reporting of environmental health risk issues in general 16 and WN virus in particular. 17 Third, the message was not delivered clearly and/or not well understood.
This seems unlikely, though, as indicated by the relatively high socio-economic status of the study population as well as the fact that indeed 78% reported at least one PPB. Fourth, the risk was simply not seen as such by the general population. Given the high levels of awareness and concern, however, this explanation is not likely. Covello et al. 11 critiqued the risk communication strategy used in the 1999 New York outbreak. They suggest that the risk communication strategy failed in that case for a number of reasons, including lack of consultation with key stakeholders about the perception of the risk; the full range of communication channels not being exploited; too many messages being contained in risk communication materials; and materials produced containing inadequate repetition/visualization. These results have important implications for the public health response to emerging public health threats. It appears likely that WN virus has become endemic in North America, with seasonal recurrences. Indeed, a representative of the Public Health Agency of Canada is quoted in the media as saying: "West Nile virus has become part of the scenery." 18

Results: Nearly 100% of survey respondents had heard of WNv, and about 80% recalled receiving information about the virus from public health services, but reported levels of personal protective measures were relatively low. A multivariable modelling exercise brought to light a range of possible determinants of these outcomes.
2017-06-17T05:38:34.226Z
2008-03-01T00:00:00.000
{ "year": 2008, "sha1": "b954ee894868b0383e00ebdbf8eb911cc9e41cd4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "654ccf59fa03b60d9903bda6ff5937eaa318d5dd", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
235967052
pes2o/s2orc
v3-fos-license
In vivo and in vitro mutagenicity of perillaldehyde and cinnamaldehyde Background Perillaldehyde and cinnamaldehyde are natural substances found in plants that are used as flavoring ingredients. Due to the α,β-unsaturated aldehydes in their structures, these compounds are expected to be DNA reactive. Indeed, several reports have indicated that perillaldehyde and cinnamaldehyde show positive in in vitro and in vivo genotoxicity tests. However, their genotoxic potentials are currently disputed. To clarify the mutagenicity of perillaldehyde and cinnamaldehyde, we conducted in silico quantitative structure–activity relationship (QSAR) analysis, in vitro Ames tests, and in vivo transgenic rodent gene mutation (TGR) assays. Results In Ames tests, perillaldehyde was negative and cinnamaldehyde was positive; these respective results were supported by QSAR analysis. In TGR assays, we treated Muta™ Mice with perillaldehyde and gpt-delta mice with cinnamaldehyde up to the maximum tested doses (1000 mg/kg/day). There was no increase in gene mutations in the liver, glandular stomach, or small intestine following all treatments except the positive control (N-ethyl-N-nitrosourea at 100 mg/kg/day). Conclusions These data clearly show no evidence of in vivo mutagenic potentials of perillaldehyde and cinnamaldehyde (administered up to 1000 mg/kg/day) in mice; however, cinnamaldehyde is mutagenic in vitro. Introduction Perillaldehyde (Table 1) is a natural substance found abundantly in the plant Perilla frutescens (Shiso in Japanese) from the mint family Lamiaceae. Because perillaldehyde has strong antiseptic and bactericidal actions and its scent produces an appetite-enhancing effect [1,2], Shiso is used as an accessory herb when eating raw fish (sashimi) and as an essential ingredient of salted plum (umeboshi) in Japan. In addition, perillaldehyde is used as a flavoring ingredient for salad dressings, sauces, pickled vegetables, and beverages. Cinnamaldehyde (Table 1) is one of the aromatic aldehydes contained in cinnamon. Its main uses are as a flavoring for chewing gum, ice cream, candies, and beverages. It is also used as a fragrance in cosmetics, soaps, and detergents. Cinnamaldehyde is often used as a stomachic, an antipyretic, and an antiallergic drug or as a tonic in traditional Chinese medicines [3]. Furthermore, the antifungal and antibacterial effects of cinnamaldehyde can help reduce infections [4,5]. In terms of their safety, perillaldehyde and cinnamaldehyde have been "generally recognized as safe" (GRAS) by the Expert Panel of the U.S. Flavor and Extract Manufactures Association (FEMA), they have been approved for use by the Food and Drug Administration of the United States, and they were judged to be safe by the Food and Agriculture Organization of the United Nations/World Health Organization Joint Expert Committee on Food Additives [6,7]. On the other hand, the genotoxicity of perillaldehyde and cinnamaldehyde are potential concerns due to the presence of α and β-unsaturated aldehydes in their structures; these unsaturated aldehydes are electrophilic and can react with electron-rich macromolecules, including DNA, to form DNA adducts [8]. Indeed, previous reports have indicated that perillaldehyde and cinnamaldehyde showed positive results in vitro or in vivo genotoxicity tests [9][10][11]. 
Of the many types of genotoxicity, mutagenicity is an important mechanism of chemical-mediated carcinogenesis that is based on the reactivity between DNA and chemical substances resulting in mutations [12]. Given that mutations are irreversible and permanent, mutagenicity does not have a threshold because just one mutation in the genome has the potential to generate a cancerous cell. If a chemical is mutagenic, the risk of cancer cannot be zero, even at low dosage levels [13]. Therefore, determining the presence or absence of a chemical substance's mutagenicity is an important step in cancer risk assessment. In the present study, the mutagenicity of perillaldehyde and cinnamaldehyde were determined systematically using in silico quantitative structure-activity relationship analysis (QSAR), in vitro Ames tests, and in vivo transgenic rodent gene mutation (TGR) assays. QSAR analysis We used both rule-and statistical-based QSAR tools [14]. Derek Nexus 6.1.0 is a rule-based expert SAR system developed by Lhasa Limited, UK [15,16]. The knowledge base includes structural alerts for Ames mutagenicity that have been implemented by experts who assessed relevant Ames data and supporting mechanistic data (e.g., DNA adduct-formation experiments). When a query compound matches a structural alert, Derek Nexus offers the relevant inference level (e.g., certain, probable, plausible, equivocal, doubted, or improbable), which indicates the likelihood that compounds in a class will be active in an Ames test. In our tests, a positive prediction was assigned to the query compound when the reasoning level was equivocal or above. Table 1 The results of (Q) SAR prediction for Ames mutagenicity of perillaldehyde and cinnamaldehyde CASE Ultra is statistical-based QSAR software developed by MultiCASE Inc. (USA). It uses a statistical method to automatically extract alerts based on training data via machine learning technology [17,18]. In this study, we used CASE Ultra version 1.8.0.2 with the GT1_BMUT module. The prediction result of each module was ranked as "known positive," "positive," "negative," "known negative," "inconclusive," or "out of domain." A query chemical that ranked as "known positive," "positive," or "inconclusive" in the Ames test was predicted to be positive. Ames test Using the preincubation method, Ames tests were conducted by contract research organizations following Good Laboratory Practice (GLP) compliance according to the Industrial Safety and Health Act test guidelines [19]. The test guidelines require the use of five strains (Salmonella thyphimurium TA100, TA98, TA1535, and TA1537, and Escherichia coli WP2 uvrA) under both the presence and absence of metabolic activation (rat S9mix), which is similar to the Organization of Economic Co-operation and Development (OECD) guideline TG471 [20]. The positive criterion was the number of revertant colonies increasing by more than two-fold the control in at least one Ames test strain in the presence or absence of S9-mix. Dose dependency and reproducibility were also considered in the final judgment. TGR assay TGR assays were conducted by contract research organizations following GLP compliance according to the OECD guideline TG488 [21]. Animals were treated in accordance with regulations of the Animal Care and Use Committees of the laboratories and the National Institute of Health Sciences, Japan. TGR assay for perillaldehyde using Muta™ mice Male Muta™ Mice (CD 2 -LacZ80/HazfBR) were purchased at 8 weeks of age from Japan Laboratory Animals, Inc. 
(Tokyo, Japan). Administration of perillaldehyde started at 9 weeks of age. Six mice each were treated with perillaldehyde at 125, 250, 500, or 1000 mg/kg/day by oral gavage for 28 days, with corn oil used as a vehicle. Three days after the final treatment, the liver and glandular stomach were each collected and stored. As a positive control group, mice were treated with N-ethyl-N-nitrosourea (ENU) at 100 mg/kg/day by intraperitoneal (i.p.) injection for two consecutive days. Genomic DNA was extracted from the liver and glandular stomach (whole tissue) using the phenol/chloroform method. Transgenes were rescued via an in vitro packaging reaction using Transpack Packaging Extracts (Agilent Technologies, CA). Mutant frequency (MF) was estimated via the lacZ positive selection method [22]. Five mice each from the highest three dose groups (i.e., 250, 500, and 1000 mg/kg/day) were used for mutation assays. MFs were statistically analyzed using Dunnett's test to compare treated groups against the vehicle control group and using Student's or Welch's t-test to compare the positive control group against the vehicle control. A significance level of 5% was adopted with two-tailed tests. TGR assay for cinnamaldehyde using gpt delta mice Male C57BL/6 J gpt delta transgenic mice (C57BL/ 6JJmsSlc-Tg) were purchased at 6 weeks of age from Japan SLC, Inc. (Shizuoka, Japan). Administration of cinnamaldehyde started at 7 weeks of age. Seven to ten mice each were treated with cinnamaldehyde at 125, 250, 500, and 1000 mg/kg/day by oral gavage for 28 days with corn oil as the vehicle. Three days after the final treatment, the liver and small intestine were again each collected and stored. As a positive control, previously collected bone marrow DNA was used. The positive control DNA for the gpt assay was extracted from ENU (50 mg/kg/day, i.p., 5 consecutive days)-treated mice sacrificed 14 days after the final treatment. The positive control DNA for the Spi − assay was extracted from mitomycin C (1 mg/kg/day, i.p., 5 consecutive days)-treated mice sacrificed 7 days after the final treatment. Genomic DNAs of the liver and small intestine (whole tissue) were extracted using a RecoverEase DNA Isolation Kit (Agilent Technologies). Transgenes were rescued by an in vitro packaging reaction using Transpack Packaging Extracts. MF was estimated by the gpt assay for point mutations and by the Spi − assay for deletions [23]. Five animals each from the two highest dose groups (500 and 1000 mg/kg/day) and three animals each from the positive control groups were used for the mutation assays. MFs were statistically analyzed by one-tailed Dunnett's tests or Steel tests with a significance level of 5% to compared treated groups against the vehicle control. Onetailed Student's or Welch's t-tests were used to compare the positive control against the vehicle control at a significance level of 5%. QSAR analysis We used two QSAR tools (DEREK Nexus and CASE Ultra) to predict the Ames mutagenicity of perillaldehyde and cinnamaldehyde. Perillaldehyde was judged negative for mutagenicity by both the QSAR tools, whereas cinnamaldehyde was judged positive by both (Table 1). Ames tests Following treatment with perillaldehyde, the number of revertant colonies did not increase in any of the strains in the presence or absence of S9-mix; however, cytotoxicity was observed from 313 μg/plate in all treatments ( Table 2). 
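The positive criterion described in the Methods - a greater than two-fold increase in revertant colonies over the concurrent solvent control in at least one strain, with or without S9-mix - reduces to a simple fold-induction calculation. The sketch below illustrates it in Python; apart from the TA100 (-S9) pair, which uses the cinnamaldehyde counts reported below (213 vs. 105), the plate counts are invented, and the rule deliberately ignores the dose-dependency and reproducibility considerations that also enter the final judgment.

```python
def fold_induction(treated_mean, control_mean):
    """Fold increase in mean revertant colonies over the concurrent solvent control."""
    return treated_mean / control_mean

def ames_call(results, threshold=2.0):
    """Positive if any strain / S9 condition exceeds the fold-change threshold."""
    return any(fold_induction(t, c) > threshold for t, c in results.values())

# Mean revertants per plate, keyed by (strain, S9 condition).
example = {
    ("TA100", "-S9"): (213, 105),   # cinnamaldehyde counts reported below
    ("TA100", "+S9"): (150, 98),    # invented placeholder
    ("TA98",  "-S9"): (24, 22),     # invented placeholder
}
print(ames_call(example))  # True, driven by TA100 without S9 (fold ~ 2.03)
```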
On the other hand, cinnamaldehyde treatment dose-dependently induced revertant colonies in TA100 in both the presence and absence of S9-mix (Table 3). However, the maximum number of revertant colonies was 213 (TA100 in the absence of S9-mix) at 313 μg/ plate, which was slightly more than twice the number (105 colonies) detected in the negative control (DMSO) and indicates weak mutagenicity. Signs of cytotoxicity were observed at 625 μg/plate in all cinnamaldehyde treatments. According to these results, we concluded that perillaldehyde was negative and cinnamaldehyde was positive in the Ames tests. TGR assays Perillaldehyde treatment in Muta™ mice In the 1000-mg/kg/day perillaldehyde treatment group, one death was observed on day 5 (before the treatment). However, no significant weight loss was observed in all cases (except for the dead mouse; data not shown). MFs of lacZ genes in the liver and glandular stomach tissues from perillaldehyde-treated mice were not significantly higher than MFs of genes in comparable tissues from negative control animals (Tables 4 and 5). In contrast, the positive control ENU significantly increased MFs in the liver and glandular stomach (p ≤ 0.05). Cinnamaldehyde treatment in gpt delta mice During the treatment of cinnamaldehyde, one death in 7 animals was observed in each of 250 mg/kg/day (on day 6) and 500 mg/kg/day treatment group (on day 5). In the 1000-mg/kg/day treatment group, four death in 10 animals were observed (on day 5,5,7,14). No significant weight loss was observed in other mice. Gene mutation analysis was performed in the highest dosage groups (500 and 1000 mg/kg/day); no significant increase in MF was observed in cinnamaldehyde-treated mice tested in Table 2 The results of the Ames test with perillaldehyde (Tables 6 and 7). In Spiassays of the liver and small intestine, a significant increase in MF was not detected in cinnamaldehyde-treated groups (data not shown). On the other hand, gpt and Spi MFs in the positive control groups (ENU and MMC, respectively) showed significant increases at the 5 and 1% significance levels, which confirm the validity of this study system. Discussion Of the 4500 types of food flavor currently registered worldwide, the flavors permitted for use differ among countries [24,25] and not all flavors are guaranteed to be safe. If a chemical substance that is intentionally added to foods, such as a food flavor, is genotoxic and suspected to be carcinogenic, its use is typically banned across countries. Since it is difficult to perform carcinogenicity tests on many flavors due to the cost and requirement of large amount of flavor samples, the flavor chemicals permitted for use are usually determined by the results of genotoxicity tests. The International Council for Harmonization of Pharmaceutical Regulations (ICH) M7 guideline, "Assessment and control of DNA reactive (mutagenic) impurities in pharmaceuticals to limit potential carcinogenic risk" issued in 2014 states that the assessment of genotoxicity for low-level chemicals such as pharmaceutical impurities should be conducted via Ames (mutagenicity) tests [26]. Other types of genotoxicants that are non-mutagenic typically have threshold mechanisms and usually do not pose carcinogenic risk in humans at the level ordinarily present as impurities. The guideline also recommends the use of QSAR analysis to predict the Ames test results as well as an in vivo TGR assay to follow-up on positive results from Ames tests. 
Similar to pharmaceutical impurities, food flavors are chemicals to which humans are exposed at low levels through food. Therefore, we assessed the mutagenicity of perillaldehyde and cinnamaldehyde according to the ICH-M7 approach. Perillaldehyde and cinnamaldehyde have α,β-unsaturated aldehydes in their structure, which are a representative structural alerts for mutagenicity [27,28]. Chemicals Table 3 The results of the Ames test with cinnamaldehyde with an α,β-unsaturated aldehyde have electrophilicity that may interact with DNA. In addition to the carbon in the carbonylic functionality (1,2-addition), the βcarbon is positively polarized because of conjugation with the carbonyl group and becomes the preferred site of nucleophilic attack (1,4-addition) by the Michael reaction [29]. The first product of the 1,4-addition is a resonance-stabilized enolate ion. In the present study, perillaldehyde did not show mutagenicity in the Ames test. Because the cyclic structure of perillaldehyde can inhibit enolate ion production and because β-carbon is probably inactive, perillaldehyde does not exhibit mutagenicity. Consistent with this result, perillaldehyde was previously reported to be negative in an Ames test [9,30,31]. Since this information is integrated into Derek Nexus and Case Ultra as knowledge, their QSAR predictions were "inactive" and "known negative," respectively. We confirmed the negative result in the Ames test using a TGR assay with Muta™ Mice. Because gene mutations did not increase in the liver and glandular stomach of mice treated with perillaldehyde at the maximum dose tested, we concluded that perillaldehyde poses no risk of cancerrelated mutagenicity. Although perillaldehyde is a naturally occurring chemical used as a food flavoring worldwide and considered GRAS [6,7], the European Food Safety Authority (EFSA) Panel on Food Contact Materials, Enzymes, Flavorings, and Processing Aids requested additional data related to the possible genotoxic potential of flavoring substances with α,β-unsaturated carbonyl structures including perillaldehyde. This request was made because α,β-unsaturated carbonyl compounds can react with nucleophilic sites in DNA through a 1,4-nucleophilic addition. In response to the EFSA request, perillaldehyde was assessed by Ames tests, in vitro micronucleus (MN) assays, and an in vitro HPRT mutation assay. In addition, in vivo MN and comet assays were also conducted in male rats. Results showed a statistically significant increase in revertant colony number according to the Ames test Table 4 The results of TGR assay in liver of MutaTM Mouse after perillaldehyde treatment Corn oil: Negative control (5 mL/kg) ENU: positive control (N-ethyl-N-nitrosourea, 10 mL/kg, i.p., dose once a day, for 2 days, expression period; 10 days) *p < 0.05, significant difference from control (Kastenbaum and Bowman method, upper-trailed) (TA98, −S9 mix), whereas the in vitro MN and HPRT mutation assays showed negative results. In in vivo MN and comet assays, there was no significant increase in MNs in the bone marrow of male rats following oral gavage administration of perillaldehyde doses up to 700 mg/kg/day; however, a small but statistically significant increase in comet tail intensity in the liver was observed at the highest dose (700 mg/kg/day). The study director reported that this small increase was within the distribution of historical negative control data and not biologically relevant; rather, it was most likely an artifact of the observed hepatic cytotoxicity. 
Therefore, the study director concluded that there was no genotoxic concern in vivo for perillaldehyde [32]. The results of these genotoxicity studies were reviewed by EFSA to determine whether perillaldehyde had genotoxic potential. Contrary to the conclusions stated in the study reported above, EFSA determined that the results of the in vitro HPRT mutation assay and in vivo comet assay were equivocal and positive, respectively. Based on concerns about genotoxicity in the liver, EFSA concluded that perillaldehyde was a potential safety concern as a flavoring substance [33,34]. In response to the conclusion of EFSA, the Expert Panel of FEMA reviewed the newly available data and considered its interpretation relative to standard guidelines [35]. Ultimately, FEMA concluded that the results of the comet assay were consistent with the interpretation provided by the study director, i.e., that perillaldehyde does not appear to have any in vivo genotoxic potential [9]. Therefore, the genotoxic properties of perillaldehyde currently remain under dispute. It may be difficult to end to the dispute between EFSA and FEMA with limited data because the battery of genotoxicity tests used for the assessment of genotoxic potential in perillaldehyde, conducted at the request of EFSA, is inappropriate and cannot be globally accepted. EFSA was initially concerned about in vitro mutagenicity in the Ames test and HPRT mutation assay. To confirm in vitro mutagenicity in vivo, it is necessary to conduct in vivo mutagenicity tests such as TGR assays. According to the Table 5 The results of TGR assay in glandular stomach of MutaTM Mouse after perillaldehyde treatment Corn oil: Negative control (5 mL/kg) ENU: positive control (N-ethyl-N-nitrosourea, 10 mL/kg, i.p., dose once a day, for 2 days, expression period; 10 days) *p < 0.05, significant difference from control (Kastenbaum and Bowman method, upper-trailed) ICH-S2 (R1) guideline (Guidance on Genotoxicity Testing and Data Interpretation for Pharmaceuticals Intended for Human Use), the comet assay is acceptable in follow-up studies to confirm the positive results of in vitro mammalian cell genotoxicity tests but not to confirm the positive results of Ames tests [36]. The ICH-M7 (R1) guideline recommends the TGR assay as a follow-up test if an impurity in pharmaceuticals produces positive results in an Ames test [26]. Therefore, it is internationally agreed that a TGR assay is essential for confirming Ames mutagenicity. In the current study, we clearly demonstrated that perillaldehyde was negative for mutagenicity in both an Ames test and TGR assay. It is unclear why our Ames test result was negative whereas EFSA's result was positive. The purity of perillaldehyde used in our study was 97.3% but in the EFSA study it was 91.9-94.2%. In addition, the Ames test by EFSA showed a clear increase in mutants only at high doses (> 1000 μg/plate). This suggests that a small amount of impurity in perillaldehyde in the EFSA study may have produced the Ames mutagenicity. Regardless, there are no concerns about the in vivo mutagenicity of perillaldehyde because of the negative result shown in our TGR assay. We hope that EFSA will review our current study and reassess the mutagenicity of perillaldehyde in the near future. 
EFSA determined that cinnamaldehyde lacks direct mutagenic and genotoxic activity [37,38], although positive results have been recorded in some in vitro and in vivo genotoxicity tests, including Ames tests, and cinnamaldehyde has α,β-unsaturated aldehydes in its structure [10,11,30,31]. Ishidate et al. reported a positive result in an Ames test for cinnamaldehyde in the TA100 strain [30,31]; however, only borderline mutagenicity was observed in the absence of S9-mix, with the revertant frequency slightly more than twice the spontaneous frequency. In the present study, cinnamaldehyde showed a similar response, i.e., the maximum revertant frequency in the TA100 strain in the absence of S9-mix was slightly more than twice that in the negative control. Trans-cinnamaldehyde (cas# 104-55-2), 4′-methoxy cinnamaldehyde (cas#1963-36-6), and benzalacetone (4-phenyl − 3-buten-2-one; cas# 122-57-6), which are cinnamaldehyde-related flavor chemicals, have also shown positive results in Ames tests [39]. We concluded that cinnamaldehyde and its derivatives are mutagenic in vitro, given that they show reproducible Ames mutagenicity and have α,β-unsaturated aldehydes. However, the TGR assay employed in the present study clearly demonstrates that there is no concern about in vivo mutagenicity from cinnamaldehyde. α,β-Unsaturated aldehydes are converted to less electrophilic molecules via three pathways Table 6 The results of TGR assay in liver of gpt delta mice after cinnamaldehyde treatment Corn oil: Negative control (5 mL/kg) ENU: positive control (N-ethyl-N-nitrosourea, 10 mL/kg, i.p., dose once a day, for 5 days, expression period; 10 days) *p < 0.05, significant difference from control (Welch's I-test) in vivo: oxidation, conjugation with glutathione, and reduction. The detoxification efficiency and reaction efficiency with DNA vary depending on the structure [40]. Kiwamoto et al. demonstrated that although cinnamaldehyde induces a higher DNA adduct level than other α,β-unsaturated aldehydes, this level is three orders of magnitude lower than the natural background levels of structurally similar DNA adducts observed in the human liver, i.e., the observed level does not show mutagenicity [41]. Indeed, most α,β-unsaturated aldehydes may be of no concern in terms of mutagenicity and carcinogenicity in vivo. In conclusion, the present study clearly demonstrates that perillaldehyde and cinnamaldehyde do not produce in vivo mutagenicity when administered at doses up to 1000 mg/kg/day in mice; however, cinnamaldehyde is mutagenic in vitro. Table 7 The results of TGR assay in small intestine of gpt delta mice after cinnamaldehyde treatment Corn oil: Negative control (5 mL/kg) ENU: positive control (N-ethyl-N-nitrosourea, 10 mL/kg, i.p., dose once a day, for 5 days, expression period; 10 days) *p < 0.05, significant difference from control (Welch's I-test) Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2021-07-17T13:28:54.991Z
2021-07-16T00:00:00.000
{ "year": 2021, "sha1": "3d85091fe87767476f5c562ebf722c8434f1db6f", "oa_license": "CCBY", "oa_url": "https://genesenvironment.biomedcentral.com/track/pdf/10.1186/s41021-021-00204-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1dafab8dc5ea4c3b9fa3975d10acc0db05a9701e", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
2480388
pes2o/s2orc
v3-fos-license
Psychological And Behavioral Treatment Of Insomnia:Update Of The Recent Evidence (1998-2004) There is also a need to disseminate more effectively the available evidence in support of psychological and behavioral interventions to health-care practitioners working on the front line. Background: Recognition that psychological and behavioral factors play an important role in insomnia has led to increased interest in therapies targeting these factors. A review paper published in 1999 summarized the evidence regarding the efficacy of psychological and behavioral treatments for persistent insomnia. The present review provides an update of the evidence published since the original paper. As with the original paper, this review was conducted by a task force commissioned by the American Academy of Sleep Medicine in order to update its practice parameters on psychological and behavioral therapies for insomnia. Methods: A systematic review was conducted on 37 treatment studies (N = 2246 subjects/patients) published between 1998 and 2004 inclusively and identified through PsycInfo and Medline searches. Each study was systematically reviewed with a standard coding sheet and the following information was extracted: Study design, sample (number of participants, age, gender), diagnosis, type of treatments and controls, primary and secondary outcome measures, and main findings. Criteria for inclusion of a study were as follows: (a) the main sleep diagnosis was insomnia (primary or comorbid), (b) at least 1 treatment condition was psychological or behavioral in content, (c) the study design was a randomized controlled trial, a nonrandomized group design, a clinical case series or a single subject experimental design with a minimum of 10 subjects, and (d) the study included at least 1 of the following as dependent variables: sleep onset latency, number and/or duration of awakenings, total sleep time, sleep efficiency, or sleep quality. Results: Psychological and behavioral therapies produced reliable changes in several sleep parameters of individuals with either primary insomnia or insomnia associated with medical and psychiatric disorders. Nine studies documented the benefits of insomnia treatment in older adults or for facilitating discontinuation of medication among chronic hypnotic users. Sleep improvements achieved with treatment were well sustained over time; however, with the exception of reduced psychological symptoms/ distress, there was limited evidence that improved sleep led to clinically meaningful changes in other indices of morbidity (e.g., daytime fatigue). Five treatments met criteria for empirically-supported psychological treatments for insomnia: Stimulus control therapy, relaxation, paradoxical intention, sleep restriction, and cognitive-behavior therapy. Discussion: These updated findings provide additional evidence in support of the original review's conclusions as to the efficacy and generalizability of psychological and behavioral therapies for persistent insomnia. Nonetheless, further research is needed to develop therapies that would optimize outcomes and reduce morbidity, as would studies of treatment mechanisms, mediators, and moderators of outcomes. Effectiveness studies are also needed to validate those therapies when implemented in clinical settings (primary care), by non-sleep specialists. There is also a need to disseminate more effectively the available evidence in support of psychological and behavioral interventions to health-care practitioners working on the front line. 
… month and often several years. Insomnia complaints are typically associated with reports of daytime fatigue, problems with memory and concentration, and mood disturbances, impairments that may be the primary concerns prompting patients to seek treatment. Insomnia can be a symptom of several other conditions including medical, psychiatric, substance abuse or another sleep disorder; or, it can be a disorder in itself as in primary insomnia. 6-8 There are several treatment options available for insomnia, including psychological/behavioral approaches, various classes of medications, and a host of complementary and alternative therapies (e.g., herbal/dietary supplement, acupuncture). The present paper focuses on psychological and behavioral approaches to treating insomnia. These procedures have received increasing research attention in the past 2 decades, and were noted as effective therapies at a recent National Institutes of Health State-of-the-Science Conference 9 on the manifestations and management of chronic insomnia. These methods include stimulus control therapy, sleep restriction, relaxation-based interventions, paradoxical intention, cognitive therapy, and combined cognitive-behavioral therapy. A brief summary of the nature of these interventions is presented in Table 1; more extensive descriptions are available in other sources. 10,11

2.0 PURPOSE
The objective of this paper is to provide an update of the evidence regarding the efficacy, effectiveness, durability, and generalizability of psychological and behavioral interventions for persistent insomnia. The evidence is reviewed for the treatment of both primary insomnia and insomnia associated with other medical, psychiatric, or substance abuse disorders. As with the initial review paper, 12 this updated review was commissioned by the Standards of Practice Committee of the American Academy of Sleep Medicine.

Search Methods, Keywords, and Databases
Treatment studies selected for review in this paper were identified through PsycInfo and Medline searches for research conducted from 1998 through 2004 inclusively. The following key words were used: nonpharmacologic, behavior therapy, cognitive therapy, psychotherapy, alternative medicine, stimulus control, progressive relaxation therapy or progressive muscle relaxation, paradoxical techniques or paradoxical intention, behavior modification, cognitive behavior therapy, psychological therapy, treatment, intervention, behavioral intervention, treatment, cognitive treatment, alternative treatment, therapy, biofeedback, sleep restriction, sleep deprivation, complementary therapies, mind-body and relaxation techniques, aromatherapy, biofeedback, hypnosis, imagery, or meditation, relaxation, relaxation techniques, yoga, massage. These terms were combined with sleep disorders or sleep initiation and maintenance disorders, or insomnia, or dyssomnia. The search was limited to humans, adults (18 and older), English or French language.

Selection Criteria of Treatment Studies
The initial PsycInfo and Medline searches yielded a total of 312 titles of potential interest; an additional 34 titles were identified by members of the task force through their own reading of the literature, for a total of 346 titles of potential interest. Of these, 102 abstracts were read by the task force chair for initial screening and 53 articles were selected for full review by 2 independent members of the task force. Only peer-reviewed published articles were retained at this phase.
Each rater used a standard extraction sheet to summarize information about the study including experimental design, sample (number of participants, age, gender), diagnosis, type of treatments and controls, primary and secondary outcome measures, and main findings. Data extraction was completed independently and discrepancies between 2 members of a pair of raters were resolved through discussion with the chair and other members of the task force. The criteria for inclusion of a study were: (a) the main sleep diagnosis was insomnia (primary or comorbid), (b) at least 1 treatment condition was psychological or behavioral in content, (c) the study design was a randomized controlled trial, a nonrandomized group design, a clinical case series or a single subject experimental design with a minimum of 10 subjects, (d) the dependent measure included 1 or more of the following variables (as measured by daily sleep diaries, polysomnography (PSG), or actigraphy): Sleep onset latency (SOL), number of awakenings (NA), time awake after sleep onset (WASO), total sleep time (TST), sleep efficiency (SE), or sleep quality Psychological And Behavioral Treatment Of Insomnia-Morin et al (SQ). Studies using global measures of treatment outcome were included if those measures had been validated. Studies of circadian disorders (e.g., phase delay or phase advance syndromes) and those using other nonpharmacological therapies (e.g., light therapy, electro-sleep therapy) or complementary and alternative therapies (e.g., acupuncture) were excluded from this review. The majority of the initial 346 titles of potential interest were excluded either because they were not treatment studies for insomnia or because the main intervention was pharmacological. Of the 53 full articles reviewed and rated, 16 were rejected. The main reasons for article rejection were: no documentation of an insomnia diagnosis, sample size smaller than 10, results were secondary analyses of other databases (e.g., predictors of treatment response) and used global and non-validated measures of outcome. A list of excluded studies with reasons for exclusion is provided in an, Appendix A. The 37 studies that met inclusion criteria are listed in Table 2. For each study, we report the study design and the evidence level (using the Sackett System, 1993), the sample size (enrolled/completed), age, gender, insomnia diagnosis (primary or secondary), types of treatment and control conditions, therapy dosage, format, setting, therapist training, treatment duration and longest follow-up, primary and secondary outcome measures, and summary of main findings. Although some studies not meeting inclusion criteria are discussed in the text they are not included in the evidence table. Based on the Sackett sytem, 13 the criteria for grading evidence level of each study were: Randomized well-designed trials with low alpha and beta error (Grade I), randomized trials with high alpha and beta error (Grade II), nonrandomized concurrently controlled studies (Grade III), nonrandomized historically controlled studies (Grade IV), case series (Grade V). Table 2 summarizes the main features of the 37 studies selected for this review paper. The number of studies meeting Sackett System standards for grading evidence levels 13 was as follows: 11 studies were graded I, 13 studies graded II, 2 studies graded III, 5 studies graded IV, and 6 studies graded V. 
A total of 2246 patients with insomnia were enrolled in the 37 studies and approximately 2029 of those completed treatment, for an attrition rate of less than 10% overall. With a few exceptions (studies of Veterans, substance abusers, internet-based treatment), there was a larger representation of women than men enrolled in most studies, with a typical ratio of 2:1, which is representative of insomnia prevalence estimates. 1 Nine of the 37 studies focused specifically on older adults (average age > 60 years old). Descriptive Features of the Studies and Samples The main sleep diagnoses of patients enrolled in the studies were primary or psychophysiological insomnia (28 studies), insomnia associated with medical (4 studies) or psychiatric disorders (3 studies) or a mix of both conditions (6 studies), and hypnotic-dependent insomnia (5 studies). Eight studies examined treatment efficacy in patients with different subgroups of insomnia diagnoses, leading to a total number of studies greater than 37 studies. The majority of reviewed studies (n = 33) relied on prospective daily sleep diaries to document treatment outcome. Participants were typically required to complete a daily diary for a minimum of a 1 or 2 week baseline period, for the duration of treatment, and for an additional 1 or 2 week period at post treatment and follow-ups. We retained 3 studies using the Pittsburgh Sleep Quality Index (PSQI) as the primary outcome. [14][15][16] A few studies have also included polysomnography (n = 7) 17-23 and actigraphy (n = 6) 18,19,[24][25][26][27] to complement subjective reports from daily sleep diaries. Those studies are identified in Table 2 and in the appropriate subsections of the results. Primary dependent variables derived from these assessment methods were sleep onset latency (SOL), wake time after sleep onset (WASO), total sleep time (TST), sleep efficiency (SE), and sleep quality (SQ). Secondary outcomes included measures of insomnia severity (Insomnia Severity Index; ISI, 28 sleep quality (PSQI), 29 psychological symptoms (Beck Depression Inventory, BDI, 30 State-Trait Anxiety Inventory STAI 31 , and fatigue (Multidimensional Fatigue Inventory, MFI 32 ). The following sections summarize the evidence regarding the efficacy of treatment for primary insomnia, the generalizability of the evidence to different forms of insomnia (primary, secondary, hypnotic-dependent), insomnia in older adults, and the clinical significance and durability of sleep improvements over time. Comparative findings of single and multifaceted therapies and of different treatment implementation models are also summarized. Treatment of Primary Insomnia Seventeen studies evaluated the effects of treatment for primary insomnia (see Table 2). Five of those studies were randomized clinical trials (RCT; with grade I, 17,20,21,33,34 4 of which used CBT as the main intervention. In a comparative study of CBT (without relaxation), relaxation, and a psychological placebo (i.e., quasidesensitization) with a sample of 75 primary insomniacs, 17 CBT produced greater improvements on the main diary and PSG-defined sleep measures (e.g., SE, WASO) relative to relaxation and control. More CBT patients (64%) achieved clinically significant outcomes compared to relaxation (12%) and placebo (8%). In an effectiveness trial 33 conducted in primary care that evaluated CBT against a wait-list control group, active treatment was found superior to the control condition on most primary and secondary outcome measures. 
SOL was reduced from 61 to 28 min following active treatment, compared with a change from 74 to 70 min for the control condition. Smaller improvements were noted on WASO. There was no significant change in TST during treatment, but an increase of about one-half hour over baseline was obtained at follow-up. Of those patients using hypnotic medications at baseline, 76% were medication-free at the end of treatment and 80% at the 12-month follow-up. In a comparative study 21 of CBT, medication (temazepam), and combined CBT plus medication, all 3 active treatments improved more than pill placebo on the main outcomes of WASO and SE, with a trend for the combined intervention to yield the greatest benefits. PSG data produced similar outcomes, although of smaller magnitude, but only the combined condition was significantly superior to placebo on the main outcome variables. According to PSG, more patients in the CBT (56%) and combined (68%) conditions achieved clinically significant changes (i.e., SE > 85%) relative to the medication-alone (47%) or placebo (22%) groups. Additional studies of primary insomnia included another clinical trial in older adults, 20 other treatment comparisons, 23,37 evaluations of treatment implementation models, 38,39 and case series. 40,41 Some of these studies will be discussed in later sections of this paper.

Treatment of Insomnia Associated with Other Medical or Psychiatric Disorders

Twelve investigations have evaluated the efficacy of psychological and behavioral treatments for insomnia associated with another medical or psychiatric disorder. These studies have focused, for example, on patients with chronic pain, 25 cancer, 16,42 alcohol dependence, 26 and older adults with various medical illnesses. 27,43 Only 4 of those 12 studies were RCTs (Grade I or II); [25][26][27]43 the remaining were nonrandomized studies or clinical replication series. In a study of 60 patients with insomnia associated with chronic pain, 25 CBT was significantly more effective than control on measures of SOL, WASO, and SE, but not on number of awakenings or TST. SOL was reduced from 55 min to 28 min and SE increased from 72% to 85%. Nocturnal motor activity (as measured by actigraphy) was reduced in the treated group but not in the control group; there were no group differences on pain ratings, depressive symptoms, or medication use. In a study of 51 older adults with insomnia associated with medical illness, 27 the CBT and relaxation conditions were more effective than control on diary measures of WASO and SE, as well as on a measure of overall sleep quality (PSQI); the relaxation group had a greater increase in TST than CBT and controls. A higher proportion of treated patients relative to controls achieved clinically significant improvements. There were no differential group effects on actigraphy, medication use, or other secondary measures of anxiety, depression, and quality of life. In a study of 49 older adults with insomnia associated with medical and psychiatric conditions, 43 a combined intervention of stimulus control, relaxation, and education reduced WASO by 25 min and increased SE by 11% at post-treatment. Fifty-seven percent (57%) of treated patients achieved clinically significant improvements on SE relative to 19% of control patients; there was no significant change on secondary measures of anxiety, depression, and impact of insomnia. Outcomes were similar for individuals with insomnia associated with a medical condition and those with insomnia related to a psychiatric disorder.
A controlled study conducted with recovered alcoholics 26 showed modest but significant improvements of SOL (-18 min) and SE (+10%) among insomnia-treated patients. At the 6-month follow-up, 15% of treated participants had relapsed with alcohol, and this proportion did not differ between treated and control patients. Several additional studies (clinical replication series or uncontrolled group studies) have also provided evidence that patients with medical and psychiatric disorders can benefit from sleep/insomnia-specific interventions. Two investigations with cancer patients 16,42 showed that CBT was associated with improvements of sleep and of daytime functioning (e.g., fatigue, energy). One case series study of 67 patients with psychiatric disorders and insomnia 44 reported significant improvements of sleep, mood, fatigue, and reduced use of sleep medication, while a smaller study 45 found no change on specific sleep parameters (SOL and WASO) but reported significant reductions of global insomnia severity as measured by the ISI. Additional evidence supporting the efficacy of CBT was reported in 3 clinical replication series [46][47][48] conducted with heterogeneous samples of patients presenting to sleep disorders clinics with a variety of primary and secondary insomnia diagnoses. Although conclusions drawn from these uncontrolled studies should be treated with caution because of their high attrition rates, the evidence suggests that, among those who received an adequate treatment exposure (average of 6-8 therapy sessions), outcomes were comparable to those of patients with primary insomnia enrolled in controlled clinical trials. Furthermore, treatment response appeared comparable between patients with medical or psychiatric comorbidity and those with primary insomnia in one study. Baseline anxiety, depression, and insomnia severity did not differ among treatment responders and nonresponders. 48

Treatment of Insomnia in Older Adults

Nearly 25% (9 of 37) of the reviewed studies were conducted with older adults (average age > 60 years old). This is in sharp contrast to our previous review, 12 which included only a handful of studies with older subjects. Three studies focused on older adults with primary insomnia, 18,20,21 2 on insomnia associated with medical or psychiatric illnesses, 27,43 1 included a mix of patients with primary and comorbid insomnia, 49 2 evaluated the impact of psychological and behavioral interventions specifically in older adults who were chronic users of hypnotic medications, 15,22 and 1 study 19 examined the moderating role of upper airway resistance syndrome in the treatment of postmenopausal insomnia. With the exception of the study conducted by Pallesen et al., 49 all these investigations were RCTs. In a study of 89 older adults with primary insomnia, 20 sleep restriction and relaxation were both more effective than a psychological placebo for reducing WASO; changes were identical (67 min to 43 min) for the 2 treatment groups at the end of the 6-week treatment phase, but sleep restriction produced the best outcome at the 1-year follow-up. No significant changes were obtained on PSG measures. All 3 conditions, including the placebo control, showed improvements on secondary measures of fatigue and a measure of insomnia impact. In another placebo-controlled study, 21 76 older adults treated with CBT, medication (temazepam), or combined CBT plus medication improved more than those receiving placebo on the main outcome measures of WASO and SE.
PSG comparisons yielded improvements in the same direction, albeit of smaller magnitude than those reported on sleep diaries. A greater proportion of patients treated with CBT, alone or combined with medication, achieved clinically significant improvements (i.e., SE > 85%) compared to those receiving medication alone or placebo. In a comparison of sleep restriction, with and without an optional daytime nap, to sleep education alone, 18 both sleep restriction conditions produced greater SE increases, with reduced time spent in bed, relative to the control condition. There was no significant group difference on actigraphy or PSG measures; TST was reduced for the sleep restriction conditions at post-treatment and returned towards baseline values at the 3-month follow-up. There was a mild increase of physiological sleepiness (as measured by the MSLT) but no change in subjective sleepiness. In another investigation, 49 a combination of sleep education plus stimulus control was as effective as sleep education plus relaxation, and more effective than a wait-list control, in older adults with mixed primary and secondary insomnia; there were modest improvements of daytime measures for both active conditions. Two additional studies (reviewed in section 4.3) provided evidence that older adults with insomnia and comorbid medical disorders also benefitted from sleep-specific interventions. 20,27

Treatment of Insomnia Among Chronic Hypnotic Users

Four investigations examined the efficacy of psychological and behavioral interventions for insomnia in the context of chronic hypnotic usage, including 2 that were conducted with older adults. 15,22 In a study of 209 chronic hypnotic users, 15 CBT (with an optional medication taper) was associated with improved PSQI scores and reductions of hypnotic use at the 3- and 6-month follow-ups. A greater percentage of patients treated with CBT for insomnia (39%) relative to no-treatment controls (11%) achieved at least a 50% reduction of hypnotic use relative to baseline at the 6-month follow-up. A cost-offset analysis revealed that while CBT added to the initial treatment cost, there was a significant cost offset at follow-up resulting from a reduction of sleep medication usage. In another study comparing a supervised medication withdrawal program, alone and combined with CBT for insomnia, to CBT alone, 22 all 3 interventions produced significant reductions in both the quantity (90%) and the frequency (80%) of benzodiazepine use, and more patients in the combined approach (85%) were medication-free at post-treatment than those receiving the taper schedule alone (48%) or CBT alone (54%). There were modest changes in sleep patterns during the initial 10-week withdrawal phase, but CBT-treated patients reported greater sleep improvements relative to those receiving the medication withdrawal alone. Improvements were also reported on secondary measures of insomnia severity (ISI) and of anxiety (BAI) and depressive (BDI) symptoms. Two additional studies have examined the impact of chronic hypnotic use on outcome in middle-aged adults. One study 50 found a significant worsening of sleep parameters during medication withdrawal, and the addition of relaxation therapy did not attenuate this effect. In a similar study using stimulus control as the main behavioral intervention, 51 stimulus control produced significant improvements on most sleep parameters relative to no additional treatment. There was no difference in sleep outcomes between medicated and nonmedicated patients.
Validation and Comparative Efficacy of Single and Multifaceted Therapies

Although there are several distinct psychological and behavioral therapies for insomnia, there was a clear trend for investigators to combine 2 or more of these methods when treating insomnia. The most common combination involves an educational (sleep hygiene), a behavioral (stimulus control, sleep restriction, relaxation), and a cognitive therapy component, usually referred to as cognitive-behavior therapy. Indeed, 21 studies have evaluated the efficacy of CBT, either with (12 studies) or without relaxation (9 studies), and 5 more studies have used similar multi-component interventions but without cognitive therapy (see Table 2). There has been no complete dismantling of CBT to isolate the relative efficacy of each component within the same study. However, comparisons of some components revealed that CBT was superior to relaxation alone in 1 study of primary insomnia 17 and sleep restriction was superior to relaxation at follow-up in another study with older adults. 20 In another study, 52 relaxation was more effective for sleep initiation problems relative to sleep hygiene education alone and a combination of stimulus control plus sleep restriction, whereas the latter combination had greater effects on sleep maintenance variables. Twelve studies have isolated in a controlled trial at least 1 therapy component such as relaxation, sleep restriction, stimulus control, or paradoxical intention. All 6 studies contrasting relaxation-based interventions (either progressive muscle relaxation or similar procedures) with a control condition have reported that this single therapy was more effective than wait-list, 27 placebo, 17,20 no treatment, 37,50 and minimal sleep hygiene education controls. 52 Two studies showed that sleep restriction was superior to either placebo 20 or sleep hygiene education alone, 18 1 study found that stimulus control was more effective than a wait-list control, 51 and 1 additional investigation reported that paradoxical intention was superior to a wait-list control for sleep onset insomnia. 24 In spite of the inclusion of a cognitive therapy component in numerous studies, no study has yet evaluated its unique contribution to outcomes.

As in the earlier review paper, 12 criteria developed by the American Psychological Association 53 for defining empirically validated psychological treatments were used to determine whether additional evidence was available for each psychological and behavioral intervention (see Table 3). Based on the criteria outlined in our previous review, 12 stimulus control therapy, relaxation training, and paradoxical intention met criteria for well-established psychological treatments for insomnia. With the additional evidence from the present review (indicated by studies in boldface in Table 3), sleep restriction 18,20 and CBT 17,21 would also meet criteria for well-established treatments. Furthermore, additional studies strengthened the level of evidence supporting stimulus control, 51 relaxation, 17,20,27,37 and paradoxical intention. 24

Comparisons of Psychological/Behavioral Therapies and Medication

Five controlled studies conducted with primary/psychophysiological insomnia patients have evaluated the impact of psychological/behavioral interventions in comparison to or as an adjunct to hypnotic medications.
Two studies evaluated the efficacy of CBT, singly and combined with medication, 21,34 1 used a medication-alone condition as a comparator to psychological treatments, 52 and 2 examined the incremental benefits of adding 1 treatment to the other. 54,55 In a placebo-controlled comparison of CBT and medication (temazepam), singly and combined 21 (study described in Sections 4.2 and 4.4), all 3 active treatments were more effective than placebo on sleep continuity variables, with a trend for the combined approach to yield better outcomes. These results were corroborated with PSG measures, although the magnitudes of sleep improvements were smaller on PSG than on diary measures. Long-term follow-up data showed that subjects treated with CBT sustained their clinical gains over time, whereas those treated with medication alone did not. The combined approach showed some loss of therapeutic benefits over the follow-up periods, although there was more variability across subjects in that condition. In a similar study design with 63 young and middle-aged adults with sleep onset insomnia 34 (study described in Section 4.2), CBT was shown to be more effective than medication (zolpidem) and placebo on measures of SOL and SE. All 4 conditions, including placebo, increased their TST, with medication yielding the largest increase in TST (69 min), though this increase was not significantly different from that of the other groups. There was no significant difference between CBT alone and CBT combined with medication. Sleep changes were well maintained at the 12-month follow-up for patients treated with CBT, singly or combined with medication, but no follow-up data were available for those treated with medication alone. Data obtained from a Nightcap device showed sleep changes in the same direction as those from diaries, except that no improvement was obtained on any of the measures for the placebo condition. An investigation of 41 patients treated with estazolam examined the added benefits of muscle relaxation, imagery training, and sleep hygiene education to medication. 55 There was no group difference on any outcome. Significant improvements from baseline to post-treatment were obtained on WASO and SE for the relaxation (-17 min and +9.7%, respectively) and imagery training (-33 min and +7.4%) groups. TST increased by 34 min (education), 40 min (imagery), and 65 min (muscle relaxation) over the same period, but there was no significant group difference. SOL did not change in any of the groups. Significant changes were obtained from baseline to follow-up in all 3 groups on sleep measures and on secondary measures of arousal, self-efficacy, and depressive (but not anxiety) symptoms. Another study of 30 patients 54 examined the added benefits of modafinil when combined with CBT in the management of primary insomnia. Although there was no significant gain from modafinil in terms of sleep continuity parameters, there were trends suggesting that the addition of modafinil to CBT reduced daytime sleepiness and enhanced compliance with the prescribed bedtime. Finally, data from 53 patients with primary insomnia 52 showed that while relaxation was more effective for sleep onset problems and a combination of stimulus control plus sleep restriction had more benefits for sleep maintenance variables, medication (flurazepam) produced the largest improvements on all sleep variables during the initial 2-week intervention.
Treatment Implementation Methods: Individual, Group, and Self-Help

Treatment was implemented on an individual basis in 22 studies (54%) and in a group format in 11 studies (29.7%), and a few additional studies relied on self-help materials with or without additional telephone consultations. An average of 5.7 consultation visits was conducted over a mean treatment period of 6.5 weeks. One study directly compared the relative efficacy of CBT implemented in group or individual sessions, or through self-help written materials combined with brief telephone consultations. 56 All 3 formats produced significant improvements of sleep and secondary measures, and there was no between-group difference on any measure. A similar study of insomnia in recovered alcoholics also found equivalent outcomes between individual CBT and self-help CBT plus telephone consultation. 26 In contrast, 1 study 38 found that the addition of telephone consultation to self-help written material enhanced outcomes at post-treatment, but those initial gains tended to disappear at follow-up. An internet-based intervention produced greater improvement in several sleep parameters relative to controls, 39 but the attrition rate (24%) was higher than in studies using face-to-face consultation visits. While the majority of reviewed studies used psychologists or psychology trainees as therapists (with a treatment manual), 2 studies examined treatment efficacy as implemented by primary care physicians 36 or by nurse practitioners 33 who had been trained before the studies; treatment benefits were generally equivalent to those obtained with therapists who had mental-health training.

[Table 3 (fragment): studies supporting each treatment, e.g., stimulus control > placebo, relaxation > placebo or no treatment, CBT > wait-list; entries include Morin & Azrin (1987), Nicassio et al. (1982), Turner & Ascher (1979), Woolfolk & McNulty (1983), and Morin et al. (1993); some studies provided evidence supporting more than 1 treatment. *Well-established treatments according to APA criteria for empirically supported treatments; these criteria require at least 2 between-group design studies demonstrating efficacy in 1 or more of the following ways: I. superior to pill or psychological placebo or to another treatment, or equivalent to an already established treatment in a study with adequate statistical power; II. a large series of single-case design experiments (n > 9) demonstrating efficacy as in I; III. the studies must be conducted with treatment manuals; IV. the characteristics of the sample must be well described; V. the effects must have been demonstrated by at least 2 different investigators or investigatory teams. †Probably efficacious treatments according to APA criteria: I. 2 studies showing the treatment is more effective than a waiting-list control group, or II. 1 or more studies meeting the well-established treatment criteria I, III, and IV, but not V, or III. a small series of single-case design studies (n > 3) otherwise meeting well-established treatment criteria II, III, and IV.]

Durability of Sleep Improvements

Twenty-six of the 37 reviewed studies reported follow-up data covering at least 1 month after treatment completion (mean duration = 7.7 months; range, 1-36 months). The remaining 11 studies reported no follow-up data.
As was the case in the initial review, a very robust finding across studies is that treatment-produced changes in sleep parameters are well maintained at short (1-3 months), intermediate (6 months), and long-term (> 12 months) follow-ups. One interesting finding from studies using sleep restriction is that total sleep time may be reduced during the initial intervention but is significantly increased at the follow-up evaluation. 20 Despite fairly robust long-term outcomes, follow-up data must be interpreted cautiously, as there are relatively few studies reporting long-term (> 1 year) follow-ups (see Table 2) and, among those that do, attrition rates increase substantially over time.

CONCLUSIONS

This updated review of treatment studies conducted between 1998 and 2004 provides additional evidence that psychological and behavioral interventions represent an effective treatment option for the management of persistent insomnia. In addition to studies further documenting treatment efficacy for primary insomnia, recent studies indicate that treatment is also effective for insomnia associated with some medical conditions and, to a lesser extent, with psychiatric conditions. Treatment benefits are well sustained over time. There is still limited evidence of clinically meaningful changes beyond reductions of insomnia symptoms (i.e., improved daytime functioning, quality of life). These findings are consistent with our previous systematic review 12 as well as with other meta-analyses of the efficacy of psychological and behavioral interventions for insomnia. [57][58][59] For instance, of the 17 most recent treatment studies of primary insomnia, 5 were randomized controlled trials, 17,20,21,33,34 and all 5 yielded additional evidence of significant sleep improvements with psychological and behavioral interventions. Although most of this additional evidence is based on daily sleep diaries, 3 of the 5 key studies included PSG measures and 2 of them reported outcomes that paralleled findings from diary measures. Actigraphy, on the other hand, was not very sensitive for detecting changes in sleep/wake variables in the few studies using this device.

The treatment of comorbid insomnia has received limited attention until recently, perhaps owing to the traditional notion that it would not respond to treatment unless the associated condition was treated first. The present review, as well as recent findings, [60][61][62] challenges this traditional notion and indicates that insomnia-specific treatment is of benefit even among those whose insomnia is associated with comorbid conditions such as cancer, pain, alcohol abuse, and some psychiatric conditions. Nonetheless, there is a need for additional prospective and randomized controlled studies of comorbid insomnia contrasting outcomes when sleep is or is not directly targeted in treatment. The treatment of insomnia in older adults is another previously neglected area for which there was limited evidence to guide practitioners. Nearly 25% of the studies reviewed in this paper focused on older adults. The findings from those studies indicate that older adults with primary insomnia respond to treatment as well as younger and middle-aged adults, although the presence of a comorbid medical or psychiatric condition may moderate outcomes. 27,43 A recent meta-analysis 63 also confirmed that treatment effect sizes are comparable for middle-aged and older adults.
There is additional evidence that psychological treatment can facilitate hypnotic discontinuation in older adults who are chronic users of hypnotics. 15,22 This is an important finding, as older adults are more likely to be long-term hypnotic users, which, in some cases, may perpetuate sleep disturbances. Although heterogeneity in diagnosis makes it more difficult to compare studies, 64 such heterogeneity of insomnia samples also enhances the generalizability of outcomes.

This review highlights an emerging trend among investigators toward combining multiple treatments, which contrasts with the earlier review describing numerous studies comparing treatment efficacy among 2 or more single therapies. Indeed, 26 of the 37 reviewed studies evaluated the efficacy of multicomponent approaches, including 21 studies using multicomponent cognitive-behavior therapy, with or without relaxation, and 5 more combining behavioral interventions (e.g., stimulus control and sleep restriction) but without cognitive therapy. Although the reason for this shift in paradigm is not entirely clear, the use of multi-component approaches is more likely, at least on a clinical basis, to address the different facets and perpetuating factors of insomnia. 11 Whether CBT produces outcomes superior to single therapies remains largely unexplored. The few comparative studies available show that outcomes are superior for CBT, stimulus control, and sleep restriction relative to relaxation alone; 17,20,27,49 however, there has been no complete dismantling of CBT to isolate the relative efficacy of each component. Furthermore, although some findings 65 suggest that change in beliefs and attitudes is an important mediator of long-term outcomes, there has been no direct controlled evaluation to isolate the relative contribution of cognitive therapy.

Our previous review had identified 3 treatments as well validated and 2 more as probably efficacious according to criteria set by the American Psychological Association. 53 This updated review provides further evidence supporting stimulus control, relaxation, and paradoxical intention as well-validated therapies, and new evidence to upgrade sleep restriction and CBT from probably efficacious to well-established treatments. Although there is evidence supporting the efficacy of psychological and behavioral treatment for insomnia, there is still little information about the specificity of this treatment modality and the active therapeutic mechanisms responsible for sleep improvements. With a few notable exceptions using attention-placebo conditions, 17,20 most CBT trials have used wait-list control groups, precluding the unequivocal attribution of treatment effects to any specific ingredient of psychological and behavioral treatment. The lack of a pill-placebo equivalent in psychological outcome research makes it difficult to determine what percentage of the variance in outcomes is due to specific therapeutic ingredients (i.e., restriction of time in bed, cognitive restructuring), to the measurement process (i.e., self-monitoring), or to nonspecific factors (e.g., therapist attention, patients' expectations). An important limitation noted in the previous review that is still evident in recent studies is the limited evidence documenting the clinical significance of outcomes beyond insomnia symptom reductions (i.e., reduced morbidity, improved quality of life).
There is a need for broadening the scope of outcome measures 66 and for standardizing assessment methodology in insomnia research. 64,67 Furthermore, even among patients meeting criteria for what might be considered a clinically meaningful change, many treatment responders reach a plateau, continue showing residual sleep disturbances after treatment, and may remain at risk for relapse. There is a need to develop and validate more potent interventions that would increase the rate of patients reaching full remission. 9 Ongoing studies are currently examining optimal treatment dosage, treatment combinations involving medication, and maintenance therapies. A related issue is that most of the outcome evidence currently available concerns improving sleep initiation and sleep continuity parameters, with essentially no information about the impact of treatment on more qualitative aspects of sleep, i.e., nonrestorative sleep. Although this qualitative feature is part of the standard insomnia definition, no study has yet examined the impact of psychological treatment on this variable.

Proper implementation of psychological and behavioral therapies usually requires more time than prescribing a hypnotic medication, which may represent an important barrier to using such interventions in clinical practice. Nonetheless, several studies have documented the benefits of cost-effective implementation models using nurse practitioners, group therapy, or self-help materials to complement therapist-guided intervention. Whereas such implementation models are likely to make treatment more readily available, adequate therapist training remains an important consideration in using CBT effectively to optimize outcome. Additional studies examining the relative cost-effectiveness of different insomnia interventions would be warranted. 68

In summary, this updated review provides additional evidence supporting the use of psychological and behavioral interventions for primary insomnia, for insomnia associated with medical or psychiatric conditions, and for insomnia in older adults. Additional research is still needed to develop and validate treatment algorithms that would optimize outcomes and reduce morbidity; clinical research examining treatment mechanisms, mediators, and moderators of outcomes is also warranted; and additional effectiveness trials are particularly needed to document outcomes in unselected patients seeking treatment in various clinical settings (e.g., primary care). Finally, an important challenge for the future will be to disseminate the available evidence to health-care providers more efficiently and to translate that evidence into meaningful clinical guidelines in order to ensure a more widespread use of validated therapies.
2017-03-21T22:13:21.627Z
2006-11-01T00:00:00.000
{ "year": 2006, "sha1": "9c98ebd317545339cf7aaf38fcacca7f133665b3", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/sleep/article-pdf/29/11/1398/13663178/sleep-29-11-1398.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "db31b2ab4243038d740fae791520e785289ae9bf", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
149279889
pes2o/s2orc
v3-fos-license
State and Trait Anxiety Scores of Patients Receiving Intravitreal Injections

Background

To evaluate the effects of various parameters on the state and trait anxiety scores of patients receiving intravitreal injections.

Methods

One hundred thirteen patients were included in the study. All subjects received intravitreal ranibizumab or bevacizumab injections. To measure the level of anxiety, Spielberger's State-Trait Anxiety Inventory questionnaire was used.

Results

The mean state anxiety scores were 45.19 ± 5.62 in experienced patients and 43.10 ± 6.62 in inexperienced patients (p = 0.078). The mean trait anxiety scores were 50.14 ± 6.62 in experienced patients and 49.17 ± 10.79 in inexperienced patients (p = 0.810). Additionally, there was no statistically significant difference in the state and trait anxiety scores between male and female patients or between employed and retired patients (p > 0.05).

Conclusion

Anxiety may not show significant differences according to sociodemographic status. The high anxiety scores found in this study also emphasize that health care providers should try to decrease anxiety levels during the course of treatment.

What Is It about?

Intravitreal anti-vascular endothelial growth factor injections are widely used all over the world. More than one injection is usually required for treatment. The number of injections and other factors may increase anxiety in patients. This situation may affect their general comfort and decrease compliance with recurring injections. This study was conducted to assess the effects of various parameters on patient anxiety.

Introduction

Intravitreal anti-vascular endothelial growth factor injections are used in the treatment of various diseases, including diabetic retinopathy, age-related macular degeneration, neovascular glaucoma, and intraocular inflammation. More than one injection is usually required for the management of these diseases. On the other hand, the prospect of a drug being injected into the eye may provoke considerable anxiety and may even heighten perceived pain. This situation may affect patients' general comfort and decrease compliance with recurring injections [1]. Therefore, the aim of this study was to assess the effects of various parameters, mainly previous experience with intravitreal injection, on patient anxiety.

Methods

This prospective, consecutive, observational, noninterventional study included 113 patients with a diagnosis of wet-type age-related macular degeneration (AMD) or diabetic macular edema (DME). None of the patients had known psychiatric conditions or used anxiolytic drugs. All subjects received intravitreal ranibizumab or bevacizumab injections performed by the same surgeon. To measure the level of anxiety, Spielberger's State-Trait Anxiety Inventory (STAI) questionnaire was used. All patients completed the questionnaire by themselves immediately before the intravitreal injection. The STAI is the "gold standard" for measuring preoperative anxiety [2][3][4]. It comprises separate self-report scales for measuring two distinct anxiety concepts: state anxiety and trait anxiety. The reliability and validity of the STAI are well reported (Cronbach's alpha = 0.896). The STAI-T scale consists of 20 statements that ask people to describe how they generally feel. The STAI-S scale also consists of 20 statements, but the instructions require subjects to indicate how they feel at a particular moment in time. The STAI-S scale can be used to determine the actual levels of anxiety intensity induced by stressful procedures.
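The two 20-item subscales are scored in the standard STAI fashion described in the next paragraph: each item is rated on a 4-point scale and summed to a total between 20 and 80, which is then banded into low, moderate, or high anxiety. A minimal, purely illustrative scoring sketch follows; the reverse-keyed items of the published form Y are not reproduced here and are abstracted behind a placeholder argument.

```python
# Minimal sketch of STAI subscale scoring and the severity bands used in this
# study. The reverse-keyed item indices are placeholders, not the actual key
# of the published instrument.

def stai_score(responses, reverse_items=()):
    """responses: 20 integers in 1..4 (not at all .. very much so)."""
    assert len(responses) == 20 and all(1 <= r <= 4 for r in responses)
    return sum((5 - r) if i in reverse_items else r
               for i, r in enumerate(responses))   # possible range: 20..80

def classify(score):
    if score <= 37:
        return "no or low anxiety"      # 20-37
    if score <= 44:
        return "moderate anxiety"       # 38-44
    return "high anxiety"               # 45-80

state = stai_score([3] * 20)            # a uniformly "moderately so" respondent
print(state, classify(state))           # 60 -> high anxiety
print(classify(45))                     # lower edge of the "high anxiety" band
```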
The validity of the STAI rests upon the assumption that the examinee has a clear understanding of the "state" and "trait" instructions. Each question is rated on a 4-point scale (not at all, somewhat, moderately so, very much so). The range of possible scores for form Y of the STAI varies from a minimum of 20 to a maximum of 80 on both the STAI-T and STAI-S subscales. STAI scores are commonly classified as "no or low anxiety" (20-37), "moderate anxiety" (38-44), and "high anxiety" (45-80). We used form Y of the STAI in English and had it translated into Turkish by an expert in the respective languages. The translated forms were then retranslated back into English, and the retranslated sentences that most closely resembled the original English STAI form Y sentences were retained.

SPSS, version 18 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. Normality of the data was tested with a Kolmogorov-Smirnov test to indicate the appropriateness of parametric testing. Values are presented as means ± standard deviation. A Mann-Whitney U test and a Wilcoxon test were used for nonparametric data. Parametric data were analyzed using a Student t test. Pearson and Spearman correlation tests were used to measure the association between two variables. A p value of less than 0.05 was considered statistically significant.

Results

Out of 113 patients, 58 (51.3%) were female and 55 (48.6%) were male. The mean age of the participants was 67.02 ± 9.5 years. Ninety-one patients (80.5%) were retired during the study period, while 22 (19.4%) were in active working life. Seventy-one patients (63.7%) had a diagnosis of AMD and 42 (37.1%) had a diagnosis of DME. Seventy-one patients (63.7%) had experience with intravitreal injections because they had received at least one intravitreal injection previously, and 42 inexperienced patients (37.1%) received an intravitreal injection for the first time. The average state and trait anxiety scores were 49.78 ± 8.36 and 44.41 ± 6.04, respectively (Wilcoxon test, p < 0.001). There was a weakly positive but nonsignificant relationship between age and the state and trait anxiety scores (Pearson correlation, r = 0.138, p = 0.145, and Spearman correlation, r = 0.112, p = 0.236, respectively). Experienced, employed, and male patients had higher state anxiety scores than inexperienced, retired, and female patients, but the differences did not reach statistical significance (Student t test, p = 0.783, p = 0.078, p = 0.186, respectively) (Table 1). Furthermore, experienced, employed, and male patients had higher trait anxiety scores than inexperienced, retired, and female patients, but the differences did not reach statistical significance (Mann-Whitney U test, p = 0.810, p = 0.606, p = 0.434, respectively). However, patients with AMD had higher anxiety scores than DME patients, and the difference in trait anxiety scores between the groups was statistically significant (p = 0.033).

Discussion

Intravitreal injection is one of the most common treatments in most tertiary eye care centers. Under current application protocols, patients receive repetitive injections during the treatment process. Although repetitive injections are an economic and psychological burden for patients, they are necessary for treatment success. Segal et al. [5] showed that one quarter of the patients in their study had high anxiety scores. They also found a significant positive correlation between pain during the injection and the preprocedural anxiety level.
In their study, retired and male patients had higher pain scores than employed and female patients. Therefore, we expected that anxiety levels would be higher in retired and male patients, but we did not find a significant difference between employed and retired patients or between male and female patients. Furthermore, Segal et al. did not find a significant correlation between pain score and prior injections. In the current study, employment status, gender, age, and previous experience with intravitreal injection were not significantly associated with anxiety levels. Increased experience did not resolve patient anxiety, which might be due to previously perceived pain or the idea of a continuing disease process. Although we expected that younger patients would have higher anxiety scores, age showed an unexpected, albeit nonsignificant, positive correlation with anxiety scores. Similarly, in a study conducted by Chen et al. [6], it was reported that there was no statistically significant relation between anxiety and the number of previous injections. Furthermore, they proposed that listening to classical music before and during intravitreal injections significantly decreased anxiety in patients. Also, in this study, the average state anxiety score was significantly higher than the average trait anxiety score. This difference can be associated with having an intravitreal injection procedure, but it can also be observed when patients attend hospital or see doctors (the "white coat" phenomenon). Moreover, fear of job loss, fear of vision loss, and the struggle to earn a living may be other possible reasons for increased state anxiety in employed patients. Segal et al. [5] evaluated pain scores in different diagnoses, AMD versus DME, and found that patients with AMD had higher pain scores than DME patients. The difference in anxiety level between the two diseases was not evaluated in their study. In our study, it was found that AMD patients had higher anxiety scores. Considering the results of our study and those of Segal et al., it may be said that the injection process affected AMD patients more than DME patients. This may indicate that these two different diseases have their own psychological and pathophysiological processes. In order to understand the differences between the two diseases, social and economic situations, in addition to the processes described above, should be investigated.

In conclusion, anxiety is a crucial issue for all patients receiving an intravitreal injection, and it may not show significant differences according to sociodemographic status. AMD and diabetes mellitus may be considered two different diseases with their own psychological and pathophysiological processes. The relevance of the results would be much greater if we had an intraindividual comparison of injections between the inexperienced and experienced states of the same patients. Nevertheless, the high anxiety scores found in this study emphasize that health care providers should try to decrease anxiety levels during the course of treatment.

Statement of Ethics

The study adhered to the principles of the Declaration of Helsinki. Informed consent was obtained from all participants.

Disclosure Statement

The authors have declared that no competing financial interests exist.
2019-05-11T13:05:07.810Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "29290421c951d76cdff61d9d3068dbab216fda9f", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/478993", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db6f791b9692bcf6cd3b8e64f6cb675d92820768", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
235743654
pes2o/s2orc
v3-fos-license
Ultralow-threshold laser using super-bound states in the continuum

Wavelength-scale lasers promise applications with low power consumption, which requires optical cavities with increased quality factors. Cavity radiative losses can be suppressed strongly in the regime of optical bound states in the continuum; however, the finite size of the resonator limits the performance of bound states in the continuum as cavity modes for active nanophotonic devices. Here, we employ the concept of a supercavity mode created by merging symmetry-protected and accidental bound states in the continuum in momentum space, and realize an efficient laser based on a finite-size cavity with a small footprint. We trace the evolution of the lasing properties before and after the merging point by varying the lattice spacing, and we reveal that this laser demonstrates a significantly reduced threshold, a substantially increased quality factor, and shrunken far-field images. Our results provide a route to nanolasers with reduced out-of-plane losses in finite-size active nanodevices and improved lasing characteristics.

Ultra-compact lasers operating in the single-mode regime have been a long-standing goal of nanophotonics. Various mechanisms for strong light confinement have been proposed and demonstrated in subwavelength and wavelength-scale optical cavities to decrease their lasing threshold [1][2][3][4]. However, nanolasers such as defect-type photonic crystal lasers and plasmonic lasers possess limited output power and are unstable against structural disorder 5,6. These properties hinder the practical applications of nanolasers, even though they have small mode volumes and reduced laser thresholds. In contrast, a band-edge-type laser based on a periodic structure without defects has a relatively high threshold but a high output power, with the possibility of topological robustness 7. Optical bound states in the continuum (BICs) were shown to be a versatile tool for substantial suppression of out-of-plane radiative losses and dramatic enhancement of the quality factor (Q factor) in infinite periodic structures, providing low-threshold lasing and high output powers [8][9][10]. The intrinsic topological nature of BICs splits them into a few groups, with two of the most conventional kinds represented by symmetry-protected BICs, existing at high-symmetry points of the momentum space, and accidental BICs, which can be realized for an arbitrary in-plane wavevector. Thus far, symmetry-protected BICs and accidental BICs have been observed successfully in Si3N4 and Si photonic crystal slabs 11,12. In addition, lasing action was achieved for BIC cavities in a few recent experiments [13][14][15][16][17][18][19]. The BIC-enhanced feature of a reasonably low threshold was successfully demonstrated in a cavity with lattices of finite size 13. However, most of the developed lasers were still based on cavities of substantially large scales with hundreds of periods and, despite that, did not demonstrate high Q factors. Very recently, a new kind of BIC mode was suggested, which we term here a super-BIC (a BIC in the supercavity regime with extremely high values of the Q factor), that originates from the merging of several BICs in momentum space 20,21. The performance of super-BICs in active devices, however, has never been discussed or studied. Extended periodic structures consisting of hundreds of unit cells are feasible mostly for passive photonic devices 11,12,20.
For active devices such as a laser, however, it is critical to use a finite structure with a small footprint because of the limited spot size in optical or electrical pumping. For photonic crystal slabs with just a few dozen periods, the Q factor of conventional BICs is significantly reduced [22][23][24][25]. Thus, in most of the available studies, the size of BIC cavities was assumed to be infinite for analysis, which causes discrepancies between experimental and theoretical results. In addition, it remains unclear whether the topological properties of BICs can be maintained in a finite-size cavity. Here we demonstrate efficient lasing in a finite periodic photonic structure in three different regimes using a single design: the symmetry-protected BIC laser, the accidental BIC laser, and the super-BIC laser, which combines the first two and is reached by tuning the lattice constant. We achieve lasing action from small-scale dielectric photonic crystal slabs operating in the super-BIC regime. Our analysis shows that in the vicinity of the super-BIC mode the finite-size cavity can possess a high radiative Q factor, in contrast to the other BIC modes. We also show that for finite-size resonators the optimal lattice constant providing the lowest radiation is shifted compared with the infinite-size structure. After the transition to the super-BIC regime, we measure far-field laser images with strong angular confinement, a threshold reduced to ~1.47 kW/cm², and a Q factor increased up to ~7,300. The measured threshold peak power is approximately 50 to ten million times lower than that of earlier demonstrated BIC nanolasers [13][14][15][16][17][18][19]. Our study presents a direct observation of all three BIC lasers in a single platform and provides an efficient recipe to reduce the optical loss in active nanocavities of finite size.

Results

Infinite-size BIC cavity. The design of the laser cavity is based on an infinite-size InGaAsP photonic crystal slab structure with a thickness of 650 nm (inset in Fig. 1a) modulated with a square-lattice array of air holes. The calculated band structure shows that only the fundamental transverse-electric-like (TE-like) band is located within the emission wavelength range of the InGaAsP, around 1.5 μm, because of the thick slab of high-index material (Supplementary Fig. 1). The calculated map of the Q factor in k-space shows that the fundamental TE-like band exhibits high Q factors at the origin and at specific points along the highly symmetric Γ-X and Γ-M directions (Supplementary Fig. 2): in total, one symmetry-protected BIC (|k| = 0) and eight accidental BICs (|k| = k_t = 0.067 × 2π/a) are formed 26. In addition, the multipolar decomposition of the electromagnetic fields inside the unit cell shows that both the symmetry-protected and the accidental BICs are dominated by the out-of-plane magnetic dipole, which agrees well with the fundamental nature of the TE-like mode (Supplementary Fig. 3). Each of the accidental BICs is formed by destructive interference between the dominating out-of-plane magnetic dipole and the weak in-plane electric dipole, in-plane electric quadrupole, and out-of-plane magnetic quadrupole contributions. We also calculate the evolution of the polarization phases in the far field for the symmetry-protected and accidental BICs (Supplementary Fig. 4) 26. The calculation shows a topological charge q = 1 for the symmetry-protected BIC and the four accidental BICs located along the Γ-M direction, and q = -1 for the four accidental BICs located along the Γ-X direction.
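The topological charge quoted above is simply the winding number of the far-field polarization angle around the BIC in k-space. The sketch below illustrates the bookkeeping with a synthetic polarization vortex; in the actual analysis the components c_x and c_y would come from the simulated far field rather than from the analytic test field assumed here.

```python
# Winding of the far-field polarization angle along a closed loop in k-space.
# The synthetic polarization field (a vortex of order q) is illustrative only.
import numpy as np

def winding_number(cx, cy):
    """cx, cy sampled along a closed counterclockwise loop (last point != first)."""
    phi = np.angle(cx + 1j * cy)                  # polarization angle at each sample
    dphi = np.diff(np.concatenate([phi, phi[:1]]))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap increments into (-pi, pi]
    return int(round(dphi.sum() / (2 * np.pi)))

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
kx, ky = np.cos(theta), np.sin(theta)             # small loop around the BIC

for q in (+1, -1):                                # charges reported for the two BIC families
    c = (kx + 1j * ky) ** q                       # synthetic polarization vortex of charge q
    print(q, winding_number(c.real, c.imag))      # recovers +1 and -1
```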
In addition, we calculate the wavelengths of the resonant modes shown in Fig. 1a. A symmetry-protected BIC and an accidental BIC mode appear in the range of lattice constants a from 560 to 590 nm. The wavelengths of these modes become closer as the lattice constant increases for a < 576.3 nm, and only a single mode is observed after the two BIC modes merge at a = 576.3 nm. Next, the normalized radiation loss of the fundamental TE-like band is calculated for different lattice constants along the Γ-X direction (black dots in Fig. 1b). At a = 568 nm, zero radiation loss is observed at k = 0 (symmetry-protected BIC) and at k = k_t (accidental BIC), each showing a sharp dependence on the wavevector. The position of the accidental BIC approaches k = 0 as the lattice constant increases; the merging of the BICs occurs at a = 576.3 nm and modifies the dependence on the wavevector in the vicinity of the band edge. The k-vector dependence returns to the narrow one at a = 590 nm, when the single symmetry-protected BIC is restored. The radiation loss follows the laws [k(k − k_t)(k + k_t)]², k⁶, or k², depending on the lattice constant (red curves in Fig. 1b), because of the interaction of the topological charges q_0 at the symmetry-protected BIC and q_t at the accidental BIC 20. This feature is also seen in the plot of Q factor versus k (Supplementary Fig. 1b).

Finite-size BIC cavity. The fast sixth-order dependence of the radiation losses on the wavevector is crucial for keeping a high Q factor of the BIC mode in a finite-size cavity 20. Fig. 2a shows the magnetic field profile H_z of the mode at the Γ point of the fundamental TE-like band in a photonic crystal slab with 15 × 15 periods. Unlike in the infinite-size cavity, the mode profile shows an envelope distribution with a convex shape accompanied by mode leakage into free space 25,27. This finite-size effect results in the mode broadening Δk (white circle) in the k-space field distribution FT(H_z) (Fig. 2b), where FT denotes the spatial Fourier transform. The mode broadening leads to increased radiation loss due to mixing with off-Γ modes that have a finite Q factor. To increase the Q factor, it is essential to reduce the radiation loss caused by this broadening (Fig. 2c). The undesired radiation loss at off-Γ points can be effectively suppressed by moving the off-Γ BIC with charge q_t inside the mode broadening range Δk (pre-merging regime) or to the Γ point, k = 0 (merging regime). Notably, in the merging process with a limited number of air holes, the most effective radiation suppression is achieved by placing the charge q_t at an optimum k_t in the pre-merging configuration rather than at k = 0 (Fig. 2c), as explained below. To elucidate the radiation loss mechanism in the finite-size system, we calculate 2D maps of the radiation factor in momentum space for varying lattice constants (Fig. 2d). The radiation factor is defined from the k-space mode distribution in a finite domain and the Q factor in an infinite domain, i.e., |FT(H_z)(k)/Q(k)|, to account for both the cavity size and the radiative loss. For example, the radiation factors are high before merging (a = 568 nm) and after merging (a = 578 nm) because of the substantial radiative loss at off-Γ points. At the exact merging condition of a = 576 nm, radiation loss still occurs at the boundary of the mode broadening.
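A minimal one-dimensional sketch of this radiation-factor argument is given below; it uses a Gaussian stand-in for the finite-cavity envelope, equalizes the three loss laws at the k = 0.275 × 2π/a normalization point mentioned in the Methods, and uses placeholder numbers throughout, so it only illustrates the trend rather than reproducing the simulated maps of Fig. 2d.

```python
# 1D illustration of why merged/pre-merging BICs radiate less in a finite cavity:
# the finite-size broadening |FT(H_z)(k)| is weighted by the infinite-lattice loss
# gamma(k) ~ 1/Q(k) for the three loss laws quoted in the text. All numbers are
# placeholders, not the simulated values.
import numpy as np

a = 576e-9                                    # lattice constant (m)
N = 15                                        # number of periods
k = np.linspace(-0.275, 0.275, 4001) * 2 * np.pi / a
dk = 2 * np.pi / (N * a)                      # k-space broadening of the finite mode
envelope = np.exp(-(k / dk) ** 2)             # Gaussian stand-in for |FT(H_z)(k)|

kt = 0.067 * 2 * np.pi / a                    # accidental-BIC wavevector
kref = 0.275 * 2 * np.pi / a                  # normalization point for the loss (gamma_0)
laws = {
    "isolated SP-BIC, gamma ~ k^2":           (k / kref) ** 2,
    "merged super-BIC, gamma ~ k^6":          (k / kref) ** 6,
    "pre-merging, gamma ~ [k(k-kt)(k+kt)]^2": (k * (k - kt) * (k + kt) / kref ** 3) ** 2,
}
for name, gamma in laws.items():
    value = np.sum(envelope * gamma) / np.sum(envelope)   # envelope-weighted average loss
    print(f"{name}: {value:.2e}")
# The k^6 and pre-merging laws give losses orders of magnitude below the k^2 law
# for the same finite-size broadening, with the pre-merging case lowest here.
```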
Interestingly, a much lower radiation factor is observed in the pre-merging regime at a = 573 nm, because the radiative loss is strongly suppressed over the large k-space area covered by q_0 and q_t. For a more detailed quantitative analysis, we plot the inverse radiation factor, [Σ_k |FT(H_z)(k)/Q(k)|]⁻¹, as a function of the lattice constant when the hole diameter and slab thickness are fixed at 400 and 650 nm, respectively (Fig. 2e). The broad merging configuration occurs because of the finite-size effect. The curve becomes narrower, and its maximum occurs at a larger lattice constant, when the cavity size increases from N = 15 to 21. The influence of other structural parameters on the radiation factor is discussed further in Supplementary Fig. 5. In addition, we calculate the radiative Q factor using a full-wave numerical simulation (Fig. 2f). The agreement between the radiative Q factor and the inverse radiation factor demonstrates the effectiveness of our analysis based on the radiation factor. Consequently, in the finite-size cavity, the radiative loss near the merging-BIC regime (from pre-merging to merging) remains low, whereas the loss in the other BIC regimes is relatively high. Also, the optimal point with the lowest radiation can differ between the finite-size and infinite-size cavities (Supplementary Note 1).

Measurements of BIC lasing. To experimentally verify the merging of the BICs, we fabricate square-lattice photonic crystal structures using a 650-nm-thick InGaAsP slab incorporating seven quantum wells (see "Methods" section). A set of samples with the lattice constant varying from 560 to 580 nm in 1-nm steps is fabricated with the hole radius fixed at ~200 nm (Supplementary Fig. 6). Scanning electron microscope (SEM) images of the fabricated sample are shown in Fig. 3a. The photoluminescence (PL) measurements are performed using a 980-nm pulsed pump laser with a spot size of ~5.4 μm at room temperature (see "Methods" section). One or two types of lasing modes are observed in the photonic crystal structures, depending on the lattice constant: a single lasing peak is observed at larger lattice constants (Fig. 3b) and two peaks at a ≤ 568 nm. The wavelengths of the two peaks increase with different slopes as the lattice constant increases at a ≤ 568 nm. To identify these lasing modes, we measure their far-field mode images (see "Methods" section). Two distinct far-field images are observed, one with a highly confined donut shape (right inset, Fig. 3b) and the other with a widespread shape (left inset, Fig. 3b). Consequently, the lasing peaks can be classified into the two groups of symmetry-protected BIC (black dots) and accidental BIC modes (red dots), based on the measured far-field images and a comparison with the simulation results, including those presented in Fig. 1a. We note that the merging occurs at a ≈ 574 nm, as obtained by extrapolating the wavelength of the accidental BIC mode. Similar features are exhibited by the other samples with slightly different structural parameters (Supplementary Fig. 7). To further investigate the optical properties of the symmetry-protected BIC laser, we compare the measured far-field images before merging (a = 568 and 571 nm) with those after merging (a = 574 and 578 nm) (Fig. 3c). The shapes of the lasing modes are identical, whereas the mode size decreases as the lattice constant increases and remains unchanged after merging.
For a more detailed quantitative analysis, we estimate the angle from the center to the first intensity maximum in the eight measured far-field images (see "Methods" section). The change in the mode size is clearly shown in Fig. 3e (black dots). The angle decreases from 4.9° to ~3.7° until a = 574 nm and remains almost constant at a ≥ 574 nm. In fact, the size of the far-field image depends on the mode size in the near field 28. Thus, by comparing the sizes of the measured and simulated far-field images, one can estimate the effective number of air holes (N) in the photonic crystal structure that supports the corresponding near-field image. Our numerical simulations show that structures with N = 19, 23, 27, and 27 yield the measured far-field images in Fig. 3c (left to right in Fig. 3d). A more systematic comparison between measurement and simulation is shown in Fig. 3e, where the red dashed lines indicate the simulation results. Therefore, in the evolution from before merging to after merging, we observe that the after-merging BIC mode lases with a larger effective N. We term this BIC mode at a ≥ 574 nm the super-BIC mode, as it possesses optical characteristics, such as a single lasing peak and a shrunken far-field image, that are distinct from those of the conventional BICs at a < 574 nm. In particular, the super-BIC mode is confined more strongly by the effectively increased number of air holes. This unique feature is useful for improving the laser performance despite the effect of the finite size on the Q factor.

Lasing properties. We examine the lasing properties of all the demonstrated BIC lasers. First, the lasing spectra and light-in-light-out (L-L) curves are measured for the accidental BIC lasers (Supplementary Fig. 8); their threshold values are much larger than those of the symmetry-protected BIC lasers (Supplementary Fig. 9). Next, we measure the linewidth of the resonant peak and the L-L curves in the symmetry-protected BIC lasers more systematically by varying the lattice constant (Supplementary Figs. 10 and 11). Fig. 4a shows representative measured data for three different lattice constants. Clear lasing behavior is observed, with lasing thresholds of ~448, ~340, and ~413 μW at a = 571, 574, and 578 nm, respectively. Linewidth narrowing is also observed, although the below-threshold linewidth is already small at a = 574 and 578 nm owing to the high Q factor, which will be discussed with Fig. 4c. The threshold is much lower in the super-BIC regime at a = 574 nm. This feature is shown more clearly in the plot of the threshold power density versus the lattice constant (Fig. 4b). The threshold decreases significantly as the lattice constant increases until a = 574 nm and becomes almost constant at a ≥ 574 nm. Thus, the measurement indicates that the lasing threshold is minimized by the transition to the super-BIC mode. We note that the threshold peak power of ~340 μW and the threshold power density of ~14.7 μW/μm² at a = 574 nm are the smallest among all the BIC lasers previously reported (Supplementary Table 1) 5,[13][14][15][16][17][18][19]29. To understand the ultralow threshold of the super-BIC laser, we estimate the Q factor, λ/Δλ, where λ is the peak wavelength and Δλ is the linewidth of the peak at the transparent pumping condition of ~340 μW (see "Methods" section) 30. The Q factors are plotted as a function of the lattice constant using all the linewidth data in Supplementary Fig. 10 (Fig. 4c).
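The two headline numbers can be checked with simple arithmetic, under the assumptions that the ~5.4-μm pump spot is treated as the diameter of a circular area and that the lasing wavelength lies near the long-wavelength side of the InGaAsP gain band (the exact value is not quoted in the text):

```python
# Back-of-the-envelope check of the threshold density and Q factor quoted above.
# The circular-spot assumption and the exact lasing wavelength are assumptions.
import math

peak_power_uW = 340.0
spot_diameter_um = 5.4
spot_area_um2 = math.pi * (spot_diameter_um / 2) ** 2   # ~22.9 um^2
density = peak_power_uW / spot_area_um2
print(density)                 # ~14.8 uW/um^2, consistent with the quoted ~14.7
print(density * 100)           # ~1.5e3 W/cm^2, i.e., the ~1.47 kW/cm^2 of the abstract

wavelength_nm = 1600.0         # assumed, near the InGaAsP emission band
linewidth_nm = 0.22            # spectrometer-limited resolution quoted below
print(wavelength_nm / linewidth_nm)   # Q = lambda / dlambda ~ 7300
```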
The Q factor increases with the lattice constant before merging but is almost constant after merging because Δλ is limited by the spectrometer resolution of ~0.22 nm. The experimental Q factor has a maximum value of ~7300 because of the spectrometer-limited linewidth, although the actual value is much higher. The maximum Q factor of ~7300 is exhibited from a = 574 to 578 nm, where the lasing threshold is also minimized. Therefore, the enhanced Q factor in the super-BIC regime is responsible for the significantly reduced threshold. We note that the effects of other factors, such as the mode volume and the spontaneous emission factor, on the threshold are not significant in the BIC laser with a relatively large mode volume 13. Our measurements show that the super-BIC mode has a higher Q factor than the other BICs before merging, as indicated by Fig. 2. The effective increase in the number of air holes confining the super-BIC mode (Fig. 3e) further enhances the Q factor. The Q factor starts to decrease again at a = 579 nm (Fig. 4c), which indicates that the merging effect ends in this regime. This feature is more evident when the pump spot size increases (Supplementary Fig. 12). For larger pump spot sizes, the threshold values at a = 578 and 579 nm increase, whereas the super-BIC laser still shows a low threshold (Fig. 5a). In particular, the super-BIC occurs in a narrower range of lattice constants as the pump spot size increases, which is consistent with the simulation result shown in Fig. 2e. Also, the super-BIC regime from a = 574 to 577 nm agrees well with the regime of calculated high radiative Q factors. Therefore, the merging point at a ≈ 574 nm is further confirmed by the experiment performed with varying pump spot sizes. At a = 578 and 579 nm, the lasing mode turns into an isolated BIC. Furthermore, additional laser properties are investigated in the super-BIC regime (a = 574 nm). First, we measure the polarization-resolved lasing images by placing a linear polarizer in front of the IR camera (Fig. 5b). These images exhibit an intensity minimum along the direction of the polarizer, which agrees well with the previous report 31. Second, we estimate the spontaneous emission factor by comparing the measured L-L curve with that obtained from the conventional rate equations (Fig. 5c). The estimated spontaneous emission factor is ~0.01, which is smaller than the values of ultrasmall nanolasers 32 because of the relatively large mode volume 33-35. Third, the interference images are measured in the spontaneous emission, amplified spontaneous emission, and lasing regions of the super-BIC laser (Fig. 5d and "Methods" section). The interference pattern is clearly observed only in the lasing region, yielding a calculated coherence time of >38 ps (ref. 36). Fourth, we measure the decay times in the spontaneous emission and lasing regions of the super-BIC laser (Supplementary Fig. 13). The measured decay time of <138 ps in lasing is fast enough for high-speed modulation 37,38.

Discussion
We have demonstrated a super-BIC laser based on a finite-size photonic cavity with a small footprint. We have observed a transition from the symmetry-protected BIC and accidental BIC lasers to the super-BIC laser by tuning the lattice constant. The theoretical analysis shows that the radiative loss of the super-BIC follows a k⁶ scaling law at the merging lattice constant and that the super-BIC keeps a high Q factor even in the finite-size cavity.
Thus, high-performance optical characteristics, including an ultralow threshold, a single lasing peak, and a high Q factor, have been measured for the super-BIC laser. These features of the super-BIC laser are distinguishable from those of the symmetry-protected BIC laser. Notably, its threshold is extremely low and is limited by the transparency value of the gain material, as a result of the low radiative loss in the finite photonic structure. Furthermore, the semiconductor active material with high optical gain supports the superior optical properties of the super-BIC. Compared with other BIC lasers using similar gain materials (Supplementary Table 1), the threshold values we have measured are lower in the super-BIC regime and similar outside the super-BIC regime. For the practical implementation of such an ultralow-threshold super-BIC laser, it is necessary to develop a flexible and stretchable laser structure 39 to vary the lattice constant and find the merging point more easily. In addition, electrical pumping should be implemented: an efficient current path needs to be formed to inject carriers into the whole area of the BIC cavity 7,40. We believe that our findings will pave the way to significantly reduced optical losses in active nanophotonic structures with a finite footprint and to the development of an ultralow-threshold light source for photonic integrated circuits, by controlling the topological charges in reciprocal space and engineering the radiation condition.

Methods
Numerical simulations. The photonic band diagrams and optical properties of the resonant modes are calculated for a free-standing InGaAsP membrane using a three-dimensional finite-element method (FEM) solver in COMSOL Multiphysics. Floquet periodic boundaries and perfectly matched layers are imposed in the in-plane and vertical directions, respectively, for the infinite-size structures (Fig. 1 and Supplementary Figs. 1-4). The radiation loss (γ) is calculated by collecting the out-of-plane component of the Poynting vector away from the surface of the slab (Fig. 1b). Each radiation loss is normalized by the radiative loss at k = 0.275 (γ_0), where the fundamental TE-like band is located near the light cone. The field decomposition into Cartesian multipoles (Supplementary Fig. 3) is done via integration of the total field over the membrane volume within one unit cell. The topological charge is evaluated by integration of the polarization phase in the far-field domain over a closed loop in k-space. For the simulation of the finite-size structures (Fig. 2), perfectly matched layers are introduced in all directions, including the in-plane domain of N × N size. The far-field simulation is performed using a circular-shaped outer boundary to remove artificial interference patterns (Fig. 3d). In addition, a home-made three-dimensional finite-difference time-domain (FDTD) method is used (Fig. 2f and Supplementary Fig. 5c) to calculate the radiative Q factor and cross-check the validity of the result in Fig. 2e, because FDTD can directly calculate Q factors of a finite-size cavity in the time domain by observing the time decay of the resonant modes. A convolutional perfectly matched layer is used as the absorbing boundary condition in the FDTD. The size of the mesh grid is 10 nm, and more than 1000 periods of resonance oscillations are observed in the time domain to precisely calculate the resonant wavelengths and Q factors.
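The time-domain route to the Q factor mentioned above (observing the decay of a resonant mode over many optical periods) can be illustrated with a simple ringdown fit. The sketch below uses a synthetic signal rather than actual FDTD output and assumes the standard relation that the stored energy decays as exp(-ω₀t/Q), so that Q follows from a linear fit to the log-energy.

```python
import numpy as np

# Synthetic ringdown: a resonant field oscillating at f0 whose energy decays as exp(-w0 t / Q).
c = 3e8
lam0 = 1.55e-6                      # resonance near 1.55 um, as in the experiment
f0 = c / lam0
w0 = 2 * np.pi * f0
Q_true = 7300.0
dt = 1.0 / (40 * f0)                # 40 samples per optical cycle
t = np.arange(0.0, 4000 / f0, dt)   # a few thousand optical periods, as in the FDTD runs
field = np.cos(w0 * t) * np.exp(-w0 * t / (2 * Q_true))

# Cycle-averaged energy ~ field^2 decays as exp(-w0 t / Q), so the slope of
# log(energy) versus t equals -w0 / Q.
period_samples = int(round(1 / (f0 * dt)))
n_cycles = len(t) // period_samples
energy = field[: n_cycles * period_samples].reshape(n_cycles, period_samples)
energy = np.mean(energy**2, axis=1)
t_cycle = (np.arange(n_cycles) + 0.5) / f0
slope = np.polyfit(t_cycle, np.log(energy), 1)[0]
print("recovered Q:", -w0 / slope)   # close to 7300 for this synthetic signal
```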
The Poynting vector is decomposed into in-plane and vertical components in the slab structure, and the radiative Q factor is then obtained using the vertical component. In the FEM and FDTD simulations (except for Supplementary Fig. 5), the hole diameter and slab thickness are set to 400 and 650 nm, respectively. Different structural parameters are examined in Supplementary Fig. 5. The refractive index of the InGaAsP slab is set to 3.4.

Device fabrication. The samples are fabricated using a 650 nm-thick InGaAsP/1 μm-thick InP/100 nm-thick InGaAs/InP substrate wafer. The InGaAsP layer includes seven 7 nm quantum wells in the middle, whose central emission wavelength is ~1.5 μm. The InP and InGaAs layers act as sacrificial and etch-stop layers, respectively. To define a periodic square-lattice structure, electron-beam lithography is performed at 30 keV on a polymethyl methacrylate (PMMA) layer coated on the wafer. The hole diameter is fixed at ~400 nm and the lattice constant varies from 560 to 580 nm. Chemically assisted ion-beam etching is performed to drill air holes in the InGaAsP layer while using the PMMA layer as an etch mask. Finally, the sacrificial InP layer is selectively wet-etched using a diluted HCl:H2O (4:1) solution at room temperature, and the remaining PMMA layer on top of the slab is removed by O2 plasma.

Optical measurements. A 980-nm pulsed laser diode (2.0% duty cycle, 1 MHz period) is used to optically pump the fabricated samples at room temperature. The light emitted from the cavities is collected by a ×100 objective lens with a numerical aperture of 0.85 (LCPLN100XIR, Olympus) and focused onto either a spectrometer with an IR array detector (SP 2300i and PyLoN, Princeton Instruments) or an InGaAs IR camera (C10633, Hamamatsu). The spot size of the pump laser is varied from ~5.4 to ~9.2 μm using additional bulk lenses. The resolution of the spectrometer is ~0.22 nm. In Fig. 3b, the wavelength is taken just above the threshold of each laser to minimize the thermal effect. In the insets of Fig. 3b and in Fig. 3c, the far-field images are measured using a 4-f system consisting of bulk lenses and a spatial filter. All the L-L curves and linewidth graphs are plotted as a function of the peak pump power. In Fig. 5d, a Michelson interferometer setup is used to measure the interference images 36. The beams from the two arms of the interferometer are overlapped in time and space. In Supplementary Fig. 13, time-resolved PL measurement is performed using a near-IR femtosecond fiber laser (FemtoFiber pro-NIR, Toptica Photonics) with a repetition rate of 80 MHz and a wavelength of 780 nm. The emitted photons are detected by a superconducting nanowire single-photon detector (Eos 210 CS, Single Quantum) and a time tagger (Time Tagger Ultra, Swabian Instruments). In addition, we estimate the external differential quantum efficiency, ~1.2%, in the laser structure with a = 574 nm (Fig. 4a, middle). This value needs to be further improved through optimization of the structure 40,41, although it is higher than those of small-size nanolasers 42,43.

Data analysis. To accurately estimate the size of the donut-shaped far-field image (Fig. 3e), we measure the average distance between the center of the donut and the position where the intensity is maximized. This distance is converted to an angle by comparing the far-field image with the reference image corresponding to the numerical aperture of the objective lens used. The experimental Q factor in Fig.
4c is obtained as λ/Δλ, where λ is the peak wavelength and Δλ is the linewidth at the transparency pump power. The transparency pump power L_tr is estimated using the conventional rate equation 30: L_tr = (1/η) ħω_p V_a B N_tr², where η is the fraction of pump power absorbed in the quantum wells, ħ is the reduced Planck constant, ω_p is the angular frequency of the 980-nm pump laser, V_a is the active-region volume, B is the radiative recombination coefficient, and N_tr is the transparency carrier density. The nonradiative recombination coefficients are ignored. Using η = 0.24, V_a = 1.12 × 10⁻¹² cm³, B = 1.6 × 10⁻¹⁰ cm³/s, and N_tr = 1.5 × 10¹⁸ cm⁻³ for our InGaAsP quantum wells, we obtain L_tr ≈ 340 μW (a numerical check of this value is sketched below). In addition, to estimate the spontaneous emission factor of the super-BIC laser (Fig. 5c), the logarithmic gain g(N) = g_0 (c/n_eq) ln(N/N_tr) is assumed in the rate equations, where N is the carrier density, g_0 is the gain coefficient, c is the speed of light, and n_eq is the effective refractive index 30. The following parameters are used for fitting: g_0 = 3000 cm⁻¹ (ref. 44) and n_eq = 2.3.

Data availability
The data that support the findings of this study are available from the corresponding authors upon request.
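As referenced above, the transparency pump power can be checked numerically from the parameter values quoted in the Data analysis paragraph; a minimal sketch follows. The unit bookkeeping mixes SI for ħω_p with cm-based volumes and densities, which is consistent because the cm³ factors cancel.

```python
# Quick numerical check of L_tr = (1/eta) * hbar*w_p * V_a * B * N_tr^2
# using the parameter values quoted in the Data analysis paragraph above.
import numpy as np

hbar = 1.0545718e-34        # J*s
c = 2.998e8                 # m/s
lam_pump = 980e-9           # 980-nm pump wavelength
w_p = 2 * np.pi * c / lam_pump

eta = 0.24                  # absorbed fraction in the quantum wells
V_a = 1.12e-12              # active-region volume, cm^3
B = 1.6e-10                 # radiative recombination coefficient, cm^3/s
N_tr = 1.5e18               # transparency carrier density, cm^-3

# cm^3 * (cm^3/s) * cm^-6 = 1/s; multiplying by hbar*w_p (J) gives watts.
L_tr = (1 / eta) * hbar * w_p * V_a * B * N_tr**2
print(f"L_tr = {L_tr * 1e6:.0f} microwatts")   # ~340 uW, matching the value in the text
```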
Cytogenetic monitoring of human populations at risk in Egypt: role of cytogenetic data in cancer risk assessment.

Somatic mutation plays a critical role in carcinogenesis. Numerous environmental agents can increase the probability that somatic mutation will occur. The use of genotoxicity testing is essential for assessing potential human toxicity so that hazards can be prevented. Cytogenetic monitoring of human populations exposed to chemicals has proved to be a useful tool for detecting chemical mutagenic effects. Cytogenetic analyses of human chromosomes in peripheral lymphocytes allow direct detection of mutation in somatic cells. Different methods can be used for chromosomal analysis (conventional chromosomal analysis, sister chromatid exchange, micronucleus frequency detection). Micronucleus frequency can be detected either in peripheral blood lymphocytes or in exfoliated cells. Different examples of human population studies are presented. Several problems that are encountered in biomonitoring studies are discussed. These studies should help us learn about individual exposure assessment and biologically relevant doses, leading to quantitative assessment of human cancer risks.

Introduction
Although neoplasia is familiar in human populations, possible mechanisms for its causation remain limited to speculation. Relatively few situations exist in which the onset of cancer has been closely associated with a specific causative agent, either environmental or genetic. For example, mesothelioma is now considered strongly suggestive of exposure to asbestos. Humans are exposed to a large number of genotoxicants via ingestion, respiration, or absorption through the skin. Human exposure patterns are complicated with respect to exposure to single agents or complex mixtures. The situation is further complicated by agents acting synergistically or with inhibitory effects. An enormous number of chemical or physical agents have been shown in experimental systems to be mutagenic or carcinogenic or both. Mutagenesis and carcinogenesis are frequently grouped together in discussions of possible risks to human health. Identification of mechanisms of cancer development has involved consideration of the somatic mutation hypothesis on the basis of the widespread occurrence of chromosomal abnormalities in cancer cells. Subsequent correlations between the mutagenicity and the carcinogenicity of radiation and chemicals have provided considerable support for the hypothesis (1).

Human Biomonitoring of Exposure to Genotoxicants
Epidemiologic studies on cancer in humans are necessary for risk assessment approaches. However, the epidemiologic approach is limited for two main reasons: first, only relatively high risks can be detected, and second, the observations on end effects are the consequence of exposures that may have occurred several decades earlier. Improved epidemiology ideally needs direct and accurate estimates of individual exposures. Biomonitoring has become an essential part of exposure assessment; its special objective is to define biologically relevant doses. It may be relevant to look for early effects directly in the exposed individuals or groups, especially from high-exposure occupations (2). Currently, there is a need for multidisciplinary studies to evaluate the effects of different genotoxicants. A general strategy for risk assessment of environmental genotoxicants is to determine the biological characteristics of genotoxicants and their toxic activity toward the genetic material.
Proper and relevant methods for genotoxicity assessment need to be used both at the experimental level (3,4) and at the human exposure assessment level (Table 1). Molecular epidemiology aims at combining laboratory and epidemiologic tools in a more analytical and sensitive approach for individual exposure assessment (5).

Cytogenetic Methods for Testing the Effect of Genotoxicants
Cytogenetic methods have become widely used in analyzing human chromosomes and screening various mutagens for their effects directly on human somatic cells in vivo (6).

Cytogenetic End Points
Conventional or Classical Cytogenetic Technique. Homogenous staining of chromosomes was the first classical cytogenetic technique for testing the mutagenic effects on human chromosomes. This technique permits rapid overall analysis of tested cells, i.e., checking the chromosomal number and registering chromosomal damage. The typical postradiation chromosomal aberrations are deletions, terminal and interstitial; dicentrics and rings, with or without fragments; translocations; and inversions. The aberrations induced by chemicals are quite different from postradiation changes. The most frequent of these aberrations are the chromatid and chromosome breaks. Chromatid and chromosomal exchanges and rearrangements are rare. Some types of chromosomal aberrations escape detection by this conventional technique, for example, peri- and paracentric inversions, small deletions, and reciprocal translocations. These can be detected by banding techniques. The automatic chromosomal analysis offers the possibility to screen systematically large groups of occupationally exposed people. However, this method is still in the experimental stage. It is expensive and is not yet ready for widespread use.

Harlequin Technique. The harlequin technique was described as a sensitive, convenient method for routine chemical mutagenicity testing (7). An increased number of sister chromatid exchanges (SCE) was noticed even when the dose of a chemical was below the concentration required for detecting the classical chromosomal aberrations. The increase in incidence after radiation in vitro and in vivo is very low. The detection of SCE in cells requires a lower level of technical skills and is more rapid than scoring the classical aberrations; SCEs occur by different mechanisms than the classical aberrations; therefore, we are testing two different types of DNA damage. For these reasons, the harlequin technique is convenient for large screening studies.

Micronucleus Technique. IN BINUCLEATED PERIPHERAL BLOOD LYMPHOCYTES. The micronucleus testing procedure has a number of important advantages over the analysis of metaphase chromosomes. Preparing cells as well as scoring slides is simpler and quicker than chromosome analysis, but not at the expense of accuracy. The micronucleus (MN) test is at least as sensitive as the metaphase method (chromosomal breakage analysis). In addition, MN can be used to screen for effects on the spindle apparatus. It is highly suitable for routine toxicological screening (8). Micronuclei enclose acentric chromosome fragments or whole chromosomes that have not been incorporated in the main nuclei at cell division. Micronuclei require one cell division to be expressed. Consequently, the conventional MN technique (8) is imprecise because the cells that have undergone only one division and contain the micronuclei cannot be identified separately from the total population of lymphocytes.
To overcome this problem, cytokinesis was blocked using cytochalasin B. Cytokinesis-blocked cells are easily recognizable by their binucleated appearance. They are dividing cells that have completed nuclear but not cytoplasmic division (9).

IN EXFOLIATED HUMAN CELLS. This approach involves a modification of the micronucleus test for use on exfoliated human cells (MEC test). Micronuclei present in these cells represent chromosomal breakage occurring in the dividing cell populations of the basal epithelial layers. The daughter cells containing the micronuclei migrate up through the epithelium and are exfoliated. The biological relevance of employing the MEC test to study carcinogen-exposed populations is that the approach serves as an endogenous dosimeter of genotoxic damage directly in the tissue that is the target for the carcinogens and from which tumors will later arise. Furthermore, the fact that the assay provides an estimate of the frequency of chromosomal breakage is itself significant. Although the MEC test will not indicate whether a specific required chromosome change has occurred in a carcinogen-exposed tissue, a chronic increase in the level of MEC in a tissue may suggest an increase in the probability of the necessary chromosomal change occurring (10).

Relevance of Cytogenetic Damage
Evidence suggests chromosomal changes may be intrinsically linked to cancer development. Chromosomal instability is characteristic of dysplasias and many premalignant conditions, and specific chromosomal aberrations appear to be associated with many types of cancer (11). These abnormalities are usually represented by a translocation or a loss of a chromosome band. Furthermore, individuals with genetic syndromes in which chromosomal breakage rates are elevated, such as the chromosomal breakage syndromes Bloom's syndrome, ataxia-telangiectasia, Fanconi's anemia, and xeroderma pigmentosum, are also characterized by an increase in the risk for cancer (12,13). Finally, a role for chromosomal breakage, translocation, or loss is implicated in the sequence of events leading to the development of neoplasia, as such changes can activate oncogenes or result in the loss of tumor anti-oncogenes or suppressor genes (14,15).

Studies of Humans Exposed to Chemicals
Population cytogenetic monitoring is one of the ways in which the effects of environmental mutagens may be detected in man. In Egypt, workers who have undergone long-term occupational exposure to high levels of a test chemical are presumed to be at high risk. A cytogenetic analysis is suitable for this purpose. A small peripheral blood sample provides enough cells to be scored for chromosomal aberrations. Blood collection is technically easy, and it is therefore possible to carry out repeated periodic sampling of exposed workers.

Examples of Monitoring Studies
Several studies have been conducted in Egypt for cytogenetic monitoring (Fig. 1).

Cytogenetic Effects in Traffic Policemen. The aim of this study was to evaluate the cytogenetic effects in humans exposed to automobile exhaust. The induction of chromosomal damage was studied in an exposed group of 28 traffic policemen with exposure of more than 10 years and a control group of 15 policemen trainers from the Faculty of the Police. The percentage of chromosomal aberrations (7.7 ± 3.1) as well as the mean sister chromatid exchanges (7.5 ± 3.4) were significantly higher among the traffic policemen than in the control group.
The percentage of chromosomal aberrations was 2.8 ± 2.1 and the mean sister chromatid exchanges was 4.8 ± 2.9 in the control group. The cause of this elevated chromosome damage is most likely exposure to pollutants from automobile exhaust; however, the increase is not correlated with the blood lead level or the duration of employment. On the other hand, the increase in chromosome damage among the traffic policemen is enhanced further by smoking (16).

Cytogenetic Study in Workers Occupationally Exposed to Mercury. This study was conducted to evaluate the cytogenetic effects in male workers exposed to mercury fulminate. A total of 29 male workers and 29 age- and sex-matched controls were examined. The mean mercury level in urine of the exposed workers was 123.2 ± 54.1 µg/L compared to 39.2 ± 11.1 µg/L in the control group. The difference was statistically significant (p < 0.001). Metaphase chromosomes were studied. Micronucleated peripheral blood lymphocytes were also analyzed in cytochalasin B-blocked binucleated lymphocytes. The percentage of metaphases with chromosomal aberrations was significantly higher (p < 0.001) in the exposed group (6.1 ± 2.3) compared to the control group (2.8 ± 0.7). The chromosomal aberrations detected were in the form of gaps, breaks, and fragments. A significant increase in the incidence of micronucleated lymphocytes was found among the exposed group (7.1 ± 4.2) compared to the control group (5.4 ± 2.2) (p < 0.01). The detected chromosomal damage correlated neither with the duration of exposure nor with the urinary mercury level (17).

Cytogenetic Study among Workers Packing Pesticides. The aim of this study was to investigate cytogenetic changes among workers packing a variety of pesticides. Twenty-eight workers from two companies in Egypt were selected for this study. Exposed workers were matched by age and sex to 20 controls, who worked as clerks. Duration of exposure was 12.9 ± 6.2 years. Lymphocyte cultures were set up and harvested at 48 hr for the chromosomal aberration assay and at 72 hr for the sister chromatid exchange assay. The mean frequency of chromosomal aberrations was 4.58% among the exposed group versus 2.55% among controls. The difference was statistically significant (p < 0.05). The exposed workers who were smokers showed an elevated frequency of aberrations compared to nonsmokers (5.07% and 3.85%, respectively) (p > 0.05). No significant correlation was observed between chromosome aberrations and duration of exposure. Types of aberrations were mainly chromatid gaps and breaks. Sister chromatid exchanges were not significantly elevated among the exposed group compared to the controls (18).

Cytogenetic Study in Nurses Occupationally Exposed to Antineoplastic Drugs. This study evaluated the effects of low-level occupational exposure of nursing personnel to antineoplastic drugs. Twenty nurses who constantly handled these drugs and 20 controls matched according to age and sex were examined. Micronucleated peripheral blood lymphocytes were analyzed in cytochalasin B-treated binucleated lymphocytes. Metaphase chromosomes were also studied. A significant increase in micronuclei (p < 0.001) was found for nurses (10.05 ± 4.71) as compared to the matched controls (5.42 ± 2.22). The number of micronucleated lymphocytes was significantly related to the duration of exposure (p < 0.001). The percentage of metaphases with chromosomal aberrations was significantly higher (p < 0.05) in the exposed group (6.1 ± 2.7) compared to the control group (2.6 ± 1.6).
The detected chromosomal aberrations were in the form of gaps, breaks, and fragments (19).

Cytogenetic Effect of In Utero Exposure to Diagnostic Ultrasound on Maternal and Fetal Lymphocyte Chromosomes. In this study, metaphase chromosomes were studied from 16 women and 16 fetuses exposed in utero to diagnostic ultrasound. The exposure ranged from one to ten times during different periods of gestation. Sixteen unexposed women and 18 fetuses were investigated as control groups. The detected chromosomal aberrations were in the form of gaps, breaks, and fragments. The changes (including gaps) in the exposed groups (1.19% maternal, 0.67% fetal) were not statistically significant compared to the control groups (1.89% maternal, 1.61% fetal). Excluding gaps, the changes were also not significant: the percentage of chromosomal changes was 0.4% in the maternal exposed group and 0.29% in the fetal exposed group, whereas in the control groups the percentages were 0.83% and 0.3% in the maternal and fetal groups, respectively. No correlation could be detected between chromosomal aberrations and the frequency or the type of exposure (20).

Chromosomal Breakage in Urothelial Cells of Individuals with Schistosoma haematobium Infections. Individuals infected with the parasite Schistosoma haematobium have an elevated risk of bladder cancer. The underlying mechanism by which this elevation in cancer risk is produced is unresolved. The aim of this research was to determine whether inflammatory reactions triggered by these chronic infections produce DNA damage in urothelial cells, a possible mechanism whereby precancerous changes could be induced. The study was based in a village in the Fayoum governorate in Egypt. The prevalence of infection in this village is approximately 40%. The study involved sampling 50 infected males who had Schistosoma ova in their urine. The urine was centrifuged, and the pellet, containing exfoliated urothelial cells, was fixed in methanol. As controls for these individuals, 25 local villagers were matched with the study group. These villagers had no clinical symptoms of schistosomiasis and no ova or inflammatory cells in their urine. A second control group was taken from 25 individuals of the same socioeconomic status from Cairo. The mean micronucleus frequency for urothelial cells in the infected population was significantly greater than that observed in either control population. However, interindividual variation was observed among the studied groups. Further studies examining the effect of antischistosomal therapy on micronucleus frequencies in these infected individuals are currently ongoing (manuscript in preparation).

Micronucleus Frequencies in Exfoliated Cells Obtained from Different Sites in the Oral Cavity of Xeroderma Pigmentosum Patients. Xeroderma pigmentosum (XP) patients are predisposed to skin cancer. They also have a high prevalence of squamous cell carcinoma of the tip of the tongue. Mechanistically, this enhanced risk has been attributed to a defect in the repair of DNA damage induced by ultraviolet rays from sunlight. To determine whether a relationship exists between exposure to ultraviolet light and the level of chromosomal breakage occurring in epithelial tissue, the exfoliated cell micronucleus test was applied to different sites in the oral cavity of five XP patients. These sites were the right buccal mucosa, the left buccal mucosa, the dorsal tip of the tongue, and the palate. Five controls matched according to age and sex were examined concurrently.
The frequency of micronucleated cells was determined in each exfoliated cell sample. An unequal distribution of the frequency of micronucleated cells was found among the different sample sites of the oral cavity in XP patients, with the greatest elevation in frequency among cells collected from the dorsal tip of the tongue. In contrast, the frequency of micronucleated cells did not vary among samples from different sites obtained from the controls. Frequencies in samples from the tip of the tongue were significantly lower in controls than in XP patients (p < 0.05). This observation may have some significance because this site in the oral cavity would receive the greatest sunlight exposure. These data suggest that the exfoliated cell micronucleus test can be used to study the extent to which genotoxic damage in a tissue results from the complex interplay of host and environmental factors (manuscript in preparation).

Problems in Cytogenetic Monitoring Studies of Populations Exposed Environmentally
Different Individual Sensitivity
Different individual sensitivity is not surprising because the mutagen undergoes a long and complicated process between the first contact with the human body and the appearance of the ultimate chromosomal change, which we are able to detect as a chromosomal aberration in the lymphocyte. According to Vogel (21), this process proceeds at three levels: a) entrance of the mutagen into the human body and its excretion, which have a strong influence on the actual dose of the mutagen, i.e., its concentration in different tissues; inherited or acquired changes in the function of the lungs, digestive tract, kidneys, and other organs can change the quantitative effects of the mutagen; b) metabolism of the mutagen; the detoxicating or activating ability of the liver and the other organs in the body is important and can be influenced genetically or by environmental factors; and c) direct contact with DNA; each individual most probably differs in the ability to repair DNA damage.

Problem of Valid Controls
An evaluation of chromosomal changes detected in the cells of exposed workers requires reliable control data. It is more convenient to use cells from the same person before exposure as the control and then, if possible, at various time intervals during and after exposure to the chemical tested (22). This avoids the difficulties encountered with differing sensitivities of the individuals tested. Such controls are optimal, but they are seldom available when occupationally exposed persons are tested. Therefore, blood samples from other persons must be used as controls. If possible, the controls should be matched in terms of sex, age, and race. Control blood samples should be cultured under exactly the same conditions as samples from exposed people, and slides should be coded before scoring. The average percentage of aberrant cells in peripheral blood lymphocytes of healthy adults who are not exposed to unusual doses of mutagens is 1 or 2% in most laboratories if gaps are not included. Young children and infants usually have less than 1% aberrant cells (5).

Confounding Factors
A number of factors, such as the use of drugs, alcohol consumption, smoking habits, radiation, and viral or other infections in the last 3 months before sampling, need to be taken into account because they can have a profound influence on the results. Individuals who do not recall exposure to any of these factors may nevertheless have an elevated level of chromosomal aberrations.
In addition, individuals exposed to elevated levels of radiation and chemicals in the remote past are not suitable as controls because some types of aberrations may survive for years in the human body (1).

Number of Persons Examined
The use of chromosomal aberrations for monitoring means that we are studying the response of single, randomly chosen cells; each aberrant cell is a member of the lymphocyte population. Therefore, it is possible to obtain significant data from a few persons if enough randomly chosen cells are scored. Cytogenetic monitoring is, in fact, the monitoring of a population (cells) in a population (group of people) (6).

Evaluation of the Real Consequence of Chromosomal Damage
The danger of the neoplastic process is supposed to be greater in individuals with clones of aberrant cells. Even if the persons with high levels of aberrant cells are healthy, we must accept this finding as an indication that these persons have been exposed to some kind of mutagen. The interindividual variability to chemicals makes the proper dose estimate from chromosomal changes impossible. Therefore, chromosomal aberrations are at present only a qualitative indicator of mutagenic effects.

Possible Use of Results from Cytogenetic Studies in Occupational Health Service
If it appears that chromosomal aberrations have increased, exposure must be reduced because these individuals may also have other types of genetic damage (e.g., point mutation or chromosomal aberrations in germinal cells). Workers should be advised not to plan to conceive a child within 1 year for women or within the next 3 months for men (5). Results of some studies have demonstrated that chromosomal changes are reversible and that chromosomal aberrations may revert to control levels in 2 to 3 years after reducing exposure to the mutagen.

Obtaining Statistically Significant Results and Predicting Future Health Effects
An additional problem in cytogenetic surveys of populations exposed environmentally is in obtaining statistically significant results with regard to very low-level exposures and then determining the meaning of such results for health effects in the population as a whole and in the individuals on which the observations were made. Thus, although chromosomal changes are an indicator of cellular genetic damage in a population, they cannot be used quantitatively to predict future health effects for a given individual (25).
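The group comparisons reported in the monitoring studies above can be reproduced from the published summary statistics alone. A minimal sketch follows, using Welch's two-sample t-test on the traffic-policemen chromosomal-aberration data; the group sizes, means, and standard deviations are those reported above, but the choice of Welch's test (rather than whatever test the original investigators applied) is an assumption made for illustration.

```python
import math

def welch_t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Welch's two-sample t statistic and degrees of freedom from summary statistics."""
    se1, se2 = sd1**2 / n1, sd2**2 / n2
    t = (m1 - m2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df

# Percentage of cells with chromosomal aberrations: 28 exposed traffic policemen
# (7.7 +/- 3.1) versus 15 controls (2.8 +/- 2.1), as reported in the study above.
t, df = welch_t_from_summary(7.7, 3.1, 28, 2.8, 2.1, 15)
print(f"t = {t:.2f}, df = {df:.1f}")   # a large t, consistent with the reported significance
```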
Sampling for Remote Estimation of an Ornstein-Uhlenbeck Process through Channel with Unknown Delay Statistics

In this paper, we consider sampling an Ornstein-Uhlenbeck (OU) process through a channel for remote estimation. The goal is to minimize the mean square error (MSE) at the estimator under a sampling frequency constraint when the channel delay statistics is unknown. Sampling for MSE minimization is reformulated into an optimal stopping problem. By revisiting the threshold structure of the optimal stopping policy when the delay statistics is known, we propose an online sampling algorithm that learns the optimum threshold using a stochastic approximation algorithm and the virtual queue method. We prove that, with probability 1, the MSE of the proposed online algorithm converges to the minimum MSE that is achieved when the channel delay statistics is known. The cumulative MSE gap of our proposed algorithm compared with the minimum MSE up to the $(k+1)$-th sample grows at rate at most $\mathcal{O}(\ln k)$. Our proposed online algorithm provably satisfies the sampling frequency constraint. Finally, simulation results are provided to demonstrate the performance of the proposed algorithm.

I. INTRODUCTION
With the rapid development of autonomous vehicles [1] and intelligent machine communications [2], status update information (e.g., the speed of a vehicle) is becoming a major part of future communication networks [3]. This status information is delivered to the destination through communication channels, and to guarantee system safety and efficient control, it is necessary to ensure that the controller has an accurate estimate of the system state. To measure information freshness at the destination, the metric Age of Information (AoI) has been proposed in [4]. According to its definition, AoI measures the difference between the current time and the generation time of the latest information received at the destination. Previous work [5], [6] has shown that AoI minimization is different from traditional throughput and delay optimization. Specifically, in the data generation procedure, a new data sample should be taken only when the data stored at the destination are old. Numerous studies have been conducted to minimize the AoI in various networks [4]-[11]. Average AoI optimization in the queueing system is studied in [4], [7]. Age-optimal scheduling policies in a multi-user wireless network are also investigated in [9]-[12]. For minimizing the more general nonlinear age function, [6], [8] also design optimal sampling strategies. However, when the signal model is known, AoI itself cannot reflect the signal evolution. As an alternative, a better metric to capture information freshness at the destination is the mean square error (MSE) [13]-[21]. The sampling strategy to minimize the estimation MSE of a Wiener process is studied in [14], [15], [20]. Sampling strategies to minimize the MSE of an Ornstein-Uhlenbeck (OU) process are investigated in [14], [21]. It is revealed that the optimum sampling threshold depends on the signal evolution and the channel delay statistics. When the channel delay statistics is known, the aforementioned optimum sampling thresholds can be computed numerically by fixed-point iteration [19] or bisection search [20], [21].
When the channel statistics of the communication link is unknown, finding the optimum policy (i.e., the optimum AoI [6] or signal difference threshold [20], [21]) is challenging. Designing an adaptive sampling and transmission strategy under unknown channel statistics for data freshness optimization can be formulated as a sequential decision-making process [22]-[29]. Based on the stochastic multi-armed bandit, [22]-[24] design online channel selection algorithms to minimize the average AoI for the ON-OFF channel with unknown transition probability. For channels with more efficient communication protocols, [30]-[32] use reinforcement learning to minimize the AoI under unknown channel statistics. For communication channels with random delay, [28], [29], [33] apply the stochastic approximation method to design adaptive sampling algorithms that optimize the AoI. The stochastic approximation method can also be extended to online estimation of signals with a simple evolution model, i.e., the Wiener process [34].

Notice that the Wiener process is the simplest time-varying signal model, and we are interested in extending the results to handle more general and complex signal models. In this paper, we consider a point-to-point link with a sensor sampling an OU process and transmitting the sampled packets to the destination through a channel with random delay for remote estimation. Our goal is to design an online sampling policy that minimizes the average MSE under a frequency constraint when the channel statistics is unknown. The main contributions of this work are listed as follows:
• We reformulated the MSE-minimum sampling problem under unknown channel statistics as an optimal stopping problem by providing a novel frame-division algorithm that is different from [21]. This novel approach of frame division enables us to propose an online sampling algorithm that learns the optimal threshold adaptively through stochastic approximation and the virtual queue method.
• When there is no sampling frequency constraint, we proved that the expected average MSE of the proposed algorithm converges to the minimum MSE almost surely. Specifically, we first utilized a property of the OU process to bound the threshold parameter (Lemma 2 and Lemma 6), and then we proved that the cumulative MSE regret grows at a rate of O(ln K), where K is the number of samples taken (Theorem 2).
• When there exists a sampling frequency constraint, by viewing the sampling frequency debt as a virtual queue, we proved that the sampling frequency constraint is satisfied in the sense that the virtual queue is stable (Theorem 3).
The rest of the paper is organized as follows. In Section II, we introduce the system model and formulate the MSE minimization problem. In Section III, we reformulate the problem into an optimal stopping optimization and then propose an online sampling algorithm. The theoretical analysis of the proposed algorithm is provided in Section IV. In Section V, we present the simulation results. Finally, conclusions are drawn in Section VI.

A. System Model
As depicted in Fig.
1, we study a status update system similar to [21], where a sensor observes a time-varying process and sends the sampled data to a remote estimator through a channel. Let X_t ∈ R, ∀t ≥ 0, denote the value of the time-varying process at time t. To model such time-varying first-order auto-regressive processes, we assume X_t to be an OU process in this work. This general process is the only nontrivial continuous-time process that is stationary, Gaussian, and Markovian [35]. The OU process evolution, parameterized by µ, θ, σ ∈ R+, can be modeled by the following stochastic differential equation (SDE) [35]:

dX_t = θ(µ − X_t) dt + σ dW_t,

where W_t is a Wiener process.

Fig. 1. A point-to-point status update system (sensor, channel with random delay and ACK feedback, estimator).

Suppose the sensor can sample the process at any time t ∈ R+ at will. Let S_k be the sampling time-stamp of the k-th sample. Once sample k is transmitted over the channel, it experiences a random delay D_k ∈ [0, ∞) before reaching the destination. We assume the transmission delays are independent and identically distributed (i.i.d.) following a probability measure P_D. Due to the interference constraint, only one sample can be transmitted over the channel at a time. Once the transmission of an update finishes, an ACK signal is sent back to the sensor immediately and without error. Let R_k be the reception time of the k-th sample. Then R_k can be computed iteratively by

R_k = max(S_k, R_{k-1}) + D_k.   (1)

B. Minimum Mean Squared Error (MMSE) Estimation
The receiver attempts to estimate the value of X_t based on the received packets and the transmission results before time t. Let i(t) = max_{k∈N} {k | R_k ≤ t} be the index of the latest received sample at time t. The evolution of X_t can be rewritten using the strong Markov property of the OU process [21, equation (8)], as in (2). Let H_t denote the historical information (the received samples and transmission results) up to time t. Then, the MMSE estimator at the destination is the conditional expectation [36]:

X̂_t = E[X_t | H_t] = X_{S_{i(t)}} e^{-θ(t - S_{i(t)})} + µ(1 - e^{-θ(t - S_{i(t)})}).   (3)

Combining (2) and (3), the instant estimation error at time t, denoted by ∆_t = X_t − X̂_t, can be viewed as an OU process starting at time t = S_{i(t)}.   (4)

To better illustrate the MMSE estimation, we draw Fig. 2 as an example. The blue line is a sample path of an OU process, and the orange line is the MMSE estimator computed by (3). The difference between these two lines, i.e., the shaded area, is the cumulative estimation error between the two samples.

Fig. 2. Illustration of the OU process and the estimation error (frames 1 and 2 with waiting times W_1 and W_2).

C. Optimization Problem
The goal of the sampler is to find a sampling policy, represented by a series of sampling times π := {S_1, S_2, ...}, that minimizes the estimation MSE of the OU process at the destination. We assume that the sampler knows the statistical information of the OU process, i.e., the parameters θ, µ, σ, while the channel delay statistics P_D is unknown. Here we focus on the set of causal sampling policies, denoted by Π. The sampling time S_k selected by each policy π ∈ Π is determined only by the historical information; no future information can be used for the sampling decision. Moreover, due to hardware constraints and energy conservation, the average sampling frequency during the transmission should be below a certain threshold f_max. Then, the optimization problem can be formulated as

Problem 1 (MSE Minimization): mmse := inf_{π∈Π} lim sup_{T→∞} (1/T) E[∫_0^T (X_t − X̂_t)² dt],   (5a)
subject to the long-run average sampling frequency being at most f_max.   (5b)
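To make the model concrete, the OU sample path and the hold-and-decay MMSE estimate described above can be simulated directly. The sketch below assumes an Euler-Maruyama discretisation of the SDE and uses purely illustrative (hypothetical) sampling times and delays; the prior-mean initialisation of the estimate before the first reception is also an assumption of the sketch.

```python
import numpy as np

# Euler-Maruyama simulation of dX = theta*(mu - X)dt + sigma*dW, together with the
# MMSE estimate, which holds the latest received sample and relaxes it toward mu.
theta, mu, sigma = 0.2, 3.0, 1.0
dt, T = 1e-2, 200.0
rng = np.random.default_rng(1)

n = int(T / dt)
t = np.arange(n) * dt
X = np.empty(n); X[0] = mu
for i in range(1, n):
    X[i] = X[i-1] + theta * (mu - X[i-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Hypothetical sampling times S_k and delays D_k, so that sample k is received at R_k.
S = np.array([10.0, 60.0, 120.0])
D = np.array([5.0, 8.0, 3.0])
R = S + D

X_hat = np.full(n, mu)              # before any reception, use the prior mean (assumption)
for S_k, R_k in zip(S, R):
    idx = t >= R_k                  # after reception, use the sample taken at S_k
    tau = t[idx] - S_k              # elapsed time since the sample was generated
    X_sk = X[int(S_k / dt)]
    X_hat[idx] = X_sk * np.exp(-theta * tau) + mu * (1 - np.exp(-theta * tau))

print(f"empirical time-averaged estimation MSE: {np.mean((X - X_hat) ** 2):.3f}")
```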
III. PROBLEM RESOLUTION
In this section, we first reformulate Problem 1 into an optimal stopping problem. Then, an online sampling algorithm is proposed to approach the optimal mmse.

A. Optimal Stopping Problem Reformulation
Notice that Problem 1 is a constrained continuous-time Markov decision process (MDP) with a continuous state space. It has been proven in [21, Lemma 6] that it is sub-optimal to take a new sample before the last packet is received by the receiver. In other words, to achieve the optimal mmse, the sampling time-stamp S_k should be larger than R_{k-1}. Then (1) simplifies to R_k = S_k + D_k. Let W_k := S_{k+1} − R_k ≥ 0 be the waiting time before taking the (k + 1)-th sample. Then, designing a sampling policy π = {S_1, S_2, ...} is equivalent to choosing a sequence of waiting times {W_1, W_2, ...}. To facilitate further analysis, define frame k to be the time interval between S_k and S_{k+1}. We then introduce the following lemma to reformulate Problem 1 into a packet-level MDP.

Lemma 1: Define I_k = (D_k, {X_t}_{t≥S_k}) to be the information in frame k, and Π_r to be the set of stationary sampling policies whose W_k depends only on I_k. Let D be the random delay following distribution P_D. Then Problem 1 can be reformulated into the following MDP, Problem 2 (Packet-level MDP Reformulation), where O_t is an OU process with initial state O_0 = 0 and parameter µ = 0, i.e., the solution to the SDE dO_t = −θ O_t dt + σ dW_t. Moreover, the optimum value α⋆ satisfies (8). The proof of Lemma 1 is provided in Appendix B.

Assumption 1: The expectation of the delay D_k is bounded and known to the transmitter.

Lemma 2: Define Ŵ = 1/f_max + c, where c > 0 is an arbitrary constant. If Assumption 1 is satisfied, then α⋆ can be bounded as α_lb ≤ α⋆ ≤ α_ub, where α_lb and α_ub can be chosen as in (11) and (12). The proof of Lemma 2 is provided in Appendix C. The lower bound is obtained by constructing a feasible, constant sampling policy whose waiting time is always Ŵ and then using (6a). The constant c is introduced to ensure Ŵ > 0 when there is no frequency constraint. The upper bound is obtained by using (8) and the fact that mmse ≤ σ²/(2θ).

B. Optimal Sampling with Known P_D
In the sequel, we derive the optimum policy π⋆ that achieves the optimal mmse when P_D is known. The structure of the optimal policy helps us design the algorithm under unknown channel statistics, and the average MSE obtained by π⋆ is used to measure the performance of the proposed online learning algorithm in Subsection III-C. According to (6a), the cost obtained by any policy π that satisfies the sampling constraint (6b) is less than or equal to α⋆. In other words, we have (13). Rearranging (13) (adding α⋆ lim_{K→∞} terms to both sides), we are able to solve Problem 2 by minimizing the objective function of Problem 3, whose objective is (14a) and whose sampling frequency constraint is (14b). Similar to Dinkelbach's method [37] for non-linear fractional programming, we can deduce that the optimal value ρ⋆ of Problem 3 equals 0, and that the optimum policies achieving mmse in Problem 1 and ρ⋆ in Problem 3 are identical. Therefore, we proceed to solve Problem 3 using the Lagrange multiplier approach. Let λ ≥ 0 be the Lagrange multiplier of the sampling frequency constraint (14b); the Lagrange function L(π, λ) for Problem 3 is given in (15). Notice that the transmission delay D_k is i.i.d., and O_t is an OU process starting at time t = 0.
Then, for fixed λ, selecting the optimum waiting time W_k to minimize (15) becomes a per-sample optimal stopping problem: finding the optimum stopping time w that minimizes the expectation in (16). For simplicity, let V_w = O_{D_k + w} be the value of the OU process at time D_k + w, and let V_0 = O_{D_k} by definition. Then problem (16) is one instance of the optimal stopping problem (17) with β = α⋆ − λ, where E_{v_0} is the conditional expectation given V_0 = v_0. The optimum policy for (17) is given in the following lemma.

Lemma 3: If 0 < β ≤ σ², then the solution that minimizes (17) has a threshold property, i.e., the stopping rule (18), where the threshold v(β) is given by (19) and (20). The proof of Lemma 3 is provided in Appendix D.

[21, Theorem 6] has proven the strong duality of Problem 3, i.e., ρ⋆ = max_λ min_π L(π, λ). For notational simplicity, let o(β) and l(β) denote the expected estimation error and the expected frame length obtained by using threshold β, i.e., by substituting (18). The optimal sampling time S_{k+1} = R_k + W_k for Problem 3 is then given as follows.

Lemma 4 ([21, Theorem 2] Restated): The optimal solution to Problem 1 uses the threshold v(α⋆ − λ⋆), where v(·) is defined in (19), λ⋆ = arg sup_λ L(π, λ) is the dual optimizer, and α⋆ is the solution to equation (22), where we recall that o(β) is the expected squared estimation error and l(β) the expected frame length when using threshold β.

Remark 1: If the frequency constraint is inactive, then by complementary slackness we have λ⋆ = 0, and the threshold becomes v(α⋆). Otherwise, the optimal α⋆ − λ⋆ < α⋆; then, according to (19), the sampling threshold is larger than v(α⋆) so as to satisfy the sampling frequency constraint.

Remark 2: In [21, Theorem 2], the optimum sampling threshold that minimizes the MSE is given by (24), and the optimum sampling threshold is attained when (25) holds, where step (a) holds by (8). Comparing (25) with (19), we find that the two conclusions coincide.

C. Online Algorithm
Notice that the optimal sampling in Section III-B is determined by α⋆ − λ⋆ through equation (19). However, when the channel statistics P_D is unknown, α⋆ and λ⋆ are unknown, making direct computation of v(α⋆ − λ⋆) impossible. To overcome this challenge, we propose an online learning algorithm that approximates these two parameters α⋆ and λ⋆, respectively. Notice that α⋆ is the solution to equation (22) when λ = λ⋆. This motivates us to approximate α⋆ using the Robbins-Monro algorithm [38] for stochastic approximation. For λ⋆, we construct a virtual queue U_k that records the cumulative sampling-constraint violation up to frame k.

Algorithm 1: Online Learning Sampling Algorithm
1: Parameters: V.
2: Initialization: …
3: for each frame k = 1, 2, ... do
…
5: Sampling: according to the last sampling generation time S_k and delay D_k, choose the waiting time W_k as in (27).
6: Update α_k as in (28).
7: Update U_k as in (30).
8: end for

As summarized in Algorithm 1, the proposed algorithm consists of two parts: sampling (step 5) and updating (steps 6 and 7). For the sampling step, the algorithm uses the current estimates α_k and λ_k to compute the threshold. We then update α_{k+1} according to the Robbins-Monro algorithm in (28), where (x)_a^b is the projection of x onto the interval [a, b]; α_lb and α_ub are the lower and upper bounds of α⋆ defined in (11) and (12); and η_k is the step size. For estimating λ⋆, we construct a virtual queue U_k that evolves as in (30), where V > 0 is a hyper-parameter. Notice that 1/f_max − L_k is the violation of the sampling constraint in frame k. Therefore, U_k can be interpreted as the cumulative violation up to frame k. Algorithm 1 attempts to stabilize U_k so as to satisfy the sampling frequency constraint.
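Since the exact expressions (27)-(30) are referenced but not reproduced above, the following sketch only illustrates the overall structure of Algorithm 1 under explicitly assumed forms: a Robbins-Monro correction α_{k+1} = proj(α_k + η_k(E_k − α_k L_k)), a dual estimate λ_k = U_k/V, a virtual-queue update U_{k+1} = max(U_k + 1/f_max − L_k, 0), and a placeholder threshold map v(·). None of these specific forms should be read as the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt = 0.2, 1.0, 1e-2        # OU parameters from the simulation section
f_max, V = 0.02, 500.0                   # frequency constraint and virtual-queue weight
alpha_lb, alpha_ub = 0.0, sigma**2 / (2 * theta)   # assumed bounds standing in for (11)-(12)

def v_threshold(beta):
    # Placeholder threshold map; the true v(.) of (19)-(20) is not reproduced in the text.
    return np.sqrt(max(beta, 1e-9))

def run_frame(threshold):
    """One frame of the packet-level MDP: random delay, then wait until |O_t| >= threshold.
    Returns the cumulative squared error E_k and the frame length L_k."""
    D = rng.lognormal(1.0, 1.0)          # log-normal delay with mu_D = sigma_D = 1
    O, t, E = 0.0, 0.0, 0.0
    while t < D or abs(O) < threshold:   # serve the delay first, then apply the stopping rule
        O += -theta * O * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        E += O * O * dt
        t += dt
    return E, t

alpha, U = alpha_ub, 0.0
for k in range(1, 2001):
    lam = U / V                                   # assumed dual estimate lambda_k = U_k / V
    beta = max(alpha - lam, 1e-6)                 # keep the input of v(.) positive (Remark 3)
    E_k, L_k = run_frame(v_threshold(beta))       # sampling step (step 5)
    eta = 1.0 / k                                 # diminishing Robbins-Monro step size (assumed)
    alpha = float(np.clip(alpha + eta * (E_k - alpha * L_k), alpha_lb, alpha_ub))  # step 6 (assumed)
    U = max(U + 1.0 / f_max - L_k, 0.0)           # step 7: virtual-queue update (assumed)
print("estimated alpha (approximate average MSE):", alpha)
```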
Remark 3: In (28), we choose (α_k − λ_k)^+ to ensure a positive input to v(·). We should also avoid the estimate α_k − λ_k being zero, which would make the threshold v infinite. This requires that the algorithm not choose V too small. In practice, one can also set an arbitrarily small positive value η > 0 as a lower bound for α_k − λ_k to avoid an infinite threshold.

IV. THEORETICAL ANALYSIS
In this section, we analyze the convergence and optimality of Algorithm 1.

Assumption 2: The second moment of the delay D_k is bounded.

First, we assume that there is no sampling frequency constraint, i.e., f_max = ∞ and thus λ = 0. Finally, we will prove that in the general case f_max < ∞, Algorithm 1 still satisfies the constraint.

Theorem 1: The time-averaged MSE (1/S_{k+1}) ∫_0^{S_{k+1}} (X_t − X̂_t)² dt of the proposed online learning algorithm converges to mmse with probability 1.

Theorem 2: Let R_k denote the expected cumulative MSE regret up to the (k + 1)-th sample. Then R_k grows at most as O(ln k); the constant C in the bound, independent of k, is defined in (42).

The proofs of Theorem 1 and Theorem 2 are provided in Appendix E and Appendix F, respectively. Now we consider the sampling frequency constraint. Here we assume that the constraint is feasible, as follows.

Assumption 3: There exist a constant ε > 0 and a stationary sampling policy π_ε that satisfies the sampling frequency constraint with margin ε, where the expectation is taken over the channel statistics and the policy π_ε.

Theorem 3: Under Algorithm 1, the sampling frequency constraint is satisfied. The proof of Theorem 3 is provided in Appendix H.

V. SIMULATION RESULTS
In this section, we provide simulation results to demonstrate the performance of the proposed algorithm. The parameters of the monitored OU process are σ = 1, θ = 0.2, and µ = 3. The channel delay follows the log-normal distribution with µ_D = σ_D = 1. The expected MSE is computed by averaging 100 simulation runs of K = 10⁴ packet transmission frames.

A. Without a Sampling Frequency Constraint
First, we consider the case with no frequency constraint, i.e., f_max = ∞. We compare the MSE performance of the following policies: the proposed online policy π_online, the MSE-optimal policy π⋆ (computed with known delay statistics), the AoI-optimal sampling policy, and the zero-wait policy. The estimation performance is depicted in Fig. 3. From Fig. 3, we can verify that the expected MSE of the proposed policy π_online converges to that of the optimum policy π⋆ and is smaller than that of the signal-agnostic AoI-minimum sampling and zero-wait policies. Previous work [21] has shown that the zero-wait policy is far from optimal when the channel delay is heavy-tailed. As for the AoI-optimal policy, while [20] reveals the relationship between average AoI and estimation error for the Wiener process, it is sub-optimal for MSE optimization of the OU process, and here it performs even worse than the zero-wait policy.

Next, we consider the estimation of the threshold v(α⋆ − λ⋆). Obviously, fast and accurate estimation of the threshold is a necessary condition for the convergence of the MSE. As depicted in Fig. 4, the proposed algorithm approximates the optimal threshold as time goes to infinity. Moreover, the variance of the threshold estimate also becomes small, which guarantees the convergence of the MSE.

B. With a Sampling Frequency Constraint
In this part, we depict the simulation results when a sampling constraint exists. The parameters of the system are the same as in Fig.
3, and we set f_max = 0.02; in other words, the minimum average frame length is 1/f_max = 50. Notice that the zero-wait policy now does not satisfy the sampling constraint. Therefore, we consider a frequency-conservative policy π_freq, whose waiting time W_k is chosen so that the sampling frequency constraint is always respected. We set the parameter V = 500 and depict the MSE performance and average frame length in Fig. 5 and Fig. 6. These two figures verify that the proposed algorithm can approach the lower bound while satisfying the frequency constraint.

Finally, we investigate the impact of V on the MSE performance and the average frame length. We choose three different values, V = {300, 500, 800}, and compare the MSE performance and average frame length, as depicted in Fig. 7(a) and Fig. 7(b), respectively. Generally speaking, the MSE performance of the proposed algorithm with different V all converges to the optimal MMSE, and the average inter-update interval of the proposed algorithm is near the frequency constraint. Notice that V is a hyper-parameter controlling the estimation of the Lagrange multiplier; a larger V indicates less emphasis on the frequency constraint. With the larger value V = 800, the algorithm takes a longer time to converge to the sampling frequency constraint. Because for t < 8000 the sampling frequency of the algorithm slightly violates the sampling frequency constraint, the MSE is smaller during that period.

VI. CONCLUSION
In this work, we studied the sampling policy for remote estimation of an OU process through a channel with transmission delay. We aimed at designing an online sampling policy that minimizes the mean square error when the delay distribution is unknown. Since finding the MSE-minimum sampling policy can be reformulated into an optimal stopping problem, we proposed a stochastic approximation algorithm to learn the optimum stopping threshold adaptively. We proved that, after taking k samples, the cumulative MSE regret of our proposed algorithm grows with rate O(ln k), and that the expected time-averaged MSE of our proposed algorithm converges to the minimum MSE almost surely. Numerical simulations validate the superiority and convergence performance of the proposed algorithm.

Since x ↦ ∫_0^x e^{−t²} dt is monotonically increasing, R_1(v(α)) is monotonically decreasing.

Corollary 1: Recall that the function l(β) = E[D + w(O_D; β)] is the expected frame length when using sampling threshold v(β). When there is no sampling frequency constraint and λ = 0, the function l(α) has the following property: …

Lemma 6: Since E[·] ≤ M_ub and α_k is truncated into the interval [α_lb, α_ub] using Lemma 2, when there is no sampling frequency constraint and λ_k ≡ 0, we have the following bounds for each frame k: … The proof of Lemma 6 is provided in Appendix I-B.

Lemma 7: For fixed λ, the function g_λ(α) = o(α − λ) − α l(α − λ) is continuous, monotonically decreasing, and convex. Moreover, there exists a constant N such that the function g_0(α) … The proof of Lemma 7 is provided in Appendix I-C.

Theorem 4: The estimate α_k computed in Algorithm 1 converges to α⋆ with probability 1, and we have …, where C is a constant independent of k. The proof of Theorem 4 is the same as [39, Lemma 6].
APPENDIX B PROOF OF LEMMA 1 The ultimate goal is to rewrite the averaged MMSE (5a) obtained by a stationary policy as the time-averaged cost of each frame.The waiting time W k set by any stationary policy π can be viewed as a stopping time.The information, i.e., tuple {(D k , ∆ S k+1 )} is a regenerative sequence as the instant estimation error ∆ t , t ≥ S k + D k is an OU process starting from time t = S k .Therefore, for stationary policy, the cumulative estimation error in frame k, i.e., E k := S k+1 S k (X t − Xt ) 2 dt and L k := S k+1 − S k are generative random processes.Then according the renewal-reward theory [40], both the average cumulative MSE in each frame Then according to the renewal reward theory [40], the time averaged MMSE can be computed by: Then to compute the average cost in each frame k, we introduce the following properties of the stopping time of an OU process: Lemma 8 (Lemma 5, [21] Restated): Let O t be an OU process with initial state zero and parameter µ = 0, and τ is a stopping time with E[τ ] < ∞, the integral of O 2 t from 0 to t can be computed by We then proceed to compute the expected cumulative error of stationary policy π using Lemma 8. Notice that the interval [S k , S k+1 ) can then be divided into two intervals [S k , S k +D k ) and [S k +D k , S k +D k +W k ).The cumulative estimation error during [S k , S k + D k ) can be computed as follows: where equation 4) is equivalent to an OU process starting from time t = S k−1 , and the cumulative MSE can be computed by Lemma 8. Notice that the delay distribution Plugging ( 46) into (45), we have: Similarly, the second part of the cumulative MSE, i.e., the cumulative MSE during interval where equation (b) is obtained because the instant estimation error X t − Xt , t ≥ S k + D k is an OU process starting at time S k according to (4).By summing up (47) and (48), we are able to compute the expected cumulative error for stationary policy π: where equality (c) is obtained because the transmission delay D k is i.i.d., and therefore and equality (d) is because: Finally, plugging (49) into ( 43), we have, with probability 1, the time-averaged MSE can be computed by: Notice that optimal value of LHS of ( 51) is indeed mmse.Therefore, the problem is equivalent to mmse Rearranging the terms yields According to [21], we have mmse ≤ σ 2 2θ .Therefore, α ⋆ ≥ 0. APPENDIX C PROOF OF LEMMA 2 Notice that This means Ŵ is a fixed and feasible waiting solution to the problem.Then according to (6a), we have where (a) holds since D ≥ 0 and e −x is decreasing.Combining the above two terms we have For the upper bound, according to [21], we have Plugging ( 53) into (8) yields APPENDIX D PROOF OF LEMMA 3 To solve the problem, From general optimal stopping theory [41, Chapter 1], we know that the following stopping time should be optimal: where v ⋆ is the optimal stopping threshold to be found.We solve ( 17) by the free-boundary approach [41].To find the v ⋆ , we solve the following free boundary problem: where H(v) is the value function of (17).Let S(v) = H ′ (v), equation (55a) implies: on both sides of equation ( 56), we have: where C 1 is a constant so that S(±v ⋆ ) satisfy (55c).Denote Then, Finally, denote G(x) = F (x)/x. 
the optimum threshold v ⋆ can be obtained by: Therefore, we have APPENDIX E PROOF OF THEOREM 1 According to Lemma 6, since α k and E[L k ] is bounded by a function of α, to show that the average MSE 1 S k+1 S k+1 0 (X t − Xt ) 2 dt converges to mmse, it is then suffice to show that sequence converges to 0 almost surely. Our proof is based on the perturbed ODE approach [42, Chapter 7] for analyzing stochastic approximation.To use the ODE approach, first we need to rewrite ξ k in recursive form as follows: where equation (a) is from the definition of ξ k−1 in (66).In equation ( 67), 1/k can be viewed as a step-size of updating ξ k and G k is the updating direction.We can further decompose G k as follows: be the conditional probability given historical information H k−1 .Then according to equation (47), since the transmission delay Similarly, through equation ( 48), the conditional expectation of the G k,2 can be computed by: And the conditional expectation of G k,3 can be computed by: From equations ( 69)-( 71), we can compute the conditional where equation ] by equation (24b).Terms β k,1 , • • • , β k,5 can be viewed as the bias terms in the ODE.Denote be the difference between the actual update and the conditional expectation, and define function: Plugging ( 72) into (67), we have: Denote t 0 = 0 and t k := k−1 j=0 1 j to be the cumulative stepsize sequences.Select m(t) ∈ N + to be the largest integer so that t m(t) ≤ t.To show that the ODE (74) converges to 0 with almost surely, we will then verify the following statements, whose proof are provided in Appendix G: Lemma 9: The updating steps {G k } and the difference sequence {δM k } have the following properties: (a) For each constant N , the expectation (c) For any running time T , the following limit holds for all ξ and µ > 0: The sum of the bias terms defined in (72) satisfies: (f) Function f (ξ, α) can be decomposed into the sum of function of ξ and a function of α, i.e., Since g 0 (α ⋆ ) = 0, we have −ξ = f (ξ, α ⋆ ).Moreover, Finally, according to [42, p. 166, Theorem 1.1], sequence {ξ k } converges to some limits of the ODE: Since function f (•, α ⋆ ) is monotonically decreasing, ξ = 0 is the unique equilibrium point of the ODE (80).Therefore, ξ k converges to 0 almost surely, and the time-averaged MSE converges to the mmse with probability 1. APPENDIX F PROOF OF THEOREM 2 The cumulative regret, i.e., the difference between the expected cumulative MSE using the online algorithm compared with the MSE optimum sampling up to sample (K + 1) can be upper bounded as follows: where equation (a) is obtained by (51) and mse ∞ = σ 2 2θ , and equation 2θ from equation (8).Then to further bound the cumulative regret computed by (81), let W ⋆ k be the waiting time selected by using parameter α ⋆ (i.e., the MSE minimum sampling policy).Then it is suffice to upper bound each term where equation Finally, plugging inequality (82) into (81) for each term k, the cumulative regret R K can be bounded, i.e., where equation (f ) is obtained by Theorem 4. APPENDIX G PROOF OF LEMMA 9 We will verify each statement in Lemma 9 respectively: The first term Since ] is also bounded.This verifies statement (a). 
To proceed with the proof of statement (c) − (f ), we restate the following lemma, whose proof is provided in [34,Appendix G] Lemma 10: Let {ψ k } be a sequence.Then lim k→∞ Pr sup j≥k j i=k 1 i ψ i ≥ µ = 0 holds if one of the following condition is satisfied: (1) ψ k is a martingale sequence and its second order moment is bounded, i.e., (c) According to Lemma 7, since g 0 (α) is monotonic decreasing and convex, the difference |g 0 (α) Therefore, the expectation of f (ξ, α k ) − f (ξ, α ⋆ ) can be upper bounded by: where equality (a) is by Cauchy-Schwartz inequality and equality (b) is from Theorem 4. Since term f (ξ, α k )−f (ξ, α ⋆ ) satisfies condition 2 in Lemma 10, statement (c) is verified.Therefore, to show that statement (d) is satisfied, it is suffice to show that each term δM i,p , p = 1, 2, 3 satisfies condition (1) in Lemma 10. Notice that for fixed ] is bounded, which is shown as follows: where equation (b) is by Cauchy-Schwartz inequality; inequality (c) is from (103).Since δM k,1 meets the first condition in Lemma 10, we have: The difference sequence δM k,2 and δM k,3 only depends on the transmission delay D k and the OU process evolution in frame k.Using similar methods, it can be shown that sequences {δM k,2 } and {δM k,3 } satisfy condition 1 in Lemma 10.Since To show that statement (e) holds, it is suffice to show that each of the bias term satisfy: the second condition in Lemma 10.We will then upper bound the expectation of each bias term E[β k,p ] respectively.The first bias term satisfies E[β k,1 ] = 0 and is hence a martingale sequence.We can bound E Then according to Lemma 6, E satisfies Condition 1, Lemma 10.Therefore, equation (92) holds for p = 1 .The expectation of the second bias term β k,2 can be upper bounded by: Recall that by Theorem 4, and satisfies Lemma 10 condition 2. Equation (92) holds for p = 2. We then proceed to upper bound the expectation of of the third bias term by: g 0 (α k ) as follows: This verifies Condition 2 in Lemma 10 and therefore verifies statement (f). APPENDIX H PROOF OF THEOREM 3 Recall that the sampling debt queue U k evolves as According to [43], in order to satisfy the sampling constraint, it is sufficient to prove that lim sup Here we adopt the Lyapunov drift-plus-penalty method to prove the stability of U k .Define the Lyapunov function as and the Lyapunov drift is defined by First we upper bound U 2 k+1 : Plugging the above inequality into (97) yields Plugging the above equation into (98) and then take the expectation on both sides of (98) yields where (a) holds since D k is independent of U k .Similar to the proof of Lemma 6, we can bound Therefore, we have Now we upper bound the first term of the RHS of (99).According to (16), the waiting time W k is the optimal solution to sup For simplicity, we denote the historical information O D k , D k to be M k−1 . Let W ǫ be the waiting time under policy π ǫ .According to (100), we have Rearranging the terms yields where (a) holds by Assumption 3; (b) holds by Lemma 6 and W ub = 1 fmax + D ub for sufficiently small ǫ.Now we have where is a constant.Summing up from k = 1 to K yields Notice that U 1 = 0 and U k+1 ≥ 0. Thus we have Rearranging the terms yields lim sup For L k , according to Lemma 5 we can bound Since v(α k ) is decreasing function with respect to α k , v(α k ) can be bounded where (a) holds by Lemma 2. Next, we bound R 1 (v) as where (a) holds by n! 
≤ (2n + 1)!!. Then we have
Next we bound
where (a) holds because the waiting time W_k is chosen to optimize (16). We have
For each policy w, the function L(w, α − λ) is a linearly increasing function of α. Then, by taking the infimum, the function inf_w L(w, α − λ) is continuous, concave, and increasing. Therefore, the function g_λ(α) is convex and monotonically decreasing.

Fig. 7. MSE performance and average frame length with different parameter V.

where (a) holds because W_k = 0 if |O_{D_k}| > v(α_k); (b) holds by Lemma 5; (c) holds by (105). Therefore, we have E[D_k E[W_k | D_k]]. Now we just need to bound E[W_k^2 | D_k, α_k, |O_{D_k}| < v(α_k)]. According to (28), W_k is the stopping time at which an OU process exits the bounded set [−v(α_k), v(α_k)] from the initial state O_{D_k}. Denote t_v^(1)(x) and t_v^(2)(x).

B. Proof of Lemma 6

Since O^2_{L_k} is an instance of O^2_{D_k+W_k}, we just bound E[O^2_{D_k+W_k}] and E[O^4_{D_k+W_k}]. Therefore we have E[O^2_{D_k+W_k}] = (σ^2/2θ) E[1 − e^{−2θ(D_k+W_k)}] ≤ α_lb)^2 σ^2 e
2023-08-30T06:42:24.122Z
2023-08-29T00:00:00.000
{ "year": 2023, "sha1": "7dd5225d41b817a2f9e25a61ed422086980ecf10", "oa_license": null, "oa_url": "https://ieeexplore.ieee.org/ielx7/5449605/10323420/10323429.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "7dd5225d41b817a2f9e25a61ed422086980ecf10", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
2047422
pes2o/s2orc
v3-fos-license
An Actor-Based Model of Social Network Influence on Adolescent Body Size, Screen Time, and Playing Sports

Recent studies suggest that obesity may be "contagious" between individuals in social networks. Social contagion (influence), however, may not be identifiable using traditional statistical approaches because they cannot distinguish contagion from homophily (the propensity for individuals to select friends who are similar to themselves) or from shared environmental influences. In this paper, we apply the stochastic actor-based model (SABM) framework developed by Snijders and colleagues to data on adolescent body mass index (BMI), screen time, and playing active sports. Our primary hypothesis was that social influences on adolescent body size and related behaviors are independent of friend selection. Employing the SABM, we simultaneously modeled network dynamics (friendship selection based on homophily and structural characteristics of the network) and social influence. We focused on the 2 largest schools in the National Longitudinal Study of Adolescent Health (Add Health) and held the school environment constant by examining the 2 school networks separately (N = 624 and 1151). Results show support in both schools for homophily on BMI, but also for social influence on BMI. There was no evidence of homophily on screen time in either school, while only one of the schools showed homophily on playing active sports. There was, however, evidence of social influence on screen time in one of the schools, and playing active sports in both schools. These results suggest that both homophily and social influence are important in understanding patterns of adolescent obesity. Intervention efforts should take into consideration peers' influence on one another, rather than treating "high risk" adolescents in isolation.

Introduction

Childhood obesity is epidemic in the U.S. [1,2]. Recent data show that 18.1% of adolescents (ages 12-19 years old) are obese (defined as exceeding the historical 95th percentile of age- and sex-specific body mass index (BMI)) [3]. By contrast, the prevalence of U.S. adolescent obesity in the period 1988-1994 was 10.7% [4]. To reverse the alarming rise in childhood and adolescent obesity, researchers have tried many individual-level prevention strategies, including educating children on healthy eating habits, promoting increased physical activity, and restricting screen time. Most interventions, however, have shown, at most, modest benefit. For example, a 2011 Cochrane Review by Waters and colleagues showed that interventions aimed at reducing obesity in 13- to 18-year-old adolescents lowered BMI by an average of 0.09 kg/m2 [5]. The failure of these interventions, especially those targeting individuals, has spurred researchers to identify social and economic influences and suggest novel population-level interventions [6]. Along these lines, recent studies support an etiologic role for social networks in the production and maintenance of childhood and adult obesity [7,8,9,10]. Social relationships and interactions generally influence behaviors and health outcomes [11,12]. We may represent the pattern of relationships between ''social entities'' as a social network; the entities might be individuals, collectives (such as households), institutions, or governments [13].
Social networks are increasingly regarded as important determinants of health issues as diverse as the spread of human immunodeficiency virus [14] and the ''contagion'' of several conditions including obesity [7], smoking [15,16,17], and even happiness [18]. Valente has further shown the importance of social networks in the diffusion of health-related innovations and behaviors [19]. The specific mechanisms by which networks influence behavior are poorly understood, although norms [11,20], peer modeling [21], and social capital [22] have all been implicated. Notable critiques of the social network ''contagion'' hypothesis have appeared in academic [23,24,25,26] and popular [27] literatures. The key issues highlighted by these critiques are a trio of potential mechanisms that could account for the observed ''network contagion'': 1) social influence; 2) confounding by shared social environments of network members; and 3) social selection or homophily (''love of sameness'') on shared predisposition to engage in (un)healthy behaviors. These mechanisms are not identifiable using traditional statistical approaches. This trio has been a longstanding problem in epidemiology and other fields, and is best articulated by Manski as the ''reflection problem'', because all three mechanisms can mirror one another [28]. The critique is most sharply articulated by Shalizi and Thomas [26]. Using graphical causal models, they show that those aspects of latent traits that lead to either friendship formation or behavior ''must be made observable… In either case the confounding arcs go away, and the direct effect [of peer influence] becomes identifiable'' ( [26], p.218). Several prior studies have employed regression-based approaches to adolescent obesity or BMI using data drawn from the National Longitudinal Study of Adolescent Health [24,25,29]. These studies all claim to control for confounding by holding constant individual background characteristics that influence behavior. Cohen-Cole and Fletcher offer perhaps the best example of a regression-based approach [24] as their model adds controls for environmental confounding using school-specific trends; these controls alone attenuate the association by over 30%. They further extend Christakis and Fowler's model by examining the change in BMI following declaration of friendship using individual fixed effects (FE). The FE model is appealing because it automatically adjusts for all time-invariant background characteristics of individuals, whether or not these characteristics are observed. The stochastic actor-based model (SABM) of Snijders and colleagues provides a means of separating the effects of social influence and friend selection [30,31]. The SABM simultaneously models the evolution of social network structure and the behavior of individuals in the network. In this paper, we apply the stochastic actor-based framework to data on adolescent body size and obesity-related behavior. Our primary hypothesis was that social influences on adolescent body size and obesity-related behavior are independent of peer selection when stratified by the school environment. We predicted that peers exert an influence on one another's BMI, screen time, and playing active sports; these influences are assumed to be localized in the social network and were operationalized as assimilation (i.e., individual becoming more similar to their friends). Methods The Loyola University Chicago Institutional Review Board approved these analyses. 
All subject data were de-identified prior to receipt of the data by the investigators, and the study was deemed ''exempt''. Study Population Data were drawn from the first and second waves of the National Longitudinal Study of Adolescent Health (hereafter, Add Health). Details of the overall study design, including codebooks, may be found elsewhere [32]. Add Health invited all students at 16 schools to participate in a detailed survey conducted in the student's home. Only 2 schools enrolled enough students to permit school-stratified analyses and thus only 2 schools are included in the current study, referred to as ''Jefferson High'' by Bearman [33] and ''Sunshine High'' by Moody (unpublished data). Jefferson High, located in the rural Midwest, is the only public high school in the area, which is critical because friendships can only be identified if they are within the school. Jefferson High is primarily comprised of non-Hispanic white students. Sunshine High is in an urban environment and has substantial racial and ethnic diversity; this makes it an ideal contrast to the more homogeneous population of Jefferson High. The total student participation rates were 776 (75.8%) at Jefferson High and 1744 (82.9%) at Sunshine High. Wave 1 was collected during the 1994-1995 school year a follow-up visit took place 1 year later (Wave 2). Because we are interested in longitudinal changes in the social network, we excluded any respondent not followed in Wave 2, which, for the most part, included those who were 12 th graders in Wave 1. This yielded a final dataset of 624 students in Jefferson High and 1151 students in Sunshine High for analysis. The remaining schools in the saturation sample (i.e., those not included in this study) only included from 19 to 133 students with complete BMI information; they were not included because of their low sample sizes which would have precluded disentangling peer influence from social selection. To rule out unmeasured confounders at the school level, we stratified all analyses by school. Obesity-related Measures Body size was assessed by BMI (in kg per meters squared); both weight and height were self-reported, as Wave 1 lacked objective measures of these variables. Although self-reported weight was found to be under-reported in Wave 2 of Add Health, the amount was less than 1 pound for males and less than 2 pounds for females [34]. Over one year of followup, there was little transition between adiposity categories using CDC sex-and age-specific BMI cutpoints at the 85 th (overweight) and 95 th (obese) percentiles [35]. Because only 42 respondents (6.7%) from Jefferson High and 84 (7.4%) from Sunshine High moved up one or more weight categories, we chose to analyze one-unit changes in BMI as the behavioral outcome. As our modeling approach required behaviors to be ordered categories, we recoded BMI as an integer. Since BMI is a proxy for adiposity rather than a behavior per se, we selected two behaviors for investigation that have been implicated in childhood obesity: screen time and (not) playing active sports [36]. Screen time was assessed as the sum total of hours watching television and/or video recordings plus computer or video games in the past week. Implausible values (i.e., above 99 hours per week; n = 4 in Jefferson High and n = 2 in Sunshine) were re-coded as 99 hours. To aid estimation and interpretation, screen time was divided into 10-hour categories ranging from 0 (under 10 hours of screen time) to 9 (90 or more hours per week). 
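As an illustration of the recoding just described, a minimal sketch follows; the variable names and the rounding convention for BMI are our assumptions, not Add Health field definitions.

```python
# Illustrative recoding of the obesity-related measures described above.
def recode_bmi(weight_kg, height_m):
    return int(round(weight_kg / height_m ** 2))   # BMI as an ordered integer

def recode_screen_time(hours_per_week):
    hours = min(hours_per_week, 99)                # implausible values capped at 99
    return min(int(hours // 10), 9)                # 10-hour categories: 0 (<10 h) .. 9 (90+ h)

print(recode_bmi(70, 1.75), recode_screen_time(104), recode_screen_time(23))   # 23 9 2
```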
Playing active sports was measured with the question: ''During the past week, how many times did you play an active sport, such as baseball, softball, basketball, soccer, swimming, or football?'' The active sports score was coded as 0 (not at all), 1 (1 or 2 times), 2 (3 or 4 times), or 3 (5 or more times). Social Network Measures At both waves 1 and 2, all respondents were asked to name up to 10 friends, up to 5 male and 5 female. Based on these answers, an N by N adjacency matrix for each high school was created, where N is the number of students in the network. If student i named student j as a friend, then the i,j entry in the matrix was a one, and all other entries were zero [13]. Thus, each row of the matrix corresponds to a particular student i, called an ''ego,'' and each ego is surrounded by his or her local ''alters'': other actors in the network with their own attributes, network properties, and behaviors, indexed by the subscript j, corresponding to the columns in the adjacency matrix (these and other key terms used throughout the paper are defined in Table 1). At baseline (Wave 1), further questions assessed the strength of each named friend; however, this information was not used in the present analysis. Only respondents with network (friendship) data were included in the analysis, as only they may serve as both egos and alters. Stochastic Actor-based Model (SABM) of Peer Selection and Social Influence Snijders and colleagues have developed an stochastic actorbased model of the co-evolution of social networks and behaviors [30,31], implemented in R as the Simulation Investigation for Empirical Network Analysis (R-SIENA). The model uses rate functions to assign type of change (network or behavior) for each individual (actor). Two discrete choice functions are fitted recursively: one for network choices (i.e., friendship selection and dissolution), and one for changes in behavior (in our case, BMI, screen time, or playing active sports). The outcome is a log-linked objective function of the various actors and network attributes, which can be likened to the utility of a particular action for each actor. Actors are more likely to choose actions that yield larger objective function values. But since the model is stochastic, actors may choose lower values as well (albeit with lower probability). The model parameters are estimated using method-of-moments [37]. The initial network, behaviors, and attributes are used as the starting point of the model, which is then simulated for a given set of parameters, with the results compared to the observed data. The parameters are then adjusted and the model is re-simulated in an iterative cycle to minimize the difference between simulation and observation for all actors based on target statistics for those attributes. Standard errors are calculated using a score function method as described in the R-SIENA manual [38]. Specification of the SABM Model Although numerous network and behavior statistics can be included in the model [30,31]. We included only those statistics that theory or prior research suggested would contribute to a critical network or behavior change. Specifically, we defined X to be the friendship adjacency matrix described above. 
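The construction of the friendship adjacency matrix described above can be sketched as follows; the toy nominations are invented for illustration.

```python
import numpy as np

# Entry (i, j) of the N x N matrix is 1 if student i named student j as a friend.
def adjacency(nominations, n):
    X = np.zeros((n, n), dtype=int)
    for i, friends in nominations.items():
        for j in friends:
            X[i, j] = 1
    return X

X = adjacency({0: [1, 2], 1: [0], 2: [3]}, n=4)
mutual_pairs = int((X * X.T).sum() // 2)   # reciprocated (mutual) ties
print(X)
print("reciprocated ties:", mutual_pairs)
```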
For the network model, the complete objective function for network state x for actor i given covariates y and behavior z is defined as: zb a,sim S j x ij (sim a,ij a,average) zb c,sim S j x ij (sim c,ij c,average) zb z,ego x ij z i zb z,alt x ij z j zb net z,sim S j x ij (sim z,ij z,average) where deg indexes degree (number of ties between ego i and alters j), rec indexes reciprocity, ttip indexes transitive triplets, s is sex, g is grade, r is black race, e is Hispanic ethnicity, a is age, c is income, and z is the behavior variable in question (BMI, screen time per week, or playing active sports score). The variable x ij is a dummy variable coded 1 if ego i names alter j as a friend, and 0 otherwise; x ij x ji is coded 1 if i and j are mutual friends (i.e., it is a reciprocated tie). Likewise, x ih x ij x jh is coded 1 if ego i and alter j both name another person h as a friend, and 0 otherwise. The linear combination of all terms results in a value for f i net (x,y,z), the objective function for actor i. We may convert this value into a probability for a particular action, exponentiating it, and then dividing by the sum of all possible exponentiated actions. Because the network objective function is complex in its entirety, we describe each of its components below. Each component of the model carries a parameter estimate (b), interpreted as the weight the actor places on a particular characteristic of his or her network ties. We divide these into three categories: structural effects, homophily effects, and behavior effects on the network. 1. Outdegree is defined by the formula b deg S j x ij , where S j x ij is the total number of named friends, and b deg is the parameter (weight placed on adding, keeping, or dropping a new alter), regardless of that alter's Table 1. Key terms used in this paper. Term Definition Actor a respondent in one of the Add Health saturation schools SABM stochastic actor-based model Ego the actor whose network and behavior choices are being modeled Alter an actor who is named as a friend by the ego Degree the total number of alters an ego has named Reciprocated tie tie for which the alter also names the ego as a friend; synonymous with mutual tie Transitive triplets triplet whereby one of the ego's alters names a second of the ego's alters as a friend; ''friend of a friend'' who is named by the ego as a friend Identical attribute indicates that both the ego and the alter have the same attribute value; a measure of homophily for discrete attributes (sex, grade, and race-ethnicity) Similar attribute the standardized absolute difference between the ego's and the alter's attribute; a raw (uncentered) value of 1 indicates perfect similarity; used as a measure of homophily for continuous attributes and behaviors characteristics. Because social actors cannot sustain an unlimited number of friendship ties, b deg is always negative. 2. Reciprocity is the effect of the ego naming a friend if the alter has named the ego as a friend, and is defined by the formula b rec S j x ij x ji . Since x ij x ji only takes the value 1 if both ego i and alter j name each other as friends, S j x ij x ji is the total sum of mutual ties. 3. Transitive triplets is defined as the effect of the ego i naming alter j's friend h (friend of a friend). The formula is b ttip S j,h x ih x ij x jh , where x ih x ij x jh takes the value 1 if actor i names actor h, actor i names actor j, and actor i also names actor h. 
Thus, the sum over j and h is the total number of actors to whom i is tied who are also friends with each other. Homophily Effects for Actor Attributes 4. Same sex is the effect of the number of ties the ego has with alters of the same sex, defined as b s S j x ij I{s i = s j }, where I{s i = s j } takes the value 1 if both i and j are the same sex. 5-7. Same grade (b g S j x ij I{g i = g j }), same black race (b r S i x ij I{r i = r j }), and same Hispanic ethnicity (b e S i x ij I{e i = e j }) are defined analogously to same sex. Because of its racial and ethnic homogeneity, same race and same Hispanic ethnicity are omitted from the model for Jefferson High. 8-9. Age similarity and income similarity quantify how much weight actors place on choosing friends of similar age and income. They are calculated using the sum of similarity scores between the ego and his or her alters for age, b a,sim S j x ij (sim a,ij -sim a,average ), and for income, b c,sim S j x ij (sim c,ij -sim c,average ). We defined similarity for age (a) between ego i and alter j to be sim a,ij = 1-[|a i -a j |/(a range )], where a range is the difference between the largest and smallest value of age in the network. The measure for each dyad is centered by subtracting the mean similarity, sim a,avg , from the similarity measure for that dyad. Income similarity is calculated by substituting income for age in this formula. Since household income was missing for many respondents (17% in Jefferson, and 39% in Sunshine), we substituted the mean value for the school for these actors. 10. Behavior ego is interpreted as extra activity or sociability for egos with high values of the behavior (BMI, screen time, or active sports). It is calculated as x i+ z i, the outdegree weighted by the value of the behavior. 11. Behavior alter is interpreted as the attraction of the ego to alters with high values of the behavior. It is calculated as the sum of the behavioral value over all of the ego's alters, S j x ij z j . When the parameter estimate for the behavior alter effect is negative, this indicates a preference to establish or maintain friendships with alters with low values of the behavior. 12. Behavior similarity is the statistic for homophily on behavior. It is calculated as the centered sum of similarity scores between the actor and all of his or her alters, S j x ij (sim z,ij -sim z,avg ), using the same general formula employed for age and income similarity. Actors are assumed to prefer alters who are most similar to themselves with regard to behavior (BMI, screen time, and active sports). Behavior Objective Function For the behavior model, the complete objective function for network state x for actor i given covariates y and behavior z is defined as: There are three parameters for the behavioral model: linear and quadratic ''shape'' parameters and the average similarity effect. 13-14. linear shape effect (z i -z avg ) and quadratic shape (z i -z avg ) 2 effects are both centered by subtracting the mean value of the behavior (z avg ). The linear shape parameter (b lin ) may be likened to the ''tracking'' of a behavior over time. Subjects who are already higher than average on the behavior are likely to increase it, while subjects who are lower are less likely to do so. The quadratic shape effect allows for non-linearity in this association, whereby extreme values at one time point may lead to even more extreme values at a future time point. 
Snijders and colleagues argue that a positive and significant value for the quadratic shape parameter b quad indicates addictive behavior [30]. 15. Behavior average similarity is defined as S j x ij (sim z,ij -sim z,avg )/S j x ij . The focus of our analysis is on this effect, as it represents behavioral social influence or assimilation. If the parameter b beh makes a meaningful contribution to the behavior objective function, then it indicates that egos whose behavior differs from that of their peers assimilate to their peers by increasing or decreasing the behavior. With BMI, this may indicate a conscious decision to lose weight in order to fit in with lean friends, or an unconscious choice of unhealthy foods based on imitating peer behavior. Note that SIENA requires separate models for each investigated behavior. To rule out unmeasured confounders at the school level, and since schools define the boundaries of the social networks, we stratified all analyses by school. Because there are two schools (Jefferson and Sunshine) and three behaviors examined (BMI, screen time, and playing active sports), a total of 6 models were run. Descriptive Statistics The characteristics of students in the two schools are listed in Table 2. Respondents in each school were similar on age, percent male, and playing active sports. Average household income was $11,500 higher in Jefferson High than Sunshine. Both BMI (1.7 kg/m 2 ) and screen time (3.5 hours/week) were higher in Sunshine High than Jefferson and Jefferson High respondents reported more friendships (3.5 vs. 1.8 per student) resulting in a higher overall number of ties (2201 vs. 2025), despite fewer students. There were also a greater number of average reciprocated ties (mutual friendships) and transitive triplets (the friend of an alter's friend is also the ego's friend) in Jefferson compared to Sunshine. The similarity measures are centered by the overall average in the network, and thus are close to zero. Network Objective Function Parameters for the network objective function that were common to all models were robust to the inclusion of different behavioral attributes; that is, network structural parameters (degree, reciprocity, transitive triplets, and homophily on sex, grade, black race, Hispanic ethnicity, age, and income) are not confounded by behavioral attributes of actors, and did not change appreciably across models. Table 3 shows network structural characteristics:. all but one of the parameter estimates in this table make meaningful contributions to the objective function for adding or deleting a network tie: income similarity in Jefferson High which was close to zero with a wide confidence interval. The estimates may be likened to the weight that each individual places upon network and alter attributes in deciding to add or drop a friendship tie or to keep his or her personal network as it is. Outdegree is strongly negative, reflecting the disinclination to form ties with random alters. Reciprocity, however, is strongly positive, indicating that an ego is highly inclined to form or keep friendship ties with alters who have named the ego as a friend. The values for sex, grade, black race, Hispanic ethnicity, age, and income quantify the weight placed on homophily for these attributes. Parameters for the network objective function change across models when we examine different behaviors ( Table 4). These differences arise from each behavior having its own distribution, and actors giving the behaviors different weights. 
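A minimal sketch of the similarity statistics defined above is given below; the toy attribute values are invented, and the exact centering convention used by R-SIENA may differ in detail from this simplified version.

```python
import numpy as np

# sim_ij = 1 - |z_i - z_j| / range(z), centred by the network-average similarity;
# the per-ego average over named friends corresponds to the average similarity
# statistic (item 15). `z` holds each actor's attribute (e.g. BMI), `X` is the
# friendship adjacency matrix.
def pairwise_similarity(z):
    z = np.asarray(z, dtype=float)
    return 1.0 - np.abs(z[:, None] - z[None, :]) / (z.max() - z.min())

def average_similarity(X, z):
    sim = pairwise_similarity(z)
    off_diag = ~np.eye(len(z), dtype=bool)
    sim_avg = sim[off_diag].mean()                 # network-average similarity
    out = np.zeros(len(z))
    for i in range(len(z)):
        friends = np.flatnonzero(X[i])
        if friends.size:
            out[i] = (sim[i, friends] - sim_avg).mean()
    return out

X = np.array([[0, 1, 1], [1, 0, 0], [0, 0, 0]])    # toy friendship matrix
print(average_similarity(X, z=[22, 23, 30]))       # e.g. BMI values
```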
The attractiveness measures represent the weight egos place on alters' behavior; positive measures indicate that egos prefer alters who are above average on the behavior, while negative measures indicate a preference for those below the mean on those behaviors. Positive sociability (called ''activity'' by Snijders et al. [30]) measures indicate that egos are more likely to form ties if they have above-average values of the behavior. Finally, similarity measures indicate a preference for alters who have values that are similar to the ego's values on the behavioral attribute. In both Jefferson High and Sunshine High, we found evidence of homophily on BMI, with a parameter estimate of 0.54 and 95% confidence interval (0.14, 0.95) for Jefferson, and 1.30 (0.68, 1.91) for Sunshine. In both schools, high BMI students chose friends who were similarly high in BMI, while lean students chose lean friends. Ego's BMI also made a small contribution to sociability; all things being equal, students with high BMI named more friends than those who are low on BMI. Sensitivity analyses, including additional controls for screen time similarity and playing active sports similarity, did not meaningfully change these results. Jefferson High showed evidence of homophily on active sports. Respondents who reported playing active sports more often were Parameters are the weights actors place on various network configurations. They are the contributions to the objective function. The 95% confidence intervals quantify the precision of the estimates a score function method. 2 The basic rate parameter for friendship controls how often actors have the opportunity to change their network (add, keep, or drop a friend). Higher values indicate more network changes. 3 The outdegree parameter is the weight placed on having a friendship tie with any member of the social network, irrespective of the alter's characteristics. 4 The reciprocity parameter is the weight an actor places on reciprocating alters' friendship nominations. 5 The transitive triplets parameter is the weight an actor places on naming friends who are also named by the actor's friend. 6 Positive values of ''same'' and ''similarity'' measures are the effects of homophily on these attributes. doi:10.1371/journal.pone.0039795.t003 more likely to choose others who also played more often, perhaps because they chose friends who played the same sports. We note that when all forms of physical activity (active sports, exercising, and rollerblading or bike riding) were combined to create a summary score, neither school showed evidence of homophily (results not shown). Playing active sports, however, did not appear to be a basis for friendship selection in Sunshine High; this may have been due to the lower density of that network. Against our prediction, screen time did not appear to affect the actors' choice of friends in either school. To illustrate how the network objective function is calculated, consider a respondent from Jefferson High who is male, 17 years old, in grade 11, and with a BMI of 25 (we do not include income similarity or attractiveness of alter's BMI here because the parameter estimates make ignorable contributions to the objective function). The student has 2 friends, one of whom reciprocates, and one who does not; the alters are male, both in grade 11, and both with a BMI of 25. The alters are not friends with each other. 
The network objective function for this student's current network is a linear combination of parameters for outdegree (23.56), reciprocity (2.26), transitive triplets (0.48), identical sex (0.18), same grade (0.49), age similarity (0.91), sociability (0.14), and BMI similarity (0.54). Similarity scores are calculated as described above, yielding the following formula: Suppose this student is contemplating dropping his male friend who has not reciprocated, or adding a third male friend who is obese (i.e., has a BMI of 30), but who has named the ego as a friend, thus creating a reciprocated friendship tie. This third male student is also 17 and in grade 11. We may calculate the predicted probability of dropping, adding, or keeping the same ties for any individual in the network by exponentiating the value of the objective function for a particular scenario, and dividing it by the sum of the exponentiated objective values for all possible scenarios. If our network contained only the four individuals described here, the ego could make four possible choices: keep the same network; drop one of the 2 existing ties; or add the tie that is not present. The exponentiated values of these four choices, and the probability of each, would be: Note that the denominator for each probability (p) is 0.9584, the sum of the four exponentiated objective function values for each choice (0.0624+0.0807+0.7737+0.0415). In this artificial scenario, it is most likely the student will make the third choice, that is, to drop the existing unreciprocated tie. This choice has the highest probability because the parameters are obtained from a school containing 624 individuals, but we are applying them to a hypothetical network of only 4 individuals which is high in density (0.25, as there are 3 ties over 463 or 12 possible ties). In reality, the network is already low in density (density is 0.006, as only 2201 of the 6246623 possible ties are present). There are 624 network choices possible for a student at Jefferson High, or as many choices as there are actors in the network, and the value of the objective function for each choice would need to be calculated and compared to estimate the predicted probability of any particular choice. Behavior Objective Function Values for the behavior objective function parameters are listed in Table 5. We found evidence of peer influence (social modeling or assimilation) for BMI and playing active sports in both Jefferson and Sunshine High, and for screen time in Jefferson High. Evidence of Peer Influence on BMI The BMI average similarity score for Jefferson High was 14.10 (95% CI: 7.76, 20.44). This indicates a tendency for egos to try to match the average BMI of their friends; if their BMIs are higher than their friends, this will tend to pull their BMI down; if they are lower than their friends, it will pull their BMI up. While this parameter estimate seems high, it must be viewed in the context of the mean value (0.017), minimum (20.54), maximum (0.14), and interquartile range (20.002 to 0.078) of average similarity values. Thus, at the 25 th percentile value, the contribution of average similarity to the objective function is (14.10)(20.002) = 20.028; at the 75 percentile, it is (14.10)(0.078) = 1.10. The BMI average similarity value for Sunshine High was similar, at 10.57 (95% CI: 5.30, 15.85). 
Sensitivity analyses, including additional controls to the behavior objective function for sex, ethnicity, race, age, income, body weight image, trying to lose weight, and trying to gain weight, did not meaningfully change these parameter estimates (results not shown). The behavior objective function is simpler than the network function because there are only three choices that an actor can make: stay the same; move up one unit; or move down one unit. The larger the value is of the objective function, the greater the probability of the choice made, and it will depend both on the ego's BMI and the average similarity with his or her alters. Ego's current BMI influences future BMI, as indicated by the ''linear shape'' and ''quadratic shape'' parameters. As current BMI values increase, there is a greater tendency to increase BMI between time steps; that is, more emphasis is placed upon increasing BMI than decreasing it. Translating the behavior objective function into probabilities is done in an analogous fashion to the calculation for network changes. We exponentiate the value of the objective function for a particular BMI state, and then divide it by the sum of the exponentiated objective function values for all three scenarios (move down one unit, stay same, or move up one unit). To illustrate, consider a student in Jefferson High whose BMI is 23, close to the mean value (22.6). We can assume the student is male; while sex is not a part of the behavior objective function, it is a determinant of the student's friends. If the actor has no friends, then only the linear and quadratic shape will drive the objective function values of his 3 choices: The most probable scenario is that the actor will increase his BMI by one unit, but the other two scenarios are nearly as likely. Now consider a situation where this same ego has 2 friends, each with the identical BMI of 30 kg/m 2 . The centered average similarity value between this ego and his friends is thus: Were he to move up one unit in BMI, the centered similarity measure would become larger (0.047); were he to move down one unit, similarity would become smaller (20.107). These measures then figure into the objective function, and each ''move'' in BMI can be assigned a probability: It is more likely than not that this subject will increase his BMI. The converse, however, is not true: going down in BMI when alters are lower on BMI is much less likely than gaining body mass when the alters are higher. If the scenario were reversed, with the ego's BMI beginning at 30 and the alters' at 23, the probability of decreasing BMI is 0.351, while the probability of increasing BMI is 0.319. Table 6 shows the probability of increasing, decreasing, or remaining at the same BMI for various combinations of egos' BMI and average similarity with alters' BMI. The table shows that egos who have alters with higher BMI will be more likely to be pulled in the alters' direction, while egos with leaner alters do not necessarily have higher probabilities of moving down. Behavioral change parameters are adjusted for network structural parameters (Table 3 and 4). Linear and quadratic shape parameters are the effects of the ego's own behavior (linear) and behavior-squared (quadratic) on his or her future behavior. The ''average similarity'' parameters represent social influence of the alters' on the ego. 
doi:10.1371/journal.pone.0039795.t005 Predicted Behavior on Screen Time and Playing Active Sports Similar calculations can be made for peer influence in Jefferson High on screen time ( Table 7) and playing an active sport (Table 8). Results ( Table 7) A similar pattern is noted for playing active sports ( Table 8). Egos who played an active sport once or twice in the past week at Wave 1 had a 75% predicted probability of decreasing their playing sports if their average alter did not play any sports. On the other hand, egos who played sports 3-4 times a week at baseline had a 62% probability of increasing their playing sports if their average alter played 5 or more times. Discussion Our model's primary strength is that we explicitly model both the processes of friendship formation and social influence. Our results add to a growing body of evidence demonstrating clustering Table 6. Probability of ego's increasing (+1), decreasing (21), or remaining at the same body mass index (BMI) in the next time step, based on ego's and average alters' current BMI. of friends' obesity and related behaviors [7,10,25,29]. All of these previous models employ a variation of the generalized estimating equation (GEE), which accounts for the correlated structure of the data, but does not explicitly model social network dynamics. While showing that BMI and behaviors cluster is consistent with a causal story that friends influence one another (or that obesity spreads through social networks), GEE models offer little support of such a causal claim. The crux of the problem lies in the potential for confounding by shared environments and homophily [26], or the tendency of similar individuals to form friendships as in the adage ''birds of a feather flock together'' [39]. Controls for shared environments can be introduced using traditional methods, such as adjusting for neighborhood characteristics or including controls for fixed effects, as done by Cohen-Cole and Fletcher [24]. Homophily is more difficult to control for, since it may be based not only on the behavior in question (which is observed), but also on unobserved (latent) tendencies for friendship formation (e.g., a shared propensity for the behavior, which may itself be due to race, sex, or other characteristic). Unless the shared propensity toward both the behavior and the friendship is controlled for, we cannot progress beyond merely documenting a correlation of behavior between friends. We found that a number of well-known bases for homophily operate in the Add Health friendship network, including sex, age, grade, race-ethnicity, and income [39]. Our findings are also consistent with two works by de la Haye [9,40] and one by O'Malley [41] that findevidence for homophily on body size using SABM, exponential random graph and tie prediction models. While we found evidence that homophily matters for BMI and for playing active sports, we found no evidence for homophily on screen time. Because the model allows for homophily in friendship retention and in dropping ties, results should be robust to the ''unfriending problem'' described by Noel and Nyhan [42]. After accounting for these many sources of homophily (age, raceethnicity, income, sex, and grade), we found evidence of social influence for BMI, screen time, and playing active sports. These results contrast with de la Haye and colleagues' SABM analysis [40], which did not find any evidence of peer influence on BMI once homophily and other structural factors were taken into account. 
These differing results may be due to their study's smaller sample size (N = 156), the Australian setting, or a different specification of the influence parameter (alter's BMI, rather than similarity between ego's and alters' BMI, as in our model). Our model further extends prior work by specifically examining behaviors implicated in the epidemic of childhood obesity. The model is based on observations of respondents from two large high schools that are quite different, yet the results show substantively similar evidence of peer influence on BMI and playing active sports. Estimates of social influence in the two schools are not directly comparable, because these measures depend upon such factors as the average behavioral values and ranges for the school and the density of network ties. For example, effects for influence may have been smaller in Sunshine High due to the sparseness of its in-school social network, as reflected in the lower average outdegree (1.8, vs. 3.5 in Jefferson High). Differences in the built environment between the two schools may also have contributed to heterogeneity of effects [43]. Because we stratified the analyses to respondents within two schools, the school environment cannot be a confounder. Stratification further allowed us to demonstrate internal validity, as qualitatively similar results for peer influence on BMI and playing active sports were obtained in both schools. Our model also addresses a major limitation of regression-based approaches. As noted by Salizi and Thomas, peer influence effect can only be identified if the mechanism for friendship formation can be specified, measured, and included in the model [26]. Our model does provide such specification for friendship selection, based on reciprocity, transitivity, and homophily on several characteristics, including the behavior in question (BMI, screen time, and playing active sports). Our modeling framework also captures feedback between selection and influence processes. For example, large weight gain may be stigmatizing and lead to social isolation [44], in which case the beneficial effect of having (leaner) friends would be missed. Alternatively, obese adolescents might form and maintain friendships only with other obese adolescents; if social influence is present, then the two processes would be reinforcing. Regression models, which assume individual observations to be independent, cannot handle this type of complexity. There are several limitations to our study. First, we rely on selfreported BMI, screen time, and frequency of playing active sports. Self-reported BMI is known to suffer from cross-sectional misclassification bias based on sex, age, and race-ethnicity [45,46]. However, because sex and race-ethnicity are constant across waves and age only differs by one year, change in BMI might be underreported but should not otherwise be biased. Field and colleagues found that while obese males and females underreported weight by the largest margin, weight change showed relatively little bias (underreported by 1.7 pounds in males, and overreported by 0.3 pounds in females) [34]. Likewise, screen time may be underreported and playing sports over-reported, but these biases should be consistent across waves. A second is our use of observational data. There is no feasible mechanism for randomly assigning friendships, which would be the most satisfying means of removing homophily as a competing explanation, although some forms of dyadic relationship may be assigned. 
In a ''natural experiment'' of the random assignment of college freshmen roommates, researchers found that obese women negatively influence weight gain in their roommates, perhaps through eating behavior [47]. Nevertheless, roommates are not necessarily friends, and the results of this study cannot be directly compared to the current results. Another approach would involve the random assignment of obesity status to one node of a dyad. The only ''natural experiment'' we can identify, however, is a study by Woodard and colleagues of weight loss following a spouse's bariatric surgery [48]. Because of the observational nature of our data, we lacked some measures that may have confounded the findings of peer influence. In particular, we did not have a measure of Tanner stage. Physical maturation is an important contributor to individual BMI trajectories and physical activity [49], and it is plausible that more developed adolescents were both more likely to be friends and also more likely to increase BMI. A further limitation is that the SABM model is designed for discrete behaviors, such as smoking and alcohol consumption [30,31]. On the behavior side, the SABM requires that increases or decreases occur in single unit quanta, and is unable to handle continuous behavioral outcomes. In addition, SABM was designed for small networks (up to a few hundred actors). In small networks, each actor has the opportunity to form ties with all other actors [30], an assumption that is unlikely to hold in our analyses. Running analyses on such large networks was computationally intensive: each model took several hours on an 8-core machine to complete. Finally, SIENA can only model one behavior at a time, precluding simultaneous modeling of peer influence on BMI, screen time, and playing active sports. In future studies, we hope to address some of these limitations by extending the SABM framework to handle continuous behavior measures in large networks with greater computational efficiency. For the present time, however, the implementation in R-SIENA is the only means capable of teasing apart network dynamics and social influence. In conclusion, we found support for social influence on obesityrelated measures and behaviors that is independent of homophily or confounding by shared school environment. Nevertheless, homophily on BMI and playing sports cannot be ignored. We will use these model results to parameterize an agent-based model of peer influence and selection processes. In the absence of direct experimentation (such as the natural experiments described earlier), it remains unclear how social networks can be harnessed to promote health or prevent obesity. Regardless, evidence on the importance of social networks continues to accumulate. For intervention purposes, networks may provide an explanation for why ''high-risk'' approaches that focus only on obese individuals qua individuals are prone to fail [50]. Networks may also offer insight into what Sterman terms ''policy resistance, [which] arises when we do not understand the full range of feedbacks surrounding… our decisions'' ( [51] p.507). Our model shows that social influence tends to operate more in detrimental directions, especially for BMI; a focus on weight loss is therefore less likely to be effective than a primary prevention strategy against weight gain. Effective interventions will be necessary to overcome these barriers, requiring that social networks be considered rather than ignored.
A Novel Hierarchy of Integrable Lattices In the framework of the reduction technique for Poisson-Nijenhuis structures, we derive a new hierarchy of integrable lattices, whose continuum limit is the AKNS hierarchy. In contrast with other differential-difference versions of the AKNS system, our hierarchy is endowed with a canonical Poisson structure and, moreover, it admits a vector generalisation. We also solve the associated spectral problem and explicitly construct action-angle variables through the r-matrix approach. Introduction The search for discrete integrable systems has gained new impetus in recent years: the quantisation of integrable PDEs [1-4], the recent findings on integrable discrete-time systems [5-7], including the "extreme" case of cellular automata [8], and even some results in string theory and 2-D quantum gravity (related to the so-called "discrete string equations") are perhaps the main motivations for such a growing interest in that field. The results reported in this paper can be ascribed to the above line of research, although they belong in a sense to a more traditional approach, aiming at deriving integrable systems with discrete space and continuous time. In fact, we present and discuss an integrable differential-difference version of the so-called AKNS hierarchy [9], already mentioned in [10]. In comparison with other discretisations [11], [3], it has certain advantages and some drawbacks. The latter are essentially the non-existence of one-field reductions, unlike the model derived by Ablowitz and Ladik [11]: such reductions are in fact admissible only in the continuum limit. However, this seems the price to be paid (i) to preserve the first hamiltonian structure of the AKNS hierarchy, namely the canonical one, and (ii) to allow for a vector generalisation, both such features being exhibited by our model. We have to mention that one equation belonging to our hierarchy can be interpreted as a Bäcklund transformation for the continuous AKNS system, and, as such, it has been recently derived by Yamilov and Svinolupov [12]. The paper is organised as follows. In Section 2, the hierarchy under scrutiny is derived by using a nowadays standard geometrical reduction technique for Poisson-Nijenhuis structures. In Section 3, the underlying direct and inverse problem is solved: for the sake of simplicity, only the 2 × 2 matrix case is considered. In Section 4, the r-matrix structure of the system is revealed, and, through the r-matrix, the action-angle variables are explicitly given in terms of the spectral data. In Section 5, it is shown that the whole hierarchy goes into the AKNS one in a suitable, simple continuum limit. In Section 6 we make a few concluding remarks and mention some interesting open problems. Poisson-Nijenhuis structure In a recent paper [13], P. M. Santini and one of the authors considered the following abstract linear problem: In formula (1), λ ∈ C is a spectral parameter, Q, A, ψ take values in an associative algebra A with unit element I, endowed with a trace-form < ·, · >, E is an algebra automorphism; A is assumed to be a constant (i.e., a fixed point of E) element in the algebra. It has been shown in [13] that to the linear problem (1) one can naturally associate the following two compatible Poisson tensors: where the symbols L x (resp. R x ) denote left (resp.
right) multiplication by x, and N is the hereditary recursion operator, or Nijenhuis tensor [14], defined as: Furthermore, one has proved there that the family of vector fields: is an invariant family for N , namely the Lie-derivative of N along K B vanishes: We can then assert that to the linear problem (1) it is naturally associated the following hierarchy of (commuting) bi-hamiltonian systems: Out of the abstract hierarchy (7), one can construct bi-hamiltonian lattices through a convenient realization of the algebra A, chosen to be the algebra of matrix valued sequences, approaching an arbitrary constant value as |n| → ∞, and of the automorphism E, taken to be the "shift operator" on sequences: In particular, throughout this paper, we shall consider (N + 1) × (N + 1) real matrices. Accordingly, the "field variable" Q will be parametrized as follows: where p is a scalar, < q| is the 1 × N matrix (q 1 , ....., q N ) ("row vector"), |r > is the N × 1 matrix (r 1 , ....., r N ) ("column vector") andQ is an N × N matrix. The constant matrix A will be chosen as: where < 0| (|0 >) is the null row (column) vector, and0 is the null N × Nmatrix. The sequence {Q n } n∈Z of matrices of type (9) is assumed to fulfil the boundary condition: whereÎ denotes the N × N identity matrix. Let I ⊂ A be the subalgebra of A, consisting of matrix-valued sequences obeying homogeneous boundary conditions. Once equipped with the Lie-product given by the point-wise commutator: [X, Y ] n . = X n Y n − Y n X n I becomes a Lie-algebra and moreover an ideal of A, and can be identified with its dual through the bilinear form: Our configuration space M will be the affine hyperplane to I of matrix-valued sequences obeying (11); accordingly, its generic point will be again denoted by Q. Its tangent bundle T (M) will then be the set {(Q, K Q ) : Q ∈ M, K Q ∈ I} while the points of its cotangent bundle T * (M) will be parametrized as {(Q, γ Q ) : Q ∈ M, γ Q ∈ I * ≃ I}. In the following, for the sake of simplicity, elements of T (M) (resp. T * (M)) will be shortly denoted by K Q (resp. γ Q ). In the present concrete realisation of the abstract setting introduced in [13], the Poisson tensors θ 1 , θ 2 defined by (2,3), as linear maps form T (M) and T * (M), are in fact linear operators on I, and the same is true for the Nijenhuis tensor N , given by (4), which is a linear map form T (M) into itself. The main result of present Section is contained in the following: The Nijenhuis tensor N reduces by restriction on the submanifold Im θ 1 . As a consequence, the abstract bi-hamiltonian hierarchy (7) restricts to a bi-hamiltonian hierarchy of evolution equations on a one-dimensional lattice. The theorem (1) will be proved in a constructive way, which will then also yield the concrete form of the restricted Poisson and Nijenhuis tensors. Let us start by evaluating the image of θ 1 : Let us choose for K Q and γ Q the parametrisation induced by (9): One immediately gets, for K Q ∈ Imθ 1 , i.e. for K of the form: the necessary condition: Hence,Q must be a constant N × N matrix. In view of the boundary condition (11), we have:Q =Î (17) In turn, formula (17) implies: which entails: Summarizing, we can assert that: 1. under conditions (10), (11), Imθ 1 is the submanifold of I consisting of vector fields K Q such that: 2. the set S ⊂ M, such that T (S) = Imθ 1 , given by: is a characteristic leaf of θ 1 . We are now ready to show that N reduces by restriction on S, namely that T (S) = Imθ 1 is an invariant submanifold for N . 
To this aim, let us introduce the auxiliary vector field ξ Q through the formula: whence it follows, on T (M ): Parametrizing ξ Q as we get from (21) while the matrixξ stays undetermined. On the other hand, eq.(22) yields, for Q ∈ S: By imposing the reduction condition:K ′ Q = 0, one gets: Then, to establish that S is an invariant submanifold for N , we have just to show that: and this can be seen by a direct straightforward (although tedious) calculation. ✷ Hence, the theorem (1) is proved and moreover we can give the esplicit form of N | S , more precisely we have: The explicit form of θ 1 | S can be easily obtained by noting that the points of S (resp. T (S)) are completely determined once the 2N vector (< q|, < r|) t ( resp. (< K q |, < K r |) t ) are given. Hence points of T * (S) will be fully characterized by the value of the 2N vector (< β q |, < β r |) defined by the duality condition: which entails: By direct calculation, one can check that the relation: once restricted to S, implies: whence it follows that θ 1 | S is the canonical Poisson tensor or, in other words, that the fields variables |q >, |r > are endowed with canonical Poisson brackets: It is worthwile to notice that the restricted Poisson tensor θ 2 | S . = N | S θ 1 | S is on the other hand nonlocal and has a rather cumbersome form: its non-locality is intimately related with the non degeneracy of θ 1 | S . According with the general theory of bi-hamiltonian system, we have still to show that the restricted vector fields: are invariant vector fields for N | S : but this follows by the invariant nature of the Lie derivative, provided that K B belongs to T (S). Actually, the commutativity condition [B, A] = 0 implies for matrix B the form: so that, for any Q ∈ S, we have: So, the family of starting commuting symmetries can be written in terms of the 2N-vectors: However, we should notice that the resulting evolution equations: will be in general non-local. Local equations will be get wheneverĈ = cÎ. Let us now give some concrete examples of local evolution equations associated with the linear problem (1), corresponding to the choiceĈ =Î: or, in components: with hamiltonian density: with hamiltonian density: (46) with hamiltonian density: with hamiltonian density: We will now show that the above geometric reduction procedure is perfectly equivalent to the peharps more familiar technique, based on the so-called discrete zero-curvature condition: obtained by enforcing compatibility of linear problem (1) with the auxiliary linear problem: In our case: where A is given by (10) and Q belongs to manifold S (20), i.e.: To extract from (5) the hierarchy (40) we start by considering the "stationary equation" associated with (50), namely: Parametrizing W as: W (λ) being an arbitrary constant (i.e., field independent) matrix, and the following "eigenvalue equations" for < u|, |v >: with w,Ŵ given by (55),(56). Eqs. (57, 58) can be solved recursively assuming W , and thus < u|, |v >, to be given by the following Laurent series in λ: and requiringW (λ) in (55),(56), to be in fact λ-independent. One gets: On the other hand, if W is a solutions of (53), the same is true of course for λ k W , where k is any positive integer. Given a two-sided Laurent series on the unit circle: we shall denote, as usual, by f + (λ) (resp. f − (λ)) the part containing only non negative (resp. strictly negative) powers of λ. Therefore, eq. (53) can be rewritten as: But now, as U is linear in λ, the l.h.s. 
of (62) cannot contain any negative power of λ, while its r.h.s. cannot contain any strictly positive one. Hence both sides of (62) are λ-independent (order "zero" in λ), and we have: As far as its λ-dependence is concerned, (63) (and of course 64) is then compatible with U t . On the other hand it may be readily checked that it belongs indeed to the manifold T (S), defined in eqs. (19-20). Hence, we can assert that the hierarchy of evolution equations associated to (1), (51) is given by: They correspond to the following choice for the matrix V appearing in formulas (51), and clearly coincide with (40), by choosing −Ĉ = (trW )Î +W (see rqs. (39), (55),(56)). Direct and inverse problem In this Section, we outline the solution of the direct and inverse problem associated to (1); for simplicity, we restrict considerations to the 2 × 2 matrix case, when we have: For the linear problem: we can naturally define the transfer matrix: such that: We can then introduce the "Jost matrices": where: is a fundamental matrix solution of the asymptotic (or "undressed") problem: and ϕ n ,φ n , ψ n ,ψ n are 2-column vector solutions of (1), the "Jost solutions". Cleary, the asymptotic solution E n (λ) (75) is bounded on the unit circle |λ| = 1, which will then be the continuous spectrum of (1), (68). On the unit circle, the monodromy matrix is then defined as: In the following, we shall call "spectral parameters" the elements of monodromy matrix: As det U n = λ, we have: so that: det T (λ) = 1 (80) Formulas (79) implies that ϕ n ,φ n and ψ n ,φ n are two pairs of independent vector solutions of (1) on the unit circle (|λ| = 1), while formulas (78), (80) entail, on the unit circle: Direct problem The direct problem amounts to determine the monodromy matrix (78) once the fields {q n }, {r n } are given. To this aim, it is convenient to introduce the normalized vector sequences: such that: Somewhat loosely, in the followingχ n , ϕ n will be denoted as "Jost solution" as well. The spectral parameters are naturally defined as Wronskians of independent vector solutions; namely we have: where W (a, b) is the determinant of the matrix whose columns are the 2-vectors a and b. It easily seen that the vector sequencesχ n , ψ n , ϕ n ,φ n satisfy the following "discrete integral" equations: The analyticity properties of the Jost solutions are summarized by the following: then ψ n , ϕ n are analytic functions of λ in the domain |λ| > 1 and are continuously differentiable for |λ| ≥ 1; analogously,χ n ,φ n are analytic functions of λ for |λ| < 1 and continuously differentiable for |λ| ≤ 1. We outline the proof of theorem (2) for ϕ n . First of all, we equip C N with the norm: so that linear transformations in C N are naturally equipped with the norm: Then, the following inequality holds for (87): where So, writing the Neumann series solutions of (85): where: The inequality (90) implies, by iteration: Hence, if γ < ∞, ϕ n (λ) is analytic for |λ| > 1. On the other hand: U 1,n = max |q n | 1 + |r n | , |r n | ≤ |q n | + |r n | + 1/2 |q n | 2 + |r n | 2 so that the existence of γ is guaranteed whenever the sequences {q n }, {r n } belong to l 1 . A similar procedure leads to the following result for ∂ϕn ∂λ : ∂ϕ n ∂λ ≤ a + |n|b |λ| ≥ 1 (98) which holds, with suitable coefficients a and b, whenever: Hence, provided q n and r n vanish faster than n −2 as |n| → ∞, ϕ n (λ) is continuously differentiable with respect to λ for |λ| ≥ 1. 
✷ We can thus assert that, whenever {q n }, {r n } belong to l 1 , the diagonal entries of the monodromy matrix a(λ) andã(λ) (3.14) are analytic respectively outside and inside the unit circle. Morever, due to their asymptotic behaviour in λ: both a andã have at most a finite number of zeros, say N andÑ respectively, in their analyticity domains. These zeroes will be denoted as {λ j } N j=1 and {λ j }Ñ j=1 and will be assumed to be simple. If in addition the stronger condition (99) is satisfied, then the entries of the monodromy matrix (78) are Hölder continuous on the unit circle (see again (84)). Morever, the analyticity properties of a(λ) andã(λ) imply that the scalar Riemann problem on the unit circle (81) can be solved through the formulas: where σ is the index of the Riemann problem, i.e. the variation of Arg(1 + b(λ)b(λ)) after a cycle; condition (100) clearly implies σ = N −Ñ. In the following, we shall assume the index σ to be zero so that N =Ñ : this is certainly true in the reflectionless case (b =b = 0). Inverse problem The inverse problem amounts to reconstruct the sequences {q n }, {r n }, once the monodromy matrix is given. Finally, as usual, the sequences {q n }, {r n } are given in terms of the leading terms of the asymptotic behaviour of the vector solutions ψ n ,χ n . For instance, taking into account that: we have: To find the time-evolution of the spectral data corresponding to eq. (50), (51), we notice that, if the matrix V appearing in the ausiliary linear problem (51) is given by (66), the monodromy matrix undergoes the time evolution: where:V Restricting considerations to the 2 × 2 case, we can thus write: where c is an arbitrary scalar constant and σ 3 is the usual Pauli matrix. Consequently we have: of course, eqs. (115) imply that λ r ,λ r are constant in time for any equation of the hierarchy ("isospectral deformation"), while the normalization coefficient γ r ,γ r evolve according to equations: r-matrix and action-angle variables We have seen in sec. 2 (formulas (36),(37)) that our hierarchy of discrete evolution equations are hamiltonian with respect to the canonical Poisson tensor or, in other words, that the fields variables |q >, |r > are endowed with the canonical Poisson bracket. As consequence, we have the "ultra-local" Poisson bracket relation [15]: Restricting again considerations to 2 ×2 matrices U n , we have for 4 ×4 r-matrix r(λ, µ) the formula: i.e. the same expression as for the NLS case. From (118) one easily gets the analogous relation for the transfer matrix, and finally the Poisson-bracket relation for the monodromy matrix, that reads: where: that is: In formula (121), 1 λ−µ denotes the principal-value distribution, and we have used the distribution formula: lim Eq. (120) implies the following Poisson-brackets for spectral parameters: We can thus construct canonical spectral variables, given by: obeying the Poisson bracket realtions: (all other P.B. being identically zero). As usual, in terms of the spectral variables, the continuous and the discrete spectrum contributions are separated. Moreover, α(λ), {λ j }, {λ j } are constant along any flow of the hierarchy, while β(λ), {ν j }, {ν j } evolve linearly. The "action" variable α(λ) is defined only on the unit circle. However, it can be expressed as a sum of two functions having a single-valued analytic branch for |λ| > max j |λ j | and |λ| < min j |λ j | respectively, uniquely defined by the asymptotics (100),(102). Such branches will be denoted as ln a(λ) and lnã(λ) respectively. 
Then formulas (103),(104) imply for such branches the following power series expansions: where: In the following, we will show that the evolution equations (65) are generated by the hamiltonians J k , i.e.: Indeed, from the Poisson-bracket relation (118), we obtain: where V n is defined in terms of the Jost solutions: (135) It can be immediately seen that (V n+1 (µ)U(n, µ) − U(n, µ)V n (µ)) = 0 Then formula (134) entails: where V ± are of course the projections of V outside and inside the unit circle. Hence: are again analytic functions of λ for |λ| > max j |λ j | and |λ| < min j |λ j | respectively; they obey the asymptotic conditions: On the other hand,W (+) , defined by (138) and W , defined by (53)(59), obey the same stationary equation (53) and have the same analyticity properties with respect to λ. Moreover, their asymptotic behaviours differ essentially just for a constant multiple of the identity matrix, which plays no role in the recursion relation: hence formula (143) coincides with (65). We end this section by noting that formula (118) does hold even in the (N + 1) × (N + 1) matrix case. For this general spectral problem, the r-matrix is given by: where Λ is the recursion operator of the vector AKNS hierarchy.
Research on the Impact of Digital Finance on the Green Development of Chinese Cities This paper takes 286 prefecture-level Chinese cities from 2011 to 2019 as research objects. It empirically examines the influence of digital finance on green development by using the EBM model, two-way fixed effect model, and instrumental variable method. It is found that digital finance can significantly promote the green development of Chinese cities, and this conclusion remains valid after robustness tests such as the instrumental variable method dealing with endogeneity and two-sided winsorization. The mechanism analysis results suggest that, within the internal mechanism of digital finance, only the coverage breadth significantly promotes green development, while the effect of coverage depth and digitization degree is not significant. The external mechanism of industrial structure upgrading assumes the mediating role of digital finance in improving the level of green development. The heterogeneity analysis results show that digital finance has a stronger impact on green development in the central and western cities and small-scale cities. This paper contributes to the study of the relationship between digital finance and green development and puts forward relevant suggestions. Introduction China's current economic development has moved towards the path of high-quality development. It has become a critical issue to realize the transformation of the speed-oriented economy to the quality-oriented economy. China's rapid economic growth since the reform and opening-up has been realized in an extensive development mode with high pollution and high energy consumption, which has brought serious impacts on China's energy resources and ecological environment. Economic transformation has now become a critical path. The 2021 Sustainable Development Report released by the Institute for Sustainable Development Goals, Tsinghua University, points out that China's sustainable development index is 72.1, ranking 57th among 165 countries in the world. In September 2020, President Xi Jinping attended the United Nations General Assembly and proposed that China needs to reach peak carbon dioxide emissions by 2030 and strive to achieve carbon neutrality by 2060. This implies that an economic and social development mode emphasizing reducing carbon emissions and promoting green development has become the focus. Finance is the bloodline of the modern economy, and digital finance is a financial innovation that combines traditional finance with technologies such as big data and artificial intelligence. The development of digital finance has addressed problems such as information asymmetry, provided crucial financial support, supported the development of the real economy, such as small- and medium-sized enterprises, reduced costs for the development of high-tech and low-carbon clean enterprises, and made the information of enterprises more transparent, thus making the utilization of funds easier to monitor and helping prevent and eliminate risks. There have been many studies on the role of digital finance and on the influencing factors of green development, but there are few studies on the relationship between digital finance and green development. Some scholars find that digital finance can improve the green total factor productivity [1][2][3]. However, they mainly proceed from the perspectives of factor distortion, resource mismatch, innovation, and entrepreneurship.
Their measurement models all adopt the SBM model, which only considers the nonradial relationship between input and output. However, both radial and nonradial relationships exist in actual production, and the omission of this factor may lead to deviation in the measurement [4]. Therefore, this paper uses the panel data of 286 Chinese cities from 2011 to 2019, employs the EBM model, a mixed distance function, to measure the green total factor productivity, and matches it with the digital inclusive finance index. On this basis, it uses cartographic visualization, a two-way fixed effect model, the instrumental variable method, and a mediating effect model to examine the relationship and mechanism between digital finance and green development and further explores the heterogeneity. The possible contributions of this paper are as follows. Firstly, the existing studies seldom conduct research from the perspective of the internal and external mechanisms of digital finance. This paper studies the internal and external mechanisms and verifies their existence in different ways. Secondly, the existing studies on digital finance and the green total factor productivity mainly adopt the SBM model, which only considers the nonradial function. This paper adopts the EBM model, a mixed distance function, which can better reduce the error in measurement. Thirdly, this paper uses the number of urban telephones in 1984 as an instrumental variable to identify the causal relationship between digital finance and green development, which helps alleviate the endogeneity problems related to the research of digital finance and green development. This paper is arranged as follows. The second part presents the theoretical analysis and shows that digital finance has a positive effect on green development. The third part introduces the empirical model of this paper and measures the green total factor productivity. The fourth part verifies that digital finance can promote green development and further tests its mechanism and heterogeneity. The fifth part further discusses the empirical results of this paper and proposes policy suggestions. Theoretical Basis and Research Hypotheses Green development is a way of economic growth and social development aiming at efficiency, harmony, and sustainability. As the theme of progress and development in the new era, it not only inherits the goal of sustainable development, but also innovates the theory of sustainable development in China. Green development possesses three characteristics, namely, systematic coordination, global sharing, and social practice. Jiang Nanping and others [5] argue that the core of green development is the combination and efficient utilization of resources and environmental energy, the common development of economy and society, and the harmonious coexistence between man and nature. The joint research group of the World Bank and the Development Research Center of the State Council believes that economic growth should be moderate, rather than excessively relying on resources, carbon emissions, and environmental damage. Green development means creating new green products, technologies, and investment. Hu Angang and others [6] think that the main characteristics of green development are moderate consumption, low energy consumption, low emission, and the continuous expansion of ecological resources, coordinating the three systems of economy, nature, and society. Financial development can significantly affect green development, which has been supported by many studies.
The clustering of financial factors has a significant spatial spillover effect on green development, and this effect declines significantly with distance. Financial clustering also influences green development through capital support, resource allocation, and innovation [7]. Huang Jianhuan and others [8] think that the enterprise supervision effect plays a substantial role in the influence mechanism of financial development on green development, while the green financial effect is not strong. In terms of industry, Guo Wei and Si Menghui [9] conclude that financial clustering has an inverted "U"-shaped relationship with green total factor productivity in the manufacturing industry, and that it mainly generates this influence through improving technological capabilities. Some researchers believe that financial development can reduce the green total factor productivity. For example, Ge Pengfei and others [10] find that financial development can reduce the green total factor productivity by using the cross-border panel data of the "Belt and Road Initiative," while fundamental innovation and application innovation can mitigate the negative effect. With the development of Alipay and other platforms, the development speed of digital finance in China is extremely rapid, which has had a tremendous impact on people's lives. In terms of the studies on the impact of digital finance, many researchers have studied the macro and micro effects of digital finance. Yi Hang and Zhou Li [11] find that digital finance can significantly improve residents' consumption and plays a more prominent role for "vulnerable" groups in rural areas and central and western cities, thus proving the inclusive characteristics of digital finance. Qian Haizhang and others [12] think that digital finance has a significant impact on China's economic growth, and its mechanism is to drive the development of innovation and entrepreneurship. Song Min and others [13] put forward that financial technology can empower enterprises, reduce information asymmetry, improve the relationship between banks and enterprises, and ease financing constraints and other issues, thus improving the total factor productivity of enterprises. Tang Wenjin and others [14] state that digital finance has a nonlinear influence on industrial structure upgrading, using the panel data of prefecture-level Chinese cities. In terms of the literature on the relationship between digital finance and environmental governance, Fang Lin and Yang Siying [15] think that financial technology can reduce urban pollution, presenting a significant emission reduction effect. Zheng Wanteng and others [16] argue that digital finance can significantly control environmental pollution, and its primary mechanism is to promote social development, industrial upgrading, and green technology innovation. Based on the previous literature, this paper puts forward the following hypotheses: Research Design In this model, subscripts i and t index cities and years, respectively. lngtfp represents the explained variable, the green total factor productivity. lndfi represents the core explanatory variable, digital finance. "controls" represents a series of control variables. φ i represents the city fixed effect. π t represents the year fixed effect. α 0 represents the intercept term of the equation. ε it represents the random disturbance term that varies with individuals and time.
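Putting these definitions together, the baseline specification can be written out as follows; this is a reconstruction from the variable definitions above (the precise list of controls appears in the next subsection, and the coefficient vector γ on the controls is notation introduced here):

lngtfp_it = α_0 + α_1 lndfi_it + γ′ controls_it + φ_i + π_t + ε_it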
The empirical model adopted in this paper is a two-way fixed effect model, which can alleviate endogeneity problems caused by factors such as omitted variables. Data Sources and Variable Definitions. This paper uses the balanced panel data of 2,574 observations in 286 cities from 2011 to 2019 for empirical research. The primary data sources are the CSMAR Database, the China Urban Statistical Yearbook, and the Institute of Digital Finance, Peking University. All the variables related to prices are deflated taking the year 2010 as the base year. Explained variable (lngtfp): the existing methods to measure green development are mainly divided into the index system method and the model efficiency method. The index system method mainly uses a variety of indexes to measure development performance. Chen et al. [17] employ 32 indexes to systematically evaluate the interprovincial industrial green development level in China from four aspects: industrial green output, innovation, efficiency, and policy. The model efficiency method mainly utilizes data envelopment analysis. Feng et al. [18] measure the development status through the green development performance index from the perspective of regions and evaluate the green development status of more than 40 regions in the world based on expert scoring and the data envelopment method. There are also studies using methods such as the stochastic frontier method and the exponential method [19,20]. In this paper, DEA software is used to measure the green total factor productivity of Chinese cities through the EBM model considering undesirable output and the Malmquist-Luenberger productivity index method. The year 2010 is defined as the base period, and then the green total factor productivity is measured. Finally, its logarithmic value is taken as the explained variable in this paper. The indexes for calculating green total factor productivity are as follows. 1. Input indexes: in this paper, input elements include labor, capital, and energy. The number of employees in state-owned units and private units in various prefecture-level cities is taken as the labor input index. The sum of household electricity consumption and industrial electricity consumption in the whole city is used to measure the energy input. The capital input is measured by the perpetual inventory method, in which the depreciation rate is set at 10.96%. 2. Output indexes: output indexes are divided into desired output and undesirable output. The desired output is constant-price GDP and the green coverage of built-up areas, while the undesirable output is industrial smoke, industrial sulfur dioxide, and industrial wastewater. To avoid the loss of efficiency caused by too many variables, this paper uses the entropy method to synthesize the three indexes of undesirable output into one index. Core explanatory variable (lndfi): referring to the research of Guo Feng et al., this paper uses the digital inclusive finance index published by the Institute of Digital Finance, Peking University, to measure the development of digital finance in Chinese cities. Its logarithmic value is taken as the core explanatory variable to reduce the influence of heteroscedasticity [21]. Control variables: referring to the existing literature, this paper introduces a series of economic and social control variables. Economic development level (lnpgdp) is represented by the logarithmic value of per capita GDP. Human capital level (edu) is represented by the proportion of college students in the urban population.
Urban scale (lnurban) is represented by the logarithmic value of population density. Foreign investment level (fdi) is represented by the ratio of actual foreign direct investment, adjusted by the exchange rate, to GDP. Financial development level (finance) is represented by the proportion of the year-end loan balance of financial institutions to GDP. Descriptive statistics of the related variables are shown in Table 1. The mean value of the green total factor productivity (gtfp) is 1.05, indicating that the green total factor productivity of Chinese cities is on the rise from 2011 to 2019. The standard deviation of human capital (edu) across cities is large, suggesting that the human capital level of Chinese cities differs considerably. In addition, the characteristics of each variable in this paper are similar to those in previous studies. To observe the evolution of the green total factor productivity and digital finance more intuitively, this paper visualizes it in Figure 1 with Stata 15.0. In terms of the green total factor productivity, it can be seen from the comparison between 2011 and 2019 that the maximum and minimum values increase substantially, especially in the northeast, central, and western cities. The deeper color indicates that its absolute value increases, and its ranking also rises substantially. For digital finance, the minimum value increases from 17.02 to 199.54, and the maximum value increases from 86.51 to 321.65, indicating that the overall development of digital finance is also improving. In addition, the color of digital finance in the central region is obviously deepened, indicating that digital finance in the central region is developing at a fast speed. Empirical Results and Economic Explanations This part tests the previous hypotheses. Before the empirical analysis, it is necessary to conduct a multicollinearity test on the model. The results show that the VIF values are all less than 2.08, far less than 10, indicating that there is no multicollinearity in the empirical model of this paper. In addition, it also passes the F test, LM test, and Hausman test, confirming that it is reliable to choose the fixed effect model, as demonstrated in Table 2. Baseline Regression. Firstly, this paper examines the relationship between digital finance and urban green development. The corresponding results are summarized in Table 3. In this paper, the two-way fixed effect model is taken as the baseline model, and stepwise regression is carried out by adding control variables step by step to verify the robustness of the results. In Columns (1) to (6), the coefficients of digital finance (lndfi) are all positive and significant at the level of 1%, which indicates that digital finance has a green growth effect and significantly promotes the green total factor productivity. Among them, the coefficient of digital finance in Column (6) is 0.0377, indicating that the logarithm of green total factor productivity increases by 3.77% for every one-unit increase in the logarithm of the digital finance index. Digital finance relies on big data, artificial intelligence, and other means to improve the efficiency of information matching, lower service costs, and optimize the distribution of production factors. Moreover, the transparency of its information makes it easier to distinguish between green industries and green companies, laying a solid foundation for the improvement of green development. Then, hypothesis 1 is preliminarily verified.
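As an illustration of how such a two-way fixed-effect baseline could be estimated, a minimal sketch is given below. It is not the authors' code: the file name is hypothetical, the variable names follow the paper's notation, and clustering standard errors by city is one reasonable choice rather than necessarily the choice made in the paper.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: one row per city-year, with the paper's variables as columns
df = pd.read_csv("city_panel.csv").set_index(["city", "year"])

exog = df[["lndfi", "lnpgdp", "edu", "lnurban", "fdi", "finance"]]
model = PanelOLS(df["lngtfp"], exog, entity_effects=True, time_effects=True)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```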
For the control variables, the coefficient of human capital (edu) is 0.0065, which is significant at the level of 1%, indicating that the improvement of urban human capital can significantly improve the urban green total factor productivity. The coefficient of urban scale (lnurban) is 0.0488, which is significant at the level of 1%, indicating that an increase in urban population density can improve the level of green total factor productivity. Finally, the coefficient representing the level of traditional finance (finance) is 0.0061, which is significant at the level of 1%. It suggests that digital finance does not simply substitute for traditional finance, and traditional finance can still have an impact on green development. Instrumental Variable Method. Even if the two-way fixed effect model is adopted, endogeneity problems may still exist in this study. To address this concern, this paper refers to the practice of Huang Qunhui and others [22] and uses the number of telephones per 10,000 households in various regions in 1984 as an instrumental variable. This is because such historical data satisfy the exogeneity requirement, while the historical development of communication technology is also related to the later development of digital finance. This variable is cross-sectional and cannot directly match the panel data, so the mean value of digital finance in each region, excluding the region itself, in the previous year is taken as a dynamically changing trend term. After multiplying it by the number of telephones owned per 10,000 households in 1984, the instrumental variable of this paper is obtained. Column (1) of Table 4 reports the regression results of the instrumental variable. The coefficient of digital finance (lndfi) is 0.2361, which shows that the impact of digital finance on green total factor productivity is still significantly positive at the level of 1% after considering endogeneity problems, which is basically consistent with the previous results of the baseline model. They all pass the K-P rk LM test and K-P rk Wald F test, showing that the selection of the instrumental variable is reasonable and valid. Control Macro Factors. The fixed effect model adopted in this paper is the year-city two-way fixed effect. Because each region may have different characteristics that change over time, the two-way fixed effect model may not be rigorous enough. Therefore, this paper introduces the interaction term of year and province into the model and considers the characteristics of the region changing with time to further test the conclusions. The corresponding results are presented in Column (2) of Table 4. The coefficient of digital finance is 0.0358, which is significant at the level of 1%, indicating that the previous conclusion is robust. (Note: *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively; robust standard errors are in brackets; the same applies below.) Eliminate Interference. ① Winsorize the extreme values: after two-sided winsorization, the coefficient of digital finance is 0.0435, slightly higher than the result of the baseline regression, and still significant at the level of 1%. Meanwhile, the coefficient and significance of the other variables do not change significantly. Therefore, it can be considered that the previous conclusion is robust. ② Exclude the abnormal year: the P2P platforms went bankrupt in 2018, which had a great impact on the development of digital finance. Therefore, this paper excludes that year and runs the regression again. The corresponding results are shown in Column (4).
Among them, the coefficient of digital finance is 0.0366, which is significant at the level of 1%, basically consistent with the previous result. The above results show that the previous conclusions are robust and reliable. Internal and External Mechanisms of Digital Finance Influencing Green Development. Digital finance must have its own mechanisms when it plays a role in green development, but what are its internal and external mechanisms? How do these mechanisms work? To answer these questions, this paper divides the mechanisms into an internal mechanism and an external mechanism for research. Internal Mechanism. The Institute of Digital Finance, Peking University, measures digital inclusive finance from the coverage breadth, depth, and digitization degree and then synthesizes these three indexes into the digital inclusive finance index, which facilitates the research on its internal mechanism. Table 5 shows the regression results for the internal mechanism of digital finance. The results in Columns (1) and (2) demonstrate that the coverage breadth of digital finance has a positive influence on the green total factor productivity, and it is significant at the level of 1%, indicating that the "inclusive feature" of digital finance plays a significant role in the growth of green total factor productivity. Columns (3) to (6) present the results for the depth and digitization degree of digital finance. Their estimated coefficients are also positive, but they do not pass the statistical significance test, indicating that the depth and digitization degree of digital finance do not play a significant role in green growth. The above results suggest that the expansion of its coverage breadth promotes green development, but its depth and digitalization degree need to be further explored to better contribute to green development. External Mechanism. The secondary industry mainly refers to the industrial sector. In the process of China's rapid economic development, some industries, especially heavy industries, emit a large amount of pollutants such as carbon dioxide and sulfur dioxide, which adversely affects the green and low-carbon development mode. According to the research of Xu Xianchun and others [23], upgrading the industrial structure promotes green development, and the promotion effect of digital finance on industrial structure upgrading has been effectively verified. To test the role of industrial structure upgrading, this paper refers to the research of Gan Chunhui and others, adopts the ratio of the tertiary industry to the secondary industry to express industrial structure upgrading, and uses the mediating effect model to conduct the study [24]. Among them, subscripts i and t index cities and years, respectively. "indurs" indicates the mediating variable in this paper, industrial structure upgrading. The definitions of the other variables are consistent with the previous ones. For the mediating effect model, firstly, if the coefficient α 1 is significant and positive, it will be regarded as passing the test. In the second step, if the coefficient β 1 is significant and positive, it shows that digital finance can significantly affect industrial structure upgrading. In the third step, if the coefficient c 2 is significant and positive, it means that the mediating effect test is passed. In addition, if only one of β 1 and c 2 passes the significance test, the bootstrap method is needed to perform the Sobel test. Table 6 reports the results of the external mechanism test.
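For reference, the three-step mediation system described above can be written out as follows; this is a sketch using the paper's coefficient names (the intercepts β_0 and c_0, the coefficient c_1, and the shorthand "controls" are notation introduced here):

lngtfp_it = α_0 + α_1 lndfi_it + controls + φ_i + π_t + ε_it
indurs_it = β_0 + β_1 lndfi_it + controls + φ_i + π_t + ε_it
lngtfp_it = c_0 + c_1 lndfi_it + c_2 indurs_it + controls + φ_i + π_t + ε_it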
The coefficient α 1 has already been shown to be significant and positive, so those results will not be elaborated here. In Column (2), the coefficient of digital finance (lndfi) is 0.1693, which is significant at the level of 1%, indicating that digital finance can significantly promote urban industrial structure upgrading. Next, the coefficient c 2 is further tested and found to be 0.0123, which is significant at the level of 1%, indicating that industrial structure upgrading can significantly promote urban green development. The above results demonstrate that industrial structure upgrading plays a mediating role in the influence of digital finance on urban green development. The coefficients of digital finance in the third column are significant and positive. This suggests that there are other mediating channels, and industrial structure upgrading bears part of the mediating role. The rapid development of digital finance lowers the threshold for obtaining funds, and its information matching mechanism enables more small- and medium-sized enterprises to obtain funds, thus providing essential support for the upgrading and development of the tertiary industry, promoting industrial structure upgrading and accelerating the process of urban green development. Thus, hypothesis 2 is verified. Heterogeneity of Region. Due to the huge differences in the economic and social development levels of different regions of China, there are great differences in resource endowments and development stages between the eastern region and the other regions. Meanwhile, there are also great differences in their green development levels. In this context, this paper divides cities into eastern, central, and western cities to further explore the heterogeneity of the impact of digital finance on green development. The corresponding results are displayed in Column (1) and Column (2) of Table 7. It can be observed that the coefficient of digital finance in the eastern region is 0.0306, which is positive but not significant. The coefficient of digital finance in the central and western cities is 0.0390, which is positive and significant. This is consistent with the conclusion of the previous internal mechanism test that digital finance mainly plays a role in increasing the green total factor productivity through its inclusive feature. The reason why it does not have a significant role in the eastern region may be that traditional finance there is developed enough to meet the supply of factors that promote green growth. Moreover, in the first column, the coefficient of traditional finance is positive, which is also supported by the fact that it is significant at the level of 1%. Heterogeneity of City Size. According to the median of the population density of cities, this paper divides cities into large-scale cities and small-scale cities and further explores the heterogeneity. The corresponding results are shown in Columns (3) and (4) of Table 7. It can be seen that digital finance plays a significant role in promoting green development in both large-scale cities and small-scale cities. The difference lies in that the significance of digital finance in large-scale cities is only at the 5% level, while that in small-scale cities is at the 1% level. The coefficient of digital finance in small-scale cities is 0.0418, greater than the 0.0349 found for large-scale cities.
Therefore, for small-scale cities, the significance and coefficient of digital finance show that it can play a stronger role, which is also consistent with the analysis above. Conclusion This paper attempts to explore the current situation of green development in Chinese cities from the perspective of green development and digital finance and to further investigate how digital finance affects green development. In this paper, the panel data of 286 Chinese cities from 2011 to 2019 are employed. Moreover, the two-way fixed effect model, the instrumental variable method, and the mediating effect model are adopted for the empirical test. The main conclusions are as follows. First, digital finance can significantly improve the green development of Chinese cities. After using the number of telephones in 1984 as an instrumental variable and employing robustness test methods such as winsorization, the conclusion is still robust and reliable. Second, digital finance promotes the green development level of Chinese cities through coverage breadth (internal mechanism) and industrial structure upgrading (external mechanism). Third, digital finance has a stronger effect on driving green development in the central and western cities, as well as in small-scale cities. Currently, there is little literature on the relationship between digital finance and urban green development. This paper provides valuable empirical evidence for studying the relationship between them. Furthermore, this paper puts forward the following suggestions. First, we should vigorously explore the market depth of digital finance and develop its digitalization degree. The breadth of digital finance plays a significant role in the green total factor productivity. However, the roles of depth and digitalization degree have not been highlighted. Hence, it is necessary to explore the role of digital finance further and reveal the green dividend brought by its development. Second, we should put forward policies for digital finance empowering the real economy according to local conditions. The central and western cities and small-scale cities are the key beneficiaries of digital finance. Based on local resource endowments and production conditions, digital finance should play the role of empowering the real economy, speeding up industrial structure upgrading, easing the financing constraints of enterprise development, reducing resource mismatch, and contributing to improving green development. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
Olfaction, Vision, and Semantics for Mobile Robots. Results of the IRO Project Olfaction is a valuable source of information about the environment that has not been sufficiently exploited in mobile robotics yet. Certainly, odor information can contribute to other sensing modalities, e.g., vision, to accomplish high-level robot activities, such as task planning or execution in human environments. This paper organizes and puts together the developments and experiences on combining olfaction and vision into robotics applications, as the result of our five-years long project IRO: Improvement of the sensory and autonomous capability of Robots through Olfaction. Particularly, it investigates mechanisms to exploit odor information (usually coming in the form of the type of volatile and its concentration) in problems such as object recognition and scene–activity understanding. A distinctive aspect of this research is the special attention paid to the role of semantics within the robot perception and decision-making processes. The obtained results have improved the robot capabilities in terms of efficiency, autonomy, and usefulness, as reported in our publications. Introduction The sense of smell is not the most vital one for humans, but we certainly use it every day. When we face a cup with a dark-colored liquid, we can assure that it is a cup of coffee not only from what we observe, but also from what we smell. When we detect an alarming odor that might be associated to gas/butane, we do not look for the possible escape in the living room but we firstly go to the kitchen, where we do not inspect randomly, but we turn our attention to those devices that use gas (e.g., hob, oven, etc.). As in the last example, the smell sense usually triggers alerts: a possible fire, a gas leak, food in poor condition, etc., but it is also associated to emotionally rooted processes [1]: memories, attraction or repulsion, etc. Both facets are interesting in robotics, although the latter, especially relevant in the long term for the so-called social robots [2,3], is beyond the scope of our current research. The IRO project focuses on the usefulness of a mobile robot able to detect and measure gases in the environment in order to identify the activities carried out in its surroundings, e.g., smoking, cooking, mopping the floor, etc. Having identified the situation, the robot should be able to act consistently, for example, locating and scolding the smoker, avoiding to pass by freshly mopped areas or, perhaps, interacting in a social way to help the person who is cooking. Some related works in this field [4,5] present mobile robots endowed with olfactory capabilities and applications to detect odor sources. The work done within the IRO project combines olfaction with vision and semantic knowledge to improve the robot abilities, which differs from such related works. To provide a mobile robot with olfaction capabilities, we relied on electronic noses (e-noses) [6], i.e., electronic devices Project Overview The general objective of the IRO project is to investigate mechanisms for integrating olfactory data into the robot sensing system, as well as the development of algorithms for decision making and task generation that exploit the combination of the different sensor modalities. 
The key idea behind our research here is that the perception of gases, including both their classification and the measurement of their intensity or concentration, can improve the intelligent behavior of the mobile robot, upgrading its performance in terms of efficiency, autonomy and usefulness. Within this global target we can distinguish three partial objectives: • Design and fabrication of an artificial nose (e-nose) adapted to the requirements of a mobile robot. Most of the e-noses used in mobile robotics are designed for measuring only the chemical concentration, aiming at tasks such as the creation of concentration maps and/or the search of the emission sources. In the context of the present project, it is necessary that the electronic nose is designed to also provide information on the type of gas, that is, be as effective as possible in the classification of the detected chemical volatile. The objective is, therefore, to combine both facets which requires integrating different sensor technologies into a single device. • Gas classification and object recognition for robotics applications. The robot, equipped with a vision system (e.g., one or multiple RGB or RGB-D cameras) and an electronic nose, could successfully improve the vision-based recognition of simple objects, exploiting the odor information gathered in the surroundings, as well as enhancing the gas classification when considering the semantic information and the probabilistic categorization of the detected object. • Exploiting high-level olfactory and visual semantic information in the planning and execution of tasks. Semantics provide additional human-like information to the perceived elements. For example, a high concentration of gases related to rotten food suggest that somebody forgot about it. Semantic information can be exploited to automatically infer new robot tasks in order to maintain a set of pre-stablished human-like norms, in this case, rotten food should be taken out of the house [12]. Among the multiple tasks that can benefit from such inference process, we focus on the challenging task of source localization with a mobile robot in indoor environments, aiming at minimizing the necessary time to locate the object emanating the gases in the environment. The following sections describe with more detail the work done to reach these partial objectives. Section 3 describes the hardware involved in the project, both the electronic noses and the employed mobile robots. Then, Section 4 summarizes the classification algorithms considered to recognize different gases, analyzing the impact of the robot movements in the gas recognition. Finally, Sections 5 and 6 present our insights on combining olfaction, vision, and semantics abilities in mobile robotics. Hardware Description This section describes the hardware components employed in the set of experiments performed during the IRO project, with a particular emphasis in the e-noses and the mobile platforms used to carry them. Electronic Noses E-noses are devices designed to detect, measure and classify volatile chemical substances by means of an array of gas sensors. Commonly, the gas sensors employed react to a wide range of different gases (non-selective), but provide no specific information about the chemical identity. Therefore, the output of the sensor array is usually further processed by some sort of machine learning algorithm to classify [10,13] or quantify [14,15] the samples. 
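As an illustration of this classify-the-array-output step, the sketch below trains a generic classifier on hypothetical steady-state responses of an eight-sensor array. It is not the pipeline used in the project, and the data are synthetic placeholders standing in for real e-nose recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row is one exposure, columns are steady-state
# responses of an 8-sensor array; labels name the volatile presented.
X = np.random.rand(300, 8)
y = np.random.choice(["ethanol", "acetone", "ammonia"], size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```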
However, it must be noticed that in the last decade multiple advances have been made towards developing selective gas sensors [16,17], which could reduce the complexity of e-noses in a close future by reducing the number of sensors to host and the need of a post-processing stage to classify the gases. As a result, e-noses offer a relatively cheap and fast tool to assess the presence of gases, but with a substantially greater error and uncertainty margin than precise analytic methods, such as gas-chromatography or mass-spectrometry [18]. Common gas sensor technologies employed to build e-noses include Metal OXide (MOX), Amperometric ElectroChemical (AEC), Quartz Crystal Microbalance (QCM), Conducting Polymers (CP), and Surface Acoustic Wave (SAW). Each of these exhibits advantages and disadvantages in terms of selectivity, sensitivity, response speed, influence by environmental conditions and drift over time, among others [6,19]. However, no single technology excels in all categories. Thus, limiting the design of an e-nose to a single sensor technology will restrict its performance and, quite often, prevent it from reaching the demanded specifications [9]. This motivates the combination of different gas sensor technologies into a single e-nose, which would result in a sensor array with better dynamic capabilities and a more informative output than any single sensor technology. Since it is unfeasible to install all possible gas sensors and technologies simultaneously on a single device, it also becomes appealing to design an e-nose in such a way that its sensor array can be reconfigured depending on the applications, keeping it cost-efficient and compact. To attain the objectives identified in this project, our first step has been the design and fabrication of e-nose prototypes for gas classification and concentration estimation, as well as their posterior integration into a mobile robot. In the earliest stages of the project, we employed the so-called Multi-Chamber Electronic (MCE) nose, developed in one of our previous works [20]. The MCE nose is a device that comprises several identical sets of MOX sensors accommodated in separate chambers so that it can alternate between sensing and recovery states, providing, as a whole, a device capable of sensing changes in chemical concentrations faster than conventional e-noses. This overcomes the main drawback of MOX sensors in terms of recovery time after being exposed to gases, which highly restricts its usage in applications where the gas concentrations may change rapidly, as in mobile robotic olfaction. In subsequent stages, we exploited our experience with the MCE nose and proposed, as a central contribution for the IRO project, a novel e-nose architecture [8] that combines self-contained and intelligent sensor boards (i.e., modules) with a decentralized design offering a viable solution to the problem of integrating heterogeneous gas sensors in a modular fashion. This allows us to create different and specific gas-sensing devices from inter-connectable building blocks, which not only brings versatility and reusability to the design of e-noses but also reduces development costs and ensures long-term serviceability, as new sensors can be added as needed. 
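On the software side, the modular idea can be pictured with a toy abstraction in which an e-nose is simply a collection of self-describing sensor boards whose readings are aggregated into a single sample. The class and field names below are invented for illustration and do not correspond to the project's firmware or drivers.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SensorModule:
    name: str                                  # e.g., "mox_board_a"
    gases: List[str]                           # gases the board is sensitive to
    read: Callable[[], Dict[str, float]]       # returns raw channel readings

@dataclass
class ModularENose:
    modules: List[SensorModule] = field(default_factory=list)

    def add(self, module: SensorModule) -> None:
        self.modules.append(module)            # a new board can be plugged in at any time

    def sample(self) -> Dict[str, float]:
        out: Dict[str, float] = {}
        for m in self.modules:
            for channel, value in m.read().items():
                out[f"{m.name}/{channel}"] = value
        return out

# Example with two interchangeable boards returning fake readings
nose = ModularENose()
nose.add(SensorModule("mox_a", ["CO", "ethanol"], lambda: {"ch0": 0.42, "ch1": 0.17}))
nose.add(SensorModule("electrochem_b", ["NO2"], lambda: {"ch0": 0.05}))
print(nose.sample())
```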
Moreover, the proposed e-nose architecture also enables the integration of other electronic components such as GPS for geo-referenced measurements, or wireless communications for remote readings, a feature which, despite not being a technological contribution, provides an improvement over most commercial e-noses and facilitates applications of mobile robot olfaction. Figure 1 shows a picture of the prototype built along the course of this project. The particular configuration shown includes a power module (along with a 2200 mAh lithium battery, useful for pre-heating the gas sensors when the robot is still not powering the e-nose), an SD memory card module to keep a log of all measurements, and four gas-sensing modules (hosting eight MOX sensors and two electrolytic sensors). In terms of consumption, due to its modular nature, the total power needed by this e-nose is highly dependent on its particular configuration. As an example, the setup shown in Figure 1 has a maximum power consumption of ∼2.5 W, which is suitable for being supplied through a standard USB2.0 port. This value is low enough to not significantly compromise our robot's autonomy, as they include high capacity lithium-ion batteries capable of powering all the electronic and mechanical devices on the robot, including the on-board PC and the wheels' motors. A thorough description of the power consumption values for each module can be found in [8]. Mobile Robots Along the course of the IRO project two different robotic platforms have been employed for carrying out the multiple experiments, namely Rhodon and Giraff. • Rhodon is a laboratory robot built upon a commercial PatrolBot platform (refer to Figure 2a), capable of being tele-operated or even to autonomously navigate (i.e., self localization and obstacle avoidance) by using a pair of 2D laser scanners: a SICK PLS (front) and a Hokuyo URG (back). The on-board PC controls both the navigation and data acquisition by means of a set of software modules running within a ROS framework. Since the experiments described in this paper corresponds to different stages of the IRO project and aimed to different purposes, diverse robot setups have been adopted, as specified in Section 4. The Rhodon robot has been available from the beginning of the IRO project, and is capable of carrying heavy loads, becoming ideal for the attachment of a robotic arm used in one of the experiments. • The second robotic platform employed is the so called Giraff robot [21,22]. It has been used during the experiments regarding object recognition, as described in Section 6. In a nutshell, it is a telepresence robotic platform equipped with a frontal 2D laser range finder for navigation and localization, and a set of RGB-D cameras to capture 3D information from the environment (see Figure 2b). The Giraff robot became available later during the project and, as it is lighter and easier to transport than Rhodon, it was chosen for the experiments related to semantics, due to the need for recording visual measurements in a real house. Gas Recognition and Classification for Robotic Applications The task of odor recognition deals with the problem of identifying a volatile sample among a set of possible categories [23]. This process plays an important role in the development of many applications, such as city odor mapping [24,25], pollution monitoring [26], breath analysis in clinical environments [27], or the nowadays common estimation of blood alcohol content for drivers [28,29]. 
Among them, there are some applications such as pollution monitoring or leak detection that require measuring the environment continuously and/or at different locations. For such scenarios, the use of a mobile robot with the capability of identifying and measuring the volatiles' concentration is of great help, as already reported in [30]. Gas Classification The classification of volatile substances is, possibly, the most studied application of e-noses. Traditionally, this has been performed by analyzing the response of an array of gas sensors when exposed to pulse-like gas excitation under well-controlled measurement conditions (i.e., temperature, humidity, exposure time, etc.). Unsurprisingly, dozens of works report less than 10% classification error rate under these specific circumstances. However, when the classification is to be performed on a real, uncontrolled scenario, and particularly for the case where the e-nose is collecting samples on board a moving platform, assumptions such as a perfect alignment or equal length of patterns do not hold [31]. This, which is due to the dynamic and chaotic nature of gas dispersal, together with the strong dynamics shown by most gas sensor technologies, notably increases the complexity of the classification problem [7]. Continuous Chemical Classification The discrimination of gases performed with a robot equipped with an array of gas sensors presents a number of additional challenges when compared to standard identification applications. While standard classification tasks usually host gas sensors inside a chamber with controlled humidity, temperature and airflow conditions, in robotics olfaction, there is no control over the sensing conditions. This entails that the sensor signals to be processed are noisy and dominated by the signal transient behavior [32]. Under these challenging conditions, chemical recognition can be seen as a particular case of time series classification, characterized by working on sub-sequences of the main data stream (see [33] for a complete review). Nevertheless, most of these approaches are proposed for uni-variate time series, while e-nose data are fundamentally multi-variate (i.e., based on an array of gas sensors with different dynamic responses). This, together with the aforementioned challenges of real data, make most segmentation approaches difficult to apply to e-nose data, which, in turn, affect negatively the classification rate. A novel approach was published in [34] as a partial result of the IRO project to address the aforementioned issues. This approach is based on generative topographic mapping through time (GTM-TT) and integrates supervised classification and relevance learning (SGTM-TT) to the problem of volatile identification in mobile robotics. By exploiting the strong temporal correlation of the e-nose data, the method is capable of classifying gases with high accuracy employing short data sequences (1 s, 10 s and 20 s). Given the ephemeral nature of gas dispersion, the impact of the data sequence length on the classification performance is also analyzed, trying to push the limits towards a fast-response chemical recognition system. Furthermore, another remarkable advantage for robotics applications is the introduction of a relevance value, by studying the relevance of the different sensors composing the e-nose and the time points in the data sequence for predicting the class label. 
Figure 3 shows an example of these magnitudes for an e-nose composed of five gas sensors (Figaro TGS-2600, TGS-2602, TGS-2611, TGS-2620, and MiCS-5135) when exposed to four different gaseous substances (gin, acetone, ethanol and lighter-gas). As can be seen, the relevance in the classification process of each sensor drastically varies according to the gas being exposed, sometimes being one sensor dominant over the others, while in other cases it would be necessary to consider a combination of their outputs to achieve a good classification rate. Related to the time points relevance (Figure 3e), it can be seen how the most relevant data match the exposure time, while the relevance decays considerably during the recovery phase. However, due to the different recovery times of the sensors composing the e-nose, we can find some time-periods with high relevance that could also be used to get a high accuracy in the classification. In these experiments, the Rhodon robot was equipped with a robotic arm that held an aspiration tube connected to the MCE nose, as can be seen in Figure 4. Later, in [7], we advocated the use of the well known sliding window approach to avoid feature based segmentation and to study up to which extent considering delayed samples contributes to exploit the temporal correlation of e-nose's data. This technique is attractive because it is simple, intuitive, and, moreover, amenable to online applications, which is a primary focus of the IRO project. We analyzed the impact of the window length on the classification accuracy (see Figure 5) for three state of the art classifiers, a variety of experimental scenarios, e-nose configurations and gas classes (employing three different olfaction datasets). The main conclusion of such work is that, for online chemical classification in uncontrolled environments, feeding the classifiers with additional delayed samples leads to a small, yet important, improvement (up to 6% units) on the classification accuracy. Gas Classification in Motion Having demonstrated that online chemical classification is feasible with a mobile robot, IRO also investigated the impact of carrying such task while the robot is navigating. We analyzed the induced changes in the gas sensor's response and determined that the movement of the robot has an important impact on the classification accuracy if not properly considered, resulting in a decrease of up to 30% in some configurations [35]. We supported our conclusions with an extensive experimental evaluation consisting of a mobile robot inspecting a long indoor corridor with two chemical volatile sources (ethanol and acetone) more than 240 times, at four different motion speeds: low ≈ 0.2 m/s, medium ≈ 0.4 m/s, high ≈ 0.5 m/s and very high ≈ 0.6 m/s. In these experiments, apart from the e-nose, the Rhodon robot was equipped with a Gill WindSonic ultrasonic anemometer for measuring the wind flows in the environment, and a miniRAE Lite photo ionization detector as an alternative gas detector. The on-board e-nose, in turn, was composed of an array of 10 MOX gas sensors including Figaro TGS26xx sensors for measuring gases such as hydrogen, ethanol, CO or Iso-butane, and Hanwei MQx sensors for other substances such as LGP, propane or natural gas. This e-nose provided gas readings at a rate of 5Hz. Further information about the dynamic conditions of these experiments can be found in [35]. 
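As a minimal illustration of the sliding-window scheme discussed above, the sketch below builds feature vectors that stack the current e-nose sample with a number of delayed samples and compares the classification accuracy for several window lengths; the data stream, sensor dynamics and classifier are synthetic and are not meant to reproduce the datasets or classifiers evaluated in [7] or [35].

```python
# Sliding-window features for online gas classification: each feature vector
# stacks the current sample with the previous (window_len - 1) samples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
T, n_sensors = 2000, 6
# Block-wise gas exposures (the "true" class at each time step)
labels = np.repeat(rng.integers(0, 2, size=T // 100), 100)
stream = rng.normal(0, 0.5, size=(T, n_sensors))
# Emulate slow sensor dynamics: the informative channel reacts with a delay
delay = 20
stream[delay:, 0] += labels[:-delay]

def windowed(stream, labels, window_len):
    """Stack the current sample with the (window_len - 1) previous ones."""
    X = np.stack([stream[t - window_len + 1 : t + 1].ravel()
                  for t in range(window_len - 1, len(stream))])
    y = labels[window_len - 1:]
    return X, y

for w in (1, 5, 10, 20):                 # w = 1 means no delayed samples at all
    X, y = windowed(stream, labels, w)
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
    print(f"window length {w:2d}: mean accuracy {acc:.3f}")
```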
To analyze to which extent the motion of the gas sensing device may affect the classification accuracy, we trained multiple classifiers with samples of each chemical volatile collected in a traditional static setup (i.e., both robot and gas source standing still), and then, analyzed the classification performance for a set of increasing motion velocities. Figure 6 (left) shows the results of the experiments from which a noticeable reduction in the classification accuracy is observed when increasing the motion speed. This confirms our suspicions about the negative impact that the motion speed of the robot has over classification rate. To overcome, to a certain degree, the aforementioned effect, we also analyzed the classification accuracy when the classifier is also trained with in-motion data samples, proposing different training schemes. We showed that training a classifier with data collected in motion yields, on average, more accurate outcomes (see Figure 6, right) than using a static setup (Figure 6, left). Moreover, we found that it is not necessary to train the classifiers with data gathered at the same speed than the testing data to remove this negative correlation, but it suffices to capture the underlying dynamics. As a general conclusion, the absolute speed is not a determinant parameter, but the gap between the speeds used to collect the training and testing datasets is an aspect to be taken into consideration when deploying real olfaction applications with a mobile robot. Object Recognition and Semantic Knowledge for Robotic Applications From the object recognition side, the peculiarities of the acquisition process of visual data by a mobile robot permits the inspection of larger portions of the robot workspace, gathering rich semantic information. In this case, semantic information comes in the form of contextual relations, i.e., objects that are found according to certain configurations: keyboards are usually in front of computer screens, microwaves are in the same room as refrigerators, tables are typically surrounded by chairs, etc. [36]. Thereby, during the object recognition process, the presence of a refrigerator in a room helps to disambiguate the classification of a white, box-shaped object as a microwave and not as a night stand [11,37]. To exploit these contextual relations in the IRO project, we make use of Conditional Random Fields (CRFs), a model from the Probabilistic Graphical Models (PGMs) family [38], and combine them with ontologies [39] to achieve a more robust performance. CRFs represent the objects in the environment as nodes in a graph, where edges are used to link contextually related objects (Figure 7). In [40], a survey on different learning approaches for these models is presented, performing a comparative analysis focusing on the time needed for training and the achieved recognition accuracy. This analysis is especially targeted at finding the most suitable one for scene object recognition, providing Loopy Belief Propagation (LBP) the best results [41]. These comparisons were done with two state-of-the-art datasets, including a particular one, called Robot@Home [42], specifically conceived to serve as a testbed for the evaluation of semantic mapping algorithms, mainly those exploiting contextual information (see Figure 8). To combine different sources of contextual information, novel environment representations can be used such as the so-called Multiversal Semantic Map [43]. 
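Returning to the contextual-relations idea introduced above (a nearby refrigerator making "microwave" more plausible than "nightstand" for a white, box-shaped object), the toy sketch below re-weights appearance-only class scores with co-occurrence information. It is a deliberate simplification of the CRF formulation, not an implementation of it, and all probabilities are invented.

```python
# Toy sketch of context-aware object recognition: appearance-only scores for an
# ambiguous object are re-weighted by co-occurrence with objects already
# recognized in the room.
appearance = {"microwave": 0.45, "nightstand": 0.55}     # white, box-shaped object

# Compatibility of each candidate class with another object present in the room
cooccurrence = {
    "refrigerator": {"microwave": 0.9, "nightstand": 0.1},
    "bed":          {"microwave": 0.1, "nightstand": 0.9},
}

def rescore(appearance, context_objects):
    scores = dict(appearance)
    for obj in context_objects:
        for cls in scores:
            scores[cls] *= cooccurrence[obj][cls]   # combine unary and contextual evidence
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

# A refrigerator in the same room pushes the decision towards "microwave"
print(rescore(appearance, context_objects=["refrigerator"]))
```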
The Multiversal Semantic Map is an extension of traditional semantic maps for robotics [44], with the ability to coherently manage uncertain information coming from, for example, object recognition or gas classification processes, and to reference it, within a metric map, to the location where it was acquired. Additionally, it also comprises semantic information codified by means of an ontology, enabling the execution of high-level reasoning tasks [45], which are of special interest in this project.
Exploiting High-Level Olfactory and Visual Semantic Information in the Planning and Execution of Tasks
Mobile robots operating in human environments such as offices, hospitals, or factories benefit from the fusion of different sensing modalities to efficiently accomplish tasks that are hard or even unfeasible to address if only one sensor is employed [46]. As mentioned, in the IRO project we focus on two of these modalities, namely vision and artificial olfaction, and study their application to a challenging problem: the localization of gas emission sources within real-world indoor environments, commonly referred to as gas source localization (GSL) [47]. For that, the robot needs not only to detect the volatile chemical substance that is being released, but also to pinpoint the location of its release source. As stated, enriching the search process with visual sensory information and considering semantic relationships through an inference process can enhance the current state of the art of GSL algorithms. To demonstrate this claim, two parallel approaches were considered: on the one hand, we relied on human intervention by means of a teleoperated mobile platform [48], delegating the inference of the most likely source location to the human tele-operator; on the other hand, we developed a fully autonomous system able to infer the most likely source location based on the sensory data available on the robot and high-level semantic reasoning [49]. Both approaches are detailed in the following sections and were assessed through experiments with the Giraff mobile robot.
Olfactory Telerobotics
Since inferring the type and location of the object releasing the gases detected by the robot is not straightforward, we first simplified the problem by introducing the human factor and its powerful reasoning capabilities [50]. In this context, olfactory telerobotics can be seen as the augmentation of the sensing capabilities of a conventional teleoperated mobile robot to acquire information about the surrounding air (i.e., gases, wind speed, etc.) in addition to the usual audio and video streams (see Figure 9). Figure 9. Diagram of a traditional teleoperation system (in black) and its extension to olfactory telerobotics (in blue). The latter requires equipping the mobile robot with additional sensors (e.g., an e-nose or an anemometer), and enhances the teleoperation user interface to display this new sensory data. To evaluate whether human reasoning can be exploited through a teleoperated robot to efficiently locate the gas source, we collected a dataset comprising 60 GSL experiments with a teleoperated mobile robot [51]. The goal of the human operators was to identify and locate the gas source among several visually-identical candidate objects (see Figure 10).
Results demonstrate that humans achieved a success rate of over 75%, with search times of three to four minutes, supporting our hypothesis that semantic reasoning is indeed used by humans when locating the gas source with this configuration.
Semantic-Based Autonomous Gas Source Localization
The use of visual information when locating a gas source is not a novel approach, yet it has only been superficially explored in the literature, with very simple problem domains in which the robot exploited prior knowledge about the source's physical characteristics to reduce the locations to search [52]. Moreover, a formal way to define and exploit the relationships among gases and objects (i.e., their semantics) is still missing, an aspect that could assist the GSL process in a more flexible way. In [53], as a partial result of the project, we presented a novel GSL system that pursues both efficiency, by exploiting the semantics relating the detected gases and the objects in the environment, and coherence, by considering the uncertainty in the identification of gases and objects. To encode these semantic relationships (e.g., that heaters can release smoke), we rely on an ontology [39]. These factors make this approach particularly suitable for structured indoor environments containing multiple objects likely to release gases, where semantic relationships can be exploited. Fusing the classification results (from both the detected gases and the recognized objects in the environment) together with the semantic information, a probabilistic Bayesian framework is proposed to assign to each detected object a probability of being the gas source. Finally, a path-planning algorithm based on Markov Decision Processes (MDP) merges these probabilities with the navigation distances from the current robot location to the different objects (i.e., a cost value related to the time the robot would spend to reach each candidate object), to produce a plan that minimizes the search time. Both simulated (using computational fluid dynamics tools and the GADEN gas dispersion simulator [54]) and real experiments demonstrate the feasibility of this novel approach, considerably reducing the search times and producing more coherent gas source searches.
Conclusions
In this paper, we have described and reviewed the goal and main contributions of the IRO project, focused on the improvement of the sensory and autonomous capabilities of mobile robots through olfaction. We have first reviewed the concept of the electronic nose, raising some issues specific to its use on board a mobile robot, and described the design of a modular e-nose suitable for mobile robotics applications. Then, having in mind the final goal of fusing different sensing modalities, we have focused on the intermediate tasks of visual object recognition and gas classification. Here, the project contribution consisted of different algorithms and experimental evaluations aimed at improving the recognition rates when these tasks are carried out by a mobile robot while navigating. Finally, we have introduced semantic reasoning to successfully fuse multiple sensing modalities when solving the challenging problem of gas source localization with a mobile robot. At this point, the project contributed a novel architecture able to exploit the information provided by the vision and olfaction sensory sub-systems, as well as to handle their respective uncertainties.
For each detected object in the environment, a probability of being the gas source is estimated and afterward fed to a probabilistic framework that outputs the optimal path the robot should follow when inspecting the different objects in the environment, minimizing the search time.
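As a compact illustration of that final step, the sketch below combines per-object source probabilities with navigation costs to decide an inspection order, using a greedy utility rule as a simple stand-in for the MDP-based planner; the objects, probabilities and travel times are invented.

```python
# Sketch: ordering candidate objects for inspection by combining each object's
# probability of being the gas source with the navigation cost to reach it.
candidates = {
    "heater":     {"p_source": 0.55, "travel_s": 40.0},
    "waste_bin":  {"p_source": 0.30, "travel_s": 10.0},
    "fruit_bowl": {"p_source": 0.15, "travel_s": 25.0},
}

def inspection_order(candidates):
    remaining = dict(candidates)
    order = []
    while remaining:
        # Greedy utility: probability mass gained per second of travel
        best = max(remaining,
                   key=lambda o: remaining[o]["p_source"] / remaining[o]["travel_s"])
        order.append(best)
        remaining.pop(best)
    return order

print("inspection order:", inspection_order(candidates))
# -> the nearby waste bin is checked first despite a lower source probability
```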
2019-08-14T13:05:07.782Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "6e1668f96cd8c76d0718ab5ccf133e494e665773", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc6720589?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f54fa6c0888e324a2b2865c2ad227494b4c85702", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
61725648
pes2o/s2orc
v3-fos-license
A Survey on Computer-Aided Healthcare Diagnosis for Automatic Classification and Grading of Cataract - A fundus image analysis based computer aided diagnosis for automatic classification and grading of cataract is presented.The burden of ophthalmologists can be reduced due to this system and help cataract patients to know their cataract conditions. The system comprises the processes as pre-processing of fundus image, image feature extraction and automatic cataract classification and grading. A multiclass discriminant analysis algorithm is used for cataract classification, including two-class classification and cataract grading in mild, moderate, and severe. The wavelet transform is investigated to extract from fundus image. ANN based methods and SVM based methods have been used in pattern recognition and classification. The fundus image analysis for cataract classification and grading is very helpful for improving ophthalmic healthcare quality and review of fundus image. I. INTRODUCTION The disease cataract is a clouding and dulling of the eye lens. It leads to decrease in vision. The eye lens focuses an image onto the retina of the eye. This is where that image can be processed, and then sent to the brain. The decreased vision, glare, contrast, color sensitivity and sometimes blindness can happen when cataract matures. The only way to diagnose the cataract includes examining the eyes of the patient. The eye doctor will examine the lens of eye or do some tests to know more about the health and structure of the patient's eye. The comprehensive eye examination for cataracts commonly comprises:  Visual acuity assessment test (VAT) for assessing distant vision.  Slit lamp examination for magnify the eye.  Tonometry test for assess fluid pressures happenings inside the eye and the increased pressure may leads to glaucoma disease.  Dilated eye examination for evaluating the lens and the structures of the back of the eye.  Other Treatments For an earlier stage of cataract, the vision may be improved by using different eyeglasses, magnifying lenses, or stronger lighting. If these measures do not help, or if vision losses interfere with daily activities such as driving, reading, or watching TV, cataract surgery is the one of the efficient treatment. The cataract surgery involves waiting until the patients are ready to have it so that it does not harm the patient's eye. While the patient takes long time for surgery, cataract will get cloudier with time. This type of surgery is almost performed always in one eye at a time. After the cloudy presence in lens is removed, the eye surgeon places an intraocular lens in its place. An intraocular lens is a lens which is clear which does not require care and becomes a mandatory part of eye. Most people need reading glasses or glasses for distance vision, which is still problematic after surgery. There is another new option that is multifocal intraocular lenses, which is suitable for both near and far distances in the particular lens. Many who receive multifocal intraocular lenses may not need to wear any type of glasses. II. COMPUTER-AIDED DIAGNOSIS (CAD) In medical imaging and diagnostic radiology, Computer Aided Diagnosis (CAD) is one of the key research topics. The philosophy and motivation for early improvement of CAD schemes are presented together with the present status and future potential of CAD in a PACS environment. Radiologists use the computer results as a second option and make the final decisions based on the CAD. 
CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based only on computer algorithms. With CAD, the performance of the computer does not have to be better than, or even comparable to, that of physicians, but needs to be complementary to it. In fact, in the detection of breast cancer, a large number of CAD systems have been employed to support physicians. A Computer-Aided Diagnosis scheme that makes use of lateral chest images has the potential to improve the overall efficiency in the detection of lung nodules when combined with other CAD schemes for chest images. On lateral chest radiographs, vertebral fractures can be detected reliably by computer, so a radiologist's accuracy in the detection of vertebral fractures would be improved by using such a CAD feature, and earlier diagnosis of osteoporosis would become possible. A CAD system has been introduced to assist radiologists in the detection of intracranial aneurysms in MRA. A CAD scheme for the detection of interval changes has been developed by use of subtraction images derived from consecutive bone scan images. In the future, many CAD schemes could be assembled as packages and executed as a part of PACS. For example, the chest CAD package may include the computerized detection of interstitial opacities, lung nodules, vertebral fractures, cardiomegaly and interval changes in chest radiographs, as well as the on-screen categorization of benign and malignant nodules and the differential diagnosis of interstitial lung diseases. In order to assist in the differential diagnosis, it would be possible to search PACS for, and retrieve, images with known pathology that are similar to a new, unknown case, once a reliable and useful method has been implemented for quantifying the similarity of a pair of images for visual comparison by radiologists. Recently, computer-aided diagnosis (CAD) has become a part of routine clinical work for the detection of breast cancer at many screening locations and hospitals. This appears to indicate that CAD is beginning to be applied widely to the detection and differential diagnosis of many different types of abnormalities in medical images obtained in various examinations by use of different imaging modalities. In fact, CAD has developed into one of the major research topics in medical imaging and diagnostic radiology. Although early attempts at computerized analysis of medical images were made in the 1960s, systematic and serious research on CAD started in the 1980s with an essential change in the concept for deployment of the computer output, from automated computer diagnosis to computer-aided diagnosis. The philosophy and motivation for the early development of CAD schemes are presented together with the present status and future potential of CAD in the Picture Archiving and Communication Systems (PACS) environment.
III. FUNDUS IMAGES OF CATARACT
The fundus of the eye is the interior portion of the eye, opposite the lens, and includes the optic disc, retina, fovea, macula and posterior pole. The fundus can be examined by either ophthalmoscopy or fundus photography. The different categories of cataracts include:
• A subcapsular cataract - This arises at the back portion of the lens. People with diabetes or those taking high doses of steroid medications have a greater risk of developing a subcapsular cataract.
 A nuclear cataract -It is made on the depth of the nucleus of the lens. This nuclear cataract usually related with aging.  A cortical cataract -It is characterized by wedgelike, white opaqueness that start in the periphery of the lens and it work their way to the center in a spoke-like fashion. This type of cataract mainly arises in the lens cortex, which is the part of the lens that surrounds the central nucleus. Cataracts may be stationary or progressive, partial or complete, or hard or soft. The main categories of age related cataracts are cortical, nuclear sclerosis, and posterior subcapsular. Nuclear sclerosis is the most common kind of cataract, which involves the central or nuclear part of the lens. Over period, this becomes hard or 'sclerotic' due to deposition of brown pigment within the lens and condensation of lens nucleus. In progressive stages, it is said to be known as brunescent cataract. These types of cataract can exist with a shift to shortsightedness and causes difficulties with distance vision and reading is less affected. Cortical cataracts are due to the lens cortex that is outer layer becoming opaque. They happen when changes in the water content of the boundary of the lens causes fissuring. When these cataracts are observed through an ophthalmoscope or other magnification method, the presence is related to white spokes of a wheel. Symptoms often include difficulties with glare and light scatter at night. Posterior subcapsular cataracts are cloudy at back of the lens contiguous to the capsule in which the lens lies. Because light becomes more engrossed toward the back of the lens, they can cause uneven symptoms for their size. An undeveloped cataract has certain transparent protein, but with a mature cataract, the entire lens is cloudy. In a hyper mature or Morgagnian cataract, the lens proteins have become liquefied. Congenital cataract, which may be identified in middle age, has a different cataloging and includes lamellar, polar, and sutural cataracts. IV. METHODS The present cataract examination equipment and techniques, such as Lens Opacities Classification System (LOCS) [16], are difficult for most patients and can only be operated by well-experienced ophthalmologists. To make them appropriate to accomplish a diagnosis based on fundus image of eye, the well-experienced ophthalmologist has to be substantially close to the patients. This statistic makes the ophthalmologist becoming a limited resource and a bottleneck that causes the large scale screening in the early stage of the cataract disease unmanageable. The fundus image can be more effortlessly obtained only with the help of nurses from public service even the patients themselves. This system emphasis on the analysis of fundus image and fully automatic classification of cataract. Its goal is to decrease the problem of scarce resources and improve the usefulness and proficiency of fundus image review, through which to facilitate active and superior healthcare services. The basic requirement is to develop a costeffective and convenient computer-aided auxiliary analysis system for classification and grading of cataract automatically, which should sort multiple types of healthcare facility providers loosely coupled together and cooperatively provide high quality medical precaution to the patients in rural regions. Consequently, cataract classification and grading system is designed. 
The main component of the system consists of three parts which includes pre-processing of fundus image, feature extraction, and automatic classification and grading of cataract. These three measures will run on the server of an ophthalmic clinic and can be incorporated into its prevailing information systems. The existing information system has collaborated with many public clinics, remote pastoral hospitals and additional hospitals, sharing the healthcare resources through the internet. This co-operative mode has greatly enriched the medical service quality and clinic proficiency. The proposed cataract classification and grading system recovers fundus image of eyes from the server, conducts analyses of images and automatic cataract classification, and yields the report on the classification results. In this way, the automatically grading and classification system replaces the manual screening and decouple the co-located association between patients and well experienced ophthalmologists. Since the non-cataract eye images are not directed to the ophthalmologist for testing, a large amount of work burden of scarce resources is shortened, which make them be capable to expend more time on the patients that really need their concerns. Inside our structure, each fundus image of eye is converted into a set of types after pre-processing and feature extraction, based on which consequences are obtained by classification algorithm. The outcome gives the ophthalmologist a reference to the condition of cataracts patients. A. Fundus Image Preprocessing It involves the preprocessing of fundus images to remove noise and to correct non-uniform illuminations. An appropriate preprocessing will yield an accurate result in the further stage. In this module first we will load the original input fundus image. Then Gaussian filter will be applied for input fundus image to get an original image without noises. B. Image Feature Extraction When the input image to an algorithm is processed and it is supposed to be redundant then it can be renovated into a reduced set of features. In this module, feature extraction is done by Wavelet Transform. A wavelet transform is the demonstration of a function by wavelets. The wavelets are scaled and converted copies, said to be known as daughter wavelets, of a fast-decaying or finite length oscillating waveform, known as the parent wavelet. Wavelet transforms have benefits over traditional Fourier transforms for expressive functions that have cutoffs and sharp peaks, and for exactly disintegrating and recreating finite, non-periodic and non-static signals. Wavelet Transforms are categorized into two transformations as Discrete Wavelet Transforms (DWTs) and Continuous Wavelet Transforms (CWTs). Both the transforms are continuous-time analog transforms. CWTs operate over each potential scale and translation whereas DWTs practice an exact subclass of scale and translation ideals or representation grid. In this, Discrete Wavelet Transform is used. Due to the advantage of improved resolution, the Gaussian wavelets are chosen as parent wavelet. The wavelet transform 2D-DWT is used to find its spectral components and its features which are of 64x64 matrix are extracted from the obtained 2D-DWT representation. a. Discrete Wavelet Transform This type of transformation decomposes an image into several sub-bands according to some recursive process Fig. 4 comprises of LH1, HH1 and HL1 which represents the detailed images and LL1 corresponds to the estimate image. 
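A minimal sketch of the preprocessing and wavelet feature-extraction steps described above is given below, using a synthetic grayscale array in place of a real fundus image. A Haar wavelet is used as a stand-in mother wavelet, since common DWT implementations do not provide a discrete Gaussian wavelet, and the per-sub-band statistics are only illustrative of the feature matrix mentioned in the text.

```python
# Sketch of the pre-processing and wavelet feature-extraction steps:
# Gaussian smoothing of a (here synthetic) grayscale fundus image,
# followed by a one-level 2D discrete wavelet transform (DWT).
import numpy as np
import pywt                              # PyWavelets
from scipy.ndimage import gaussian_filter

fundus = np.random.rand(128, 128)               # placeholder for a real fundus image
denoised = gaussian_filter(fundus, sigma=1.0)   # pre-processing: suppress noise

# One-level 2D DWT: approximation (LL) plus horizontal/vertical/diagonal details
LL, (LH, HL, HH) = pywt.dwt2(denoised, "haar")  # Haar as a stand-in mother wavelet

# Simple per-sub-band statistics as a compact, illustrative feature vector
subbands = (LL, LH, HL, HH)
features = np.array([np.mean(np.abs(b)) for b in subbands] +
                    [np.std(b) for b in subbands])
print("feature vector:", np.round(features, 4))
```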
The approximation and detail images are then decomposed into second-level approximation and detail images, and the process is iterated to achieve the preferred level of multi-resolution analysis. The coefficient values obtained for the approximation and detail sub-band images are useful features for texture classification. Due to the advantage of better resolution, the Gaussian wavelet is chosen as the parent wavelet. The 2D-DWT is used to find the spectral components of the image, and features in the form of a 64x64 matrix are extracted from the obtained 2D-DWT representation.
Related Works
The paper by Luo Gang et al. [9] evaluates the Gaussian function and proposes an amplitude-modified second-order Gaussian filter for the detection and measurement of blood vessels. Mathematical analysis is given and supported by simulation and experiments to show that the width of a vessel can be measured in a linear relationship with the spreading factor of the matched filter once the filter's amplitude coefficient is appropriately assigned. The absolute diameter of the vessels can then be determined by using a pre-calibrated line, which is typically necessary since images are system dependent.
C. Automatic Cataract Classification And Grading
The proposed system of cataract classification and grading retrieves the fundus image from the server, performs image analysis and automatic cataract classification, and yields a report on the classification results. In this way, the automatic grading and classification system replaces manual screening and decouples the co-located association between patients and well-experienced ophthalmologists. Since non-cataract images are not directed to the ophthalmologist for review, a large part of the workload on scarce specialist resources is removed. After feature extraction, a multiclass discriminant analysis algorithm is used for cataract classification, including two-class classification and cataract grading into mild, moderate, and severe.
Related Works
The paper by C. Sinthanayothin et al. [10] deals with approaches for the automatic recognition and localization of retinal structures. The optic discs were located by identifying the area with the highest variation in intensity of adjacent pixels. Blood vessels were identified by means of a multilayer perceptron neural network, whose inputs were derived from a Principal Component Analysis (PCA) of the image and edge detection on the first PCA component. The fovea was recognized using correlation matching against characteristics typical of a fovea.
D. Performance Evaluation
Following the feature extraction approaches, a multi-class discriminant analysis is used for classification and grading of cataract. The multi-class Fisher classification algorithm is trained by randomly selecting some samples from the data set and then tested using the remaining samples. By repeating the training and testing procedure, the overall performance of the aforementioned classification and grading methods is obtained. In this section, we present a sample result obtained using the Gaussian filter method. The input fundus image is given and preprocessed based on the discrete wavelet transformation. The sample screenshots are shown in Figs. 8 to 12. The following figures show the sample results for smoothing, segmentation of blood vessels, the enhanced image of the optic disc and the image segmentation results. In Fig. 9, the smoothed image is displayed.
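Before turning to the qualitative figures, the classification-and-grading step described above can be sketched as follows; the wavelet-derived features and grade labels are synthetic, and scikit-learn's linear discriminant analysis is used as a stand-in for the multi-class discriminant analysis employed in the system.

```python
# Sketch of the grading step: a multi-class discriminant analysis classifier
# trained on wavelet-derived feature vectors with labels
# {non-cataract, mild, moderate, severe}. Features and labels are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_class, n_features = 50, 8
grades = ["non-cataract", "mild", "moderate", "severe"]

# Synthetic wavelet features whose means shift with cataract severity
X = np.vstack([rng.normal(loc=g, scale=1.0, size=(n_per_class, n_features))
               for g in range(len(grades))])
y = np.repeat(np.arange(len(grades)), n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated grading accuracy:", round(scores.mean(), 3))
```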
Smoothing aims to create a function that captures the important patterns in the image, and the smoothing method is mainly used in scale-space representations. Fig. 11 shows the contrast-enhanced version of the given fundus image. Contrast enhancement of color images is typically done by transforming the image to a color space that has image intensity as one of its components. The contrast-enhanced image avoids the over-saturation caused by basic histogram equalization. Fig. 12: Segmentation of Blood Vessels. Fig. 12 displays the blood vessel segmentation of the fundus image. The segmentation of blood vessels is also an important step for the detection of bright and dark lesions; the performance of automatic detection methods may be enhanced if regions containing vessels can be excluded from the analysis.
VI. CONCLUSION
A computer-aided healthcare diagnosis system for cataract classification and grading from fundus images is presented. The wavelet transform method with the discrete cosine transformation is used to extract the features of the fundus image. The preliminary test results on a data set of fundus image samples show that the performance of the wavelet-transformation-based method is more efficient than that of the sketch-based method. A real-world pilot study on the implementation and organization of the automatic fundus image classification system is described. By retrieving the fundus image of the eye from the cloud server, conducting analysis of the fundus image and automatic classification of cataract, and returning a report on the classification results, the proposed solution offers great potential to reduce the burden on the well-experienced ophthalmologists, who are a scarce resource, and to improve the quality of ophthalmic healthcare.
2019-02-15T14:22:30.768Z
2016-02-24T00:00:00.000
{ "year": 2016, "sha1": "256f47cb6d7fb946e3795e64d5744711593e120c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.17577/ijertv5is020268", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "71332b3945f2866b4014526e2074a2f637c30085", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
12878752
pes2o/s2orc
v3-fos-license
Detection of HPV and the role of p16INK4A overexpression as a surrogate marker for the presence of functional HPV oncoprotein E7 in colorectal cancer Background Based on the well-recognized etiological role of human papillomavirus (HPV) in cervical, anogenital and oropharyngeal carcinogenesis, a potential role of HPV in colorectal carcinogenesis has been suggested. For that reason, the aim of the present study was to investigate the presence of HPV DNA in colorectal carcinomas (CRC) and to study overexpression of p16INK4A as a marker for the presence of an active HPV oncoprotein E7. These findings were correlated with clinical and pathological prognostic factors of CRC. Methods The presence of HPV was assessed using a multiplex PCR system of 10 non-biotinylated primers. The amplified fragments of HPV positive samples were further analyzed by a highly sensitive, broad spectrum SPF10 PCR and subsequently genotyped using reverse hybridization in a line probe assay. P16INK4A protein expression was investigated in a subset of 90 (30 HPV positive and 60 HPV negative) CRC samples by immunohistochemistry. Results HPV DNA was found in 14.2% of the CRC samples with HPV16 as the most prevalent type. No significant differences in clinical and pathological variables were found between HPV positive and negative CRCs, except for age. HPV positive patients were significantly younger (p = 0.05). There was no significant correlation between the presence of HPV and overexpression of p16INK4A (p = 0.325). Conclusions In conclusion, the presence of oncogenic HPV DNA in a small cohort of CRC samples may suggest that HPV may be involved in the carcinogenesis of some CRC. However, contrary to what has been observed in head and neck squamous cell cancer and cancer of the uterine cervix, p16INK4A does not seem to be a surrogate marker for an active HPV infection in CRC. Therefore, further functional analyses are necessary to elucidate the role of HPV in CRC. Background Colorectal cancer (CRC) is one of the most common malignancies throughout the Western World. Surgery is the cornerstone in the treatment of patients with CRC and is followed by adjuvant chemotherapy and radiotherapy for specific subgroups of patients [1]. Although many risk factors for development of CRC have been identified, the molecular mechanisms related to the colorectal carcinogenesis remain to be elucidated [2]. The HPV viral oncogenes E6 and E7 have shown to be the main contributors to the development of HPV induced cancers. These oncogenes have the ability to bind host cell regulatory proteins, especially tumor suppressor gene products [17]. The HPV oncoprotein E7 is known to bind and inactivate hypophosphorylated retinoblastoma protein (pRB) [18], which eventually leads to upregulation of p16 INK4A . P16 INK4A is a tumor suppressor protein that inhibits cyclin dependant kinases (CDK)-4 or -6 binding to cyclin D which regulates the G1 cell cycle checkpoints [19,20]. Overexpression of p16 INK4A is considered to be strong and consistent in HPV-induced cancers [21]. Therefore, overexpression of p16 INK4A , as detected by immunohistochemistry, has shown to be a useful adjunct to cytology in cervical cancer screening [22], a reliable marker of human papillomavirus-induced oral high-grade squamous dysplasia [23], and a useful adjunct in the assessment of biopsies for HPV-associated anal intraepithelial neoplasia [24]. 
Furthermore, in primary rectal squamous cell carcinoma (SCC) there was a clear association between strong reactivity for p16 INK4A and the presence of high-risk HPV [25]. However, that study was limited to three patients. The aim of the present study was to investigate the presence of HPV DNA in a series of colorectal carcinomas. In a second part of the study, overexpression of p16 INK4A was investigated as a marker for the presence of an active HPV oncoprotein E7 in a subset of the above mentioned series of colorectal cancers. Subsequently, the results were analyzed for correlation with prognostic clinical features for disease outcome and pathological variables. Tissue samples Material from a previous study of patients with CRC treated at the Antwerp University Hospital in Edegem or the St. Augustinus Hospital in Wilrijk [26] was used for HPV detection as described below. A total of 232 CRC samples were eligible for HPV detection. This comprised 90 females and 142 males with a median age of 59.4 years (range 30 to 88 years). TNM staging was determined and the distribution was as follows: 27 patients were classified as stage I (12.2%), 68 as stage II (30.6%), 74 as stage III (33.3%) and 53 as stage IV (23.9%). Seventy patients had a tumor located in the proximal region of the colon (30.2%), while 80 tumors were found in the distal colon (34.5%) and 71 in the rectum (30.6%). All HPV positive tumors, except three (n = 30), plus two randomly chosen HPV negative tumors per HPV positive tumor (n = 60) were used for p16 INK4A immunohistochemistry. Three HPV positive samples could not be investigated by immunohistochemistry since the paraffin blocks were no longer available. The study was approved by the local Ethics Committee of the University of Antwerp and was conducted in accordance with the ethical principles stated in the most recent version of the Declaration of Helsinki. DNA isolation Tumor DNA was obtained from formalin-fixed, paraffin embedded tissue blocks. After manual microdissection to enrich for tumor cells, DNA was isolated as described previously [27]. After DNA extraction, adequate DNA isolation was confirmed by β-globin PCR [28], generating a fragment of 110 bp. PCR and genotyping analysis of HPV Since formalin-fixed paraffin embedded materials often yield poorly amplifiable DNA, the efficacy of the primer pair is inversely correlated with the length of the amplimers and the primers should be designed to amplify a relatively short PCR fragment [29]. DNA samples were first tested in a genital HPV broad spectrum PCR using 10 non-biotinylated short PCR fragment (SPF) primers. The SPF primers sets are designed to amplify a 65 bp fragment located within the L1 region of HPV [30,31] allowing highly sensitive detection of HPV DNA. PCR reactions were performed in a final volume of 50 μl containing 1.25 units of iTaq DNA polymerase (BioRad, Nazareth, Belgium), 2 mM MgCl 2 , 200 μM deoxynucleotide triphosphate, 1× iTaq buffer, 15 pmol of each of the forward and reverse primers and 10 μl of isolated DNA. The PCR reactions were carried out using the iCycler (BioRad) as previously described [32], except that activation of the enzyme was carried out for 3 min at 95°C. Each experiment was performed with separate positive (1 pg and 10 pg HPV16 stable SiHa cells) and negative PCR controls. After analysis on ethidiumbromide stained agarose gel analysis, positive samples were re-amplified using biotinylated SPF 10 primers (InnoGenetics, Ghent, Belgium). 
Immunohistochemistry Five μm-thick sections were prepared from formalinfixed paraffin-embedded tissue for IHC. Sections were deparaffinized in toluene, dehydrated and subjected to heat antigen retrieval in Epitope retrieval solution (as supplied in the CINtec Histology Kit, mtm laboratories, Heidelberg, Germany) in a heating bath for 30 min. at 95 (± 1)°C. Sections were subsequently stained using the CINtec Histology Kit (mtm laboratories) on a Dako Autostainer Plus system (DAKO, DakoCytomation, Glostrup, Denmark). Endogenous peroxidase activity was quenched by incubating the slides in peroxidase blocking reagent for 10 minutes. Incubation with mouse anti-human p16 INK4A monoclonal antibody (diluted 1:100) was performed for 30 minutes at room temperature. Sites of binding were detected using 3,3'-diaminobenzidine (DAB + ) as chromogen according to the manufacturers instructions. The sections were counterstained with haematoxylin, dehydrated, cleared and mounted. MSI analysis All cases had been previously analyzed for MSI status [26]. After manual microdissection of formalin-fixed, paraffin embedded tissue blocks, DNA was isolated as described previously [27]. MSI analysis was performed using the mononucleotide multiplex system as described earlier [34]. In short, the sense primers were chemically labeled at the 5' end with FAM™ fluorescent dyes. PCR was carried out in a final volume of 25 μl containing 200 μmol/L dNTPs (MBI Fermentas, St. Leon-Rot, Germany), 500 nM of each sense and antisense primer (Eurogentec, Seraing, Belgium), 1 × PCR buffer (60 mM Tris SO 4 (pH 8.9), 18 mM (NH 4 )SO 4 and 2 mM MgSO 4 ) and 1 unit Discoverase dHPLC DNA polymerase (Invitrogen, Merelbeke, Belgium). Fluorescent PCR products were analyzed by capillary electrophoresis using an ABI 3100 Genetic Analyzer (Applied Biosystems, Lennik, Belgium) and Genemapper Software 3.7. Statistics Prognostic relevance of HPV was assessed by survival analysis. Survival probability was estimated using the Kaplan and Meier method. Differences were tested using the log rank statistic. The median follow up for OS and DFS was 4.5 and 3.7 years respectively for the entire study population and 5.8 and 4.8 years respectively for the subpopulation of 90 colorectal tumors used for p16 INK4A IHC. Possible associations between the presence of HPV-DNA and clinicopathological parameters of colorectal cancers were investigated using the χ 2 -test or Fisher's exact test (when appropriate) for categorical variables and using Student t-test or Mann-Whitney U test (when appropriate) for continuous variables. In order to assess the independent prognostic contribution of HPV, a multiple Cox regression analysis was conducted. All analyses were conducted using SPSS (version 16.0). Significance for all statistics was two-tailed and recorded if p < 0.05. HPV detection and genotyping All tissue samples were positive for the β-globin gene, indicating that DNA was available for molecular analysis. HPV DNA was detected in colorectal tissue in 33 out of 232 patients (14.2%) using SPF 10 PCR. HPV DNA-positive samples were subsequently genotyped using the SPF 10 LiPA by reverse hybridization (Innogenetics). In about half of the samples a single HPV infection was identified (54.5%) whereas the other HPV-DNA positive samples contained multiple HPV infections (45.5%). A relative broad spectrum of HPV genotypes was found, HPV 16 (57.6%) being the most prevalent type, followed by HPV 18 (45.5%) ( Figure 1). 
The low risk HPV types 6, 11, 42, 43 and 44 were also found in a limited number of CRC samples, but, with one exception (for HPV type 43), always in the presence of a high risk HPV type. Correlations of HPV with clinicopathological variables and survival The median age of the overall population is 59 years. HPV positive patients were younger (median age: 56 years) than HPV negative patients (median age: 60 years) but the difference was of borderline significance (p = 0.05). Anatomic location of the tumor had no correlation with the presence of HPV infection. HPV prevalence was similar in proximal colon, distal colon, and rectum (p = 0.565). The location of the tumors throughout the colon in correlation to the presence of HPV is shown in Table 1. Clinical and pathological features were studied between HPV positive and negative carcinomas. The results are shown in Table 2 this scoring system, 57 slides were scored again after a two month interval, to assess the reproducibility of the scoring system. Although some differences were noted, both in cell numbers and in intensity, the reproducibility of the scoring system was high (Kappa: 0.831 and 0.742 for cell numbers and intensity respectively). Both aspects were subsequently weighed to come to a final score as shown in Table 3, again, reproducibility after a two month interval was very high (Kappa: 0.975). Seventy-four percent (n = 67) of all CRC tumors showed p16 expression ranging from weak (n = 11) over moderate (n = 18) to strong (n = 38). The results of p16 INK4A IHC in relation to presence or absence of HPV are given in Table 4. It is obvious from these results that there is no significant correlation between the presence of HPV and overexpression of p16 INK4A (p = 0.325) in the colorectal cancer tissues examined. Correlation of p16 INK4A expression with clinicopathological variables and survival Anatomic location of the tumor showed a significant correlation with the overexpression of p16 INK4A . Tissues obtained from the proximal colon showed significantly less expression of p16 INK4A compared to tissues taken from the distal colon and the rectum (p = 0.002). The location of the tumors throughout the colon in correlation to the p16 INK4A expression is shown in Table 5. There was also a trend towards a correlation between p16 INK4A expression level and stage (p = 0.066). Clinical and pathological features were studied between carcinomas with and without expression of p16 INK4A . The results are shown in Table 4, with no significant differences being observed. Discussion Oncogenic papillomaviruses have shown to be involved in benign and malignant lesions of the cervix and other anogenital sites [10]. Although the squamous cell epithelium is the most frequent target site of human papillomavirus (HPV) infection, similar infections have been demonstrated in other neoplasms, including adenocarcinomas of the cervix [3]. The presence of HPV DNA in colonic neoplasms is a conflicting issue. Although earlier studies have failed to detect HPV DNA in colon biopsy samples [13,14], more recent reports have suggested that infection with HPV16 and 18 may be etiologically associated with some cases of CRC [2][3][4][5]7,8,[10][11][12][39][40][41]. In the present study, HPV DNA was found in 14.2% of CRC. Single infections as well as multiple infections were present and all positive samples, except one, contained at least one high-risk HPV type. 
The HPV frequency is lower than that found in previous studies where HPV was detected in 21.9 -97% of CRC samples [2][3][4][5]7,8,[10][11][12][39][40][41]. The discrepant results might be attributed to methodological differences (for instance the use of L1 versus E6/7 primers sets) among the studies, or differences in sensitivity of the methods used for the analysis (for instance due to differences in amplicon length, since it has been shown that the efficiency of the primer pair is inversely correlated to the length of the amplicon in formalin fixed paraffin embedded tissues [29]). In addition, regional variations in the prevalence of HPV infection, which is known to be influenced by the ethnical and geographical origin of the individuals being tested, might also contribute to the differences observed among published studies [2,7]. However, we took the necessary precautions (during microdissections, DNA extractions and PCR reagents preparations) to avoid cross-contamination and the SPF 10 PCR is proven to be a very sensitive HPV detection technique [30,31]. Modes of transmission of HPV infection in the colon region have not been fully resolved; however, anal transmission and an association between sexual behavior and risk for HPV-positive cancers have been suggested [10]. In accordance to Bodaghi et al. [11] and Damin et al [2], there was no significant difference in the distribution of the virus throughout the colon (p = 0.565). Rates of viral detection were similar in tissues taken from the proximal colon, the distal colon or the rectum, suggesting that HPV is not a result of retrograde viral transmission from the anogenital area [2]. One possible hypothesis could be that during a screening colonoscopy, an anal HPV infection might be transported from the anal region throughout the colon [42]. Likewise, it has been shown that HPV DNA can be present on specula, used for taking PAP smears, and autoclave sterilization is the method of choice to eradicate these viruses [41,43]. Transfer by colonoscopy might also explain the lower rate of HPV infection in our study population because screening for CRC is much less common in Belgium than in the US. However, considering that HPV infection is mainly transmitted by cell surface contact, the route of viral transmission to the colon remains to be determined [2]. As seen in most other studies, high-risk HPV type 16 was the most prevalent type in colorectal tissues in this study, followed by high-risk HPV type 18. These types have been reported to suppress tumor suppressor proteins functions and play an important part in carcinogenesis [7]. Low risk types were also detected in CRC but, with one exception, always along with the presence of a high risk type. However, in order to suggest that HPV might be involved in colon cancer carcinogenesis, viral DNA incorporation into the host genome needs to be demonstrated by in situ hybridization. In addition, the presence or absence of HPV in a non-malignant control group and tumor adjacent tissue needs to be investigated in order to determine whether HPV is merely an epiphenomenon in CRC or rather a potential cofactor in the development of the disease [2]. No significant differences in clinical and pathological variables were found between HPV positive and negative colorectal carcinomas. HPV positive patients showed a trend to be younger than HPV negative patients. 
A similar observation has been made in HPV positive oral squamous cell cancer, a disease in which HPV appears to play an etiologic role [44-46]. Other reports in CRC have likewise failed to demonstrate a correlation between the presence of HPV and prognostic factors [2,3,5,7,8,10-12].

In the second part of the study, overexpression of p16 INK4A was investigated as a marker for the presence of an active HPV infection [48]. In this study, we noted that the p16 INK4A protein was expressed in 74% of the colorectal adenocarcinomas, and more than half of these (n = 38) showed a high level of p16 INK4A expression. p16 is a nucleoprotein; the presence of staining in both the nucleus and the cytoplasm supports the finding that the p16 gene is overexpressed, and the change in subcellular location of the overexpressed nucleoprotein might account for its role in CRC carcinogenesis. The mechanism inducing p16 INK4A overexpression is probably distinct from promoter methylation and presumably results from a compensatory response to cell cycle deregulation [48]. CDK4 overexpression could be the initial event, leading to reactive overexpression of p16 INK4A and to disruption of the G1-S transition through pRb phosphorylation [49].

No significant differences in clinical and pathological variables were found between CRC samples expressing p16 INK4A and those not expressing it, except for tumor location. In accordance with others [47,48], p16 INK4A protein expression was seen more often in the distal colon and the rectum. In addition, there was a trend towards an association between strong p16 INK4A expression and stage II and III tumors. The prognostic role of the p16 INK4A protein has been investigated in five studies; three of these noted that p16 INK4A expression was associated with poorer survival [48-50]. In accordance with Norrie et al. [51] and Tada et al. [52], we found no relationship between p16 INK4A expression and patient survival.

Conclusions

In conclusion, the presence of oncogenic HPV DNA in a small cohort of CRC samples, with high-risk HPV 16 as the most prevalent type, was confirmed in the present study. However, in order to suggest that HPV might be involved in colon cancer, incorporation of viral DNA into the host genome needs to be demonstrated by in situ hybridization. Additionally, the presence or absence of HPV in a non-malignant control group and in tumor-adjacent tissue needs to be investigated in order to determine whether HPV is merely an epiphenomenon in CRC or rather a potential cofactor in the development of the disease. Furthermore, contrary to what has been observed in head and neck squamous cell cancer and cancer of the uterine cervix, p16 INK4A does not seem to be a surrogate marker for an active HPV infection in CRC. Therefore, further functional analyses are necessary to elucidate the significance of the presence of HPV in CRC.
2014-10-01T00:00:00.000Z
2010-03-26T00:00:00.000
{ "year": 2010, "sha1": "7bf4b2fbc1a97d3903ede8e79b53b04f91239a0f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/1471-2407-10-117", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7bf4b2fbc1a97d3903ede8e79b53b04f91239a0f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
103104178
pes2o/s2orc
v3-fos-license
Correlation Co-Efficient Studies of Oil, Yield and Related Traits in Bt and Non Bt Hybrids

Introduction

Cotton has held a place of pride among crops from the earliest times. It finds mention in the Rigveda, the oldest scripture of the Hindus, and Manu, the law giver, also referred to it in his Dharmashastra. It was the excellence of Indian cotton fabrics, famed as 'Webs of Woven Wind', which compelled European countries to seek new trade routes with India. Despite the advent of a multitude of other fibres, cotton, the 'white gold', still rules the world of textiles. Although cotton is grown mainly for fibre, it is also ranked as a major oilseed crop in the international market. Of its four major products, i.e. meal, hull, oil and linters, oil is the most important. Besides its commercial importance, the oil is used in the leather industry as a lubricant, and cotton seed oil can also be used for edible purposes after refining. Cotton seed oil is a premium quality vegetable oil, as it contains no cholesterol.

India is addressing the need for more Bt cotton cultivars. These insect-protected cotton varieties contain a naturally occurring substance, the Bacillus thuringiensis (Bt) protein, which has been used as an ingredient in safe and effective biological sprays for more than 50 years. The Bt trait has been successfully transferred into several Indian lines. Extensive and fully replicated field trials of Bt cotton were conducted during the 1998 to 2001 cropping seasons, meeting the government requirements for commercialization. Three Bt cotton cultivars were approved for planting in India in 2002-03. Following the introduction of Bt cotton hybrids, around 44,500 ha were planted with three Bt cotton hybrids in the central and southern zones in the 2002-03 season. This increased to some 1,00,000 ha in 2003-04, and in 2004-05 around four Bt cotton hybrids were planted over 5,00,000 ha by three lakh resource-poor farmers. With the approval of 16 new hybrids from half a dozen companies, including six Bt cotton hybrids for the northern region, Bt cotton planting in the 2005-06 season experienced the highest yearly percentage growth rate, increasing its area by 160 per cent (13 lakh ha). Around 10 lakh farmers elected to plant Bt cotton hybrids in the northern, central and southern cotton growing zones of India, as compared to 3 lakh farmers in the previous year (Anon., 2006).

Knowledge of the inter-character correlations of quantitative characters is useful in designing an effective breeding programme. These correlations provide a reliable measure for differentiating the associations that are vital for breeding from the non-vital ones.
The present investigation was primarily aimed at assessing the inter-character associations between oil content and other seed and fibre properties. Character association was studied in all three experiments, comprising genetic stocks that involved both tetraploid Bt and non-Bt commercial hybrids. The complete set of correlations among seed and oil-related parameters is available for the genetic stock.

Soxhlet method for estimation of oil content: Oil content was estimated by the Soxhlet method as given by Jambunathan et al. (1985), with some modifications. Five grams of cotton seed from each entry were powdered with a pestle and mortar. The cotton seed meal was extracted with petroleum ether for approximately 5 h in a Soxhlet apparatus. The petroleum ether was then evaporated, and the oil content was estimated from the difference in weight and expressed as a percentage. The phenotypic data obtained for oil content by this method were used to calibrate the oil estimates in NIRS.

Correlation coefficient: Phenotypic and genotypic correlation coefficients between the different variables were calculated using the covariance technique, with the analysis of covariance following the method described by Singh and Choudhary (1977), as given below.

Cov xy(g) = (TMSP - EMSP) / r
Cov xy(e) = EMSP
Cov xy(p) = Cov xy(g) + Cov xy(e)
Genotypic correlation: r xy(g) = Cov xy(g) / [V x(g) × V y(g)]^(1/2)
Phenotypic correlation: r xy(p) = Cov xy(p) / [V x(p) × V y(p)]^(1/2)
Environmental correlation: r xy(e) = Cov xy(e) / [V x(e) × V y(e)]^(1/2)

where RMSP = replication mean sum of products, TMSP = treatment mean sum of products, EMSP = error mean sum of products, r = number of replications, Cov xy(g) = genotypic covariance between characters x and y, Cov xy(p) = phenotypic covariance between characters x and y, Cov xy(e) = error covariance between characters x and y, V x(g) and V y(g) = genotypic variances of characters x and y, V x(p) and V y(p) = phenotypic variances of characters x and y, and V x(e) and V y(e) = error variances of characters x and y.

Genotypic (r g), phenotypic (r p) and environmental (r e) correlation coefficients were estimated among those characters for which the variance ratio was significant, from the variance and covariance components, following the method given by Hayes et al. (1955).

Test of significance for correlation: The significance of the phenotypic, genotypic and environmental correlation coefficients was tested against the table value of r at (n-2) degrees of freedom from the Fisher and Yates (1963) tables, where 'n' denotes the total number of entries under study.

Results and Discussion

The oil per cent was significantly and positively associated with seed cotton yield at Nagpur and Dharwad, but only a positive, non-significant association was observed at Bagalkot. Dani (1984a) and Ramalingam (1994) made similar observations.
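To illustrate the covariance-based estimates described in the methods above, the following minimal sketch assumes a balanced randomised block layout with trait values stored as arrays of shape (genotypes, replications); the function names and the oil and yield figures are hypothetical illustrations, not the trial data analysed in this study.

```python
# Minimal sketch: genotypic, phenotypic and environmental correlations from
# analysis of (co)variance components in a balanced randomised block design.
# All trait values below are hypothetical examples.
import numpy as np

def mean_sums_of_products(x, y):
    """Treatment and error mean sums of products (TMSP, EMSP) for traits x, y."""
    t, r = x.shape                                    # genotypes, replications
    sp_total = np.sum((x - x.mean()) * (y - y.mean()))
    sp_treat = r * np.sum((x.mean(axis=1) - x.mean()) * (y.mean(axis=1) - y.mean()))
    sp_rep = t * np.sum((x.mean(axis=0) - x.mean()) * (y.mean(axis=0) - y.mean()))
    sp_error = sp_total - sp_treat - sp_rep
    return sp_treat / (t - 1), sp_error / ((t - 1) * (r - 1))

def correlations(x, y):
    """Return (r_g, r_p, r_e) for two traits measured on the same plots."""
    r = x.shape[1]
    tmsp, emsp = mean_sums_of_products(x, y)
    tms_x, ems_x = mean_sums_of_products(x, x)        # variances as self-covariances
    tms_y, ems_y = mean_sums_of_products(y, y)
    cov_g, vgx, vgy = (tmsp - emsp) / r, (tms_x - ems_x) / r, (tms_y - ems_y) / r
    cov_e, vex, vey = emsp, ems_x, ems_y
    cov_p, vpx, vpy = cov_g + cov_e, vgx + vex, vgy + vey
    return (cov_g / np.sqrt(vgx * vgy),
            cov_p / np.sqrt(vpx * vpy),
            cov_e / np.sqrt(vex * vey))

# Hypothetical oil content (%) and seed cotton yield (kg/ha): 5 genotypes x 3 replications.
oil = np.array([[18.2, 18.6, 18.1],
                [19.5, 19.9, 19.4],
                [17.8, 17.5, 18.0],
                [20.1, 20.4, 19.8],
                [18.9, 19.1, 18.7]])
seed_cotton_yield = np.array([[1450, 1480, 1430],
                              [1620, 1650, 1600],
                              [1380, 1360, 1400],
                              [1700, 1720, 1680],
                              [1550, 1570, 1530]])
r_g, r_p, r_e = correlations(oil, seed_cotton_yield)
print(f"r_g = {r_g:.3f}, r_p = {r_p:.3f}, r_e = {r_e:.3f}")
```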
Plant height showed a positive and significant association with oil per cent at Nagpur and Bagalkot; these results agree completely with Zuquinhao et al. (1995). The number of sympodial branches was positively correlated with oil per cent at the 1 per cent level. Bolls per plant were positively and significantly correlated with oil per cent at Nagpur. Twenty boll weight had a positive and significant association with oil per cent at Nagpur and Bagalkot; similar results were reported by Ramalingam et al. (1994). Seed index showed a positive and significant correlation with oil per cent, which suggests that selection for seed index in turn selects for increased oil content. Similar results were observed by Ramalingam et al. (1994).

Seed cotton yield was positively and significantly correlated with plant height, sympodial branches, bolls per plant, ginning outturn, seed index and oil per cent. Similar observations were reported by Muthu et al. (2004), Nilima et al. (2005), Annapurve et al. (2007) and Ganeshan and Ravindran (2007). Plant height had a significant positive association with the number of sympodial branches and bolls per plant; similar results were reported by Ganeshan and Ravindran (2007).

[Table/figure: entries I-VI for twenty boll weight, ginning outturn (%), seed index (%) and oil content (%).]

Monopodial branches were significantly and positively correlated with bolls per plant at all locations. Boll weight showed a significant positive association with seed index at Nagpur and Dharwad; similar results were obtained by Nilima et al. (2005).

With special reference to oil content, the present study revealed that selecting for higher seed cotton yield, greater plant height, more sympodial branches, higher seed index and higher boll weight leads to increased oil content in the seed. Overall, considering the correlations of seed cotton yield and oil content with the other important traits, the following conclusions can be drawn. Seed cotton yield showed a positive and significant association with oil content at two locations, viz. Nagpur and Dharwad, so selecting for higher seed cotton yield would also help to increase oil content. Oil content showed a positive and significant association with seed index at all three locations, viz. Nagpur, Dharwad and Bagalkot; seed index can therefore be used to select for high oil content. Oil content also exhibited a positive and significant correlation with twenty boll weight and plant height at two locations, viz. Nagpur and Bagalkot. However, oil content showed a negative and significant association with ginning outturn at two locations, viz. Dharwad and Bagalkot. Overall, oil per cent was positively correlated with seed cotton yield, plant height, sympodial branches, bolls per plant, ginning outturn and seed index. The study clearly showed that improving yield and yield-related traits through an appropriate breeding strategy will, in turn, increase oil content, and vice versa.
2019-04-09T13:08:39.116Z
2018-03-20T00:00:00.000
{ "year": 2018, "sha1": "9124297e8a4b4b141faaf46db8ba4f6c3d551ce6", "oa_license": null, "oa_url": "https://www.ijcmas.com/7-3-2018/Nagappa%20Harijan%20and%20B.M.%20Khadi.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "21c26f1225d6d7564c37bf144fb024ac97915fa5", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }