Structural modifications of mitochondria-targeted chlorambucil alter cell death mechanism but preserve MDR evasion. Multidrug resistance (MDR) remains one of the major obstacles in chemotherapy, potentially rendering a multitude of drugs ineffective. Previously, we have demonstrated that mitochondrial targeting of DNA-damaging agents is a promising tool for evading a number of common resistance factors that are present in the nucleus or cytosol. In particular, mitochondria-targeted chlorambucil (mt-Cbl) has increased potency and activity against resistant cancer cells compared to the parent compound chlorambucil (Cbl). However, it was found that, due to its high reactivity, mt-Cbl induces a necrotic type of cell death via rapid nonspecific alkylation of mitochondrial proteins. Here, we demonstrate that by tuning the alkylating activity of mt-Cbl via chemical modification, the rate of generation of protein adducts can be reduced, resulting in a shift of the cell death mechanism from necrosis to a more controlled apoptotic pathway. Moreover, we demonstrate that all of the modified mt-Cbl compounds effectively evade MDR resulting from cytosolic GST upregulation by rapidly accumulating in mitochondria, inducing cell death directly from within. In this study, we systematically elucidated the advantages and limitations of targeting alkylating agents with varying reactivity to mitochondria.
Planning of manipulator joint trajectories by an iterative method SUMMARY Manipulator joint trajectories are planned to optimise an arbitrary cost function subject to physical constraints based on the kinematics and dynamics of the manipulator system. The algorithm presented in this paper is an iterative improvement method that exploits the local controllability of B-splines. It can also be applied to the case in which certain points are specified and the joint trajectories must pass through them. The algorithm is applied to an example of trajectory planning for a manipulator with two links and two degrees of freedom.
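The iterative idea in the summary, that the local support of a B-spline lets one adjust a single control point without reshaping the whole trajectory, can be illustrated with a toy computation. Everything below is a hypothetical sketch, not the paper's algorithm: the cost (integrated squared joint velocity), the coordinate-descent update, and all numeric values are stand-ins chosen for demonstration.

```python
import numpy as np
from scipy.interpolate import BSpline

def make_trajectory(ctrl, t_end=1.0, k=3):
    """Clamped cubic B-spline: passes through the first and last control points."""
    n = len(ctrl)
    knots = np.concatenate([np.zeros(k),
                            np.linspace(0.0, t_end, n - k + 1),
                            np.full(k, t_end)])
    return BSpline(knots, ctrl, k)

def cost(ctrl):
    """Stand-in cost: integrated squared joint velocity (Riemann sum)."""
    spl = make_trajectory(ctrl)
    t = np.linspace(0.0, 1.0, 200)
    v = spl.derivative()(t)
    return float(np.sum(v**2) * (t[1] - t[0]))

# One joint, six control points (rad); endpoints act as fixed via-points.
ctrl = np.array([0.0, 0.8, -0.5, 1.2, 0.3, 1.0])
c0 = cost(ctrl)

# Iterative improvement: perturb one inner control point at a time and
# keep the perturbation only if the cost decreases (coordinate descent).
for _ in range(50):
    for i in range(1, len(ctrl) - 1):
        for step in (0.01, -0.01):
            trial = ctrl.copy()
            trial[i] += step
            if cost(trial) < cost(ctrl):
                ctrl = trial
print(c0, "->", cost(ctrl))
```

Because each B-spline basis function has local support, each trial perturbation only changes the trajectory over a few knot spans, which is what makes the per-control-point search cheap and keeps the specified via-points (here the endpoints) untouched.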
The cytopathology of middle ear effusions (a new technique). This article describes a simple technique for the dissolution of the mucus elements within middle ear effusions and the subsequent separation of the cells contained. After the mucus and mucopolysaccharides within middle ear effusions have been dissolved, the cells can be separated allowing for absolute cell counts with automated counters, smears for examination with light microscopy and differential cell counts, and transmission electron microscopic examination of the cell population. Transmission electron microscopy was utilized in this study to evaluate the amount of cellular distortion and artifact that resulted from the separation of these cells from the middle ear effusions and to evaluate the relative status of preservation of these cells. The ability to preserve, count, and identify the cellular component of middle ear effusions will be of value in attempting to understand the pathogenesis of otitis media with effusion.
Correlation between bacterial population and axillary and plantar bromidrosis: study of 30 patients. Although studies on the chemistry of odors are expanding to identify the chemical structures of odorous substances, there are as yet no universal standards to measure the odor and intensity of bromidrosis. Clinical evaluation can be made by subjective scoring from 0 to 3 prior to prescription of an antiseptic soap. In order to assess the correlation between the intensity of bromidrosis (BI) and bacterial activity, a study was carried out with both clinical and bacterial assessment in thirty patients with axillary or plantar BI. Odor intensity was evaluated by two physicians using a score from 0 to 3 (i.e. absent, minor, moderate, major), while bacterial composition and density were assessed before and after 10 days of hygiene using an antiseptic detergent (trichlorocarbanilide) provided at the first visit. The baseline diphtheroid count was 35 × 10^4/cm2 and the baseline micrococci average was 32 × 10^4/cm2. At the end of the study, a reduction of odor intensity was observed in 20 patients (67%) without any change in sweat production. The clinical improvement correlated with a reduction of both micrococci (70%) and diphtheroids (73%) compared with the initial data. In patients presenting persistent bromidrosis, the bacterial count/cm2 did not decrease significantly and remained above 10^4 diphtheroids/cm2. Thus, this study suggests that body odor may be at least indirectly correlated with microbial counts, with a BI threshold around and above 10^4 bacteria/cm2.
1,2-Diarylimidazoles as inhibitors of cyclooxygenase-2: a quantitative structure-activity relationship study. The cyclooxygenase-2 (COX-2) enzyme inhibition activity of derivatives of 1,2-diarylimidazole is analysed through Fujita-Ban and Hansch approaches. The analyses have helped to ascertain the role of different substituents in explaining the observed inhibitory potency of these analogues. Both approaches reveal that more hydrophobic X-substituents that are present at the 3- and 4-positions of the aryl ring and are also non-hydrogen-acceptor in character improve the inhibitory action of a compound. A smaller substituent, either H or F, is preferred at the 2-X position as it is involved in steric interaction. Likewise, the substituent -NH2 instead of Me at R is advantageous. Further, for a data set of 35 congeners, the selectivity ratio relative to the constitutive COX-1 isozyme is also analysed through the Fujita-Ban approach. The derived contributions of the parent moiety and various substituents have helped to predict the substitution pattern in the design of more effective compounds that were not in the original data set.
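A Hansch-type analysis is, at its core, a multiple linear regression of potency against substituent descriptors. The sketch below illustrates only that mechanic; the descriptor values, activities, and resulting coefficients are fabricated for demonstration and are not the paper's data or parameterisation.

```python
import numpy as np

# Fabricated descriptor table for six hypothetical analogues:
pi    = np.array([0.00, 0.56, 0.71, 0.86, 1.02, 0.14])  # hydrophobicity of the X-substituent
small = np.array([1.0,  1.0,  0.0,  0.0,  0.0,  1.0])   # 1 if H or F at the 2-X position
act   = np.array([6.1,  6.9,  6.3,  6.5,  6.8,  6.4])   # log(1/IC50), made up

# Hansch-style linear model: activity ~ a*pi + b*small + c
X = np.column_stack([pi, small, np.ones_like(pi)])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, act, rcond=None)          # least-squares fit
pred = X @ coef
r2 = 1.0 - np.sum((act - pred)**2) / np.sum((act - act.mean())**2)
print("coefficients:", coef.round(3), "r2:", round(r2, 3))
```

The fitted coefficients play the role of the "derived contributions": a positive coefficient on the hydrophobicity descriptor would correspond to the paper's finding that more hydrophobic 3-/4-substituents improve potency.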
Retraction: Chronic Intoxication with Cobalt following Revision Total Hip Arthroplasty. We describe chronic cobalt intoxication in a male implanted with an uncemented total hip arthroplasty (THA) with a ceramic-on-ceramic bearing. Three years postoperatively the acetabular ceramic liner fractured, necessitating revision; the bearing couple was revised to a metal-on-polyethylene articulation. Twenty months postoperatively, the patient re-presented with a dislocated THA and with symptoms and signs of chronic heavy-metal intoxication, including quadriparesis, hypothyroidism, cardiomyopathy and sensorineural hearing loss. Severe metallosis was discovered at revision. High concentrations of cobalt, chromium and titanium were found in the serum, blood, pericardial exudate, urine and hair. On the basis of this experience the authors recommend always using a ceramic-on-ceramic pairing when revising fractured ceramic bearings.
Circular Housing Retrofit Strategies and Solutions: Towards Modular, Mass-Customised and Cyclable Retrofit Products The building sector consumes 40 % of resources globally, produces 40 % of global waste and 33 % of CO2 emissions. Creating a circular built environment is therefore of paramount importance to a sustainable society. The housing stock can be made more circular through circular retrofitting. However, strategies and solutions integrating circularity within housing retrofit are lacking. This paper focusses on developing a circular housing retrofit strategy and solution for Dutch housing constructed between 1970 and 1990. Through literature study, potential circular retrofit approaches are identified and translated into a general strategy. By developing a concrete retrofit solution, we illustrate how this general strategy can be applied in practice. It is found that in the Dutch context all-in-one sustainable retrofits are difficult to realise. By applying modular (allowing component-by-component retrofit), mass-customisable, and cyclable retrofit products, natural maintenance moments can be employed to gradually create a circular housing stock. As an example of such a product we describe the Circular Kitchen (CIK), which was developed together with industry. The CIK applies a plug-and-play design, separating components based on lifespan. The CIK supply-chain arranges relooping of the CIK in a return-street and return-factory. The CIK business model applies financial arrangements such as lease and sale-with-deposit, motivating the return and re-looping of the CIK after use. In conclusion, the strategy presented in this paper has the potential to support circular housing retrofit in the Dutch context and for housing with similar characteristics. However, development of more circular retrofit products is necessary to create a fully circular housing stock over time. 
Introduction

The building sector consumes 40 % of natural resources globally, produces 40 % of global waste and 33 % of CO2 emissions. The Circular Economy (CE) proposes an alternative to the current linear economy by decoupling economic growth from resource consumption. The CE can be summarised in three principles: preserving and enhancing natural capital by controlling finite stocks and balancing renewable resource flows; optimizing resource yields by circulating products, components, and materials at their highest utility and value at all times, in both technical and biological loops; and fostering system effectiveness by revealing and designing out negative externalities. Due to its high impacts, the transition to a circular built environment is pivotal to achieving a resource-'effective' and sustainable society. The existing housing stock, as an important part of the built environment, can be made more circular through retrofitting. Moreover, the natural retrofit moments can be employed to make the stock circular at all levels: the housing stock, dwelling, components, parts and materials. Strategies and concrete solutions integrating circularity within housing retrofit are still lacking. Therefore, the aim of this paper is to develop a circular housing retrofit strategy and solution, focussing on Dutch housing constructed between 1970 and 1990. This part of the stock constitutes 24 % of Dutch housing and will be in need of retrofitting in the coming decades, which makes it a logical case to focus our efforts on. This stock is characterised by (mostly) low-rise dwellings, diversified designs, fragmented ownership and mixed tenures. Most housing is in a 'decent' state of maintenance with, on average, an energy label D or C. Although the stock is not (yet) in disrepair, there is a need for adaptations and improvements, and there are substantial ambitions to improve the energy efficiency of the stock.
However, the diversity, fragmentation and state of the housing make the commonly applied 'all-in-one' sustainable retrofits difficult to realise. Hence, Meulendijks, Ubink and van der Steeg, and Brinksma propose three requirements for retrofit solutions for the Dutch 70's and 80's housing stock. Retrofit solutions should be able to spread the retrofit investment over multiple retrofit cycles; should accommodate different retrofit needs and practices from professional landlords and private owners through customisation; and should be adaptable to accommodate future changes. We propose to extend the latter requirement so that it does not only include future adaptability in the retrofit solution but requires circularity to be considered as well. Therefore, the retrofit solution should be able to accommodate the loops of the circular economy (i.e., maintenance, re-use, refurbishment, and recycling). To determine key elements for a circular housing retrofit strategy and solution, we analysed existing circular building approaches (section 2). Through literature study and brainstorming we identified circular design strategies and principles. Subsequently, we identified existing building approaches which applied (some of) these circular design strategies and principles. The selected circular building approaches were analysed by identifying which of the circular design strategies and principles were applied. In doing so, the analysis identified gaps in existing approaches, as well as elements which could be applied in the development of a circular retrofit strategy for the Dutch context (section 3). To illustrate and test whether this strategy is achievable, a concrete retrofit solution was developed to the level of a prototype: the Circular Kitchen (CIK) (see section 4). In section 5, we reflect upon the developed strategy and solution, and the conclusions are summarised in section 6.
Analysis of 'circular' building approaches

In this section, we elaborate on the analysis of circular building approaches. The circular design strategies and principles identified through the literature study and brainstorming are included in columns 2 and 3 of Table 1. The strategies and principles were organised into three categories: 'narrowing, slowing or closing resource loops'. Strategies which 'narrow resource loops' aim to reduce resource use; strategies which 'slow resource loops' aim to slow down the flow of resources through extension or intensification of the utilization period of the (building) product; strategies which 'close resource loops' aim to facilitate recycling of materials at the end of life.

The analysis shows that most of the analysed (pre-)circular building approaches remain fragmented: they focus either on narrowing and closing the loop, or on slowing the loop. For example, the circular approaches 2.1-2.3 narrow and close resource loops locally. Ultimately, recycling is important to achieve material circularity. However, no strategies are implemented to slow resource loops at building or component level. Hence, premature obsolescence is not prevented, and material depletion, emissions and waste generation are not fully minimized. Similarly, focussing only on slowing the loop will still result in material depletion, emissions and waste, just at a slower pace.

Table 2. Description of (pre-)circular building approaches (name, origin, description):

- Avant-gardist designs of ever-evolving cities applying permanent mega-structures and interchangeable infill.
- Stichting Architecten Research (SAR) & Open Building (1961): a reaction to the inability of residents to influence the post-war built environment. The built environment is separated into layers (e.g., tissue, support, infill) to allow for user customisation and future adaptations.
- Lean construction (1993): a reaction to the economic and environmental inefficiency of traditional construction; application of lean manufacturing principles to optimise product and process and so reduce material and energy use.
- Shearing layers: building on the theories of ecologists and system theorists. To improve adaptability and prevent premature obsolescence, the building is divided into six layers based on expected lifespan.
- Industrial, flexible and demountable building (IFD, 1999): building on SOB principles, IFD aimed to better fulfil clients' demands in a construction project. IFD unites industrialisation of the building process, flexibility (i.e., customisation), and demountability to allow future changes.
- Slimbouwen: a reaction to the economic and environmental inefficiency of traditional construction; a strategy separating the building into layers (especially decoupling piping) to improve adaptability and reduce material use.
- Conceptual building: a reaction to the inefficiency (cost and process) of traditional construction and to a supply-oriented industry unable to customise solutions; a client-friendly construction process in which buildings are constructed with standardised, customisable building components.
- Mass-customisation in dwelling construction: uniting principles of mass production and customisation in construction; open- and closed-source concept dwellings or components which are (to an extent) standardised, customisable and mass-producible.
- LEGOlisation in construction: a reaction to the traditional, project-based construction industry. Buildings are constructed (and renovated) with customisable, standardised, prefabricated, demountable components, subdivided into sub-components, parts, etc. LEGOlisation can improve and optimise the building process, increase adaptability and reduce material use.
- Circular recycling in housing demolition: instead of full demolition, housing is disassembled (as much as possible) with the aim of re-using components and materials locally.
- Circular recycling in housing renewal and renovation: focusses on local re-use and recycling of components and materials in housing renovation and renewal (i.e., housing demolition and new build). A figurative 'fence' is placed around the site: what is demolished is re-used on site. Next to cycling building material streams locally, the approach is often combined with reduction and local self-sufficiency of other material and energy streams (e.g., water, food, and energy).
- Bio-based construction systems: housing construction and retrofit systems which reduce environmental impact and facilitate closing the loop by applying bio-based materials. In some cases, the systems are also modular, standardised and (to some extent) adaptable to future changes.
- Mass-customisable, 'cyclable' building systems: standardised building systems which can be customised to fit the wishes of the client. The system applies circular materials to narrow and close the loop of the building and its materials. Modularity is applied to facilitate fast construction rather than to increase future adaptability.
- Modular, mass-customisable and 'cyclable' building systems: highly modular building systems which integrate mass-customisation and circular design principles to narrow, slow and close the loop of the building, (sub)components and materials.

From all the approaches, the 'modular, mass-customisable, 'cyclable' building system' approach (2.7) integrates by far the most strategies to narrow, slow and close the loop. However, none of the analysed approaches has yet applied all principles. The analysed approaches do provide useful elements to develop a circular retrofit strategy and solution for the Dutch context: all of the analysed cases provide concrete examples of how circular design principles can be integrated in retrofit solutions.
In particular, the 'Bilt House' and the 'Circle House', although new-built systems, provide convincing approaches. They differ from other cases in the level at which standardisation and modularity are achieved, namely at sub-component level. This seems to provide the most potential for standardisation, customisation, and adaptability.

Circular housing retrofit strategy: modular, mass-customisable and 'cyclable' retrofit products

By combining and specifying elements of circular building approaches in synergy with the requirements identified in the introduction, we developed a circular retrofit strategy for the Dutch context. This strategy proposes that the housing stock is retrofitted with products which are modular, mass-customisable and 'cyclable' (see Figure 1). A modular retrofit solution, as opposed to 'all-in-one' retrofit, can facilitate component-by-component retrofit. Buildings consist of many components, such as installations, kitchens and facades, which could be replaced with circular retrofit products to gradually improve and create a circular housing stock. Moreover, modularity allows the retrofit investment to be spread over multiple retrofit cycles. This provides an answer to the financial feasibility challenges posed by fragmented mixed ownership and the 'minor improvements' needed in the stock. A retrofit solution suitable for 'mass-customisation' combines the advantages of mass and industrial production with the advantages of product customisation. Mass-customisation can accommodate the different retrofit needs of professional landlords and private owners, increase affordability, and synergise with circular design principles such as improving product quality, product and (sub)component standardisation, and offering (update) choices to users. A 'cyclable' retrofit solution is designed, applying circular design strategies and principles, to integrally narrow, slow and close the loops at building, building component, part and material level.
A circular (technical) design requires an integral approach to ensure the design can be, and is, used circularly along and beyond its life cycle [9]. In an integral design, a technical model, business model, and industrial model are developed in cohesion with each other. This means that for the modular, mass-customisable and 'cyclable' retrofit products, a supporting business model is needed which incentivises the narrowing, slowing and closing of the loops. New contract models based on 'product service systems', such as retrofit product lease, sale-with-take-back after use, sale-with-buy-back after use, and contracts with service and updates included, can provide an interesting value proposition for all involved stakeholders. This includes a similar or lower Total Cost of Ownership (TCO) for housing owners and tenants, more customisation options and future adaptability for users, a steadier revenue stream for manufacturers and (service) providers of retrofit products, long-term partnerships with clients, and a more sustainable product. Similarly, a supporting supply chain model is needed which organises the narrowing, slowing and closing loop activities. By (re)forming partnerships, the needed (loop) activities, and the facilities in which these take place, can be determined.

A circular housing retrofit solution: The Circular Kitchen

To illustrate and test whether the proposed strategy is achievable, an exemplary modular, 'mass-customisable', and 'cyclable' retrofit product, the Circular Kitchen (CIK), was developed in co-creation with TU Delft, AMS-institute, housing associations (as initial target group) and industry partners. The CIK was developed 'integrally', including not only the technical model (design), but also the supply chain and business model. The CIK applies a modular design which facilitates various circular loops by separating parts based on lifespan (see Figure 2).
The kitchen consists of a docking station into which kitchen modules can be easily plugged in and out, allowing for customisation and future changes in lay-out. The kitchen modules themselves are also divided into a long-life frame to which 'module infill' (e.g., appliances and closet interiors) and 'style packages' (e.g., front, countertop, handles) can be easily attached using click-on connections. To narrow the loop, the CIK minimises material use by separating the constructive 'frame' and the 'style package'. As the panels of the style package are optional and thinner (non-constructive), material use is reduced. Furthermore, the choice of materials for the kitchen, a low-impact, formaldehyde-free plywood with separable HPL coating, reduces the environmental impact and facilitates refurbishment and recycling. The supporting business model of the CIK is separated into a business-to-business (B2B) and a business-to-consumer (B2C) side. The kitchen producer sells the docking station and modules directly to housing companies with a take-back guarantee and maintenance subscription. This arrangement offers a clear incentive for the manufacturer to make products which are easy to repair and to give a second life, or more. A dealer offers extra kitchen modules and style packages to tenants through a variety of financial arrangements that motivate returning the product after use, such as lease and sale-with-deposit. After use, products are collected in a local 'Return-Street' and are sorted to be traded, resold, lightly refurbished or sent back to the kitchen producer. Products that come back to the producer are sorted in their 'Return-Factory' to be refurbished, remanufactured or recycled. The design of the CIK was validated with a preliminary LCA (Life Cycle Analysis), a material consumption analysis, and a TCO (Total Cost of Ownership) analysis. The TCO analysis showed that the CIK could have a slightly lower TCO than the regular kitchen, due to the design based on lifespan.
The material consumption analysis showed that, compared to the regular kitchen, the CIK could reduce material input by 25 % or more. The LCA showed that the CIK reduces CO2-eq emissions by 75 % and eco-costs by 50 %. The CIK was tested for economic viability with housing associations, industry, and users. A prototype has been developed for further testing and refinement (see Figure 3).

Discussion

The developed CIK provides a concrete example of a modular, mass-customisable, 'cyclable' retrofit product. Through its preliminary validation, the circular retrofit strategy presented in this paper has shown its potential to support circular housing retrofit in practice, both in the Dutch context and for housing elsewhere with similar characteristics. However, several limitations should be noted. First, the selection of (pre-)circular building approaches which we analysed, although extensive, was not exhaustive; other (pre-)circular approaches could provide valuable insights. Furthermore, a similar analysis can be made for the supporting industrial and business models. Also, future research is needed to refine and validate the developed strategy and solution. More retrofit products would need to be developed and tested (through implementation in demonstration projects) to validate the proposed strategy. To support the refinement of the CIK and to support industry in developing other circular retrofit products, a circular assessment method is needed. The assessment method should help select the most circular design variant in terms of design value, environmental impact, material consumption, and Total Costs of Ownership/Use. Further research can contribute to the development of such assessment tool(s).

Conclusion

The goal of this paper was to develop a circular housing retrofit strategy and solution, focusing on Dutch housing constructed between 1970 and 1990.
It was found that in this context 'all-in-one' sustainable retrofits are difficult to realise due to the fragmented ownership, the state of the housing stock and diversified dwelling designs. An alternative circular retrofit strategy was developed which applies modular (allowing component-by-component retrofit), 'mass-customisable', and 'cyclable' retrofit products, allowing natural maintenance moments to be employed to gradually create a circular housing stock. As an example and test, the Circular Kitchen (CIK) was described. The strategy presented in this paper has the potential to support circular housing retrofit in practice, both in the Dutch context and for housing elsewhere with similar characteristics. However, development and testing of (more) circular retrofit products is necessary to create a fully circular housing stock over time.
The clinical course of HIV infection was analysed in a group of homosexual patients (n = 76, 72%) compared to intravenous drug abusers (IVDA, n = 30, 28%) in a retrospective cross-sectional study. The mean age of homosexual patients was 37.5 years, compared to 28 years for IVDA. The following diseases were found significantly more frequently in homosexual patients compared to IVDA: Pneumocystis carinii pneumonia (PCP) 17.1% vs. 0% (p < 0.05); Kaposi's sarcoma 16% vs. 0% (p < 0.05); diarrhoea 47.4% vs. 23.3% (p < 0.05); oral candidiasis 51.3% vs. 23.3% (p < 0.01); non-specific pneumonia of bacterial aetiology or due to unknown organisms 30% vs. 0% (p < 0.001); and seborrhoeic dermatitis 13.2% vs. 0% (p < 0.05). In contrast, viral hepatitis, non-specific abscesses and gonorrhoea were seen significantly more often in IVDA. The data clearly show that the spectra of HIV-associated diseases and HIV-unconnected diseases are significantly different in the two main groups. A risk-oriented preventive prophylaxis of HIV-related diseases and other infections is therefore required for each of these groups.
Area functionals in plane grid generation II The area functional has played an important role in discrete variational grid generation. When minimized, it is expected to lead to a convex grid; however, this goal is achieved only for simple regions. In the first part of this work a new area functional was introduced; its minimization over an irregular plane region produces a convex quadrilateral grid. In this second part we provide theoretical foundations for our algorithm. Structured quadrilateral grid generation is useful in the numerical solution of partial differential equations using finite-difference methods. All the quadrilaterals must be convex, and smoothness is a desired property. In the discrete variational approach, a function of the inner points is designed and its minimum is expected to be attained in a grid with good geometrical properties; a large-scale optimization algorithm is used. Functions of this kind are called discrete functionals. The first studies using this approach were done by Castillo and Steinberg [4], who introduced the Length and Area functionals. Several works have been developed on this topic, among them those of Barrera, Pérez and Castellanos [1,2,3]. The methods developed by them failed to obtain convexity when the region was not simple. Tinoco and Barrera [9,10] introduced a new family of functionals, named Adaptive Smoothness Functionals, whose optimization produces smooth and convex grids over general plane regions, regardless of whether the initial grid is convex or not. The authors introduced a procedure to reduce an effect that is inherent to the smoothness functionals, namely the appearance of some cells with large areas. Theoretical results on the effectiveness of this method were presented. In a recent paper [11], convex and smooth grids with good area control were obtained via the optimization of a new area functional. Experimental results were presented showing the performance of the method.
In this paper theoretical foundations are given for that procedure. For the sake of clarity most of the previous work is reproduced here. The following material is organized as follows. Section 1 contains notation and definitions. Section 2 introduces the adaptive area functional, and our
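The discrete variational idea described above can be illustrated with a toy computation. The functional below (the sum of squared signed triangle areas per quadrilateral cell, minimised over the inner nodes by numerical gradient descent) is a simple stand-in chosen for illustration, not the authors' area functional; the grid, perturbation, and step size are likewise arbitrary.

```python
import numpy as np

def tri_area(a, b, c):
    """Signed area of triangle (a, b, c); positive if counter-clockwise."""
    return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

def functional(x):
    """Sum of squared signed triangle areas over all quadrilateral cells."""
    f = 0.0
    for i in range(x.shape[0] - 1):
        for j in range(x.shape[1] - 1):
            a, b, c, d = x[i, j], x[i+1, j], x[i+1, j+1], x[i, j+1]
            f += tri_area(a, b, c)**2 + tri_area(a, c, d)**2
    return f

# 5x5 grid on the unit square with perturbed inner nodes.
n = 5
u = np.linspace(0.0, 1.0, n)
x = np.stack(np.meshgrid(u, u, indexing='ij'), axis=-1)
rng = np.random.default_rng(0)
x[1:-1, 1:-1] += rng.uniform(-0.05, 0.05, size=(n - 2, n - 2, 2))
f0 = functional(x)

# Gradient descent over the inner nodes only (boundary nodes stay fixed);
# the gradient is approximated by forward differences.
h, step = 1e-6, 0.5
for _ in range(100):
    base = functional(x)
    g = np.zeros_like(x)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            for k in range(2):
                xp = x.copy()
                xp[i, j, k] += h
                g[i, j, k] = (functional(xp) - base) / h
    x -= step * g
print(f0, "->", functional(x))
```

Because the total signed area is fixed by the boundary, minimising the sum of squares pushes the cells towards equal positive areas, a crude analogue of the area control and convexity that the functionals discussed above are designed to achieve rigorously.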
Two Cases of Proximal Subungual Onychomycosis Caused by Trichophyton rubrum in HIV-negative Patients During Treatment with TNF-α Inhibitors Combined with Methotrexate. Proximal subungual onychomycosis (PSO) is a rare subtype of onychomycosis with a clinical presentation characterized by proximal leukonychia in the lunular area of the nail. PSO is associated with immunosuppression and is regarded as a sign of Human Immunodeficiency Virus (HIV) infection when caused by Trichophyton (T.) rubrum. We present two cases of PSO caused by T. rubrum that developed during treatment with TNF-α inhibitors combined with methotrexate (MTX).
Liver lipid content in broilers as affected by time without feed or feed and water. The purpose of this study was to determine if length of time of feed withdrawal of broilers prior to slaughter could affect the lipid content of their livers. Seven-week-old male broilers were allocated to three treatments: 1) no feed and water, 2) no feed, and 3) feed and water ad libitum. Those in the first treatment were held in plastic coops and those in the latter two treatments were kept in floor pens. Eight birds were randomly sampled initially and eight birds from each treatment at 4, 8, 12, 16, and 24 hr after the start of the study. The birds were weighed, killed, and the livers removed and analyzed for lipid content. The regression slopes of the two treatments without feed for body weight, liver weight, liver weight per unit body weight, and liver fat per unit body weight were significantly different from the control treatment. The slopes for liver fat were not significantly different among treatments. No obvious differences in the gross appearance of the livers were detected. The occasional problems with fatty livers in commercial broilers apparently cannot be accounted for by the length of time of feed withdrawal before slaughter.
Pilates and dance for breast cancer patients undergoing treatment: study protocol for a randomized clinical trial (MoveMama study). Background: Breast cancer is a global public health issue, and the side effects of clinical treatment can reduce the quality of life of these women. A healthy lifestyle is therefore essential to minimize the physical and psychological side effects of treatment. Physical activity has several benefits for women with breast cancer, and Pilates solo and belly dance can be enjoyable types of physical activity for women undergoing clinical treatment. The purpose of the study will be to provide a Pilates solo and a belly dance protocol (3x/week for 16 weeks) for women undergoing breast cancer treatment and to compare their effects with those of a control group. Methods: The participants will be allocated to either an intervention arm (Pilates solo or belly dance classes 3x/week for 16 weeks) or a control group (receipt of a booklet on physical activity for breast cancer patients and maintenance of the habitual physical activity routine). The Pilates solo and belly dance classes will be divided into three stages: warm-up and stretching, the main stage, and relaxation. Measurements of study outcomes will take place at baseline, post-intervention, and at 6, 12 and 24 months (maintenance period).
Data collection for both groups will consist of a questionnaire and physical tests covering general and clinical information. The primary outcome will be quality of life (EORTC QLQ-C30 and BR23). Secondary outcomes will be physical aspects, namely cardiorespiratory fitness (6-minute walk test and cycle ergometer), lymphedema (sum of arm circumferences), physical activity (IPAQ short version), disabilities of the arm (DASH), range of motion (goniometer test), strength (dynamometer test) and flexibility (sit-and-reach test), and psychological aspects, namely depressive symptoms (Beck Inventory), body image (Body Image After Breast Cancer Questionnaire), self-esteem (Rosenberg scale), fatigue (FACT-F), pain (VAS), sexual function (FSFI) and sleep quality (Pittsburgh Sleep Quality Index). Discussion: In view of the high prevalence of breast cancer among women, the implementation of a specific Pilates solo and belly dance protocol for patients with breast cancer is important, considering the need to improve the quality of life and the physical and psychological aspects of their lives. Pilates solo and belly dance are two kinds of physical activity that involve mental and body concentration, music, upper limb movements, femininity, and social involvement. An intervention with these two physical activities could offer a choice of supportive care to women undergoing breast cancer treatment to improve quality of life and physical and psychological aspects. https://clinicaltrials.gov/ct2/show/NCT03194997 Background Cancer has been considered a global public health issue. Among the different types of cancer, breast cancer is the most common among women worldwide. In Brazil, breast cancer is also the most common type of cancer among women. Following the many advances in breast cancer treatment, the five-year survival rate in Brazil has increased from 78% to 87% over the past years.
Despite the increase in the survival rate, breast cancer is a significant event in the patient's life because of the serious side effects of clinical treatment, which compromise functional capacity and directly affect the patient's quality of life. Living with these symptoms can result in emotional and physical exhaustion for these women, so a healthy lifestyle that involves good nutrition and regular physical activity is essential to minimize the psychological and physical side effects of treatment. In this context, physical activity after the diagnosis of breast cancer, besides being a protective factor, is also an adjunct to clinical treatment, minimizing side effects and improving the patients' recovery. The American College of Sports Medicine (ACSM) recommends at least 150 minutes of moderate physical activity or 75 minutes of vigorous activity per week for patients with cancer. Resistance training twice a week is also recommended to improve general physical health. A meta-analysis of 33 clinical trials on the benefits of physical exercise for the psychological and physical aspects of women after breast cancer recommended physical activity at all treatment stages. Among the types of physical activity that can support the healthcare of women with breast cancer, exercise that involves mind and body, such as Pilates and dance, can be beneficial. These two activities are pleasant, can promote emotional connections, and can be considered moderate physical activity according to the ACSM recommendation. Pilates solo includes resistance and stretching exercises synchronized with breathing, and it respects the principles of control, precision, centering, fluidity of movement and concentration.
It promotes physical benefits for patients regarding functional capacity and muscle strength, and most exercises are performed in dorsal decubitus with control of speed, precision and movement quality, promoting relaxation of the body. These aspects are affected by breast cancer treatments, and therefore their recovery becomes essential. Dance, accompanied by music, promotes movements with awareness of the body's rhythms. Belly dance, specifically, is directed only at women and is considered a form of exercise that associates body and mind through body movements, especially involving the upper limbs, performed to the sound of traditional Arabic music. Because it is a dance that involves the worship of the earth and the woman's uterus, as well as feminine sensuality, it can help to rescue femininity, softness and beauty, exploring the self-confidence and self-esteem of patients. It is a modality of upper limb movement, controlling the arms using veils, tambourines and vessels, and in this way it can promote physical benefits, considering the consequences of the surgery and treatment these patients go through. Both Pilates solo and dance have been the target of studies investigating physical exercise in patients after the diagnosis of breast cancer. Clinical trials addressing the Pilates solo method demonstrate its benefits in several aspects, such as improving quality of life, functional capacity and depressive symptoms, and benefits in muscle strength, pain and upper limb functionality after eight weeks of treatment, as well as improvement in external rotation and shoulder abduction in patients submitted to axillary dissection and improvement in shoulder range of motion, quality of life, body image and mood after 12 weeks of intervention. None of these studies had published protocols for Pilates solo methods for women with breast cancer.
There are several studies in the literature involving the effects of dance in breast cancer [15,17,22,26]. However, published protocols for this population have not been identified; only two of these studies are characterized as randomized controlled trials. The modalities investigated include specific dance therapy methods; classical ballet and jazz; traditional Greek dance associated with muscular strength training of the upper limbs; and the practice of circular dance and ballroom dance for couples. Belly dance was also investigated in a pilot study by our research group, which identified benefits in depressive symptoms, fatigue and quality of life of women with breast cancer undergoing treatment and after the treatment stage. Thus, it is important to implement a specific dance and Pilates solo protocol for patients with breast cancer, since these activities have already been positively correlated with the health of women after diagnosis. For this study, belly dance was chosen as the dance modality included in the protocol, considering the necessity of preserving the femininity of women during the disease. Belly dancing can also address the physical and psychological needs of patients. Furthermore, this type of dance is a form of physical activity that associates body and mind through movement, particularly involving the upper limbs, to the sound of traditional Arabic music. This type of dance can also enhance the emotional aspects of women after the diagnosis of breast cancer, since this practice involves expressive movements that facilitate the preservation of femininity, softness, beauty, trust, and security. Pilates solo was chosen because it favors lymphatic and blood drainage, improves posture, intensifies flexibility, and increases range of motion and muscular strength.
When breathing exercises are added, the proposed exercises stimulate the thoracic lymphatic system and thus can promote a reduction in lymphedema, which improves muscle function and consequently quality of life. This study protocol describes a randomized controlled trial of Pilates solo and belly dance (3x/week) for women after the diagnosis of breast cancer and compares their effects with a group without intervention. The hypothesis is that the Pilates solo and belly dance protocol will promote improvement in the primary (quality of life) and secondary (psychological and physical) outcomes of women after the diagnosis of breast cancer, providing a beneficial activity option for women with breast cancer. Our second hypothesis is that Pilates will produce greater improvements in the physical variables, while belly dance will improve the psychological variables. Study design This is a single-center, prospective, three-arm randomized clinical trial to assess the effects of Pilates solo and belly dance on the primary outcome, quality of life, and on secondary outcomes: physical aspects, such as cardiorespiratory fitness, lymphedema, physical activity, disabilities of the arm, range of motion, strength and flexibility; and psychological aspects, such as depressive symptoms, body image, self-esteem, fatigue, pain, sexual function and sleep quality of women undergoing clinical treatment for breast cancer. Participants will be randomized into either a Pilates solo intervention group, a belly dance intervention group or a control group. This study was conducted according to the SPIRIT 2013 Checklist: recommended items to address in a clinical trial protocol and related documents (Supplementary material). Ethical approval This study will be conducted in compliance with the Declaration of Helsinki. The trial participants, trial registries and the journal to which the study is submitted will also be informed.
Participants The study will be conducted in the city of Florianopolis, State of Santa Catarina, Brazil. The participants will be women who were diagnosed with breast cancer and who will be undergoing treatment at the Oncology Research Center (CEPON) at the time of data collection. The group will receive an explanation of the stages of the study and, after giving consent to participate, will sign an informed consent form and then be provided with an initial paper questionnaire for data collection. Eligibility criteria The inclusion criteria comprise: women aged 18 years or older; clinical stage 0 to III breast cancer; being in adjuvant treatment with hormone therapy at CEPON at any point of the treatment cycle; and release for the practice of physical activity from the responsible oncologist or the Physical Therapy sector of CEPON. Exclusion criteria include the diagnosis of an orthopedic or neurological limitation that prevents the practice of physical activity, such as Parkinson's disease, Alzheimer's disease, or use of a wheelchair. Sample size To calculate the sample size, the method of distinguishing between means was used: n = (zα + zβ)² · σ² / d². The values of alpha (0.05) and power (0.80) were adopted, which according to the Gauss curve table correspond to z-values of 1.64 and 0.84, respectively. The difference between the means was obtained from the pilot study; the variable considered for the calculation was quality of life, for which the average of the differences between all the scales in the pre and post periods was used. This expected difference was 6.15 ± 9.4, and the expected variance (σ²) was 89.49. At the end of the analysis, after the inclusion of a 30% margin for sample loss, a sample of 19 patients per group was obtained.
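As a cross-check, the sample-size arithmetic above can be reproduced in a short script (a minimal sketch; the z-values, pilot difference and variance are the ones quoted in the text, and the variable names are ours):

```python
import math

# Difference-of-means sample-size formula used in the protocol:
#   n = (z_alpha + z_beta)^2 * sigma^2 / d^2
z_alpha, z_beta = 1.64, 0.84   # alpha = 0.05, power = 0.80
d = 6.15                       # expected quality-of-life difference (pilot study)
variance = 89.49               # expected variance, sigma^2

n = (z_alpha + z_beta) ** 2 * variance / d ** 2   # patients per group, unadjusted
n_with_loss = math.ceil(n * 1.3)                  # add a 30% margin for sample loss

print(f"n = {n:.1f}, with 30% loss margin: {n_with_loss}")
```

Run as written, this reproduces the 19 patients per group stated in the protocol (about 14.6 before the loss margin).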
Randomization and blinding process The randomization of the sample will be performed by one of the researchers, who will have access to a list of patients with breast cancer (stage 0 to III) who were in adjuvant treatment with hormone therapy at CEPON in the past three years, with the intention of achieving adequate participant enrolment to reach the target sample size. From this list, randomization will be conducted via a website (http://www.randomization.com), which will determine the allocation of patients to the three groups: group A, intervention with belly dance; group B, intervention with Pilates solo; and group C, control group, which will be requested to maintain its routine activities. The randomization will be stratified by age, dividing the patients between those younger and those older than 60 years of age. The data from the patients will be kept only with the principal researcher to protect confidentiality before, during, and after the trial. Since the protocol is difficult to blind for the subjects and the instructors, as there is no proper way to sham physical exercise, all the data analysis will be performed by an external researcher. In this way, at least the data analysis will not suffer interference. Pilates solo intervention The women allocated to this group will participate in the Pilates solo protocol. The 16-week protocol will be implemented in 60-minute Pilates solo classes, three afternoons per week, under the supervision of an exercise science professional and a physiotherapist. The Pilates protocol will be divided into three stages: Warm-up and stretching: including breathing, imprint and release, hip release, spinal rotation, cat stretch, hip rolls, scapula isolation, arm circles, head nods, and elevation and depression of the scapulae during warm-up in all sessions. The main stage: a brief explanation of the purpose of the class will be provided and the exercises will take place as detailed in Table 2.
To increase the load during the protocol, TheraBand and toning ball exercises will be added at the 10th session; at the 20th session, arm exercises will be added; and from the 24th session, the spinal rotation exercise will be performed with a 1-kg weight. Exercises will be performed according to each patient's ability (principle of sports training) to avoid pain (during and after exercising) and embarrassment associated with possible physical or even psychological and emotional difficulties. Relaxation: for this stage of the class, the patients will be invited to sit on the ball, perform a forward spine stretch on the ball, perform self-stretching of the cervical muscles on the ball (upper trapezius and scalene muscles), and carry out active mobilization of the cervical spine. At the end of each class, a brief discussion will be held on the women's perceptions regarding the objectives discussed at the beginning of class and whether they were achieved; these data will be recorded by a third researcher to identify whether the participants enjoyed the class and felt that they achieved its objectives. Belly dance intervention Women allocated to this group will participate in the belly dance protocol. The 16-week protocol will be implemented in 60-minute belly dance classes, three afternoons a week, under the supervision of an exercise science professional and a physiotherapist. The classes will be divided into three stages: Warm-up and stretching: the beginning of the class will include songs with up to 80 beats per minute (bpm), identified as slow pace. The sequence of movements for this class stage will cover large movements for specific joints, including flexion, extension, abduction, adduction and rotation, initiated by the upper body and progressing to the lower limbs, lasting 10 minutes. The main stage: a brief explanation of the purpose of the class will be provided (i.e.
the theory of dance or the specific step to be developed), followed by the practical part of the technical learning. The aim of this part will be for participants to learn the movements of the belly dance technique, to stimulate motor coordination, rhythm, and body awareness, and to improve aspects of flexibility and range of motion (ROM) of the upper limbs. The practice of the movements will be explored using individual, pair, or group dynamics, involving movements corresponding to the rhythm of the music or the rhythm stipulated for the women. The participants will have the artistic freedom to create their own pattern of movement based on the belly dance technique, while respecting their own body awareness and allowing the expression of feelings. The progression of the belly dance technique will be applied as outlined in Table 1. For this part of the classes, medium-paced music with up to 120 bpm will be used, as well as fast-paced music with up to 150 bpm. This part of the class will have an average duration of 40 minutes. Relaxation: this stage will be developed from slow-moving practices, with music up to 80 bpm, usually the same songs used in the initial warm-up and stretching. Allowing heart rate normalization, this part will last 10 minutes. At the end of each class, a brief discussion will be held on the women's perceptions regarding the objectives discussed at the beginning of class and whether they were achieved; these data will be recorded by a third researcher to identify whether the participants enjoyed the class and felt that they achieved its objectives. Verification of the songs' rhythm was performed by measuring the beats per minute (bpm), according to the ballroom dance protocol used in the study by Braga et al. The songs will be categorized into groups: slow (up to 80 bpm), medium (up to 120 bpm), and fast (up to 150 bpm). The tempo was measured using the bpm Detector Pro application.
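The tempo bands used to classify the songs can be expressed as a small helper (a sketch; the function name is ours, the cut-offs are the ones listed above):

```python
def song_pace(bpm: int) -> str:
    """Classify a song by tempo using the protocol's cut-offs:
    slow (up to 80 bpm), medium (up to 120 bpm), fast (up to 150 bpm)."""
    if bpm <= 80:
        return "slow"
    if bpm <= 120:
        return "medium"
    if bpm <= 150:
        return "fast"
    raise ValueError(f"{bpm} bpm exceeds the protocol's fastest band")

print(song_pace(72), song_pace(110), song_pace(140))  # slow medium fast
```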
Safety and intensity To control the intensity of the Pilates and belly dance protocols, ensuring that all patients experience the same intensity of intervention, and to promote the safety of physical activity practice in these breast cancer patients, heart rate (HR) will be monitored in every session using a Polar Pro Trainer 5. HR values will be checked at four moments of the class: after the beginning of the class, after the warm-up and stretching, after the main stage, and at the end of the class. The safety of the intervention will be assessed every session, according to HR and the patient's own report. If participants experience an adverse event, it will be brought immediately to the attention of the researchers. Adverse events will be evaluated by the researchers, who will make any decision to stop the study early; the researchers will take responsibility and provide all care to patients included in the study. Control group Women allocated to this group will be asked to continue their routine activities during the 16-week intervention period. They will be contacted every two months by telephone. Three meetings will also be offered to this group during the 16 weeks of intervention: the first will focus on stretching exercises to be performed at home on a regular basis, the second will be about self-esteem, and the last will be about prevention of lymphedema. These meetings will occur with the purpose of promoting an environment where these women can talk and share their experiences with other women with breast cancer, and of making sure that they also receive health education, as they will not receive the exercise intervention in the first phase of the study. These meetings were a requirement of the Ethics Committee of CEPON, the hospital in Brazil where the study will take place, to ensure that the control group also receives possible benefits from the study.
Likewise, this strategy can improve the adherence of the control group, as the participants will feel part of a group and form social bonds. Both the experimental groups (Pilates solo and belly dance) and the control group will receive, after the 16 weeks of intervention, an explanatory booklet on the benefits of practicing physical activity after a breast cancer diagnosis, as well as instructions on the prevention of lymphedema. As a strategy to improve adherence to the trial, all patients will be invited to social meetings, groups on social media, and thematic classes on specific dates (e.g., Carnival, Easter, Halloween, Christmas), and will receive a T-shirt from the project at the first meeting. Additionally, subjects who miss a class will be contacted directly via SMS and phone calls. These activities are planned to make the subjects feel familiar with the trial environment. For the 2-year follow-up, the intervention and control groups will be invited to a physical activity program organized by the university. They will also be contacted through social media and SMS once a month to motivate the practice of physical activity and to remind them of future data collections. After the end of the study, besides publication in academic journals, the main results will be presented at the hospital in Brazil and shared with the patients in brochure format. Other outcome measures The secondary outcomes to be evaluated are the physical and psychological variables associated with quality of life. The physical outcomes are cardiorespiratory fitness, functional capacity, lymphedema, disabilities of the arm, range of motion, strength, flexibility and physical activity. The psychological outcomes are depressive symptoms, pain, fatigue, body image, self-esteem, sexual function and sleep quality.
Cardiorespiratory fitness To assess cardiorespiratory fitness, a submaximal incremental exercise test (up to 85% of maximum heart rate, HRmax) will be performed using a cycle ergometer (Lode Excalibur Sport, Groningen, the Netherlands). The protocol will start with a power of 20 W, and every 3 minutes 15 W will be added, until the patient reaches 85% of her HRmax, which will be estimated by the equation HRmax = 207 − 0.7 × age. In the initial three minutes of the test, the patient will be asked to remain at rest, accommodated on the cycle ergometer, to identify the values of resting heart rate and oxygen consumption. Patients will be asked to keep the cadence of the cycle ergometer above 60 revolutions per minute (RPM). Expiratory gases and flow volume will be collected during the test and analyzed by a calibrated metabolic system (Quark CPET Ergo, Cosmed, Rome, Italy) to provide measurements of oxygen consumption. The heart rate will be monitored by a Polar heart rate monitor, observed within the first three minutes of the test and at the end of each minute of the three-minute test stages. Also, every three minutes, the patient will be questioned about her perception of the exercise, using the Borg scale of perceived exertion (6-20 points). This scale ranges from 6 to 20 points, where 6 corresponds to the perception of "very easy" and 20 to "exhaustive". The 6-minute walk test measures the distance a person can travel on a flat, rigid surface in six minutes. Its main objective is to determine exercise tolerance and oxygen saturation during submaximal exercise. Patients are asked to walk at their own pace, as fast as possible, during the six minutes, being allowed to walk slowly, stop and/or rest when necessary, and return to walking when they feel able.
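The stopping criterion and workload progression of the cycle-ergometer test can be sketched as follows (the function names are ours; the equations are the ones given above):

```python
def target_hr(age: float, fraction: float = 0.85) -> float:
    """Test end-point: a fraction (85%) of predicted HRmax, with
    HRmax estimated as 207 - 0.7 * age, as in the protocol."""
    return fraction * (207 - 0.7 * age)

def stage_power_watts(stage: int) -> int:
    """Workload at 3-minute stage `stage` (0-based): 20 W start, +15 W per stage."""
    return 20 + 15 * stage

# e.g. a 50-year-old patient: predicted HRmax = 172 bpm, so the test
# ends when heart rate reaches about 146 bpm.
print(round(target_hr(50), 1), stage_power_watts(0), stage_power_watts(3))
```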
Lymphedema The evaluation of lymphedema will be performed by calculating the arm volume, obtained by measuring the circumferences of both upper limbs at five points distributed along the arm and forearm: at 21 cm and 11.5 cm above the olecranon, and at 7.5 cm, 14 cm and 24 cm below the olecranon. The circumferences will be obtained with the patient sitting, keeping the arm in abduction, the forearm flexed and the hand resting on the chest. These measures are used to calculate the approximate volume of the truncated cones formed between the points of circumference measurement; the sum of these parts gives the total limb volume. Range of motion To verify the range of motion, evaluations of flexion, abduction, and external rotation of the shoulder will be carried out using a digital goniometer (Absolute Axis 360°), according to previous studies with breast cancer patients. The protocol used by Marques for the range of motion assessment will be followed. The shoulder flexion movement will be performed with the subject lying down, with the palm facing medially, parallel to the sagittal plane; the fixed arm of the goniometer will be placed along the axillary line of the trunk and the movable arm along the humerus. Strength of the upper limb The muscle strength of the upper limb on both arms will be measured by a Chatillon portable digital dynamometer, which can measure overall appendicular muscle strength for all body segments. This equipment provides the peak isometric maximum force exerted by the evaluated segment, which requires fast force generation that does not fatigue the muscle. The maximum force generated is registered in newtons. The muscle groups responsible for flexion, extension, abduction, adduction, and internal and external rotation of the shoulder will be evaluated.
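The circumference-based volume estimate described above can be sketched with the standard frustum (truncated-cone) formula; the function and variable names are ours, and the example girths are illustrative only:

```python
import math

def limb_volume(circumferences, distances):
    """Approximate limb volume (cm^3) as a sum of truncated cones.
    circumferences: girths (cm) at consecutive measurement points;
    distances: spacing (cm) between consecutive points.
    Frustum volume from circumferences C1, C2 over height h:
        V = h * (C1^2 + C1*C2 + C2^2) / (12 * pi)
    """
    assert len(distances) == len(circumferences) - 1
    total = 0.0
    for i, h in enumerate(distances):
        c1, c2 = circumferences[i], circumferences[i + 1]
        total += h * (c1 * c1 + c1 * c2 + c2 * c2) / (12 * math.pi)
    return total

# Sanity check: a segment with equal girths is a cylinder, so the result
# should match pi * r^2 * h with r = C / (2 * pi).
print(round(limb_volume([30.0, 30.0], [10.0]), 1))  # ~716.2 cm^3
```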
The dynamometer will be placed over the specific location, and patients will be asked to exert force against the equipment for up to five seconds. Each muscle group will be evaluated three times, bilaterally, with a thirty-second interval between tests, and the mean value of the evaluations will be used. In all cases, the patients will be instructed on the specific position before the start and during the repetitions. Flexibility Flexibility of the coxofemoral joint will be assessed with the Sit and Reach test. The Sit and Reach box should be supported against a wall; for the evaluation, the patient is asked to keep the knees extended, bare feet resting against the box, and hands overlapped on the horizontal surface of the box. The movement should be an anterior flexion of the spine, keeping the head between the arms, without flexing the knees, with a pause at the moment the maximum reach is attained. Three attempts are performed, and the best mark among the three is considered. Physical activity The physical activity level will be investigated through the International Physical Activity Questionnaire (IPAQ, short version), whose Brazilian validation and reproducibility were performed by Matsudo. Depressive symptoms Investigated using the Beck Depression Inventory (BDI), a self-report questionnaire originally developed by Beck et al. It was validated in Brazil and factorially validated for cancer patients, with a Cronbach's alpha of 0.82. It contains 21 multiple-choice objective questions related to depressive symptoms, in detail: sadness, pessimism, feeling of failure, dissatisfaction, guilty feelings, feelings of punishment, self-dislike, self-criticism, suicidal thoughts, crying, irritability, and withdrawal from family or friends. Each question provides four response options, ranging from zero to three.
The sum of the scores of each question provides a total score ranging from zero to 63: the closer to 63, the greater the presence of depressive symptoms, indicating a higher degree of depression, and the closer to zero, the greater the absence of depressive symptoms. Pain The Visual Analogue Scale (VAS) will be used. The VAS is a one-dimensional measure for assessing pain intensity, composed of a 10 cm line with anchors at both ends: one end of the line is marked "no pain" and the other "worst pain imaginable". The magnitude of the pain is indicated by a mark on the line, and a ruler is used to quantify the measurement on a scale of 0-100 mm. Fatigue Investigated with the Functional Assessment of Cancer Therapy-Fatigue instrument (FACT-F), a self-report instrument aimed at patients with cancer that includes 13 items related to the perception of fatigue. It was validated in Brazil, showing an internal consistency of 0.91 for the fatigue subscale and 0.92 for the total FACT-F (total Cronbach's alpha of 0.92). Individuals will be asked to respond to each item with a score of 0 to 4, where 0 = not at all, 1 = a little bit, 2 = somewhat, 3 = quite a bit, and 4 = very much. The possible total score ranges from 0 to 52, with a higher score indicating less perceived fatigue. Body image Addressed with the Body Image After Breast Cancer Questionnaire (BIBCQ), originally developed in Canada, which was translated, validated and culturally adapted in Brazil; its domains include transparency and concerns about the arm. In the end, the higher the score, the more compromised the patient's body image. Self-esteem The Self-Esteem Scale (EAR) developed by Rosenberg will be used. This scale was validated for the population with cancer, and in Brazil. It also received a validation review, with a Cronbach's alpha of 0.90.
It is a one-dimensional measure consisting of ten statements related to a set of feelings of self-esteem and self-acceptance that determine global self-esteem. The total scale score varies from 10 to 40 points, and the following categorization is used: satisfactory or high self-esteem for scores greater than 31 points; average self-esteem for total scores between 21 and 30 points; and unsatisfactory or low self-esteem for scores lower than 20 points. Thus, the greater the value reached by the woman on the scale, the better her self-esteem. Sexual function Evaluated by the Female Sexual Function Index (FSFI), with cross-cultural validation revealing a Cronbach's alpha of 0.96; it was also validated internationally for patients with breast cancer, with a Cronbach's alpha of 0.70. This questionnaire consists of 19 questions grouped into six areas: desire, arousal, lubrication, orgasm, satisfaction and pain. The sexual function score can vary from two to 36 points; the higher the score obtained, the better the woman's sexual function. Sleep quality Evaluated by the Pittsburgh Sleep Quality Index, validated with a Cronbach's alpha of 0.76. This instrument is composed of seven sleep-related areas: subjective quality, latency, duration, habitual efficiency, disturbances, use of sleeping medication and daytime sleepiness. Scores range from zero to 21 and correspond to overall sleep quality. Scores up to five indicate good sleep quality, and scores greater than five indicate poor sleep quality.
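The self-esteem categorization above maps onto a simple helper (a sketch; the function name is ours, and note that the protocol text leaves scores of exactly 20 and 31 unassigned, so as an assumption they are folded into the adjacent lower band here):

```python
def self_esteem_band(score: int) -> str:
    """Band a Rosenberg Self-Esteem Scale total (10-40) per the protocol:
    > 31 high, 21-30 average, < 20 low. Scores of exactly 20 and 31 are
    not assigned in the protocol text; we fold them downward (assumption)."""
    if not 10 <= score <= 40:
        raise ValueError("Rosenberg totals range from 10 to 40")
    if score > 31:
        return "high"
    if score >= 21:
        return "average"
    return "low"

print(self_esteem_band(35), self_esteem_band(25), self_esteem_band(15))
```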
Descriptive and control variables The descriptive and control variables were divided into clinical variables (cancer stage, characteristics of treatment, previous clinical treatment, characteristics of surgical intervention, mammary reconstruction, date of surgery, presence of lymphedema, physiotherapy and other diseases), sociodemographic variables (age, education, marital status, economic level and occupation) and anthropometric measures (height and body mass). The descriptive and control variables will be acquired by self-report. The variables of the study regarding the Pilates solo and belly dance protocol are presented in Table 3 and in Figure 2. Data collection Data will be collected in an interview format with a paper questionnaire and physical tests. The principal investigator of the study, who previously received training, will conduct a 50-minute interview. The questionnaire will cover general and clinical information, quality of life and psychological variables. The physical variables will be investigated through the specific tests. All data collection will be administered before the beginning of the intervention (baseline collection), after the conclusion of the 16-week protocol (post-intervention collection), and 6, 12 and 24 months after the intervention (maintenance collection) (see Figure 1). The maintenance collection will take place considering the health behavior change that the belly dance and Pilates solo interventions can promote in women with breast cancer. For the control group, data collection will be conducted using the same paper-based questionnaire and tests applied to the intervention groups, with data on general and clinical information, quality of life, and physical and psychological variables.
The collection will be scheduled with the participants and will take place at the same intervals as for the experimental group: before the start of the intervention (baseline collection), after its conclusion (post-intervention collection), and 6, 12 and 24 months after the intervention (maintenance collection), all conducted by the same principal investigator. All data collection for the intervention and control groups will occur at Santa Catarina State University. Patients who discontinue the intervention, or control participants who do not attend meetings, will still have their data collected and will be analyzed on an intention-to-treat basis. During the intervention and data collection, the researchers will collect spontaneous feedback from the patients to ensure that the study does not have any adverse events. Data from all groups (experimental and control) will be handled according to Good Clinical Practice (GCP) and the Declaration of Helsinki and will be treated with confidentiality, following the current privacy policy. Statistical Analysis First, a spreadsheet will be created using Excel 2016, from which the data will be transferred to SPSS version 20.0 for analysis. Descriptive statistics (mean, standard deviation, and percentage) for the characteristics of the sample will be computed. To investigate the relationship between general and health information of the control and experimental groups, Chi-square or Fisher's exact tests will be used. To analyze differences between the experimental and control groups at baseline, post-intervention, and in the maintenance periods, a two-way ANOVA with repeated measures and Sidak comparison tests will be conducted. Confounding variables, such as type of treatment, type of surgery, age and weight status, will be considered in the analyses. The analysis will be carried out both per protocol and by intention to treat, meaning that all patients will be evaluated according to the randomization process.
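As one illustration of the planned group comparisons, the Pearson chi-square statistic used to compare general and health information between the control and experimental groups can be sketched in plain Python. The protocol itself specifies SPSS; this is only a minimal sketch of the test statistic (the p-value lookup against the chi-square distribution is omitted).

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table of
    observed counts: sum over cells of (observed - expected)^2 / expected,
    where expected = row_total * col_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat
```

For a balanced 2x2 table such as `[[10, 20], [20, 10]]`, every expected count is 15 and the statistic is 100/15.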
Missing data will be handled using the multiple imputation method. The significance level of 5% will be two-sided. Discussion We presented a 16-week Pilates solo and belly dance protocol for women after breast cancer diagnosis. In the literature, the benefits of physical activity for women with breast cancer are well established. Systematic reviews have reported improved quality of life and cardiorespiratory capacity and reduced fatigue after the practice of physical exercise during breast cancer treatment [12,. The proposal of this study is to present a protocol of Pilates solo and belly dance (3x/week) for women diagnosed with breast cancer and to compare its effects with a group without intervention, considering that these are two kinds of activities that value mind and body and can bring different outcomes and benefits for women with breast cancer. Dance can represent both a psychotherapeutic treatment and a form of physical activity, based on body awareness, expression, and acceptance, to facilitate physical, emotional, cognitive, and spiritual integration. Moreover, through the socialization context promoted by dance, benefits are revealed in relation to decreased feelings of loneliness and misunderstandings with others. Pilates, likewise, was created by Joseph Pilates as a method based on Eastern mind-body-spirit theories combined with Western theories, according to the following six principles: centralization, control, concentration, fluidity, breathing, and precision. Its practice provides shoulder and pelvis stability and improves posture, stretching capacity, muscular strength, and the mind-body connection. In a systematic review of dance and breast cancer published by our group, we identified dance as a viable alternative adjuvant treatment for patients who have been through breast cancer, and found that it can promote psychological benefits and improve strength and range of motion of the upper limbs.
In this scenario, studies involving dance and breast cancer may involve specific dance therapy methods; traditional dance techniques, such as classical ballet and jazz; the practice of traditional Greek dance associated with training of the upper limbs; as well as circular dance and ballroom dance. However, none of these studies presented published protocols, demonstrating the importance of a belly dance protocol for women after the diagnosis of breast cancer. Further, publication of a belly dance protocol will improve the possibility of generalization of the study, ensuring its external validity. The use of the Pilates method in patients with breast cancer was evaluated by a systematic review of four studies, which determined that the method improves patients' range of motion, pain and fatigue. Other evidence related to the benefits of the Pilates method for the health of these women reported improved quality of life, reduced pain and fatigue, decreased lymphedema, and increased upper limb functionality. These studies reported on Pilates interventions, but there is no protocol study for women with breast cancer. Likewise, the publication of a Pilates solo protocol will improve the possibility of generalization of the study, ensuring its external validity. Methods of dance therapy are generally similar, taking advantage of subjective approaches to the perceptions of body and movement fluency in relation to feelings. These methods can comprise the use of conscious walks and drives, verbal feedback, exploration of specific body parts, the use of different movement intensities (light and slow to energetic and active), and work in pairs. These studies have shown positive results in relation to the psychological and physical aspects of women after the diagnosis of breast cancer.
However, they do not include the validation of a protocol, which therefore does not allow the study to be replicated by other researchers and does not indicate the frequency, duration, or intensity of the movements, or the beats per minute of the music used. The belly dance protocol presented in this study addresses a form of dance that has predetermined movements and specific techniques and was developed following a specific progression for a correct learning model. In this sense, belly dance was chosen as the model for the intervention protocol for being an enjoyable practice that involves an intimate relation between movement and emotion. It also preserves the female identity and awakens a spontaneous body language, with beneficial movements that respect the individuality of each practitioner. Belly dance is also characterized as a practice that offers intense movement of the upper limbs, which directly benefits women by addressing limitations caused by the disease, such as the development of lymphedema and decreased range of motion. A pilot study was developed by the research group itself and was shown to be an effective possibility for interventions with breast cancer patients. In the pilot study, the intervention was performed for only 12 weeks, twice weekly and with 60 minutes per session, but it already demonstrated benefits in breast cancer regarding quality of life, depressive symptoms and fatigue. Adherence was 78.6% (95% CI: 71.3-85.9). The Pilates intervention protocol presented here has not yet been performed in women with breast cancer, and it is of great relevance as an adjuvant therapy in the treatment of these women. The protocol was developed to achieve the benefits reported in the international literature, including improvement in quality of life and reduction in the physical and psychological effects of adjuvant breast cancer treatment.
In addition, this protocol influences and encourages the practice and maintenance of physical activity after treatment, as the practice of physical activity reduces the risk of breast cancer recurrence. The exercises include stretching of the upper and lower limbs, upper limb mobility, and strengthening of the upper and lower limbs and abdomen, with consideration and respect for each patient's limits; most exercises are performed in the supine position, avoiding impact on the joints. The 16-week intervention time for this protocol was chosen considering the pilot study and the systematic reviews of breast cancer and dance and Pilates. The 12-week pilot study showed psychological benefits for women with breast cancer, and the classes were delivered in 24 sessions. Therefore, to improve physical and psychological aspects, this protocol doubles the number of sessions, leading to 48 sessions over 16 weeks. The systematic review of dance and breast cancer demonstrated that interventions were performed over a range of three to 24 weeks, with one to three sessions a week and one to three hours per session. It was also observed that most of the studies identified in the systematic review of Pilates and breast cancer had interventions with a total duration of eight weeks, a frequency of three times weekly, and sessions of 45 to 60 minutes. Thus, averaging these findings, we propose 16 weeks with three 60-minute sessions per week. Due to the lack of a systematic and specific protocol for patients with breast cancer and the importance of acting as an adjunctive treatment, a Pilates solo and a belly dance intervention protocol were developed to improve quality of life, as well as to mitigate the psychological and physical outcomes of women after breast cancer diagnosis. Because both are kinds of physical activity known worldwide, the protocol can be applied in other locations.
Finally, Pilates solo and belly dance are characterized as important physical activity options for this population that can minimize the side effects of the disease and its treatment, assisting in the patients' recovery. Any modifications will be communicated not only to the CEPH but also to the trial participants, trial registries and the journal to which the manuscript is submitted. Written informed consent will be obtained from all participants in the study. Consent for publication Not applicable. Availability of data and materials Data are available on request to the authors. Competing interests The authors declare that they have no competing interests. Funding The main investigator of this study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. The funded PhD student is responsible for the design of the study; the collection, analysis, and interpretation of data; and writing the manuscript. Author's contributions LB conceived of the study, initiated the study design, developed the methodology, commented on initial drafts of the manuscript; TBF conceived of the study, initiated the study design, developed the methodology, commented on initial drafts of the manuscript; MCSV initiated the study design, developed the methodology; GSP initiated the study design, developed the methodology, commented on initial drafts of the manuscript; JM initiated the study design, developed the methodology, commented on initial drafts of the manuscript; FFS initiated the study design, commented on initial drafts of the manuscript; AB initiated the study design, commented on initial drafts of the manuscript; FB initiated the study design, commented on initial drafts of the manuscript; MD initiated the study design, commented on initial drafts of the manuscript; ACAG initiated the study design, commented on initial drafts of the manuscript. All authors read and approved the final manuscript.
Figure 1 Flow diagram of the study participants according to CONSORT 2010. Figure 2 Template of recommended content for the schedule of enrolment, interventions, and assessments. Supplementary Files This is a list of supplementary files associated with the primary manuscript. Click to download. SPIRIT_Fillable-checklist-.doc
Hyperthermia: effect on exercise prescription. Ten healthy male university students pedaled a bicycle ergometer (Monark) for three sessions each lasting 30 minutes. Each subject worked at an individually predicted work load corresponding to approximately 40% of maximal aerobic capacity. The same predicted work load was conducted at 24 degrees C, 44 degrees C and 54 degrees C for each subject. For practical purposes, the results reveal approximately a one beat per minute increase in exercise heart rate for each 1 degree C increase in ambient temperature above neutral (24 degrees C). The practice of exercising cardiac patients in hot ambient temperatures which produce potentially hazardous heart rate levels was challenged. Seasonal reevaluation of exercise heart rate prescriptions is of importance. Hopefully, these findings will also be of some importance to various community gymnasiums and to self-motivated joggers.
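The rule of thumb reported above can be sketched as a one-line adjustment. This is a minimal illustration only: the function name is ours, and applying no adjustment below the neutral temperature is our assumption, not a finding of the study.

```python
def adjusted_exercise_hr(baseline_hr: float, ambient_c: float,
                         neutral_c: float = 24.0) -> float:
    """Approximate exercise heart rate at a given ambient temperature,
    applying the study's rule of thumb: roughly a one beat per minute
    increase for each 1 deg C of ambient temperature above the neutral
    24 deg C. No adjustment is applied below neutral (our assumption)."""
    return baseline_hr + max(0.0, ambient_c - neutral_c)
```

For example, a subject exercising at 130 beats/min in neutral conditions would be expected to reach about 150 beats/min at 44 deg C.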
Interstitial cells of Cajal: is their role in gastrointestinal function in view of therapeutic perspectives underestimated or exaggerated? This manuscript reviews the current views on morphology and function of the distinct subpopulations of interstitial cells of Cajal (ICC) in the digestive tract and their interrelationships with surrounding cells. Three different functions have been postulated so far, i.e. a pacemaker role, a mediator in enteric excitatory and inhibitory neurotransmission and a mechanosensor. Attention will also be paid to the interstitial cells of Cajal and their possible involvement in pathophysiological conditions. Finally, perspectives for interstitial cells of Cajal as targets for therapeutic intervention will be discussed.
Three relapses after a haploidentical transplantation in a pediatric patient: Cure with no further transplantation Isolated extramedullary relapse (EMR) after hematopoietic stem cell transplantation (HSCT) is a highly fatal condition that creates uncertainty regarding treatment options. Although certain approaches such as repeat HSCT and donor lymphocyte infusion are recommended, we report a patient with acute lymphoblastic leukemia who had three isolated EMRs after HSCT at different locations and at different times that were responsive to local and systemic therapies, without the need for a second transplantation.
Wearable chemical sensors: Characterization of heart rate electrodes using electrochemical impedance spectroscopy A series of experiments was conducted in order to determine the reaction kinetics of various types of heart rate monitoring electrodes. In addition, non-motion on-body measurements were performed in order to gauge how the difference in reaction kinetics translates to the electrocardiogram signal. Standard solid-gel Ag/AgCl single-use monitoring electrodes are used here as the gold standard to which textile electrodes can be compared. The test method created here will serve as a basis to evaluate future heart rate monitoring electrodes.
INSAR-BASED DETECTION AND MAPPING OF SEISMICALLY INDUCED GROUND SURFACE DISPLACEMENT AND DAMAGE IN PAMPANGA, PHILIPPINES On April 22, 2019, an earthquake with a magnitude MW 6.1 struck the municipality of Castillejos in Zambales, the Philippines, and severely affected the province of Pampanga, causing damage to commercial and residential structures and leaving over 40 victims. This paper presents an approach for creating a pixel-based proxy damage assessment and displacement field maps to delineate the extent of ground surface displacements due to an earthquake. Specifically, this paper explored two change detection methods: the interferometric synthetic aperture radar (InSAR) technique and the coherence difference analysis method, using an open-source remote sensing software package and free SAR image data acquired by Sentinel-1 missions. Ground truth data were collected to substantiate the findings of the generated maps after the earthquake. Out of 7 surveyed damaged structures that were included in the National Disaster Risk Reduction and Management Council (NDRRMC) of the Philippines Situational Report, four damaged structures were successfully targeted using the proxy damage assessment map, with coherence difference values ranging from 0.7-0.9 and damage grades of 3-5 based on the European Macroseismic Scale 1998 (EMS-98) damage classification system. This study confirms that change detection methods applied to C-band Sentinel-1 SAR data are valuable for mapping damaged areas and estimating ground surface displacements toward better hazard mitigation and disaster response.
Enhanced H₂ Sensitivity in Ultraviolet-Activated Pt Nanoparticle/SWCNT/Graphene Nanohybrids A surface engineering approach is exploited to enhance the performance of H2 sensors consisting of a single-wall carbon nanotube film/graphene 3D electrode decorated with catalytic Pt nanoparticles using atomic layer deposition (Pt-NPs/SWCNTs/Gr). Specifically, C-band ultraviolet (UVC) radiation has been applied on the Pt-NPs/SWCNTs/Gr sensors for up to 20 minutes to activate the carbon surface and enhanced H2 sensitivity and response speed have been obtained. Remarkably, at the optimal UVC irradiation of 10 minutes (intensity of 4.6 mW/cm2), the H2 gas response was enhanced by up to 4.3 fold, together with an enhanced response speed by 3.6 times as compared to that of the as-made Pt-NPs/SWCNTs/Gr sensors before the UVC irradiation. Specifically, a high H2 response up to 32% has been achieved at 10% H2 concentration. This enhancement can be attributed to desorption of residual molecules adsorbed on the SWCNTs and graphene surfaces during the sensor fabrication using UVC irradiation. This result illustrates the importance of the carbon surface activation in development of high-performance H2 sensors using carbon nanostructures. The obtained high performance in the Pt-NPs/SWCNTs/Gr sensors can be attributed to the large sensing surface area of SWCNT films with carbon surface activated using UVC treatment, the catalytic benefit of the conformally coated Pt-NPs, and high mobility signal transport through graphene. In addition, this result demonstrates that the UVC irradiation can provide an effective, non-destructive, and facile method to activate the carbon surface on sensors composed of carbon nanostructures.
Ergodic Stochastic Optimization Algorithms for Wireless Communication and Networking Ergodic stochastic optimization (ESO) algorithms are proposed to solve resource allocation problems that involve a random state and where optimality criteria are expressed in terms of long term averages. A policy that observes the state and decides on a resource allocation is proposed and shown to almost surely satisfy problem constraints and optimality criteria. Salient features of ESO algorithms are that they do not require access to the state's probability distribution, that they can handle nonconvex constraints in the resource allocation variables, and that convergence to optimal operating points holds almost surely. The proposed algorithm is applied to determine operating points of an orthogonal frequency division multiplexing broadcast channel that maximize a given rate utility.
Marriage Guidance, Women and the Problem(s) of Returning Soldiers in Finland, 1944-1946 When former military chaplains began to give marital guidance to troubled couples after the end of hostilities with the Soviet Union (1941-1944) in Finland, new information about the causes and experiences of marital problems and divorces emerged during guidance sessions. Even lengthy marriages were seen to be burdened by the stress of reunion and men's wartime infidelity, increased inclination to drinking, and aggressive behaviour. The article discusses the meaning and construction of marital expectations with respect to the development of post-war marital dissolution, and argues that wives in particular tried to adjust their marital expectations in accordance with the general developments in personal life and society. Especially in the case of older marriages, for the majority of women, divorce was seen more as a means of personal survival than of seeking happiness, even in urban areas. Although contemporaries feared that the marital institution was disintegrating, the majority of wives were willing to work to save, or endure, even troubled marriages.
Vertical Lip Position and Thickness in Facial Reconstruction: A Validation of Commonly Used Methods for Predicting the Position and Size of Lips This study examined several methods used to estimate oral fissure position, lip margin position, and lip thickness recommended by Angel, George, Lebedinskaya, Taylor, Wilkinson et al., Balueva and Veselovskaya. A sample of 86 lateral head cephalograms of adult subjects from central Europe was measured and the actual and predicted dimensions were compared. The best estimation for oral fissure position was opposite the lower mark of maxillary incisors (error of 1.3 mm). Upper lip margin was predicted best by the upper mark of maxillary incisors (error of 1.7 mm), and lower lip margin by the cementum-enamel junction of mandibular incisors (error of 2.3 mm). The regression equations of Wilkinson et al. displayed the least error (1.3 mm and 1.8 mm, respectively) for upper and lower lip thickness, and the method of George (error of 3.4 mm) for total lip thickness.
Can You Test Me Now? Equivalence of GMA Tests on Mobile and Non-Mobile Devices As technology continues to evolve, organizations seek to use personal electronics like smartphones for selection and assessment. While this promises to increase access to a more diverse applicant pool, research is needed to examine whether commonly used assessments function similarly on these devices as on a conventional computer. Contrary to past research, we did not find meaningful differences in general mental ability (GMA) test scores between device groups. We also observed few differences in item functioning between devices. Screen size had a positive, but marginal effect on test scores. These results are optimistic for the use of mobile devices in GMA testing, but additional research is needed to examine the functioning of alternative GMA tests administered on mobile devices.
Preparatory set associated with pro-saccades and anti-saccades in humans investigated with event-related FMRI. Previous studies have shown that the BOLD functional MRI (fMRI) signal is increased in several cortical areas when subjects perform anti-saccades compared with pro-saccades. It remains unknown, however, whether this increase is due to an increased cortical motor signal for anti-saccades or due to differences in preparatory set between pro- and anti-saccade trials. To address this question, we measured event-related fMRI in a paradigm that allowed us to separate instruction-related brain activity from saccade-related brain activity. In this paradigm, the instruction to either generate a pro-saccade or an anti-saccade was conveyed by a switch in the color of the central fixation stimulus and preceded the presentation of a peripheral stimulus by either 6, 10, or 14 s. Cortical areas were functionally mapped using the general linear model comparing standard pro- and anti-saccade blocks with fixation blocks. When the trials were aligned on the onset of the instruction stimulus, bilateral frontal eye fields and right hemisphere dorsolateral prefrontal cortex showed an increased signal during the instruction period on anti-saccade trials as compared with pro-saccade trials. When the trials were aligned on the movement stimulus and the instruction period activity was subtracted, there were no differences between pro- and anti-saccades. This finding suggests that the increased cortical activation found in previous blocked designs originates predominantly from differences in preparatory set and not from differences in the motor signal between pro- and anti-saccades.
Evaluating rules for phonological reduction in Swedish In this paper, pronunciation variation in Swedish due to speaking style and speech rate is discussed. A tentative rule system for segment-level reduction is currently being evaluated by letting subjects assess the naturalness of synthetic speech generated from canonical transcriptions and transcriptions reduced by the system, respectively. Results from experiments using short sentences with explicit control over the rules applied have shown that reduced forms are preferred at high speech rates (rates above the synthesis default rate), while there is no significant bias in preference between canonical and reduced forms at the synthesis default speech rate. Presently, longer and less controlled passages of synthetic speech are being evaluated using the same experiment set-up. Using text passages of varying degree of formality, this experiment allows for testing the effects of text formality on perceived naturalness.
Group differential games for multiparameter singularly perturbed systems Abstract In this paper, a group differential game problem is formulated using the system model of multiparameter singularly perturbed systems (MSPS). The case in which there exist two groups of players with conflicting interests in the game is considered, and the players in each group must make their own decisions by taking into account the group interest. A method is proposed to find the approximate strategy for every player which will lead to an O(||ε||) near saddle-point equilibrium, where ε denotes the vector of small singular perturbation parameters.
EEG-based Mental Workload Estimation using Encoder-Decoder Networks with Multilevel Feature Fusion In this paper, we propose a model that combines the multilevel feature fusion algorithm and encoder-decoder structure for evaluation of mental workload using electroencephalogram (EEG) signals. The encoder-decoder structure was used to reduce additive noise and subject variations of EEG data. The encoder is structured by incorporating a 3D convolutional neural network (3DCNN) and the multilevel feature fusion concept, which extracts unified key features by combining the low-level and high-level features. The decoder consists of simple 3DCNN layers to recover the input EEG image from the latent vector. The proposed model can achieve higher performance by mitigating feature variations. We evaluate our network with EEG data obtained through the Sternberg task to estimate mental workload; the model achieves 91.6% accuracy and outperforms the conventional algorithm.
Brain-to-Brain Interaction at a Distance: A Global or Differential Relationship? Background: The main objective of this exploratory study was a confirmation of the results obtained by Giroldini et al., 2016, relative to the possibility of identifying a long-distance connection between the EEG activities of two totally sensorially shielded subjects, one of whom was stimulated with light and sounds. Furthermore, this study sought to answer the following questions: - What is the relationship between the power of the EEG signal in the stimulated partner and that of the other distant partner? - Is the relationship between the EEG activities of the stimulated and distant isolated partners global (i.e., an undifferentiated response), or is it differentiated and thus displays variations depending on the characteristics of the stimulation applied to the stimulated partner? Methods: Five adults chosen for their experience in mind control techniques and their mutual friendships took part in this study. Each participant took turns in being both the stimulated partner and the isolated non-stimulated partner with each of the others, making a total of 20 pair combinations. The stimulated partner received three blocks of 32 visual-auditory stimulations lasting 1 second modulated at 10 Hz, 12 Hz, and 14 Hz respectively, with a constant inter-stimulus interval of 4 seconds. The EEG activity of each pair was recorded at 128 samples/sec over 14 channels and analyzed by measuring traditional steady-state potentials and the Pearson's linear correlation between all possible signal pairs with an innovative algorithm. Results: From the results of the twenty pairs, we found an increase in the correlation among the EEG channels of the isolated distant partners, corresponding to the frequencies of the steady-state visual and auditory potentials used for the stimulated partner.
Furthermore, we did not find a correlation between the response intensity elicited in the stimulated partners and that observed in the non-stimulated ones, suggesting that this physical characteristic cannot be transferred between isolated partners. Discussion: A mental connection at a distance may allow connection of informational rather than physical characteristics of the shared signals. Introduction The possibility that the brain activities of two physically distant, but emotionally and mentally connected, individuals display a correlation in the absence of any normal sensory connection has been supported by Giroldini et al., independently confirmed by Radin, and further supported by approximately thirty studies (see Table S1 in the study of a). In a typical study of this kind, the two members of a pair are separated (by distances varying from meters to kilometers) and sensorially isolated from each other. One member is stimulated with either structured (e.g. images) or unstructured (e.g. lights and sounds) information, and the correlation between their respective EEG activities is measured. For example, if the sensorially isolated partner's EEG shows a variation correlated to the stimulated partner's EEG, we can assume (unless potential artefacts are discovered) that it is evidence of a non-local (long-distance) connection between the two brains. Even if, from a phenomenological point of view, this correlation seems to show a causal effect of the stimulated partner upon the sensorially isolated one, some authors believe it to be a form of biological entanglement similar to what occurs in quantum physics (see for example Walach, Tressoldi, & Pederzoli, 2016), and therefore an expression of an acausal correlation.
However, to date, the relationship between the quality and intensity of the stimulated partner's (SP) signal and the same parameters seen in the isolated non-stimulated distant partner (NSP) has yet to be examined in depth, except for the fact that the latter's signal is much weaker (by approximately a factor of ten). This study is an exploratory contribution to better understanding this relationship. We specifically sought to answer the following questions: - What is the relationship between the intensity (or power) of the observed EEG signal in the stimulated partner and in the isolated distant partner? - Is the relationship between the EEG activities of the stimulated and distant isolated partners global (i.e., an undifferentiated response), or is it differentiated and therefore exhibits changes depending on the characteristics of the stimuli applied to one of the pair? The first answer is important in understanding whether or not a correlation exists between the recorded EEG signal intensities of the stimulated partner and the isolated one, with all the related consequences of homing in on this relationship not just within groups but between pairs of subjects as well. The second answer is important in recognizing which physical characteristics of the signal can be identified from this strange distant correlation, for possible future development and even technological application. Participants Five adults - two women and three men - took part in this study, with an average age of 38.3 years (SD = 7.5), chosen for their experience in mind control techniques (mainly meditation) and their mutual friendships. We consider these prerequisites essential for an adequate "mental and emotional connection" between the pairs. Each participant took turns in being both the stimulated partner (SP) and the non-stimulated partner (NSP) with each of the others, making a total of 20 pair combinations.
Statement of Ethics The use of experimental subjects is in accordance with ethical guidelines as outlined in the Declaration of Helsinki, and the study has been approved by the Ethical Committee of the University of Padova's Department of General Psychology, prot. n. 63, 2012. Before taking part in the experiment, each subject gave his/her informed consent in writing after having read a description of the experiment. EEG equipment Two Emotiv Epoc™ EEG devices were used, modified to allow connection (via multi-contact connectors) to professional Bionen headsets (see Figure S1 in the Supplemental Information) to ensure high-quality EEG signals. The system's accuracy and signal quality were thoroughly checked and ascertained. The sample frequency was 128 samples/sec over 14 channels connected to locations Fp1, F3, C3, P3, O1, F7, T5, Fp2, F4, C4, P4, O2, F8, T6. The instruments were provided with a built-in fifth-order low-pass digital filter (bandwidth from 0.2 to 45 Hz), as well as two notch filters at 50 and 60 Hz respectively as protection against noise produced by the local electricity network. Signal acquisition by the two EEG devices was controlled by a specially designed software program with an acquisition synchronicity precision better than 1/128 second and which ensured total electrical independence and separation between the two devices (see a). The experiment was conducted at the EvanLab laboratory in Florence (Italy), which is comprised of two separate sound- and light-proof rooms with no electromagnetic disturbances (see Figure 1). Visual-auditory stimulation The visual-auditory stimulations were conducted in three blocks of 32 simultaneous stimulations lasting 1 second, simultaneously on-off modulated at 10 Hz, 12 Hz, and 14 Hz respectively, with a constant inter-stimulus interval of 4 seconds. The audio modulation was performed on a 900 Hz sinusoidal carrier (80 dB).
This method of stimulus administration, with a modulation frequency from 4 to 20 Hz, is also called "Steady-State" (Pastor, Artieda, Arbizu, Valencia, & Masdeu, 2003; Picton, John, Dimitrijevic, & Purcell, 2003). The interval between the three blocks was randomly varied between 40 and 90 seconds. The visual stimulus was provided by an array of 16 red LEDs positioned about 30 cm from the SP's closed eyes, while the sound was sent directly to the ears via 32-ohm earphones. The three frequency blocks were given randomly, without repetition of the same frequency. The raw data are available at: http://tiny.cc/owzyly Procedure The SP was given the following instructions: "When you are ready, relax and be prepared to receive a visual and auditory stimulus, connecting mentally and with positive emotions with your partner. Limit your body movements to prevent interference with your EEG activity. You will perceive three blocks of 32 stimulations of 1 second each; the blocks will be separated by long random pauses in order to avoid predictable rhythms. The experiment will last about 15 minutes." The NSP was given the following instructions: "When you are ready, relax and connect mentally and with positive emotions with your partner, who is receiving visual and auditory stimulations. Keep your body still to prevent interference with your EEG activity. The experiment will last about 15 minutes." At the end of each trial involving pairs, their roles were reversed. Timing of the stimuli Tests performed later on the data acquisition process showed a slight shift of the presentation of the stimulus with respect to the theoretical instant of stimulation. This shift is caused by the software program's execution features under the operating system (Windows 10). The shift is equal to around 10 samples (~0.08 s) and can easily be compensated for during data analysis.
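As a minimal sketch of this timing correction, the compensation amounts to shifting each nominal onset by the measured delay. The helper below is illustrative, not the authors' code; the 5 s onset-to-onset spacing (1 s stimulus plus the 4 s inter-stimulus interval) is an assumption about how the block timing was laid out.

```python
import numpy as np

FS = 128     # sampling rate in samples/s, as reported
DELAY = 10   # measured stimulus-onset delay in samples (~0.08 s)

def corrected_onsets(nominal_onsets_s, fs=FS, delay_samples=DELAY):
    """Convert nominal stimulus onset times (seconds) into corrected
    sample indices by adding the measured software delay."""
    idx = np.round(np.asarray(nominal_onsets_s) * fs).astype(int)
    return idx + delay_samples

# one block: 32 stimuli, assumed onset-to-onset spacing of 5 s (1 s on + 4 s ISI)
nominal = np.arange(32) * 5.0
onsets = corrected_onsets(nominal)
```

The residual ~3-sample jitter described below cannot be removed this way; only the constant part of the shift is compensated.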
In addition, this shift displays a jitter (of about 3 samples); this is because, since Windows is a multitasking operating system, programs and system services run simultaneously and compete for the microprocessor. This jitter is small, however, and does not cause problems. The image on the left in Figure 2 was obtained by measuring the signals from a photodiode placed in front of the stimulus LEDs to obtain time references, which were then fed back into the Emotiv Epoc™. From these measurements, it is possible to completely compensate for the shift. The signal obtained from a BPW34 photodiode in front of the illuminator lights shows a delay between the stimulus command and its actualization (about 10 samples, or 0.08 s). This delay is taken into account by adding it to the start time of each stimulation period. In the image, the stimulus was at 12 Hz. On the right is a diagram showing the placement of the EEG's 14 electrodes. Data analyses Since each recording contained 32 stimulations for each of the three different frequencies, each of the three was processed in the same way. One of the first types of analysis used was the FFT (Fast Fourier Transform), applied to a 1-second pre-stimulus period, the 1-second stimulus period, and a 1-second post-stimulus period, and then averaged over all stimuli (32 for each frequency in each of 20 files). Next the FFT differentials were calculated - that is, the differences between the stimulus period and the pre-stimulus period. The post-stimulus period was ignored. SP EEG data analysis Generally, in directly stimulated subjects, the FFT shows peaks very close to the stimulus frequencies (10, 12, and 14 Hz) and their potential harmonics, although they are only 10-15% bigger than the baseline (see Figures 3, 4, and 5). This means the stimulation effect is not strong enough to be seen without appropriate processing.
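The differential FFT described above (stimulus spectrum minus pre-stimulus spectrum, averaged over the 32 stimuli) can be sketched as follows. This is an illustrative single-channel version under the stated 128 samples/s rate; the Hanning window is our assumption, not a detail given in the text.

```python
import numpy as np

FS = 128  # samples/s

def fft_amplitude(segment):
    """One-sided FFT amplitude spectrum of a 1 s EEG segment (128 samples)."""
    spec = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / FS)
    return freqs, spec

def differential_fft(epochs, onset):
    """Average of FFT(stimulus) - FFT(pre-stimulus) over all epochs.

    epochs: array (n_stimuli, n_samples) of single-channel EEG;
    onset: sample index where the 1 s stimulus starts in each epoch.
    """
    diffs = []
    for ep in epochs:
        pre = ep[onset - FS:onset]    # 1 s pre-stimulus
        stim = ep[onset:onset + FS]   # 1 s stimulus
        f, a_pre = fft_amplitude(pre)
        _, a_stim = fft_amplitude(stim)
        diffs.append(a_stim - a_pre)
    return f, np.mean(diffs, axis=0)  # average over the 32 stimuli
```

With a 1 s window at 128 samples/s the frequency resolution is exactly 1 Hz, so the 10, 12, and 14 Hz stimulation frequencies each fall on a single FFT bin.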
To better highlight the effects, the differences between the two situations (stimulus and pre-stimulus) were calculated and, once amplified, the stimulation frequencies became clear. All three stimulation frequencies show a strong reduction in the subjects' spontaneous Alpha frequency (the well-known typical Alpha-block). Figure 3: On the left is the FFT (between 1 and 42 Hz) of the pre-stimulus, stimulus, and post-stimulus periods. On the right are differential graphs showing the loss of spontaneous Alpha and the presence of the stimulus frequency (10 Hz), including two of its harmonics, 20 Hz and 30 Hz. Note that the top right graph effectively represents the absolute value of the difference between the stimulus and pre-stimulus, whereas the bottom right graph represents the signed difference. Figure 4: On the left is the FFT (between 1 and 42 Hz) of the pre-stimulus, stimulus, and post-stimulus periods. On the right are differential graphs showing the loss of spontaneous Alpha and the presence of the stimulus frequency (12 Hz), as well as its first harmonic, 24 Hz. Figure 5: On the left is the FFT (between 1 and 42 Hz) of the pre-stimulus, stimulus, and post-stimulus periods. On the right are differential graphs showing the loss of spontaneous Alpha and the presence of the stimulus frequency (14 Hz), as well as its first harmonic, 28 Hz. A small peak at 28 Hz appears only in SPs, probably due to a weak disturbance at 50 Hz from the power source for the LED array used for visual stimuli, or to a second harmonic of 14 Hz. The 28 Hz peak is, however, eliminated by the differential FFT. Analysis of SP EEG data using the GW6 method. All EEG signals were pre-processed as outlined in Giroldini et al. (2016b), then narrow-band filtered (1 Hz width) centred at the stimulation frequency (10 Hz, 12 Hz, or 14 Hz - see Figure 6) by a fourth-order band-pass Butterworth filter. We chose to implement a time-reversal filter to ensure a zero phase delay: the effective filter order was 8.
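The narrow-band zero-phase filtering step just described (a fourth-order Butterworth applied forward and backward, giving an effective order of 8 and no phase delay) can be sketched in Python with SciPy's `filtfilt`; the function name and defaults below are our own, not the authors'.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # samples/s

def narrowband(x, f0, width=1.0, order=4, fs=FS):
    """Zero-phase fourth-order Butterworth band-pass of `width` Hz
    centred on the stimulation frequency f0 (effective order 8)."""
    b, a = butter(order, [f0 - width / 2, f0 + width / 2], btype="band", fs=fs)
    return filtfilt(b, a, x)  # forward-backward pass -> zero phase delay
```

For a 10 Hz steady-state stimulus this yields the 9.5-10.5 Hz band the text refers to; shifting the band (e.g. to 10.0-11.0 Hz) would correspondingly reduce the SP response.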
The classic ERP was then identified in SPs through simple averaging, calculating the power, and finally extracting a multiple correlation value between EEG channels in accordance with the GW6 method. A MATLAB version is available at: http://tiny.cc/owzyly In outline, this method is based on calculating Pearson's linear correlation between all possible signal pairs, from which pairs of fixed-duration data segments of about 250 ms are extracted. These segment pairs (sliding windows) are then slid along the time axis of the two signals, generating a series of curves R(I, X), where I represents the pair combinations (I = 91 in this case) and X is time. Subsequently this series of curves is processed to produce a single graph, Sync(x), that represents the global variation of correlation (or synchronisation) between all EEG channels, using suitable pre- and post-stimulus periods as a baseline. This method was applied to each stimulus period, examining 4 seconds of data (1.5 s pre-stimulus, 1 s stimulus, 1.5 s post-stimulus). The EEG signals were filtered in a narrow band (1 Hz width) centred at the stimulation frequency (i.e. 10 Hz, 12 Hz, or 14 Hz - see Figure 6). The GW6 graph, the ERP's power, and the signal power were calculated as the average over all stimuli and all subjects. Taking together all the graphs obtained with the GW6 method and shown in Figure 6, it is possible to clearly see the normal response to the stimuli, for example in the power of the classic ERP, or in the signal power. It is also important to stress that the greatest SP response is obtained by filtering the signal exactly at the true stimulus frequency (e.g. 9.5-10.5 Hz for the steady-state frequency of 10 Hz). If we filter within a frequency range even slightly shifted (e.g. 10.0-11.0 Hz), the response is always reduced. In short, in the SPs the greatest steady-state response coincides exactly with the stimulus frequency.
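The core sliding-window step of the GW6 idea can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it computes the mean absolute Pearson correlation over all channel pairs in a ~250 ms window (32 samples at 128 samples/s, giving the 91 pairs for 14 channels), and omits the baseline normalization against the pre- and post-stimulus periods.

```python
import numpy as np
from itertools import combinations

WIN = 32  # ~250 ms sliding window at 128 samples/s

def sync_curve(eeg):
    """Mean absolute Pearson correlation over all channel pairs in a
    sliding window (a simplified sketch of the GW6 Sync(x) idea).

    eeg: array (n_channels, n_samples);
    returns a curve of length n_samples - WIN + 1.
    """
    n_ch, n_s = eeg.shape
    pairs = list(combinations(range(n_ch), 2))  # 91 pairs for 14 channels
    sync = np.empty(n_s - WIN + 1)
    for x in range(len(sync)):
        r = np.corrcoef(eeg[:, x:x + WIN])      # channel x channel matrix
        sync[x] = np.mean([abs(r[i, j]) for i, j in pairs])
    return sync
```

A common component shared by all channels (such as a steady-state response) raises the curve above its noise baseline, which is the signature GW6 looks for.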
From Figure 6, we note that the curve's height increases as the stimulation moves from 10 to 14 Hz, probably because the stimulation frequency moves away from the spontaneous Alpha band frequencies (~8-12 Hz). The continuous red line, together with the green curve generated by GW6, represents the random expectation (calculated using a method described later). In all cases, we see that the GW6 graph greatly exceeds the random "zero curve". In particular, by simply calculating the power of the filtered signal within a narrow band (1 Hz bandwidth) at 10 Hz, 12 Hz, and 14 Hz, it is possible to classify - almost always correctly - the frequency of each of the three groups of stimuli given to the SPs. Analysis of NSP EEG data. Analysis of the NSP data does not show any significant peaks at the stimulation frequencies in differential FFT graphs equivalent to those in Figures 3, 4 and 5. Furthermore, on average there is no peak resembling a classic ERP, even when power is taken into account. In the EEG signal power there are only weak and variable fluctuations, and therefore no response which can be definitely correlated to that obtained in SPs. The only graph showing any significant variation with respect to baseline EEG activity is that obtained using GW6, by filtering half a Hz below the stimulus frequency (see Figure 7). For example, in order to identify a more significant variation in the band around 10 Hz, it was necessary to filter (using the same band-pass filter described for SPs) in the range from 9.0 to 10.0 Hz. The same applies to the other stimulation frequencies (e.g. 11-12 Hz for the 12 Hz stimulus and 13-14 Hz for the 14 Hz stimulus). This frequency shift in the NSPs' response does not appear to be due to the software, because the same program, when applied to SPs, shows the maximum response peak exactly at the stimulation frequency. Note that the typical ERP curve and the ERP power curve are virtually flat.
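The narrow-band power classification of the SP stimuli mentioned above (picking whichever of the three candidate bands carries the most power) can be sketched as a few lines of Python; this is our own illustrative helper, not the authors' procedure.

```python
import numpy as np

FS = 128                      # samples/s
FREQS = (10.0, 12.0, 14.0)    # candidate stimulation frequencies

def classify_stimulus(x):
    """Classify a 1 s single-channel segment by picking the candidate
    frequency with the largest power in a 1 Hz band around it."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / FS)
    powers = [spec[(f >= c - 0.5) & (f <= c + 0.5)].sum() for c in FREQS]
    return FREQS[int(np.argmax(powers))]
```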
Only the GW6 method reveals significant deviations from what is expected by chance alone (red curve), mainly in the global Sync1 curve. By applying the GW6 algorithm to each subject for each stimulus condition, a Sync1 curve was produced, making a total of 20 x 3 = 60 Sync1 curves. Then the average of the curves of each subject, relative to the same stimulation conditions, was calculated, producing 3 global Sync1 curves. Additionally, an overall Sync1 curve was obtained by averaging the 3 global Sync1 curves. From these resulting 4 curves, the maximum correlation value for each frequency was calculated, and then from these values the average in the stimulation zone was determined. Furthermore, the accuracy (Standard Deviation and 95% Confidence Intervals) of the Sync1 curves was estimated with a bootstrap method using 10,000 resamples. The results are displayed in Table 1. According to the GW6 algorithm, the Sync1 curve is computed from epoched data with respect to the stimulus onset. In order to compare these observed values with a random estimate, for each subject and for each stimulus condition we created "fake" stimulus identifiers on the raw datasets, in the same quantity as the real stimulus identifiers (32 identifiers for each stimulus condition), obtaining a so-called "random dataset". The time features of these "fake" identifiers are similar to the real ones, that is, randomly distributed with a minimal distance between adjacent identifiers greater than 10 s. The random dataset was processed in the same way as the real one, obtaining a "fake random max correlation" Sync1 curve. Across all subjects and stimulus conditions, 60 random Sync1 curves were created. As we did for the observed max correlation, for each frequency and for their average, we estimated the precision of the random max correlation. The results are displayed in Table 2. For each condition we then estimated the probability that a random maximum correlation would be equal to or greater than the observed one.
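Generating "fake" onsets that are random yet keep adjacent identifiers more than 10 s apart can be sketched as below. The draw-then-spread scheme and the 900 s session length (roughly the stated 15-minute recording) are our illustrative assumptions; the paper does not describe the authors' sampling procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_onsets(n, duration_s, min_gap_s=10.0):
    """Draw n sorted 'fake' stimulus onset times in [0, duration_s] with
    adjacent onsets at least min_gap_s apart: draw the slack uniformly,
    then spread the onsets by the minimum gap."""
    slack = duration_s - (n - 1) * min_gap_s
    extra = np.sort(rng.uniform(0, slack, n))
    return extra + min_gap_s * np.arange(n)

# 32 fake identifiers for one ~15-minute recording
onsets = fake_onsets(32, 900.0)
```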
The results are displayed in Table 3. These results suggest that there is a probable global relationship between the EEG activity of the SPs and NSPs, associated with a less probable response in relation to the 14 Hz stimulation. A comparison of the observed and fake max correlations using Bayes Factors and paired t-tests yielded the results presented in Table 4. Bayes Factors were estimated using the software JASP 0.8.2.0 (JASP Team, 2017), with a default Cauchy prior of .76. The Bayes Factor values suggest that the observed max correlations have a moderate probability of being higher than the fake ones for the 10 Hz, 14 Hz, and average frequencies. Another way of analyzing the results is to compare the average of the observed maximum correlation of each of the 20 participants with the fake max correlation values. Raw data are available at: http://tiny.cc/owzyly The results are presented in Table 5. The Bayes Factors, confirmed by robustness checks, clearly suggest that for all three frequencies the probability that the mean values of the max correlation observed in the twenty participants are greater than the estimated fake max correlation ranges from 14/1 for the 14 Hz frequency to 210/1 for the 10 Hz frequency. The differences between the results reported in Table 3, Table 4 and Table 5 suggest the importance of taking into account individual differences instead of the estimated overall averages. As indicated in the introduction, the second aim of this study was to investigate the relationship between the strengths of the EEG signals of the stimulated partner and the isolated distant partner. This relationship was analyzed by simple binary correlations between the values of the maximum correlations obtained with the GW6 algorithm for the SP and NSP pairs. The values of the Kendall's tau-b correlations are presented in Table 6.
It is evident that the values of the correlations are very low, ranging from a minimum of -0.04 to a maximum of -0.33, indicating an almost null correlation between the strength of the signals of the pairs of participants. In evaluating the results, it is very important to determine not only the statistical significance, but also and especially the effect size, i.e., the quantitative measure of the strength of the phenomenon being examined. Regarding the SP data, during the stimulus the variation in the Pearson correlation between the different electrode combinations is greater by about 10-15% with respect to the pre-stimulus period. Consequently, with regard to these steady-state potentials induced by direct stimulation, no statistical analysis is needed to establish this difference with respect to the expected chance value. In the case of a response in NSP EEG activity induced by a distant mental connection, it is instead normal to expect a decidedly smaller effect size, which can be estimated as the difference between the experimental maximum and the random value, for example: 1.98 - 1.37 = 0.61 (Table 1 and Table 2, line 1). The result varies roughly between 0.6% and 1.2%, but the value relative to the single NSP is greater than 2%. This new experiment, conducted with the Steady-State method, confirms the results of Giroldini et al. (2016a), in which a sinusoidal sound frequency of 500 Hz and the same red LED array were used for stimulation, but neither of them was modulated. Discussion The primary objective of this study was to confirm that the brain activities of two physically distant but emotionally and mentally connected individuals display a correlation in the absence of any normal sensory connection, as well as to obtain more information on the characteristics of their EEG signals.
Specifically, as mentioned in the introduction, we aimed at acquiring more information on the relationship between the observed signal intensities in the EEG activities of the SPs and NSPs, as well as on the possibility that this relationship between each partner's EEG activity is either undifferentiated (i.e., global) or differentiated, thus varying depending on the type of stimulation applied to the SPs. Regarding the relationship between the strength of the EEG activities of the SPs and NSPs, our results suggest that there is no such relationship and hence that this parameter cannot be communicated at a distance. On the other hand, our results confirm the relationship between the EEG signal characteristics of the SP and NSP, at least regarding the frequency parameter in the 10 to 15 Hz band, when taking into consideration the maximum correlation values as estimated by applying the GW6 algorithm. Indeed, in some preliminary tests SPs were stimulated with two frequencies, 15 and 18 Hz, repeated 100 times. From analysis of the NSPs' EEG recordings, we found significant results for both (see Supplemental Information). At this stage of our studies we are unable to accurately determine which areas of the brain are the most sensitive to remote stimuli, even though we believe this may be due to the limitations of our mathematical analysis tools in distinguishing the actual signal from a strong background noise. We emphasize that this is an exploratory study and that the results were obtained after a series of unplanned post hoc choices, such as the width of the filtering band (1 Hz rather than 1.2 or 0.8 Hz), the size of the GW6 method's sliding window, the choice of individual vs overall values, etc. Conclusions Despite its limits, the information that emerges from this study can, if confirmed, provide important details about the relationship between the EEG activities of two physically distant and mentally connected partners.
To summarize, we can tentatively affirm that the EEG activity in NSPs shows a weak response, with a signal-to-noise ratio of around 1%, that changes little as a function of the intensity of the response generated in the SPs. We can also affirm that the NSP's response seems to be specific to the frequency induced in the EEG activity of the SPs. The GW6 method, as well as similar methods based on the correlation of EEG signals (e.g. Radin, 2017), seems suitable for this purpose, but we believe more refined tools are necessary, for example those based on machine-learning algorithms (Stober, Sternin, Owen, & Grahn, 2016), especially if we wish to detect signals after only a single stimulation and not, as we have done until now, after multiple repetitions of the same stimulation. It seems reasonable to theorize, then, that a mental connection at a distance allows the connection of some informational, rather than physical, characteristics of the shared signals. This seems plausible, given that we cannot theorize about any information transmission based on conventional electromagnetic waves. This interpretation fits well with a quantum-like mental and biological entanglement as predicted by the Generalized Quantum Theory (Walach, Tressoldi, & Pederzoli, 2016), which predicts quantum-like phenomena in areas outside quantum physics, such as biology and psychology. Figure S1: EEG headset and modified Emotiv Epoc™ Pilot investigation using 15 and 18 Hz frequencies This is a summary of the preliminary research carried out in preparation for the study that represents the aim of this work. This preliminary research allowed us to better choose the frequencies to use in the definitive study and to improve the experimental conditions, so much so that the final data proved to be of excellent quality.
The same experimental method as that described under Materials and Methods was used, but somewhat simplified, so that a higher number of tests could be performed in less time: in particular, only the EEG activity of the non-stimulated subjects was recorded. Furthermore, the stimuli (about 100 for each SP) were provided in the Steady-State modality at the modulation frequencies of 15 Hz and 18 Hz. The sensory and electrical separation of the two subjects was good, but sometimes external noises could still reach them. Because the applied stimuli were numerous and on-off modulated, we believe these noises did not have any significant influence on the final results. Data were collected from 20 pairs of subjects who all got along well (mostly friends), and the results are described in Figures S2 and S3, which only show graphs from the GW6 method, because the values relative to power and normal ERPs were shown to be insignificant (as in the graphs of Figure 7). The graphs in Figures S2 and S3 show a large response (green curve) compared to what is expected by chance (red curve). The corresponding average values are presented in Table S1. Keeping these results in mind, we decided to carry out the actual experiment at frequencies considerably lower than 18 Hz, considering that the SP-NSP connection at this frequency becomes appreciably weaker. We wanted to include one frequency (10 Hz) which is well within the Alpha range, to see if it was discernible against the EEG 'noise' typical of this band. The other two frequencies (12 and 14 Hz) are within a band thought to be relatively undisturbed by Alpha waves, and low enough not to be seriously weakened in the SPs.
The Renin Academy Summit: advancing the understanding of renin science. Abstracts from the presentations given at the Renin Academy Summit were published in a supplement to the March issue of this journal (March 2008, Volume 9, Supplement 1). In the supplement, we raised a number of critical questions within renin science, such as: Does the (pro)renin receptor finally provide a role for prorenin, without being converted to renin? Can the (pro)renin receptor really be blocked by the handle-region peptide? How is it possible that the effect of DRIs lasts several weeks after stopping treatment? What are the implications of the rise in renin during direct renin inhibition? Based on the extremely valuable discussions at the Renin Academy Summit, here we provide responses to these questions. Spotlight on Renin
Investigation of the intense fringe formation phenomenon downstream of the hot-image plane. The propagation of a high-power flat-topped Gaussian beam, modulated by three parallel wire-like scatterers and passing through a downstream Kerr medium slab and free spaces, is investigated. A new phenomenon is found: a kind of intense fringe, with intensity several times that of the incident beam, can be formed in a plane downstream of the Kerr medium. This kind of intense fringe is another result of the propagation process of nonlinear imaging, and it is located tens of centimetres downstream of the predicted hot-image plane. Moreover, the intensity of this fringe can reach the magnitude of that of the hot image in the corresponding single-scatterer case, and the phenomenon can arise only under certain conditions. The corresponding hot images are also formed, but largely suppressed. The cause of the formation of such an intense fringe is analyzed and found to be related to interference in the free space downstream of the Kerr medium. Moreover, the ways it is influenced by some important factors, such as the wavelength of the incident beam and the properties of the scatterers and Kerr medium, are discussed, and some important properties and relations are revealed.
Magnetite Versus Quartzite: Potential Candidates for Thermocline Energy Storage The objective of this work is to investigate magnetite storage performance using the thermocline packed-bed single-tank concept and to compare it with quartzite, commonly used as a TESM in this area. To achieve this purpose, a dual-phase model for thermal energy storage (TES) is developed to describe heat and mass transfer inside the porous packed bed contained in the storage tank. After validation, the developed model is used to simulate the thermal behavior of the discharging process and then to evaluate system performance by calculating the discharge efficiency and storage efficiency for two TESMs, magnetite and quartzite, coupled with colza oil as the heat transfer fluid (HTF). This paper presents and discusses the results of discharge efficiency and storage efficiency obtained for both TESMs and highlights the impact of TESM choice on TES efficiency.
Middle-Upper Miocene stratigraphy of the Velarde Graben, North-Central New Mexico: Tectonic and paleogeographic implications ABSTRACT. Geologic mapping and correlation of tephras in and near the Velarde graben supports the southward extension of the Cieneguilla member of Leininger into the graben, refines the stratigraphic relationships of various lithostratigraphic units, and has produced better estimates of vertical stratigraphic displacement on several graben faults. The Velarde graben is a 6-10 km-wide, northeast-trending extensional feature in the northern Española Basin of the Rio Grande rift. It coincides with a 19 km-long gravity low between the north tip of Black Mesa and the town of Hernandez to the southwest. In the general area of the Velarde graben, the Santa Fe Group is differentiated into seven lithostratigraphic units assigned to the Tesuque Formation. Some of these units were originally assigned to the Chamita Formation in previous studies. However, we propose abandoning the Chamita Formation and reassigning its strata to the Tesuque Formation, because 1) strata assigned to the Tesuque Formation correlate into the Chamita type section, and 2) in the absence of the Ojo Caliente Sandstone Member it is not possible to map a contact between the Chamita and Tesuque formations with confidence. Vertical stratigraphic displacements along the border faults of the Velarde graben since 7.7-8.4 Ma range from as low as 65 m on the Rio de Truchas fault to 435 m on the Santa Clara fault. On the southern tip of Black Mesa, comparison of vertical slip rates for the Santa Clara fault over two time periods yields slightly higher vertical slip rate values for 3-8 Ma (48-56 m/Myr) compared to 0-3 Ma (35-48 m/Myr). On the gravity high separating the Velarde graben from another gravity low to the south, a vertical slip rate calculated for the Santa Clara fault gives 48-50 m/Myr for the time after 9.9 Ma.
Increasing slip rates on the Velarde graben faults in the late Miocene may have induced west-northwestward progradation of alluvial slope facies (lithosome A) derived from the Sangre de Cristo Mountains.
On the origin of the thermoluminescence of Al2O3:Cr,Ni The recently discovered high-intensity thermoluminescence (TL) emission in Al2O3 doped with Cr and Ni is analysed more deeply by measuring the effects of x-ray irradiation on the optical absorption in parallel with the TL process, together with the effect of optical bleaching. It is proposed that the high-intensity peak is due to oxygen vacancies, induced by the presence of Ni. The main recombination centre is Cr.
Antimicrobial and Cytotoxic Activities of Cyanobacteria. The present study screened ten cyanobacterial extracts for antimicrobial activity, cytotoxicity (against the human cervical carcinoma cell lines HeLa and SiHa), and chemical composition by GC-MS. Cyanobacterial extracts were subjected to an agar well diffusion assay at a concentration of 100 µg well⁻¹ and incubated at 37 ± 2 °C. Inhibition zones were measured in millimetres (mm) after 18-24 hours. Minimum inhibitory concentration (MIC) was determined using the broth microdilution method in 96-well microtitre plates. Drug dilutions were performed using cation-adjusted Mueller Hinton Broth (MHB) in a concentration range of 128-0.25 µg ml⁻¹. Cytotoxicities were assessed using the MTT assay. GC-MS analysis was carried out on a GCMS-QP 2010 Plus Shimadzu system with an RTX-5 column (Restek, USA) (60 m, ID 0.25 mm, film thickness 0.25 µm). Cyanobacterial extracts exhibited a significant antibacterial effect on clinical isolates of S. aureus, MRSA, and MRSE, whereas they only selectively inhibited Gram-negative bacteria. Minimum inhibitory concentrations (MIC) were in the range of 64 to 128 µg/ml. The Activity Index of active extracts ranged from 0.33 to 1.50. Activity Index and Zone of Inhibition were significantly correlated (p < 0.03). GC-MS detected distinct groups of active compounds with a pronounced presence of saturated and unsaturated fatty acids. Pharmaceutically important compounds such as sesquiterpenoids (farnesol), dicarboxylic acids, imidazoles, indolinones, α-tocopherol, phenolics, phytosterols, heptadecane, tetradecane, and 9-octadecenal were moderately present. The extracts also exhibited a cytotoxic effect on human cervical carcinoma cell lines, with LD50 values ranging from 34 to 146 µg/ml. Cyanobacterial species have distinct active-group metabolites which are promising sources of antiproliferative and antimicrobial compounds.
Efficacy and safety of miconazole for oral candidiasis: a systematic review and meta-analysis. The objective of this study is to assess the efficacy and safety of miconazole for treating oral candidiasis. Twelve electronic databases were searched for randomized controlled trials evaluating treatments for oral candidiasis and complemented by hand searching. The clinical and mycological outcomes, as well as adverse effects, were set as the primary outcome criteria. Seventeen trials were included in this review. Most studies were considered to have a high or moderate level of bias. Miconazole was more effective than nystatin for thrush. For HIV-infected patients, there was no significant difference in the efficacy between miconazole and other antifungals. For denture wearers, microwave therapy was significantly better than miconazole. No significant difference was found in the safety evaluation between miconazole and other treatments. The relapse rate of miconazole oral gel may be lower than that of other formulations. This systematic review and meta-analysis indicated that miconazole may be an optional choice for thrush. Microwave therapy could be an effective adjunct treatment for denture stomatitis. Miconazole oral gel may be more effective than other formulations with regard to long-term results. However, future studies that are adequately powered, large-scale, and well-designed are needed to provide higher-quality evidence for the management of oral candidiasis.
A preliminary estimate of the Stokes dissipation of wave energy in the global ocean The turbulent Reynolds stresses in the upper layers of the ocean interact with the vertical shear of the Stokes drift velocity of the wave field to extract energy from the surface waves. The resulting rate of dissipation of wind waves in the global ocean is about 2.5 TW on the average but can reach values as high as 3.7 TW, making it as important as the dissipation of wave energy in the surf zones around the ocean margins. More importantly, the effect of Stokes dissipation is felt throughout the mixed layer. It also contributes to Langmuir circulations. Unfortunately, this wave dissipation mechanism has hitherto been largely ignored. In this note, we present a preliminary estimate of the Stokes dissipation rate in the global oceans based on the results of the WAVEWATCH III model for the year 2007 to point out its potential importance. Seasonal and regional variations are also described.
- Endoparasitic diseases cause losses of cattle and lambs
- Coccidiosis in beef and dairy calves
- Ketosis and fatty liver syndrome causing a variety of clinical signs in dairy cattle
- Nematodirosis causing losses in lambs
- Pandemic H1N1/09 influenza virus in pigs predisposing to Streptococcus suis septicaemia
- Deaths of roseate terns (Sterna dougallii) caused by predator attacks
These are among the matters discussed in the Veterinary Laboratories Agency's (VLA's) disease surveillance report for June
Electrostatic free energy of weakly charged macromolecules in solution and intermacromolecular complexes consisting of oppositely charged polymers. When oppositely charged polyelectrolytes are mixed in water, attraction between oppositely charged groups may lead to the formation of polyelectrolyte complexes (associative phase separation, complex coacervation, interpolymer complexes). Theory is presented to describe the electrostatic free energy change when ionizable (annealed) (macro-)molecules form a macroscopic polyelectrolyte complex. The electrostatic free energy includes an electric term as well as a chemical term that is related to the dissociation of the ionic groups in the polymer. An example calculation for complexation of polyacid with polybase uses a cylindrical diffuse double layer model for free polymer in solution and electroneutrality within the complex and calculates the free energy of the system when the polymer is in solution or in a polyelectrolyte complex. Combined with a term for the nonelectrostatic free energy change upon complexation, a theoretical stability diagram is constructed that relates pH, salt concentration, and mixing ratio, which is in qualitative agreement with an experimental diagram obtained by Bungenberg de Jong for complex coacervation of arabic gum and gelatin. The theory furthermore explains the increased tendency toward phase separation when the polymer becomes more strongly charged and suggests that complexation of polyacid or polybase with zwitterionic polymer (e.g., protein) of the same charge sign (at the "wrong side" of the iso-electric point) may be due (in part) to an induced charge reversal of the protein.
Extended Analysis of the Sn V Spectrum The spectrum of palladium-like Sn V excited in a vacuum spark has been studied in the 200–500 Å wavelength region. More than 200 new spectral lines of the 4d⁹5s - 4d⁹6p, 4d⁹5s - 4d⁸5s5p, 4d⁹5p - 4d⁹7s and 4d⁹5p - 4d⁹6d transitions were identified and about 80 levels of the 4d⁹7s, 4d⁹6p, 4d⁹6d and 4d⁸5s5p configurations were found. The number of known Sn V lines and levels was increased by a factor of 2. Sixteen lines with measurable autoionization widths were observed in the 200–209 Å region and identified as the 4d⁹5s - (4d⁸5s6p + 4d⁸5s4f) transitions.
Examples of semirings of endomorphisms of semigroups Semirings of endomorphisms of semigroups form an important class of semirings. All examples which can be found in the literature concern semigroups with a subcommutative operation. We show that there exists a non-subcommutative semigroup whose endomorphisms form a semiring (this answers a question raised by Professor A. H. Clifford). We also give an example of a semigroup whose set of endomorphisms is not embeddable into a semiring; it is, however, the disjoint union of two semirings, one of which is not embeddable into a semiring with identity.
Where Economics Went Wrong Milton Friedman once predicted that advances in scientific economics would resolve debates about whether raising the minimum wage is good policy. Decades later, Friedman's prediction has not come true. This book argues that it never will. Why? Because economic policy, when done correctly, is an art and a craft. It is not, and cannot be, a science. The book explains why classical liberal economists understood this essential difference, why modern economists abandoned it, and why now is the time for the profession to return to its classical liberal roots. Carefully distinguishing policy from science and theory, classical liberal economists emphasized values and context, treating economic policy analysis as a moral science where a dialogue of sensibilities and judgments allowed for the same scientific basis to arrive at a variety of policy recommendations. Using the University of Chicago, one of the last bastions of classical liberal economics, as a case study, the book examines how both the MIT and Chicago variants of modern economics eschewed classical liberalism in their attempt to make economic policy analysis a science. By examining the way in which the discipline managed to lose its bearings, the authors delve into such issues as the development of welfare economics in relation to economic science, alternative voices within the Chicago School, and exactly how Friedman got it wrong. Contending that the division between science and prescription needs to be restored, the book makes the case for a more nuanced and self-aware policy analysis by economists.
Internal Report FB 10, Semantics Group: Modular, Changeable Requirements for Telephone Switching in CSP-OZ Requirements documents for software need not only to be written; they also need to be maintained afterwards. In telephone switching, particular problems arise due to the strong mutual dependences of telephone features and due to the current rapid change in this area. We attempt to avoid, or at least reduce, feature interaction problems during the extension or change of a requirements document through a suitable requirements document structure. We perceive all variants and revisions as a single requirements family, documented together. Our approach to requirements specification grew out of the Functional Documentation approach, also known as "Parnas tables". We now apply and extend this approach using the formal description technique CSP-OZ, taking advantage of its built-in support for inheritance and parallel composition. We structure the requirements in a modular way suitable to our application area, and we present a way to compose the partial specifications. A preliminary case study demonstrates our approach and shows that CSP-OZ can indeed be used for it. More work is required: besides an extension of the case study, several aspects of the incremental specification formalism still need to be worked out.
Efficient integrated weed management practices for higher productivity and profitability in vegetable pea ( Pisum sativum var. hortense ) A field experiment was conducted on vegetable pea (Pisum sativum L. var. hortense) at the Vegetable Research Station, Chandra Shekhar Azad University of Agriculture and Technology, Kanpur during 2014–17 to develop efficient integrated weed management practices. Seven different treatments, viz. pendimethalin @0.75 kg a.i./ha (pre-emergence); pendimethalin @0.75 kg a.i./ha (pre-emergence) + one hand weeding at 40 DAS; glyphosate @1.0 kg a.i./ha at 15 days before sowing; glyphosate @1.0 kg a.i./ha + one hand weeding at 40 DAS; mulching with black polythene; straw/grass mulch; and hand weeding thrice at 20, 40 and 60 DAS, were tested against two checks, i.e. weed free and weedy check (no weeding), in a randomized block design with three replications. Vegetable pea variety Azad Pea-3 was used in the experiment. The crop was raised with the recommended package of practices apart from the treatments. Based on pooled data, among the different treatments excluding the weed-free check, pendimethalin @0.75 kg a.i./ha (pre-emergence) + one hand weeding at 40 DAS recorded the highest plant height (58.75 cm), seed weight/plant (12.73 g), number of seeds/pod (6.55), 100-seed weight (31.93 g), seed yield (15.96 q/ha), seedling length (17.76 cm), seed vigour index-I (1619.92) and seed vigour index-II (13.17). The same treatment also recorded the significantly highest net return and B:C ratio (2.30). Thus, pendimethalin @0.75 kg a.i./ha (pre-emergence) + one hand weeding at 40 DAS proved to be the most profitable integrated weed management practice for vegetable pea. Vegetable pea (Pisum sativum var. hortense) is an important vegetable crop. It is grown in almost all agroclimatic zones, during the rabi season in the plains and the summer season in the hills, as a cash crop. It is mainly grown for its tender green pods as a fresh vegetable.
It is a rich source of protein, calcium, phosphorus, iron and vitamins. Vegetable pea productivity is severely affected by various biotic and abiotic factors. Among them, heavy weed infestation is the major biotic constraint responsible for low seed yield as well as poor seed quality. Weeds compete with the main crop for nutrients, moisture, sunlight, space, etc., resulting in lower yield and poor quality (). Both grassy and broad-leaved weeds infest the crop, causing significant yield losses in commercial crops (). Peas are poor competitors, particularly at the seedling stage, but the critical period for crop-weed competition varies from 40-60 days after sowing; hence, avoiding early-season weed interference is critical (, ). Yield reductions of more than 40% in pea due to weed competition have been reported (). Some authors reported yield reductions in the range of 37.3-64.4% (Harker 2001). Integrated weed management is becoming popular among farmers as they continue to realize the usefulness of herbicides along with a few manual weedings. Bakht et al. found that newspaper and black mulch are effective tools to control weeds. Application of pre-emergence herbicides effectively decreased weed density and resulted in higher pod yield (). Vaishya et al. reported that post-emergence herbicides have long persistence and a wide spectrum of weed control. Application of various herbicides significantly increased vegetable pea yield (79.6-85.1%) (). Keeping this in view, the present study was undertaken to develop efficient integrated weed management practices for pea. MATERIALS AND METHODS The field experiment was conducted during the rabi seasons of 2014-15, 2015-16 and 2016-17 at the Vegetable Research Station, Chandra Shekhar Azad University of Agriculture and Technology, Kanpur, India, located at 26.4912° N latitude, 80.3071° E longitude, at an elevation of 133 m amsl. Seven different treatments, viz.
pendimethalin @0.75 kg a.i./ha (pre-emergence), pendimethalin @0.75 kg a.i./ha (pre-emergence) + one hand weeding at 40 DAS, glyphosate @1.0 kg a.i./ha at 15 days before sowing, glyphosate @1.0 kg a.i./ha + one hand weeding at 40 DAS, mulching with black polythene, straw/grass mulch, and hand weeding thrice (at 20, 40 and 60 DAS) were tested against two checks, i.e. weed free and weedy check (no weeding), in a randomized block design replicated thrice. Vegetable pea variety Azad Pea-3 was used in the experiment. The crop was raised with the recommended package of practices apart from the treatments. The recommended doses of nitrogen (40 kg/ha), phosphorus (60 kg/ha) and potassium (50 kg/ha) were applied. The crop was sown in the month of November during all three years at 30 cm × 10 cm spacing with a seed rate of 85 kg/ha. Observations were recorded as per the standard procedure. Further, observations on seed quality parameters were recorded as per the standard procedure (ISTA 1993). Vigour index of the seeds was assessed based on germination percentage, seedling length and seedling dry weight as suggested by Abdul-Baki and Anderson. Germination %, seedling length, seedling dry weight and vigour index (I & II) were calculated using the following formulae: Germination % = (Number of normally germinated seeds / Total number of seeds) × 100. Seedling length: root and shoot length of five fresh seedlings was measured in centimetres to one decimal place; total seedling length was calculated by adding root and shoot length. Seedling dry weight: the seedlings used for recording were oven dried at 103 ± 1 °C for 12 h, and the dried samples were weighed on an electronic balance to three decimal places in mg. Vigour Index (I) = Germination percentage × Seedling length (cm). Vigour Index (II) = Germination percentage × Seedling dry weight (mg). RESULTS AND DISCUSSION Growth and yield attributes: Vegetable pea growth and yield attributes were influenced significantly by different treatments (Table 1).
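The seed-quality formulae above can be collected into a short script. A minimal sketch in Python, where the function names and the sample figures (91 germinated seeds out of 100, etc.) are illustrative rather than taken from the study's data:

```python
def germination_pct(normal_seedlings, total_seeds):
    """Germination % = (normally germinated seeds / total seeds) x 100."""
    return normal_seedlings / total_seeds * 100

def vigour_index_1(germ_pct, seedling_length_cm):
    """Vigour Index I = germination % x total seedling length (cm)."""
    return germ_pct * seedling_length_cm

def vigour_index_2(germ_pct, seedling_dry_weight_mg):
    """Vigour Index II = germination % x seedling dry weight (mg)."""
    return germ_pct * seedling_dry_weight_mg

# Hypothetical plot: 91 of 100 seeds germinate normally,
# mean total seedling length 17.76 cm, mean dry weight 0.14 mg.
g = germination_pct(91, 100)            # 91.0
svi1 = vigour_index_1(g, 17.76)
svi2 = vigour_index_2(g, 0.14)
print(g, svi1, svi2)
```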
Herbicide alone or along with one hand weeding, mulching, and hand weeding thrice (at 20, 40 and 60 DAS) proved better for growth and yield attributes than the weedy check. Among the seven tested treatments, pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS recorded the significantly highest plant height of 58.75 cm, followed by hand weeding thrice and mulching with black polythene. For seed weight/plant, pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS likewise showed the highest value (12.73 g), followed by straw/grass mulch and hand weeding thrice. A similar trend was observed for number of seeds/pod and 100-seed weight. The results of the study revealed that the growth and yield attributes of vegetable pea increased significantly with pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS. This may be attributed to better weed management throughout the cropping period, which minimized competition between crop plants and weeds for moisture and nutrients, the basic requirements for healthy plant growth. Sultana et al. and Rana et al. reported similar results in vegetable pea. Brijbhooshan et al. also reported that one hand weeding at 25 DAS significantly reduced the density and dry matter of weeds and increased the yield attributes and seed yield. The growth and yield attributes of vegetable pea also responded well to hand weeding thrice (at 20, 40 and 60 DAS), which may be due to effective weed management. Similar results were obtained by Muhammad et al. in chickpea, who reported that three hand weedings during the crop period controlled weed density up to 96.22%.
The lower values of growth and yield attributes under glyphosate alone or along with one hand weeding were mainly due to the toxic effect of glyphosate, a broad-spectrum herbicide that hinders the photosynthetic activity of the plant and thereby reduces growth and yield attributes. The weedy check recorded the minimum values of all traits due to excess weed population, which led to greater competition with the crop plants for moisture and nutrients. Seed yield: Seed yield of vegetable pea was influenced significantly by the different treatments (Table 2). Herbicide alone or along with one hand weeding, mulching, and hand weeding thrice again proved better for seed yield than the weedy check. Among the seven tested treatments, pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS recorded the significantly highest seed yield of 15.96 q/ha, followed by hand weeding thrice (at 20, 40 and 60 DAS) and mulching with straw/grass. Although seed yield was maximum in the weed-free check (16.73 q/ha), it was statistically at par with pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS. The yield enhancement under pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS over the weedy check was to the tune of 5.96 q/ha, or 59.6%. These results agree with the findings of Mathukia et al. Seed quality parameters: The pooled analysis showed that the seed quality parameters were significantly affected by the different weed management practices (Tables 2 and 3). Among the tested treatments, pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS also recorded the higher values of seed quality parameters, viz. seedling length (17.76 cm), seed vigour index-I (1619.92) and seed vigour index-II (13.17), followed by hand weeding thrice (at 20, 40 and 60 DAS). A similar trend was observed for germination %.
Kumar and Singh found the same result using pendimethalin @0.5 kg/ha along with one hand weeding. Economics: Net return is the difference between gross income and cost of cultivation, and gross income dominated cultivation cost in the present study. Pre-emergence application of pendimethalin @0.75 kg a.i./ha + one hand weeding at 40 DAS registered the significantly highest B:C ratio (2.30), followed by pendimethalin @0.75 kg a.i./ha alone, which may be due to the higher gross income in these treatments. Although seed yield was maximum in the weed-free check, the B:C ratio of this practice is very poor, as the greater manpower requirement increases the cost of cultivation. It can be inferred that pre-emergence (PE) application of pendimethalin @0.75 kg a.i./ha, alone or along with one hand weeding at 40 DAS, is a profitable weed management practice in vegetable pea for the agro-climatic conditions of Zone-IV.
Localized delivery of ibuprofen via a bilayer delivery system (BiLDS) for supraspinatus tendon healing in a rat model The high prevalence of tendon retear following rotator cuff repair motivates the development of new therapeutics to promote improved tendon healing. Controlled delivery of nonsteroidal anti-inflammatory drugs to the repair site via an implanted scaffold is a promising option for modulating inflammation in the healing environment. Furthermore, biodegradable nanofibrous delivery systems offer an optimized architecture and surface area for cellular attachment, proliferation, and infiltration while releasing soluble factors to promote tendon regeneration. To this end, we developed a bilayer delivery system (BiLDS) for localized and controlled release of ibuprofen (IBP) to temporally mitigate inflammation and enhance tendon remodeling following surgical repair by promoting organized tissue formation. In vitro evaluation confirmed the delayed and sustained release of IBP from Labrafil-modified poly(lactic-co-glycolic) acid microspheres within sintered poly(caprolactone) electrospun scaffolds. Biocompatibility of the BiLDS was demonstrated with primary Achilles tendon cells in vitro. Implantation of the IBP-releasing BiLDS at the repair site in a rat rotator cuff injury and repair model led to decreased expression of the pro-inflammatory cytokine tumor necrosis factor and increased expression of the anti-inflammatory cytokine transforming growth factor-β1. The BiLDS remained intact for mechanical reinforcement and recovered the tendon structural properties by 8 weeks. These results suggest the therapeutic potential of a novel biocompatible nanofibrous BiLDS for localized and tailored delivery of IBP to mitigate tendon inflammation and improve repair outcomes. Future studies are required to define the mechanical implications of an optimized BiLDS in a rat model beyond 8 weeks or in a larger animal model.
Characteristic quasi-polynomials of ideals and signed graphs of classical root systems Using signed graphs as the main tool, we give a full description of the characteristic quasi-polynomials of ideals of classical root systems ($ABCD$) with respect to the integer and root lattices. As a result, we obtain a full description of the characteristic polynomials of the toric arrangements defined by these ideals. As an application, we provide a combinatorial verification of the fact that the characteristic polynomial of every ideal subarrangement factors over the dual partition of the ideal in the classical cases. INTRODUCTION In recent years, the "finite field method" for studying hyperplane arrangements has been developed, extended and put into practice. Roughly speaking, given the real hyperplane arrangement A(R) associated to a list A of elements in Z^ℓ, we can take coefficients modulo a positive integer q and get an arrangement A(Z/qZ) of subgroups in (Z/qZ)^ℓ. The central theorem in the theory asserts that when q is a sufficiently large prime, the arrangement A(Z/qZ) is defined over the finite field F_q, and the cardinality of its complement #M(A; Z^ℓ, Z/qZ) coincides with χ_{A(R)}(q), the evaluation of the characteristic polynomial χ_{A(R)}(t) of A(R) at q (e.g., ). Later on, Kamiya-Takemura-Terao showed that #M(A; Z^ℓ, Z/qZ) is actually a quasi-polynomial in q, and left the task of understanding the constituents of this quasi-polynomial as an interesting problem. A number of attempts have been made to tackle the problem (e.g., ); in particular, two interpretations of every constituent, via subspace and toric viewpoints, have been found. These developments open a new direction for studying the combinatorics and topology of hyperplane and toric arrangements in one single quasi-polynomial.
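The finite field method lends itself to direct experimentation. A minimal brute-force sketch in Python (not from the paper): for the Weyl arrangement of type B₂, the counts below follow (q−1)(q−3) for odd q and (q−2)² for even q, exhibiting the quasi-polynomial behaviour with minimum period 2.

```python
from itertools import product

def complement_count(A, q):
    """#M(A; Z^l, Z/qZ): the number of points of (Z/qZ)^l lying on none
    of the subgroups a_1*x_1 + ... + a_l*x_l = 0 (mod q), for a in A."""
    l = len(A[0])
    return sum(
        all(sum(ai * xi for ai, xi in zip(a, x)) % q != 0 for a in A)
        for x in product(range(q), repeat=l)
    )

# Weyl arrangement of type B2: hyperplanes x=0, y=0, x+y=0, x-y=0.
B2 = [(1, 0), (0, 1), (1, 1), (1, -1)]
for q in range(3, 9):
    print(q, complement_count(B2, q))
# q = 3..8 -> 0, 4, 8, 16, 24, 36
```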
Much of the motivation for the study of hyperplane and toric arrangements comes from the arrangements defined by irreducible root systems. Apart from the theoretical aspects, the "finite field method" and its toric analogue proved to have efficient applications in computing the characteristic (quasi-)polynomials of several arrangements arising from these vector configurations (e.g., ). More concrete computational results have also been derived to assist the observation of interesting coincidences, of which we mention some important examples relevant to our study. The surprising connection between independent calculations on the Ehrhart quasi-polynomials and the characteristic quasi-polynomials provided a main flavor of the analysis of the deformations of root system arrangements. Combining the computation of the arithmetic Tutte polynomials of classical root systems with the previously mentioned calculations provided the authors with the key observation of the identification between the last constituent of the characteristic quasi-polynomial and the corresponding toric arrangement. Passing from global to local, one may wish to compute the characteristic quasi-polynomials of subsets of a given root system. A particularly well-behaved class of subsets is that of ideals, whose associated ideal subarrangements are proved to be free in the sense of Terao. As a consequence, the characteristic polynomial of every ideal subarrangement factors over the integers, with the roots described combinatorially by the dual partition of the ideal. However, some combinatorial explanations for the factorization may have been hidden behind the freeness. The main goal of this paper is to compute the characteristic quasi-polynomials of the ideals of classical root systems with respect to two different choices of lattices. We were inspired and motivated by ideas and techniques from earlier work that greatly help us in doing so.
In addition, we wish to provide more combinatorial insight into the understanding of the constituents in connection with signed graphs. The remainder of the paper is organized as follows. In Section 2, we recall definitions and basic facts about characteristic quasi-polynomials, irreducible root systems and their ideals. We also recall the constructions of the classical root systems together with the properties of the associated signed graphs. In Section 3, with signed graphs as the combinatorial ingredient, we compute the characteristic quasi-polynomial of every ideal of a given classical root system with respect to the integer and root lattices. As a result, we obtain a full description of the characteristic polynomials of the toric arrangements defined by the ideals. We also provide a direct verification of the factorization of the characteristic polynomial of every ideal subarrangement in the classical cases without using freeness (Theorem 3.12). 2. PRELIMINARIES 2.1. Characteristic quasi-polynomials. Let A be a finite list (multiset) of elements in Z^ℓ and let q ∈ Z_{>0}. For each α = (a_1, ..., a_ℓ) ∈ A, define the subgroup H_{α,Z/qZ} of (Z/qZ)^ℓ by H_{α,Z/qZ} := {x = (x_1, ..., x_ℓ) ∈ (Z/qZ)^ℓ : a_1 x_1 + ... + a_ℓ x_ℓ = 0}. Then the list A determines the q-reduction arrangement A(Z/qZ) := {H_{α,Z/qZ} : α ∈ A} in (Z/qZ)^ℓ. The complement of A(Z/qZ) is defined by M(A; Z^ℓ, Z/qZ) := (Z/qZ)^ℓ \ ⋃_{α ∈ A} H_{α,Z/qZ}. It is proved in that #M(A; Z^ℓ, Z/qZ) is a monic quasi-polynomial in q for which the LCM-period ρ_A is a period. This quasi-polynomial is called the characteristic quasi-polynomial of A (or of A(Z/qZ)) and is denoted by quasi_A(q). More precisely, there exist monic polynomials f_A^k(t), 1 ≤ k ≤ ρ_A, such that quasi_A(q) = f_A^k(q) whenever q ≡ k (mod ρ_A). It is known (e.g., ) that the 1-constituent f_A^1(t) coincides with χ_{A(R)}(t), the characteristic polynomial (e.g., ) of the real hyperplane arrangement A(R) (the R-plexification of A). 2.2. Root systems and signed graphs. Our standard reference for root systems is. Let V be an ℓ-dimensional Euclidean space with the standard inner product (·,·). Let Φ be an irreducible (crystallographic) root system in V.
Fix a positive system Φ⁺ ⊆ Φ and the associated set of simple roots (base) ∆ := {α_1, ..., α_ℓ}. Notation: for simplicity of notation, we use the same symbol M for the realization of a matrix M of size ℓ × m as the finite list of elements in Z^ℓ whose elements are the columns of M. For each Ψ ⊆ Φ⁺, we assume that an ℓ × #Ψ integral matrix S_Ψ is given; in other words, S_Ψ is the coefficient matrix of Ψ with respect to the base ∆. Denote (Z/qZ)^× := Z/qZ \ {0}. We then call quasi_{S_Ψ}(q) the characteristic quasi-polynomial of Ψ with respect to the root lattice, and interpret it via the hyperplanes H_α orthogonal to the roots α ∈ Ψ. It is not hard to see that H_Ψ is the R-plexification of S_Ψ, i.e., H_Ψ = S_Ψ(R). Note also that H_{Φ⁺} is called the Weyl arrangement of Φ⁺, and H_Ψ is a Weyl subarrangement. In the remainder of the paper, we are mainly interested in root systems of classical type (ABCD). Let us briefly recall the constructions of these root systems¹ following. Let {ε_1, ..., ε_ℓ} be an orthonormal basis for V. If ℓ ≥ 2, then Φ(B_ℓ) := {±ε_i (1 ≤ i ≤ ℓ)} ∪ {±ε_i ± ε_j (1 ≤ i < j ≤ ℓ)}, with #Φ(B_ℓ) = 2ℓ², is an irreducible root system in V of type B_ℓ. We may choose a positive system, and write T_Ψ for the coefficient matrix of Ψ with respect to the orthonormal basis. We then call quasi_{T_Ψ}(q) the characteristic quasi-polynomial of Ψ with respect to the integer lattice. (¹ We decided to omit the construction of type A root systems, as the calculation in this type follows from those in the other types; e.g., see formula (3.1).) The matrices T_Ψ and S_Ψ are related by T_Ψ = P(B_ℓ) S_Ψ, where P(B_ℓ) is a unimodular matrix of size ℓ × ℓ. Similarly, for ℓ ≥ 2, an irreducible root system of type C_ℓ is given by Φ(C_ℓ) := {±2ε_i (1 ≤ i ≤ ℓ)} ∪ {±ε_i ± ε_j (1 ≤ i < j ≤ ℓ)}. Finally, for ℓ ≥ 3, an irreducible root system of type D_ℓ is given by Φ(D_ℓ) := {±ε_i ± ε_j (1 ≤ i < j ≤ ℓ)}. From the constructions above, we obtain the comparison of the height placements of positive roots in Φ(B_ℓ), Φ(C_ℓ) and Φ(D_ℓ) as in Table 1.
In the language of signed graphs following, we can associate to each Ψ ⊆ Φ⁺ a signed graph G(Ψ), with the set of positive edges determined by Ψ. To extract information from Ψ by using G(Ψ), we associate to it an unordered sequence of nonnegative integers, denoted SG(Ψ). Let us recall the recent advance towards the study of the ideals. Let Φ(k) ⊆ Φ⁺ be the set consisting of positive roots of height k. Let I be an ideal of Φ⁺ and set M := max{ht(α) | α ∈ I}. The height distribution of I is defined as a sequence of positive integers (i_1, ..., i_M), where i_k := #(I ∩ Φ(k)) for 1 ≤ k ≤ M. The dual partition DP(I) of (the height distribution of) I is given by a sequence of nonnegative integers ((0)^{ℓ−i_1}, (1)^{i_1−i_2}, ..., (M)^{i_M}), where the notation (a)^b means that the integer a appears exactly b times. Although the definition of the dual partition seems to depend on the (increasing) order of components in the sequence, this requirement is not important in this paper. Two dual partitions of an ideal are conventionally identical if the partitions differ only by a re-ordering of the components. 3. COMPUTATION ON IDEALS In the remainder of the paper, we assume that Φ is of classical type. We first summarize some easy cases in which the computation of the characteristic quasi-polynomials is manageable thanks to Corollary 2.2; in these cases the minimum period coincides with the LCM-period. For the other cases, the minimum period of quasi_{S_I}(q) is at most 2; hence we know the 1-constituents. We are left with the task of determining f_{S_I}^2(t), or equivalently, quasi_{S_I}(q) for even q, when Φ is of type B, C or D. Turning the problem around, we would like to verify Corollary 2.2 by using the information of ideals via signed graphs without relying on freeness, which we will do in Theorem 3.12. The partitions give a partition of I, which we call the B-partition, as follows. If ε_i + ε_j ∉ I for all i, j (the type A case), then quasi_{S_I}(q) is given by a single polynomial for all q ∈ Z_{>0}. Now assume that some ε_i + ε_j ∈ I with 1 ≤ i < j ≤ ℓ.
In particular, R is an ideal of the root subsystem of Φ(B_ℓ) of type B_{ℓ−s+1} with a base given by ∆(B_{ℓ−s+1}) = {α_s, ..., α_ℓ}. Furthermore, the corresponding identity holds for all q ∈ Z_{>0}. It thus suffices to consider the case s = 1, i.e., ε_1 ∈ I. Proof. The proof of (a) is straightforward from the definition of ideals. The proof of (b) follows from the height placements in Table 1. Theorem 3.2. Under the assumptions of Lemma 3.1, the stated formula holds for even q ∈ Z_{>0}. Proof. The proof of the first equality is similar to (but more general than) an earlier one, using suitable changes of variables. The second equality follows from Lemma 3.1 and Corollary 2.2. Consider an irreducible root system of type B_{ℓ−1} with a given base. We define a sequence of subsets {U_k}_{k=1}^{ℓ} (depending on I) of Φ⁺(B_{ℓ−1}), classified into two types, as follows. (³ This fact is true for any root system, as a consequence of, e.g., .) Proof. With the notion of contraction lists (e.g., ), we can write quasi_{S_I}(q) accordingly; for all q ∈ Z_{>0}, applying the Deletion-Contraction formula recursively gives the stated expression. In Lemma 3.8 and Theorem 3.9 below, we use the same assumptions and notation as in Lemma 3.7. Lemma 3.8. For even q ∈ Z_{>0}, the stated identity holds. Proof. This follows from the height placements in Table 1. Combined with a recent study on characteristic quasi-polynomials and toric arrangements, our computation gives a full description of the characteristic polynomials of the toric arrangements defined by the ideals. We complete this section by giving a direct verification of Corollary 2.2 when Φ is any classical root system. We restrict the discussion to type D root systems, as the other cases are easy. For any ideal I ⊆ Φ⁺(D_ℓ) with SG(I) = (p_1, ..., p_ℓ) defined in (3.4), we define p_i^{(+)} for each 1 ≤ i ≤ ℓ. It is easily seen that DP(I) = (d_1, ..., d_ℓ), where we agree that p_0^{(+)} = 0.
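The height distribution and dual partition from Section 2 are easy to compute mechanically. A sketch in Python, assuming the convention DP(I) = ((0)^{ℓ−i_1}, (1)^{i_1−i_2}, ..., (M)^{i_M}); the example ideal is hypothetical, not one from the paper:

```python
from collections import Counter

def dual_partition(heights, rank):
    """Dual partition DP(I) of an ideal I in a root system of the given
    rank, computed from the multiset of heights of the roots in I:
    the integer a >= 1 appears i_a - i_{a+1} times, and 0 appears
    rank - i_1 times, where i_k = #{roots of I of height k}."""
    M = max(heights) if heights else 0
    cnt = Counter(heights)
    counts = [cnt[k] for k in range(1, M + 2)]   # i_1, ..., i_M, i_{M+1} = 0
    dp = [0] * (rank - counts[0])                # (0)^(rank - i_1)
    for a in range(1, M + 1):
        dp.extend([a] * (counts[a - 1] - counts[a]))
    return dp

# Hypothetical ideal of a rank-3 root system containing three roots of
# height 1, two of height 2 and one of height 3:
print(dual_partition([1, 1, 1, 2, 2, 3], rank=3))  # -> [1, 2, 3]
```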
Acknowledgements: The author is greatly indebted to Professor Masahiko Yoshinaga for drawing the author's attention to the characteristic quasi-polynomials of the ideals and for many helpful suggestions during the preparation of the paper. The author wishes to thank Professor Michele Torielli for helpful comments concerning the signed graphs and Ye Liu for stimulating conversations. He also gratefully acknowledges the support of the scholarship program of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) under grant number 142506.
Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN. The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W>5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003<x<0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2<z<0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range $0.02<P_{\rm{hT}}^{2}<3$ (GeV/$c$)$^2$. The multiplicities are presented as a function of $P_{\rm{hT}}^{2}$ in three-dimensional bins of $x$, $Q^2$, $z$ and compared to previous semi-inclusive measurements. We explore the small-$P_{\rm{hT}}^{2}$ region, i.e. $P_{\rm{hT}}^{2}<1$ (GeV/$c$)$^2$, where hadron transverse momenta are expected to arise from non-perturbative effects, and also the domain of larger $P_{\rm{hT}}^{2}$, where contributions from higher-order perturbative QCD are expected to dominate. The multiplicities are fitted using a single-exponential function at small $P_{\rm{hT}}^{2}$ to study the dependence of the average transverse momentum $\langle P_{\rm{hT}}^{2}\rangle$ on $x$, $Q^2$ and $z$. The power-law behaviour of the multiplicities at large $P_{\rm{hT}}^{2}$ is investigated using various functional forms. The fits describe the data reasonably well over the full measured range. Introduction A complete understanding of the three-dimensional parton structure of a fast moving nucleon requires the knowledge of the intrinsic motion of quarks in the plane transverse to the direction of motion, both in momentum and coordinate space.
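The single-exponential fit at small $P_{\rm{hT}}^{2}$ mentioned in the abstract can be sketched as follows, assuming the standard form $N\exp(-P_{\rm{hT}}^{2}/\langle P_{\rm{hT}}^{2}\rangle)$ so that the inverse slope estimates $\langle P_{\rm{hT}}^{2}\rangle$; the data points below are synthetic, not COMPASS data.

```python
import math

def fit_exponential(p2_values, multiplicities):
    """Least-squares straight-line fit of log(m) = log(N) - p2/avg_p2,
    i.e. the single-exponential form m = N * exp(-p2/avg_p2).
    Returns (N, avg_p2)."""
    xs = p2_values
    ys = [math.log(m) for m in multiplicities]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -1.0 / slope

# Synthetic multiplicities generated with <P_hT^2> = 0.25 (GeV/c)^2:
p2 = [0.05 * i for i in range(1, 16)]           # up to 0.75 (GeV/c)^2
m = [2.0 * math.exp(-x / 0.25) for x in p2]
N, avg_p2 = fit_exponential(p2, m)
print(round(N, 3), round(avg_p2, 3))            # -> 2.0 0.25
```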
While the spatial distributions of quarks in the transverse plane are described by generalised parton distributions (GPDs), the momentum distributions of quarks in the transverse plane are described by transverse-momentum-dependent (TMD) parton distribution functions (PDFs). A precise knowledge of TMD-PDFs is found to be crucial for the explanation of many single-spin effects observed in hard scattering reactions, in addition to the important role they play in spin-independent processes. In a similar way, transverse-momentum-dependent fragmentation functions (TMD-FFs) are crucial for the description of hard scattering reactions involving hadron production. Both PDFs and FFs are non-perturbative quantities that are assumed to be process-independent. The simplest examples are the spin-averaged TMD-PDF f_1^q(x, k_T) and the spin-averaged TMD-FF D_q^h(z, p_h⊥), where x is the Bjorken scaling variable, k_T is the quark intrinsic transverse momentum, z is the fractional energy of the final-state hadron, and p_h⊥ is the transverse momentum of the final-state hadron relative to the direction of the fragmenting quark. After integration over k_T and p_h⊥, the TMD-PDFs and TMD-FFs reduce to the standard spin-averaged collinear PDFs and FFs, where collinear means along the direction of the virtual photon. While the knowledge of the collinear PDFs and FFs is quite advanced, very little is presently known about the dependence of TMD-PDFs and TMD-FFs on k_T and p_h⊥, as only sparse experimental data are available to date. One of the most powerful tools to assess TMD-PDFs and TMD-FFs is the semi-inclusive measurement of deep inelastic scattering (SIDIS), ℓN → ℓ′hX, where one hadron is detected in coincidence with the scattered lepton in the final state. According to the QCD factorisation theorem, the deep inelastic scattering (DIS) process is considered to proceed via two independent sub-processes, i.e.
the elementary hard-scattering process $\gamma^* q \to q$ is followed by the hadronisation of the struck quark. The outgoing hadrons provide information about the original transverse motion of the quark in the nucleon via their transverse momentum vector $\vec{P}_{hT}$, which is defined with respect to the virtual-photon direction. The SIDIS cross section can be written as a convolution of a 'hard' scattering cross section, which is calculable in perturbative QCD (pQCD), with the non-perturbative TMD-PDFs and TMD-FFs. It depends on five kinematic variables. Two variables describe inclusive DIS, i.e. the negative square of the four-momentum transfer $Q^2 = -q^2$ and the Bjorken scaling variable $x = Q^2/(2P\cdot q)$, where $q$ and $P$ denote the four-momenta of the virtual photon and the nucleon, respectively. Three more variables describe the final-state hadrons, i.e. the fraction of the virtual-photon energy that is carried by a hadron, $z = (P\cdot P_h)/(P\cdot q)$, the magnitude $P_{hT}$ of the transverse momentum of a hadron, and its azimuthal angle $\phi_h$ in the system of virtual photon and nucleon. Here, $P_h$ denotes the four-momentum of the hadron. In the present analysis, the dependence on $\phi_h$ is disregarded. When integrating over $\phi_h$, the differential cross section for spin-independent SIDIS at twist two in the 'TMD factorisation scheme' reads schematically
$$\frac{d\sigma^h}{dx\,dQ^2\,dz\,dP_{hT}^2} \propto \sum_q e_q^2 \int d^2\vec{k}_T\, d^2\vec{p}_{h\perp}\, f_1^q(x, k_T)\, D_q^h(z, p_{h\perp})\, \delta^{(2)}\!\left(\vec{P}_{hT} - z\vec{k}_T - \vec{p}_{h\perp}\right),$$
where the proportionality factor contains the hard photon-quark cross section. Here, $y$ is the lepton energy fraction carried by the virtual photon and $s$ is the squared centre-of-mass energy; they are related to $x$ and $Q^2$ through $Q^2 = xys$. The hadron transverse momentum is related to $k_T$ and $p_{h\perp}$ by $\vec{P}_{hT} = z\vec{k}_T + \vec{p}_{h\perp}$. An important consequence of the factorisation theorem is that the fragmentation function is independent of $x$ and the parton distribution function is independent of $z$, while both depend on $Q^2$. In addition to azimuthal asymmetries in spin-independent SIDIS, the most relevant experimental observable to investigate spin-averaged TMD-PDFs and TMD-FFs is the differential hadron multiplicity as a function of $P_{hT}^2$, which is defined in Eq.
3 below. 'Soft' non-perturbative processes are expected to generate relatively small values of $P_{hT}$ with an approximately Gaussian distribution in $P_{hT}$. Hard QCD processes are expected to generate large non-Gaussian tails for $P_{hT} > 1$ GeV/$c$. They are expected to play an important role in the interpretation of the results reported here, which reach values of $P_{hT}^2$ up to 3 (GeV/$c$)$^2$. Transverse-momentum-dependent distributions of charged hadrons in DIS were first measured by the EMC collaboration at CERN, followed by measurements by ZEUS and H1 at HERA. These measurements only provided data binned in a limited number of dimensions. Only new-generation experiments provided statistics high enough to open the way to analysing and presenting results in several dimensions simultaneously. Recent results were obtained by several fixed-target experiments using various targets and complementary energy regimes, i.e. HERMES at DESY and COMPASS at CERN. The present paper reports on a new COMPASS measurement of transverse-momentum-dependent multiplicities of charged hadrons and extends the results of our earlier publication on transverse-momentum-dependent distributions of charged hadrons. The present measurement enlarges the kinematic coverage in $x$ up to 0.4 instead of 0.12, in $Q^2$ up to 81 (GeV/$c$)$^2$ instead of 10 (GeV/$c$)$^2$ and in $P_{hT}^2$ up to 3 (GeV/$c$)$^2$ instead of about 1 (GeV/$c$)$^2$, with significantly reduced systematic uncertainties on the normalisation of the $P_{hT}^2$-integrated multiplicities. The data reported here represent the most precise results on differential charged-hadron multiplicities available at this energy scale. This measurement is unique in that its high statistics allows us to analyse the $P_{hT}^2$-dependence of charged-hadron multiplicities in four variables simultaneously. The paper is organised as follows. Section 2 briefly describes the experimental apparatus. Details about the data analysis are given in Sec. 3.
The measured charged-hadron multiplicities are presented and compared to previous measurements in Sec. 4. In Sec. 5, fits to the results are presented and discussed. The results are summarised in Sec. 6. Experimental setup The set-up of the COMPASS experiment is briefly described in this section; a more detailed description can be found in Ref.. It is a fixed-target experiment, which uses the CERN Super Proton Synchrotron M2 beamline that is able to deliver high-energy hadron and muon beams. The data were collected in 2006 using a naturally polarised $\mu^+$ beam of 160 GeV/$c$ with a momentum spread of 5%. The intensity was $4\times10^7$ s$^{-1}$ with a spill length of 4.8 s and a cycle time of 16.8 s. The momentum of each incoming muon was measured before the COMPASS experiment with a precision of 0.3%. The trajectory of each incoming muon was measured before the target in a set of silicon and scintillating-fibre detectors. The muons impinged on a longitudinally polarised solid-state target located inside a superconducting magnet. The target consisted of three cells located one after the other along the beam. It was filled with $^6$LiD beads immersed in a liquid $^3$He/$^4$He mixture. The admixtures of H, $^3$He and $^7$Li in the target lead to an effective excess of neutrons of about 0.2%. To first approximation, the target can be regarded as an isoscalar deuteron target and will be referred to as such in the following. The polarisation of the middle cell (60 cm long) was opposite to that of the two outer cells (30 cm long each), and the polarisation was reversed once per day. In order to obtain spin-independent results, the target polarisation was averaged over by combining the data from all three target cells. Since the data taking in the two target polarisation states was well balanced and remaining polarisation-dependent effects are very small, this procedure ensures that for the data analysis the target can be considered as unpolarised.
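The kinematic variables introduced above ($Q^2$, $x$, $y$, $z$, $W^2$) follow directly from the four-momenta of the lepton, nucleon and hadron. The sketch below, with illustrative function and variable names not taken from the paper, shows one way to evaluate them with the metric convention $(+,-,-,-)$:

```python
import numpy as np

def dis_kinematics(l, l_prime, P, P_h):
    """Compute SIDIS variables from four-momenta given as [E, px, py, pz].
    l, l_prime: incoming/scattered lepton; P: target nucleon; P_h: hadron."""
    g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-)
    dot = lambda a, b: a @ g @ b
    q = l - l_prime                         # virtual-photon four-momentum
    Q2 = -dot(q, q)                         # photon virtuality
    x = Q2 / (2.0 * dot(P, q))              # Bjorken scaling variable
    y = dot(P, q) / dot(P, l)               # lepton energy fraction
    z = dot(P, P_h) / dot(P, q)             # hadron energy fraction
    W2 = dot(P + q, P + q)                  # squared invariant mass of hadronic system
    return Q2, x, y, z, W2
```

With a target at rest, $Q^2 = x\,y\,(2P\cdot l)$ holds by construction, which provides a quick consistency check of the implementation.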
The COMPASS two-stage spectrometer was designed to reconstruct scattered muons and produced hadrons in wide ranges of momentum and polar angle, where the latter reaches up to 180 mrad. Particle tracking is performed by a variety of tracking detectors located before and after the two spectrometer magnets. The direction of the reconstructed tracks at the interaction point is determined with a precision of 0.2 mrad. The momentum resolution is 1.2% in the first spectrometer stage and 0.5% in the second one. The trigger is formed by hodoscope systems supplemented by hadron calorimeters. Muons are identified downstream of hadron absorbers. 3 Multiplicity and data analysis 3.1 Multiplicity extraction The differential multiplicity $M^h$ for charged hadrons, where $h$ denotes a long-lived charged hadron ($\pi^+$, $\pi^-$, $K^+$, $K^-$, $p$ or $\bar p$), is defined as the ratio between the differential semi-inclusive cross section $d^4\sigma^h$ and the differential inclusive cross section $d^2\sigma^{DIS}$:
$$\frac{d^2 M^h(x, Q^2, z, P_{hT}^2)}{dz\,dP_{hT}^2} = \frac{d^4\sigma^h/dx\,dQ^2\,dz\,dP_{hT}^2}{d^2\sigma^{DIS}/dx\,dQ^2}. \qquad (3)$$
Hadron multiplicities are measured in the four-dimensional $(x, Q^2, z, P_{hT}^2)$ space. The bin limits in the four variables are presented in Table 1. The data used in the present analysis were collected during six weeks in 2006. The data analysis comprises event and hadron selection, the correction for radiative effects, the determination of and correction for the kinematic and geometric acceptance of the experimental set-up as well as for detector inefficiencies, detector resolutions and bin migration, and the correction for diffractive vector-meson production.
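In practice the multiplicity in a bin is a ratio of corrected event counts normalised to the bin widths. The following toy sketch illustrates this bookkeeping; the argument names and the multiplicative form of the corrections are illustrative assumptions, not the paper's exact prescription:

```python
def multiplicity(n_h, n_dis, dz, dpt2, eta_h, eta_dis, a_h, c_h, c_dis):
    """Toy differential multiplicity per DIS event in one (x, Q2, z, PhT2) bin.
    n_h, n_dis: raw hadron and DIS counts; dz, dpt2: bin widths;
    eta_*: radiative corrections, a_h: acceptance, c_*: diffractive-VM corrections
    (all applied multiplicatively here for illustration)."""
    yield_h = n_h * eta_h * c_h / a_h        # corrected hadron yield
    yield_dis = n_dis * eta_dis * c_dis      # corrected DIS yield
    return yield_h / (yield_dis * dz * dpt2) # normalise to bin widths
```

With all correction factors set to 1, the function reduces to the plain count ratio per unit $z$ and $P_{hT}^2$.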
Differential hadron multiplicities are evaluated as the ratio of the hadron yields $d^4 N^h$ in every interval of $(x, Q^2, z, P_{hT}^2)$ to the number of DIS events $d^2 N^{DIS}$ in every interval of $(x, Q^2)$, corrected as described above. Here, $\eta^{DIS}$ and $\eta^h$ denote the correction factors accounting for radiative effects in the inclusive and in the semi-inclusive case, respectively, $a^h$ accounts for acceptance effects, and $C^{DIS(h)}$ denotes the correction factor accounting for the diffractive vector-meson contribution in the case of an inclusive (semi-inclusive) measurement. The $(x, Q^2)$ dependence is omitted for simplicity, as it enters all terms. All corrections are evaluated in the four-dimensional $(x, Q^2, z, P_{hT}^2)$ bins except $C^{DIS}$, $\eta^{DIS}$ and $\eta^h$, which are evaluated only in bins of $x$ and $Q^2$. Further kinematic dependences of $\eta^h$ upon $z$ and $P_{hT}^2$ are discussed in Sec. 3.2. Event and hadron selection The present analysis uses events taken with 'inclusive triggers', i.e. the trigger decision is based on scattered muons only. The selected events are required to have a reconstructed interaction vertex associated with an incident and a scattered muon track. This vertex has to lie inside a fiducial target volume. The incident muon energy is constrained to the range from 140 GeV to 180 GeV. In addition to the kinematic constraints given by the spectrometer acceptance, the selected events are required to have $Q^2 > 1$ (GeV/$c$)$^2$ and $W > 5$ GeV/$c^2$. These requirements select the DIS regime and exclude the nucleon resonance region. The relative virtual-photon energy is constrained to the range $0.1 < y < 0.9$ to exclude kinematic regions where the momentum resolution degrades and radiative effects are most pronounced. In the range $0.003 < x < 0.4$, the total number of inclusive DIS events is $13\times10^6$, which corresponds to an integrated luminosity of 0.54 fb$^{-1}$. The $(x, Q^2)$ distribution of this selected 'DIS sample' is shown in Fig.
1, where a strong correlation between $x$ and $Q^2$ is observed, as expected in fixed-target experiments. For a selected DIS event, all reconstructed tracks associated with the primary interaction vertex are considered. Hadron tracks must be detected in detectors located before and after the magnet in the first stage of the spectrometer. The fraction of the virtual-photon energy transferred to a final-state hadron is constrained to $0.2 < z < 0.8$. The lower limit excludes the target fragmentation region, while the upper one removes muons wrongly identified as hadrons and excludes the region with larger contributions from diffractive $\rho^0$ production. This selection yields the 'hadron sample' with a total of $4.3\times10^6$ and $3.4\times10^6$ positively and negatively charged hadrons, respectively. The corrections for QED higher-order effects are applied on an event-by-event basis, taking into account the target composition. They are computed as a function of $x$ and $y$ according to the scheme described in Ref.. For the hadron yields, the correction is calculated by excluding the elastic and quasi-elastic tails. The correction factors $\eta^h$ and $\eta^{DIS}$ are evaluated in bins of $x$ and $Q^2$. They are found to be smaller than 12% for $x < 0.01$ and smaller than 5% elsewhere. An attempt to evaluate the smearing due to radiative effects as a function of $z$ and $P_{hT}^2$ was made using a Monte Carlo (MC) simulation, in which radiative effects were simulated using the RADGEN generator. A possible impact of radiative effects on the $(z, P_{hT}^2)$ dependence of the results is accounted for in the systematic uncertainties of the $P_{hT}^2$-dependence of the multiplicities. Acceptance correction The hadron multiplicities must be corrected for the geometric and kinematic acceptance of the experimental set-up as well as for detector inefficiencies and resolutions, and for bin migration. The correction for a possible misidentification of electrons as hadrons is included in the acceptance correction.
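The DIS and hadron selection cuts quoted above translate directly into boolean masks over per-event arrays. A minimal sketch (array names are assumptions, units as in the text):

```python
import numpy as np

def select_dis(Q2, W, x, y):
    """DIS event selection: Q2 > 1 (GeV/c)^2, W > 5 GeV/c^2,
    0.003 < x < 0.4, 0.1 < y < 0.9."""
    return ((Q2 > 1.0) & (W > 5.0)
            & (x > 0.003) & (x < 0.4)
            & (y > 0.1) & (y < 0.9))

def select_hadron(z):
    """Hadron selection: 0.2 < z < 0.8 (excludes target fragmentation
    at low z and misidentified muons / diffractive region at high z)."""
    return (z > 0.2) & (z < 0.8)
```

The masks can then be combined per event, e.g. `good = select_dis(Q2, W, x, y)` before histogramming the surviving hadrons.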
The full correction factor is evaluated using a MC simulation of the muon-deuteron deep inelastic scattering process. Events are generated using the LEPTO generator, where the parton hadronisation mechanism is simulated using the JETSET package with the tuning from Ref.. Secondary hadron interactions are simulated using the FLUKA package. The experimental set-up is simulated using the GEANT3 toolkit, and the MC data are reconstructed using the same software that was used for the experimental data. The kinematic distributions of the experimental data are fairly well reproduced by the MC simulation. In order to minimise a possible dependence on the physics generator used in the simulation and to exclude kinematic regions with large acceptance corrections, a four-dimensional evaluation of the acceptance correction factor $a^h$ is performed in narrow kinematic bins. In each $(x_r, Q^2_r)$ kinematic bin, where $r$ denotes the reconstructed values of the variables, the acceptance correction is calculated as the ratio of reconstructed ($d^2 N^h_r$) and generated ($d^2 N^h_g$) hadron yields, $a^h = d^2 N^h_r / d^2 N^h_g$, where both are evaluated using the simulated DIS sample after reconstruction. An advantage of this definition is that the correction for muon acceptance cancels, as it enters both numerator and denominator. The generated values of the kinematic variables are used for the generated particles, and the reconstructed values for the reconstructed particles. All reconstructed MC events and particles are subject to the same kinematic and geometric selection criteria as the data, while the generated ones are subject to kinematic requirements only. The acceptance correction factor exhibits an almost flat behaviour as a function of $z$ and $P_{hT}^2$ in most $(x, Q^2)$ bins, except at high $x$ for $P_{hT}^2 > 1$ (GeV/$c$)$^2$, where it nevertheless remains larger than 0.4.
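The bin-by-bin ratio of reconstructed to generated MC yields can be sketched as follows, here binned in $P_{hT}^2$ only for brevity (the analysis uses four-dimensional bins); function and variable names are illustrative:

```python
import numpy as np

def acceptance(pt2_rec, pt2_gen, edges):
    """Acceptance per bin as ratio of reconstructed to generated MC yields.
    pt2_rec / pt2_gen: PhT^2 values of reconstructed / generated MC hadrons;
    edges: bin edges. Empty generated bins return 0 instead of dividing by 0."""
    n_rec, _ = np.histogram(pt2_rec, bins=edges)
    n_gen, _ = np.histogram(pt2_gen, bins=edges)
    return np.divide(n_rec, n_gen,
                     out=np.zeros(len(edges) - 1),
                     where=n_gen > 0)
```

Because the muon acceptance enters both histograms identically, it cancels in this ratio, mirroring the cancellation noted in the text.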
Elsewhere, its average value is above and close to 0.6 for $P_{hT}^2 > 0.5$ (GeV/$c$)$^2$ and is less than or equal to 0.6 for $P_{hT}^2 < 0.5$ (GeV/$c$)$^2$. As an example, Fig. 2 shows the acceptance as a function of $P_{hT}^2$ for positively charged hadrons. The two panels show the two $z$ bins between 0.4 and 0.8, with two bins in $(x, Q^2)$ in each case. The acceptance correction factors for positively and negatively charged hadrons are found to be very similar, with differences at the level of 0.02-0.04. Diffractive vector meson contribution The final-state hadrons selected as described above may also originate from diffractive production of vector mesons ($\rho^0$, $\phi$, $\omega$) that decay into lighter hadrons ($\pi$, $K$, $p$). This process, which can be described by the fluctuation of the virtual photon into a vector meson that subsequently interacts diffractively with the nucleon through multiple gluon exchange, is different from the interaction of the virtual photon with a single quark in the DIS process. The fraction of selected final-state hadrons originating from diffractive vector-meson decays and their contribution to the SIDIS yields are estimated in each kinematic bin using two Monte Carlo simulations. The first one uses the LEPTO generator to simulate SIDIS events, and the other one uses the HEPGEN generator to simulate diffractively produced $\rho^0$ and $\phi$ events. Further channels, which are characterised by smaller cross sections, are not taken into account. Events with diffractive dissociation of the target nucleon represent about 25% of those with the nucleon staying intact and are also simulated. The simulation of these events includes nuclear effects, i.e. coherent production and nuclear absorption, as described in Ref.. The contribution of pions originating from $\rho^0$ decay to the hadron sample increases with $z$ and reaches up to 40-50% for $z$ close to 1. For kaons, the contribution from $\phi$ decay is concentrated in the $z$ range 0.4-0.6, where it reaches up to 15%.
The correction factors are evaluated separately for the DIS sample and the hadron sample. Here, $f^{VM}_{DIS}$ denotes the fraction of diffractively produced vector mesons present in the DIS sample, while $f^{\rho^0}$ and $f^{\phi}$ denote the fractions of $\rho^0$ and $\phi$ decay products in the hadron sample, respectively. The fractions of pions, kaons and protons in the latter sample, denoted by $F^{\pi}$, $F^{K}$ and $F^{p}$, amount to about 75%, 20% and 5%, respectively. The fractions $F^i$ ($i = \pi, K, p$) and $f^{VM}_i$ ($i = \pi, K$ and $VM = \rho^0, \phi$) are evaluated as functions of $x$, $Q^2$, $z$ and $P_{hT}^2$. In the following, the general behaviour of some of the above discussed correction factors is illustrated. The correction factor accounting for diffractive $\rho^0$ production in the DIS yield is shown in Fig. 3 as a function of $x$ in the five $Q^2$ bins. It reaches a maximum value of about 4% in the lowest $Q^2$ bin. The correction factor for the contribution of diffractively produced $\rho^0$ mesons to the pion sample, $(1 - f^{\rho^0})$, is shown in Fig. 4(a) as a function of $P_{hT}^2$ in the four $z$ bins for the lowest $Q^2$ bin, where it has the largest value. It reaches a maximum of about 25% for $P_{hT}^2 \sim 0.12$ (GeV/$c$)$^2$ in the highest $z$ bin, i.e. $0.6 < z < 0.8$, and decreases to a few percent at small $z$. The correction factor for the contribution of diffractively produced $\phi$ mesons to the kaon sample, $(1 - f^{\phi})$, is shown in Fig. 4(b). In this case, the maximum correction of about 35% is reached at very small $P_{hT}^2$ in the middle $z$ bin, i.e. $0.4 < z < 0.6$. Fig. 3: Correction factor to the DIS yield due to diffractive $\rho^0$ production as a function of $x$ in the five $Q^2$ bins. Systematic uncertainties The dominant contributions to the systematic uncertainties originate from the uncertainties on the determination of the acceptance correction factor and on the diffractive vector-meson contribution. The uncertainty on the acceptance calculation is evaluated by varying in the MC simulation both the PDF set and the JETSET parameters describing the hadronisation mechanism.
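One plausible way to combine the particle fractions $F^i$ with the decay-product fractions into a single hadron-sample correction is a fraction-weighted sum; this is only a sketch of the bookkeeping, not the paper's exact formula (protons are taken here to carry no vector-meson contribution):

```python
def c_hadron(F_pi, F_K, f_rho0, f_phi):
    """Illustrative hadron-sample correction: fraction of the sample NOT
    originating from rho0/phi decays, weighting the pion (kaon) decay-product
    fraction f_rho0 (f_phi) by the pion (kaon) fraction F_pi (F_K)."""
    return 1.0 - (F_pi * f_rho0 + F_K * f_phi)
```

With the quoted average fractions ($F^\pi \approx 0.75$, $F^K \approx 0.20$) and typical decay-product fractions, the correction stays within the few-tens-of-percent range discussed in the text.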
The acceptance correction is estimated for each MC sample, and the largest deviation with respect to the values obtained using the MC simulation described in Sec. 3.3 is quoted as the systematic uncertainty. The validity of the correction for the electron contamination is confirmed by comparing the simulated and measured electron distributions for momenta below 8 GeV/$c$, where electrons are identified using the RICH detector. In order to check a possible dependence on the target cell in which the event vertex is located, the multiplicities are measured independently for the three target cells. Results from the upstream and downstream target cells agree within 2-3%, while the agreement with the middle target cell is better than 1%. These differences are well covered by the acceptance correction uncertainty. A total uncertainty of 5% is estimated for the multiplicities. The cross section for exclusive production of $\rho^0$ calculated in HEPGEN is normalised to the phenomenological model of Ref.. The theoretical uncertainty on the predicted cross section in a kinematic region close to COMPASS kinematics amounts to about 30%. This results in an uncertainty on the diffractive vector-meson correction factor of up to 5-6%, mainly at small values of $x$, $Q^2$ and $P_{hT}^2$ and large values of $z$. Nuclear effects may be caused by the presence of $^3$He/$^4$He and $^6$Li in the target. The EMC Collaboration has studied such nuclear effects in detail in a similar kinematic range using carbon, copper and tin targets. A $z$-dependent decrease of 5% was observed for the multiplicities obtained using copper compared to the ones obtained using deuterium. While the effect was larger for tin, no such effect was found for carbon, so that possible nuclear effects in the present experiment are expected to be very small and are hence neglected. When comparing the results obtained from the data taken in six different weeks, no difference is observed.
Their numerical values are available on HepData, with and without the correction for diffractive vector-meson production. It should be noted that a few $(x, Q^2)$ kinematic bins are discarded in the lowest (Fig. 5) and the highest (Fig. 8) bins of $z$ because of low statistical precision as well as large acceptance correction factors (Sec. 3.3). The average values of $x$ and $Q^2$ in the various kinematic bins are evaluated using the DIS sample and are given in Table 2. The results obtained by integrating the multiplicities presented here over $P_{hT}^2$ are in very good agreement with those of Ref., where the multiplicities of charged pions are measured as a function of $z$ in a restricted momentum range based on an independent analysis of the same data. Multiplicities are larger for positively than for negatively charged hadrons. This difference increases significantly as $x$ increases and shows a weak variation with $Q^2$. It is observed to also depend on $z$, and it increases in the range of large $z$, i.e. $z > 0.4$, which confirms the observations made in Ref.. Besides their magnitude, the $P_{hT}^2$-dependence of the multiplicities shows a significant variation with $x$ at fixed $Q^2$ (as well as with $Q^2$ at fixed $x$) for any interval of $z$. These observations are separately illustrated in Figs. 9 and 10 and discussed in detail in the following. The comparison between the multiplicities of positively and negatively charged hadrons is illustrated as a function of $x$ and $Q^2$ in Fig. 9 for $z = 0.35$. In the top row, $h^+$ and $h^-$ multiplicities are presented at $Q^2 \approx 1.3$ (GeV/$c$)$^2$ in the smallest and the largest $x$ bins, with average values $x = 0.0062$ and $x = 0.039$, respectively. In the right column, $h^+$ and $h^-$ multiplicities are similarly presented at $x \approx 0.04$ in the smallest and the largest $Q^2$ bins, with average values 1.4 (GeV/$c$)$^2$ and 8.3 (GeV/$c$)$^2$, respectively. At fixed $Q^2$, the ratio of $h^+$ to $h^-$ multiplicities ranges from about 1 in the first $x$ bin to about 1.3 in the last $x$ bin.
This increase as a function of $x$ confirms the expectation from valence $u$-quark dominance, i.e. the dominance of scattering off $u$-quarks. At fixed $x$, the ratio of $h^+$ to $h^-$ multiplicities decreases from 1.3 in the first $Q^2$ bin to about 1.2 in the last $Q^2$ bin. While no significant difference is observed between the $P_{hT}^2$-dependences of $h^+$ and $h^-$ multiplicities, the $P_{hT}^2$-dependence of the multiplicities is observed to flatten at large values of $P_{hT}^2$, where contributions from higher-order QCD processes like QCD Compton scattering and photon-gluon fusion (PGF) are expected to dominate. The data suggest that this flattening occurs both as $Q^2$ increases (at fixed $x$) and as $x$ decreases (at fixed $Q^2$). In Fig. 10, the comparison between $h^+$ and $h^-$ multiplicities is illustrated as a function of $z$. The multiplicities are presented as a function of $P_{hT}^2$ in the four $z$ intervals in a given $(x, Q^2)$ bin with average values $x = 0.149$ and $Q^2 = 9.78$ (GeV/$c$)$^2$. A high $x$ bin is chosen, where the difference in the magnitude of the multiplicities is most recognisable. The ratio of $h^+$ to $h^-$ ranges from about 1.1 in the first $z$ bin to about 2 in the last $z$ bin, reflecting the fact that part of the negative hadrons ($K^-$ and $\bar p$) cannot be produced by the favoured fragmentation of a nucleon valence quark, which enhances the expected flavour dependence of TMD-FFs. Another feature of the data is the variation of the $P_{hT}^2$-dependence with increasing $z$ for both small and large values of $P_{hT}^2$. In particular, the data show a tendency to flatten at large $P_{hT}$ as $z$ decreases, which emphasises a significant $z$-dependence of the hadron transverse momentum with respect to the transverse momentum of the fragmenting quark, $p_\perp$. Another intriguing effect is observed in the kinematic domain 1 (GeV/$c$)$^2 < Q^2 < 1.7$ (GeV/$c$)$^2$ and $0.6 < z < 0.8$, in the range of small $P_{hT}^2$.
Charged-hadron multiplicities do not exhibit an exponential form in $P_{hT}^2$ in this kinematic region and show an unexpected flat dependence at very small values of $P_{hT}^2$. This effect is also present in the earlier published distributions of charged hadrons as a function of $P_{hT}^2$. It is illustrated in Fig. 11, which shows the multiplicity of positive hadrons as a function of $P_{hT}^2$ up to 0.8 (GeV/$c$)$^2$ at $Q^2 = 1.25$ (GeV/$c$)$^2$ and $x = 0.0062$ (left-hand side) and at $Q^2 = 4.52$ (GeV/$c$)$^2$ and $x = 0.043$ (right-hand side). It should be noted that this particular kinematic region suffers from the highest contribution of $\rho^0$ decay products to the charged-hadron sample (Fig. 4, blue curve), as evaluated using the MC simulation. This effect is further discussed in Sec. 5. The multiplicities shown in Figs. 5-11 agree with the previous measurement of hadron distributions performed by COMPASS. However, as mentioned in Sec. 1, this measurement considerably extends the kinematic range and reduces the statistical and systematic uncertainties, in particular the uncertainties on the normalisation of the $P_{hT}^2$-integrated multiplicities. Comparison with other measurements The multiplicities presented above are compared in Figs. 12-14 to results from previous semi-inclusive measurements in similar kinematic regions. The experiments are compared in Table 3. In order to compare the present COMPASS results on TMD hadron multiplicities with the corresponding ones by EMC, our data sample is reanalysed in bins of $z$ and $W^2$ according to the binning given in Ref.. The EMC measurements were performed in slightly different kinematic ranges in $Q^2$ and $y$, as shown in Table 3. While for the measurement described in this paper a deuteron target was used, EMC used proton and deuteron targets and also four different beam energies, which led to four different kinematic ranges. The comparison shown in Fig.
12, where the sum of $h^+$ and $h^-$ multiplicities is presented as a function of $P_{hT}^2$ in four $W^2$ bins in the range $0.2 < z < 0.4$, demonstrates good agreement between the COMPASS and EMC results. According to the study in Ref., the $P_{hT}^2$-dependence of the EMC data could be explained in the simple collinear parton model up to 8 (GeV/$c$)$^2$ in $P_{hT}^2$. In Fig. 13, the multiplicities of positively charged hadrons are compared in the four bins of $z$ to the multiplicities of positively charged pions measured by the HERMES Collaboration, where both are corrected for the diffractive vector-meson contribution. The measurements by HERMES cover the kinematic range $Q^2 > 1$ (GeV/$c$)$^2$ and $0.023 < x < 0.6$. For this comparison, the COMPASS $h^+$ multiplicities are integrated over $x$ in the closest possible range, $0.02 < x < 0.4$, and also over $Q^2$. It should be noted that the two experiments cover different ranges in $Q^2$: while the highest $Q^2$ value reached by HERMES is 15 (GeV/$c$)$^2$, COMPASS reaches 81 (GeV/$c$)$^2$. Despite this difference, a reasonable agreement in the magnitude of the measured multiplicities is found for $z < 0.6$ and small $P_{hT}^2$. Beyond that, most likely due to the differences in kinematic coverage, the agreement between the two data sets is rather modest, and they exhibit different dependences upon $P_{hT}^2$. In addition, a dip is observed in the HERMES data at very small transverse momenta, i.e. $P_{hT}^2 \sim 0.05$ (GeV/$c$)$^2$. This dip, which is not observed in the shown $Q^2$-integrated distribution, appears to be very similar to the trend shown in Fig. 11 by the COMPASS data at low $Q^2$. In Fig. 14, the $h^+$ multiplicities are compared to the $\pi^+$ semi-inclusive cross section measured by the E00-18 experiment at Jefferson Lab. The measurement by E00-18 was performed at $z = 0.55$ and $x = 0.32$ in the range 2 (GeV/$c$)$^2 < Q^2 < 4$ (GeV/$c$)$^2$. The COMPASS results are given at similar $(x, z)$ values, i.e. $z = 0.5$, $x = 0.3$, and span the range 7 (GeV/$c$)$^2 < Q^2 < 16$ (GeV/$c$)$^2$.
Similar to the case of the comparison of COMPASS and HERMES data shown in Fig. 13, here the observed different $P_{hT}$-dependence could be due to the different $Q^2$ values of the two measurements. The $P_{hT}$-dependence of the cross section for semi-inclusive hadron leptoproduction was empirically reasonably well described by a Gaussian parameterisation of the $k_T$- and $p_{h\perp}$-dependence of TMD-PDFs and TMD-FFs in the range of small $P_{hT}$, i.e. $P_{hT} < 1$ GeV/$c$. This Gaussian parameterisation leads to a $P_{hT}^2$-dependence of the multiplicities of the form
$$M^h(P_{hT}^2) = \frac{N}{\pi \langle P_{hT}^2\rangle}\, e^{-P_{hT}^2/\langle P_{hT}^2\rangle}, \qquad (8)$$
where the normalisation coefficient $N$ and the average transverse momentum $\langle P_{hT}^2\rangle$, i.e. the absolute value of the inverse slope of the exponent in Eq. 8, are functions of $x$, $Q^2$ and $z$. A fairly good description of SIDIS data was reached with the Gaussian parameterisation without considering either the $z$- or the quark-flavour dependence of TMD-FFs. Recent semi-inclusive measurements of transverse-momentum-dependent hadron multiplicities and distributions aimed at an extraction of both $\langle k_\perp^2\rangle$ and $\langle p_\perp^2\rangle$. These two observables, however, were found to be too strongly anti-correlated to be disentangled. In order to extract them, a combined analysis of both the differential transverse-momentum-dependent hadron multiplicities and the spin-independent azimuthal asymmetries in SIDIS may be required. In the following, we discuss separately fits in the region of small $P_{hT}$ and in the full range of $P_{hT}^2$ accessible to COMPASS, i.e. 0.02 (GeV/$c$)$^2 < P_{hT}^2 < 3$ (GeV/$c$)$^2$. The hadron multiplicities presented in Figs. 5-8 are fitted in each $(x, Q^2, z)$ kinematic bin in the range 0.02 (GeV/$c$)$^2 < P_{hT}^2 < 0.72$ (GeV/$c$)$^2$ using the single-exponential function given in Eq. 8. Using only statistical uncertainties in the fit, reasonable values of $\chi^2$ per degree of freedom ($\chi^2_{dof}$) are obtained in all $(x, Q^2)$ bins, except for low values of $Q^2$ and small values of $z$, i.e.
$z < 0.3$, where the $\chi^2_{dof}$ values are significantly larger than 3 in most of the $x$ bins. Including the systematic uncertainties in the fit by adding them in quadrature to the statistical ones significantly improves the values of $\chi^2_{dof}$, whereas the fitted parameters remain unchanged. The $z^2$-dependence of $\langle P_{hT}^2\rangle$ obtained from the fits is shown in Fig. 15 for $h^+$ in the five $Q^2$ bins available in a given $x$ bin. A non-linear dependence of $\langle P_{hT}^2\rangle$ on $z^2$ is observed in the range of small $x$ and $Q^2$, in contrast to the range of large $x$ and $Q^2$, where it becomes linear. In addition, $\langle P_{hT}^2\rangle$ increases significantly with $Q^2$ at fixed $x$ and $z$, especially at high $z$. The $h^+$ multiplicities have larger values of $\langle P_{hT}^2\rangle$ than the $h^-$ ones at large $z$, while no significant difference is observed at small $z$. This conclusion confirms the one made in our previous publication, where a detailed study of the kinematic dependence of $\langle P_{hT}^2\rangle$ was presented and discussed. As mentioned earlier in Sec. 4.2, the kinematic region of small $Q^2$ and large $z$, i.e. $Q^2 < 1.7$ (GeV/$c$)$^2$ and $0.6 < z < 0.8$, shows an intriguing effect in the range of small $P_{hT}^2$. As can be seen from Fig. 11, in this range the $h^+$ and $h^-$ multiplicities do not exhibit an exponential form in $P_{hT}^2$ and show an unexpected flat dependence at very small values of $P_{hT}^2$. Figure 16(a) shows the multiplicity of positively charged hadrons as a function of $P_{hT}^2$ up to 0.8 (GeV/$c$)$^2$ at $Q^2 = 1.25$ (GeV/$c$)$^2$ and $x = 0.006$. While a single-exponential function reasonably describes the $P_{hT}^2$-dependence for $0.3 < z < 0.4$, the experimental data clearly deviate from this functional form as $z$ increases, with $\chi^2_{dof}$ values increasing from 1.8 in the smallest $z$ bin to 4.6 in the largest one. As an example, Fig. 16(b) shows $h^+$ multiplicities at larger $Q^2$, i.e. $Q^2 = 4.65$ (GeV/$c$)$^2$ and $x = 0.075$, where the single-exponential function fits the data well in all $z$ bins.
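A single-exponential fit of the form of Eq. 8 can be sketched with a standard least-squares routine; the toy data below are illustrative, and in the analysis the fit would additionally use the per-point uncertainties (via the `sigma` argument of `curve_fit`):

```python
import numpy as np
from scipy.optimize import curve_fit

def f_exp(pt2, N, avg_pt2):
    """Single-exponential (Gaussian) parameterisation of Eq. 8;
    avg_pt2 plays the role of the average transverse momentum <PhT^2>."""
    return N / (np.pi * avg_pt2) * np.exp(-pt2 / avg_pt2)

# Toy 'multiplicities' in the small-PhT^2 fit range 0.02-0.72 (GeV/c)^2.
pt2 = np.linspace(0.02, 0.72, 15)
m = f_exp(pt2, 1.3, 0.25)
popt, pcov = curve_fit(f_exp, pt2, m, p0=(1.0, 0.2))
```

The fitted `popt[1]` recovers the inverse slope, i.e. $\langle P_{hT}^2\rangle$, in each $(x, Q^2, z)$ bin.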
The measured charged-hadron multiplicities show that in the range of small $P_{hT}$, i.e. for $P_{hT}^2 < 1$ (GeV/$c$)$^2$, the simple parameterisation using a single-exponential function describes the $P_{hT}^2$-dependence of the results quite well for not too large values of $Q^2$. Fig. 15: Average transverse momentum $\langle P_{hT}^2\rangle$, as obtained from the fit of $h^+$ multiplicities using the single-Gaussian parameterisation, shown as a function of $z^2$. The eight panels correspond to the eight $x$ bins as indicated, where in each panel data points from all five $Q^2$ bins are shown. Error bars denote statistical uncertainties. For increasing $Q^2$, the $P_{hT}^2$-dependence of the multiplicities changes, as can be seen in Fig. 9. A more complex parameterisation appears to be necessary to fit the data, as shown in Ref.. The full measured $P_{hT}$ range Up to now, only one study was performed to describe the full range in $P_{hT}$, using a Gaussian parameterisation for the $k_T$- and $p_{h\perp}$-dependence of TMD-PDFs and TMD-FFs in the range $P_{hT} < 1$ GeV/$c$ and calculating pQCD higher-order collinear contributions in the range $P_{hT} > 1$ GeV/$c$. A reasonable description of the semi-inclusive hadron multiplicities and cross sections measured by the EMC and ZEUS Collaborations, respectively, was achieved. Below, we attempt to describe the $P_{hT}^2$-dependence of the above presented charged-hadron multiplicities over the full $P_{hT}$ range explored by COMPASS, i.e. 0.02 (GeV/$c$)$^2 < P_{hT}^2 < 3$ (GeV/$c$)$^2$, using the following two parameterisations:
$$F_1(P_{hT}^2) = N_1\, e^{-P_{hT}^2/\alpha_1} + N_1'\, e^{-P_{hT}^2/\alpha_1'}, \qquad (9)$$
$$F_2(P_{hT}^2) = N_2 \left(1 + \frac{q-1}{T}\, P_{hT}^2\right)^{\!\frac{1}{1-q}}. \qquad (10)$$
The first function ($F_1$) is defined as the sum of two single-exponential functions (Eq. 9). While $N_1$ and $N_1'$ denote the normalisation coefficients, $\alpha_1$ and $\alpha_1'$ denote the inverse slope coefficients of the first and the second exponential function, respectively. All coefficients depend on $x$, $Q^2$ and $z$. Figure 17 shows, in a typical $(x, Q^2, z)$ bin, the multiplicities of positively charged hadrons as a function of $P_{hT}^2$ fitted using $F_1$.
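The sum-of-two-exponentials fit ($F_1$, Eq. 9) can be sketched in the same way; the toy values chosen below (inverse slopes near 0.23 and 0.6 (GeV/$c$)$^2$) echo the typical magnitudes quoted in the text but are otherwise illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def f1(pt2, N1, alpha1, N1p, alpha1p):
    """F1 of Eq. 9: sum of two exponentials with normalisations N1, N1'
    and inverse slopes alpha1, alpha1' (small- and large-PhT components)."""
    return N1 * np.exp(-pt2 / alpha1) + N1p * np.exp(-pt2 / alpha1p)

# Toy data over the full fit range 0.02-3 (GeV/c)^2.
pt2 = np.linspace(0.02, 3.0, 40)
m = f1(pt2, 10.0, 0.23, 0.5, 0.6)
popt, pcov = curve_fit(f1, pt2, m, p0=(8.0, 0.2, 0.3, 0.7))
```

Note that two-exponential fits are only well behaved when the starting values keep the two slope parameters well separated; otherwise the components can interchange or degenerate.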
As described above for Ref., the two exponential functions in our parameterisation F₁ can be attributed to two completely different underlying physics mechanisms that overlap in the region P²_hT ≈ 1 (GeV/c)². Figure 18 shows, as an example, multiplicities of positively charged hadrons as a function of P²_hT, measured at Q² ∼ 1.25 (GeV/c)² for two bins of x with average values x = 0.006 and x = 0.016, in the four z bins. Only statistical uncertainties are shown and used in the fit. Values of χ²/dof of about 1 are obtained in all (x, Q², z) bins, except for a few (6 out of 81) bins, where values as small as 0.52 and as large as 2.52 are obtained. The normalisation coefficients N₁ and N₂ are found to vary strongly with x and z and rather weakly with Q², reflecting the (x, Q²)-dependence of collinear PDFs and the z-dependence of collinear FFs. The inverse slope α₁ has an average value of about 0.23 (GeV/c)² for Q² < 3 (GeV/c)² and about 0.28 (GeV/c)² for larger values of Q². Its dependence on z² is discussed below using Fig. 19. The inverse slope α₂ has an average value of about 0.6 (GeV/c)² and shows a rather weak variation with x and Q². The so-called Tsallis function F₂, see Eq. 10, describes the two different kinds of power-law behaviour in the two regions of P_hT through a single function. The advantage of this function is that it provides both the inverse slope parameter T that characterises the small-P_hT range and the exponent 1/(1−q) that parameterises the power-law tail at large P_hT. The charged-hadron multiplicities (Figs. ) are fitted in each (x, Q², z) bin using only statistical uncertainties. Reasonable values of χ²/dof are obtained in most bins, except for 11 out of 81 bins where they are larger than 2, reaching up to 3.65. The exponent parameter q has an average value of about 1.2.
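The Tsallis-type form described above can be sketched as follows. The exact normalisation convention of Eq. 10 is not recoverable from the text, so the functional form below is an assumption with the stated properties: inverse slope T in the exponential-like core and exponent 1/(1−q) in the power-law tail. The snippet checks numerically that for q → 1 the function reduces to the single exponential.

```python
import numpy as np

def tsallis(p2, n0, t, q):
    """Assumed Tsallis-type form: exponential core (slope T), power-law tail
    with exponent 1/(1-q)."""
    return n0 * (1.0 + (q - 1.0) * p2 / t) ** (1.0 / (1.0 - q))

p2 = np.array([0.05, 0.5, 2.5])          # sample P2_hT values in (GeV/c)^2
# For q close to 1 the Tsallis form approaches n0 * exp(-p2 / t):
approx = tsallis(p2, 1.0, 0.25, 1.0001)
exact = np.exp(-p2 / 0.25)
max_diff = float(np.max(np.abs(approx - exact)))
print(max_diff)
```

This limit is what allows a single function to interpolate between the exponential behaviour at small P_hT and the power-law tail at large P_hT, with q ≈ 1.2 (the average value quoted in the text) controlling how heavy the tail is.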
The exponent 1/(1−q) strongly depends on x, with a weaker dependence on z and no variation with Q², while its z-dependence is observed to increase with x. The inverse slope parameter T ranges between 0.15 (GeV/c)² and 0.4 (GeV/c)² and shows a significant non-linear dependence on z² over the full z range. The inverse slopes ⟨P²_hT⟩, α₁ and T, which were obtained using the fitting functions given in Eqs. 8, 9 and 10 respectively, are presented and compared to each other as a function of z² in (x, Q²) bins in Fig. 19. A weak non-linear dependence on z² is observed at small x and Q², which becomes more pronounced at larger values of Q². The inverse slope T reproduces the same z²-dependence as that of ⟨P²_hT⟩ described in Fig. 8. It is observed to be in fair agreement with α₁ except for z > 0.6. A comparison between the data and the fit function F₁ is shown in Fig. 20(a) in a typical kinematic bin with Q² = 2.12 (GeV/c)² and x = 0.011. The upper panel shows the multiplicities of positive hadrons as a function of P²_hT together with the corresponding fit function, and the lower panel shows the ratio between the data and the fit. A comparison between the two fitting functions F₁ and F₂ is shown in Fig. 20(b) for the same (x, Q², z) bin. The P²_hT-dependence of h+ multiplicities is equally well described by the two functions F₁ and F₂, as can be seen from the ratio in Fig. 20(b). The same agreement is obtained for negatively charged hadrons.

Fig. 19: Average transverse momentum obtained from the fit of h+ multiplicities using the three fit functions given in Eqs. 8, 9 and 10: ⟨P²_hT⟩, α₁ and T as a function of z² in (x, Q²) bins.

Fig. 20(a): Multiplicities of positively charged hadrons as a function of P²_hT for Q² = 2.12 (GeV/c)², x = 0.011 and z = 0.35. The black dotted curve represents the first exponential function of Eq. 9, the blue dashed curve represents the second exponential function of Eq. 9, and the red curve represents their sum.
Only statistical uncertainties are shown and used in the fit. Lower panel: the ratio of the experimental points to the fit as a function of P²_hT. (b) Comparison between the fits obtained using F₁ (Eq. 9) and F₂ (Eq. 10) in the same kinematic bin as in (a).

Summary

We have measured differential multiplicities of charge-separated hadrons in semi-inclusive measurements using muons of 160 GeV/c impinging on an isoscalar (deuteron) target. Using a high-statistics data set collected in 2006, the measurement covers a wide kinematic domain of Q² > 1 (GeV/c)², W > 5 GeV/c², 0.003 < x < 0.4, 0.2 < z < 0.8 and 0.02 (GeV/c)² < P²_hT < 3 (GeV/c)². The results are presented as a function of the square of the hadron transverse momentum, P²_hT, in three-dimensional bins of x, Q² and z, which leads to a total of 4918 experimental data points. The numerical values are available on HepData, with and without subtraction of the estimated contribution of diffractive vector-meson production in SIDIS. The h+ multiplicities are only slightly larger than the h− ones in most of the bins, while for large x and z this difference increases. No significant difference between h+ and h− is observed in the shape of the P²_hT-dependence of the multiplicity. Both h+ and h− multiplicities are observed to flatten at very small values of P²_hT in the kinematic region of low x and Q² and large z, where contributions from diffractive vector-meson production are the highest. Our results are compared to earlier measurements of hadron multiplicities and cross sections by EMC, HERMES and JLab. Good agreement was found with EMC for W² < 150 (GeV/c²)², although the EMC data were collected at different beam energies and with different targets. In order to compare with HERMES, we have integrated our multiplicities over the phase space that is common to both experiments.
While reasonable agreement is obtained at small z and P²_hT, differences are observed at large z and P_hT, where neither the magnitudes nor the P²_hT-dependences agree. The π+ semi-inclusive cross section measured by the E00-18 experiment at JLab shows fair agreement with the COMPASS h+ multiplicities, albeit with some discrepancy in the P_hT-dependence that might be explained by the difference in the kinematic ranges of the measurements. In the range of small P²_hT, i.e. P²_hT < 1 (GeV/c)², the measured multiplicities were successfully fitted using a single-Gaussian parameterisation. A non-linear z²-dependence of the average transverse momentum is observed in the range of small x and Q², which confirms the conclusions of Ref., while the dependence is almost linear for large values of x and Q². In order to fit the multiplicities over the full P_hT range measured by COMPASS, a more complex functional form is required, i.e. either a sum of two exponential functions or the so-called Tsallis function. All fits reproduce the data well, and their inverse slopes agree well with one another even when only statistical uncertainties are used in the fits.

Acknowledgements

We gratefully acknowledge the support of the CERN management and staff and the skill and effort of the technicians of our collaborating institutes. This work was made possible by the financial support of our funding agencies.
Psychological Causes of Corruption: The Role of Worries This study is devoted to answering two questions: Do individuals' worries and sufferings correlate with the acceptability of corruption from their perspective? Does this correlation differ across countries with different corruption levels? We focus on analyzing the correlation between macro and micro worries, on the one hand, and the individual acceptability of corrupt behavior, on the other. This study is based on data from the 6th wave of the World Values Survey. We identified three groups of countries based on the Corruption Perceptions Index: countries with low-level corruption (Australia, the Netherlands, New Zealand, Singapore, and Sweden), countries with medium-level corruption (Belarus, China, South Korea, Malaysia, and Romania), and countries with high-level corruption (Russia, Brazil, Colombia, Peru, and Thailand). For the purposes of our analysis, we used structural equation modeling. We found that macro and micro worries are significantly correlated with the acceptability of corruption. Our analysis shows that the more people worry about themselves or their families, the more they accept corruption. People who worry about society are more likely to disapprove of corruption. However, the significance of these links varies depending on the group of countries. For the countries with low-level corruption, the correlation is significant only for the link between micro worries and the acceptability of corruption. The countries with high-level corruption show a significant correlation only for the link between macro worries and the acceptability of corruption. For countries with medium-level corruption, and for Russia, the acceptability of corruption is significantly correlated with both micro and macro worries.
Low bone mineral density linked to colorectal adenomas: a cross-sectional study of a racially diverse population. BACKGROUND Epidemiologic studies suggest that lower bone mineral density (BMD) is associated with an increased risk for colorectal adenoma/cancer, especially in postmenopausal women. The aim of this study was to investigate the association between osteopenia and/or osteoporosis and colorectal adenomas in patients from a New York community hospital. METHODS We performed a cross-sectional observational study of 200 patients who underwent screening colonoscopy and bone density scanning (dual-energy X-ray absorptiometry) at Nassau University Medical Center from November 2009 to March 2011. Among these, 83 patients were identified as having osteoporosis (T-score of −2.5 or below) and 67 as having osteopenia (T-score between −1.0 and −2.5). A logistic regression model was used to assess the association between osteopenia and/or osteoporosis and colorectal adenomas. RESULTS The mean ages of the patients with osteopenia and osteoporosis were 59.1 and 61.5 years (SD = 8.9), respectively. Women made up 94.0%, 85.1% and 74.7% of the normal-BMD, osteopenia and osteoporosis groups, respectively. The prevalence of colorectal adenomas was 17.9% and 25.3% in the osteopenia and osteoporosis groups, respectively, and 18.0% in the normal-BMD group. After adjustment for potential confounders including age, sex, race, body mass index (BMI), tobacco use, alcohol use, and history of diabetes, hypertension, or dyslipidemia, osteoporosis was found to be associated with the presence of more than two colorectal adenomas, compared to the normal-BMD group. No significant associations were found for the prevalence, size, or location of adenomas. CONCLUSIONS Our study suggests that osteoporosis is significantly associated with the presence of multiple colorectal adenomas. Prospective studies with larger sample sizes are warranted.
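The confounder-adjusted analysis described above can be sketched as a logistic regression. The snippet below is purely illustrative, not the study's code: it fits a logistic model of adenoma presence on an osteoporosis indicator with age and BMI as confounders, on synthetic data with an assumed effect size, using plain gradient ascent so that no statistics library is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
age = rng.normal(60, 9, n)                      # toy covariates
bmi = rng.normal(27, 4, n)
osteo = (rng.random(n) < 0.4).astype(float)     # osteoporosis indicator

# assumed true model: log-odds of adenoma rise with osteoporosis and age
true_logit = -1.5 + 0.8 * osteo + 0.2 * (age - 60) / 9
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# design matrix: intercept, exposure, standardized confounders
X = np.column_stack([np.ones(n), osteo, (age - 60) / 9, (bmi - 27) / 4])
beta = np.zeros(4)
for _ in range(3000):                           # gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 2.0 * X.T @ (y - p) / n

odds_ratio = float(np.exp(beta[1]))             # adjusted OR for osteoporosis
print(round(odds_ratio, 2))
```

The exponentiated coefficient on the exposure is the adjusted odds ratio, the quantity such cross-sectional studies report after controlling for confounders.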
Investigation of antibacterial and antifungal properties of tufting carpets containing metal composite yarns Abstract In this research, the antimicrobial (antibacterial and antifungal) properties of tufting carpets containing metal/texturized polyester composite yarns were investigated. A carpet contains different yarn groups such as pile yarns, ground warps and wefts. Backing fabric warp and weft yarns are suitable for gaining antimicrobial activity because of their placement and low usage amount. Thus, textured polyester yarns were commingled with copper wire, stainless-steel wire and silver-metalized polyamide yarn. Backing fabrics were produced with four different placements of the composite yarns. Antibacterial activity tests were applied to carpet samples according to the AATCC 100 standard against K. pneumoniae and S. aureus bacteria. The AATCC 30 Part 3 standard was used for determining antifungal activity against A. niger. Results show that antibacterial activity increases with the amount of metal composite yarn per unit area. Carpet samples that include copper or silver-metalized composite yarn in all warps showed antibacterial activity of about 99%. Moreover, antifungal activity against A. niger can be achieved when copper or silver-metalized composite yarn is used in all warps of the carpet samples.
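AATCC 100 reports antibacterial activity as a percentage reduction of viable bacteria relative to an untreated control. A minimal sketch of that calculation follows; the colony counts are made-up illustration values, not results from the study.

```python
def percent_reduction(control_cfu: float, treated_cfu: float) -> float:
    """AATCC 100 style reduction: R = 100 * (A - B) / A,
    with A = CFU on the control sample, B = CFU on the treated sample."""
    return 100.0 * (control_cfu - treated_cfu) / control_cfu

# e.g. a sample reducing 2.0e6 CFU to 2.0e4 CFU shows 99% activity,
# comparable to the "about 99%" figure quoted for the all-warp samples
print(percent_reduction(2.0e6, 2.0e4))
```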
Image interpolation via combining patches based on point-sampling and new edge-directed ideas This paper proposes a new algorithm for image interpolation via combining bi-quadratic patches based on point sampling and a new edge-directed method (PSE). Traditional methods usually use the image data to construct fitting surfaces directly, so the accuracy of the interpolation may not be assured. Instead, a model is proposed to compute the point-sampling values first; a bi-quadratic polynomial surface patch is then obtained from the point samplings. The whole image surface is constructed by combining all the local patches with weighting functions. For edges and textures, PSE adopts a new edge-directed approach to obtain the model parameters. Unlike existing edge-directed approaches, the relationships between the inner members of the parameters are taken into account. Experiments testing the efficiency of the new approach show that the interpolated images achieve the best results in both the PSNR measure and visual quality compared with the competing methods.
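PSNR, the objective measure used for the comparison above, is straightforward to compute; here is a minimal sketch assuming 8-bit images (peak value 255). The tiny test images are placeholders.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images:
    PSNR = 10 * log10(255^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

ref = np.zeros((4, 4))
img = ref.copy()
img[0, 0] = 16          # a single corrupted pixel -> MSE = 16
print(round(psnr(ref, img), 2))
```

Higher PSNR means the interpolated image is closer to the reference, which is how the paper ranks the competing methods quantitatively.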
Modeling Hand Trajectories during Sequential Reach Movements in a Pulley Threading Task Modeling of human motion is common in ergonomic analysis of industrial tasks and can help improve workplace design. We propose a method for modeling the trajectories of hand movements in the frontal plane during a sequential reach task that involves threading string through a system of pulleys. We model the motions as a combination of two consecutive phases: one in which the hand is reaching between pulleys and another in which the hand is engaged in threading a target pulley. Hand trajectories were modeled separately for each phase by fitting basis splines to the observed data. Predicted trajectories were computed using task parameters as the input and compared to observed trajectories from the 12 participants who completed the study.
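The basis-spline idea can be sketched with generic SciPy smoothing-spline utilities: fit a B-spline to a noisy planar hand path and resample it. This is an illustration of the technique, not the authors' fitting pipeline; the path shape, noise level and smoothing factor are all assumed.

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 40)                       # normalized movement time
# toy frontal-plane hand path (meters) with measurement noise
x = 0.3 * np.cos(np.pi * t) + rng.normal(0, 0.005, t.size)
y = 0.5 * np.sin(np.pi * t) + rng.normal(0, 0.005, t.size)

# cubic smoothing B-spline through the 2-D samples; s matches the noise level
tck, _ = splprep([x, y], u=t, s=2 * t.size * 0.005 ** 2)
tt = np.linspace(0.0, 1.0, 200)
xs, ys = splev(tt, tck)                             # resampled smooth trajectory

# the smoothed curve should stay close to the noise-free path
err = np.hypot(xs - 0.3 * np.cos(np.pi * tt), ys - 0.5 * np.sin(np.pi * tt))
print(float(err.max()))
```

Fitting each movement phase separately, as the abstract describes, would amount to running such a fit on the samples belonging to that phase.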
The clinical application of the upper extremity compound movements rehabilitation training robot In order to evaluate the effect of a neural rehabilitation robot on improving upper-extremity motor function, 23 hemiplegia patients at Brunnstrom stage II or above received clinical rehabilitation training with the upper-extremity compound-movements rehabilitation training robot. The assistance modes of the neural rehabilitation robot and the programming of the clinical rehabilitation training are studied in this paper. The clinical assessment results show that after a period of rehabilitation treatment, the motor function of most patients, as assessed by the Fugl-Meyer method, improved to a certain extent, and the rehabilitation effect is better than that of traditional rehabilitation training. The outcome indicates that the upper-extremity compound-movements rehabilitation training robot has significant application prospects in clinical rehabilitation.
Media Access Scheme in Distributed Spectrum Sensing Spectrum sensing is a key component of a cognitive radio (CR) system for understanding its radio frequency (RF) environment. One critical challenge in spectrum sensing is the detection of weak signals, particularly in indoor environments. Distributed spectrum sensing can address this shortcoming by combining data from spatially distributed sensors. This approach often achieves higher sensitivity than a single standalone sensor by exploiting the inherent spatial diversity of the cooperating sensors. The overall performance of distributed sensing, however, depends both on the quality of sensing at the individual sensors and on the forwarding scheme from the sensors to the data fusion center (FC). In this respect, the choice of an appropriate media access control (MAC) scheme plays a significant role. We can improve system performance by jointly optimizing the MAC and the spectrum sensing parameters. In this paper we propose such a cross-layer approach to yield an enhanced distributed spectrum sensing scheme. To demonstrate our idea, we provide computer simulations considering energy-detection-based distributed spectrum sensors and some of the 802.15.4 MAC specification assumptions.
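Energy detection, the sensing scheme assumed in the simulations, compares the average energy of an observation window against a threshold set a few standard deviations above the expected noise energy. The sketch below contrasts a noise-only window with a weak-signal window; the signal level, sample count and threshold rule are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4000                                       # samples per sensing window
snr_linear = 10 ** (-3 / 10)                   # a weak -3 dB tone

def energy(samples: np.ndarray) -> float:
    """Energy-detector test statistic: mean squared sample value."""
    return float(np.mean(samples ** 2))

noise_only = rng.normal(0.0, 1.0, N)           # H0: unit-variance noise
tone = np.sqrt(2 * snr_linear) * np.sin(2 * np.pi * 0.1 * np.arange(N))
with_signal = tone + rng.normal(0.0, 1.0, N)   # H1: weak tone + noise

# threshold = expected noise energy + ~3 standard deviations of the statistic
threshold = 1.0 + 3.0 * np.sqrt(2.0 / N)
print(energy(noise_only) < threshold, energy(with_signal) > threshold)
```

In a distributed setup, each sensor would forward its statistic (or decision) over the MAC layer to the fusion center, which is why the MAC and sensing parameters interact and can be optimized jointly.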
Molecular characterization of firefly nuptial gifts: a multi-omics approach sheds light on postcopulatory sexual selection Postcopulatory sexual selection is recognized as a key driver of reproductive trait evolution, including the machinery required to produce endogenous nuptial gifts. Despite the importance of such gifts, the molecular composition of the non-gametic components of male ejaculates and their interactions with female reproductive tracts remain poorly understood. During mating, male Photinus fireflies transfer to females a spermatophore gift manufactured by multiple reproductive glands. Here we combined transcriptomics of both male and female reproductive glands with proteomics and metabolomics to better understand the synthesis, composition and fate of the spermatophore in the common Eastern firefly, Photinus pyralis. Our transcriptome of male glands revealed up-regulation of proteases that may enhance male fertilization success and activate female immune response. Using bottom-up proteomics we identified 208 functionally annotated proteins that males transfer to the female in their spermatophore. Targeted metabolomic analysis also provided the first evidence that Photinus nuptial gifts contain lucibufagin, a firefly defensive toxin. The reproductive tracts of female fireflies showed increased gene expression for several proteases that may be involved in egg production. This study offers new insights into the molecular composition of male spermatophores, and extends our understanding of how nuptial gifts may mediate postcopulatory interactions between the sexes.. Nuptial gift formation, transfer and fate in Photinus fireflies. (a) During mating the male spermatophore (stained here with rhodamine B) moves through the ejaculatory duct (Ej) into the female's bursa copulatrix (B). Several male glands contribute to the spermatophore, including the paired spiral glands (SpAG), and other accessory glands (OAG; long accessory gland not shown). 
(b) Spiral accessory glands (SpAG) manufacture the major portion of the spermatophore, which is visible as a dark structure edged with serrated scales; the seminal vesicle (SV) stores sperm rings that get packaged into the spermatophore before transfer. (c) After transfer, sperm released from the tip of the spermatophore enter the female spermatheca (Spt), the sperm storage organ; the clear spermatophore sheath is visible (originally published in ref. 34). (d) The rest of the spermatophore moves into the spermatophore-digesting gland (SDG), where it disintegrates over the next 2-3 d. Scale bars are 500 μm (a,b) and 50 μm (c,d). (Scientific Reports 6:38556; DOI: 10.1038/srep38556.) However, despite decades of research on nuptial gifts in select taxa, the detailed molecular mechanisms underlying how such gifts influence postcopulatory sexual selection remain largely unresolved. Transcriptomic studies of the male accessory glands (MAGs) that are responsible for manufacturing SFPs have been restricted primarily to Drosophila and other dipterans. Although detailed anatomical descriptions of MAGs do exist for other taxa 23, their glandular products remain poorly characterized. Additionally, sexual selection research shows a recognized bias toward male reproductive traits 24. Thus, despite the central role that females play in postcopulatory sexual interactions, remarkably little is known about the products of female reproductive glands. Understanding the role of nuptial gifts in the context of sexual selection will require comprehensive analyses interrogating the molecular composition of male nuptial gifts as well as secretions from female reproductive tissues that receive and process male gifts. Fireflies are bioluminescent beetles belonging to the family Lampyridae, which comprises ~2000 extant species 31.
Their diverse life histories, sexual signals, and mating systems have made fireflies an important group for understanding the evolution of nuptial gifts. The firefly Photinus pyralis is a common North American species widely distributed across the eastern United States 36. Historically important, P. pyralis was used for early studies focused on the biochemistry and physiology of bioluminescence, as well as precopulatory sexual selection 39. Within the genus Photinus, males deliver nuptial gifts in the form of elaborate spermatophores that are manufactured by multiple reproductive accessory glands 34,40. Because most Photinus fireflies do not eat in their adult stage, all reproductive activities must be fueled by stored resources acquired during larval feeding 41. This is reflected in the decline of spermatophore size over successive matings 42. Although the production of nuptial gifts is costly for males, larger gifts are correlated with increased reproductive success 43. Male gifts also provide multiple benefits to females. While Photinus females are polyandrous, capable of mating with multiple males over successive nights, females mate with only a single male per night. Compared to females that mated only once, triply-mated Photinus females showed 73% greater lifetime fecundity 44. Furthermore, females that receive larger nuptial gifts showed a 12-16% increase in their longevity 43,44. Radiolabeling studies have shown that some spermatophore-derived proteins become incorporated into the developing oocytes of Photinus females 45. Thus, nuptial gifts have major fitness consequences for both male and female fireflies. Nearly 25% of all firefly species exhibit extreme sexual dimorphism: adult females completely lack wing development or have greatly reduced wings and are thus incapable of flight 46. 
Physiological tradeoffs between flight and reproduction are well documented in other insects 47,48, with flightlessness shifting resource allocation toward increased reproductive output. Recent phylogenetic analysis revealed that female flightlessness has evolved repeatedly in the Lampyridae, typically followed by loss of male nuptial gifts 35. Such correlated evolution between male and female traits suggests that firefly nuptial gifts not only mediate postcopulatory sexual selection, but may also be intimately linked with patterns of female reproductive investment. Thus, better understanding of the molecular composition of firefly nuptial gifts may provide new insights into their role in postcopulatory sexual interactions as well as their influence on other key life history traits. In P. pyralis, as in many other Photinus fireflies, the male spermatophore is produced by four distinctive paired reproductive accessory glands (Fig. 1a) 40. Most prominent are the tightly coiled spiral accessory glands (SpAGs). Prior to mating, the lumen of each spiral gland contains a secretion edged with two longitudinal rows of serrated scales (Fig. 1b). During mating, the spiral glands empty (Fig. 1a), and their secretions fuse to form the major structural component of the spermatophore. As it passes through the male ejaculatory duct (Ej; Fig. 1a), this spermatophore acquires additional material, including sperm rings that have been stored in the seminal vesicles (Fig. 1b) and the contents of three additional pairs of tubular accessory glands: the short, medium, and long accessory glands, here termed other accessory glands (OAG) (Fig. 1b).

Figure 2. Distributions of gene ontology categories for P. pyralis genes up-regulated in males' other accessory glands (OAGs) and spiral accessory glands (SpAGs), both compared to thorax, for: (a) males whose mating status was unknown, and (b) males that had mated within the previous 2 h.
Spermatophore transfer from the male to the female bursa copulatrix (B; Fig. 1a) takes about 30-60 min. Sperm rings are released into the female sperm storage organ, the spermatheca (Spt, Fig. 1a,c), then disperse as sperm become capacitated and begin swimming slowly in dense aggregations. The rest of the male spermatophore enters a specialized female structure known as the spermatophore-digesting gland (SDG; Fig. 1a,d), where it disintegrates within 2-3 days after copulation. Here, we adopted a multi-omics approach to interrogate the synthesis, content and fate of the spermatophore nuptial gift in P. pyralis. We sequenced and analyzed the transcriptomes of both male and female reproductive tissues, which revealed unique patterns of gene expression in these tissues. We further carried out bottom-up MS/MS proteomics and liquid-chromatography high-resolution accurate-mass spectrometry (LC-HRAM-MS)-based metabolomics to explore the molecular composition of P. pyralis spermatophores at the protein and metabolite levels, respectively. Importantly, this work adds to an expanding set of studies of non-model organisms that lack a sequenced genome yet have biologically interesting reproductive molecules that can be identified using a combination of de novo transcriptomes, proteomics and RNA sequencing. Indeed, similar approaches have identified reproductive molecules in crickets 53, moths 25, and butterflies 54, expanding our knowledge of nuptial gifts beyond existing model systems such as Drosophila and humans, and further shedding light on specific accessory gland functions and the molecular mechanisms of postcopulatory sexual selection.

Results

Probing P. pyralis reproductive tissue gene expression profiles by RNAseq. To elucidate gene expression in specific male accessory glands that manufacture nuptial gifts as well as in the female tissues that receive and process such gifts, we used RNAseq to assemble a transcriptome of P. pyralis reproductive tissues.
We successfully demultiplexed a total of 320,271,148 reads into 18 separate libraries, each containing an average of 17,792,841 sequences. All libraries were assembled into a de novo transcriptome containing 47,131 contigs with an average contig length of 1159 bp. Gene expression patterns indicated strong tissue-specificity, and biological coefficient of variation analysis based on normalized read counts demonstrated the expected clustering of biological replicates within each P. pyralis male tissue (Supplementary Fig. 1). Differential gene expression in male reproductive glands. To examine P. pyralis differential gene expression in the spiral accessory glands and other accessory glands, we identified transcripts that showed a log2 fold change (logFC) ≥ 2 in these reproductive glands compared to male thorax, and also showed a false discovery rate (FDR) ≤ 0.01. Our transcriptome analysis identified 3294 putative transcripts that were significantly up-regulated in the major accessory glands compared to male thorax (Supplementary Table 1). Both types of male accessory glands showed similar gene ontology (GO) functional categories (Molecular Function, Level III), including peptidases and peptidase regulators, metabolic processes, structural proteins, transmembrane transport and signal transduction (Fig. 2a). In the other male accessory glands, 11.5% of the transcripts had functions related to peptidases and peptidase regulators, compared to only 4.8% of genes in the spiral accessory glands (Fig. 2a). To gain insight into differentiated function between these male glands, we first identified sequences that were significantly up-regulated with LogFC ≥ 10 in either male spiral accessory glands or other accessory glands compared to thorax, then identified sequences that were significantly differentially expressed between the two male gland types (LogFC ≥ 2; FDR ≤ 0.01).
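The selection rule used above (logFC ≥ 2 versus thorax together with FDR ≤ 0.01) amounts to a simple table filter. The sketch below applies it to a toy table; the first two transcript IDs are taken from the text, while the third row and all numeric values are invented for illustration.

```python
import pandas as pd

# toy differential-expression table in the style of the study's output
df = pd.DataFrame({
    "transcript": ["DN10938_c0_g1_i1", "DN8730_c0_g1_i1", "DN00001_c0_g1_i1"],
    "logFC": [9.8, 8.5, 1.2],    # log2 fold change vs. male thorax
    "FDR": [1e-6, 1e-4, 0.2],    # false discovery rate
})

# keep transcripts passing both thresholds
up = df[(df["logFC"] >= 2) & (df["FDR"] <= 0.01)]
print(list(up["transcript"]))
```

Only the first two rows survive the filter, illustrating how the 3294 up-regulated transcripts reported above would have been selected from the full table.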
Comparison of GO functional categories for this subset of uniquely expressed genes confirmed that other accessory glands were mainly enriched in peptidase and peptidase regulator activities (Table 1). We further characterized differences between male spiral and other accessory glands by comparing expression levels of sequences co-expressed in both tissues (Fig. 3; Supplementary Table 2). The 14 annotated genes that were up-regulated in males' other accessory glands compared to spiral accessory glands were predicted to be involved in general cellular processes. The 13 genes up-regulated in male spiral accessory glands compared to other male accessory glands (Fig. 3) included a homolog of a metalloprotease, a disintegrin and metalloproteinase with a thrombospondin motif (ADAMTS; DN15036_c0_g1_i8). Effects of mating on male gene expression. We also examined reproductive gene expression in P. pyralis males 2 h after mating, a time when they are actively manufacturing new spermatophores. We identified 206 sequences in the spiral accessory glands and 253 sequences in the other accessory glands that were up-regulated in each tissue compared to thorax and contained a secretion signal (Supplementary Table 1). Of these, 402 were uniquely expressed in only one type of male accessory gland. In comparison to other males (Fig. 2a), the other accessory glands of recently mated males showed an increase in metabolic processes (Fig. 2b), particularly purine and cysteine metabolism, while the spiral accessory glands of recently mated males showed an increase in transmembrane transport function (Fig. 2b), primarily amino acid transporters.

Table 2 (column headings): tissue and protein functional class; sequence ID and description; e-value; % similarity; MW (kDa); gel section; predicted signal peptide.

Protein composition of the firefly nuptial gift. To examine the composition of P.
pyralis nuptial gifts, we dissected a spermatophore from a mated female immediately after copulation, separated solubilized proteins on an SDS-PAGE gel (Fig. 4), and examined protein composition by digestion of proteins into peptides followed by nano-LC-HRAM-MS/MS proteomic analysis. Combined with transcriptome data from P. pyralis male accessory glands and fat body, this approach allowed us to identify 425 proteins that were transferred to females in the male spermatophore. Of these, 208 were annotated by identifying homologs in other organisms using Blast2GO and InterProScan (Supplementary Table 4). Based on our male transcriptome results, we were also able to determine the putative anatomical production site for 68 of these spermatophore proteins (Table 2; Supplementary Table 3). As the spermatophore is extracellular, proteins that are packaged into the spermatophore presumably must first be secreted, though this is not the only possible mechanism of spermatophore incorporation. To identify protein products detected in the spermatophore that may be secreted, we performed an in silico prediction of signal peptide sequences. This analysis revealed that many of the proteins identified via proteomics and associated with differentially expressed transcripts do indeed contain predicted signal peptides (Tables 2 and 3; Supplementary Table 3). The spiral accessory glands were identified as the production site for two serine peptidases. One of these, a transcript with homology to the peptidase Snake (DN10938_c0_g1_i1), showed a LogFC of 9.8 compared to male thorax (Table 2; Supplementary Table 3), which is within the top 8% of differentially expressed genes in this male gland. Snake is a member of the protease cascade that leads to the activation of the Toll pathway, which is important for Drosophila embryonic development and immune response activation 55.
Another peptidase (DN8730_c0_g1_i1) showed homology to trypsin 1; with a LogFC of 8.5, this transcript is in the top 15% of most differentially expressed genes compared to male thorax (Table 2). Proteomics also confirmed the presence in the P. pyralis spermatophore of several male reproductive proteins apparently manufactured by other male accessory glands (Table 2; Supplementary Table 3). Among the peptidases, one transcript (DN14826_c0_g1_i1; LogFC = 3.5 compared to male thorax) showed significant similarity to Neprilysin 11 from Tribolium castaneum (Table 2). Another transcript showed homology to Neprilysin 2 56. We also investigated the transcriptome of male fat body, an insect tissue possessing high metabolic and protein biosynthetic activity. The proteomics dataset of the P. pyralis male spermatophore contained several proteins that appear to be synthesized in male fat body (Table 2; Supplementary Table 3). One was a cysteine protease, Cathepsin L11 (DN10232_c0_g1_i1; LogFC = 2.2 compared to male thorax), a lysosomal endopeptidase that can be secreted and can interact with structural proteins such as collagen and fibronectin 57. Metabolomic analysis of the firefly nuptial gift. To examine the small-molecule composition of the P. pyralis spermatophore, we conducted an LC-HRAM-MS metabolomic analysis aimed at elucidating compounds specifically enriched in the spermatophore compared to extracts from the male body with the posterior abdomen excised. In an untargeted metabolomic analysis, we noted several mass features exclusively present, or present at significantly higher abundance, in the spermatophore extract. However, these mass features did not match any compounds in the KEGG Database, suggesting they may represent specialized metabolites yet to be identified (MetaboLights Supplementary Data). Using a targeted metabolomic analysis, we determined that a known firefly defense compound, lucibufagin C, was present in both the spermatophore and male body.
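Targeted identification of a compound like lucibufagin C rests on matching an observed mass feature to an expected exact mass within a parts-per-million tolerance (alongside retention time and fragmentation checks). A minimal sketch of the mass-matching step follows; the m/z value used below is a placeholder, not a literature value for lucibufagin C.

```python
def ppm_error(observed_mz: float, expected_mz: float) -> float:
    """Signed mass error in parts per million."""
    return 1e6 * (observed_mz - expected_mz) / expected_mz

def matches(observed_mz: float, expected_mz: float, tol_ppm: float = 5.0) -> bool:
    """True if the observed feature is within the ppm tolerance window."""
    return abs(ppm_error(observed_mz, expected_mz)) <= tol_ppm

expected = 500.2000   # placeholder [M+H]+ m/z for an illustrative target
print(matches(500.2014, expected), matches(500.2100, expected))
```

A feature passing the ppm window would then be confirmed, as in the text, by comparing retention time, isotopic pattern and MS/MS fragmentation spectra between tissues.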
Lucibufagins have previously been shown to be a major class of anti-predator defense compounds in Photinus fireflies 58,59. In the positive ion mode extracted ion chromatograms (EICs), both tissues showed a large peak characteristic of lucibufagin C, as well as a smaller second peak likely representing a different isomer of diacetylated lucibufagin (Fig. 5). This targeted analysis also identified P. pyralis pterin, a high-abundance compound of unknown function previously purified from P. pyralis 60. The identity of lucibufagin C and P. pyralis pterin in the spermatophore was confirmed by comparison of retention time, exact mass, and MS/MS fragmentation spectra between male body and spermatophore. These compounds were among the most abundant mass features detected in the male body extract (Supplementary Fig. 3), and were identified without authentic chemical standards, as the feature retention time, exact mass, isotopic pattern, and fragmentation spectra were consistent with their respective structural identities. Gene expression in the female reproductive tract. To determine how specific tissues might process the spermatophore and interact with male reproductive proteins, we examined differential gene expression in the reproductive tract of P. pyralis females relative to thorax, although the single replicate available for female tissues meant that we could not test for statistical significance. However, using the more stringent criteria of a LogFC ≥ 3 and FDR < 0.01, we identified numerous highly expressed genes in different portions of the female reproductive tract (Table 3). The female bursa copulatrix initially receives the male spermatophore, which is then moved into the spermatophore-digesting gland, where the spermatophore is degraded over several days. In the combined spermatophore-digesting gland and bursa tissues, we found 80 transcripts that were up-regulated compared to female thorax, of which 33 were annotated (Table 3).
Four sequences showed homology to peptidases, including one sequence (DN8737_c0_g1_i1; LogFC = 4.9 compared to female thorax) with homology to angiotensin-converting enzyme, a zinc-metallopeptidase. We also examined gene expression in the female spermatheca, where male sperm are stored prior to fertilization, and identified 80 up-regulated genes (39 annotated) in this female reproductive tissue (Table 3). As in the spermatophore-digesting gland and bursa, a sequence (DN8737_c0_g1_i1; LogFC = 6.2 compared to female thorax) with homology to angiotensin-converting enzyme was up-regulated in the female spermatheca, along with five other peptidases (Table 3). Another peptidase showed homology to Neprilysin 2. Discussion. Despite recent advances, we remain in the early stages of deciphering the molecular interactions that transpire between male ejaculates and the female reproductive tract during and after mating. Clearly, a necessary first step is to identify the players on both sides. To gain insight into postcopulatory sexual interactions, we sequenced the transcriptomes of both male and female reproductive glands in the firefly P. pyralis and performed proteomic and metabolomic analyses of the male spermatophore gift. Firefly spermatophores are produced by several distinct male reproductive glands, and are delivered to and processed within the female reproductive tract. Our de novo transcriptome of male reproductive glands demonstrated up-regulation of several proteases, which may play a role in postmating interactions, as well as transport proteins, which may serve to replenish seminal proteins transferred at mating. Combined with spermatophore bottom-up proteomics, we found 208 annotated proteins packaged into the P. pyralis male spermatophore and transferred to females, and identified the putative anatomical production sites for 68 of these male proteins.
We also identified 217 spermatophore proteins that could not be annotated and may represent proteins that are rapidly evolving. Targeted metabolomic analysis also yielded the first evidence that P. pyralis males may incorporate lucibufagins, the primary antipredator defensive compounds in Photinus fireflies, into their nuptial gifts. We also examined gene expression in the female reproductive tract, and found up-regulation of several proteases. These results are discussed in greater detail below. Molecular Composition of Firefly Nuptial Gifts. Recent work reveals proteases to be a conserved protein class in the male seminal gifts of diverse taxa 18, suggesting their proteolytic roles help to regulate postcopulatory interactions. This study demonstrates that the reproductive accessory glands of P. pyralis males synthesize serine proteases, metalloproteases, and cysteine proteases, many of which are packaged into the nuptial gift and delivered to females (Table 2). We identified several metalloproteases that are produced by male accessory glands and transferred to females within the male spermatophore (Table 2; Supplementary Table 3). Metalloproteases transferred in the seminal fluid of D. melanogaster have been linked to the induction of egg laying and are also important for spermatogenesis and fertilization 56. Neprilysin 2, produced in the other accessory glands of P. pyralis males, has also been identified in the male ejaculates of Dermacentor variabilis ticks 61 and Melanoplus sanguinipes grasshoppers 62. When Neprilysin-like 1 was knocked down in male mice, their mates produced smaller litters 63. Similarly, down-regulation of Neprilysin 2 in D. melanogaster males reduced post-mating fertility in females 64. Angiotensin-converting enzyme, shown here to be transferred to females in the P. pyralis male spermatophore (Supplementary Table 4), has also been shown to occur in the seminal fluid of several insects, including C. capitata fruit flies 20, T. oceanicus 53 and M.
sanguinipes grasshoppers 62, and the flour beetle T. castaneum 65. In T. castaneum, knockdown of angiotensin-converting enzyme in males led to the production of abnormal sperm and decreased egg production by their mates 65. We also identified ADAMTS (DN15036_c0_g1_i8), another metalloprotease, which was up-regulated in the spiral glands of P. pyralis males (Fig. 3; Supplementary Table 2) and may be important for sperm fertilization ability 66; however, this protein was not detected in our spermatophore proteomics. It is important to note that the detection threshold for a given protein with bottom-up proteomics varies greatly, hence the absence of a protein from our proteomic results does not rule out its presence in the spermatophore. Overall, several metalloproteases synthesized in P. pyralis male accessory glands may be important for increasing male fertilization success, perhaps through enhancing sperm storage and/or release. Serine proteases represent common components of insect ejaculates, and have been shown to mediate post-mating physiological changes in females of several taxa. For example, trypsin-like serine proteases transferred in male nuptial gifts increase female oviposition in A. socius crickets 67 and D. melanogaster fruit flies 18. In P. pyralis males, the spiral glands produce two serine proteases, Snake and Trypsin 1, and we confirmed that these are also transferred to females in spermatophores (Table 2). Snake acts as an important mediator of the Toll pathway immune response in mosquitoes and fruit flies and is part of a serine protease cascade controlling synthesis of drosomycin, an antifungal agent. Thus, inclusion of these enzymes in P. pyralis nuptial gifts may potentially increase female immune response and reduce the likelihood of infection by microbial pathogens introduced during mating. We also identified the cysteine protease Cathepsin L11, a papain-like enzyme that appears to be produced by male fat body and transferred to the female (Table 2).
Cathepsin L is a secreted lysosomal endopeptidase, which degrades structural proteins such as collagen and fibronectin 71. Cathepsins are responsible for digestive proteolysis in the gut of cowpea weevils Callosobruchus maculatus 72, and may play a similar role in degrading the male spermatophore inside the spermatophore-digesting gland. Another possible function is suggested by the high concentrations of Cathepsin L found in pre-ovulatory follicles of mice, where it may initiate follicular rupture and ovulation 73. Radiolabeling studies in other Photinus fireflies demonstrated that some spermatophore-derived proteins are incorporated into female oocytes 45, so inclusion of Cathepsin L in the P. pyralis nuptial gift may act to stimulate follicle degradation and ovulation. Certain Photinus fireflies are known to derive protection against their predators through biosynthesis of specialized toxic steroidal pyrones known as lucibufagins 58,59. Notably, our metabolomic analysis provides preliminary evidence that P. pyralis males transfer detectable quantities of lucibufagins to females in their nuptial gift (Fig. 5; Supplementary Table 3). We hypothesize that males may package lucibufagins into their nuptial gift, where these defense compounds could augment the female's own defenses to help protect the female or her eggs against predators or microbial attack. Previous studies have found that other insect males also transfer defensive chemicals to females within their spermatophores or seminal fluid 8. In many cases, such defensive compounds are derived from host plants, including pyrrolizidine alkaloids in Utetheisa ornatrix moths 74, cyanogenic glycosides in several Heliconius butterflies 75, and vicilin-derived peptides in Callosobruchus maculatus cowpea beetles 76.
In blister beetles (family Meloidae), however, males actively synthesize a toxic terpene, cantharidin, which they store in their major accessory glands and transfer in their nuptial gifts 77. In fireflies, further experiments are needed to definitively elucidate the source of male lucibufagins, to quantify the amounts contained within male nuptial gifts, and to determine the extent to which male-derived lucibufagins may be used to defend the female or her eggs. Gene Expression in Female Reproductive Tracts. Postcopulatory sexual interactions are evolutionarily important, yet have proven challenging to study because they typically take place within the female reproductive tract. Moreover, to date, gene expression in those female reproductive tissues that receive and process male ejaculates has been examined for only a few taxa, including Drosophila spp. 27, the honeybee Apis mellifera 30, the corn borer moth Ostrinia nubilalis 25, and Pieris rapae butterflies 26. In Photinus fireflies, after the male spermatophore is deposited in the female's bursa copulatrix, it enters the spermatophore-digesting gland, where it subsequently disintegrates over the next several days 40. Male sperm are stored and remain viable within the female's spermatheca for up to two weeks before fertilization 78. In this study of P. pyralis fireflies, we demonstrated that the sperm- or spermatophore-receiving portions of the female reproductive tract express genes encoding proteases, protease inhibitors, and other proteins involved in immune response and in maintaining sperm viability. Female peptidases and peptidase regulators are likely to be important mediators of postcopulatory sexual interactions, and these have also been identified from the female reproductive tracts of other insects. In D.
melanogaster, female peptidases and peptidase inhibitors interact with male SFPs and are required to process at least three male proteins, which induce egg-laying and reduce female receptivity to remating, into their active forms 79. Within the Drosophila repleta group, female peptidases and peptidase inhibitors expressed in more promiscuous species show higher dN/dS ratios compared to monogamous species, indicating strong positive selection on these female reproductive proteins 80. In this study, we identified several proteases that are expressed in the reproductive tract of P. pyralis females. In the spermatheca, we identified six peptidases, including Neprilysin 2 (DN42525_c0_g1_i1), that were up-regulated compared to the female thorax. In both female tissues, we found a sequence encoding angiotensin-converting enzyme (DN8737_c0_g1_i1) that was up-regulated compared to the female thorax. As in male insects, female neprilysins have been shown to be critical in maintaining D. melanogaster fertility 64. When Neprilysin 2 is knocked down in D. melanogaster females, fewer eggs are laid and they show decreased viability 64. In P. pyralis females, Neprilysin 2 may also play a role in regulating ovulation and maintaining egg viability. Angiotensin-converting enzyme was also found in both P. pyralis female tissues, and has previously been shown to be expressed in the bursa copulatrix and spermatheca of female Lacanobia oleracea moths 81. The function of this peptidase in the female reproductive tract has yet to be determined. In summary, this study offers new insights into the molecular composition of the firefly spermatophore, and deepens our understanding of how such nuptial gifts can mediate postcopulatory interactions between males and females. One future challenge will be to perform functional studies in fireflies and other non-model organisms to determine how these reproductive proteins influence the reproductive fitness of both sexes.
Future studies examining intraspecific differences in nuptial gift composition will also shed light on the evolutionary forces that drive the origin and maintenance of nuptial gifts across taxa. Materials and Methods. Specimen and tissue collection. Photinus pyralis fireflies used in this study were collected at Mercer Meadows Pole Farm, Lawrenceville, NJ (40°18′23.4″N, 74°44′53.9″W) on 27 June and 11-12 July 2015, and identified based on male genitalia 82 and flash patterns. Both sexes were kept individually in plastic containers with sliced apple and a damp paper towel. The mating status of field-collected individuals was unknown. Fireflies were kept in the lab for less than one week prior to experimentation. We compared gene expression in male and female reproductive tissues as shown in Supplementary Fig. 2. Tissues were collected from fireflies anesthetized at −20 °C for 20 min and then dissected under 40-70× magnification in RNAlater. From 12 P. pyralis males, the following tissues were dissected and pooled into 3 biological replicates (4 males each): spiral accessory glands, other accessory glands, thorax, and fat body. Insect fat body is a metabolically active tissue responsible for protein synthesis; widely distributed throughout the abdomen, fat body is abundant surrounding the male accessory glands. As fewer females were available, three females were pooled to produce a single biological replicate of each of the following tissues: spermatheca, spermatophore-digesting gland and bursa, and thorax. All tissues were stored in RNAlater at −80 °C until RNA extraction. After mating, Photinus males immediately begin assembling a new spermatophore 40; we thus predicted that the accessory glands of recently mated males would show higher transcription levels of functionally related genes. Males were mated with females in the lab, and then spiral accessory glands and other accessory glands were dissected 2 h after mating pairs had separated.
Tissues harvested from four recently mated males were pooled into one biological replicate and stored in RNAlater at −80 °C until RNA extraction. RNA extraction and sample preparation. Prior to RNA extraction, each pooled biological replicate was frozen in liquid nitrogen and homogenized in QIAzol lysis reagent (Qiagen, Valencia, CA, USA) using a mortar and pestle. RNA was extracted using the RNeasy Lipid Tissue Kit (Qiagen), and Illumina sequencing libraries were prepared from total RNA enriched to mRNA with a polyA pulldown using the TruSeq RNA Library Prep Kit v2 (Illumina, San Diego, CA). A total of 18 libraries were sequenced at the Whitehead Institute Genome Technology Core (Cambridge, MA) on two lanes of an Illumina HiSeq 2500 using rapid mode (PE 100 bp). Raw sequencing data have been uploaded to NCBI SRA (SRP078386). Scientific RepoRts | 6:38556 | DOI: 10.1038/srep38556 Transcriptome assembly and differential expression analysis. Resulting RNA-Seq reads in FASTQ format were checked with the FastQC software package (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), and Illumina TruSeq3 adaptor contamination and low-quality reads were removed with the Trimmomatic software package (http://www.usadellab.org/cms/?page=trimmomatic) 83, with the following parameters: "ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 SLIDINGWINDOW:4:5 LEADING:5 TRAILING:5 MINLEN:25". 185,402,330 paired reads pooled from all libraries remained after quality filtering. A de novo transcriptome was assembled from the pooled quality-filtered paired reads with Trinity 2.2.0 84 using default parameters with the exception of "-min_glue 5 -min_kmer_cov 3", on a single high-memory server (Whitehead Institute). Candidate ORFs were translated in silico from the de novo transcriptome using TransDecoder 2.0.1 85, with the minimum protein length set to 20 amino acids. The de novo transcriptome and predicted ORF annotations have been uploaded to NCBI TSA (GEZM00000000).
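For readers wishing to reproduce the quality-filtering step reported above, the Trimmomatic invocation can be assembled as an argument list. This is an illustrative sketch only: the FASTQ file names are assumptions, and the wrapper command `trimmomatic` is assumed to be on the PATH (the official distribution is instead invoked via `java -jar trimmomatic.jar`).

```python
# Sketch: build the paired-end Trimmomatic command with the paper's parameters.
# File names are hypothetical; only the filtering steps come from the text.
def trimmomatic_cmd(r1, r2, out_prefix, adapters="TruSeq3-PE.fa"):
    """Return the PE-mode Trimmomatic argument list for one library."""
    # PE mode expects four outputs: paired/unpaired for each read direction.
    outs = [f"{out_prefix}_{tag}.fq.gz" for tag in ("1P", "1U", "2P", "2U")]
    return ["trimmomatic", "PE", r1, r2, *outs,
            f"ILLUMINACLIP:{adapters}:2:30:10",
            "SLIDINGWINDOW:4:5", "LEADING:5", "TRAILING:5", "MINLEN:25"]

cmd = trimmomatic_cmd("lib_R1.fastq.gz", "lib_R2.fastq.gz", "lib")
print(" ".join(cmd))
```

The list form can be passed directly to `subprocess.run`, avoiding shell quoting issues.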
We note that the sequence names of the uploaded sequences have an internal prefix indicating sequencing run and assembly version, "151_Ppyr_v3_TRINITY_", but otherwise represent transcripts identical to those analyzed in the manuscript. Expression analysis was performed using Trinity via the included "align_and_estimate_abundance.pl" script, with default parameters. This script utilizes Bowtie 86,87 to map reads to assembled transcripts and RSEM 88 to perform transcript quantification via expectation maximization. We identified male and female genes that were significantly differentially expressed between specific tissues using the Bioconductor package edgeR (comparisons of interest shown in Supplementary Fig. 2). Male genes were considered significantly differentially expressed if they had a log2 fold change (LogFC) ≥ 2 and a false discovery rate (FDR) ≤ 0.01. We focused our subsequent analysis on male genes that showed significant up-regulation in either spiral accessory glands or other accessory glands relative to male thorax. Female comparisons lacked replicates, so significant differential expression could not be assessed. Because of this lack of replication for female tissues, we are cautious in the conclusions drawn from these data. We also directly compared genes that were up-regulated relative to thorax in male spiral accessory glands (LogFC ≥ 10) to those up-regulated in male other accessory glands relative to male thorax (LogFC ≥ 10). This list of genes was then analyzed for differential expression between the spiral accessory glands and the other accessory glands to determine how the function of each tissue differs. After differential expression analysis, all significantly differentially expressed genes were annotated using Blast2GO and InterProScan. To identify putative homologs, a Blast search was conducted between each sequence and the entire NCBI non-redundant protein database 96.
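The significance filter described above was applied in edgeR (R); as a language-neutral illustration, the same thresholding (LogFC ≥ 2 and FDR ≤ 0.01) can be expressed in a few lines of Python. The first two transcript IDs and their LogFC values come from the Results; the FDR values and the third entry are invented for illustration.

```python
# Toy differential-expression table; only the thresholding logic is the point.
results = [
    {"transcript": "DN10938_c0_g1_i1", "logFC": 9.8, "FDR": 1e-6},  # Snake homolog
    {"transcript": "DN8730_c0_g1_i1",  "logFC": 8.5, "FDR": 1e-4},  # trypsin 1 homolog
    {"transcript": "DN_hypothetical",  "logFC": 1.2, "FDR": 0.2},   # fails both cuts
]

def significant(rows, min_logfc=2.0, max_fdr=0.01):
    """Keep transcripts passing both the fold-change and FDR thresholds."""
    return [r["transcript"] for r in rows
            if r["logFC"] >= min_logfc and r["FDR"] <= max_fdr]

print(significant(results))  # -> ['DN10938_c0_g1_i1', 'DN8730_c0_g1_i1']
```

Raising `min_logfc` to 10 reproduces the stricter cut used for the gland-versus-gland comparison.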
All sequences with significant Blast hits (e-value ≤ 10⁻¹⁰) were then mapped, and annotation scores were computed for all possible gene ontology terms. We used InterProScan to obtain further protein domain/motif information, enabling us to identify protein domains that indicate secretion 97. It is important to note that these annotation methods rely on previously characterized proteins, making it difficult to annotate more rapidly evolving sequences. Here, we only discuss sequences that were successfully annotated using Blast2GO. All differentially expressed genes that were successfully annotated had similarity-search e-values ≤ 6 × 10⁻¹⁰. Principal component analysis. We summarized multivariate variation in gene expression among the various male and female tissues using principal component analysis. To normalize read counts, the trimmed mean of M-values (TMM) normalization method was applied for each transcript using edgeR 98. Next, a biological coefficient of variation analysis was conducted in edgeR. Spermatophore proteomics. One hour after the initiation of stage II copulation 99, a mating pair of Photinus pyralis fireflies was separated, and the spermatophore was carefully dissected out from the female's reproductive tract. Upon removal from storage, the spermatophore was transferred into 50 µL of 2× Laemmli Sample Buffer (Bio-Rad) with 2% β-mercaptoethanol, and heated to 95 °C for 5 min. Sample (25 µL) was loaded onto a 12% discontinuous Laemmli SDS-PAGE gel. BLUEstain™ Protein Ladder (Gold Biotechnology) was loaded in a neighboring well for inferring protein size. Eight sections containing proteins ranging from >180 kDa to ~6 kDa were cut from the gel and provided to the Whitehead Institute Proteomics Core Facility (Cambridge, MA). Thereafter the samples were digested with trypsin and run individually on a Dionex Ultimate 3000 RSLCnano nanoflow LC coupled to a ThermoFisher Scientific Orbitrap Elite mass spectrometer.
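The TMM normalization mentioned above for the principal component analysis is implemented in edgeR; its core idea, a scaling factor derived from a doubly trimmed mean of per-gene log ratios between a sample and a reference, can be sketched as follows. This is a greatly simplified illustration with invented counts: real TMM also trims by absolute expression level and applies precision weights.

```python
import math

def simple_tmm_factor(sample, reference, trim=0.3):
    """Scaling factor from a trimmed mean of per-gene log2 ratios (simplified)."""
    # M-values: log2 ratio of expression between sample and reference,
    # skipping genes with zero counts in either library.
    m_values = sorted(
        math.log2(s / r)
        for s, r in zip(sample, reference) if s > 0 and r > 0
    )
    k = int(len(m_values) * trim)                 # drop the most extreme ratios
    kept = m_values[k:len(m_values) - k] or m_values
    return 2 ** (sum(kept) / len(kept))           # back to a linear scale factor

sample    = [10, 200, 35, 3000, 50]   # invented raw counts
reference = [10, 100, 30, 1000, 40]
print(round(simple_tmm_factor(sample, reference), 2))
```

Note how trimming discards the outlying ratios (here the highly induced genes) so that a few extreme transcripts do not dominate the library-size correction.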
In silico translated ORFs from the Trinity de novo transcriptome, concatenated with common proteomic contaminants, were used as the search database to identify tryptic peptides from the samples. Mascot (Matrix Science, London, UK; version 2.5.1) was used as the proteomic search engine. Verification of peptide and protein identifications and general analysis were performed in Scaffold (Supplementary Methods 1; Scaffold version 4.4.8, Proteome Software Inc., Portland, OR). Raw proteomic data and peptide identifications have been uploaded to the EBI PRIDE database (https://www.ebi.ac.uk/pride/archive/) under accession number PXD004005. Potential signal peptides were predicted from the in silico predicted ORFs using SignalP-4.1 100. Spermatophore metabolomics. To examine the small-molecule composition of the male spermatophore, we conducted an untargeted liquid-chromatography high-resolution accurate-mass mass-spectrometry (LC-HRAM-MS) metabolomic analysis aimed at elucidating compounds specifically enriched in the spermatophore. Again, a pair of P. pyralis fireflies was separated shortly after mating, and the spermatophore was carefully dissected out of the female's reproductive tract. Briefly, we compared mass features detected in 1:1 water:methanol extracts of the spermatophore and of the body of an adult P. pyralis male whose posterior abdominal segments (including the lantern and reproductive tissues) had been removed. We conducted targeted analyses to look for lucibufagin, pterin, and several insect hormones, as well as an untargeted metabolomic analysis to identify any compounds enriched in male spermatophores. Data processing and analysis were performed with MZmine2 101 (see Supplementary Methods for details). Raw and mzTab-format feature-called metabolomic data from the P. pyralis spermatophore and body have been uploaded to the EBI MetaboLights database (http://www.ebi.ac.uk/metabolights/) under accession number MTBLS362.
The American Movement to Aid Soviet Jews. William W. Orbach. University of Massachusetts Press. $15. The author has had long service with Time-Life Books. This book is impressionistic and idiosyncratic. It resembles in many respects T. Kiernan's The Arabs (Little, Brown & Co., 1975). There are many vices of style, journalistic notes interlard historical sections, and nothing goes very deep. Much of the book appears to have been written a good while before publication. Forbis's prejudice against monarchy is forced on the reader throughout. Yet, ironically, he seems to be impressed by the fact that Iran is a most traditional society. On the other hand, he mentions that the clergy opposed Reza Shah's original intention to become (like Atatürk) a President. He might have noted the monarchic character of the presidency (not only in Islamic states) and recognized the autocratic, hereditarian character of Shiah. But perhaps that would be asking too much from a writer who speaks of Isfahan as the "pluperfectly" (sic) Persian city (p. 39), says that in the back wall of the sanctuary of a mosque are an altar and a pulpit, describes Christian theology as stern compared to Islam, and acknowledges gratuitously that the Shah was courteous in putting his visitor at ease.
Measurements of the Atmospheric Electric Field through a Triangular Array and the Long-range Saharan Dust Electrification in Southern Portugal. Atmospheric electric field (AEF) measurements were carried out at three different sites forming a triangular array in Southern Portugal. The campaign was performed during the summer, a season characterized by Saharan dust outbreaks; the 16th-17th July 2014 desert dust event is considered here. Evidence of long-range dust electrification is attributed to the air-Earth electrical current creating a positive space charge inside the dust layer. An increase of ~23 V/m is observed in the AEF on the day of the dust event, corresponding to space-charge densities of ~20-2 pC m⁻³ (charge-layer thicknesses of ~10-100 m). A reduction of the AEF is observed after the dust event. Introduction. Dust storms have been receiving significant attention in the past decades, among their different roles, because of their impact on planetary radiative forcing and their relevance to Earth's climate. Though little information has been collected on dust electrification (e.g., Ette, 1970), interest has risen recently due to its importance in two main areas: energy systems and planetary exploration. In the former, dust electrification can have technological importance, since it is of great usefulness in the development of automatic electrostatic dust-particle removal from solar energy systems, as was used on lunar missions. This technological improvement on Earth would permit an increase in the efficiency of these systems while reducing water consumption. In the latter, the understanding of Martian dust-devil electrification is expected to be boosted by the ExoMars mission, which will deploy two payloads later this year: DREAMS and MicroARES; these measuring instruments are expected to further contribute to the understanding of these phenomena on Mars. Moreover, Williams et al. reported on the electrification of haboobs in the Sahelian belt of West Africa.
Measurements were made in the (source) region where the storms developed, and significant electric perturbations were only found under heavy dust (high concentrations of large-sized particles), exhibiting in most occurrences strong and negative monopolar electrification (absolute electric fields of ~1-10 kV/m). These observations seem to be in line with those of Kamra, in which the author states that most dust storms dominated by clay minerals tend to produce negative space charges in the source region. Nevertheless, debate still exists on whether the negative electrification comes from clay minerals (dust) or quartz minerals (sand) (Williams et al.). Many open questions also exist on the way space charge generated in the source region behaves under long-range transport. In principle, only small particles (e.g., clay particles with sizes ranging from 1 to 100 µm) can be transported, and according to previous observations of negative clay electrification, negative perturbations in the electric field would have to be seen away from the source region. Even so, the detailed work of Reiter and co-authors seems to contradict this. The author has shown that, during a Saharan dust outbreak that reached the Zugspitze Peak (Germany), a positive space-charge density (SCD) formed at ~3 km altitude, two times higher than under normal "clean" conditions. Tropospheric LIDAR measurements showed that the dust layer was co-located with the space-charge density around ~3 km, and chemical aerosol analysis showed that sand particles were dominant, with significantly increased concentrations of SiO₂ and Al₂O₃. More recently, balloon-borne charge measurements of Saharan dust layers (up to 4 km) have been made in the Cape Verde Islands, where Saharan dust outbreaks frequently occur. The experiment depicted a maximum positive charge density of ~25 pC m⁻³.
Furthermore, the authors estimated that dust charge takes roughly 70 s to decay and that, consequently, no long-range electrification would be observable. For that reason, the authors argued that a possible mechanism to explain long-range dust charging is the vertical air-Earth electric current, imposed by the global electric circuit (GEC), flowing through the atmospheric electric conductivity gradient inside the dust layer (Nicoll and Harrison, 2010). The conductivity gradient is a consequence of small-ion scavenging by dust particles: the ion-particle attachment process charges the dust particles, which have low electrical mobilities, significantly decreasing the electric conductivity and generating the gradient. Ohm's law then reflects this reduction in conductivity as an increase in the atmospheric electric potential gradient (PG)¹. Thus, if dust charging were caused by the GEC action, desert dust plumes would be positively charged far from their source, and positive perturbations in the PG should be found. This was the case in Reiter's observations, and the present ones also tend to corroborate it. Previous work on long-range dust electrification has focused on a single measuring site where the PG was recorded (e.g., Rudge, 1913). Nevertheless, recent efforts in atmospheric electricity concern the development of networks of PG field-mills on large time (~1 hour) and space (~100 km) scales, as is the case of the network installed in South America (Tazca et al., 2014) and the one under development in Europe.

¹ In atmospheric electricity it is common to use the PG to quantify the atmospheric electric field. The convention is that the PG is defined by PG = dV_I/dz, where V_I is the ionospheric potential with respect to Earth's surface (where V = 0) and z is the vertical coordinate. By this convention the PG is positive on fair-weather days (by international standards, fair-weather days are those with cloudiness less than 0.2, wind speed less than 5 m s⁻¹, and an absence of fog or precipitation; Chalmers, 1967) and is related to the vertical component of the atmospheric electric field E_z by E_z = −PG. The GEC is a consequence of V_I: it is charged in the thunderstorm-active regions of the globe and discharged in the fair-weather regions by the flow of an air-Earth electric current. The daily variation of global thunderstorm activity modulates the PG in what is called the Carnegie curve.

The existence of such networks raises the possibility of using coordinated PG measurements to track atmospheric phenomena such as smoke-plume transport, known to affect PG measurements. In this context, an experiment was conceived and undertaken during the ALEX2014 meteorological campaign (www.alex2014.cge.uevora.pt). It consisted of the installation of three similar PG field-mills in Southern Portugal, forming a triangular array that allowed the recording of PG time series over a three-month period, from June to August 2014. This period corresponds to the summer season in the northern hemisphere and represented a unique opportunity to perform such an experiment for two main reasons: the frequency of fair-weather days and the occurrence of isolated Saharan dust outbreaks transported from Africa to the measuring region. The use of an array of sensors instead of a single sensor should permit distinguishing global perturbations from local ones.
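As a back-of-the-envelope check (not a calculation from the paper itself), the PG perturbation produced by a thin, uniformly charged layer follows from Gauss's law, ΔE ≈ ρd/ε₀, where ρ is the space-charge density and d the layer thickness; the two limiting combinations quoted in the abstract both give roughly the observed ~23 V/m.

```python
# Order-of-magnitude sketch: field perturbation from a uniformly charged layer.
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def pg_perturbation(rho_pc_m3: float, thickness_m: float) -> float:
    """PG perturbation (V/m) for space-charge density rho (pC/m^3) over d (m)."""
    rho = rho_pc_m3 * 1e-12            # pC/m^3 -> C/m^3
    return rho * thickness_m / EPSILON_0

# The two limiting cases quoted in the abstract:
print(pg_perturbation(2.0, 100.0))     # ~22.6 V/m (2 pC/m^3 over 100 m)
print(pg_perturbation(20.0, 10.0))     # ~22.6 V/m (20 pC/m^3 over 10 m)
```

Both cases share the same column charge density ρd ≈ 2 × 10⁻¹⁰ C/m², which is why they produce the same field perturbation.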
This paper is organized as follows: section 2 describes the experimental setup; section 3 outlines the Saharan dust event of 16th July 2014 (day 46 of the campaign); section 4 presents the PG measurements during the ALEX2014 campaign; section 5 discusses the results, and a brief formulation is derived to reinforce the observations; and in section 6 the main conclusions, along with recommendations for future work, are given. Potential Gradient (PG) array and Aerosol Optical Depth (AOD) measurements. An equilateral triangle is formed by three JCI field-mills, separated by nearly 50 km from each other, forming a triangular array covering about ~1000 km² in Southern Portugal (Figure 1). The geographic locations of the three sites at which PG measurements were conducted are: Évora (EVO) at 38.50°N, 7.91°W; Amieira (AMI) at 38.27°N, 7.53°W; and Beja Airbase (BEA) at 38.07°N, 7.93°W. The EVO and BEA sites follow an almost North-South alignment, whilst AMI is deviated more to the East and is situated approximately midway between the other two sites. The EVO station is situated in the center of the city of Évora (~50,000 inhabitants), where the major sources of pollutants are anthropogenic activities such as traffic and heating (winter) and cooling (summer) air systems. In EVO, a JCI 131 was installed on the University of Évora campus (at 2 m height), with a few trees and two University buildings in its surroundings (~50 m away). The instrument was calibrated in 2012 and has been operating since 2005. The AMI station is located on the shoreline of the southern part of the Alqueva reservoir (currently the largest man-made lake in Western Europe), set upon a hill approximately 30 m above the lake water level, with low vegetation in its surroundings. The BEA station is located further south, on an air base on the outskirts of the small city of Beja (~40,000 inhabitants). In AMI and BEA, two identical JCI 131F field-mills were used, likewise installed at 2 m height above the ground.
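The quoted array area can be sanity-checked with elementary geometry, assuming (as the text does) a roughly equilateral triangle with ~50 km sides; the true site separations vary slightly.

```python
import math

# Area of an equilateral triangle: sqrt(3)/4 * side^2.
side_km = 50.0
area_km2 = math.sqrt(3) / 4 * side_km ** 2
print(round(area_km2))  # ~1083 km^2, consistent with the quoted ~1000 km^2
```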
The characterization of the aerosol conditions in the region was based on the AERONET station () located at EVO. An automatic sun-tracking photometer (CIMEL CE-318-2) is operated there to measure aerosol optical depths (AOD) at several wavelengths in the range 340-1640 nm. The AOD is a measure of the solar radiation extinction due to the aerosol load present in the atmospheric column. Moreover, the spectral dependence of the optical depth, expressed by the Ångström exponent (AE), provides information on the size distribution of the aerosol population (i.e., the relative proportion of the fine and coarse aerosol modes). Desert Dust Transported into Southern Portugal During the period of the study, the presence of Saharan dust over the campaign region was detected by sun-photometer, with maximum intensity on 16th-18th July. Trajectory analysis (not shown) confirmed this scenario of dust transported from the Sahara region. Various sun-photometer measurements within the AERONET network, including the one installed at Évora (EVO), are depicted in Figure 2, which provides further insight on the dust outbreak, including an indication of its spatial extension. The measurements show that the dust plume extended at least up to the central Iberian Peninsula. One of the stations, Badajoz in Spain (BJZ), is located near Évora, while the other three are clustered in the south of Spain: Málaga (MLG), Granada (GRA) and Cerro de Poyos (CDP) (see Figure 1). Optical depths up to 0.51 (at 440 nm) and a small wavelength dependence of the optical depth, a typical signature of dust, were observed in the period around 16th-18th July, as can be seen in Figure 2. A strong increase in the aerosol perturbation is seen in the AOD during 16th July, which persisted during the following two days; on 19th July the perturbed aerosol was no longer visible in the data. Data are scarce for EVO and BJZ on 18th and 19th July due to clouds. The NAAPS maps in Figure 3 are consistent with this picture. 
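The Ångström exponent follows from the AOD at two wavelengths via AE = -ln(τ1/τ2)/ln(λ1/λ2). A short illustrative computation; the numeric AOD pairs below are invented for illustration, not campaign data:

```python
import math

def angstrom_exponent(aod1, aod2, lam1_nm, lam2_nm):
    """Angstrom exponent from AOD at two wavelengths:
    AE = -ln(aod1/aod2) / ln(lam1/lam2).
    Low AE (roughly < 0.5) indicates coarse particles such as desert
    dust; high AE (roughly > 1.5) indicates fine-mode aerosol."""
    return -math.log(aod1 / aod2) / math.log(lam1_nm / lam2_nm)

# Nearly wavelength-independent AOD -> low AE, a dust-like signature
ae_dust = angstrom_exponent(0.51, 0.48, 440.0, 870.0)
# Strongly wavelength-dependent AOD -> high AE, fine-mode aerosol
ae_fine = angstrom_exponent(0.40, 0.15, 440.0, 870.0)
```

This is why the "small wavelength dependence" noted in the text translates directly into the reduced AE seen during the event.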
A north-eastward movement apparent in Figure 3 is fairly consistent with the strong increase in optical depth at all sites at the beginning (16th July). Additionally, some south-to-north gradient in aerosol load is observed, as the optical depths measured at the stations in the south of Spain were always higher than those at Évora and the neighboring Badajoz station. It can be concluded that the sites where the electric field was being measured were under the dust perturbation. It is interesting to note that, on the one hand, CDP (next to the Granada station) is a mountain site, i.e., the measurements are performed near 2000 m (1830 m a.s.l.); on the other hand, the optical depths measured there are a large fraction, about 60-75%, of the optical depths measured at GRA (680 m a.s.l.) and MLG (40 m a.s.l.). This means that the dust was mainly present in the free troposphere, as most frequently happens and is known in the literature (e.g., ). Bearing in mind a mean layer height of 3.7 km for Saharan dust layers in the free troposphere over Évora, after Preißler et al., it is fair to assume that during this episode the dust was also mainly found in elevated layers in the region where the present study was conducted. Information from the satellite-borne lidar CALIOP () provides further insight on the dust plume and seems to confirm the above discussion. Its ground track was near the Portuguese coast, at less than 200 km from the region under study, and close to 03:00 UTC. In Figure 4, the attenuated backscatter coefficient indicates the presence of a layer aloft, between about 2 and 3 km, for the region under study, and its high depolarization capability, as given by the depolarization ratio, confirms the dusty nature of the aerosol plume observed. Potential Gradient data To ease the understanding of the plots, the beginning of the campaign is set as day 1 (1st June 2014) and the end as day 88 (28th August 2014). 
In this notation the dust event of 16th of July corresponds to day 46 of the campaign. Pollution levels in BEA and AMI are low in comparison to EVO. This is a fundamental aspect of the present study, since high pollution levels leave a common signature on the PG records of large metropolises (). A quality-control criterion was applied to the data, and values within the precision threshold of the field-mills (~|1| V/m) were rejected. This removes values that correspond to equipment malfunction and/or maintenance, such as when a field-mill stops operating but the data logger continues to record. Two analyses were performed over the 1-hour averaged data: a robust lowess (locally weighted linear regression) smoothing and a wavelet analysis. The wavelet periodogram is, in some sense, a visual description of the way the dominant periods in the data evolved during the campaign. The PG measurements at the EVO station are represented in Figure 5a, where a 13-day gap appears due to equipment malfunction (starting on 12th June at 00:00 UTC and lasting until 25th June at 00:00 UTC). An apparent modulation of the PG can be seen with the lowess curve after the desert dust event on day 46 (marked with a dashed vertical line). It seems to point towards a reduction of the PG after the dust event that is also observed at the other two stations and which could be the result of a space charge generated by the desert dust event itself (as will be further discussed). This could establish a signature of the event in the PG data, though it should be mentioned that the dip does not start at the same time as the dust appears in the AOD measurements. Here, the vertical line corresponds to the maximum of dust concentration, with the dust event starting two days before; no noticeable change in the PG was observed at the starting day, though. This could be related to the time needed for enough dust to accumulate in the column above in order to affect the PG. 
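The quality-control and hourly-averaging steps described above can be sketched as follows; a minimal Python illustration with an assumed data layout (timestamps in seconds, PG in V/m), not the campaign's actual processing chain, and omitting the lowess and wavelet stages:

```python
def qc_and_hourly_mean(times_s, pg_vm, threshold=1.0):
    """Reject samples inside the field-mill precision threshold
    (|PG| <= ~1 V/m, e.g. a stopped mill whose logger keeps recording),
    then average the surviving samples into 1-hour bins."""
    bins = {}
    for t, pg in zip(times_s, pg_vm):
        if abs(pg) <= threshold:
            continue  # within instrument precision: discard as malfunction
        bins.setdefault(int(t // 3600), []).append(pg)
    # hour index -> mean PG of the accepted samples in that hour
    return {h: sum(v) / len(v) for h, v in bins.items()}
```

The resulting 1-hour series is what a robust smoother and a wavelet periodogram would then be applied to.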
Additionally, Figure 5b shows the wavelet periodogram of the PG at EVO; the dominant periods are marked with solid black contour lines. This figure shows the persistence of the one-day periodicity throughout the observational period, as expected. The one-day periodicity is a consequence of the action of the global electrical circuit, and the absence of that periodicity may mean that the PG is being perturbed. This is the case for the day of the dust event, day 46. The PG measurements at AMI were previously discussed in Lopes et al. in the context of radon interaction with atmospheric ions in fair-weather conditions. In the present analysis the complete record of the Amieira time series is used, depicted in Figure 6a. Here, no gaps in the data are observed. Additionally, the PG at AMI shows strong oscillations on day 22 of the campaign, which were probably caused by thunderstorms and periods of heavy rain. This event has a specific signature in the wavelet analysis (Figure 6b), presenting high spectral power from periods of 2 hours up to 2 days. A similar variation is found at the BEA station. Unfortunately, no data are available at the EVO station on that day. Moreover, the wavelet periodogram for AMI, Figure 6b, also shows the persistence of the one-day periodicity, but in addition it evidences a half-day periodicity that could reflect the action of the lake near which the station was installed (Lopes et al.). A clear one-day periodicity is present on the day of the dust event. The PG measurements at the BEA station (Figure 7a) and the corresponding wavelet analysis (Figure 7b) are generally similar to those observed at AMI. Nevertheless, the BEA data show two sizable gaps: a smaller one from 1st June 13:50 UTC until 3rd June 12:55 UTC and a larger one from 3rd July 22:47 UTC up to 9th July 9:35 UTC. 
Furthermore, the most significant information extractable from the wavelet periodogram for BEA, Figure 7b, is the persistence of the one-day periodicity, which appears to be diminished during the desert dust event. This can be a confirmation that the desert dust signatures are not local; instead, they result from a regional process. In this spirit, the NAAPS maps (Figure 3b,c) show that the desert dust event arrived at the three stations simultaneously on the time scale of the dust plume transport (hours to days), which means that the influence of this event should be similar and closely synchronous at all stations. It should be pointed out that the PG data from the BEA station apparently show an increase in the PG on the day of the event, followed by a decrease of the trend in the days after the desert dust event. This is highlighted by the lowess smoothing (Figures 5a, 6a and 7a). To look deeper into the possible effects of the Saharan dust on the PG measurements made by the array of sensors, the daily variations for the EVO, AMI and BEA stations for days 44 to 48 of the ALEX2014 campaign are represented, respectively, in the upper, middle and lower five panels of the corresponding figure. Discussion In order to quantify the possible impact of the desert dust layer on the PG, the difference between the observed PG during the event and the expected PG without the dust layer (defined as ΔF) is estimated through the procedure described below. The PG data at the three stations and the AE are presented in a daily boxplot representation (Figure 8) around the time of occurrence of the maximum of the desert dust event (day 46 of the campaign). A significant reduction of the AE is clearly depicted (Figure 8a), identifying the aforementioned event. Both the AMI and BEA stations show an increase in the median over the lowess trend that is not observed at EVO. 
The fact that this increase is not present at EVO can be a result of local influences affecting the PG, since the EVO station is located in an urban environment affected by local pollution (mainly from traffic), which is known to impact severely on the PG (). (On each box, the central dot is the median, the limits of the box are the 25th (first quartile, q1) and 75th (third quartile, q3) percentiles, and the whiskers (solid lines) extend to the most extreme data points not considered outliers. The maximum whisker length (w) is set to 1.5, and outliers are defined as points larger than q3 + w(q3 - q1) or smaller than q1 - w(q3 - q1).) In fact, the desert dust plume is expected to travel at altitudes of about 3 km above the surface (), implying that atmospheric processes below it (especially below the planetary boundary layer, PBL) can easily disguise the desert dust effect. Furthermore, if the lowess median value for the day of the event at AMI and BEA is used as the long-term reference for clean-day behavior, 57.8 and 76.8 V/m (respectively), the median value for that day at both stations, 80.5 and 100.6 V/m (respectively), can be considered as the value of the perturbed PG. This allows the estimation of the observed ΔF: 22.7 and 23.8 V/m for AMI and BEA, respectively. The ΔF value is remarkably similar at both stations and reveals a possible connection in the way both stations react to the desert dust event, as long as they are not affected by local effects. It should be mentioned that using the lowess trend as a reference is a way to quantify the possible effect of the desert dust electrification and is not meant to give a precise result. For example, on day 52 (Figure 8) a similar situation is found, though not attributable to the desert dust event reported here. Moreover, the detailed view in Figure 8 also highlights the trend for PG decrease after the desert dust event, in particular on day 48 of the campaign (two days after the event). 
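The boxplot convention used here (w = 1.5 whiskers) corresponds to the usual Tukey fences, and the outlier bounds follow directly from the quartiles; a small sketch:

```python
def outlier_bounds(q1, q3, w=1.5):
    """Tukey-style outlier fences as defined for the boxplots:
    points larger than q3 + w*(q3 - q1) or smaller than
    q1 - w*(q3 - q1) are flagged as outliers."""
    iqr = q3 - q1  # interquartile range
    return q1 - w * iqr, q3 + w * iqr
```

For example, with q1 = 10 V/m and q3 = 20 V/m the whiskers can extend at most to -5 and 35 V/m.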
This phenomenon can be attributed to the dispersion of the space charge in the low-conductivity region below the PBL after the plume has passed. It should be mentioned that the dust event does not start at the same time as the dust appears in the AE measurements, but rather a few days before; however, there is no noticeable change in the PG at that time. One explanation is the time necessary for enough dust to accumulate in the column above in order to affect the PG. Dust electrification is usually recognized to result from contact and triboelectric charging between particles being blown. The basic mechanism for charge separation is commonly thought to be the fact that, during collisions, the smallest grains gain negative charge with respect to the larger particles (Freier, 1960; Duff and Lacks, 2008). After this size-dependent charging, the smallest particles are separated from the larger ones by gravitation (the smallest particles being lifted more easily than the larger ones), inducing what might be called gravitational charge separation, which is consistent with the PG observations in the source regions of dust storms (e.g., Kamra, 1972). Nevertheless, contact and triboelectric charging depend strongly on the grain collision frequency, and though collision frequencies high enough to cause dust electrification are expected in dust storms near the source region, this is not the case for regions far from it, as in the present case. The layers that reach distant locations have low particle densities, which correspond to low collision frequencies, ruling out contact and triboelectric charging as the charging mechanism. Assuming that dust charge decays on time-scales of minutes (), the dust layers will lose their charge if charging is absent. Consequently, the dust layers will no longer be charged when away from the source regions. 
Thus, to explain long-range electrification it is reasonable to consider that the charging of the dust layers is caused by the action of the air-Earth electric current. In accordance with the discussion in Nicoll et al., a layer of uncharged dust particles will scavenge atmospheric ions by attachment to the large dust particles, reducing the air conductivity in that region. Such a reduction in conductivity results in the creation of a space charge density (SCD) by the action of the air-Earth current, as follows: the air-Earth electric current, flowing from the ionosphere to the Earth's surface, brings to the upper part of the dust layer positive small ions that, after equilibrium is reached, are no longer scavenged by the dust particles but accumulate due to the reduced air conductivity. This accumulation of charge creates a net positive SCD, commonly represented by ρ(z), with a given dependence on the altitude z defining its vertical profile. If this vertical profile is assumed to be defined in terms of the Heaviside function H(z),

ρ(z) = ρ0 [H(z - h) - H(z - (h + t))],

where h is the height at which the dust layer is located, t is the thickness of the charge layer and ρ0 is the space charge density. Previous observations of desert dust in Southern Portugal set h ~ 3 km (). The effect on the PG is calculated straightforwardly by integration of Gauss' law. Denoting the positive PG near the surface as F, Gauss' law relates the vertical variation of F to ρ(z) through the relation

dF/dz = -ρ(z)/ε0,

where ε0 is the permittivity of vacuum. This equation can be integrated easily to estimate the effect of the space charge density on F. Taking into account that the measurements are performed at the ground, the increase of the PG due to the space charge is defined by ΔF = F(h) - F(h + t), resulting in

ΔF = ρ0 t / ε0.

Using this equation, the observed ΔF ~ 23 V/m (previous section) and charge-layer thicknesses ranging over t ~ 10-100 m, the values of the space charge density are estimated to lie in the range ρ0 ~ 20-2 pC m-3. 
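The final estimate can be checked numerically: inverting the uniform-slab relation ΔF = ρ0 t / ε0 for ρ0, with the observed ΔF ≈ 23 V/m, reproduces the quoted 20-2 pC m-3 range for t = 10-100 m:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def space_charge_density(delta_f_vm, thickness_m):
    """Invert dF = rho0 * t / eps0 (uniform charged slab aloft):
    rho0 = eps0 * dF / t, returned in C/m^3."""
    return EPS0 * delta_f_vm / thickness_m

# Observed dF ~ 23 V/m; layer thickness t = 10 m and t = 100 m
rho_thin = space_charge_density(23.0, 10.0)    # about 20 pC/m^3
rho_thick = space_charge_density(23.0, 100.0)  # about 2 pC/m^3
```

Note that ρ0 scales inversely with the assumed layer thickness, which is why the estimate is quoted as a range rather than a single value.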
This is in reasonable agreement with the values observed experimentally (). Nevertheless, it should be mentioned that this is a simplified model and for that reason has several limitations. Two fundamental simplifications have been made: one is the assumption of a uniform space charge distribution, and the other is that this space charge occupies an infinite plane in the x and y coordinates. These simplifications tend to overestimate the real effect of the desert dust. Future work may consider a more realistic model assuming a disc of space charge with a non-uniform distribution. Conclusions Long-range electrification of a desert dust event, traveling at an altitude of ~3 km, is observed for the first time with surface atmospheric electrical Potential Gradient measurements. A triangular array of field-mills, covering an area of ~1000 km2, was used with the aim of removing the local perturbations imposed on the Potential Gradient, mainly by local sources of pollution or space charges. The observations point to a long-range electrification that might be caused by the action of the air-Earth current on the low-conductivity region occupied by the dust layer, forming a space charge layer inside the dust layer. Based on that simple mechanism, a formulation was derived. Considering charge-layer thicknesses in the range of ~10-100 m, the values of the space charge density are estimated to be ρ0 ~ 20-2 pC m-3. These values are comparable with values found in the literature. A final remark concerns the fact that the dust storm electrical signal is a weak one, as a consequence of the weakness of the dust storm itself. The present manuscript can be a motivation to perform longer campaigns that may be able to capture stronger dust storms and permit a better correlation between the dust-storm-induced electrical perturbations measured at all the sensors of the array.
Acquisition of the body image in evolution - Role of actuators in realizing intelligent behavior - In conventional studies, differences between actuators are considered unimportant for realizing controllers for robots, so conventional robots usually employ motors as actuators. Animals, on the other hand, have muscles, and it has recently been reported that physical properties of muscle such as viscosity and elasticity play an important role in controlling their bodies. In this paper, we consider that learning time (the time required to adapt to the environment) works as a selection pressure, and that muscle-like actuators are acquired in evolution. To examine this hypothesis, we employ a two-link manipulator and evolve it in simulation. The task of the manipulator is to catch a ball, and the manipulator learns the timing of the catch. Fitness of the manipulator is calculated from learning time. Simulations have been conducted, and as a result, manipulators that have actuators with adequate viscosity and elasticity have been obtained. By analyzing the result, we have found that the body image of the manipulator incorporates the viscosity and elasticity, and that this body image reduces learning time.
Towards a conceptual framework of place attractiveness: a migration perspective Abstract. The attractiveness of places is currently gaining a high policy salience in policymakers' efforts to draw mobile capital. Yet, while there are a growing number of empirical studies considering the migration of people and the attractiveness of places, there is an acute lack of conceptual understanding of the phenomena, which hampers discussions between researchers and policymakers. This article suggests a conceptual framework whereby place attractiveness can be better understood from a migration perspective. The empirical material for this article mainly draws upon interviews that were carried out with migrants who seem to have considered at least one alternative in their search for a suitable destination. The conceptual framework, which comprises the main result of the article, illustrates that needs, demands and preferences are central and empirically identifiable components for properly appreciating place attractiveness in a migration context. It is argued that the attractiveness of places increases with the successive fulfilment of these factors; but on the other hand, the more factors a migrant seeks to fulfil in his or her destination selection, the fewer the choice possibilities. The article moreover shows how a lifecourse perspective needs to be integrated in such analyses since not only do migrants' needs, demands and preferences depend upon their current lifecourse phase situation, their resources and constraints are also likely to correlate with the lifecourse. The conceptual framework can be used to ease understanding between researchers and policymakers in issues related to place attractiveness and the migration of relatively affluent migrants with choice opportunities.
Mislocalization of DNAH5 and DNAH9 in respiratory cells from patients with primary ciliary dyskinesia. RATIONALE Primary ciliary dyskinesia (PCD) is a genetically heterogeneous disorder characterized by recurrent infections of the airways and situs inversus in half of the affected offspring. The most frequent genetic defects comprise recessive mutations of DNAH5 and DNAI1, which encode outer dynein arm (ODA) components. Diagnosis of PCD usually relies on electron microscopy, which is technically demanding and sometimes difficult to interpret. METHODS Using specific antibodies, we determined the subcellular localization of the ODA heavy chains DNAH5 and DNAH9 in human respiratory epithelial and sperm cells of patients with PCD and control subjects by high-resolution immunofluorescence imaging. We also assessed cilia and sperm tail function by high-speed video microscopy. RESULTS In normal ciliated airway epithelium, DNAH5 and DNAH9 show a specific regional distribution along the ciliary axoneme, indicating the existence of at least two distinct ODA types. DNAH5 was completely or only distally absent from the respiratory ciliary axoneme in patients with PCD with DNAH5- (n = 3) or DNAI1- (n = 1) mutations, respectively, and instead accumulated at the microtubule-organizing centers. In contrast to respiratory cilia, sperm tails from a patient with DNAH5 mutations had normal ODA heavy chain distribution, suggesting different modes of ODA generation in these cell types. Blinded investigation of a large cohort of patients with PCD and control subjects identified DNAH5 mislocalization in all patients diagnosed with ODA defects by electron microscopy (n = 16). Cilia with complete axonemal DNAH5 deficiency were immotile, whereas cilia with distal DNAH5 deficiency showed residual motility. CONCLUSIONS Immunofluorescence staining can detect ODA defects, which will possibly aid PCD diagnosis.
On the presence of Alsodes coppingeri (Anura, Alsodidae) in Argentina, with comments on other southern Alsodes ABSTRACT The occurrence of Alsodes coppingeri is confirmed in Argentina for the first time, from Santa Cruz Province, close to Lago del Desierto. Specimens of this species were identified according to external morphology and DNA sequences. These new records in Argentina are at the same latitude as the type locality (Puerto Río Frío, Chile), about 100 km eastwards in a straight line, but on the opposite side of the Andes mountain range and the Southern Continental Ice Fields. Five localities in Chile (Caleta Tortel, Canal Michel, Laguna Caiquenes, Puerto Yungay, and Villa O'Higgins) lie around 100 km north of our records, in a lower region of the Andes located between the Northern and Southern Continental Ice Fields. This region, with discontinuous permanent ice-sheet cover, may have acted as a corridor for amphibian species that are currently distributed on both sides of the Andes range. Introduction The temperate forests of the southern cone of South America present high levels of anuran endemism, with moderate to low species richness, as can be seen in the Patagonia region. Recent studies discussed the diversity and phylogenetic relationships of endemic Patagonian anurans (e.g., ). These studies, together with additional updates on their distribution ranges, provide essential information for the implementation of conservation programs. However, there are still significant gaps regarding species distribution and taxonomy. One of these problematic and poorly known taxa is Alsodes coppingeri (Günther 1881). This species was described in the genus Cacotus from Puerto Río Frío, Wellington Island, Chile. Soon afterwards it was named Borborocoetes coppingeri and later transferred to the genus Eupsophus by Codoceo and Capurro without justification. Curiously, these last authors independently cited Puerto Montt (Chile) as the type locality of the species. 
Some morphological features were described for Eupsophus coppingeri, including observations on the holotype and specimens from different localities in Chile and Argentina. Since Lynch, E. coppingeri was considered a junior synonym of Alsodes monticola Bell 1843 (the only known species of Alsodes at that time), and a reassessment of its identity was overlooked for some decades. Later on, new information about the genus Alsodes at high latitudes was published. For instance, Díaz & Núñez reported some morphological larval and adult features of Alsodes verrucosus from Bahía White, Wellington Island, and Formas et al. described Alsodes kaweshkari from Puerto Edén, on the same island. Formas et al. analyzed the morphology, cytogenetics and DNA sequences of adults and larvae from the type locality of Cacotus coppingeri Günther 1881, and resurrected the species under the combination Alsodes coppingeri. In addition, they considered that the descriptions of A. coppingeri provided by Cei and Grandison corresponded to different species, and consequently restricted the distribution of A. coppingeri to its type locality. One year later, specimens of A. australis were reported from Wellington Island, the fourth species of Alsodes cited from this place. More recently, the validity of Alsodes coppingeri was supported in a molecular phylogenetic analysis, and four additional populations besides that of the type locality were recognized for this species, some of them previously assigned to Alsodes australis. These new populations and additional records extended the distribution of A. coppingeri somewhat northwards, from Magallanes to the Aysén Region in Chile. In Argentina, specimens of Alsodes from the temperate forest of Santa Cruz Province were cited as Alsodes aff. coppingeri, without confirmation of their specific identity or precise locations of occurrence. 
The present contribution confirms the presence of Alsodes coppingeri for the first time in Argentina, on the eastern slopes of the Andes range. We also discuss the identity and distribution of other Alsodes at these latitudes, based on external morphology and molecular characters. Materials and methods We carried out field work in search of Alsodes in the Lago del Desierto area, Santa Cruz Province, Argentina (49°04'S, 72°53'W), from 1996 to 2019. All specimens were deposited in the herpetological collection of the Instituto de Diversidad y Evolución Austral (CNP.A), Chubut Province, Argentina. Collection and handling of specimens followed the standard practices suggested by Heyer et al., under permits (year 1997 and No 491755/16) provided by the Dirección de Fauna Silvestre of Santa Cruz Province, Argentina. The sequences were included in a phylogenetic analysis, selecting terminals in accordance with the relationships proposed in the extensive analysis published by Blotto et al.. Already available DNA sequences of Alsodes 12S-tRNA-Val-16S, Cyt b, and COI were also used. The fragments of 12S-tRNA-Val-16S, Cyt b, and COI were aligned with ClustalW, executed in BioEdit under default parameters, and later concatenated using SequenceMatrix 1.8. We performed a maximum parsimony analysis in the TNT software, choosing the "implicit enumeration" option. A preliminary analysis showed that the monophyly of Alsodes coppingeri is supported by two mutational steps in the COI fragment. For this reason, a second analysis was run including only the samples of A. coppingeri for which COI was available; the excluded samples were used only for comparisons of genetic distances (Appendix A). Support values were estimated from runs of 1,000 replicates under parsimony jackknifing with default TNT settings and a removal probability of 0.36. Uncorrected p-distances were obtained with the software MEGA 7. 
where the so-called Salto del Anillo waterfall is formed (49°07'07"S, 72°55'29"W; 451 m a.s.l.). All these findings occurred in streams located within the humid Nothofagus beech forests, on the eastern slopes of the Crestón and Vespignani ranges and Campo Río Toro, between 400 and 750 m a.s.l. (Figure 1; Table 1). Around Lago del Desierto, the forest is composed of N. pumilio (lenga) and some patches of N. betuloides (coihue de Magallanes), with an understory composed of Embothrium coccineum (notro), Chiliotrichium rosmarinifolium (romerillo), Gaultheria mucronata (chaura), Empetrum rubrum (murtilla), and Myoschilos oblongum (codocoipo), and a herbaceous layer rich in pteridophytes and bryophytes, which can spread over fallen and standing trunks. In poorly drained sites, hydrophytic herbaceous communities predominated and the tree Nothofagus antarctica (ñire) was present, often as bushes. The frogs were found under rocks and trunks covered with bryophytic vegetation on stream banks, while tadpoles were collected in low-flow sections of small to medium-sized streams (Figure 2). Results Adult specimens presented the external characters of A. coppingeri given by Formas et al.: snout profile truncate, legs with uniform coloration, almost unwebbed feet with webbing reduced to toes 3, 4, and 5; but all presented fringes on the toes and also a tarsal fringe. Two of them had uniform brownish coloration, and the other two were uniformly grayish. The male CNP.A 4390 (SVL 58.69 mm) had well-developed fringes on the toes and marked secondary sexual characters, such as hypertrophy of the forearms, spiny pectoral patches and nuptial pads on fingers 1 and 2, scattered spines on the inner surface of fingers 3 and 4, and keratinous surfaces on the outer bilobated metacarpal tubercle, on the dorsal and ventral surfaces of hands and feet, the ventral surface of the jaws, the dorsal surface of the head, and the flanks (Figure 3). 
The size, the well-developed fringes on the toes, and the secondary sexual characters of this specimen resemble the description of A. kaweshkari (see drawings in Formas et al.), but the presence of a deep notch at the anterior edge of the outer metacarpal tubercle (present in three of the four specimens analyzed) resembles the A. verrucosus reported from Wellington Island. The size of the four male specimens was larger (43.22-58.69 mm) than that given by Formas et al. (43.2-44.0 mm) for A. coppingeri. In addition, the two metamorphs we found have more than half-webbed feet, a character that shows wide plasticity. The tadpole morphology agrees with the description of A. coppingeri provided by Formas et al.: exotrophic larvae, with dorsolateral eyes, an emarginated oral disc with a single row of marginal papillae with a wide rostral gap, a single row of mental intramarginal papillae, tooth row formula 2/3, a sinistral spiracle with a protruding distal end, a wide and dextral vent tube, low and straight fins with sub-parallel margins, and a rounded tail tip. For the phylogenetic analysis, we obtained a molecular matrix of 4085 DNA base pairs (bp): 2424 bp for 12S-tRNA-Val-16S, 658 bp for COI, and 1003 bp for Cyt b (most samples have a fragment smaller than 400 bp; see Appendix A). The maximum parsimony analysis under "implicit enumeration" found a single shortest tree of 767 steps. The sequences of the two specimens of Alsodes provided herein were recovered in a clade together with the other sequences of A. coppingeri, supported by two mutational transformations in the COI fragment, as the sister taxon of A. verrucosus. We included a small fragment (304 bp) of Cyt b belonging to the holotype of A. kaweshkari, but this species was recovered nested in another clade, in a close relationship with A. gargola. In Figure 4 we show the maximum parsimony tree with jackknife supports, in which all relationships are consistent with those previously obtained by Blotto et al.. 
The uncorrected p-distances among samples of A. coppingeri were extremely low: 0.0-0.04% for 2355 bp of 12S-tRNA-Val-16S (N = 7); 0.0% for 308 bp of Cyt b (N = 10); and 0.0% for 658 bp of COI (N = 7). Likewise, the uncorrected p-distances between A. coppingeri and A. verrucosus were 0.13-0.21% for 12S-tRNA-Val-16S, 0.65% for Cyt b, and 0.61% for COI. Appendix A lists the GenBank accession numbers of all samples used in the comparisons, some of them not included in the phylogenetic analysis.

Discussion
DNA sequences showed that the specimens found around Lago del Desierto, including the aforementioned Alsodes aff. coppingeri from Santa Cruz Province, belong to the species A. coppingeri, confirming its presence in Argentina for the first time (Table 1). All samples of this species used in the phylogenetic analysis were recovered as a well-supported clade, sister to A. verrucosus. The node A. coppingeri + A. verrucosus was weakly supported, but when other DNA markers were used for both species, a well-supported relationship was obtained. Alsodes verrucosus is a poorly defined species, with a non-detailed description, no assigned type specimens, and a vaguely defined type locality that corresponds to a vast area, the Andes Range of Cautín Province in Chile. The specimens of A. verrucosus we used were sampled from Puyehue (Osorno Province), about 200 km south of Cautín Province. From this last locality, the karyotype and tadpoles have already been described. In Chile, this species was also recently recorded from Cayutué, Llanquihue Province, and even from Wellington Island. In Argentina, it was cited from Río Negro Province, a population never detected again, and also from Neuquén Province, from where the specimens seem to correspond to A. neuquensis. Caution must be taken regarding comparisons with A. verrucosus unless they include specimens from Cautín. The samples of A. coppingeri (Caleta Tortel) and A.
verrucosus (Puyehue) available to us are about 800 km apart. However, their low genetic divergence suggests that future studies are needed to establish species boundaries, including intermediate populations previously assigned to either of these two species, as well as specimens previously assigned to A. verrucosus from Wellington Island. As mentioned earlier, A. kaweshkari was described from these high austral latitudes. The Cyt b sequences of A. kaweshkari we obtained did not provide differentiation from A. gargola of Futaleufú, Chile, as considered in Blotto et al. This unexpected result deserves further consideration. The other Alsodes species mentioned for Wellington Island is A. australis, but we could not study the three specimens attributed to that species that have been collected, to make direct comparisons with either A. coppingeri or A. verrucosus. Asencio et al. considered only A. kaweshkari to be present on the island. It is worth mentioning that those authors referred to Wellington Island as the type locality of A. australis, which is in fact at Puente Traihuanca, in the Aysén Region of Chile, more than 300 km northwards from this site. Due to these inaccuracies, the taxonomic identity of A. australis from Wellington Island should be re-evaluated. The adult specimens collected for this work showed remarkable morphological variation. Three of them (SVL = 43.22-48.08 mm) slightly exceed the known size range of A. coppingeri but share other diagnostic characters of the species, such as a snout truncated in lateral view, uniform color on the hindlimbs (without bars), reduced toe fringes, and webbed feet.

Table 1. Geographical coordinates of all known populations of Alsodes coppingeri in Chile and the new records from Argentina. Type locality in bold. In Chile the species was also reported from Puyuhuapi, Aysén Province, and Península Muñoz Gamero, Última Esperanza Province, but both localities require new studies.
However, the characteristics of one adult male (CNP.A 4390; SVL 58.69 mm) matched the diagnostic characters of A. kaweshkari: SVL 56.5-62.2 mm, toes well fringed, webbing of the feet reduced but present between all toes, granular dorsolateral surfaces, granular skin around the vent and on the posterior thighs, overall grey coloration, and notable development of secondary sexual characters. In spite of this, molecular data confirmed this last specimen to be A. coppingeri. The size range of the adults found by us (43.22-58.69 mm) also overlaps with the only adult of A. verrucosus (43.7 mm) from Wellington Island. Double outer metacarpal tubercles were reported for this population, a character not found again in other specimens analyzed to date from this island. Remarkably, three specimens from Lago del Desierto have outer metacarpal tubercles with deep anterior notches (bilobed); without molecular data, these specimens could have been assigned to A. verrucosus. Regarding the development of webbing, the two metamorphs from Lago del Desierto have mid- or fully webbed feet, suggesting great intraspecific variation of this character, as observed in other Alsodes. Grandison described webbed feet for A. coppingeri, but according to Formas et al., Grandison's diagnosis and the morphology provided by Cei included specimens from a wide geographic range, many of them likely belonging to different Alsodes species. The phenotypic plasticity found in the few specimens of A. coppingeri known from Argentina overlaps with almost all the characters that have been used to distinguish among A. coppingeri, A. kaweshkari, and A. verrucosus. A thorough taxonomic revision of these taxa is pending, and it should include the specimens of A. australis reported from Wellington Island by Asencio et al. Regarding DNA data, all available information on Alsodes at latitudes above 47° S (N = 10) appears to belong to a single species, except for a Cyt b sequence of A. kaweshkari (see discussion in ).
Nonetheless, cytogenetic characters may allow distinguishing among A. coppingeri, A. kaweshkari, and A. verrucosus. All three species present 2n = 26 with bi-armed chromosomes (FN = 52), but the chromosomal configuration differs between the A. coppingeri-A. verrucosus pair and A. kaweshkari. Alsodes coppingeri and A. verrucosus from Wellington Island share four large, two intermediate, and seven small chromosome pairs, and show the nucleolus organizer regions (NORs) located within the secondary constrictions of the short arm of pair 4, but they differ in the morphology of pairs 2, 7, 8, 9, 12, and 13 (but see the A. verrucosus chromosome configuration from Puyehue). On the other hand, A. kaweshkari has five large, one intermediate, and seven small chromosome pairs, with secondary constrictions on pairs 1, 4, and 6. Like those of other Alsodes species, the larval specimens we studied were aquatic, exotrophic tadpoles of the lotic-benthic ecomorphological type, commonly associated with streams [e.g., 24, 48]. The tadpoles found in summer ranged between 37.41 and 58.96 mm in total length and between Gosner stages 25 and 39. The 18 tadpoles collected on March 22 (beginning of autumn) were at Gosner stages 25-26, similar to the data presented from Wellington Island by Formas et al. These observations allow us to infer at least one overwintering episode during larval development in the streams and permanent oligotrophic ponds of the study sites, as has been proposed for other Alsodes, rather than an acceleration of metamorphosis before the arrival of winter. This is supported by data from Laguna Caiquenes, Chile, where tadpoles of A. coppingeri were found throughout the year, with metamorphosing individuals in January and March. The area of Lago del Desierto represents the southern limit of the genus Alsodes in Argentina. On Wellington Island (Chile), A. coppingeri, A. verrucosus, and A. kaweshkari can be found in sympatry.
Alsodes coppingeri was mentioned for Península Muñoz Gamero (Chile), found in a lowland area outside the forest by the "Royal Society Expedition to Southern Chile", which would extend the distribution of the genus in Chile about 400 km southwards. The northernmost known locality for A. coppingeri may be Puyuhuapi, Aysén Province, but the taxonomic identity of this population and of the specimens from Península Muñoz Gamero requires revision. All known records of A. coppingeri correspond to the altitudinal range of the temperate-cold forest. At these latitudes on the western side of the Andes, the weather is cold throughout the year, with prevailing winds from the west and annual rainfall usually exceeding 4000 mm. On the eastern side of the Andes, the temperate forest extends over the slopes below 1000 m a.s.l., a narrow area bounded to the east by prevailing aridity. In Argentina, A. coppingeri lives in sympatry with Chaltenobatrachus grandisonae and Nannophryne variegata, while in Chile it dwells with A. kaweshkari, A. verrucosus, Batrachyla antartandica, B. nibaldoi, B. taeniata, C. grandisonae, Eupsophus calcaratus, and N. variegata. Lago del Desierto and Wellington Island, on either side of the Andes mountain range, are at almost the same latitude, about 100 km apart in a straight line, with the Southern Continental Ice Field interposed between them. However, the populations of Aysén (Chile) are located close to a gap without ice cover that separates the northern and southern portions of the Continental Ice Field (Figure 1). It is possible that this area acts as a corridor for different amphibian species on both sides of the Andes. The new localities of A. coppingeri in Argentina are included in the Lago del Desierto Provincial Reserve, created in 2005, near the northern limit of Los Glaciares National Park, where the species could also be present. In Chile, the species is included in the Laguna Caiquenes Natural Reserve and the Bernardo O'Higgins National Park.
However, the introduction of exotic salmonids in lakes and streams of both Chile and Argentina is a matter of concern, as they may pose high predation pressure on tadpoles. The known geographic range of A. coppingeri falls within the Subpolar Nothofagus Ecoregion, categorized as a Vulnerable and Bioregionally Outstanding ecoregion. Currently, the species is classified as Data Deficient given continuing uncertainties about its actual extent of occurrence, population status, and ecological requirements; this categorization indicates the need for further field data, because some potential extinction risk factors and the extent of geographic occurrence may have been overlooked.
Physical Exercise and Selective Autophagy: Benefit and Risk on Cardiovascular Health

Physical exercise promotes cardiorespiratory fitness and is considered, along with lifestyle modification, the mainstay of non-pharmacological therapy for various chronic diseases, in particular cardiovascular diseases. Physical exercise may positively affect various cardiovascular risk factors, including body weight, blood pressure, insulin sensitivity, lipid and glucose metabolism, heart function, endothelial function, and body fat composition. With the ever-rising prevalence of obesity and other metabolic diseases, as well as of sedentary lifestyles, regular exercise of moderate intensity has been indicated to benefit cardiovascular health and reduce overall disease mortality. Exercise offers a wide cadre of favorable responses in the cardiovascular system, such as improved cardiovascular dynamics, reduced prevalence of coronary heart disease and cardiomyopathies, enhanced cardiac reserve capacity, and improved autonomic regulation. Ample clinical and experimental evidence has indicated an emerging role for autophagy, a conserved catabolic process that degrades and recycles cellular organelles and nutrients, in the cardiovascular benefits offered by exercise training. Regular physical exercise, as a unique form of physiological stress, is capable of triggering adaptation, while autophagy, in particular selective autophagy, seems to be permissive to such cardiovascular adaptation. In this mini-review, we summarize the role of autophagy, in particular mitochondrial selective autophagy, namely mitophagy, in the benefit versus risk of physical exercise on cardiovascular function.

Introduction
Regular physical exercise is part of a healthy lifestyle, with multiple cross-sectional studies consolidating the reduced overall risk of cardiovascular diseases and cardiac events associated with habitual or leisure physical exercise.
Ample evidence has indicated a much better survival rate following a cardiovascular event in those who are physically active compared with more sedentary individuals, and the beneficial impact of physical exercise on heart failure has also been described [1]. Regular physical exercise is now becoming a non-pharmacological remedy to lower cardiovascular morbidity and mortality, courtesy of the exercise-induced cardiovascular benefit. Such a maneuver drastically improves overall cardiovascular survival despite the poor success of current pharmaceutical interventions.

Acute Alterations in Cardiac Function during Exercise
With exercise, hearts experience physiological adaptations, including increased cardiac output (CO) and peripheral perfusion, to cope with dramatically increased musculoskeletal and pulmonary requirements. The higher CO results from a concerted effort of increased heart rate (HR), stroke volume (SV), and/or cardiac contractile capacity. In addition, exercise may stimulate autonomic function to promote cardiac function. Cardiac chronotropic and inotropic responses to the sympathetic system (β-adrenergic response) may be facilitated by exercise, along with stimulation of intrinsic myogenic tone. Ample evidence has depicted a rather minor role for the parasympathetic system in the tonic control of myocardial function, with norepinephrine from sympathetic nerve fibers being the predominant myocardial regulator in response to exercise. Norepinephrine binds the β1 receptor to turn on G protein and adenylate cyclase. In consequence, cAMP accumulates in the cytosolic space, leading to elevated intracellular Ca2+ levels and higher cardiac contractility, which may also be arrhythmogenic and harmful if excessive.

Metabolic Flexibility
Metabolic flexibility refers to the ability of an organism to adapt to changes in metabolic demand. Physical exercise significantly increases energy expenditure and demand.
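As a quick arithmetic aside, the cardiac output relationship described above is purely multiplicative: CO = HR × SV. A trivial sketch, using hypothetical but physiologically typical resting and exercise values (the numbers are illustrative, not from the studies cited here):

```python
def cardiac_output(hr_bpm, sv_ml):
    """Cardiac output in L/min from heart rate (beats/min)
    and stroke volume (mL/beat)."""
    return hr_bpm * sv_ml / 1000.0

# Hypothetical values for illustration only:
rest = cardiac_output(hr_bpm=70, sv_ml=70)        # 4.9 L/min at rest
exercise = cardiac_output(hr_bpm=160, sv_ml=110)  # 17.6 L/min during exercise
print(rest, exercise)
```

The roughly 3-4-fold rise during exercise reflects the concerted increase in both HR and SV noted in the text.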
Previous findings have identified a link between exercise and improved fatty acid and/or glucose oxidation. During exercise, changes in mechanical stretch, catecholamines, and circulating substrates (such as free fatty acids) impact cardiac metabolism. Glucose catabolism is transiently suppressed during exercise and is then elevated above the untrained state after recovery. In this regard, these metabolic changes are not only transient responses to physical activity but also adaptations that prepare the organism for the next bout of activity. This is possibly achieved through autophagy and other cellular catabolic processes that elevate metabolic capacity. Exercise also appears to improve insulin signaling. Exercise is known to promote insulin sensitivity and benefit glucose and energy homeostasis, given that insulin signaling is vital for GLUT-4 and hemodynamic function. Preserved glucose uptake has been documented in insulin-resistant muscle following exercise. These events would promote glucose utilization and energy production in the heart. Mounting evidence has suggested that exercise may improve cardiovascular function through indirect actions on lipid and insulin profiles.
In addition, non-targeted GC-MS metabolomics analysis of rat hearts revealed that endurance training offered cardioprotection against ischemia-reperfusion injury, possibly through modulating protein quality control, CoA biosynthesis, and ammonia recycling. Taken together, greater emphasis should be placed on the metabolic adaptations, and on the mechanisms underlying metabolic flexibility such as autophagy, during and after exercise.

Chronic Adaptations in Heart and Vasculature
Cardiac hypertrophy is thought to be part of the adaptive remodeling process. Heart mass, especially that of the ventricular wall (eccentric hypertrophy), rises physiologically as a result of sustained changes in metabolic and remodeling pathways in the heart. Unlike the hypertrophy observed in pathological conditions such as hypertension, this cardiac hypertrophy is characterized by a mild increase in ventricular volume accompanied by preserved or increased myocardial function due to cardiomyocyte growth in size. In addition, this physiological hypertrophy displays none of the features of adverse cardiac remodeling, such as cardiac fibrosis and necrosis. More recent studies have demonstrated distinct signaling molecules mediating cardiac hypertrophy in physiological and pathological states, while how exercise exerts its disparate induction of hypertrophy remains unclear. Induction of the IGF-1/IRS-PI3K-Akt pathway, which regulates several transcription factors, is deemed to mediate physiological hypertrophy. In addition, the intermittent hemodynamic stimuli induced by exercise also enhance vascular structure (i.e., increased angiogenesis) and function, which contribute to increased cardiac output (CO) and lessened atherosclerosis. Exercise training is probably the most efficient way to improve endothelial function. Based on a systematic review and meta-analysis, a reduction in blood pressure was noted in patients with stroke or transient ischemic attack following exercise training.
Complex factors, such as shear stress and alterations in plasma profiles, precipitate the activation or restoration of endothelial pathways during exercise. Exercise-induced circulating catecholamines could act on β3-adrenergic receptors (β3AR) to increase endothelial nitric oxide synthase (eNOS), which augments the bioavailability of nitric oxide (NO), an essential molecule responsible for vasodilation and anti-atherosclerotic effects. More recent evidence suggested that rhythmic handgrip exercise promoted increased eNOS phosphorylation, NO generation, and O2− production, along with improved autophagy markers, including Beclin1, microtubule-associated proteins 1A/1B light chain 3B (LC3B), autophagy-related gene 3 (Atg3), and lysosomal-associated membrane protein 2A (LAMP2), as well as decreased levels of p62, in endothelial cells from the human radial artery. These findings denote a close tie between the eNOS/NO signaling cascade and autophagy in the exercise-induced regulation of endothelial function.

Cellular and Molecular Alterations Induced by Exercise
At the cellular level, findings have indicated that physiological hypertrophy is accompanied by the induction of several mechanisms that promote cell survival, including protein quality control, cell growth and protein synthesis, antioxidant generation, the autophagy-lysosomal system, and mitochondrial adaptation. In a recent randomized controlled trial, endurance training and interval training (but not resistance training) were found to promote telomerase activity and telomere length, essential markers of cellular senescence, regenerative capacity, and healthy aging. Moreover, physical exercise appears to exert a favorable effect on aging-related cardiometabolic stress through mediating autophagy. Among the mechanisms mentioned above, emerging findings have consolidated a critical role for mitochondria in the exercise-offered cardiovascular benefit.
Mitochondrial remodeling is a vital determinant of exercise-dependent adaptations. Metabolic changes induced by exercise may influence mitochondrial function, dynamics, and turnover, leading to a robust mitochondrial network and enhanced metabolic flexibility. It has been shown that the transcription factor EB (TFEB) translocates to myonuclei during exercise and regulates mitochondrial biogenesis and glucose uptake, thereby acting as a major mediator of metabolic flexibility. During exercise, there is a significant increase in mitochondrial biogenesis. A catabolic process through mitophagy is required to confer materials for synthesis and to remove dysfunctional organelles that might otherwise result in cell death. Thus, it is probable that the cardioprotective effects of exercise are strongly associated with mitophagy. To further discern the upstream pathways in exercise-induced mitochondrial biogenesis and mitophagy, a number of studies have been performed, which have greatly enriched our knowledge of the impact of exercise on mitochondrial integrity [15,27]. For example, it was demonstrated that exercise-induced phosphorylation of the important energy sensor AMPK (protein kinase AMP-activated catalytic subunit alpha 1) and AMPK-dependent ULK1 (unc-51 like autophagy activating kinase 1) phosphorylation are required to target lysosomes to mitochondria. Previous studies have recognized a rather pivotal role for the transcriptional coactivator peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α) in mediating exercise-induced responses in mitochondria. PGC-1α is capable of interacting with several nuclear transcription factors, such as peroxisome proliferator-activated receptors (PPARs) and estrogen-related receptors (ERRs), to increase mitochondrial biogenesis and improve mitochondrial energy metabolism.
Exercise restores mitophagy in high-fat high-fructose-treated liver in a PGC-1α-dependent manner, while deletion of PGC-1α compromises the flourishing of mitochondria following exercise. However, Kang and Ji established an overexpression model of PGC-1α via in vivo transfection and found that PGC-1α overexpression drastically suppressed the levels of FoxO1/3 and mitophagy in immobilized-remobilized muscles. Furthermore, the IGF-1/PI3K/Akt cascade is implicated in chronic cardiac adaptations following exercise through regulating diverse cellular functions, such as cell growth, glucose metabolism, and mitochondrial turnover. Akt inhibits the transcription factor C/EBPβ and thereby frees certain serum response factors (SRF) to bind target gene promoters, which orchestrates the maintenance of a healthy mitochondrial network and contributes to cardiac hypertrophy. Collectively, these studies have delineated the general mechanisms underlying exercise-induced mitophagy (shown in Figure 2), while more questions remain to be answered.

Risk of Exercise for Cardiovascular Function
Regular exercise provides benefit to cardiovascular function, while much uncertainty still exists regarding the impact of strenuous exercise. To date, most studies have assumed that whether exercise is salutary largely depends on its frequency, intensity, and duration. High levels of physical exercise well beyond the recommended levels are tied to higher mortality risks in patients with pre-existing cardiovascular diseases. Nevertheless, how much exercise is optimal to exert cardiovascular benefit remains unclear and equally controversial. Recent studies have suggested a U- or J-shaped curve reflecting the association between exercise level and health outcomes. Substantial evidence has shown that moderate levels of exercise are associated with a reduction in cardiovascular risks, while too much exercise may be detrimental and is associated with an increased risk of cardiovascular mortality.
Reports on endurance runners demonstrated that marathoners who completed at least 25 marathons over more than 25 years typically had more severe coronary artery calcification and calcified coronary plaque. A recent survey denoted that individuals who maintain a very high level of physical activity have higher odds of developing coronary artery calcification, especially white American males. Similarly, a large prospective cohort study from Armstrong and colleagues, involving more than 1,000,000 women, suggested that strenuous daily physical activity may impose much higher risks of coronary heart disease. Not surprisingly, special precaution should be taken in weighing the overall benefit versus risk when advising individuals on physical exercise engagement. A number of unfavorable cardiovascular events may occur following intensive or excessive physical exercise. For example, exercise is known to precipitate angina pectoris, myocardial infarction, arrhythmias, and sudden death in individuals with pre-existing coronary artery diseases.

Figure 2. Mechanism and signaling pathways involved in mitochondrial adaptation in the heart following exercise. Acute exercise augments mitophagy depending on the phosphorylation of AMPK (protein kinase AMP-activated catalytic subunit alpha 1) and ULK1 (unc-51 like autophagy activating kinase 1).
AMPK could be activated by the exercise-related increase in the AMP/ATP ratio, sympathetic activation, and other signaling. Mitophagy removes dysfunctional mitochondria and reduces reactive oxygen species (ROS). AMPK also promotes mitochondrial biogenesis through regulating PGC-1α. Regular exercise mainly activates the IGF1-PI3K-Akt pathway, which targets several transcription factors in the nucleus and contributes to cell growth, cellular survival, metabolic homeostasis, and mitochondrial maintenance. Abbreviations: AMPK, AMP-activated kinase; Sirt1, Sirtuin 1; PGC-1α, peroxisome proliferator-activated receptor gamma coactivator 1α; IGF-1, insulin-like growth factor-1; PI3K, phosphoinositide-3 kinase; Akt, serine/threonine-protein kinase; C/EBPβ, CCAAT/enhancer binding protein β; Cited4, Cbp/p300-interacting transactivator with Glu/Asp-rich carboxy-terminal domain 4; SRF, serum response factor.

There are emerging data denoting that sustained intense exercise may lead to adverse electrical and structural remodeling in the heart. Moreover, plasma catecholamine responsiveness may be inappropriately affected by exercise, manifested as chronotropic incompetence and a lower plasma epinephrine response to exercise, probably as a result of abnormal sympathoadrenal and autonomic function. Sustained catecholamine exposure may trigger downregulation of β-adrenergic receptors (desensitization), resulting in a loss of adenylate cyclase responsiveness and cardiac contraction during exercise. The β-adrenergic receptor-adenylate cyclase signaling cascade is essential to the maintenance of myocardial homeostasis. A loss in either the quantity or sensitivity of β-adrenergic receptors would disengage the myocardium from sympathetic innervation (through norepinephrine) during physical exercise. Likewise, modification of the β-adrenergic receptor-linked adenylate cyclase may also decrease adenylate cyclase activity and exercise capacity.
Therefore, decreased (or sometimes unchanged) myocardial contractile function during exercise fails to cope with the cardiopulmonary demand for blood and oxygen under homeostatic conditions. Other than decreased left ventricular contraction, compromised diastolic function has also been noted during exercise. Although a number of mechanisms have been put forward, the loss of myocardial function at rest and during exercise seems to be associated with myocardial alterations, including a myosin isozyme switch (V1 to V3) and phosphorylation of the cardiac inhibitory protein TnI. In contrast, this scenario may not hold true in healthy individuals. High levels of strenuous or vigorous exercise seem to have little effect on overall mortality in healthy individuals, although intensive training may compromise the health benefits associated with regular moderate physical activity. Greater emphasis should be placed on how a well-functioning organism or individual combats the risks of exercise. Mounting efforts have illustrated that exercise, especially intense or prolonged exercise, may cause oxidative stress and subsequent damage in myocytes. Oxidative stress, energy requirements, and mitochondria are closely linked. Therefore, we may propose that mitochondrial quality control is indispensable for the beneficial adaptations induced by exercise. Oxidative stress could activate mitophagy to cope with mitochondrial dysfunction. Earlier studies have demonstrated a strong association between protective mitophagy and exercise, which we elaborate in the next section.

Mitophagy and Exercise
Mitophagy is initiated when damaged mitochondria are labeled for degradation. The major fission protein Drp1 (dynamin-related protein 1) is translocated to the depolarized mitochondrial membrane and segregates the damaged components from the rest of the healthy mitochondria.
Then, PINK1 (PTEN-induced kinase 1) accumulates on compromised mitochondria and recruits the E3 ubiquitin-protein ligase Parkin, which ubiquitinates a set of proteins on the outer mitochondrial membrane (OMM). Certain autophagy receptors, such as NDP52 (CALCOCO2, Ca2+-binding and coiled-coil domain 2) and optineurin, then tether mitochondria to autophagosomes, which subsequently fuse with lysosomes for lysosomal degradation. It is noted that PINK1 can recruit autophagy receptors at a low rate independently of Parkin. In addition to the PINK1/Parkin signaling cascade, several OMM-localized mediators, including NIX (NIP3-like protein X), BNIP3 (BCL2-interacting protein 3), FUNDC1, and cardiolipin, can target mitochondria to the autophagosome by binding to LC3 (microtubule-associated protein 1 light chain 3) on phagophores in response to developmental signals or hypoxia. However, it should be noted that chronic hypoxia may overtly upregulate the levels of housekeeping proteins. Thus, data normalized against housekeeping proteins such as GAPDH, actin, and tubulin should be handled with special caution when heart tissue is exposed to hypoxia. Exercise-induced mitophagy might differ slightly from the conventional pathways. It has been demonstrated that Parkin is indispensable for exercise-induced mitophagy initiation. Exercise stimulates mitophagy flux courtesy of increased recruitment of Parkin to mitochondria, even though Parkin knockout did not impact basal mitophagy. An examination conducted by Drake and colleagues found enhanced mitophagy levels in the absence of discernible PINK1 accumulation in skeletal muscles following exercise, while HeLa cells treated with carbonyl cyanide m-chlorophenyl hydrazone (CCCP) displayed overtly elevated PINK1. The relationship between exercise and mitophagy has been extensively studied, mainly using skeletal muscle or myocytes.
Given the critical role of mitochondria in cardiomyocyte energy production and function, there has been increasing interest in exercise-induced mitophagy in the heart. In this section, we introduce recent studies (from the last 5 years) on how exercise regulates mitophagy. Exercise as a Treatment or Prevention for Diseases: The Role of Mitophagy First, a main line of research has focused on revealing the close tie between exercise and transiently enhanced mitophagy. Beclin1, LC3, and BNIP3 were reported to be remarkably upregulated in rat myocardium during acute exercise and then slowly declined to baseline 48 h later. Likewise, PINK1, Parkin, ubiquitin, p62, and LC3 were overtly elevated in rat skeletal muscles after downhill treadmill running for 90 min, with the upregulation lasting for more than 24 h. It is noteworthy that shear stress has emerged as a modulator of autophagy during exercise. It was reported that 1 h of rhythmic handgrip exercise initiated autophagy, NO generation, and O2− production in humans due to elevated shear stress. In an earlier study, inhibition of autophagy prevented NO production and enhanced ROS formation. Thus, autophagy plays a critical role in NO bioavailability and redox homeostasis in endothelial cells. In addition to acute exercise, Ju and coworkers observed remarkable activation of autophagy flux and mitochondrial dynamics (both fusion and fission) in mice following sustained (8-week) swimming training. Moreover, when mice were treated with colchicine, a blocker of autophagosomal degradation, BNIP3 was increased while exercise-induced mitochondrial biogenesis was greatly diminished, indicating a possible role of mitophagy in mitochondrial content or biogenesis following exercise. To date, studies have recognized a protective role of mitophagy during exercise. Mitophagy flux presumably protects the heart from exercise-induced risk. 
It is possible that mitophagy is stimulated by exercise-related activation of inflammation and accumulation of ROS, while upregulated mitophagy in turn removes ROS and dampens inflammation, thus reducing mitochondrial injury. Figure 1 shows a possible schematic of how exercise exerts cardioprotective effects through modulating mitochondrial homeostasis. Moreover, exercise shows promise as a safe and inexpensive way to treat multiple diseases, including cardiovascular diseases. There is an increasing emphasis on mitophagy in exercise treatment. A short-duration exercise regimen has been recommended for cardiac rehabilitation after stable myocardial infarction, based on the favorable effect of short-duration exercise (15-min swimming training per day, 5 times per week for 8 weeks) on cardiac function in mice. Increased SIRT3 as well as PINK1/Parkin has been suggested to be responsible for this benefit. Moreover, long-term (8-week) exercise coupled with caloric restriction prior to isoproterenol injection may prevent heart failure more efficiently than either therapy alone, possibly through stimulation of autophagy. Although few data are available on the role of mitophagy in resistance exercise, resistance exercise was indicated to attenuate muscle atrophy through elevated mitophagy and mitochondrial biogenesis in rats. It is speculated that autophagy is required for survival during caloric restriction and physical exertion, and is repressed in nutrient-rich conditions. With the development of science and technology, however, human beings are no longer forced to engage in frequent physical activity in modern life. Moreover, there is a rising concern that both sedentary behavior and caloric abundance are major contributors to a range of chronic diseases, including insulin resistance, obesity, diabetes mellitus, cardiovascular diseases, and various forms of cancer, while regular physical exercise helps to prevent these chronic diseases. 
Moreover, metabolic diseases are among the major independent risk factors for cardiovascular diseases. Physical exercise is therefore expected to promote cardiovascular health through both primary and secondary prevention. Accordingly, a number of investigators have sought to determine the salutary effects of exercise concurrent with a low-quality diet. In particular, the contribution of autophagy or mitophagy has drawn close attention recently. Markers of mitophagy, autophagy, and mitochondrial dynamics were assessed in high-fat-diet-fed mice that also engaged in either voluntary physical activity (VPA) or endurance training (ET). Researchers found that both VPA and ET rescued the high-fat-diet-related increase in apoptosis and decrease in autophagy and mitochondrial biogenesis in mouse livers, leading to protection against nonalcoholic steatohepatitis. Notably, only ET restored mitophagy and reduced mPTP opening. Likewise, Rosa and colleagues detected an increase in autophagy markers (LC3-II/I ratio, p62) in mouse livers following 4 weeks of voluntary wheel running in both Western-diet and normal-diet groups, while the Western diet suppressed BNIP3 levels by 30% compared with the normal-diet group. These authors proposed that increased autophagy may protect the liver from excessive lipid accumulation. In addition, Tarpey and colleagues found a remarkable increase in mitophagy in skeletal muscle biopsies from male runners after endurance training. However, they found no difference in mitophagy between fasting conditions and 4 h after high-fat diet intake, indicating that mitophagy may not be the dominant contributor to the exercise-induced metabolic flexibility that protects against high-fat diet intake. One can argue that 4 h of high-fat diet intake is too short to impose any metabolic abnormality. These inconsistent findings should therefore be interpreted with caution, given the complexity of exercise and diet interventions. 
Mitophagy Is Attenuated Due to an Improved Mitochondrial Pool after Sustained Exercise Training Mitophagy triggered by regular exercise promotes the accumulation of healthy mitochondria as well as improved mitochondrial function. Therefore, mitophagy is believed to be maintained at an optimal (perhaps low) level as a result of long-term exercise training. Muscle biopsies obtained from human subjects showed increased LC3I, BNIP3, and Parkin levels 2 h following moderate cycling training. Interestingly, an increased capacity for mitophagy was also observed following an 8-week training program. Chen and coworkers further noted that sustained endurance training drastically attenuated exercise-induced mitophagy due to the overall improvement of mitochondrial quality. Likewise, a study comparing mitophagy between young and aged rat muscles revealed upregulated mitophagy in the aged group, while chronic contractile activity (CCA) limited mitophagy and improved mitochondrial stabilization. In the same vein, another independent study also documented decreased mitophagy after a 5-day CCA protocol; these authors further detected increases in the lysosome biogenesis regulator TFEB and in LAMP1, indicating improved lysosomal degradation capacity. Li and associates found that exhaustive exercise following exercise preconditioning resulted in an unchanged LC3-II/LC3-I ratio. They further determined the levels of autophagy in different phases: exhaustive exercise (EE) reduced the LC3-II/LC3-I ratio, while exercise preconditioning (EP) transiently activated autophagy (especially at 2 h after EP) and attenuated EE-induced myocardial injury, indicating that preserved basal autophagy might underlie the benefit offered by EP. Besides endurance exercise, a study conducted by Estebanez and colleagues showed that 8 weeks of resistance exercise training prevented activation of mitophagy in peripheral blood mononuclear cells from otherwise healthy elderly individuals. 
Ample studies have focused on the long-term effects of exercise on diseases, as summarized nicely in recent reviews. It is assumed that improved mitochondrial quality after exercise confers better cardiac performance and restrains pathological activation of mitophagy in response to acute stresses. Several attempts have been made to clarify whether exercise preconditioning imposes protective effects under acute cardiac stress. It has been shown that late exercise preconditioning protected the heart from exhaustive exercise-induced injuries through increasing Parkin-mediated mitophagy. It was further suggested that exercise preconditioning augmented mitophagy via H2O2/oxidative stress-induced activation of PI3K. Consistent with this view, earlier aerobic exercise complemented by the natural herb Rhodiola sacra was found to protect cardiac and skeletal muscles against exhaustive exercise through enhanced mitophagy. Moreover, it has been demonstrated that exercise preconditioning may also exert protection against doxorubicin-induced cardiotoxicity. Marques and team suggested that endurance exercise training before or during sub-chronic doxorubicin treatment prevented doxorubicin-induced mitophagy, mPTP opening, and apoptosis. However, in contrast to the findings from Marques, Lee and colleagues argued that endurance exercise training prior to doxorubicin treatment turned on protective mitophagy and suppressed NADPH oxidase 2 (NOX2) to protect against doxorubicin-induced cardiotoxicity. Arrhythmia, especially fibrillation, serves as a hallmark of cardiac injury and contributes to high cardiac mortality. Although a tight correlation between mitophagy and ischemic injury has been extensively described, whether mitophagy/autophagy participates in myocardial arrhythmia remains somewhat elusive. Lekli and associates thoroughly examined the occurrence of autophagy as an adaptive response to arrhythmogenesis, which might improve myocardial recovery by offsetting proteotoxic stress. 
These authors suggested that interventions targeting autophagy should be undertaken with precaution, since excessive autophagy may be detrimental. A more recent study showed complex alterations of autophagy-associated proteins (decreased p62 and a gradually reduced LC3B-II/LC3B-I ratio) in ventricular fibrillation. Thus, the association between arrhythmia and autophagy remains unclear, and further studies are warranted. Isoproterenol is an extensively employed non-selective β-adrenergic agonist. It was suggested that, at a small dose of isoproterenol, autophagy may cope with its toxic arrhythmic effect. It is anticipated that non-invasive interventions such as exercise might serve as a countermeasure to arrhythmogenesis. Compromised Mitophagy Response under Certain Pathological Conditions It is possible that aging or certain metabolic diseases, such as obesity and diabetes, compromise the regulation of mitophagy during exercise (shown in Figure 1). A number of studies have examined whether the mitophagy response induced by exercise varies in pathological states. It has been shown that the mitophagy flux stimulated by exercise is attenuated with age, resulting in mitochondrial deficiency during exercise in aging muscles. Lipidated LC3-II, the gold-standard indicator of autophagosome content, was upregulated 48 h following resistance exercise in untrained young but not older men. The unfolded protein response (UPR) is another important adaptive reaction to exercise. Transcriptomic analysis revealed that activation of the UPR was attenuated in older healthy women and men compared with young adults following a single bout of exercise. Furthermore, the coordination between the UPR and the p53/p21 axis of autophagy was less evident in older adults. In another independent study, a significant aging-induced decline in mitochondrial quality control proteins, such as Lon, could be partly rescued by exercise training. 
Likewise, despite an increase in mitochondrial complex II, there was no noticeable change in BNIP3, MUL1, or the LC3-II/I ratio in muscle biopsies of type 2 diabetic patients following 3 months of endurance training. Nonetheless, contradictory findings have been observed in human and rodent studies. One study tested mitophagy in mouse and human skeletal muscles: the results showed an aging-associated decline of PGC-1α and an increase in BNIP3 and LC3-II in mice, which was ameliorated by lifelong exercise training. However, markers of mitophagy and apoptosis were altered only slightly during human aging, while lifelong exercise training upregulated BNIP3. It has also been reported that a bout of unaccustomed resistance exercise for the knee extensors transiently reduced the overall expression of mitochondrial proteins except for PGC-1α, with no apparent change in mitophagy markers (VDAC, PINK1/Parkin) in both young and aged participants. Sex differences exist in cardiovascular function. Likewise, sex differences have also been noted in cardiac responses to exercise in individuals with cardiovascular diseases. Despite a similar exercise capacity, female heart failure patients with preserved ejection fraction (HFpEF) exhibited greater cardiac and extracardiac deficits, including worse biventricular systolic reserve, diastolic reserve, and peripheral O2 extraction. A number of scenarios have been postulated for the sex difference in exercise-induced cardiovascular responses. For example, sex steroid hormones and their receptors exist in mitochondria from skeletal muscles, which may contribute to sex differences in cardiac performance in response to exercise. It was suggested that estrogen receptor binding attenuates the reduction in mitochondrial size and thus inhibits apoptosis. Other mechanisms underlying sex differences in exercise responses may encompass activation of the PI3K/AKT pathway and extracellular signal-regulated kinase 1/2 (ERK 1/2), which are also important regulators of exercise-induced mitophagy. 
Lack of estrogen and disruption of estrogen receptors might explain, in part, the reduced mitochondrial density and muscle mass in postmenopausal women. Nonetheless, whether autophagy directly participates in these sex-related differences in exercise response remains unclear. Moreover, some studies have cast doubt on exercise-induced mitophagy. Unlike previous studies, Schwalm and colleagues provided evidence that mitophagy remained unchanged during and early (1 h) after acute high-intensity (70% VO2peak) endurance exercise in human skeletal muscles, whereas protein and mRNA markers of mitochondrial fission and mitophagy (Drp1, Fis1, BNIP3) were expressed at higher levels in the fed state than in the fasted state. Furthermore, a study including 11 participants examined gene expression in human muscles after exercise and reported that PINK1 and PARK2 mRNA were transiently decreased 3 h after 60-min cycling and returned to baseline 6 h later. These investigators also noticed that PGC-1α was elevated after exercise but gradually decreased (albeit not below the baseline level) 6 h later. In addition, a recent study also showed reduced mitochondrial mass and impaired respiratory function along with exercise-induced mitophagy induction in rat soleus muscles. However, the sample size was relatively small, and the mitochondrial defect may also be attributed to a lack of mitophagolysosomal degradation. Taken together, further research is needed to clarify the transient changes of mitophagy in health and disease during and after exercise, as well as how they impact cellular health. Conclusions Given the ever-growing public concern over cardiometabolic diseases, there is an urgent need to hunt for effective preventive regimens, from both pharmacological and non-pharmacological perspectives. 
Given that physical inactivity is a well-known independent risk factor for all-cause mortality, regular physical exercise may offer profound health benefits in many aspects, including cardiac performance, exercise tolerance, endothelial function, inflammatory response, insulin sensitivity, autonomic regulation, and blood pressure control, along with glucose and lipid metabolism, adiposity, and psychosocial parameters. Considering that exercise may impose both benefit and risk to human health, only modest or moderate exercise (rather than heavy resistance-type exercise) is recommended to achieve a cardiovascular benefit. It is well perceived that regular moderate exercise may serve as an essential measure for the prevention and management of chronic diseases, including obesity, diabetes mellitus, atherosclerosis, and coronary artery disease. Long-term exercise instigates physiological cardiac hypertrophy with preserved pump function. In this regard, a better understanding of the cellular and molecular mechanisms behind cardiac responses to exercise (physiological or pathological) should offer potential novel therapies against various cardiac anomalies. Given the critical role of mitochondria in the maintenance of cardiac homeostasis, mitochondrial quality control, in particular mitophagy, should be vital for cardiac health. In view of all that has been discussed in this review, we propose that endurance exercise training protects the cardiovascular system from acute stress, possibly through maintaining homeostatic mitophagy. However, what we have learned about exercise-induced mitophagy is essentially based upon experimental studies, and mainly on skeletal muscles. There is a current paucity of well-controlled studies describing how exercise impacts cardiovascular function through regulation of mitophagy. 
To unveil the benefits versus risks of physical exercise for cardiovascular function, future studies should examine the effects of various types of exercise on autophagy and selective autophagy in an effort to provide insights into novel therapeutic avenues for the management of cardiovascular diseases. These findings will help us to evaluate the potential of mitophagy as a target for cardioprotection.
Impact of the Hospital to Home Initiative on Readmissions in the VA Health Care System Background: Hospital to Home (H2H) is a national quality improvement initiative sponsored by the Institute for Healthcare Improvement and the American College of Cardiology, with the goal of reducing readmission for patients hospitalized with heart disease. We sought to determine the impact of H2H within the Veterans Affairs (VA) health care system. Methods: Using a controlled interrupted time series, we determined the association of VA hospital enrollment in H2H with the primary outcome of 30-day all-cause readmission following a heart failure hospitalization. VA heart failure providers were surveyed to determine quality improvement projects initiated in response to H2H. Secondary outcomes included initiation of recommended H2H projects, follow-up within 7 days, and total hospital days at 30 days and 1 year. Results: Sixty-five of 104 VA hospitals (66%) enrolled in the national H2H initiative. Hospital characteristics associated with H2H enrollment included provision of tertiary care, academic affiliation, and greater use of home monitoring. There was no significant difference in mean 30-day readmission rates (20.0% ± 5.0% for H2H vs 19.3% ± 5.9% for non-H2H hospitals; P = .48). The mean fraction of patients with a cardiology visit within 7 days was slightly higher for H2H hospitals (3.0% ± 2.4% for H2H vs 2.0% ± 1.9% for non-H2H hospitals; P = .05). Patients discharged from H2H hospitals had fewer mean hospital days during the following year (7.6 ± 2.6 for H2H vs 9.2 ± 3.0 for non-H2H; P = .01) early after launch of H2H, but the effect did not persist. Conclusions: VA hospitals enrolling in H2H had slightly more early follow-up in cardiology clinic but no difference in 30-day readmission rates compared with hospitals not enrolling in H2H.
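The controlled interrupted time series design used above can be sketched as a segmented regression on a monthly outcome series. The sketch below is illustrative only: the data are simulated, and the effect size, variable names, and single-hospital framing are assumptions, not the VA analysis or its data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly 30-day readmission rates (%) for one hospital:
# 24 months before and 24 months after H2H enrollment (simulated data).
months = np.arange(48.0)
post = (months >= 24).astype(float)                  # 1 after enrollment
time_since = np.where(post == 1, months - 24, 0.0)   # months since enrollment

# Simulate a flat ~20% baseline with an assumed 1.5-point level drop at enrollment.
rate = 20.0 - 1.5 * post + rng.normal(0.0, 0.5, 48)

# Segmented regression: rate ~ b0 + b1*month + b2*post + b3*time_since,
# where b2 is the level change and b3 the trend change at the intervention.
X = np.column_stack([np.ones(48), months, post, time_since])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(beta)  # [baseline, pre-trend, level change, trend change]
```

In the controlled version of the design, the same model is extended with terms contrasting enrolled against non-enrolled hospitals, so the level- and trend-change coefficients are estimated relative to the control series.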
The current place of the intragastric balloon in the treatment of obesity: what should clinicians know? Endoscopic bariatric therapy (EBT), including the intragastric balloon (IGB), seems to fill the gap between medical and surgical options for obesity treatment. Currently, there are three IGB systems approved by the FDA: the Orbera™ Intragastric Balloon System, the ReShape Integrated Dual Balloon System, and the Obalon system. Despite the advantages of the IGB, such as anatomy preservation, a potentially lower risk of serious complications, and lower cost than bariatric surgery, the achieved weight loss is smaller and often only temporary. The maximum efficacy of IGB therapy is achieved with a comprehensive weight management program including patient education and lifestyle modification. Careful selection of patients for IGB, frequent control visits after balloon placement, and its timely removal 6 months after insertion are recommended to reduce complications and increase the safety profile of IGB therapy. We discuss the current place of IGBs in the treatment of obesity, with particular focus on their efficacy and safety, recent FDA updates, and published data, in order to facilitate future decisions on implementing EBT for individual patients.
The role of cannabinoids in shaping lifespan neurodevelopment Emerging research highlights the critical role of the endocannabinoid (eCB) system in shaping neural and glial development throughout the life span. Indeed, research shows that the eCB system is present early during gestation and modulates synaptogenesis, neuronal differentiation, myelination, and neuronal migration (). Later in life, the eCB system regulates synaptic functioning, including responses to stress, emotion-related processes such as fear and anxiety, and motivated behavior (). This In Focus issue highlights exciting new research on the role of cannabinoids in shaping neural development from the prenatal period through adulthood, which has relevance for typical and atypical cognitive, behavioral, and emotional functioning. The three articles included in this issue demonstrate associations between genetic variation in eCB signaling and functioning of fear extinction circuitry and network-level functional connectivity, as well as the impact of early developmental insults to the eCB system, particularly prenatal cannabis exposure. The first article, by Baglot et al., examined the role of cannabinoids in shaping neural development from the earliest stage of life: the prenatal period. In particular, using animal models, the authors examined the impact of prenatal cannabis exposure on maternal blood, placenta, and fetal brain. The authors point out that this is an important topic to study because the rates of cannabis use among pregnant people in the United States have more than doubled between 2002 and 2017 (), with as many as one out of five younger pregnant people (e.g., under age 24) reporting use (). At the same time, there have been drastic increases in the potency of cannabis constituents, particularly the psychoactive Δ9-tetrahydrocannabinol (THC), over recent years (). Alarmingly, THC has been shown to undergo cross-placental transfer and may therefore impact offspring neurodevelopment (). 
In fact, a small but growing body of literature demonstrates the detrimental effects of prenatal cannabis exposure on offspring, including low birth weight (), smaller brain volumes, and poorer cognitive, behavioral, and emotional functioning in childhood (). In a comprehensive study, Baglot et al. evaluated how prenatal cannabis exposure affects the fetal eCB system and concentrations of THC and its metabolites (i.e., 11-OH-THC, THC-COOH), which are also known to be psychoactive. Two routes of administration were tested: injection and inhalation. For the latter, a novel e-vape method was used to deliver puffs of cannabis, which results in plasma THC levels that mirror levels seen in humans. Results showed that THC and its metabolites can indeed cross the placenta and impact the fetal brain, with levels that vary by route of administration. Importantly, about 30% of THC in maternal blood reached the fetal brain following inhalation, which suggests that THC in maternal blood may be a useful proxy for fetal exposure. In addition, an arresting 215% and 155% increase in 11-OH-THC was observed in the placenta and in fetal brain, respectively, after maternal cannabis inhalation. These findings are alarming given that 11-OH-THC is two to seven times more potent than THC () and may be less likely to be broken down by fetal metabolism, thereby impacting the developing eCB system. Although there was no overall effect on fetal eCB levels, the levels of THC in the fetal brain correlated positively with levels of the eCB anandamide (AEA) and negatively with the eCB 2-arachidonoylglycerol (2-AG). These findings highlight the potential for cannabis to accumulate in the central nervous system and disrupt normative eCB functioning in offspring. They also demonstrate the importance of route of administration, and of measuring THC metabolites in future studies. Looking later in development, Sisk et al. 
leveraged data from the nationwide Adolescent Brain Cognitive Development (ABCD) Study to examine associations between the eCB system and large-scale resting-state brain networks in a sample of 3,109 9- to 10-year-old children. The authors focused on a common functional polymorphism (C385A) in the gene encoding fatty acid amide
Performance of an efficient sleep mode operation for IEEE 802.16m Power saving is one of the important issues for battery-powered mobile stations in mobile WiMAX. Both the IEEE 802.16e and IEEE 802.16m standards define sleep mode operations for power saving of mobile stations. In IEEE 802.16e, sleep mode alternates a listening window of fixed length with a sleep window, where the sleep window can be doubled. The mobile station sends or receives packets during the active mode. In IEEE 802.16m, the sleep cycle consists of an extendable listening window and a sleep window, where the sleep cycle can be doubled and the sleep window is the remaining part of the sleep cycle. The mobile station sends or receives data during the extendable listening window without going back to the active mode. The extendable listening window is implemented by the T_AMS timer, which plays the role of the sleep mode request/response messages in IEEE 802.16e. In this paper, we propose an efficient sleep mode operation for IEEE 802.16m advanced mobile WiMAX. The proposed scheme takes advantage of the sleep modes in both IEEE 802.16e and IEEE 802.16m. The scheme uses binary exponential sleep windows that guarantee a minimum length for effective power saving. The mobile station uses the T_AMS timer of IEEE 802.16m so that it can send or receive data packets during the extendable listening window in sleep mode. We mathematically analyze the proposed scheme by an embedded Markov chain to obtain the average message delay and the average power consumption of a mobile station. The analytical results match the simulation results very well and show that the power consumption of our scheme is better than those of the legacy sleep modes in IEEE 802.16e and IEEE 802.16m under the same delay bound.
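The binary exponential sleep-window growth described above can be sketched in a few lines. This is a minimal illustration of the doubling-with-cap behavior, assuming illustrative parameter values; the function name and window sizes are not taken from the 802.16e/802.16m standards.

```python
def sleep_windows(w_min, w_max, idle_cycles):
    """Yield successive sleep-window lengths (in frames): start at the
    guaranteed minimum w_min, double after each idle cycle, and cap at
    w_max, as in binary exponential sleep-window growth."""
    w = w_min
    for _ in range(idle_cycles):
        yield w
        w = min(2 * w, w_max)

# With illustrative parameters: minimum 4 frames, maximum 64 frames.
print(list(sleep_windows(4, 64, 7)))  # [4, 8, 16, 32, 64, 64, 64]
```

Guaranteeing the minimum length w_min is what distinguishes the scheme's power saving: the station never falls back to a window so short that listening overhead dominates, while the cap w_max bounds the worst-case wake-up delay.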
Self-renewal of embryonic stem cells in the absence of feeder cells and exogenous leukaemia inhibitory factor. To evaluate the role of leukaemia inhibitory factor (LIF) in maintaining pluripotent embryonic stem (ES) cells in culture, we established several exogenous-LIF-independent ES cell lines by continuous passaging in culture. The newly established ES cells, Kli and CBli, sustained their growth and remained undifferentiated in LIF-deficient medium. Analysis of chimaeric animals, produced with the beta-galactosidase transgenic Kli ES cells, revealed that LIF-independent ES cells can contribute to all embryonic germ layers. There was no detectable LIF protein in ES cell conditioned medium, and no upregulation of LIF mRNA was found. The addition of neutralising anti-LIF antibodies was not sufficient to abrogate the self-renewal of the Kli ES cells. These studies suggest that the signalling pathway involving diffusible LIF can be bypassed for maintaining pluripotency in culture, and indicate considerable heterogeneity in growth factor dependence and differentiation among different ES cells.
Possibilities for m-Government in Latin America Mobile information technologies have revolutionized governments and society. Through them, people can communicate, access information, and make demands. Mobile government (m-Government) has begun to leverage these tools to improve services, create more opportunities, and extend access to IT. The integration of m-Government goals with social networks has opened new possibilities to promote open government, providing a great opportunity to reduce the digital divide. Thus, the goal of this research is to analyze the impact that mobile technologies have in Latin America and the possibilities for developing m-Government to deliver public services.
A comparison of three techniques for rapid model development: an application in patient risk-stratification. Accurately risk-stratifying patients is a key component of health care outcomes assessment, and many health care organizations increasingly rely upon automated means for assistance in making patient risk-stratification decisions. Unfortunately, the process of outcome model development, as it is currently practiced, is both time-consuming and difficult. We investigated the relative abilities of three modeling techniques (logistic regression, artificial neural network (ANN), and Bayesian) to rapidly develop models for risk-stratifying patients. Our results demonstrated that all three modeling techniques perform equally well in certain situations. However, the Bayesian model with conditional independence had the best overall performance. Unfortunately, none of the models was able to achieve the degree of accuracy that would be required in a medical setting.
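To make the comparison concrete, the sketch below fits two of the three techniques on synthetic binary "patient" data: logistic regression by gradient descent, and a Gaussian naive Bayes classifier, which is one common form of a Bayesian model with the conditional-independence assumption. This is a toy illustration under stated assumptions, not the paper's data, models, or results (the ANN is omitted for brevity), and the accuracy figures carry no clinical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 2 features, class 1 shifted by 1.5 SD (equal priors).
n = 500
X0 = rng.normal(0.0, 1.0, (n, 2))
X1 = rng.normal(1.5, 1.0, (n, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# --- Logistic regression via batch gradient descent ---
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= 0.1 * np.mean(p - y)                 # gradient step on bias
acc_lr = np.mean(((X @ w + b) > 0) == y)

# --- Gaussian naive Bayes (conditional independence across features) ---
def nb_log_like(X, Xc):
    # Per-class log-likelihood under independent Gaussians per feature.
    mu, var = Xc.mean(0), Xc.var(0) + 1e-9
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)

# Equal class priors, so comparing likelihoods suffices.
pred_nb = (nb_log_like(X, X1) > nb_log_like(X, X0)).astype(float)
acc_nb = np.mean(pred_nb == y)

print(f"logistic regression accuracy: {acc_lr:.3f}")
print(f"naive Bayes accuracy:        {acc_nb:.3f}")
```

On this Gaussian, well-separated data both models should classify well; real patient data typically violate the independence assumption and produce the more nuanced differences the paper reports.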
Isolation and Identification of Adenoviruses in Microplates A procedure for isolating and identifying adenoviruses in microplates is described. Comparison tests with standard tube methods show an agreement of 92%. Virus isolations are greatly facilitated by the microplate method. This method is sensitive, economical, and especially applicable to large-scale epidemiological surveys. The conventional method of isolating viruses in vitro is an expensive, cumbersome, and time-consuming operation. The propagation and maintenance of host cultures for this purpose requires large quantities of cells, media, and utensils (tubes, caps, racks, etc.) as well as adequate space for manipulation and incubation of cultures. The isolation process is a lengthy one. A period of 4 to 6 weeks may elapse after inoculation of the tissue culture monolayer with the specimens before virus isolation work can be completed. During this time, cells are observed, media are changed, and passages are made, all requiring additional materials and handling of the culture tubes. This report describes the use of the microplate tissue culture system for isolation of viruses. This technique has been employed for viral serology, especially where large numbers of tests are required. Virus isolation involves the simultaneous inoculation of patient specimen and seed tissue cells in replicate wells of a microplate. Three 7-day passes are carried out without change of media. Isolates are typed in plates when extensive cytopathic effect (CPE) occurs. Microscopic observation is facilitated, since several specimens are contained in a single plate. 
This technique is particularly applicable to large-scale epidemiological surveys, enabling one technician to handle large numbers of specimens rapidly and economically. MATERIALS AND METHODS Microplate equipment. (i) Disposable polyvinyl "U" plates (Cooke Engineering Co.) were treated as previously described and exposed to ultraviolet light for 1 hr for sterilization. (ii) Lightweight plastic covers (Linbro Chemical Co.) were also irradiated. (This investigation was done in connection with Research Project no. MF 12.524.009-4019AF6I, Bureau of Medicine and Surgery, Navy Department, Washington, D.C.) (iii) Calibrated transfer loops and droppers were standard Microtiter equipment. Cell cultures and media. HeLa cells obtained from V. V. Hamparian, Children's Hospital, Columbus, Ohio, were used routinely for adenovirus isolation. WI-38, HEp-2, secondary rhesus monkey kidney, and human embryonic kidney cells have also been used for isolation of other viruses. Cells for microplate cultures were trypsinized by the residual trypsinization technique and diluted to 2 × 10⁵ cells/ml. Growth medium consisted of Eagle's minimum essential medium (MEM) in Earle's balanced salt solution (EBSS) supplemented with 10% fetal calf serum. Antibiotics were added in the following concentrations: penicillin, 200 units/ml; streptomycin, 200 μg/ml; and amphotericin B, 5 μg/ml. HeLa cells for tube cultures were prepared in the same manner. Each tube was seeded with 1 ml of cells at a concentration of 10⁵. After 48 hr of incubation, cells were changed to a maintenance medium consisting of MEM in EBSS supplemented with 5% fetal calf serum. Diluent used for microplates consisted of 0.5% lactalbumin hydrolysate in EBSS. Specimens. Specimens obtained from Naval recruits included nasal washings and throat and anal swabs collected in veal infusion broth supplemented with 0.5% bovine albumin. Diluent for swabs contained antibiotics in the concentrations stated for the growth medium. 
The nasal washes did not contain antibiotics. Typing sera. Typing sera were prepared in rabbits by using prototype adenovirus strains as immunizing antigens. Antibody titers and dosage for virus-typing tests were determined by the end point dilution technique. Microplate isolation procedures. Isolation plates were set up as follows: each specimen was inoculated into one row of eight wells, each well in the row receiving one drop (0.025 ml) of diluent, two drops of specimen, and one drop of cells in growth medium. A row of control cells was seeded in wells between specimen rows. Thus a plate could accommodate six specimens and six control rows (Fig. 1). The amount of specimen in the eight wells was equivalent to that normally inoculated into duplicate tubes in the standard method (0.4 ml). Plates were covered with plastic covers and incubated in a humidified incubator (at least 90% relative humidity) in a 2% CO2 atmosphere at 34°C. Twenty-four hours after inoculation, an additional drop of HeLa cells was added to each well. Cells were observed for 7 days without change of medium and then were subcultured. Three 7-day passes were routinely made before cultures were terminated. Passage was made as follows. The cell sheets were disrupted with the dropper tip, and the entire contents of the eight wells were drawn up into the dropper. Two drops were then passed to each of eight wells in a new plate. Control wells were also passed in the same manner. Excess passage material may be frozen at this time. Typing of isolates was done when CPE was complete. Positive specimens were diluted 1:2 by transferring one drop (0.025 ml) with a diluting loop to each of several wells containing 0.025 ml of diluent. One drop of typing serum, containing 10 to 20 neutralizing doses of adenovirus antibody, was added to each well. Virus and serum controls were also prepared in the same plate. Preincubation for 1 hr was carried out as described above.
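The stated equivalence between the microplate inoculum and the standard tube inoculum is simple drop arithmetic; a quick check using the drop volume and counts given in the text:

```python
# Inoculum equivalence check: 8 wells x 2 drops of specimen x 0.025 ml/drop
# should match the two duplicate tubes x 0.2 ml used in the standard method.
DROP_ML = 0.025                 # calibrated Microtiter drop volume (from the text)
wells, drops_per_well = 8, 2

microplate_ml = wells * drops_per_well * DROP_ML
tube_ml = 2 * 0.2               # duplicate tubes, 0.2 ml each

print(microplate_ml, tube_ml)   # 0.4 0.4
```

Both routes deliver 0.4 ml of specimen, which is why the two methods can be compared head to head.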
One drop of cell suspension was added to all test wells, and the plates were reincubated. Test results were evaluated at 2 and 5 days. Tube isolations. Duplicate tubes in maintenance medium were inoculated with 0.2 ml of specimen and rolled at 34°C. Tubes were read and media were changed three times weekly. Passage was made at the end of 2 weeks by one cycle of freezing and thawing and transferring 0.2 ml to new cell cultures. Typing of isolates was carried out in microplates as previously described. RESULTS AND DISCUSSION Microscopic examination of plates 24 hr after inoculation showed various degrees of toxicity in most specimens. As would be expected, anal specimen toxicity was most pronounced. To overcome this problem, an additional drop of cells was added to the wells at this time. These added cells formed a sheet within 48 hr, and although the cell sheet was not as complete as in subsequent passes, it was considered adequate; the remaining toxicity could be distinguished from viral CPE (Fig. 2). In extremely toxic specimens, subcultures were initiated earlier than 7 days. Besides specimen toxicity, it was discovered that the veal infusion broth in the swab diluent also contributed to this problem. In testing other media that would be suitable for sampling fluids, it was found that EBSS supplemented with 0.5% bovine albumin was the least toxic. Although this medium was not toxic, it was found that the development of virus CPE was delayed. In one experiment, 38 specimens were collected in veal infusion broth with 0.5% bovine albumin or EBSS with 0.5% bovine albumin, respectively. Eight viruses (adenovirus 4) were eventually recovered from the same specimens collected in either medium. However, six were obtained in the first pass (7 days) and two in the second pass with veal infusion broth, compared to one in the first pass, five in the second pass, and two in the third pass from the sampling with EBSS.
Because of this delayed CPE, the veal infusion broth with 0.5% bovine albumin was chosen as the collecting medium for further tests. A comparison of the tube and microplate systems for adenovirus isolation was made. Of 263 specimens cultured by both the standard tube method and the microplate procedure described, 241, or 92%, showed agreement either by the recovery of a virus or by negative results in both tests. Fifty-eight isolations (51 type 4, seven type 7) and 183 negatives were obtained from the same specimens by both methods. The 22 specimens showing disagreement were distributed as follows: nine which were positive in tubes were negative in microplates; on the other hand, 13 which were positive in microplates were negative in tubes. In these disagreements, adenovirus types 4 and 7 were randomly distributed between the two tests. None of the above differences was statistically significant. During these studies it was found that viral isolates were recovered earlier in microplates than in tubes. In a comparison test of 50 specimens observed daily for the appearance of CPE, 29 isolates were recognized in microplates within 13 days of incubation as compared to 19 in tubes. The difference within this period is statistically significant (P = 0.045); however, the total number of positives eventually recovered at the end of 21 days was not (31 in microplates, 26 in tubes). The median time required for virus isolation was 7 days for microplates and 9 days for tubes. In connection with these experiments, certain variations in the microtechnique were appraised. There was no apparent advantage to inoculating specimens on preformed or established monolayers, nor was it advantageous to freeze and thaw the plates between passages. It was also found that 7 days of incubation per passage was the optimal time for recovery of viruses. Most of the isolations were made within the first two passes (14 days).
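The 92% agreement figure and the discordant counts above can be reproduced directly from the 2 × 2 tube-versus-microplate comparison; a minimal sketch, using only the counts stated in the text:

```python
# Tube vs. microplate comparison of 263 specimens (counts from the text).
both_pos = 58      # virus recovered by both methods
both_neg = 183     # negative by both methods
tube_only = 9      # positive in tubes, negative in microplates
plate_only = 13    # positive in microplates, negative in tubes

total = both_pos + both_neg + tube_only + plate_only
agree = both_pos + both_neg
print(total, agree, round(100 * agree / total))  # 263 241 92
```

The 22 discordant specimens (9 vs. 13) are close to evenly split between the two methods, consistent with the paper's conclusion that neither direction of disagreement was statistically significant.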
In the course of these experiments, many other advantages, in addition to the obvious ones of economy of time and materials, became apparent. Since an established cell monolayer was not necessary, specimens could be inoculated upon receipt, thus reducing the risk of loss of virus due to frozen storage and subsequent thawing. The use of microplates also facilitates the detection of CPE. An entire well can be quickly scanned by microscope and any cellular change can be noted. In scanning tubes, often many fields encompassing the entire cell monolayer must be examined before CPE is detected. Another desirable feature was the ease of harvesting. Plates could be frozen intact for further passing, thus eliminating transfer of material to small vials, as is necessary to conserve space in tube isolation procedures. Also, harvest time was not as critical, as an additional drop of cells could be added to wells, thus delaying the time of harvest. Although sufficient numbers have not been tested, viruses of other groups have been isolated successfully using micromethods. Herpesvirus, influenza, rubella, poliovirus, echoviruses, and rhinoviruses have all been isolated and identified. It is obvious, however, that this method, as presently carried out, is not optimal for isolating groups of viruses with special culture requirements, such as incubation of tubes in a roller drum apparatus for rhinovirus isolation. It should also be emphasized that care must be exercised in performing this technique to avoid cross-contamination of cultures. Although more adenoviruses were recovered by the microplate method in these studies, we do not wish to imply that the tube method is less sensitive. Other comparison tests have indicated the reverse result. Neither of these differences is believed to be statistically significant, with agreement between tests ranging from 85 to 95%. The microplate method does, however, have decided advantages when a large number of isolations need to be performed.
Can things get worse when an invasive species hybridizes? The harlequin ladybird Harmonia axyridis in France as a case study So far, only a few studies have explicitly investigated the consequences of admixture for the adaptive potential of invasive populations. We addressed this question in the invasive ladybird Harmonia axyridis. After decades of use as a biological control agent against aphids in Europe and North America, H. axyridis recently became invasive on four continents and has now spread widely in Europe. Despite this invasion, a flightless strain is still sold as a biological control agent in Europe. However, crosses between flightless and invasive individuals yield individuals able to fly, as the flightless phenotype is caused by a single recessive mutation. We investigated the potential consequences of admixture between invasive and flightless biological control individuals on the invasion in France. We used three complementary approaches: (i) population genetics, (ii) a mate-choice experiment, and (iii) a quantitative genetics experiment. The invasive French population and the biological control strain showed substantial genetic differentiation, but there are no reproductive barriers between the two. Hybrids displayed a shorter development time, a larger size and a higher genetic variance for survival in starvation conditions than invasive individuals. We discuss the potential consequences of our results with respect to the invasion of H. axyridis in Europe. Introduction Hybridization (interbreeding between genetically differentiated lineages) takes place in a very wide range of organisms (Barton and Hewitt 1985; Dowling and Secor 1997; Mallet 2005) and may play an active role in a variety of evolutionary processes ranging from local adaptation to speciation (Stebbins 1959; Arnold 1992; Barton 2001).
In the field of invasion biology, hybridization is now seen as a potential stimulus for the evolution of invasiveness (Ellstrand and Schierenbeck 2000; Lavergne and Molofsky 2007; Blair and Hufbauer 2010). Traditionally, hybridization involves interspecific or intergeneric crosses, as exemplified by the invasive plant Spartina anglica, which mixes with native and other alien Spartina species. However, crosses between individuals from genetically differentiated populations of the same species (i.e. admixture; Ellstrand and Schierenbeck 2000; Culley and Hardiman 2009) are also considered hybridization. Admixture seems to be frequent in biological invasions. An increasing number of studies document biological invasions resulting from multiple introductions from distinct populations that bring together genetically differentiated individuals in a single introduced range (Lavergne and Molofsky 2007). To date, most studies dealing with admixture have aimed at detecting multiple source populations in biological invasions from selectively neutral markers. Only a few studies have explicitly investigated the consequences of intraspecific hybridization for the evolution of life-history traits and thus for the adaptive potential of introduced populations (Lavergne and Molofsky 2007). Hybridization may lead to very different outcomes ranging from detrimental to beneficial (Arnold and Hodges 1995; Burke and Arnold 2001). On the one hand, hybridization may reduce the fitness of parental individuals, either due to incipient reproductive isolation in the form of genetic incompatibilities that reduce the mating success of parents (prezygotic isolation) or through a decrease in the fitness of offspring due to the loss of local adaptation and/or the breakdown of co-adapted gene complexes (outbreeding depression, as exemplified in tension zones; Barton and Hewitt 1985).
On the other hand, hybridization has the potential to boost invasiveness through two nonexclusive mechanisms: heterosis and the generation of new genotypes. Heterosis (or hybrid vigor) occurs when hybridization masks deleterious alleles (Keller and Waller 2002) or in cases of overdominance and/or synergistic epistasis between alleles inherited from the parental taxa. Allopolyploidy, which sometimes accompanies hybridization, may also contribute to the heterotic effect. The generation of new genotypes occurs through recombination (Ellstrand and Schierenbeck 2000; Schierenbeck and Ellstrand 2009) and alleviates the loss of genetic variance after founder events, hence restoring or even increasing the efficiency of selection (Lee 2002). Given its invasion history, the invasive harlequin ladybird Harmonia axyridis provides an opportunity to examine whether individuals from genetically distinct populations interbreed freely and how admixture affects life-history traits. Native to Asia, H. axyridis has been introduced repeatedly as a biological control agent against aphids in Europe since 1982. Despite recurrent intentional releases of beetles in acclimation attempts, the species did not establish for 20 years. For unknown reasons, it recently and suddenly became invasive on four different continents. The species is known to be a harmful predator of nontarget arthropods, a household invader, and a pest of fruit production (Koch 2003). In Europe, invasive populations were first recorded in Belgium in 2001. The species has now spread widely in Europe, with a current distribution that extends from Southern France to Denmark. Up to now, whether the European invasive populations result from intentional introductions, accidental migrants or both has remained unknown. In France, a flightless strain of H. axyridis is sold commercially for biological control.
This flightless strain, called Coccibelle (BIOTOP, Valbonne, France), was selected in the late 1990s for its inability to fly and disperse from a traditional flying biological control stock. The flightless phenotype is caused by a single recessive mutation in a gene involved in flight muscles; thus only individuals homozygous for the mutant allele cannot fly. The Coccibelle strain was developed with the goal of obtaining a more localized, and hence more effective, control of aphids by both larvae and adults. As with most coccinellids, H. axyridis diapauses during cooler periods. It congregates into large groups (up to thousands of individuals) to overwinter and is attracted to light-colored dwellings and other man-made objects as overwintering sites. Thus, an additional advantage of the Coccibelle strain is the inability of flightless individuals to reach wintering sites, which minimizes both its impact as a household pest and its ability to establish populations in the wild. However, the continued use of Coccibelle for biological control raises the possibility that it will cross with invasive individuals in Europe, especially in France. If such crosses occurred, they would yield individuals able to fly and hence could potentially impact the invasive process. The purpose of this study was to investigate the potential role of intraspecific hybridization (i.e. admixture) between Coccibelle and invasive individuals on the invasion of H. axyridis in France. Wolfe et al. outlined three criteria that must be met for intraspecific crosses to play a role in biological invasions. First, the populations involved in the admixture process should be genetically differentiated. Second, crosses should be possible between individuals from the different populations. Third, the admixed individuals should differ from parental ones in some of their life-history traits to impact the invasion process.
This last criterion may involve direct heterosis, an increase in genetic variance, or both (Ellstrand and Schierenbeck 2000; Burke and Arnold 2001; Lee 2002; Culley and Hardiman 2009). Here, we assessed the three above criteria for crosses between the Coccibelle biological control strain and the invasive French population of H. axyridis. First, we determined the level of differentiation between Coccibelle and the invasive French population at 18 microsatellite markers. Second, we evaluated whether there are reproductive barriers that could prevent interbreeding between biological control and invasive populations using a mate choice experiment. Third, we used a quantitative genetics experiment to estimate the phenotypic means and variances for several key life-history traits of offspring produced by crossing Coccibelle with the French invasive population. Material and methods Population sampling and rearing conditions Invasive individuals (hereafter referred to as INV) were collected in the wild from an invasive population in Croix, Northern France (50°40′35″N, 3°08′33″E), where H. axyridis has been observed since 2004. It is worth stressing that we previously genotyped seven French populations covering the French range (in 2007-2008) and found no genetic structure among them at 18 microsatellite loci (average F_ST = 0.052; Arnaud Estoup, unpublished data). This absence of genetic structure at neutral loci made it reasonable to base our quantitative genetics study on a single invasive French population sample. The corresponding experimental design, while large (2400 larvae, as described below), was feasible, whereas additional crosses would not have been. Individuals from the Coccibelle biological control strain (hereafter referred to as BIO) were obtained from the firm BIOTOP (Valbonne, France), which originally commercialized it. Approximately 70 mature individuals of both INV and BIO were obtained in September 2007.
These first generation individuals (G0) were used to initiate both INV and BIO populations in the laboratory for two generations, under strictly controlled conditions, to avoid potential biases due to maternal effects. During these two generations, populations were fed with ionized Ephestia kuehniella (Lepidoptera: Pyralidae) eggs and reared under constant environmental conditions (23°C; 65% RH; L:D 14:10). At generation G2, males and females were separated immediately after emergence to prevent mating. They were then maintained under the same environmental conditions for 2 weeks to ensure that all individuals had reached reproductive maturity at the beginning of the experiments. Are INV and BIO genetically distinct at microsatellite loci? To answer this question, we genotyped 28 G0 individuals per population (both INV and BIO) at 18 microsatellite loci following Loiseau et al. We estimated the genetic diversity within populations by computing both the allelic richness (R_S; El Mousadik and Petit 1996) and the expected heterozygosity (H_E; Nei 1987). The level of genetic differentiation between the INV and BIO populations was estimated by computing F_ST (Weir and Cockerham 1984). All computations were carried out using the software Fstat (Goudet 1995). Differences in R_S and H_E values were tested using a Wilcoxon sign rank test, and the F_ST value was tested for significant deviation from zero using the permutation test implemented in Fstat (Goudet 1995). Are there reproductive barriers between the INV and BIO populations? We addressed this question by performing mate choice trials involving three individuals (one female and two males) in cylindrical boxes (height = 3 cm; diameter = 8.5 cm). We used virgin G2 adults 2 weeks after emergence and created trios of one female from the focal population for an individual trial (either INV or BIO) and one male from each of the two populations (INV and BIO). We set up 23 such trios with BIO females and 26 with INV females.
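The expected heterozygosity reported above is Nei's gene diversity, one minus the sum of squared allele frequencies at a locus. A minimal sketch, with illustrative allele frequencies rather than the study's actual data:

```python
def expected_heterozygosity(freqs):
    """Nei's gene diversity: H_E = 1 - sum(p_i^2) over allele frequencies."""
    assert abs(sum(freqs) - 1.0) < 1e-9, "allele frequencies must sum to 1"
    return 1.0 - sum(p * p for p in freqs)

# A locus with two equally common alleles has the maximum two-allele
# diversity of 0.5; diversity drops as one allele dominates, as expected
# after the bottleneck of founding a captive strain.
print(expected_heterozygosity([0.5, 0.5]))  # 0.5
print(expected_heterozygosity([0.9, 0.1]))  # ~0.18
```

In practice this is averaged over loci (with a small-sample correction), which is what a program such as Fstat reports.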
We left the three partners together until the female laid her first clutch. We then collected the males and preserved them in ethanol for genetic analysis. We isolated the first clutch and counted the eggs. After 5 days, we counted the number of living larvae and preserved them in ethanol. We repeated the procedure for another clutch 4 weeks later. We then preserved all females in ethanol for genetic analysis. We extracted individual genomic DNA using the Chelex method for each mother and the two putative fathers, as well as for eight larvae from each clutch (N = 49, 98 and 784, respectively, for females, males and larvae). All these individuals were genotyped following Loiseau et al. at a subset of seven microsatellite loci (Ha 005, Ha 201, Ha 215, Ha 244, Ha 267, Ha 281, Ha 605). These seven loci were selected from the 18 loci available because they can unambiguously discriminate the genetic origin (INV or BIO) of individuals, using the program whichrun (Banks and Eichert 2000). We assigned each offspring to its parents based on multilocus genotypes using the program probmax version 1.3 (Danzmann 1997). This program assigns progeny to a set of possible contributing parents given that the genotypes are known for both the progeny and the possible parents. We used sas version 9.1 (SAS Institute 2003) to analyze these data. We tested the null hypothesis that male reproductive success is equal (1:1 ratio) for the two types of males (INV and BIO), separately for each female type (INV or BIO), using a chi-square test for proportions. We also tested the effect of female type on male reproductive success with a test of independence in a two-way table. Finally, we tested whether the hatching rate differed significantly according to the parents using a generalized linear model with a binomial probability distribution and a logit link function, with female type, male type and their interaction as factors.
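The 1:1 siring-ratio test described above is a chi-square goodness-of-fit on offspring counts. A minimal sketch; the counts below are illustrative, not the study's data:

```python
def chi_square_gof(observed, expected_props):
    """Chi-square goodness-of-fit statistic: sum over cells of (O - E)^2 / E,
    where E is the total count times the expected proportion for that cell."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed, expected_props))

# Hypothetical trial: of 100 genotyped larvae, 80 were sired by the BIO male
# and 20 by the INV male, tested against the null 1:1 ratio.
stat = chi_square_gof([80, 20], [0.5, 0.5])
print(stat)  # 36.0 (well beyond the 3.84 cutoff for P = 0.05 at 1 df)
```

With two categories the statistic has one degree of freedom, so values far above 3.84, such as those reported in the Results, reject the 1:1 null decisively.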
Do life-history traits differ between hybrids and their parents? We addressed this question by creating four types of crosses (female × male) from the two parent samples BIO and INV: BIO × BIO, BIO × INV, INV × BIO and INV × INV. For each cross, we randomly set up 10 couples (all the larvae produced by a couple will hereafter be referred to as a family) by putting one male and one female in a cylindrical box (height = 3 cm; diameter = 10 cm). As a consequence of this experimental design, the factor family was nested within the factor cross, as it was not possible to produce the four crosses from a given pair of male and female (whose offspring formed a given family). At the beginning of the experiment, we collected and isolated four clutches (more than 20 eggs per clutch) from each couple. On the day of hatching (the fourth day), 15 larvae per clutch were randomly chosen and placed in a small cylindrical box (height = 2 cm; diameter = 5 cm) with a damp piece of cotton wool. For this experiment, we thus used 2400 larvae (4 clutches × 15 larvae × 10 couples × 4 crosses). Larvae were fed ad libitum every 2 days until adulthood, with freeze-dried aphids (Acyrthosiphon pisum) for 30 larvae per family and with eggs of Artemia salina for the 30 remaining larvae. Individuals were maintained under constant environmental conditions (23°C; 65% RH; L:D 14:10) during the experiment. Larvae were checked every day, and we recorded the number of individuals reaching adulthood (i.e. larval survival) and the total development time from egg laying to adult emergence of each individual. A subset of individuals reaching adulthood was used to estimate four additional traits: the reproductive investment of females, the lifespan of starving adults, the survival rate in quiescent conditions and the body length. To estimate reproductive investment, two adult females from each family were dissected and the number of ovarioles was counted under a binocular microscope.
To estimate the lifespan of starving adults, one to three females and one to three males (depending on the size of the family) were randomly collected and placed individually in a small cylindrical box (height = 2 cm; diameter = 5 cm) with no food, and thereafter checked every day for 45 days. To estimate the survival rate in quiescent conditions, one to three females and one to three males (again, depending on the size of the family) were randomly collected and placed in a cylindrical box (height = 3 cm; diameter = 10 cm) with no food under constant abiotic conditions corresponding to conditions for diapause (5°C; 60% RH; L:D 12:12). After 5 weeks, we counted the number of individuals still alive in each box to estimate the survival rate. Finally, the body length of all the adults used to estimate survival rate in quiescent conditions was measured with a binocular stereomicroscope micrometer using the software ImageJ (http://rsbweb.nih.gov/ij/index.html). We analyzed data on the two juvenile traits (larval survival and development time) and the four adult traits (reproductive investment, lifespan of starving adults, survival rate in quiescent conditions and body length) using sas version 9.1 (SAS Institute 2003). For the response variables known to deviate markedly from a normal distribution (i.e. counts and proportions), we used the traditional transformations (square root for reproductive investment and arcsine for larval survival and survival rate in quiescent conditions; Sokal and Rohlf 1995). For the remaining variables, which followed approximately normal distributions, we used the original data.
This choice is justified by the fact that (i) there was no obvious transformation that improved the normality of residuals and (ii) the experimental design was almost perfectly balanced and included large sample sizes, two features known to mitigate the effects caused by a non-normal distribution and/or heteroscedasticity of variances (Ananda and Weerahandi 1997). We used model selection following Burnham and Anderson and Shoukri and Chaudhary to determine the appropriate models on which to test the significance of effects of interest. First, including all main fixed effects (cross and food for the response variables reproductive investment, larval survival and development time, and cross, food and sex for body length, survival rate in quiescent conditions and survival in starvation) and their interactions, we compared models with different random effects. Models for all response variables included family nested within cross and family(cross) × food as random effects. For the variables that included sex as a fixed effect, we also considered the interactions family(cross) × food × sex and family(cross) × sex as random effects. Note that with the random effect family(cross), we can either estimate one variance component (assuming the same variance among families over the four crosses) or four variance components (each one specific to a cross, assuming that the variances are heterogeneous). We compared the full models with simpler nested models by removing a different variance component each time, using Restricted Maximum Likelihood (REML) to assess the significance of random effects. If this removal worsened the fit of the model significantly, as evidenced by likelihood ratio tests, the variance component was kept in the model; otherwise, the variance component was removed from the model and model selection continued from this simpler model (Shoukri and Chaudhary 2007; Goldman and Whelan 2000; Shapiro 1988; see Appendix A for details).
Once a covariance structure was selected, we used Maximum Likelihood (ML) to select which fixed effects improved the fit of the model. Model selection was carried out based on Akaike's Information Criterion corrected for small sample sizes (hereafter AICc) following Burnham and Anderson. As suggested by the same authors, we considered models with a delta AICc of 2 or less as indistinguishable on statistical grounds and, on the basis of parsimony, selected the model with the smaller number of parameters for inference. Results of the model selection procedures are detailed for each variable in Appendix A. To compare the genetic variance of the life-history traits between hybrid individuals and their parents, we used the variance components estimated for the family effect within each cross (V_G). The genetic variances of the measured traits were compared among crosses using the genetic coefficient of variation (CV_G), which is the square root of the genetic variance (V_G) divided by the trait mean (see Houle 1992). For each trait, we tested the hypothesis that admixture increases the genetic variance by comparing the CV_G of the four crosses using likelihood ratio tests. Are INV and BIO genetically distinct at microsatellite loci? The within-population variability was significantly higher in the INV sample (R_S = 6.08, H_E = 0.60) than in the BIO sample (R_S = 2.44, H_E = 0.40; P < 0.0001 for R_S and P = 0.0005 for H_E). We also found that the BIO and INV populations were substantially genetically differentiated, with F_ST = 0.13 (P < 0.0001). Are there reproductive barriers between the INV and BIO populations? We observed mating and egg clutch production in all mate choice trials. All genotyped larvae could be unambiguously assigned to a male. Within a clutch, eggs were sired by one or two males in variable proportions.
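The genetic coefficient of variation defined above (square root of the genetic variance divided by the trait mean, after Houle 1992) reduces to a one-line computation once the variance components have been estimated. A minimal sketch with illustrative numbers, not the study's estimates:

```python
def cv_g(genetic_variance, trait_mean):
    """Genetic coefficient of variation: CV_G = sqrt(V_G) / mean,
    reported here as a percentage (Houle 1992)."""
    return 100.0 * genetic_variance ** 0.5 / trait_mean

# Hypothetical family variance components for survival in starvation (days):
# a cross with a fourfold larger genetic variance has a twofold higher CV_G
# at the same trait mean, since CV_G scales with the standard deviation.
print(cv_g(genetic_variance=1.0, trait_mean=9.0))  # ~11.1 ("parental" cross)
print(cv_g(genetic_variance=4.0, trait_mean=9.0))  # ~22.2 ("hybrid" cross)
```

Dividing by the trait mean is what makes CV_G comparable across traits measured on very different scales, such as development time in days and survival as a proportion.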
For a given female fertilized by two males, the proportion of eggs sired by a given father could change drastically among successive clutches. Interestingly, we found that for both types of females (BIO and INV), the BIO males sired a higher proportion of offspring than the INV males (Fig. 1). BIO males sired 80.3% of the offspring of BIO females and 71.8% of the offspring of INV females. Both proportions are significantly higher than the expected 50% fertilization by each male type (χ² = 132.01, P < 0.0001 and χ² = 81.70, P < 0.0001 for BIO and INV females, respectively). A similar result was obtained when using the clutch as an independent statistical unit (excluding in this case the clutches sired by two males): for BIO females, 81% of clutches were sired only by a BIO male and 19% only by an INV male; for INV females, 78% of clutches were sired only by a BIO male and 22% only by an INV male. In both cases, BIO males sired significantly more offspring than INV males (P < 0.05). It is worth noting that we rejected the null hypothesis of independence between the two variables (female type and male type; P = 0.0135, Fig. 1). This result can be interpreted as BIO males siring more offspring when mated with BIO females than with INV females. To test whether the hatching rate differed significantly according to the parents, we split the male status into three categories: BIO, INV or a mixture of both types. The mean hatching rate across all the observed clutches was 73%. We did not detect any significant effect of the male parent (P = 0.58), the female parent (P = 0.52), or their interaction (P = 0.96) (see Fig. 2). Do life-history traits differ between hybrids and their parents? Results of the model selection are detailed in Appendix A. The results of the best models for the six studied traits are summarized in Table 1 and the results of the full models in Appendix B. We first focused our analysis on the comparison between the hybrids and their parents.
We found that the type of cross had a significant effect on development time (Fig. 3F). Individuals from the pure parental crosses did not differ from each other (P = 0.97). The type of cross did not have any significant effect on the four remaining traits (larval survival, reproductive investment, survival in starvation and survival in quiescent conditions; Fig. 3A,C,D,E respectively). However, for reproductive investment, Fig. 3C shows that INV females tend to invest more in reproductive structures. Although the cross effect was not retained in the best model for reproductive investment (see Appendix A), this effect was marginally significant in the full model (P = 0.094). In pairwise comparisons, the only significant difference is between pure invasive females and pure biological control females. Regarding random effects, we found a significant family effect for all traits except length, and a significant interaction between food and family for development time, survival in starvation and length. This means that variation in all the studied traits was, at least partly, genetically based (Table 1). Genetic coefficients of variation ranged widely among traits (Table 2). CV_G was low for development time, reproductive investment and length (less than 5%) but high for larval survival, survival in starvation and survival in quiescent conditions (between 10% and 68%; Table 2). For development time, survival in quiescent conditions and length, there was no obvious difference between the four crosses. For reproductive investment, the two crosses involving an INV mother (i.e. INV-INV and INV-BIO) had a higher CV_G than the two crosses involving a BIO mother (i.e. BIO-BIO and BIO-INV), although this trend was not significant (Table 2). For the two other traits (larval survival and survival in starvation), the observed pattern was an increase of CV_G in the hybrid crosses relative to the invasive cross.
This trend was significant, however, only for survival in starvation (P = 0.017; Table 2). Accordingly, survival in starvation is the only trait for which taking into account four specific variance components for the family effect improves the model (Table 1). For larval survival, the CV G of INV-INV was lower than that of the three other crosses. For survival in starvation, the two hybrid crosses had a higher CV G than the two parental crosses. Moreover, if we consider the family mean for this latter trait as an average genotype within a family, we can observe some 'genotypes' in admixed individuals (INV-BIO or BIO-INV) that consistently outperformed both parental genotypes (Fig. 3D). We now deal with two factors, the type of food and sex, which are worth mentioning although they do not directly relate to the comparisons between hybrids and their parents. The type of food had a significant influence on development time, larval survival, survival in quiescent conditions and length (Table 1). Larvae fed with aphids had greater larval survival and a shorter development time than larvae fed with Artemia eggs (SurvLarv = 80% and 65%, DvptTime = 22.01 and 24.02 days for individuals fed with aphids and Artemia eggs, respectively). Individuals fed with Artemia eggs survived better in quiescent conditions than individuals fed with aphids (60% and 39%, respectively), but had a smaller adult body size (6.27 and 6.56 mm for individuals fed with Artemia eggs and aphids, respectively). Sex had a significant effect on survival in starvation and length (Table 1), with females having greater survival in starvation (10.1 days) than males (8.4 days) and a larger body size (6.7 and 6.1 mm for females and males, respectively). The interaction between food source and sex was significant only for length; no other interaction between fixed effects was significant. Finally, we did not find any significant interaction between cross and food or sex.
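The genetic coefficients of variation compared above are conventionally computed as CV G = 100 × √V G divided by the trait mean. The sketch below illustrates the calculation with hypothetical family means, using the between-family variance as a simple stand-in for V G; the study itself estimated variance components with mixed models, so this is an approximation of the idea, not the authors' procedure.

```python
# Genetic coefficient of variation: CV_G = 100 * sqrt(V_G) / trait mean.
# Sketch only: V_G is approximated here by the between-family variance;
# the study estimated variance components from mixed models instead.
import numpy as np

def cv_g(family_means: np.ndarray, trait_mean: float) -> float:
    v_g = np.var(family_means, ddof=1)  # between-family variance as a proxy for V_G
    return 100.0 * np.sqrt(v_g) / trait_mean

# Hypothetical family means for survival in starvation (days):
fams = np.array([8.0, 9.5, 10.2, 11.8, 9.1, 12.4])
print(f"CV_G ~ {cv_g(fams, fams.mean()):.1f}%")
```

Because CV G is scaled by the trait mean, it allows the evolvability of traits measured in different units (days, percentages, millimetres) to be compared.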
Discussion Our study clearly demonstrates that admixture between individuals from the French invasive population and from the flightless biological control strain of the harlequin ladybird could potentially alter the invasion process. The first criterion proposed by Wolfe et al. to evaluate the potential role of intraspecific hybridization in invasion was that populations involved in admixture should be genetically differentiated. Using 18 microsatellites, we found that the two studied populations showed substantial genetic differentiation (F ST = 0.13). This differentiation could at least partly result from the loss of allelic diversity in the biological control population. This result can be explained by the fact that captive populations usually experience strong genetic drift due to a small number of initial founders and small effective population sizes during subsequent generations. With regard to the flightless biological control strain, it is worth noting that low effective size probably also occurred during selection for the flightless phenotype. The second criterion of Wolfe et al. is that there must not be substantial barriers to crossing. Indeed, for H. axyridis, crosses turned out to be possible between the involved populations, at least in laboratory conditions. Our mating experiment, based on trios of one female and two males (one of each population), clearly illustrates that no reproductive barrier has evolved between these two distinct H. axyridis populations, as every cross yielded viable offspring in similar proportions. Moreover, we found that males from the flightless biological control strain sired more offspring whatever the type of female. This result suggests that the cross between wild females and males from the flightless biological control strain might even be favored in nature.
The advantage that males of the flightless biological control strain exhibited might be explained by selection on traits that increase male fitness in captive conditions, a feature already demonstrated in captive populations of several other invertebrates (Sgro & Partridge 2000; Lewis and Thomas 2001). The third criterion of Wolfe et al. is that the admixed individuals should differ from the parental ones in life-history traits in a direction likely to enhance invasion. In the case of H. axyridis, the relevant comparison is between pure invasive individuals and admixed individuals, because individuals of the flightless biological control strain are unlikely to be able to overwinter, and thus to durably establish a sustainable population in natura, due to their flightless phenotype. A first important point is that invasive individuals never significantly outperformed the admixed ones. This result highlights that the use of flightless individuals as biological control agents in the field could potentially enhance invasion by decreasing the Allee effect typical of dispersing individuals founding new populations. Indeed, at the invasion front, population sizes are expected to be low. If recurrent releases of flightless individuals are made near the invasion front, Allee effects would be reduced. A direct comparison of pure invasive females with pure biological control females reveals that the former tend to invest more in reproductive structures. Additional experiments should be performed to understand whether this difference translates into effective fecundity. A second important point is that we found that admixture led to both heterosis and increased genetic variance. Admixed individuals developed more quickly and grew larger. These shifts indicate heterosis. Admixture increased genetic variance for survival in starvation, with the CV G of hybrids significantly exceeding that of their parents for this trait.
While there was no significant shift in the mean value for survival in starvation, some hybrid genotypes consistently outperformed parental ones. Thus, admixture could boost the efficiency of selection in the direction of higher survival under the stressful conditions of starvation (Ellstrand and Schierenbeck 2000; Lee 2002). We will now consider how the changes in development time and body length, and the increased variability for survival in starvation, may affect invasiveness. Shifts in life-history traits due to hybridization/admixture events and associated with higher invasiveness have already been reported (e.g. Lavergne and Molofsky 2007). Several studies have also highlighted that such recombination events often produce an increase in cell volume, body size or seed/juvenile size (see for instance Vila and D'Antonio 1998). In the case of H. axyridis, the observed increase in body size of admixed individuals has the potential to impact the interactions between this species and native coccinellid species by enhancing the dominance of H. axyridis in interspecific competition and intraguild predation. It is worth noting that this increase in adult size does not occur at the expense of a longer development time. On the contrary, admixed individuals grow faster than invasive ones. This shorter development time should enhance population growth rate and hence impact the invasive potential of the species. As mentioned above, H. axyridis diapauses during cooler periods. During the rest of the year, it can complete between two and five generations (Koch 2003), and a shorter generation time could shift that range upward. The third trait impacted by admixture is linked to survival in stress conditions (absence of food). Several studies have pointed out that invasiveness may be associated with higher stress tolerance (see for instance Milne and Abbott 2000). For H. axyridis, an increased ability to survive periods of famine may be especially advantageous when prey populations fluctuate or in areas where prey are at low density.
The three traits for which admixture had an effect are hence likely to be advantageous in the context of invasion. Therefore, if crosses do occur in nature, selection should promote the introgression of genes from the flightless biological control strain into the invasive populations and enhance the invasive potential of H. axyridis. As noted, changes in these traits fall into two different categories: (i) for development time and body length, the shift in trait means provides evidence for heterosis and (ii) for survival in starvation, the difference between hybrids and parents stems from an increase in the genetic variance in hybrids. Predicting the long-term consequences of hybridization/admixture is not an easy task, as they are strongly influenced by the genetic basis of hybrid fitness (Fitzpatrick and Shaffer 2007). Indeed, heterosis effects could be transitory, due for instance to increasing homozygosity in later generations. Hybrids are also known to often express phenotypic breakdown in the F2 generation as a result of recombination disrupting coadapted gene complexes or meiotic problems (Barton and Hewitt 1985; Burke and Arnold 2001). It is hence possible that outbreeding depression might be expressed in future generations of admixed H. axyridis individuals. Our results are based only on an F1 hybrid generation. Additional studies over further generations are hence needed to forecast the long-term consequences of a possible hybridization event. To better understand the evolutionary consequences of admixture between H. axyridis invasive and biological control individuals, both empirical and theoretical studies should be performed. For instance, it would be fruitful to simulate the introgression process through experimental evolution in the lab or in semi-natural conditions over several generations. The impact of the 'flightless' allele on the flying ability of heterozygous individuals should also be tested in experimental wind tunnels or flight mills.
Moreover, it would be interesting to test how the higher reproductive success of the flightless males carries over to admixed individuals. Another direction for future research would be to include in theoretical models the fitness consequences of admixture (with both the changes in traits we measured and the presence of the recessive 'flightless' allele), to better predict the impact of admixture with flightless biological control individuals on the invasion dynamics. We are still at an early stage in understanding how admixture between invasive individuals and biological control ones could affect invasion. Our ongoing study of H. axyridis supports the view that intraspecific hybridization (admixture) potentially alters the evolutionary process by contributing novel genetic advantages to admixed individuals (Lavergne and Molofsky 2007; Schierenbeck and Ellstrand 2009). Finally, our study illustrates a new situation where such admixture can occur, i.e. between invasive and biological control individuals, whereas the situations documented so far correspond to biological invasions resulting from multiple introductions from distinct native-range populations, bringing together genetically differentiated individuals in a common introduced area (Lavergne and Molofsky 2007). Fixed effects: the score of the best model in terms of AICc is displayed in bold. Regarding the model selection concerning random effects for the variable SurvStarv, one can note that the removal of either the 'sex.family' or the 'food.family' random effect did not significantly worsen the fit of the model, while the removal of both effects led to a significantly worse model (LRT = 11.8, P = 0.01). Thus, the best covariance structure was either the model including 'sex.family' and 'family (4 VCs)' or the model including 'food.family' and 'family (4 VCs)', both models including the four variance components for the crosses.
However, the estimates of variance components between the two models were very similar, with, in particular, the same ranking among crosses (results not shown). Therefore, in the following steps of model selection we kept the model including 'food.family' and 'family (4 VCs)' (its deviance value was indeed slightly better; 2774.0 vs. 2777.9). At the end of the model selection process, the best covariance structure had the random effects 'food.family' and 'family (4 VCs)', including a different variance component for each cross. Fixed effects: the score of the best model in terms of AICc is displayed in bold. The best model in terms of AICc is displayed in bold in the table and has cross and sex as fixed effects. However, the evidence for the inclusion of the factor cross was weak (model 'c + s' vs. model 's') and thus, for the sake of parsimony, we used the model 's' for inferences. So the best model is the model with 'family' as random effect. The random effects were kept as 'food.family' and 'family'.
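The model-selection machinery used above (AICc scores for fixed effects, likelihood-ratio tests on nested covariance structures) can be sketched as follows. The log-likelihoods and degrees of freedom below are illustrative assumptions, chosen so that the likelihood-ratio statistic matches the LRT = 11.8 reported above; the deviance of 2774.0 quoted above corresponds to a log-likelihood of -1387.0.

```python
# AICc (small-sample corrected AIC) and a likelihood-ratio test, the two
# tools used for the model selection described above. Values are illustrative.
from scipy.stats import chi2

def aicc(log_lik, k, n):
    """AICc = -2*logLik + 2k + 2k(k+1)/(n-k-1); k parameters, n observations."""
    return -2.0 * log_lik + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

def lrt(log_lik_full, log_lik_reduced, df):
    """Likelihood-ratio test of a reduced (nested) model against a fuller one."""
    stat = 2.0 * (log_lik_full - log_lik_reduced)
    return stat, chi2.sf(stat, df)

# Dropping both random effects at once; the reduced log-likelihood and the
# 3 degrees of freedom are hypothetical but reproduce LRT = 11.8, P ~ 0.01.
stat, p = lrt(log_lik_full=-1387.0, log_lik_reduced=-1392.9, df=3)
print(f"LRT = {stat:.1f}, P = {p:.3f}")
print(f"AICc (10 parameters, 600 observations, hypothetical): {aicc(-1387.0, 10, 600):.1f}")
```

Lower AICc indicates the preferred fixed-effects structure, while the LRT compares nested random-effects structures fitted to the same data.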
Quality Control of Sodium High-Pressure Lamps by the Singular Decomposition Method We created a mathematical model of a sodium high-pressure lamp. This model is used in production before sending lamps to the consumer. An analytical method based on differential equations was used to describe the operation of a sodium lamp. We also used the singular value decomposition algorithm to find the coefficients of the ARMA model, and the transfer function of the ARMA model was obtained. We then tested the models for quality control of sodium lamps in production. The results of the simulation coincide with the experimental results. A graphical dependence was obtained for the case where the standard deviation equals 1. Using a series of tests based on the singular value decomposition method, we confirmed the adequacy of the elaborated model by the Kolmogorov-Smirnov criterion.
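The idea of estimating ARMA-type coefficients through a singular value decomposition can be illustrated with an SVD-based least-squares fit: NumPy's pseudoinverse is computed via an SVD, so solving the lag regression with it is one simple instance of the approach. This is a sketch under assumed parameters (a simulated AR(2) stand-in for lamp data), not the authors' exact algorithm.

```python
# Estimating AR coefficients via SVD-based least squares: np.linalg.pinv
# computes the Moore-Penrose pseudoinverse through an SVD. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable AR(2) process as a stand-in for measured lamp data.
a1, a2 = 0.6, -0.3
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + 0.01 * rng.standard_normal()

# Regress y[t] on its two lags; the pseudoinverse solves the least-squares
# problem through the singular value decomposition of the design matrix.
X = np.column_stack([y[1:-1], y[:-2]])   # lag-1 and lag-2 columns
b = np.linalg.pinv(X) @ y[2:]            # SVD-based least-squares solution
print("estimated AR coefficients:", b)
```

The SVD route is numerically robust when the design matrix is ill-conditioned, which is one reason it is favoured for identifying such models from noisy measurements.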
Equity impacts of interventions to increase physical activity: A health impact assessment Behavioural interventions may increase social inequalities in health. This study aimed to project the equity impact of physical activity interventions that have differential effectiveness across education groups on the long-term health inequalities among older adults in Germany. We created six hypothetical intervention scenarios targeting adults aged 55 years and above: Scenarios #1 to #4 applied realistic intervention effects that varied by education. Under scenario #5, the lower and medium educated group adapted the physical activity pattern of the higher educated. Under scenario #6, all persons increased their physical activity level to the recommended 300 minutes weekly. The number of incident ischemic heart disease, stroke and diabetes cases as well as deaths from all causes was simulated under each of these six intervention scenarios for males and females over a 10-year projection period using the DYNAMO-HIA tool, and compared against a reference scenario with unchanged physical activity pattern. For males, the highest reduction of disease cases and deaths would be achieved under scenario #4 (most effective in higher educated persons), while increasing inequalities between education groups. For females, the highest reduction would be achieved under scenario #3 (most effective in lower educated persons), while decreasing inequalities between education groups. Scenarios #1 to #4 would prevent only a fraction of the disease cases and deaths that would be avoided under scenario #5 or scenario #6. This modelling study shows how the overall population health impact varies, depending on how intervention-induced physical activity changes differ across education groups.
For decision-makers, both the assessment of health impacts overall as well as within a population is relevant, as interventions with the greatest population health gain might be accompanied by an unintended increase in health inequalities. Health impact assessments with a focus on equity are essential for decision-makers. In order to correctly project population health effects, and choose between options of intervention types from a public health perspective, data on subgroup-specific intervention effects are needed.
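The core projection logic of comparing an intervention scenario against a reference scenario by subgroup can be illustrated with a deliberately simple toy model. All population sizes, incidence rates and effect sizes below are hypothetical, and the calculation ignores competing risks and dynamic feedback that the DYNAMO-HIA tool accounts for; it only shows why subgroup-specific effects drive both the total impact and the equity impact.

```python
# Toy projection of disease cases averted under an intervention scenario,
# by education group. All inputs are hypothetical; the study itself used
# the DYNAMO-HIA tool with real epidemiological data.

# population size and annual incidence rate (per person-year) by education
groups = {
    "low":    {"pop": 100_000, "incidence": 0.012},
    "medium": {"pop": 150_000, "incidence": 0.010},
    "high":   {"pop":  80_000, "incidence": 0.008},
}
# hypothetical relative risk reduction: most effective in the higher educated,
# analogous to scenario #4 above
rrr = {"low": 0.05, "medium": 0.08, "high": 0.12}

years = 10
averted = {g: d["pop"] * d["incidence"] * rrr[g] * years
           for g, d in groups.items()}
print({g: round(v) for g, v in averted.items()})
print("total cases averted:", round(sum(averted.values())))
```

Because the largest relative reduction here falls on the group with the lowest baseline incidence, the total health gain coexists with a widening absolute gap between groups, which is exactly the equity trade-off the study quantifies.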
Deep neural networks for anger detection from real life speech data There has been a lot of previous work on deep neural networks for automatic speech recognition; however, little emphasis has been placed on an investigation of effective deep learning architectures for anger detection from speech. In this paper, inspired by state-of-the-art deep learning algorithms, we propose a variant of Deep Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs) with 3×3 kernels, and LSTM RNNs combined with CNNs, in conjunction with log-mel filter bank features and brute-forced low-level descriptors from the standardised ComParE set, for speech anger detection. We extensively evaluate the deep networks on a large real-life speech corpus of 26 970 utterances with utterance-level labels collected from a German voice portal, finding that our proposed neural networks significantly outperform traditional modelling algorithms for speech anger detection.
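The log-mel filter bank front-end mentioned above can be sketched in plain NumPy: frame the waveform, take a power spectrum, pool it with triangular filters spaced on the mel scale, and take the logarithm. The parameter values below (16 kHz audio, 512-point FFT, 26 filters) are common defaults assumed for illustration, not the paper's exact configuration.

```python
# Minimal log-mel filter bank feature extraction, the kind of front-end
# fed to CNN/LSTM models for paralinguistic tasks. Parameters are assumed.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel(signal, sr=16000, n_fft=512, hop=160, n_mels=26):
    # frame the signal and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2          # power spectrum

    # triangular filters equally spaced on the mel scale
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    return np.log(power @ fb.T + 1e-10)                      # (frames, n_mels)

# One second of noise as a stand-in utterance: ~100 frames of 26 features.
feats = log_mel(np.random.default_rng(0).standard_normal(16000))
print(feats.shape)
```

A CNN then treats this (frames × mels) matrix as an image, while an LSTM consumes it frame by frame, which is why the two architectures combine naturally on such features.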
Characteristics of magnetorheological fluids under single and mixed modes Rheological properties of magnetorheological (MR) fluids can be changed by application of external magnetic fields. These dramatic and reversible field-induced rheological changes permit the construction of many novel electromechanical devices having potential utility in the automotive, aerospace, medical and other fields. Vibration control is regarded as one of the most successful engineering applications of magnetorheological devices, most of which have exploited the variable shear, flow or squeeze characteristics of magnetorheological fluids. These fluids may have even greater potential for applications in vibration control if utilised under a mixed-mode operation. This article presents results of an experimental investigation conducted using magnetorheological fluids operated under dynamic squeeze, shear-flow and mixed modes. A special magnetorheological fluid cell comprising a cylinder, which served as a reservoir for the fluid, and a piston was designed and tested under constant input displacement using a high-strength tensile machine for various magnetic field intensities. Under vertical piston motions, the magnetorheological fluid sandwiched between the parallel circular planes of the cell was subjected to compressive and tensile stresses, whereas the fluid contained within the annular gap was subjected to shear flow stresses. The magnetic field required to energise the fluid was provided by a pair of toroidally shaped coils, located symmetrically about the centerline of the piston and cylinder. This arrangement allows individual and simultaneous control of the fluid contained in the circular and cylindrical fluid gaps; consequently, the squeeze mode, shear-flow mode or mixed-mode operation of the fluid could be activated separately. The performance of these fluids was found to depend on the strain direction. 
Additionally, the level of transmitted force was found to improve significantly under mixed-mode operation of the fluid.
Trigeminal evoked potentials in man: a new olfactory stimulation device. The recording of olfactory evoked potentials in healthy humans, using a continuous-flow olfactory stimulator, is described. The stimulator pushed an inert gas (N2) in a continuous flow through the nose at a rate of 4 l/min. At fixed 30-second intervals (32 times), the flow was replaced by an equal amount of CO2, a trigeminal stimulant. Each pulse lasted 200 ms. An electronic timing circuit triggered both the stimulator and the recorder. Signal acquisition was performed using an Evoked Potential Recorder (Nicolet Compact Four by Nicolet Biomedical Instruments), triggered by the stimulator. Using this stimulation device, reliable olfactory evoked potentials can be recorded in a clinical setting. Since this is a non-invasive technique that can be used to test olfactory function whether or not the patient cooperates, it is expected to become widely used, particularly in non-collaborating patients and in those suspected of malingering.
In the past years, a considerable number of primary and secondary prevention programs for eating disorders were developed in German-speaking countries. However, up to now there has been no systematic review of their contents and evaluation studies. The main objective of the present systematic review is to identify and outline German prevention programs for eating disorders. This should facilitate the selection of appropriate and effective interventions for medical experts, other professionals and teachers. A systematic literature search was conducted and 22 German-language primary and secondary prevention programs were identified. Half of them were evaluated. The programs were conducted either in school, on the internet or in a group setting. The findings show that throughout almost all programs a reduction in weight and shape concerns and drive for thinness, as well as an increase of (body) self-esteem, could be observed in either the total sample or the high-risk sample. However, programs were inconsistently effective in reducing disordered eating behavior in the target population. All studies were effective in reducing at least one risk factor. Overall, higher effect sizes were found for secondary prevention programs than for primary prevention programs. Lastly, limitations of the studies and suggestions for future prevention efforts are discussed.
A Long Thin Electrode Is Equivalent to a Short Thick Electrode for Defibrillation in the Right Ventricle We hypothesized that a long thin right ventricular (RV) electrode would have equivalent defibrillation threshold (DFT) performance to a short thick electrode with approximately the same surface area. This could lead to thinner transvenous lead systems, which would be easier to implant. A thin (5.1 French) lead was compared to a standard control (10.7 French). The thin lead had an 8 cm RV electrode length with a surface area of 4.26 cm2. The standard lead had an RV electrode length of 3.7 cm and a surface area of 4.12 cm2. A 140 μF capacitor 65%/65% tilt biphasic defibrillation shock was delivered between the RV electrode and a 14 cm2 subcutaneous patch. DFTs were determined following 10 seconds of fibrillation in 11 dogs by a triple-determination averaging technique. The thin lead had a lower resistance (77.1 ± 27.4 vs 88.9 ± 30.3 Ω, P < 0.001) than did the thick lead. There was no significant difference in stored energy DFTs (9.9 ± 2.5 vs 10.3 ± 2.7 J, P = 0.098 two-sided, P = 0.049 one-sided). This was in spite of the fact that the long thin lead had a portion of its RV coil extending above the tricuspid valve and thus not contributing efficiently to the ventricular gradients in the small dog heart. We conclude that a long thin right ventricular electrode and a standard short thick electrode had equivalent defibrillation performance. This preliminary result should be confirmed in clinical studies, as it could lead to significantly thinner transvenous lead systems.
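With both leads tested in the same 11 dogs, the comparison above is a paired one, and the two-sided versus one-sided P values can be obtained from a paired t-test. The per-animal DFT values below are hypothetical (chosen to give a marginal two-sided result resembling the pattern above), so the numbers illustrate the analysis rather than reproduce it.

```python
# Paired t-test on stored-energy DFTs for the two leads (n = 11 animals).
# The per-animal values are hypothetical; only the analysis pattern is shown.
import numpy as np
from scipy.stats import ttest_rel

thin = np.array([9.0, 10.5, 8.2, 11.1, 9.8, 12.0, 7.9, 10.2, 9.5, 11.3, 9.4])
thick = np.array([9.4, 10.0, 9.4, 11.4, 9.2, 13.0, 8.7, 10.0, 10.4, 11.8, 10.0])

t, p_two_sided = ttest_rel(thin, thick)
# One-sided alternative: the thin lead has a lower DFT than the thick lead.
p_one_sided = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
print(f"t = {t:.2f}, two-sided P = {p_two_sided:.3f}, one-sided P = {p_one_sided:.3f}")
```

The halved one-sided P value crossing 0.05 while the two-sided value does not mirrors the borderline result reported for the stored-energy DFTs.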
Ulnar sensory nerve impairment at the wrist in carpal tunnel syndrome In previous studies, changes in impulse transmission of ulnar motor axons have been documented in patients with carpal tunnel syndrome (CTS). We examined ulnar sensory conduction in 144 CTS hands. In particular, conduction parameters of the dorsal ulnar cutaneous branch (DUC) running outside Guyon's canal were compared with those of the superficial sensory branches (U4 and U5) passing through the canal. U4 and U5 response amplitudes and U5 conduction velocity were significantly lower than in controls. Conduction parameters of the DUC were similar in both groups. Patients with more severely impaired median conduction had smaller ulnar sensory action potentials. We propose that the ulnar nerve may be subject to compression in Guyon's canal as a consequence of high pressure in the carpal tunnel of CTS patients. This may provide insights into the mechanisms underlying extramedian spread of sensory symptoms in CTS patients. Muscle Nerve, 2007
Why New Hybrid Organizations are Formed: Historical Perspectives on Epistemic and Academic Drift By comparing three types of hybrid organizations (18th-century scientific academies, 19th-century institutions of higher vocational education, and 20th-century industrial research institutes), it is the purpose here to answer the question of why new hybrid organizations are continuously formed. Traditionally, and often implicitly, it is assumed that emerging groups of potential knowledge users have their own organizational preferences and demands influencing the setup of new hybrid organizations. By applying the concepts of epistemic and academic drift, it will be argued here, however, that internal organizational dynamics are just as important as changing historical conjunctures in the uses of science when understanding why new hybrid organizations are formed. Only seldom have older hybrid organizations sought to make themselves relevant to new categories of knowledge users as the original ones have been marginalized. Instead, they have tended to accede to ideals supported by traditional academic organizations with higher status in terms of knowledge management, primarily universities. Through this process, demand has been generated for the founding of new hybrid organizations rather than the transformation of existing ones. Although this study focuses on Swedish cases, it is argued that since Sweden strove consistently to implement existing international policy trends during the periods in question, the observations may be generalized to apply to other national and transnational contexts. Introduction The call for users of research to step forward and make their voices heard, both to counter the threat of a technocracy run loose and to keep the otherworldly tendencies of scientists in check, was not a new phenomenon in the 1990s or even the 19th century.
In fact, users have always been an important feature of scientific culture (Hessels and van Lente 2008; Smith 1994; Porter 1995; Brown 2009a). Despite the focus on new roles for science in society, academic disciplines, and even epistemology in the literature on the boundaries of science, this article will show that a discourse revolving around the idea of the user as central to scientific endeavours, either as a rhetorical device or constructed for specific purposes, has been essential for centuries in creating organizations concerned with both the use and generation of new scientific knowledge (Hellström and Jacob 2003). In general, these types of historical organizations can be equated with hybrid organizations, that is, organizations relying on a combination of social practices drawn from the worlds of both science and politics (Miller 2001). 1 A basic observation is that since older hybrid organizations prevail as new ones are introduced, they form historical layers like superposed sediments. 2 By analyzing how these bodies have been created historically, stretching back beyond the Cold War era and even the 19th century, it is the purpose of this article to help to explain why new hybrid organizations have continuously been formed in ever-changing shapes and contexts since at least early modernity. Epistemic and Academic Drift In order to do this, a preliminary discussion regarding two concepts is necessary. The first of these, epistemic drift, will be applied to denote a process by which the criteria scientists use to assess the value of research problems and results, rather than being established entirely through internal protocols such as peer review, are transformed so that scientists tend to place greater weight on the relevance of their research for politically, administratively, or commercially determined goals.
3 As has often been observed, the analytical distinction between external and internal relevance is difficult, if not impossible, to draw and maintain (Shapin 1992). Nevertheless, analyses of the symbolic function of the potential users and uses of research can be exploited in order to characterize the notion of epistemic drift without having to work out the boundaries between external and internal epistemic criteria. 4 (Footnote 1: Compare to hybrid forums as defined by Callon, Lascoumes and Barthe (2009: 18). There are also more specifically defined types of organizations for exchange between scientific expertise and policymakers (Braun 1993; Guston 2000; van der Meulen 2003). On the differences between boundary organizations and intermediary agencies, see Guston. Regarding weaknesses of principal-agent theory, see Morris and Shove. Footnote 2: For a similar analysis regarding post-war Swedish science policies, see Edqvist.) From such a perspective, epistemic drift can be defined as processes by which values from ideological systems external to science, for instance business or policy, are adopted by researchers making them pay increasing attention to the potential uses of their activities and practices. Originally, the concept of epistemic drift was developed to describe a state-driven process of increasing political influence over research agendas in Sweden in the 1970s and 1980s (Elzinga 2010). But using the somewhat broader definition given here, epistemic drift can be applied more extensively to denote any process where interests other than those of scientists influence scientific research, its results and the assessment of those results. These user-induced interests have, of course, transformed over time and thus given rise to demands for new hybrid organizations. But as will be shown here, this only partially accounts for the formation of new hybrid organizations.
In addition to conjunctures in the uses of science, many hybrid organizations themselves follow a path that seems to make room for new ones to appear. In this article, epistemic drift is thus used to denote the process in which researchers and the representatives of other interests interact in order to generate new knowledge and make it more accessible to potential users. More specifically, the focus here is on how these processes may lead to the formation of new hybrid organizations. Of course, the balance between different stakeholders participating in processes of epistemic drift varies from one historical context to another. These stakeholders may also have varying agendas, whether hidden or explicit. As has been pointed out many times, the notions of the exclusivity and the usefulness of knowledge about natural phenomena both have long traditions stretching back to the late Renaissance or earlier (Hannaway 1986;Dear 2005). These two ideal types of ideological underpinnings of science, framed in concepts such as the vita contemplativa and the vita activa, have to varying degrees been the ideals of both the producers and the users of knowledge over the past centuries. The concept of epistemic drift is used here to describe a process leading to an increased focus on potential uses. In this way, it points out the direction of a process, but reveals nothing about its starting point. The concept of epistemic drift is balanced by a second concept, academic drift, conventionally defined as a process entailing an increased valuation and assimilation of academic practices. 5 Traditionally, academic drift has been used to describe and analyze tendencies within vocational education, typically engineering schools of different levels (Harwood 2006). The problem, then, is to pinpoint the meaning and content of academic practice, not an easy task considering the various meanings the term academic has been given throughout modern history. 
It is clear, however, that academic drift is used to describe a situation where institutions for vocational training pursue research and teaching based to a large extent on intellectual education and book learning rather than practice, irrespective of whatever else is implied by the term academic in a given context. Here, science is seen as a superior way of solving problems, while more practice-oriented actors view science as only one tool among others (Harwood 2005). Thus the concept of academic drift is also a rhetorical instrument connecting certain criteria for assessing the value of education to academic traditions. Keeping this in mind, academic drift can be defined as a process by which the practitioners of science pay increasing attention to scholarly procedures and routines, including the search for knowledge for knowledge's sake, while paying less attention to the potential uses of these activities and practices. Note that the scope of academic drift can easily be expanded to include other organizations. As we will see, many different types of hybrid organizations can be viewed as exposed to academic drift, at least when followed over appropriately extended periods of time. By bringing these two concepts together, it is possible to analyze processes that seem to counteract each other. On the one hand, academic drift can be viewed as the result of more or less successful endeavours to normalize an ideal of secluded research, where experimentation rather than experience is the primary mode of observation in labs isolated from the buzz and chatter of the outside world, and where the communication of interpretations and results in research networks for standardized data collection is judged more important than demands for external relevance (Callon, Lascoumes, and Barthe 2009: 37-70). Of course, the outside world is always there inside the lab, not only as noise and disturbance, but also as an inevitable influence and directing force. 
Still, academic drift is the result of more or less successful endeavours to normalize an ideal of seclusion. Epistemic drift, on the other hand, is the result of more or less successful endeavours to normalize an ideal of relevance to the outside world. Here, the mutual engagement of different interests in research networks is seen as a seal of legitimacy, and the results presented are assessed accordingly. Clearly, the concepts of epistemic and academic drift as defined here refer to two opposing and thus mutually exclusive processes, and it could be argued that the use of one of these concepts alongside the observation that the process it denotes is reversible would suffice. Here, both concepts will be used nevertheless, primarily for the sake of clarity, but also to show how they can be used in tandem. Note, though, that while mutually exclusive, these two concepts are not jointly exhaustive. There are other forms of organizational drift that are neither epistemic nor academic. For instance, academic practices at universities can drift towards ideals stressing pedagogical skills at the expense of subject didactics, or transdisciplinarity at the expense of traditional disciplines, both transforming academic values, but in other ways than epistemic drift would imply.

Long-Term Transformations of Hybrid Organizations

The central observation underpinning the argument of this article-indeed, the observation which makes it possible to compare hybrid organizations over extended periods of time, even over the course of centuries-is that outwardly differing historical forms of hybrid organizations such as scientific academies, institutions for higher vocational education, and industrial research institutes have one important feature more or less in common. 
In general, they have all been founded as organizational solutions to a specific perceived problem: that of a great divide disconnecting, on the one side, the researchers who seek seclusion in order to obtain knowledge potentially valuable to users in different areas of economic life, and, on the other side, the users who are believed to be in no position to influence the focus of this research in order to suit their own purposes. In short, scientific academies, institutions for higher vocational education, and industrial research institutes have all more often than not been set up as hybrid organizations in order to bridge a perceived divide between the producers and potential users of knowledge. When studying the circulation of scientific truth-values and trust-values over the span of centuries, continuous epistemic authority (in the traditional meaning of the ability to decisively influence the formation and use of knowledge) often turns out to be an important condition for hybrid organizations to last for those extended time periods needed in order to drift, either academically or epistemically (Pierson 1994). In its turn, epistemic authority is closely related to perceived reliability, relevance, and social robustness, all of which are needed for successful knowledge transfer over boundaries separating the producers from the users of knowledge, and thus for the success of organizational and institutional hybridity (Nowotny 2003; Nowotny, Scott, and Gibbons 2001). In each historical context, however, epistemic authority has been understood differently. In addition, it has been related in different ways to the authority of other societal sectors, such as military power and coercion, religious beliefs, or professional organizations (Brown 2009b). Nevertheless, in all its diverse shapes and forms, epistemic authority is essential for making organizations successful in the transference of knowledge. 
In an environment of changing relations between different types of authorities, this means that hybrid organizations have to be dynamic in order to preserve epistemic authority while simultaneously maintaining social robustness. Sharing these prerequisites, the long-term transformations of hybrid organizations are seldom random, but seem to follow trends best described by the concept of academic drift. This does not imply that academic drift is inevitable for hybrid organizations. On the contrary, as Jonathan Harwood has argued for the case of higher professional education in the fields of agriculture, engineering, medicine, and management, certain features serve to strengthen the tendency for academic drift, features he has used to explain why academic drift has occurred in some institutions and not others. According to Harwood, there are different strategies used by hybrid organizations when seeking epistemic authority. One is to seek recognition within academic hierarchies, either informal and unofficial or state-sanctioned, as was often the case in European settings. Another is to seek recognition within the realm of potential groups of users such as professional organizations. Yet another strategy is to seek access to material resources. In Harwood's analysis, an organization's position in the hierarchy of academic status, its geographic location, and the financial and political implications that these entail determine to a large extent its tendency to drift in one direction or the other. More specifically, organizations only loosely connected to activities in the surrounding region tend to drift academically, as do organizations that have a high or moderate academic status. Conversely, organizations that are well-connected regionally and have a low academic status tend to drift epistemically. Especially interesting in this context is that Harwood's model can be generalized to encompass other types of hybrid organizations. 
Bruce Seely describes another dimension of this dynamic in his study of academic drift in American engineering colleges, where he notes an escalating interest in scientific inquiry and a marked increase in theoretical subjects in course curricula over the first half of the 20th century. By stretching the time frame forward in a later study, he has been able to show that the emphasis of American engineering education oscillated between theory and practice over the course of the century. According to Seely, engineering education was practice-oriented in the early 20th century, but became more focused on research during the mid-century through the influence of European-educated engineers with a more analytical and mathematical approach to the subject. Later, however, the pendulum swung back and there was a renewed interest in more practice-oriented education among American engineers. Seely explains this phenomenon as a result of differences between European and American engineers by pointing out that Americans, from simple technicians to those holding a doctorate, were all educated in the same institutions, while a more heterogeneous educational system for engineers was in place in Europe, where theoretical perspectives had higher status. When European influence over American engineering education peaked in mid-century, the result was thus a leaning towards the theoretical parts of curricula. Taken together, it is clear from the findings of Harwood and Seely that academic drift is not inevitable. Instead, there are clear indications that these processes are driven by an organization's position in a status hierarchy, its geographic location, and its inclination to conform to prevailing perceptions of the relation between practice and theory. It is therefore important to point out that all the hybrid organizations dealt with here are easily recognized as having had strong tendencies towards academic drift according to Harwood's model of institutional dynamics. 
They all had prominent positions in the national hierarchy of academic status and were all located in the national capital. There were, of course, other less renowned hybrid organizations as well. These were less inclined towards academic drift, in accordance with Harwood's model, due to their weak positions in the prevailing status hierarchy. This lack of status corresponded to a lack of the epistemic authority needed to survive as functioning hybrid organizations over extended periods of time. It is equally clear that the engineering colleges used as examples here were part of a heterogeneous system of educational institutions, as Seely argues was the case in Europe in general (Torstendahl 1975). It would therefore be wrong to claim that all hybrid organizations drift academically. They certainly do not. It would, however, accord with the results of Harwood and Seely to claim that hybrid organizations with the necessary epistemic authority and social robustness to survive for centuries tend to drift academically. In the following historical analysis of the creation and drift of a few hybrid organizations stretching from the 18th to the 20th century, the focus will be on Swedish examples, but the conclusions drawn have strong general implications. The reason is that Sweden has throughout its modern history implemented pre-existing policy trends with astounding consistency, making the country a model of the Western world in general (Elzinga 1984; Kaiserfeld 2010). And in terms of national research policies, the country has often served as a mirror for measures taken previously elsewhere. In this, Sweden is not very different from many other small Western countries where problems and solutions in science policy seem to appear almost simultaneously during the past centuries, a phenomenon often referred to as policy convergence (Wittrock 1984; Lemola 2002). 
By examining how and why hybrid organizations were founded in Sweden from the 18th century onwards, as well as by delineating how they drifted after their founding, the purpose here is to propose an answer to the more general problem of why new generations of hybrid organizations continuously supplant one another, an answer that will point to organizational dynamics rather than changing historical conjunctures in the uses of science. The focus of this analysis will primarily be on 18th-century scientific academies, 19th-century institutions for higher vocational education, and 20th-century industrial research institutes. Although this article deals with historical hybrid organizations, there are a number of presently active hybrid organizations engaged in bridging a perceived knowledge producer-user division. One example is supplied by research councils, often used to launch politically initiated programmes to distribute funding to politically selected research problems with more or less explicit demands for the participation of user categories (Jacob 2005). Another important form of hybridity is constituted by the different types of forums for interaction between lay people, researchers, and politicians, materialized in open hearings, citizen panels, etc. (Maasen and Weingart 2005; Callon, Lascoumes, and Barthe 2009). Best known are perhaps the different patient organization movements slowly transforming the relations between medical research, the pharmaceutical industry, medical doctors, and their patients (Landzelius 2006). Whether these and other more recent organizations will drift, and if so which way, is, however, a matter for future analysis, since patterns of epistemic or academic drift are only discernible through long-term historical analysis.

Scientific Academies of the 18th Century

According to the chronicler of 18th-century scientific academies, James E. McClellan III, approximately seventy such academies were established between 1660 and 1793. 
Modelled after the Royal Society of London and the French Academy of Sciences, they formed a collective unity sharing common members and undertaking common projects. Among their common features were legal charters granted by some civil authority (such as a king), systems of self-government set down in written rules, officers and elected fellow members who met regularly, and activities such as prize competitions and published transactions or memoirs. These official academies were complemented by private organizations of a similar character. When surveying the landscape of scientific academies and societies of the 18th century, McClellan noted that academies were more common in countries where absolute monarchies ruled and agriculture dominated economic life, while societies modelled on the Royal Society of London were more often oriented towards industry, trade, and the sea. The academies of Berlin and of the Swedish capital Stockholm were, however, pointed out as more ambiguous from this perspective (McClellan III 1985: 13). Nevertheless, when describing the hierarchy of status prevailing between the different scientific academies and societies of the 18th century, he placed the Royal Swedish Academy of Sciences (Kungliga Vetenskapsakademien) at the top together with those of London, Paris, Berlin, and St. Petersburg (McClellan III 1985: 34). Although the founding of the Royal Swedish Academy of Sciences in 1739 was influenced by all of these international precedents, not least in picking up the thread of Baconian empiricism that tied them together, it proved initially to be very different in its even more marked emphasis on utilitarian and economic goals (Henry 1999). Swedish precursors, the Uppsala-based Collegium Curiosorum formed in 1710 and the Societas Scientiarum founded in the 1720s, had functioned in the same vein, albeit on a more restricted scale (Hildebrand 1939a, b: 88-94; Liljencrantz 1939; Liljencrantz 1940). 
And there would prove to be derivatives as well, most notably the Royal Society of Arts and Sciences (Kungliga Vetenskaps- och Vitterhets-Samhället) formed in 1778 in the commercial port city of Gothenburg (Eriksson 1978). Thus the Royal Swedish Academy of Sciences appeared neither in an international nor a national vacuum. Moreover, the histories of these contemporary societies and academies reveal similar developments over time.

The Royal Swedish Academy of Sciences as a Hybrid Organization

The name Vetenskapsakademien can be interpreted as 'knowledge society', given that vetenskap (roughly, 'science'; compare the German Wissenschaft) had a broader range of meaning than it does today, and given that the word akademi resonated more with the French notion of société than with the 18th-century Swedish meaning of a university or school in general (von Höpken 1739). The name nevertheless continued to connote an alternative type of university concerned with the discovery and propagation of new and useful knowledge, in contrast to the traditional notion of the university in which knowledge was disseminated primarily through the teaching of established curricula. In fact, the Academy was originally proposed to be named the 'Economic Scientific Society' (Oeconomisk Wetenskaps Societet) in accord with the ideological predilections of many of its founding fathers, most notably Carl Linnaeus, who claimed that science was primarily an instrument for economic mercantilism and patriotism. 6 The first paragraph of its rules stated that only the arts and sciences 'possessing real utility for the commonwealth' were to be the subject of the Academy's attention. 7 Already from the start, the Academy published transactions with original articles in Swedish intended to be read in wider circles, a strategy well in line with the principle of utility (Liedman 1989). 
Space was certainly reserved in these transactions for more specialized pieces on topics ranging over the whole fields of natural history and philosophy, but popular texts aimed at a wider audience-often dealing with agricultural topics-dominated throughout the 18th century (Bergström 2000). Thus, the transactions of the Academy were indisputably saturated with research problems and results evaluated on the basis of their relevance to politically and administratively determined goals, making them exponents of epistemic drift. Suggestions to translate works from the publications of the Royal Society and the French Academy were, however, never realized (Hildebrand 1939a, b: 230-31). During its first years the Academy was heterogeneous in its makeup, one historian characterizing it as 'a mixed congregation'. 8 Counts and cabinet ministers collaborated with tax collectors; professors renowned on the Continent exchanged thoughts with apothecaries and accountants. The mix mirrored scientific practice, since learned discourse could seldom be clearly distinguished from the world of commerce and politics (Klein and Spary 2009). Highlighting its heterogeneous character, a contemporary witness described its membership as comprising both 'protectors and protected', with representatives of political and commercial life on one side and natural philosophers on the other. 9 This amalgam of scientific and economic interests was upheld through an election process in which a new candidate nominated by an elected member had to be approved by at least a three-fourths majority of those present (Lindroth I 1967: 12-5). Two and a half years after the Academy's foundation in 1739, its membership had already grown to 64, and five years later it had increased to 94 (Lindroth I 1967: 27-30). By 1818 the number of elected members reached 383, nearly a fifth of whom were aristocrats, landed gentry, high-ranking state officials, and military commanders. 
Almost as large was the group of university professors and teachers. Other occupational groups represented were low-ranking public officials, physicians, and artisans. Throughout this period, the ratio between the different categories remained more or less constant (Lindroth II 1967: 28-9, 75-80, 91).

The Academic Drift of the Academy

The drift of the Academy began in the 1820s, when its heterogeneous character slowly started to dissolve. During the 1820s, 30s, and 40s, university teachers constituted approximately one-third of the new domestic recruits. In the following decade, this percentage increased considerably, so that two-thirds or more of new members were active at Swedish universities (Dahlgren 1915; Lindroth II 1967: 573-5). Changes could also be detected in the published transactions, where articles aimed at a wider audience became less frequent, giving way to those written for specialist readers. One reason was a shortage of articles during the early 19th century, which compelled the editors to refashion the transactions into a more attractive publication for scholars who otherwise preferred to publish their results in international journals (Lindroth II 1967: 76, 123-6). While the Royal Swedish Academy of Sciences started off as a very clear case of a hybrid organization, it certainly became less so during the 19th century due to academic drift, here measured primarily by the changing composition of its body of members and the articles published in its transactions. It is hard to reconstruct the historical factors behind the academic drift of the Royal Swedish Academy of Sciences. One feature to take into account is the election system by which new candidates were nominated and elected by existing ones. Assuming that members had a tendency to vote for newcomers with a background resembling their own, this system was at an unstable equilibrium as long as the different member categories were reasonably proportionate to each other. 
Once one category started to grow, however, it could outnumber the others in a relatively short period of time. The jump from one-third of the Academy's new domestic recruits being university teachers in the 1840s to two-thirds in the 1850s could thus be understood as a result of the expansion of the Swedish system of higher education in the mid-19th century, especially in the Stockholm-Uppsala area, which supplied a growing stock of university teachers to choose members from and shifted the proportions until the numbers of this category tipped over in their favour (Blomqvist 1992). It is important to point out that although these explanations partly rely on local, non-generalizable factors, the expansion of national systems of higher education, and especially of the natural sciences, occurred in many different countries in the mid-19th century, as did the academic drift of scientific academies founded in the 16th and 17th centuries. Already by the 1810s and 20s, however, academy members had started to import into Sweden a new type of hybrid organization they had observed in Berlin. As a result, engineering schools were organized in Stockholm to instruct artisans and managers connected to 'industries grounded on chemical and physical foundations'. 10 This was not the Academy's first attempt to create spin-off hybrid organizations. It had earlier taken over botanical gardens and initiated libraries, 18th-century institutions in which knowledge was managed partly with users in mind. 11 Moreover, the Academy's attempt to import organized engineering training was part of a larger European trend in the 19th century to establish new educational institutions aimed at spreading what was thought to be useful knowledge. Higher vocational education was in vogue all over the Western world, and Sweden was no exception. 
Notably, however, the interest in vocational training in areas such as agriculture, medicine, and technology coincided with the Academy's exposure to academic drift.

19th-Century Institutions of Higher Vocational Education

During the first half of the 19th century, the interest in engineering education went hand in hand with the gradual introduction of chemical and mechanical industries, as well as new methods of transportation such as the steam engine. In fact, as the golden era of Swedish natural history and philosophy began to decline in the 1780s or even earlier, technical and industrial endeavours seem to have become of greater ideological importance (Johannisson 1979-1980; Lindqvist 1989: 121). And as the Academy slowly lost its character as a hybrid organization, institutions for engineering training seem to have been more reasonable candidates for making use of knowledge. In Sweden, as in many other European countries, a number of institutions for vocational education were formed during the 19th century. This process has been analyzed from a wide range of perspectives, having been depicted, for example, as a response to the demands of industrialized society, or as a form of rationalized education developed under the auspices of an expanding state (Day 1987; Artz 1966). The ideological and social underpinnings of educational efforts were of course relevant, since they determined which vocations were deemed worthy of receiving higher educational institutions. But the existence of ideological and social platforms did not guarantee that the resulting organizations remained true to the original arguments and ideals concerning educational practices. In fact, in almost all the differing historical instances of vocational training, a central struggle can be discerned between the proponents of theoretical knowledge generated through scientific methods and their opponents, the practice-oriented seekers of know-how (cf. Gispen 1989; Grattan-Guinness 2005). 
In their recurring struggles over curricular content, institutions for vocational education have served as good examples of hybrid organizations in which practices have been drawn from the worlds of both science and politics.

Higher Vocational Education in Sweden as Hybrid Organizations

The role and use of knowledge were thus central topics in the discussions preceding the establishment of vocational training institutions in different contexts, for example, the Stockholm University College of Physical Education and Sports (Gymnastiska centralinstitutet) in 1813, the Caroline Medical Institute (Kungliga Karolinska medico-kirurgiska institutet) in 1819-22, the Technological Institute (Teknologiska institutet) in 1827, the Chalmers Institute (Chalmerska institutet) in 1829, the Forestry Institute (Skogsinstitutet) in 1828, and the Pharmaceutical Institute (Farmaceutiska institutet) in 1837 (Anon 1913; Johannisson, Nilsson and Qvarsell 2010; Lagerkvist 1999; Henriques 1917; Bodman 1929; Lagerberg 1928; Fries and Zimmerman 1978; Ekström and Danielsson 1987). This list, far from complete, gives only a hint of the 19th-century interest in forming new establishments of this kind, many of which were originally intended for education on lower levels as well. In general, the teachers of these institutions were titled professors, and their backgrounds were, with some variation, in the universities, as holders of doctorates, as teachers, or both. Engineering education in Sweden in the early 19th century was channelled through a number of schools, most notably the specialized School of Mining in Falun beginning in 1822, the Technological Institute in Stockholm in 1827, and the Chalmers Institute established two years later in the commercial city of Gothenburg on the Swedish west coast. 
The formation of these institutions had been preceded by debates on agricultural education in the Diet of the Estates, where the traditional agricultural methods of farmers stood against a more informed scientific way of pursuing farming especially connected to wealthier landowners (Torstendahl 1975: 44-55; Schaffer 1997). In the proposals for agricultural education, practice and theoretically informed knowledge were promoted in order for the teaching to rest on both scientific experiments and theories of agricultural chemistry. In political discussions about the most important type of knowledge for the improvement of agricultural practices, explicit references were made to foreign developments, especially those in Germany (Harwood 2005: 77-80). And eventually, after a number of parliamentary efforts had foundered on the issue of costs, a private school was founded in 1834. The same types of issues can be recognized in the debates on higher technical education held in the Swedish Diet during the 1820s. The key argument for the introduction of publicly financed higher engineering training was that knowledge and reason would raise productivity in industry as well as in agriculture (Henriques 1917: 70-94; Torstendahl 1975: 56-58). Originally, the idea was to teach science to younger people already employed in workshops and elsewhere in order to tie physics and chemistry in particular to practical working life. Education, it was argued, should focus on scientific knowledge that could be used directly by those employed in production in order to raise productivity, which shows that the users and uses of the knowledge disseminated were the centre of attention from the very beginning. Those in favour of a new educational institution pointed out that prominent Swedish scientists had moved abroad instead of having been engaged domestically to raise industrial productivity. 
Simultaneously, they stressed that technical knowledge could not be deduced from the sciences, but that it must rely instead on the scientific systematization of experiences from industry. All these standpoints regarding useful science were opposed, however, by the first vice chancellor of the Technological Institute in Stockholm, Gustaf Magnus Schwartz. Instead, he organized education along the lines of practical experience. The sciences were given a minor role, which soon led to bitter disputes between Schwartz, the board, and the Diet, where the proponents of scientific training raised their voices. These discussions make it clear that the curricula at institutions for higher engineering education were formed by both scientific and political debates. In both arenas, ideological considerations regarding the value of science in engineering, rather than empirical proof, marked the conclusions of the participants. Eventually, Schwartz had to resign as vice chancellor in 1845, opening the door for the introduction of scientific theory and experimental activities in the institute's curriculum. At the Chalmers Institute in Gothenburg, the situation was somewhat different (Torstendahl 1975: 76-82). Here, the first vice chancellor stressed the importance of bringing the sciences into the curriculum from the very opening of the institute, and his appeal was soon put into practice.

The Academic Drift of Higher Vocational Education

Higher engineering education was the subject of rather intense debate in many European countries during the second half of the 19th century (Manegold 1970; Lundgreen 1990; Fox and Guagnini 1993; Runeby 1976; Runeby 1978). In the 20th century, the signs of academic drift in Swedish higher engineering education became more visible, for instance, through an increase of public resources earmarked for experimental research in laboratories located in the engineering schools (Björck 2004: 287-295). 
In 1927, an engineering doctorate was introduced after much debate, and five years later, a programme in engineering physics was founded at the Technological Institute (Björck 1997). Both these novelties were conscious imports from Germany, where they had been introduced some decades earlier. Another sign of academic drift in Swedish engineering schools depending on imports from abroad was the greater importance given to scientific credentials at the expense of industrial experience when appointing professors (Sundin 1981: 80-85; Larsson 1997: 88-101 and 191-212). Thus, despite the fact that the curricula were dominated by scientific and engineering topics, roughly balancing each other from the late 19th century onwards, there were other indications of the drift towards an increased valuation and assimilation of academic practices (Lindqvist 1993). It is important to stress that the arguments used to defend academic drift in Swedish engineering schools never relied on empirical support for the advantage of engaging scientifically trained teachers or expanding scientific subjects in the curriculum. Instead, proponents of both science- and practice-oriented teaching relied on assumptions as well as developments abroad to support their case. The same type of academic drift has been visible in other institutions of higher vocational education-for example, schools devoted to medical and veterinary training-where new authorities in the form of professional societies and organizations influenced developments in much the same way (Gispen 1989; Lundgreen 1990). There were also other types of hybrid organizations towards which the management and production of knowledge gravitated in the 19th century. 
Best known perhaps are museums, which were built in many countries as a way to create and support national identity. Their functions included the collection, maintenance, and display of material; the dissemination of information to the public through exhibitions, tours, and educational activities; and research performed in relation to the collections (Hooper-Greenhill 1992; Knell 2007). Sweden was again no exception, and the best-known example of a Swedish museum functioning as a hybrid organization was the Swedish Museum of Natural History formed in 1831 (Lindman 1916; Broberg 1989; Beckman 1999; Beckman 2004). Like the botanical gardens in the 18th century, the museum relied on collections gathered and exhibited by the Royal Swedish Academy of Sciences.

Industrial Research Institutes of the 20th Century

While institutions of higher vocational education were being exposed to academic drift in 20th-century Sweden, new types of hybrid organizations were contemplated. Again, the pattern had already been established abroad, where publicly and privately co-funded research institutes had been set up beginning in the second half of the 19th century. One important source of inspiration was the Kaiser-Wilhelm-Gesellschaft zur Förderung der Wissenschaften in Germany, which established institutes in different research areas of industrial interest such as chemistry (Johnson 1990). Similar organizations had also been introduced in Great Britain and America by the beginning of the 20th century. Modelled on the German Physikalisch-Technische Reichsanstalt founded in 1887 in Berlin, the British National Physical Laboratory and the National Bureau of Standards in America, founded in 1900 and 1901 respectively, dealt with materials testing as well as standardization issues and the control of scientific instruments (Cahan 1989; Moseley 1978; Pyatt 1983; Pyatt 1984). 
These efforts were intensified during World War I with the founding of the Department of Scientific and Industrial Research in Great Britain in 1916. The same year, the National Research Council was formed by the National Academy of Sciences in the US, largely funded by private foundations and with only loose connections to the federal government. Like the earlier scientific academies and institutions of higher vocational education, Swedish industrial research institutes were formed under the influence of foreign prototypes. In Sweden, the new types of hybrid organizations set up to supply knowledge of interest to different industrial branches materialized primarily as industrial research institutes. An important precursor was the Materials Testing Laboratory (Materialprovningsanstalten), which had first been formed as a branch of the Swedish Steel Producers' Association (Jernkontoret) to test metals and other materials in order to ensure quality and set standards. Towards the end of the 1890s, it was reorganized and put under the auspices of the Technological Institute in Stockholm, by this time renamed the Royal Institute of Technology.

Swedish Industrial Research Institutes as Hybrid Organizations

The first regular industrial research institute in Sweden, however, was the Wood Pulp Research Association (Pappersmassekontoret), formed in 1917 by companies in the pulp business, which contributed in proportion to their respective production. The owners thus commissioned the research (Sundin 1981: 19; Björck 2004: 221-5). The ongoing World War I was an important factor in its creation, but the economic crisis following the War in the early 1920s, together with a lack of serviceable results, put an end to the association in 1922. That year, the Swedish Institute for Metals Research (Metallografiska institutet) was inaugurated as the result of a collection held by Stockholm University College (Stockholms högskola) and the Swedish Steel Producers' Association.
The state participated as well by supplying housing and an annual allocation of money for the running of these institutes. The boom in international steel production, which had increased nearly a hundredfold between 1870 and 1910, as well as the expansion of domestic Swedish steel production, paved the way for the foundation of the Institute (Sundin 1981: 163-85; Sundin 1992). After only a few years, however, accountants complained that much of the turnover came from gifts rather than from contributions from the steel industry, thereby implying that the research at the Institute lacked relevance for its financiers. In the early 1930s, the Institute was reorganized after an initiative taken by the board, and in 1935 the director left his position after complaints from the board that too much of the Institute's work was focused on basic research rather than on the running of steel works. It was no coincidence that these two research institutes, for wood pulp and metals respectively, represented the two most important branches of Swedish industry, at least when ranked according to export value. These branches carried the weight needed to create epistemic drift strong enough to lead to the foundation of formal organizations. But it was the planning of a third institute during World War I that caused a more long-lasting change in the institutional landscape of conveying publicly funded knowledge of industrial relevance. This was to be an institute for power and fuel research, whose formation was motivated not by the perceived status of energy as a profitable industry in its own right, but as a response to the problem of finding domestic supplies to meet the energy needs of Swedish industry in general. This problem was addressed by a number of representatives of government and industry, resulting in several public investigations and reports regarding such an institute.
The outcome was the Royal Swedish Academy of Engineering Sciences (Kungliga Ingenjörsvetenskapsakademien), an organization housing several smaller institutes for consultative research and commissions (Sundin 1981; Peterson 1990; Brissman 2008). During the interwar period, the Royal Swedish Academy of Engineering Sciences received substantial public as well as private funding for research on energy and building technologies. The financial backbone was the annual public contribution of between SEK 100,000 and 200,000 (today approximately corresponding to EUR 192,000 and EUR 384,000) for fuel and power research, a handsome sum considering that the total public allocation to the Royal Institute of Technology was SEK 360,000 (EUR 690,000) in 1919 (Sundin 1981: 98-106).12 Fuel and power research was partly established through the formation of no fewer than three different research institutes in the 1920s and early 1930s, one for electrical heating, one for coal, and one for steam heating, all co-financed by public funding and private industry (Liander 1970; Stålhane 1970; Stenberg 1970; Cederquist 1970; cf. Hörlin 1944). In 1929, the Concrete Laboratory (Cementlaboratoriet) was formed, and throughout the 1930s additional committees and commissions (e.g. for welding and corrosion research) were set up to assist technical areas in need of support (Giertz-Hedström 1970). In addition, the Academy of Engineering Sciences spent SEK 230,000 (EUR 441,000) funding approximately one hundred different studies during the 1920s, a sum that increased gradually so that about SEK 100,000 (EUR 192,000) was paid out annually towards the end of the 1930s (Ljungberg 1986: 36). These sums were, however, far from the SEK 400,000 (EUR 767,000) that the Academy had hoped to distribute annually, and lack of financial resources was a constant problem for the Academy throughout the interwar period (Sundin 1981: 128).
The standard toolkit for establishing research institutes early on included joint financial contributions from the industrial and public sectors, that is, private capital as well as tax revenue in one form or another. The successful establishment of research institutes therefore often relied on intense networking on the part of both academics and industrialists; the most important single organization promoting the formation of research institutes in the interwar period was the Academy of Engineering Sciences. The hybridity of early industrial research institutes in Sweden was mirrored in the fact that their first directors all had backgrounds in the Materials Testing Laboratory, where their interest in technical as well as scientific problems had been formed (Sundin 1981: 204-6). They all embraced what Eda Kranakis has called 'hybrid careers'. As a result, the traditional academic view of science had to be accepted side by side with an ethos of utility. The industrial research institutes became a third arena, in addition to the earlier formed scientific academies and institutions of higher vocational education, where representatives of these two ideals of knowledge production could meet (Holmberg 2005; Holmberg 2010). But interaction between the spheres of academia and industry could also lead to failed efforts. One area in which the Academy of Engineering Sciences attempted, rather unsuccessfully, to establish research was that of rationalization, especially as it related to the organization of industrial work processes and the analysis of working conditions in order to improve efficiency. Between the founding of the Academy in 1919 and the mid-1920s, efforts were made to establish a psychotechnical institute.

12. The conversion of historical currency into present Euros relies on consumer price indices in Edvinsson and Söderberg, Table A8.1, as well as the exchange rate between SEK and EUR, averaging 8.87 in January 2012.
It proved difficult to secure sufficient funding, however, and the initiative was eventually abandoned, mainly due to a lack of interest from the industrial sector (De Geer 1978: 117-58). Thus, irrespective of the tendency for epistemic drift prevailing among psychologists and other academics, the response of industry was too weak to result in a research institute in this case. Instead, the companies involved seem to have been satisfied with the existing methods of rationalization imported from abroad. In one way, the failure to create a psychotechnical institute in the 1920s was an exception. Prior to 1919, when the Academy of Engineering Sciences appeared as an important hybrid organization, institutes and associations had been established exclusively in areas where revenue was large enough to support the type of uncertain, long-term investment that research often entailed. The Academy, however, seems to have opened greater possibilities for funding the analysis of technical problems of broader social interest. In short, the first half of the 20th century saw an intensification in the creation of industrial research institutes financed jointly by government and private interests and coordinated by the Royal Swedish Academy of Engineering Sciences (Weinberger 1997: 42-5). In most cases, institutes and laboratories were formed to serve branches of industry where there was no distinguishable government agency acting as a major customer, typically branches related to natural resources such as pulp and ore. Regarding other technical problems of a broader scope, the establishment of the Academy of Engineering Sciences increased the possibilities of research funding, at least on a smaller scale. It should be clear, then, that industrial research institutes qualify well as hybrid organizations, given their reliance on combinations of social practices drawn from the worlds of science and politics.
It should be equally clear that the intended beneficiaries of these organizations were primarily industrial enterprises as well as branch organizations and their supporters, including unions and political parties. These beneficiaries influenced research problems as well as their solutions, and in doing so initiated and strengthened the process of epistemic drift. In university departments, however, the situation could differ. The historian of science Sven Widmalm (2004, 2008), for example, has shown how university-based research groups led by well-known Swedish Nobel laureates such as the chemists Theodor Svedberg and Arne Tiselius managed to balance funding from industry with academic freedom when studied over shorter time spans (a few decades rather than a century or more). His analyses of the network-building activities of Nobel laureates demonstrate that industrial funding did not necessarily hinder the free selection of research topics, and thus did not imply an unconditional epistemic drift.

The Academic Drift of Swedish Industrial Research Institutes

When compared over longer time spans, moreover, it is obvious that industrial research institutes focused more on research activities than on the dissemination of knowledge through meetings, teaching, and publications in the vernacular, as had been the focus of the scientific academies and institutions of higher vocational education. It is likewise apparent that this knowledge had to be both useful and accessible for the industrial branches with a financial interest in the institutes. Otherwise, the institutes could be dissolved or at least reorganized, as the examples of the institutes for wood pulp and metals research demonstrate. Industrial research institutes did, however, exhibit a tendency for academic drift, the Swedish Institute for Metals Research serving as one early case in point.
During and after World War II, the history of industrial research institutes became more tightly interwoven with the development of higher engineering education, leading to a substantial expansion of research resources for these institutes. The background was a public investigation into the possibility of establishing a national research policy in which the Swedish state would shoulder more financial and administrative responsibility for research activities beneficial to trade and industry (Nybom 1997: 45-52). When the different organizational alternatives were reviewed in the early 1940s, the establishment of a national industrial research institute was highlighted as one of a number of feasible possibilities. The idea was abandoned, however, owing to the argument that the notion of a research institute separate from the education sector was outmoded, and that the existing institutions for higher engineering education should instead become more research intensive. The result was the introduction of research councils assigned to finance research in different areas by approving project applications from institutions for higher vocational education, universities, and other interested parties.

Conclusions

By comparing the creation and development of three types of hybrid organizations founded in the 18th, 19th, and 20th centuries, this article has sought to answer the question of why new hybrid organizations are continuously formed. A few clues to the solution of this problem can now be formulated. Firstly, the foundation of new hybrid organizations seems to be the result of epistemic drift, that is, the valuation of research problems and results according to their relevance to politically and administratively determined goals, goals often created with different categories of knowledge users in mind, or through the initiative of the potential users themselves.
When, for instance, different scientific academies were formed in the 16th and 17th centuries in Sweden and elsewhere, it was a way to put the natural sciences to use for national economic interests along utilitarian lines of thought, something many believed the conservative universities had failed to do. The establishment of higher vocational education in the 19th century must also be seen as the result of epistemic drift. The best indication of this is that institutions for higher vocational training in Sweden and elsewhere were exclusively set up in areas where there was support external to the traditional sciences, for instance in engineering, physical education, medicine, dentistry, veterinary medicine, pharmaceutics, agriculture, and forestry. Nevertheless, these new institutions were to a varying degree populated by academics from the universities, who were thus promoters of epistemic drift. Secondly, hybrid organizations, at least those with a position in a status hierarchy, a geographic location, and an inclination to conform to prevailing perceptions of the relation between practice and theory according to Harwood's model of institutional dynamics, tend to be exposed to academic drift, so that the individuals involved and the value systems they embrace become increasingly similar to those found in universities. In all the examples recounted here, however, this process has been very slow, not detectable until after several decades or even centuries. In addition, hybrid organizations are not destined to drift academically; the process has to be documented in each separate case. Thirdly, since older hybrid organizations often survive as new ones are introduced, they form historical layers like superposed sediments.
From an international perspective, the most obvious indication of this is, of course, that there are very few cases of hybrid organizations of the types discussed here having been abolished once they acquired some measure of recognition (Hallonsten and Heinze 2012). Such organizations do exist, to be sure, but they have generally been short-lived experiments that never got off the ground rather than long-lasting organizations with the potential to drift. To reach these conclusions, I have accounted for the relevant generalizable international developments, namely the wave of scientific academies formed in Europe in the 17th and 18th centuries, the equally distinct trend of institutions of higher vocational education in the 19th century, and the somewhat less marked movement of industrial research institutes of the 20th century, as well as the different historical non-generalizable processes these international tendencies led to in Sweden. Each case of hybrid organizations thus demonstrates the same type of causal chain, in which generalizable international developments led to non-generalizable national processes showing the nature of epistemic and academic drift in Swedish hybrid organizations. Noting, however, that Sweden consistently strove to implement existing international policy trends during the periods in question, not least marked by the recurring international influences on the historical processes reviewed here, I claim that the observations made regarding the dynamics of epistemic and academic drift in Sweden are generalizable. They can be applied to other similar contexts where ambitions to follow international trends dominate, together with a willingness to let these trends influence local and national processes, resulting in policy convergence.
This claim also holds when taking into account some Swedish peculiarities in research policy resulting from, among other things, the Nobel Foundation and its prizes established by the turn of the 20th century, which led to the formation of research institutes during the first half of the 20th century and strengthened the connections between labour and capital (Crawford 1984; Friedman 2001; Gribbe, Lundin, and Stenlås 2010). Of course, exceptions to these general observations come to light when developments in different countries are compared in detail. For instance, the American institutional system of knowledge dissemination in the agricultural sciences was built up so that educational institutions appeared before research facilities, which in turn appeared before information dissemination through academic journals and other forms of printed media (Cash 2001). In Sweden, the same sector demonstrates a different institutionalization process, in which journals and congregations for dissemination established in the 18th century were followed by organizations for training in the first half of the 19th century and research establishments later in the same century (Edling 2003). Such differences are undoubtedly important, yet any attempt to explain why new hybrid organizations are formed is bound to rely on generalizations. Here, one such generalization is that epistemic drift characterizes the foundation of different types of hybrid organizations; another is that academic drift characterizes the historical development of some of them, and occasionally ensures their survival after their initial purpose has been abandoned or forgotten. With all these reservations in mind, an answer to the problem of the formation of new hybrid organizations can be proposed. The process of academic drift has often entailed a gradual marginalization of the knowledge users whom the different historical hybrid organizations had originally been formed to serve.
Only seldom have hybrid organizations sought to make themselves relevant to new categories of knowledge users as the original ones have been marginalized. Instead, they have tended to accede to ideals supported by traditional academic organizations with higher status in terms of knowledge management, primarily universities. Through this process, in which older hybrid organizations tend to gradually turn their focus away from the original users, demand has been generated for the founding of new hybrid organizations. Note that this answer points to organizational dynamics rather than changing historical conjunctures in the uses of science. At the same time, the hybrid organizations analyzed here were not founded with indifference to the organizational ideals dominating their respective times of foundation. Instead, they have all responded to differing notions of the most efficient way to make knowledge relevant: in the 18th century, heterogeneous congregations supported by members of networks stretching from universities through commerce and into politics; in the 19th century, vocational education supported by professional organizations and the state; and in the 20th century, industrial research institutes focusing on knowledge production supported by scientists and engineers pursuing hybrid careers.
Effect of tongue cleansing on morning oral malodour in periodontally healthy individuals. PURPOSE The aim of this randomised, single-blind, cross-over trial was to assess the effect of tongue cleansing on morning oral malodour in periodontally healthy subjects. MATERIALS AND METHODS Ten systemically healthy non-smoking subjects (6 males, 4 females), 24-38 years of age, completed two 4-day periods of oral hygiene cessation with a 7-day wash-out period. In one of these test periods, subjects were instructed to clean their tongues with a tongue scraper 2-3 times a day. Participants presented with at least 20 teeth, without cavities, overhanging restorations/prostheses or periodontitis, and had no history of previous periodontal therapy or use of antibiotics in the 3 months prior to the study. Volatile sulphur compounds (VSC; Interscan Halimeter) and organoleptic scores were measured in exhaled mouth air once a day, early in the morning, by one examiner. Comparisons were performed using Wilcoxon's signed rank test and Friedman's test (alpha = 0.05). RESULTS VSC levels at baseline were 206.3 ppb (SD 139.8) and 191.4 ppb (SD 127.7) for the periods of usage and non-usage of the scraper, respectively (p > 0.05). VSC levels did not change significantly during the 4 days, independent of tongue cleansing (Friedman, p > 0.05). Only at day 3 did the use of the tongue scraper lead to a significantly lower level of VSC compared with controls (131.1 ppb and 199.3 ppb, respectively). No significant differences in organoleptic scores were observed between groups at baseline. During the whole experimental period, there were also no significant changes in organoleptic scores when individuals used or did not use the tongue scraper. CONCLUSION Tongue cleansing with a scraper was unable to prevent morning oral malodour in the absence of tooth cleaning in periodontally healthy individuals.
Outcome measurement of extensive implementation of antimicrobial stewardship in patients receiving intravenous antibiotics in a Japanese university hospital Background Antimicrobial stewardship has not always prevailed in a wide variety of medical institutions in Japan. Methods The infection control team was involved in the review of the individual use of antibiotics in all inpatients (6348 and 6507 patients/year during the first and second annual interventions, respectively) receiving intravenous antibiotics, according to the published guidelines, consultation with physicians before prescription of antimicrobial agents and the organisation of an education programme on infection control for all medical staff. The outcomes of extensive implementation of antimicrobial stewardship were evaluated from the standpoint of antimicrobial use density, treatment duration, duration of hospital stay, occurrence of antimicrobial-resistant bacteria and medical expenses. Results Prolonged use of antibiotics over 2 weeks was significantly reduced after active implementation of antimicrobial stewardship (2.9% vs. 5.2%, p < 0.001). A significant reduction in antimicrobial consumption was observed for second-generation cephalosporins (p = 0.03), carbapenems (p = 0.003) and aminoglycosides (p < 0.001), leading to a reduction in the cost of antibiotics of 11.7%. The occurrence of methicillin-resistant Staphylococcus aureus and the proportion of Serratia marcescens among Gram-negative bacteria decreased significantly, from 47.6% to 39.5% (p = 0.026) and from 3.7% to 2.0% (p = 0.026), respectively. Moreover, the mean hospital stay was shortened by 2.9 days after active implementation of antimicrobial stewardship. Conclusion Extensive implementation of antimicrobial stewardship led to a decrease in the inappropriate use of antibiotics, savings in medical expenses, a reduction in the development of antimicrobial resistance and a shortening of hospital stay.
Introduction

Antimicrobial resistance is becoming one of the major problems associated with the use of antibiotics worldwide. It has been demonstrated that inappropriate use of antibiotics is the predominant factor driving antimicrobial resistance. Therefore, it is important to prevent or minimise the occurrence of antimicrobial-resistant bacteria. It has been reported that inappropriate use of antibiotics in hospitals ranges from 26% to 57%. The 12-Step Campaign to Prevent Antimicrobial Resistance Among Hospitalized Adults was established by the Centers for Disease Control and Prevention (CDC); among its steps, the withdrawal of inappropriate antibiotics is considered effective in preventing antimicrobial resistance. Antimicrobial stewardship programmes are known to promote appropriate use of antibiotics. The Infectious Diseases Society of America (IDSA)/Society for Healthcare Epidemiology of America (SHEA) guidelines recommend two core proactive evidence-based strategies for the promotion of antimicrobial stewardship: 'formulary restriction and preauthorization' and 'prospective audit with intervention and feedback'. The goal of promoting appropriate use of antibiotics is to improve clinical outcomes by reducing the emergence of drug resistance and minimising drug-related adverse events. Furthermore, it has been shown that implementation of antimicrobial stewardship programmes leads to a reduction in the duration of hospital stay and savings in medical expenses.
What's known

Antimicrobial stewardship programmes are known to promote appropriate use of antibiotics, but antimicrobial stewardship has not always prevailed in a wide variety of medical institutions in Japan.
What's new

Antimicrobial stewardship intervention was found to be effective in reducing the inappropriate use of antibiotics, shortening hospital stay, reducing the MRSA ratio and saving medical expenses in a Japanese hospital. Frequent monitoring resulted in an increase in the frequency of recommendations by the ICT, a reduction in antibiotic consumption, and further shortening of antibiotic therapy and hospital stay. These findings support the importance of the day 3 bundle.

However, such programmes have not always been carried out in a number of medical institutions, where the work of the infection control team (ICT) is confined to formulary restriction and pre-authorisation for a few specified antibiotics, such as carbapenems and antimicrobial agents against methicillin-resistant Staphylococcus aureus (MRSA). In our hospital, we have carried out an extensive intervention programme to optimise antibiotic use since August 2009. The ICT members, including a physician, a clinical pharmacist, a medical technologist and a nurse well trained in infection control, have been involved in the preparation and implementation of the antimicrobial programme. A clinical pharmacist and a physician are mainly in charge of the daily review of all prescriptions for inpatients receiving intravenous antimicrobials from the viewpoint of appropriateness based on the published guidelines. The aim of the present study was to evaluate the outcomes of the extensive implementation of antimicrobial stewardship in terms of the number of inappropriate uses, the rate of antimicrobial resistance and medical expenses after implementation of the programme.

Ethics statement

The present study was carried out in accordance with the guidelines for care in human studies adopted by the ethics committee of the Gifu Graduate School of Medicine and notified by the Japanese government (approval No. 23-175 of the institutional review board).
Study design

Our hospital is a national university hospital containing 606 beds. The ICT in our hospital consisted of an infection control doctor, a pharmacist certified as a board-certified infection control pharmacy specialist, a nurse and a microbiological technologist, and has been extensively involved in the implementation of antimicrobial stewardship for all inpatients receiving antibiotic injections since August 2009. Physicians were all informed of the antimicrobial stewardship programme by ICT members when they prescribed antibiotic injections. The roles of the ICT included a review of antimicrobial orders with respect to usage, dose, isolated pathogens and site of infection for all inpatients receiving parenteral antibiotics, and consultation with physicians before prescription of antibiotics. The review was carried out when the antibiotic injections were prescribed. Patients receiving carbapenems or anti-MRSA agents were reviewed twice a week to facilitate de-escalation therapy. When an inappropriate use of antibiotics was found, ICT members made immediate contact with the prescribers over the telephone (Figure 1). Figure 2 shows an example of a care decision on the appropriate use of antimicrobial agents using electronic medical chart information. Unless otherwise indicated, the duration of antimicrobial administration was limited to 2 weeks; for patients receiving intravenous antibiotics for a period exceeding 2 weeks, a caution message was posted by the ICT members on the electronic medical chart, as shown in Figure 2. However, prolonged use of antibiotic injections over 2 weeks was not regarded as inappropriate for patients with septic arthritis (2-4 weeks), infective endocarditis (4-6 weeks), lung abscess (4-6 weeks) or osteomyelitis (6 weeks). When the message suggesting discontinuation was not accepted, ICT members asked the prescriber to stop or change the antibiotics.
The appropriateness of antimicrobial use was decided according to the published guidelines, mainly the Sanford Guide to Antimicrobial Therapy. The appropriateness of treatment duration was also evaluated according to the Sanford Guide, and the duration was additionally evaluated for each patient by the infection control doctor and clinical pharmacist. The ICT is also responsible for organising an education programme for all medical staff, held twice a year, on the topics of hand hygiene in healthcare settings and antimicrobial therapy, including the selection of antibiotics, dosage, treatment duration, the 3-day rule and examples of the inappropriate use of antibiotics. The ICT also provided printed information monthly to all medical staff about infection control. Moreover, a physician and a pharmacist are always ready to reply to inquiries from prescribers about antimicrobial therapy before prescription using mobile phones. Since August 2010, all inpatients receiving intravenous antibiotics have been reviewed more than twice a week to enhance the appropriate use of antibiotics, according to the day 3 bundle. Furthermore, when antimicrobial injection was started without bacterial culture, the ICT members contacted the prescribers to request bacterial culture (active intervention period). Data were extracted from electronic medical records kept in a central database in our hospital and compared before (period 1; 1 August 2008 to 31 July 2009) and after (period 2, initial intervention; 1 August 2009 to 31 July 2010; period 3, active intervention; 1 August 2010 to 31 July 2011) extensive implementation of the antimicrobial stewardship programme.

Outcomes

The use of antibiotics was converted into defined daily doses (DDDs) per 1000 patient-days, according to the World Health Organization (WHO) guidelines for anatomical therapeutic chemical (ATC) classification and DDD assignment. Only expenditure on antimicrobial injections was analysed.
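As a minimal illustration of the DDD normalisation described above, antimicrobial use density can be computed from total consumption in grams, the WHO-assigned DDD for the agent, and the number of patient-days. The drug names, DDD values and consumption figures below are hypothetical examples for the sake of the sketch, not data from this study.

```python
# Sketch of the WHO ATC/DDD normalisation described above.
# Drug names, DDD values and usage figures are illustrative only.

# WHO-defined daily dose (grams/day) per agent (illustrative values).
WHO_DDD_G = {"meropenem": 3.0, "cefazolin": 3.0}

def ddd_per_1000_patient_days(grams_used, ddd_grams, patient_days):
    """Convert total grams consumed into DDDs per 1000 patient-days."""
    return grams_used / ddd_grams / patient_days * 1000.0

# Example: 1500 g of a carbapenem used over 200,000 patient-days.
density = ddd_per_1000_patient_days(1500.0, WHO_DDD_G["meropenem"], 200_000)
print(round(density, 2))  # 2.5 DDDs per 1000 patient-days
```

Normalising by DDDs rather than raw grams or vials is what allows consumption of different agents, and of the same agent across periods with different bed occupancy, to be compared on one scale.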
Prolonged use was defined as continuous use of intravenous antibiotics for over 2 weeks, and served as the indicator of shortening of treatment duration. The duration of hospital stay was determined from Kaplan-Meier plots, and the median hospital stay was compared before and after implementation of antimicrobial stewardship using the Mantel-Cox log-rank test.

Figure 2: An example of a care decision on the appropriate use of antimicrobial agents using electronic medical chart information, and the cautionary message to prescribers on the electronic medical chart system. Appropriateness of antibiotic selection, usage and dosage was determined using microbiological laboratory results, site of infection, renal function and serum drug concentration, all obtained from the electronic medical chart. For patients receiving intravenous antibiotics for long periods exceeding 2 weeks, a cautionary message was posted by the ICT member.

Savings in medical expenses were estimated from the difference in the mean duration of hospital stay before and after intervention, the diagnosis-procedure combination (DPC) unit charge for hospital stay (40% of the mean unit charge), and the number of patients receiving antibiotic injections. An exchange rate of 77.0 Japanese yen per US dollar was used.

Data analysis
Data were analysed using SPSS version 11 (SPSS Inc., Chicago, IL). Parametric variables were analysed using the t-test, while non-parametric variables were analysed by the Mann-Whitney U-test or χ² test. A p-value of <0.05 was considered statistically significant.

Patient demographics
The patient demographics are shown in Table 1. The annual numbers of patients receiving intravenous antibiotics were 6251, 6348 and 6507 before implementation (period 1) and after implementation of antimicrobial stewardship (periods 2 and 3), respectively.
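The Kaplan-Meier product-limit estimate behind the hospital-stay comparison can be sketched in a few lines. This is a minimal illustration on toy data, not the study's SPSS analysis; here an "event" would be discharge, and the median is the first time the survival curve drops to 0.5 or below:

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Return (time, survival probability) pairs at each distinct event time.
    events: 1 = event observed (e.g. discharge), 0 = censored."""
    data = sorted(zip(times, events))
    s, curve, at_risk = 1.0, [], len(data)
    for t, grp in groupby(data, key=lambda x: x[0]):
        grp = list(grp)
        d = sum(e for _, e in grp)          # events at time t
        if d:
            s *= 1 - d / at_risk            # product-limit step
            curve.append((t, s))
        at_risk -= len(grp)                 # remove events and censored cases
    return curve

def median_survival(curve):
    """First time at which the survival curve falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None

# toy stays (days), all uncensored
demo = kaplan_meier([3, 5, 5, 7, 9, 12, 14], [1] * 7)
print(median_survival(demo))  # 7
```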
Although there was no significant difference in gender, slight but significant differences were noted in age and in the rate of surgical operations performed.

Inappropriate antibiotic use
After implementation of antimicrobial stewardship, a number of inquiries about antimicrobial therapy were made by physicians before prescription (40-50 per month). Under these conditions, the ICT members detected 102 cases of inappropriate antibiotic use during the initial intervention (period 2). The number of inappropriate uses increased to 200 cases during the active intervention (period 3), in which the frequency of review of antimicrobial injections also increased. The categories of inappropriate use are shown in Figure 3. In these cases, ICT members made proposals on the appropriate use of antibiotics; 93 (91%) of 102 proposals in period 2 and 186 (93%) of 200 proposals in period 3 were accepted, and the prescriptions were improved.

Antimicrobial consumption and treatment duration
Prolonged use of antibiotics exceeding 2 weeks was significantly (p = 0.007) decreased during period 2, from 5.2% to 4.1%, compared with period 1 (Figure 4). The rate of prolonged use of antibiotics was further lowered to 2.9% during period 3 (p < 0.001 vs. period 1). There was no significant difference in total antimicrobial consumption between period 1 and period 2; however, consumption of some antibiotics, including second-generation cephalosporins (p = 0.03), carbapenems (p = 0.003) and aminoglycosides (p < 0.001), was significantly reduced during period 3 compared with period 1 (Table 1). As a consequence, total antimicrobial consumption during period 3 was significantly lower than that during period 1 (p = 0.003).

Changes in occurrence of antimicrobial-resistant bacteria
The proportion of MRSA among total isolated S. aureus significantly decreased after intervention, from 47.6% (period 1) to 39.5% (period 3) (p = 0.026) (Table 1).
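The drop in prolonged use (5.2% of 6251 patients in period 1 vs. 4.1% of 6348 in period 2) can be checked with a two-proportion test. The sketch below uses a pooled z-test under the normal approximation, with counts reconstructed from the reported rates; it is not the authors' exact procedure, so the p-value is only expected to be in the same range as the reported p = 0.007:

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for comparing two proportions (pooled z-test,
    normal approximation, no continuity correction)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # two-sided tail probability from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# counts reconstructed from the reported rates (an approximation)
p = two_proportion_p(round(0.052 * 6251), 6251, round(0.041 * 6348), 6348)
```

With these reconstructed counts the p-value lands well below 0.01, consistent with the reported significance.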
Among patients from whom Gram-negative rods (GNR) were isolated, the proportion of Serratia marcescens was significantly reduced during period 3 compared with period 1 (p = 0.026). In addition, a slight but non-significant decrease in the occurrence of Pseudomonas aeruginosa resistant to ceftazidime and piperacillin was observed after implementation of antimicrobial stewardship. However, the rates of resistance to imipenem/cilastatin and levofloxacin did not change after implementation of antimicrobial stewardship.

Duration of hospital stay
As shown in Figure 5A, Kaplan-Meier plots indicated that the median length of hospital stay was significantly shortened from 12.0 days (interquartile range: 7-23 days) during period 1 to 11.0 days (6-21 days) during period 2 (p = 0.0005 by log-rank test) and 11.0 days (6-20 days) during period 3 (p < 0.0001 vs. period 1). The mean length of hospital stay in patients receiving antibiotic injections is shown in Figure 5B.

Savings in the cost of antimicrobial injections and medical expenses
The annual cost of antibiotic injections was reduced from US$2.02 million (period 1) to US$2.00 million during period 2 and US$1.86 million during period 3 (Table 2). The costs of antimicrobial injections per patient were US$324 (period 1), US$315 (period 2) and US$286 (period 3), corresponding to savings of 2.8% (US$9 per patient) during period 2 and 11.7% (US$38 per patient) during period 3. The annual savings in antimicrobial cost were therefore estimated to be US$0.058 million during period 2 and US$0.247 million during period 3. The reduction in hospital stay (1.0 day in periods 2 and 3) was considered to yield considerable savings in medical expenses, estimated at US$1.95 million in period 2 and US$3.92 million in period 3, calculated from the DPC mean unit charge for hospital stay (40% of the unit charge) and the number of patients receiving antibiotic injections.
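The per-patient and annual cost figures follow from simple arithmetic on the reported totals. Recomputing them shows the chain of calculation; small discrepancies against the published per-patient values arise because the published annual totals are rounded to two decimal places:

```python
cost = {"p1": 2.02e6, "p2": 2.00e6, "p3": 1.86e6}   # annual antibiotic cost, US$
n = {"p1": 6251, "p2": 6348, "p3": 6507}            # patients on IV antibiotics

per_patient = {k: cost[k] / n[k] for k in cost}                 # ~324, ~315, ~286 US$
saving_per_patient_p3 = per_patient["p1"] - per_patient["p3"]   # ~US$38
percent_saving_p3 = saving_per_patient_p3 / per_patient["p1"]   # ~11.7%
annual_saving_p3 = saving_per_patient_p3 * n["p3"]              # ~US$0.247 million
```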
Discussion
The IDSA/SHEA guidelines recommend prospective audit of antimicrobial use with intervention and feedback to the prescriber, which can reduce the inappropriate use of antimicrobials. A review-and-feedback strategy also has an educational effect on prescribers. However, this strategy is time-consuming for the reviewer, and is performed mainly by an infectious disease physician or a clinical pharmacist with sufficient experience in infection control. Consequently, antimicrobial stewardship has not become widespread across medical institutions in Japan, indicating a gap between the guidelines and clinical practice. To reduce the gap between evidence and clinical practice and to ascertain the clinical outcomes, extensive implementation of antimicrobial stewardship has been carried out since August 2009, which included (i) review of antimicrobial orders by ICT members with respect to usage, dose, isolated pathogens and site of infection for all inpatients receiving parenteral antibiotics; (ii) consultation with physicians before prescription of antimicrobial agents; and (iii) provision of an education programme on infection control for all medical staff. The ICT members, particularly a physician and a pharmacist, organised a co-operative system to accept inquiries from prescribers about the choice or usage of antimicrobial agents via mobile phone. Reviewing all antimicrobial injections is indeed time-consuming: an infectious disease pharmacist was newly appointed and spent almost the entire working day, every day, reviewing all antimicrobial injections. We then evaluated the outcomes of our antimicrobial stewardship programme. Among the 6348 patients who received antibiotic injections, inappropriate use of antibiotics was observed in only 102 cases (1.6%) in period 2. In the active intervention period (period 3), we found 200 cases (3.1%) of inappropriate antibiotic use.
This rate was much lower than those reported earlier. Kisuule et al. reported that the rate of inappropriate antibiotic use was reduced from 57% to 26% after antimicrobial stewardship intervention. Arnold et al. also reported that the rate of inappropriate antimicrobial use was reduced from 26% to 7% after intervention. They further showed that antimicrobial intervention results in fewer recommendations during the intervention period, as the major proportion of orders is already compliant with clinical practice guidelines. We did not determine the precise rate of inappropriate antibiotic use before implementation of antimicrobial stewardship; however, the rate assessed during the 1 month before intervention was 15.1% (data not shown). Consistent with the above reports, a marked reduction to 1.6% in the rate of inappropriate antimicrobial use was attained after intervention in the present study. The low rate of inappropriate use detected by ICT members was considered to be due to the following reasons. First, a number of inappropriate uses were presumably prevented by prescribers consulting ICT members on the proper use of antibiotics before prescribing; we therefore consider that extensive implementation of antimicrobial stewardship optimised antimicrobial prescriptions before use. Second, even when physicians wrote inappropriate prescriptions, clinical pharmacists other than the ICT pharmacist verified the prescriptions before they were checked by ICT members. Third, the introduction of an education programme on infection control for all medical staff would have drawn prescribers' attention to avoiding inappropriate use. Finally, in our hospital, appropriate use of antimicrobial agents has been facilitated by the implementation of clinical pathways generated on the electronic medical record system.
On the other hand, infection prevention measures such as hand hygiene have been promoted regardless of the introduction of antimicrobial stewardship. Thus, we consider that the promotion of infection prevention had no effect on the reduction in the emergence of antimicrobial resistance observed in the present study. Among the 302 proposals in periods 2 and 3, 279 (92%) were accepted and the prescriptions revised, indicating that the ICT's proposals were adequate. A high rate of acceptance of such proposals has also been reported by other investigators. In the present study, the majority of the recommendations on dose adjustment involved dose elevation. Evans et al. reported that 50% of patients received excessive doses of antibiotics before antimicrobial intervention; our data are not consistent with their results. This may be explained by the fact that the approved doses of antibiotics are generally lower in Japan than those recommended by several overseas clinical practice guidelines. For example, we often suggested elevating the dose of ampicillin/sulbactam, as the standard daily dose approved in Japan (6 g/day) is lower than that approved in western countries (12 g/day). Several investigators have reported that review-and-feedback activities reduce antibiotic consumption. In contrast, Gyssens et al. reported a 25% increase in antibiotic use after implementation of such interventions in a 948-bed university hospital in the Netherlands. Manuel et al., on the other hand, showed that antimicrobial intervention is associated with a shorter duration of antibiotic therapy, regardless of changes in antimicrobial consumption. In the present study, antimicrobial use density (AUD) did not change in the initial intervention period (period 2) in spite of the shortening of the duration of antibiotic treatment. The lack of change in AUD in period 2 may be due to the fact that a recommendation to elevate the dose of antibiotics was made for a number of patients.
Dose adjustment by elevating the initial dose may lead to a reduction in the duration of antimicrobial use. The active intervention during period 3, however, caused a significant reduction in antibiotic consumption together with a further shortening of the duration of antibiotic treatment. In the active intervention period, we consider that frequent monitoring by the ICT may have facilitated the reassessment of antibiotic therapy and, in turn, the de-escalation or termination of antibiotic therapy. Several investigators have demonstrated that antimicrobial stewardship reduces the development of bacterial resistance to antibiotics. In the present study, we surveyed the short-term effects of the intervention and found that the proportion of MRSA among total isolated S. aureus and the proportion of S. marcescens among GNR significantly decreased during the active intervention period, although only a slight, non-significant reduction in antimicrobial-resistant P. aeruginosa was observed. The occurrence of imipenem-resistant P. aeruginosa has been reported to be approximately 17% or 21.7% before implementation of antimicrobial stewardship, whereas the rates of these resistant bacteria were consistently low (<10%) in our hospital. It has been demonstrated that dose optimisation and reduction in the duration of antimicrobial use are definite factors that reduce the development of antimicrobial resistance. We therefore focused our intervention on dose optimisation and on checking for prolonged use of antibiotics. Dunn et al. reported in a before-and-after study that implementation of antimicrobial stewardship to improve the timeliness of the switch to oral antimicrobials reduces antimicrobial costs without changing the length of hospital stay. It was noteworthy that, in the present study, Kaplan-Meier plots indicated that the median duration of hospital stay was reduced by 1.0 day in periods 2 and 3.
As the duration of hospital stay among overall patients was unchanged before and after implementation of antimicrobial stewardship, the reduction observed in patients receiving antibiotics is likely attributable, at least in part, to the present intervention. The reduction in hospital stay may have yielded a considerable reduction in medical costs; the saving in annual medical expenses was estimated to be US$1.95 million during period 2 and US$3.92 million during period 3. The cost of antibiotic injections per patient was also reduced, by 2.8% (US$9 per patient) in period 2 and 11.7% (US$38 per patient) in period 3, and the annual saving in the total cost of antibiotics was US$0.058 million in period 2 and US$0.247 million in period 3. In the initial intervention, once prescriptions had been reviewed at the start of administration, no further verification was carried out until 2 weeks, except for carbapenems and anti-MRSA agents, even though a 2-week duration of antimicrobial therapy is too long in many cases. Reassessment of antibiotic prescriptions approximately every 3 days after administration has been shown to be effective for optimising empirical therapy (the day 3 bundle). Therefore, in the active intervention period, we carried out more frequent monitoring of antibiotic therapy. We consider frequent monitoring to be especially effective in facilitating the de-escalation or shortening of antibiotic therapy. Indeed, this frequent monitoring resulted in an increase in the frequency of ICT recommendations, a reduction in antibiotic consumption, and further shortening of antibiotic therapy and hospital stay. These findings strongly support the importance of the day 3 bundle. However, frequent monitoring would be difficult to achieve in many medical institutions because of the shortage of healthcare professionals. In conclusion, we carried out extensive antimicrobial stewardship and evaluated the outcomes.
Our intervention, based on a strategy of antimicrobial stewardship, was found to be effective in reducing the inappropriate use of antibiotics, shortening hospital stay, reducing the MRSA ratio and saving medical expenses in a Japanese hospital.
Integrated analysis identifies a novel lncRNA prognostic signature associated with aerobic glycolysis and hub pathways in breast cancer

Abstract
Long noncoding RNAs (lncRNAs) play a crucial role in cancer aerobic glycolysis. However, glycolysis-related lncRNAs are still underexplored in breast cancer. In this study, we identified the five most glycolysis-related lncRNAs in breast cancer to construct a prognostic signature, which could distinguish between patients with unfavorable and favorable prognoses. To investigate the role of the signature lncRNAs in breast cancer, we profiled their expression levels in a breast cancer progression cell line model. Real-time PCR revealed that the five lncRNAs could contribute to breast cancer initiation or progression. Furthermore, we observed that the expression levels of four lncRNAs showed a significant trend of gradient upregulation upon addition of a glycolysis inhibitor in breast cancer cells. Afterward, random forest and logistic regression analyses were conducted to assess the model's performance in stratifying glycolysis status. Finally, a nomogram including the lncRNA signature and clinical features was developed, and its efficacy in predicting survival time and its clinical utility were evaluated using a calibration curve, the concordance index, and decision curve analysis. In this study, gene set enrichment analysis showed that the mTOR pathway, a central pathway in tumor initiation and progression, was significantly enriched in the high-risk group. In addition, gene set variation analysis was performed to validate our findings in two independent datasets. Subsequent weighted gene co-expression network analysis, followed by enrichment analysis, indicated that downstream cell growth-related signaling was strikingly activated in the high-risk group and may directly promote tumor progression and escalate mortality risk in patients with high risk scores.
Overall, our findings may provide novel insight into lncRNA-related metabolic regulation and help to develop promising prognostic indicators and therapeutic targets for breast cancer patients.

| INTRODUCTION
Breast cancer is the most common cancer and the leading cause of cancer-related death in women worldwide.1 Although conventional treatment strategies have been well applied, many patients with breast cancer still have an unfavorable prognosis.2 Consequently, it is essential to further investigate novel prognostic indicators, diagnostic biomarkers, and therapeutic targets for improved clinical outcomes. Altered energy metabolism is one of the pivotal fingerprints of cancer biological behavior.3 Aerobic glycolysis, known as the 'Warburg effect', is a preferential metabolic phenotype of cancer cells.4 Although aerobic glycolysis yields less ATP than mitochondrial oxidative phosphorylation, cancer cells accelerate the rate of ATP production and increase glucose uptake via metabolic reprogramming.5 Meanwhile, glycolysis intermediates not only contribute to macromolecule formation in various biosynthetic pathways,6 but also induce resistance to chemotherapy and radiotherapy.7,8 In addition, glycolysis provides a favorable tumor microenvironment for cancer cells to thrive.9 Given the crucial role of tumor aerobic glycolysis in breast cancer initiation and progression, further exploration could help to improve clinical outcomes for patients with breast cancer. To date, long noncoding RNAs (lncRNAs), RNAs longer than 200 nucleotides, have been shown to play an important role in transcriptional, post-transcriptional, and epigenetic regulation, and to influence genes associated with glucose metabolism in several cancer types. In addition, lncRNAs can contribute to metabolic reprogramming, which regulates carcinogenesis and progression by providing adequate nutrition for cancer cells to circumvent energy stress.15
Malakar et al. reported that lncRNA MALAT1 may induce glucose metabolism reprogramming to promote malignant tumor progression by upregulating SRSF1 and activating the mTORC1-4EBP1 axis in hepatocellular carcinoma.16 Li et al. demonstrated that lncRNA UCA1 plays a positive role in glycolysis by upregulating hexokinase 2 through the mTOR-STAT3/microRNA-143 pathway in bladder cancer.17 Liu et al. revealed that downregulation of lncRNA NBR2 could attenuate AMPK activation and promote mTORC1-mediated protein synthesis and cancer cell growth under glucose-starvation stress.18 Additionally, Hung et al. suggested that lncRNA PCGEM1 may function as a crucial transcriptional regulator of central metabolic pathways, promoting cancer cell proliferation by regulating tumor metabolism via coactivation of both c-Myc and the androgen receptor (AR).19 Hence, glycolysis-related lncRNAs could provide novel insights for further exploration of metabolic strategies in breast cancer prognosis and treatment. In this study, we applied integrated bioinformatics analysis to identify a prognostic signature of five glycolysis-related lncRNAs, which could predict survival time and glycolysis status in breast cancer patients. Moreover, we further investigated the potential biological roles underlying the lncRNA signature via systematic bioinformatics analysis and in vitro experiments. Thus, our findings provide novel insight into lncRNA-related metabolic regulation and help to develop promising prognostic indicators and therapeutic targets for breast cancer patients.

| Sample datasets and data processing
The Cancer Genome Atlas (TCGA) data portal (https://portal.gdc.cancer.gov/) was used to obtain the TCGA RNA-Seq dataset. The raw count data were transformed with the variance-stabilizing transformation method using the DESeq2 package20 and then quantile-normalized using the preprocessCore package.
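The quantile-normalization step, performed here with preprocessCore in R, forces every sample (column) to share one reference distribution: the mean of the per-sample sorted values. A simplified Python sketch on a hypothetical toy matrix (this version breaks ties by order rather than averaging them, unlike the R implementation):

```python
import numpy as np

def quantile_normalize(mat):
    """Quantile-normalize the columns (samples) of a genes x samples matrix.
    Each value is replaced by the reference value at its within-column rank."""
    ranks = np.argsort(np.argsort(mat, axis=0), axis=0)   # rank within each column
    ref = np.sort(mat, axis=0).mean(axis=1)               # reference distribution
    return ref[ranks]

expr = np.array([[5.0, 2.0],
                 [1.0, 4.0],
                 [3.0, 6.0]])
norm = quantile_normalize(expr)
# after normalization both samples share the same set of values {1.5, 3.5, 5.5}
```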
The 888 breast cancer cases obtained from TCGA were screened based on the following inclusion criteria: availability of complete data on overall survival time, survival status, age, subtype, and AJCC stage. Molecular subtypes were classified by the PAM50 subtype predictor into luminal A, luminal B, HER2-enriched, basal-like, and normal-like. The METABRIC dataset with normalized data, sourced from the Molecular Taxonomy of Breast Cancer International Consortium (https://www.mbcproject.org/), contained 1903 breast cancer cases with overall survival time and survival status. The GSE20685 dataset was downloaded from the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/gds/), and contained 327 breast cancer cases with overall survival time, survival status, age, and clinical stage. As previously described,21 SeqMap was used to reannotate the probe sets of the Affymetrix HG-U133 Plus 2.0 array. The microarray data were background-corrected and normalized via the limma package.22 The log2-transformed normalized data were used for downstream analysis. In this study, the TCGA and METABRIC datasets were used to assess the efficacy of the glycolysis score. The TCGA dataset served as the training set to select the five most glycolysis-related lncRNAs and construct a prognostic signature. The GSE20685 dataset served as the validation set to validate our findings from the training set. The clinical information for the included patients is summarized in Table S1.

| RNA extraction, reverse transcription, and real-time PCR analysis
Total RNA was extracted from breast cancer cells using TRIzol reagent (Invitrogen). After RNA was reverse-transcribed into complementary DNA (cDNA) with the PrimeScript reverse transcriptase (RT) reagent kit (TaKaRa), the Applied Biosystems StepOnePlus system was used to perform real-time PCR assays. Primers used for real-time PCR are listed in Table S2.
| Lactate assay
The lactate level of the cell supernatant was measured using the Lactate Assay Kit (Eton Bioscience). Briefly, breast cancer cells were seeded at a density of 20,000 cells per well in a 96-well plate. The next day, the cell supernatant from each well was collected, mixed with L-lactate assay solution, and incubated at 37°C for 30 min. Lastly, the absorbance at 490 nm was read to measure the L-lactate concentration.

| Development and evaluation of the glycolysis score
First, glycolysis-related genes were obtained from the Molecular Signatures Database (MSigDB) gene sets REACTOME_GLYCOLYSIS, HALLMARK_GLYCOLYSIS, and KEGG_GLYCOLYSIS_GLUCONEOGENESIS.23 Next, we calculated a glycolysis score for each patient via single-sample Gene Set Enrichment Analysis (ssGSEA). Based on the median glycolysis score, breast cancer patients were classified into two subgroups, high- and low-glycolysis. Lastly, GSEA and Kaplan-Meier survival analysis were used to evaluate the efficacy of the glycolysis score in two independent datasets.

| Construction of the lncRNA signature
Based on the criteria |r| > 0.35 and p < 0.001, a cohort of lncRNAs significantly associated with the glycolysis score was selected in the training set by Spearman's correlation analysis. Subsequently, univariate followed by stepwise multivariate Cox regression was performed to identify the five most promising lncRNAs and develop a prognostic signature.

| Construction and evaluation of a nomogram
The rms package was used to generate a nomogram, whose predictive accuracy and discrimination ability were evaluated by a calibration curve and the concordance index (C-index), respectively. Moreover, decision curve analysis (DCA) was performed to evaluate the clinical utility of the nomogram by quantifying the net benefit across a range of threshold probabilities.24
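The screening criterion for candidate lncRNAs (|r| > 0.35 and p < 0.001) can be illustrated on synthetic data. This sketch uses SciPy's `spearmanr` with a simulated glycolysis score and a hypothetical lncRNA constructed to track it negatively, mimicking the protective lncRNAs the paper selects:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
glycolysis_score = rng.normal(size=200)                     # simulated ssGSEA scores
# hypothetical lncRNA: negatively related to the score, plus noise
lnc_expr = -glycolysis_score + rng.normal(scale=0.5, size=200)

rho, pval = spearmanr(glycolysis_score, lnc_expr)
selected = abs(rho) > 0.35 and pval < 1e-3                  # the paper's criterion
```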
| Gene set enrichment analysis and gene set variation analysis
Gene set enrichment analysis was performed with the Java GSEA program using the Hallmark gene sets sourced from MSigDB. All genes were ranked based on the significance of their differential expression between the high- and low-risk subgroups stratified by the median risk score. After 1000 permutations, gene set enrichment with nominal p < 0.05 and FDR < 0.25 was considered significant. In GSVA,25 Spearman's correlation analysis was carried out to assess the relationship between the risk score and specific hallmark gene sets in the training and validation sets.

| Weighted gene correlation network analysis
The Weighted Gene Correlation Network Analysis (WGCNA) procedure was carried out as described previously.26 Briefly, a soft-thresholding power of six was selected to generate a scale-free topology from the adjacency matrix. A deepSplit of 2 and a minModuleSize of 30 were set as the parameters of the Dynamic Tree Cut method to avoid generating too many modules. The height cut-off was set to 0.25 to merge modules with similarity >0.75. Finally, the enrolled genes yielded 17 modules (excluding the gray module) by cluster analysis. We evaluated the association between the risk score and module eigengenes (MEs) to identify the module most closely related to the risk score. Hub genes were selected according to a module membership (MM) greater than 0.8 and a gene significance (GS) greater than 0.4. Biological process enrichment analysis of hub genes from highly related modules was performed using Metascape (http://metascape.org/).

| Statistical analysis
Multivariate survival analysis of the lncRNA signature and clinicopathological features was performed using Cox proportional hazards regression models to determine which factors could act as independent prognostic indicators.
Time-dependent receiver operating characteristic (ROC) analysis was conducted to investigate the model's predictive performance at 1, 3, 5, and 10 years. The Kaplan-Meier method combined with the log-rank test was used to compare overall survival between the two subgroups. Two well-established machine learning algorithms (random forest and logistic regression) were used to confirm the efficacy of the lncRNA signature for stratifying glycolysis status, based on the area under the ROC curve (AUC) obtained through five-fold cross-validation.27 Logistic regression analysis was performed to evaluate whether the lncRNA signature stratified glycolysis status better than individual lncRNAs. Spearman's correlation analysis was used to assess the relationships among risk scores, glycolysis scores, and each lncRNA. The Chi-squared test was used to examine the association between the lncRNA signature and clinicopathological phenotype, with the median risk score as the cutoff threshold. For continuous data, differences between two groups were assessed using Student's t-test or the Wilcoxon test, and multiple-group comparisons were made using the Kruskal-Wallis test. Experimental data are presented as mean (±SD). In this study, we used the R project (version 3.6.1) and GraphPad Prism 8 to perform the main statistical analyses. Differences with p < 0.05 were considered statistically significant.

| Development and evaluation of the glycolysis score
A flow diagram illustrating our analysis procedure is shown in Figure 1. Here, the TCGA and METABRIC datasets were employed to assess the efficacy of the glycolysis score (ssGSEA score). GSEA identified that the three glycolysis-related gene sets were significantly enriched in the high-glycolysis group, indicating that the glycolysis score could directly represent glycolysis status (Figure 2A,B).
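The AUC used above to score the random forest and logistic regression classifiers has a simple rank-sum (Mann-Whitney) form: the probability that a randomly chosen positive case outranks a randomly chosen negative one. A minimal sketch on toy data, not the study's predictions:

```python
def roc_auc(scores, labels):
    """ROC AUC as the probability that a random positive outranks a
    random negative; ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# perfect separation -> AUC 1.0
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```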
In addition, survival analysis revealed that patients with high glycolysis scores had shorter survival times than those with low glycolysis scores (Figure 2C,D).

| Construction of a five glycolysis-related lncRNA signature in the training set
Based on the criteria |r| > 0.35 and p < 0.001, the 121 most glycolysis-related lncRNAs were obtained from the training set using Spearman's correlation analysis (Figure 3A; Table S3). Univariate followed by stepwise multivariate Cox regression analyses were performed, and a five glycolysis-related lncRNA signature was constructed (Table S4). According to the univariate Cox regression analysis, AC007686.3, BAIAP2-DT, LINC00926, LINC01016, and MAPT-AS1 were defined as protective factors (HR < 1) in the prognostic model (Table 1). As shown in Figure 3B, univariate Cox regression analysis was used to examine the effect of clinicopathologic features and the lncRNA signature on overall survival in the TCGA cohort. Subsequent multivariate Cox regression analysis indicated that age, cancer status, and the lncRNA signature were significantly associated with survival duration independent of other variables (Figure 3C). In addition, we found that the malignant grade of the AJCC stage was evidently associated with a high risk value (Figure 3D). Meanwhile, the molecular subtype of breast cancer was strikingly related to the risk score: basal-like and HER2-enriched patients had higher risk values than patients with other subtypes (Figure 3E).

| Investigation of the association between signature lncRNAs and glycolysis
First, we evaluated the role of the signature lncRNAs in breast cancer progression via a breast cancer progression cell line model (MCF10A/MCF10AT/MCF10CA1A). Real-time PCR suggested that the expression levels of the five lncRNAs were reduced in premalignant MCF10AT and malignant MCF10CA1A cells compared to parental MCF10A cells (Figure 4A).
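The risk score produced by such a Cox signature is a linear predictor: the sum of each lncRNA's fitted coefficient times its expression. The coefficients below are hypothetical placeholders (the paper's fitted values are in Table S4); since all five lncRNAs are protective (HR < 1), each log-hazard coefficient is negative, so lower expression yields a higher risk score:

```python
# hypothetical coefficients (log hazard ratios); all negative because
# each lncRNA is protective (HR < 1) in the fitted model
coefs = {"AC007686.3": -0.4, "BAIAP2-DT": -0.3, "LINC00926": -0.5,
         "LINC01016": -0.2, "MAPT-AS1": -0.35}

def risk_score(expression):
    """Cox linear predictor: sum of coefficient x expression."""
    return sum(c * expression[g] for g, c in coefs.items())

high = {g: 2.0 for g in coefs}   # high expression of the protective lncRNAs
low = {g: 0.5 for g in coefs}    # low expression
# lower expression of protective lncRNAs yields a HIGHER risk score
```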
Notably, LINC00926, AC007686.3, and BAIAP2-DT showed a significant trend of gradient downregulation from MCF10A to MCF10AT and MCF10CA1A cells, indicating that these three lncRNAs could play an important role in breast cancer initiation and progression. To further investigate the association of the signature lncRNAs with glycolysis, we treated breast cancer cells with 2-deoxy-D-glucose (2DG), a glycolysis inhibitor. Lactate production was used to examine glycolytic levels in breast cancer cells treated with 5 or 10 mM 2DG for 12-24 h. As presented in Figure 4B, a gradient descent in lactate production was observed with increasing 2DG concentration. Conversely, 2DG enhanced the expression levels of LINC00926, LINC01016, AC007686.3, and MAPT-AS1 in MDA-MB-231 and MDA-MB-468 cells (Figure 4C,D). However, we did not observe significant changes in BAIAP2-DT expression levels (data not shown). Afterward, we conducted Spearman's correlation analysis to further identify the relationship between aerobic glycolysis-related factors and the lncRNA signature in the training set. Our results showed that the risk score was positively correlated with hub glycolysis-related genes, whereas the signature lncRNAs were negatively correlated with them (Figure 4E). We next examined these correlations in the validation set (Figure S1A); a similar result further supported our findings from the training set.

| Validation and further evaluation of the lncRNA signature in the training and validation sets
Here, ROC analysis was applied to assess the predictive accuracy of the signature at 1, 3, 5, and 10 years. The area under the curve (AUC) scores in the training and validation sets are shown in Figure 5A,B, respectively.
Using the median risk score as the cutoff threshold, the distribution of survival status, overall survival time, and lncRNA expression in the training and validation sets is presented separately in Figure 5C,D. Survival analysis showed that patients with high risk scores had shorter survival times than those with low risk scores (Figure 5E,F). Subsequently, the signature's predictive capability for glycolysis status was further assessed by RF and LR analyses. Importantly, moderate predictive performance for glycolysis status was observed in both the training and validation sets (Figure 6A,B). Additionally, LR analysis showed that the lncRNA signature predicted glycolysis status more efficiently than any individual lncRNA (Figure 6C,D). The interplay among risk scores, glycolysis scores, and the five lncRNAs was further confirmed by Spearman's correlation analysis, which revealed that the risk score was positively associated with the glycolysis score, whereas the five lncRNAs were negatively associated with both the risk and glycolysis scores (Figure 6E,F). Additionally, the interactions among the five lncRNAs are shown in Figure 6E,F. Of note, performance in the validation set was consistent with that in the training set. | Construction and evaluation of the nomogram Before constructing the nomogram, we employed the chi-squared test to explore the association between the lncRNA signature and clinicopathological features by stratifying TCGA-derived patients into high- and low-risk groups based on the median risk score as the cutoff threshold. As shown in Table 2, the risk score was significantly associated with AJCC stage, cancer status, and subtype. To enhance the predictive efficacy of the lncRNA signature, we developed a nomogram that incorporated age, AJCC stage, subtype, cancer status, and the lncRNA signature (Figure 7A).
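The ROC/AUC assessment and the median-score stratification can be sketched as follows; the risk scores and event labels are hypothetical, and a time-dependent ROC (as used here at 1, 3, 5, and 10 years) would additionally handle censoring, which this toy version ignores:

```python
def auc(scores, labels):
    """AUC via the rank-sum formulation: the probability that a randomly
    chosen positive case outranks a randomly chosen negative case."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def median_split(risk_scores):
    """High-(1)/low-(0) risk stratification with the median as cutoff."""
    s = sorted(risk_scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [int(r > median) for r in risk_scores]

# Hypothetical risk scores and death-event labels
risk = [0.3, 1.2, 0.8, 2.1, 0.5, 1.7]
events = [0, 1, 0, 1, 0, 1]
print(auc(risk, events))   # risk perfectly ranks the events here
print(median_split(risk))
```

An AUC of 0.5 would mean the score carries no ranking information; values approaching 1 indicate the kind of discrimination reported in Figure 5A,B.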
DCA was performed to estimate the net benefit and clinical utility of this nomogram at 1, 3, and 5 years (Figure 7B,D). It revealed that the nomogram displayed a consistently positive and larger net benefit across a broad range of threshold probabilities (more than 70%) at 3 and 5 years compared to either the treat-none or the treat-all scheme. However, the 1-year DCA showed that patients could acquire a net benefit only within roughly the first 30% of threshold probabilities. The calibration curve indicated that the nomogram's survival predictions for breast cancer patients were in excellent agreement with actual observations at 1, 3, and 5 years, with a C-index of 0.855 (95% CI, 0.812-0.898) (Figure 7E). Importantly, good performance for predicting survival was also observed in the validation set, with moderate discrimination (C-index of 0.725) (Figure 7F). 
FIGURE 4 Breast cancer cells treated with 2DG-containing medium were subjected to real-time PCR analysis to measure signature lncRNA expression. (E) The relationship among aerobic glycolysis-related factors, the lncRNA signature, and each lncRNA (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; Student's t-test). 
Based on the median risk score, TCGA-derived patients were stratified into two subgroups, the high- and low-risk groups. GSEA showed that the mTORC1 signaling pathway was most significantly enriched in the high-risk group (NES = 2.07, FDR = 0.015) (Figure 8A), which suggested that the lncRNA signature may contribute to the regulation of the mTORC1 signaling pathway. Moreover, the interplay between mTORC1 signaling, glycolysis signaling, and the prognostic signature is shown in Figure 8B, which revealed that mTORC1 signaling was significantly positively correlated with glycolysis signaling, and that the high-risk group displayed higher levels of enrichment for mTORC1 and glycolysis signaling than the low-risk group.
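The decision-curve net benefit reported above follows the standard formula NB(p_t) = TP/n − (FP/n)·p_t/(1 − p_t), compared against the treat-all and treat-none schemes. A small sketch with hypothetical predicted probabilities and outcomes:

```python
def net_benefit(pred_probs, outcomes, threshold):
    """Decision-curve net benefit of the model at threshold probability p_t:
    NB = TP/n - (FP/n) * p_t / (1 - p_t)."""
    n = len(outcomes)
    treated = [(p >= threshold, y) for p, y in zip(pred_probs, outcomes)]
    tp = sum(1 for t, y in treated if t and y == 1)
    fp = sum(1 for t, y in treated if t and y == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)

def net_benefit_treat_all(outcomes, threshold):
    """Reference 'treat-all' curve; 'treat-none' is identically zero."""
    prev = sum(outcomes) / len(outcomes)
    return prev - (1 - prev) * threshold / (1 - threshold)

# Hypothetical nomogram probabilities and observed events
probs = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]
deaths = [1, 1, 0, 0, 1, 0]
for pt in (0.2, 0.5):
    print(pt, net_benefit(probs, deaths, pt), net_benefit_treat_all(deaths, pt))
```

A model is clinically useful at a given threshold exactly when its curve sits above both references, which is what the 3- and 5-year DCA shows over most of the threshold range.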
To validate and further clarify the association between the lncRNA signature and hub hallmark gene sets, we performed GSVA as described in Figure 8C,D. Consistent with the above results, we observed that several hallmark gene sets related to cell growth were significantly upregulated in the training and validation sets; these gene sets included mTORC1 signaling, G2M checkpoint, E2F targets, unfolded protein response, mitotic spindle, glycolysis, and MYC targets V1. To estimate whether the lncRNA signature can predict the clinical response to mTOR inhibitors, we extracted the data of related drugs from the Genomics of Drug Sensitivity in Cancer (GDSC), 28 including four mTOR inhibitors (rapamycin, AZD8055, NVP-BEZ235, and temsirolimus). Our results indicated that the high-risk group exhibited higher IC50 values for mTOR inhibitors than the low-risk group (Figure 8E). We next treated MDA-MB-231 cells with 10 nM rapamycin, an mTORC1 inhibitor. MTT assay showed that rapamycin indeed inhibited cell growth compared to the vehicle-treated group (Figure 8F). On the other hand, the expression levels of LINC00926, LINC01016, AC007686.3, and MAPT-AS1 significantly increased in response to rapamycin treatment (Figure 8G), which indicated that the mTORC1 signaling pathway was negatively associated with the expression of the signature lncRNAs. 
FIGURE 6 Further evaluation of the lncRNA signature in the training and validation sets. (A, B) Logistic regression (LR) and random forest (RF) were used to evaluate the signature's performance in stratifying glycolysis status. (C, D) Logistic regression analysis identified that the lncRNA signature predicted glycolysis status more efficiently than any individual lncRNA. (E, F) The interplay among risk scores, glycolysis scores, and the five lncRNAs.
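A group-wise IC50 comparison of the kind drawn from GDSC can be sketched with a rank statistic; the values below are hypothetical, and this is an illustration of the comparison rather than the paper's analysis pipeline:

```python
def mann_whitney_u(a, b):
    """U statistic: number of (a_i, b_j) pairs with a_i > b_j (ties 0.5).
    U equal to len(a)*len(b) means every value in a exceeds every value in b."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Hypothetical log-IC50 values for an mTOR inhibitor (e.g. rapamycin):
# higher IC50 = less sensitive, as reported for the high-risk group.
high_risk_ic50 = [2.1, 2.5, 2.8, 3.0]
low_risk_ic50 = [1.0, 1.2, 1.5, 1.8]

u = mann_whitney_u(high_risk_ic50, low_risk_ic50)
n_pairs = len(high_risk_ic50) * len(low_risk_ic50)
print(u, u / n_pairs)  # fraction of pairs where high-risk IC50 is larger
```

The normalized U (here U / n_pairs) is the probability that a random high-risk sample has a larger IC50 than a random low-risk sample, a rank-based analogue of the group difference shown in Figure 8E.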
| Cell growth-related signaling is significantly activated in the high-risk group Given that the hallmark gene sets related to cell growth were upregulated in the high-risk group, we next performed WGCNA to identify the biological processes involved in the lncRNA signature. As presented in Figure 9A, the genes enrolled in the training set were clustered into 18 modules. Subsequently, the brown module was found to be highly associated with the lncRNA signature (Figure 9B). In the brown module, 93 hub genes were selected based on the criteria of an MM value greater than 0.8 and a GS value greater than 0.4 (Figure 9C; Table S5). Finally, biological process enrichment analysis of the hub genes from the brown module was performed using Metascape. As expected, we found that cell growth-related signaling, including cell cycle, cell division, and regulation of the cell cycle process, was significantly enriched in the high-risk group (Figure 9D,E). Taken together, the lncRNA signature may be associated with malignant tumor progression and a higher mortality risk by promoting tumor cell proliferation. | DISCUSSION For decades, great advances have been made in breast cancer treatment; however, several mechanisms associated with breast cancer progression remain elusive. Reprogrammed energy metabolism is currently recognized as an emerging hallmark of cancer cells. 6 This alteration is characterized by a preferential dependence on glycolysis for energy production even in the presence of adequate oxygen and fully functioning mitochondria, namely 'aerobic glycolysis' or the 'Warburg effect'. 3,9,33 Furthermore, previous studies have shown that tumor aerobic glycolysis frequently contributes to poor clinical outcomes in patients with breast cancer.
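The hub-gene selection rule (MM > 0.8 and GS > 0.4 in the brown module) is a simple threshold filter; a sketch with hypothetical MM/GS values (gene names chosen for illustration only):

```python
# Hypothetical module membership (MM) and gene significance (GS) values;
# the real values come from the WGCNA fit on the brown module (Table S5).
genes = {
    "CDK1":  {"MM": 0.92, "GS": 0.55},
    "CCNB1": {"MM": 0.88, "GS": 0.47},
    "GAPDH": {"MM": 0.65, "GS": 0.50},   # fails the MM cutoff
    "ACTB":  {"MM": 0.85, "GS": 0.30},   # fails the GS cutoff
}

def select_hub_genes(genes, mm_cut=0.8, gs_cut=0.4):
    """Hub genes: |MM| > 0.8 and |GS| > 0.4, the criteria used for the
    93 brown-module hub genes."""
    return sorted(g for g, v in genes.items()
                  if abs(v["MM"]) > mm_cut and abs(v["GS"]) > gs_cut)

print(select_hub_genes(genes))
```

Only genes that are both central to the module (high MM) and relevant to the trait (high GS) survive, which is what restricts the 93-gene hub set fed into Metascape.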
Thus, continued investigation of aerobic glycolysis could help to gain insight into the crucial mechanisms of breast cancer initiation and progression and to develop better prognostic indicators, diagnostic biomarkers, and therapeutic targets for breast cancer patients. 
TABLE 2 The chi-squared test of the association between the lncRNA signature and clinicopathological features in the TCGA breast cancer dataset. 
LncRNAs were previously reported to be involved in tumor metabolism reprogramming. In this study, we developed a glycolysis score and used it to construct a five glycolysis-related lncRNA signature, which was associated with the malignant progression of breast cancer and acted as an independent prognostic factor in breast cancer patients. Subsequent in vitro experiments also supported these findings. Moreover, the lncRNA signature could well distinguish patients with an unfavorable prognosis from those with a favorable prognosis. Further analyses demonstrated that the lncRNA signature had moderate discrimination for glycolysis status, and the combination of five lncRNAs possessed better predictive efficacy for glycolysis status than any single lncRNA from the prognostic model. Importantly, consistent performance was observed in the validation set. In an effort to enhance the predictive efficacy of the lncRNA signature, we further integrated age, AJCC stage, subtype, cancer status, and the lncRNA signature to develop a nomogram; its predictive efficacy for survival and its clinical utility were validated by calibration curves, the C-index, and DCA, respectively. Lastly, our findings suggest that the nomogram based on the lncRNA signature could contribute to predicting survival probability and help to guide personalized therapeutic strategies for breast cancer patients.
lncRNAs play pivotal roles in the metabolism reprogramming of breast cancer by regulating important cancer-related pathways. 
FIGURE 8 The cancer-related hallmark gene sets associated with the lncRNA signature in breast cancer. (A) Gene set enrichment analysis. (B) The interplay between the lncRNA signature, mTORC1 signaling, and glycolysis signaling. Brown: the ssGSEA score of mTORC1 signaling; blue: the ssGSEA score of glycolysis signaling; red: high-risk patients; green: low-risk patients. The ssGSEA score was scaled to a range between 0 and 1 in the plot. (C, D) Gene set variation analysis. (E) The GDSC drug response data were used to estimate the association between the lncRNA signature and mTOR inhibitors in TCGA breast cancer patients. (F) Proliferation of MDA-MB-231 cells treated with vehicle control or 10 nM rapamycin. (G) Signature lncRNAs with significant expression changes in response to rapamycin treatment (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). 
GSEA of hallmark gene sets identified that mTORC1 signaling was significantly enriched in the high-risk group. In addition, we observed that mTORC1 signaling was positively correlated with glycolysis signaling. According to previous reports, the mTOR signaling pathway integrates both intracellular and extracellular signals and functions as a central pathway in tumor initiation and progression. Accumulating evidence has also demonstrated that the mTORC1 signaling pathway may act as a mediator of aerobic glycolysis to promote cell proliferation. 45,48,49 Subsequent GSVA further identified that the risk score was positively correlated with the mTORC1 signaling pathway as well as with other hallmark gene sets associated with cell growth. Notably, these results were mutually validated in two independent datasets.
To further investigate the biological processes related to the lncRNA signature, we applied WGCNA and identified that the brown module was highly associated with the risk score and the glycolysis score. Furthermore, Metascape analysis demonstrated that the hub genes sourced from the brown module were significantly enriched in cell growth-related signaling, which could promote tumor cell proliferation and contribute to a higher mortality risk in breast cancer patients. 
FIGURE 9 Cell growth-related signaling significantly activated in the high-risk group. (A) Clustering dendrogram of mRNAs. The two colored rows represent the original modules and the merged modules, respectively. (B) The relationship between modules and traits. (C) A scatter plot of GS for risk scores versus MM for the brown module. The red lines represent the screening criteria: MM value greater than 0.8 and GS value greater than 0.4. (D, E) Biological process enrichment analysis of the 93 hub genes from the brown module. 
When glycolysis inhibitors are employed, mTORC1 could be involved in metabolism reprogramming to escape from glycolytic dependency. 50 Currently, mTOR inhibitors are used in clinical practice. Therefore, we sought to estimate the association between the lncRNA signature and drug response via the GDSC drug response data and in vitro experiments. Our data suggested that the lncRNA signature can serve as a promising indicator for measuring the response to mTOR inhibitors in breast cancer patients. Moreover, previous studies have shown that tumor cell proliferation can be inhibited by co-targeting a glycolytic enzyme and mTORC1 signaling. 50 Given that the lncRNA signature had a significantly positive association with mTORC1 and glycolysis signaling, it may help in developing novel therapeutic strategies for combination therapy and in achieving desirable clinical benefits for breast cancer patients.
Our data provide a basis for further exploration of metabolic strategies in breast cancer prognosis and treatment. However, this study was mainly based on publicly available datasets. Because the available sample size is limited, further validation of these findings will be a crucial direction for our future work. In conclusion, we identified five glycolysis-related lncRNAs and constructed an lncRNA signature on the basis of the glycolysis score, which could predict survival probability and glycolysis status. Moreover, hallmark gene sets associated with cell growth were significantly activated in the high-risk breast cancer patient subgroup. Overall, the lncRNA signature could function as a robust prognostic indicator and help to develop novel therapeutic strategies for breast cancer patients.
Topologically protected vector edge states and polarization beam splitter by all-dielectric valley photonic crystal slabs The polarization beam splitter (PBS) is an essential optical component that is widely used in various optical instruments. Its robustness against perturbation is essential for all-optical classical and quantum networks. Here, we report the design of topologically protected vector edge states (dual polarization, with transverse electric and transverse magnetic modes) and a PBS based on all-dielectric topological valley photonic crystal slabs. The topologically protected vector edge states have been realized for the first time using a germanium photonic crystal slab with a silica substrate. Based on such edge states, a topologically protected PBS has been designed, and its robustness has been demonstrated by exact numerical simulations. Our proposed PBS is expected to find widespread applications in photonic integrated circuits and quantum information processing. Introduction The polarization beam splitter (PBS) is an essential device in traditional optics. It plays a significant role in classical and quantum optical experiments. For ultra-fast photonic integrated circuits, the PBS also has vital applications, such as all-optical information processing, optical encoding, and optical quantum gates. Due to its importance, many researchers have designed various structures to realize the PBS, for example, wire waveguide couplers, photonic crystals (PhCs), and multimode interference couplers. However, the footprints of these conventional PBSs are too large (from tens to hundreds of microns) to be integrated on a chip. Other researchers have instead designed ultra-small PBSs by the inverse design method, with footprints of only several microns. In fact, however, all these devices are vulnerable to perturbation, which degrades the efficiency and performance of the PBS and limits its application in complex environments.
Therefore, it is important to design a PBS that is robust against disturbances. Recent developments in topological photonics have made this design possible. By introducing topology into optics, some attractive phenomena have been observed. At microwave frequencies, the photonic Chern insulator exhibits many novel phenomena, including defect immunity and unidirectional propagation. Meanwhile, all-dielectric topological PhCs are easier to fabricate on a chip, without the need for external magnetic fields. In particular, topological valley photonic crystals (TVPCs) show high coupling efficiency and low transmission loss. Therefore, the TVPC is a promising route toward topological integrated optical circuits. As far as we know, all-dielectric TVPCs can support only one polarization mode, transverse electric (TE) or transverse magnetic (TM). However, to design a PBS, topologically protected TE and TM modes must be supported at the same time. In fact, a dual-polarization topological edge state has been studied using metal materials in previous work. The use of metal materials limits the working frequency (around 6 GHz), and the same valleys for TE and TM modes cannot be used to realize the PBS at optical frequencies. How to design a topologically protected PBS remains an open problem, although topological photonics has been intensively studied for more than 10 years. In this work, we propose a scheme to construct topologically protected vector edge states, supporting TE and TM modes, based on germanium (Ge) TVPCs with a silica substrate. The robust properties of these edge states are discussed. Furthermore, we design a PBS based on such topological edge states. In particular, the proposed PBS is robust against certain disturbances. The PBS can be integrated into a compact photonic circuit, because the footprint of the integrated PBS is about 10 microns.
It is expected that such a robust and compact PBS device has potential applications in future photonic integrated circuits. Topologically protected vector edge states by all-dielectric TVPCs We first consider the triangular lattices formed by the Ge triangular rod in the inset of figure 1(a). The lattice constant is a = 645 nm; the side of the Ge triangular rod is s = 500 nm; the thickness is d1 = 215 nm. The rods are surrounded by air at the top and bottom and are symmetric about the z = 0 plane. In figure 1(a), we plot the vector band structure of the system. Due to the C3v symmetry of the triangular lattices, doubly degenerate modes appear at the K (K′) point. The same phenomena are found for the TE and TM bands: at the K and K′ points, the gap between bands 2 and 3 is closed for the TE (TM) mode, and a double Dirac cone appears. For convenience in this paper, TM modes are marked in red and TE modes in blue. Then, we rotate the triangular rod by 30°. The two Dirac points open and a complete bandgap appears, as indicated in figure 1(b). In figure 1(c), we introduce a 300 nm-thick SiO2 substrate for the Ge rods. Due to the presence of the substrate, the z-symmetry is broken, and the TE and TM bands couple with each other. Compared with figure 1(a), it is found that the crossing point of the TE and TM bands opens, but the Dirac cones still exist. This phenomenon can be observed in figure 1(c). After the 30° rotation, the two Dirac points also open, as shown in figure 1(d). To sum up, the substrate induces the TE-TM coupling, and the 30° rotation opens the Dirac points. The geometrical parameters are labeled in figure 1(e): the lattice constant is a = 645 nm; the side of the Ge triangular rod is s = 500 nm; the thickness of Ge is d1 = 215 nm; the thickness of silica is d2 = 300 nm. In our design, the Ge film and half of the silica (d2/2 = 150 nm) should be etched away to minimize the breaking of symmetry in the z-direction.
In fact, the high-index (more than 4) material Ge is indispensable for constructing the complete bandgap in figure 1(c), and only Ge rods can be used to design a practical device with the substrate. With the rotation of the triangular rod, the complete bandgap can be observed around the wavelength of 1550 nm (193.41 THz) at the K and K′ points, as shown in figure 1(d). According to the C3v symmetry group of the proposed lattice, the photonic bandgaps at the K and K′ points are inequivalent. Based on the k·p model, the effective Hamiltonian around the K/K′ valley can be expressed as H_{K/K′} = v_D(±δk_x σ_x + δk_y σ_y) + m σ_z, where v_D is the group velocity, δk = k − k_{K/K′} is the displacement from the wave vector k to the K/K′ valley in reciprocal space, σ_x, σ_y, and σ_z are the Pauli matrices, and m is the effective mass. It is known that the valley Chern number is C_{K/K′} = ±sgn(m)/2. Interestingly, in the complete topological bandgap, the valley Chern numbers of the TE and TM bands are opposite at the K point, i.e. C_TM = −C_TE. In general, the valley Chern number is connected with the power flux of the eigenmodes. The power flux directions are clearly observed in figure 1(f). For the TE bandgap, the power flux is counterclockwise in band 2 and clockwise in band 3, which indicates a positive effective mass. For the TM bandgap, in contrast, the effective mass is negative. That is why the valley Chern numbers for TE and TM have opposite signs. Now, we consider a composite TVPC structure, which is made of upright (TVPC1) and inverted (TVPC2) triangles, as shown in figure 2(a). The edge state is constructed at the boundary of TVPC1 and TVPC2, which is marked in yellow. Furthermore, a supercell is taken (inset in figure 2(b)) from the composite TVPC to analyze the properties of the edge states. Along the x-direction of the supercell, the lattice length is a = 645 nm, and periodic boundary conditions are applied. In the y-direction, the length is about 10 μm (20 layers).
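The stated valley Chern number C = ±sgn(m)/2 can be checked numerically by integrating the Berry curvature of the lower band of the two-band valley Hamiltonian around one valley. The sketch below uses dimensionless units (v_D = 1) and is an independent illustration of the formula, not the paper's simulation:

```python
import numpy as np

def berry_curvature_lower(kx, ky, m, v=1.0, tau=1.0):
    """Berry curvature of the lower band of H = v(tau*kx*sx + ky*sy) + m*sz,
    with tau = +1/-1 selecting the K/K' valley."""
    d = np.sqrt((v * kx) ** 2 + (v * ky) ** 2 + m ** 2)
    return tau * m * v ** 2 / (2 * d ** 3)

def valley_chern(m, tau=1.0, kmax=200.0, n=400_000):
    """C = (1/2pi) * integral of Omega over one valley; done radially as
    integral of Omega(k) * k dk (midpoint rule). Approaches tau*sgn(m)/2
    as the momentum cutoff kmax grows."""
    dk = kmax / n
    k = (np.arange(n) + 0.5) * dk
    omega = berry_curvature_lower(k, 0.0, m, tau=tau)
    return float(np.sum(omega * k) * dk)

print(valley_chern(+1.0))            # close to +1/2
print(valley_chern(-1.0))            # close to -1/2: flips with sgn(m)
print(valley_chern(+1.0, tau=-1.0))  # close to -1/2: flips at the other valley
```

The sign flip with the effective mass m is exactly the mechanism by which the TE and TM bandgaps (opposite-sign masses) acquire opposite valley Chern numbers.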
By calculating the eigenmodes of the supercell with the wave vector k_x (from 0 to π/a), we obtain the corresponding band structure, as plotted in figure 2(b). The TE-like (TM-like) bands are marked as red (blue) lines. Importantly, the TE (TM) edge state has a positive (negative) group velocity at the K valley, because C_TE = −C_TM. That is to say, sources with different chirality can excite edge states with the same propagation direction for the TE and TM modes. Moreover, a gap opens in the edge states, as shown in figure 2(b): the TE and TM edge states couple to each other because the silica substrate breaks the symmetry in the z-direction. In fact, many interesting phenomena arise when the edge states open a gap, such as the higher-order photonic topological insulator. A dual-polarization corner state may be found at frequencies in this gap; this potential research direction might attract more attention. We then study the method to excite the vector edge state. In TVPCs, it is well known that a chiral source can excite a unidirectional topological edge state. For our model, different chiral sources are needed to excite the same propagation direction of the TE and TM modes at λ = 1550 nm. The source is placed at the boundary between TVPC1 and TVPC2, as shown in figure 2(a). Four magnetic (electric) dipoles in the z-direction are used to excite the TE (TM) topological edge state. Their phases are set as 0, π/2, π, and 3π/2 with counterclockwise (TE) and clockwise (TM) chirality, as shown in figures 2(c) and (d). The K and K′ valleys are respectively excited for the TE and TM modes, and the edge states propagate to the left along the boundary. To further study the vector edge state, we choose a Fourier integral region in figure 2(a) to reveal the propagation properties in momentum space (k-space). The size of the rectangle is taken as 6 cells × 6 cells.
For the TE mode, the Fourier transform is f_TE(k_x, k_y) = ∬ H_z(x, y) e^{−ik·r} dx dy. For the TM mode, it is f_TM(k_x, k_y) = ∬ E_z(x, y) e^{−ik·r} dx dy. The results of f_TE and f_TM are plotted in figures 2(e) and (f), respectively. We find that the K (K′) valley is highlighted in the corresponding k-space distribution. Next, we note that the designed vector edge state possesses topological protection against certain disturbances. In order to demonstrate robust topological transport of the vector valley edge states, we construct Z-type and Ω-type interfaces in figures 3(a) and (b). The boundaries (yellow) are bent with two or three sharp 60° corners. The numerical simulations are carried out at the wavelength λ = 1550 nm. Under excitation by the chiral light sources, the robustness of the unidirectional transport is revealed, as shown in figures 3(c)-(f). In fact, the same bends are non-negligible for a traditional optical waveguide, because bends usually act as scatterers. In practical applications, the comparison between the topological and trivial edge states is also important when the losses from material absorption and scattering are considered. Therefore, we examine them in our proposed topological vector edge state. The absorption of Ge at near-infrared frequencies (around λ = 1550 nm) can be expressed by the complex refractive index n1 + i n2, where n1 = 4.216 and n2 = 0.002. In order to quantify the robustness, we construct topological and trivial edge states, and the simulation results are illustrated in figures 4(a) and (b). The geometrical parameters of the trivial edge state are as follows: the lattice constant a = 645 nm; the radius of the pillar r = 180 nm. The light sources are placed at the left side of the topological and trivial edge states, and the input power is set as 1 mW. Meanwhile, we also simulate the topological and trivial edge states with Z-type interfaces (two 60° corners), as shown in figures 4(c) and (d).
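The k-space analysis amounts to a 2D Fourier transform of the sampled field over the integration region. As a sketch, a synthetic plane wave (standing in for H_z or E_z) has its wave vector recovered from the FFT peak; the grid and wave vector are hypothetical:

```python
import numpy as np

# Hypothetical sampled field: a plane wave e^{i k0.r} standing in for
# H_z (TE) or E_z (TM) over the 6-cell x 6-cell Fourier integration region.
nx = ny = 64
x = np.linspace(0, 1, nx, endpoint=False)
y = np.linspace(0, 1, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
k0 = (2 * np.pi * 5, 2 * np.pi * 3)      # the wave vector to recover
field = np.exp(1j * (k0[0] * X + k0[1] * Y))

# f(kx, ky) = integral of field(x, y) e^{-i k.r} dx dy, evaluated as an FFT
spectrum = np.fft.fft2(field)
ix, iy = np.unravel_index(np.argmax(np.abs(spectrum)), spectrum.shape)
print(ix, iy)  # the peak index identifies the dominant wave vector
```

In the paper's setting, the same operation applied to the simulated H_z/E_z field makes the occupied K or K′ valley light up in the |f| map, which is how figures 2(e) and (f) identify which valley each polarization occupies.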
The topological edge state performs well despite the 60° corners, but the trivial edge state does not: the difference in the propagation losses of the trivial edge state with and without the Z-type interface is about 4 dB. The power propagation along the direction of the edge states is shown in figure 4(e), from which we can calculate the propagation losses in these cases. The loss of the topological edge state is 0.22 dB/a, which is 1/3 of the loss of the trivial edge state (0.68 dB/a), where a is the lattice constant. It is even smaller than the previously reported loss in the literature, with a value of 0.25 dB/a. That is, we have constructed the topological vector edge state and proven its robustness against bends. Compared with the traditional PhC waveguide, the topological edge state possesses lower transmission loss. Although the loss is low in comparison, it still reaches 0.22 dB/a, so such an edge state cannot be used for optical transmission over long distances. A feasible remedy is to design a connector to a Si wire waveguide before and after the device in the future. In addition, we notice that the traditional PhC waveguide (figure 4(b)) shows higher loss than the topological waveguide (figure 4(a)), even though neither has a bend. This originates from the different structures of the two kinds of PhC waveguides: the topological waveguide is constructed from triangular cylinders and the trivial waveguide from circular cylinders. Our calculations show that the imaginary part of the propagation constant in the traditional PhC waveguide is larger than that in the topological waveguide, which results in the higher loss of the traditional PhC waveguide. Topologically protected PBS In recent years, all-dielectric beam splitting based on the valley degree of freedom has been demonstrated. However, previous works focus on only one polarization mode.
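A per-lattice-constant loss figure such as 0.22 dB/a can be extracted from power samples along the waveguide by a least-squares fit of 10·log10(P) versus position. A sketch with synthetic data generated at exactly that loss:

```python
import math

# Hypothetical power samples (mW), one per lattice constant a; exponential
# decay in linear power corresponds to a constant dB/a loss.
loss_db_per_a = 0.22
powers = [1.0 * 10 ** (-loss_db_per_a * i / 10) for i in range(20)]

def fit_loss_db_per_a(powers):
    """Least-squares slope of 10*log10(P) vs position (in units of a);
    the negated slope is the propagation loss in dB per lattice constant."""
    n = len(powers)
    xs = list(range(n))
    ys = [10 * math.log10(p) for p in powers]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope

print(round(fit_loss_db_per_a(powers), 2))  # 0.22
```

Fitting in the log domain turns the exponential decay into a straight line, so the 0.22 dB/a (topological) versus 0.68 dB/a (trivial) comparison reduces to comparing two slopes.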
In our topological vector edge states, we find that the two polarization modes are locked to different valleys, and the different valleys correspond to different propagation directions. Therefore, the topological vector edge state can be used to realize a PBS. Now, we study the topological PBS based on the vector edge state. The schematic diagram of the PBS is shown in figure 5(a). The footprint of our device is 15 × 10 μm². The input TE and TM waves are guided along the 250 nm-wide wire waveguide to the right part of the PBS device. Some additional designs are used to increase the input- and output-coupling efficiency: an intermediate PhC waveguide has been introduced by overlapping the 560 nm-wide waveguide and the edge state. Meanwhile, on the left part of the PBS device, pillar PhCs are placed near the output port. They also form triangular lattices, with the lattice constant a2 = 400 nm and the pillar radius r2 = 150 nm. The valley edge states couple into the Gaussian beam of the pillar PhCs near the output port, and the direction of the Gaussian beam can be determined by k-space analysis. Moreover, we verify the above analysis by numerical simulation. The TM output wave propagates along the red arrow, as shown in figure 5(d). The TE output wave propagates along the black arrow in figure 5(e). The two polarization modes are guided in different directions; that is to say, the topological PBS is realized. In figure 5(f), we plot the normalized electric and magnetic field output intensity in the xoy-plane, where θ is defined as the angle with respect to the positive x-axis. The results show that the TM wave propagates in the 0° direction and the TE wave propagates in the 60° direction. These simulation results are consistent with the above analysis.
Moreover, we use the extinction ratio (ER) to measure the performance of the PBS, with ER = 10 log10(P1/P2), where P1 and P2 represent the output powers of the TE and TM waves. The ER of the PBS is more than 20 dB in the 0° and 60° directions. For more polarization modes, we also study the splitting effect of the PBS under different excitations, including diagonal polarization (TE + TM) and circular polarization (TE + iTM); see Appendix A for more details. As shown in figures 6(a) and (b), we consider two configurations of the PBS with Z-type and Ω-type topological interfaces. In the simulations of the Z-type interface in figures 6(c) and (e), the TM and TE waves output to the 0° and 60° directions, respectively. The output angle is again determined by the k-space analysis of the TVPCs and the pillar lattices, with the simulation results shown in figure 6(g). Similarly, for the Ω-type interface, we study the properties of the PBS, and the simulation results for the TM and TE modes are shown in figures 6(d)-(h), respectively. It is noted that the PBS works well even though these disorders exist. In order to quantitatively analyze the robustness of the PBS, we perform further calculations; the results are provided in Appendix B, where Z-type waveguide bends are considered in trivial and topological edge states and the transmission spectra are presented with and without the Z-type disorder. From a practical viewpoint, if the thickness of the silica substrate increases to more than 100 μm (up to infinity), the PBS still works, but the Ge PhCs then lose topological protection: for disorders like the Z-type and Ω-type interfaces, the robustness of the topological devices usually disappears. So, the thickness of the substrate may be increased toward infinity when robustness against disorders is not necessary. PBSs with very small footprints have been designed using the inverse-design method; as for our topological PBS, robustness is the highlight.
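The extinction ratio ER = 10·log10(P1/P2) is a one-liner; a 20 dB ER corresponds to the unwanted polarization carrying 1% of the desired one's power:

```python
import math

def extinction_ratio_db(p1, p2):
    """ER = 10 * log10(P1 / P2), with P1 and P2 the powers of the desired
    and unwanted polarizations at one output port."""
    return 10 * math.log10(p1 / p2)

# Hypothetical output powers (mW) at the TE port: ER >= 20 dB requires
# the unwanted polarization to carry at most 1% of the desired power.
print(extinction_ratio_db(1.0, 0.01))  # 20.0 dB
```

Every extra 10 dB of ER is another factor of 10 suppression of the unwanted polarization, which is why ER > 20 dB in both output directions indicates clean splitting.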
Only topological devices can work well with such disorders, like the Z-type and Ω-type interfaces. The robust performance of topological protection is clearly revealed in our PBS, and it is important for future photonic integrated circuits. Conclusion Based on the all-dielectric TVPCs, we have proposed the topological vector edge state and investigated its propagation characteristics by k-space analysis. It is found that the TE and TM edge states are locked to different valleys. Furthermore, we have designed a topological PBS based on the vector edge state. Crucially, the numerical simulations show that the topological vector edge state and the PBS are robust against certain disturbances. We believe that such a compact and robust PBS can find widespread applications in future photonic integrated circuits and quantum information processing.
Towards Assurance-Driven Architectural Decomposition of Software Systems Computer systems are so complex that they are usually designed and analyzed in terms of layers of abstraction. Complexity is also a challenge for the logical reasoning tools used to find software design flaws and implementation bugs, and abstraction is likewise a common technique for scaling those tools to more complex systems. However, the abstractions used in the design phase of systems are in many cases different from those used for assurance. In this paper we argue that different software quality assurance techniques operate on different aspects of software systems. To facilitate assurance, and for a smooth integration of assurance tools into the Software Development Lifecycle (SDLC), we present a 4-dimensional meta-architecture that separates computational, coordination, and stateful software artifacts early in the design stage. We enumerate some of the design and assurance challenges that can be addressed by this meta-architecture, and demonstrate it on the high-level design of a simple file system. Introduction Computer systems are so complex that they are usually designed and analyzed in terms of layers of abstraction. An operating system typically runs on top of the bare hardware. A run-time system possibly comes next, and then the set of abstractions introduced to programmers by a programming language. Each layer can restrict the power of the layer underneath, but it cannot be more powerful. For example, machine instruction sets allow arbitrary jumps to instruction addresses, while many high-level programming languages do not. Abstractions allow software designers to manage the inherent complexity of systems. Complexity remains a challenge for the logical reasoning tools used to find software design flaws and implementation bugs.
Scalability (or lack thereof) makes many such tools (e.g., model checkers) suitable only for relatively simple systems or toy examples. Abstraction, again, is a common technique for scaling those tools to more complex systems. However, abstractions used in system design are in many cases different from those used for assurance. In this paper, we argue that the software abstractions used for assurance can also guide software designs, making them simpler, easier to understand, and readily suitable for automated reasoning. In particular, we present a 4-dimensional software meta-architecture that separates the computational, coordination, stateful, and meta-programming aspects of a software system early in the design process. We argue that this systemic separation allows different assurance tools and techniques to be applied to each of the meta-architecture's dimensions. The rest of this paper starts by outlining some of the challenges of quality assurance for monolithic systems (Sec. 2). The 4-dimensional meta-architecture is then presented in Sec. 3 and demonstrated on a simple file system design (Sec. 4). Related work is briefly discussed in Sec. 5, and we finally conclude and outline some future directions in Sec. 6. Challenges of Assurance of Monolithic Systems In this paper, we refer to systems that do not separate computational, coordination, and state abstractions as monolithic systems. This section outlines some of the challenges primarily caused by this lack of separation. Automated Logical Reasoning Integrating automated reasoning tools into the Software Development Life Cycle (SDLC) can significantly improve the quality of software systems by finding design and implementation flaws early in the process. Due to the inherent complexity of software systems, many automated reasoning techniques and tools can efficiently operate only on abstractions rather than on the concrete system artifacts.
For example, model checkers operate on transition systems capturing abstract representations of system behavior. Theorem provers, on the other hand, are typically used to prove the correctness of abstract representations of algorithms. Abstraction is a prevalent design technique allowing software designers to manage system complexity. However, design abstractions are in many cases different from the abstractions used in automated reasoning. For example, objects, classes, and interfaces are typical abstractions used in object-oriented design. By contrast, when applying a software model checker to a program, program state is abstracted using, for example, bounded unrolling of loops. As a result, automated reasoning abstractions are in many cases synthetic. One serious consequence is abstractions drifting out of sync with concrete system artifacts, leaking flaws into the system even when those flaws are proven absent at the abstract level. Decomposing system artifacts, early in the design process, into different categories suitable for different reasoning and/or assurance techniques provides a synergy between design and reasoning abstractions. Partial Correctness The axiomatic verification literature carefully, and rightfully, distinguishes Total Correctness from Partial Correctness. Total Correctness is established when, given a guarantee that a precondition holds, an implementation is proven to satisfy a given postcondition and to terminate. Partial Correctness, on the other hand, describes the same ternary relationship between a precondition, a postcondition, and an implementation only if the implementation terminates. Proving whether an implementation will always terminate is undecidable, so Partial Correctness is a weaker guarantee than Total Correctness. Separating terminating computations from non-terminating coordination artifacts allows for designing modeling languages for computational algorithms that always terminate.
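The distinction can be made concrete with a contract-checked computation whose termination is evident from an explicit bound. The function and its contract below are our own illustrative sketch, not an artifact from the paper:

```python
def int_sqrt(n: int) -> int:
    """Largest r with r*r <= n.

    Precondition:  n >= 0
    Postcondition: r*r <= n < (r+1)*(r+1)

    The loop increases r and stops once (r+1)**2 exceeds n, so it runs
    at most n+1 times. That explicit termination argument is exactly
    what upgrades a partial-correctness claim (postcondition holds *if*
    the loop terminates) to total correctness.
    """
    assert n >= 0, "precondition violated"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(int_sqrt(10))  # 3
```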
In his seminal book The Art of Computer Programming, Knuth characterizes the notion of an algorithm in terms of five attributes, the first of which is finiteness. In his definition, Knuth explicitly states that "an algorithm must always terminate after a finite number of steps". Making termination explicit when modeling computational algorithms thus does not limit algorithm designers. Performance Assurance Performance analysis has always been an important aspect of software engineering. Theoretical asymptotic analysis of algorithm time and space complexities is an established field of Computer Science, but its practical counterpart has not fully materialized yet. We typically use profilers to measure the execution time or memory consumption of a specific implementation under a specific workload, but we cannot do so statically. Profilers are analogous to dynamic type checkers: they potentially report problems at run-time rather than compile-time, and potentially miss problems not covered by the workload used for analysis. Real-time systems in particular would benefit greatly from statically analyzing the performance of programs to make sure they meet their real-time constraints. Embedded systems with limited memory and processing resources would likewise benefit from checking the performance characteristics of a system before it is deployed. Power consumption has also been an important performance metric of programs running on handheld systems. Designers and users of reusable libraries can leverage resource-consumption contracts on modules to declaratively and soundly distinguish high-performance components from components that meet the same functional requirements but consume more processing time, memory, and/or power. Automatic performance analysis, however, is theoretically impossible in general, simply because almost all performance analysis problems reduce to the halting problem, which is undecidable.
Several heuristic algorithms have been proposed to address the problems of termination (e.g. ) and resource consumption. However, those are heuristics that cannot be used in a logically sound and complete analysis. Modeling algorithms using only terminating constructs would enable new performance analysis scenarios:
- Time Analysis: With algorithm models terminating by design, the compiler becomes responsible for making sure recursion is bounded and all loops have an upper bound on their number of iterations. Since the compiler needs either to infer those bounds or to require the model to make them explicit, worst-case asymptotic complexity can be calculated directly. Asymptotic time complexity can also be added to the contract of an algorithm. The implementation will be checked against that contract, and clients using that algorithm will be "taxed" that upper bound on time complexity when their individual complexities are calculated. Those contracts can then be used by performance analysis tools to find performance bottlenecks statically.
- Space Analysis: Similar to time complexity, space complexity can also be calculated directly. This will be an asymptotic analysis as well, because algorithms usually abstract away platform-specific details for portability. Still, given those asymptotic bounds, lower-level compilation phases can derive more accurate space analyses as they generate platform-specific code.
- Power Analysis: Power consumption is at least as important as time and space in handheld systems. Time complexity, individual instructions, the use of specific peripheral devices, and network bandwidth are among the factors affecting power consumption. Since we can at least asymptotically quantify each of those factors, we can again statically calculate an asymptotic bound on power consumption given performance contracts.
- Bandwidth Analysis: Similar to power, network bandwidth analysis can be performed based on time analyses and the different levels of overhead added at different layers of a communication stack.
A 4-Dimensional Meta-Architecture Given the differences in the nature of different software artifacts, we propose the meta-architecture in Fig. 1. This is a meta-architecture in the sense that it is system-independent and can be instantiated for different systems; it is also sometimes referred to as an architectural pattern. The three orthogonal models we identify here are computation, coordination, and state. The coordination model interfaces with the other two, accessing only their publicly exported constructs. For example, a message handler in the coordination model might need to perform a computation. This can be achieved by calling a function exported from the computation model. Similarly, a transition from one object state to another might involve calling a computational function. In that case, the function is called by a process in the coordination model, and the result is used to atomically update the stateful object. Fig. 1: A 4-dimensional meta-architecture (architectural pattern) decomposing architectural artifacts into four models: coordination, computation, state, and meta-programming artifacts. To allow for direct integration with program reasoning tools, the exported constructs of all three models form a meta-programming envelope. Those constructs need to export programmable interfaces for the models to integrate with each other, and also to be used by tools (e.g., verification, test-case generation, performance analysis). The three models presented earlier, together with the meta-programming envelope, form a 4-dimensional meta-architecture that can guide the design of software systems.
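One way to picture the three models and their allowed interactions is a toy instantiation in which a coordination process consumes messages, calls an exported computation, and atomically updates a stateful object. All names below are hypothetical; this is a sketch of the pattern, not of any particular system:

```python
import queue
import threading

# Computation model: a pure, terminating, exported function.
def double(x: int) -> int:
    return 2 * x

# State model: a stateful object that is only ever updated atomically.
class Register:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._value = 0

    def set(self, v: int) -> None:
        with self._lock:
            self._value = v

    def get(self) -> int:
        with self._lock:
            return self._value

# Coordination model: a message handler that calls the computation
# and then performs the atomic state update, as in the Fig. 1 flow.
def handler(inbox: queue.Queue, reg: Register) -> None:
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            return
        reg.set(double(msg))     # compute, then atomic state update

inbox = queue.Queue()
reg = Register()
worker = threading.Thread(target=handler, args=(inbox, reg))
worker.start()
inbox.put(21)
inbox.put(None)
worker.join()
print(reg.get())  # 42
```

Only the handler touches both models: double stays pure and Register encapsulates its own synchronization, so the correctness of the computation and the liveness of the coordination loop can be checked by different tools, which is the point made in the text.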
The main advantage of splitting the architecture into multiple models is that it frees software designers to use the formalisms that best suit the responsibilities of each model, instead of having to stick to a single set of formalisms throughout the design of the whole system. Table 1 presents a taxonomy of formalisms, with examples of particular formalisms that might be more suitable for particular models than others. Software reasoning tools are usually based on an underlying logic. For computation, Hoare logic has been widely used to reason about sequential programs, and truth judgments are usually defined in constructive logics. Coordination, on the other hand, is more about causality and timing constraints; temporal logics and linear logics are capable of expressing such judgments. Description logics have commonly been used to formalize data representation, and thus might be best suited for stateful object models. Similarly, different kinds of formal calculi are applicable to the different modeling languages. Variants of the λ-calculus (e.g., λC, λP, λD) have been designed with computation as their primary focus. Process calculi are all about modeling concurrent processes, communication channels, and interactions. Relational calculus and relational algebra have been used for decades as the underlying formalisms for relational data management in databases, and many relational concepts are applicable to in-memory stateful objects. Defining the semantics of language constructs is what gives models meaning, and different approaches to language semantics have been used over the years. Denotational semantics models language constructs as mathematical functions, so it is a natural fit for computational models. Operational semantics models the operational effects resulting from the evaluation of constructs; coordination is effectful, and we naturally tend to think about coordinating systems operationally.
Axiomatic semantics focuses on defining logical invariants that are to be maintained across evaluations, which is exactly what stateful objects, with their integrity invariants and constraints, are to be defined on top of. Typed languages usually integrate their type systems with their formal calculi. Systems ranging from System F's polymorphic types to dependent types associate types (of varying expressive power) with expressions. Concurrent systems need a different sort of type system (e.g., session types). Stateful objects are themselves treated as types in many language paradigms; elaborate typestate-based systems have, moreover, been designed to track the dynamic interfaces (types) of objects as their logical state changes. There are several approaches to program verification, and to the assurance of program properties in general, and again some approaches fit particular models better than others. Correctness of computations can be verified based on contracts (pre-conditions and post-conditions). Model checking approaches are best suited to verifying temporal properties of systems defined in process calculi. With invariants being first-class constructs of stateful objects, symbolic model checking can be used to verify that object state transitions do not violate object invariants. Example: A Simple File System To demonstrate the concepts presented in this paper, we use a simple file system as an example (Fig. 2). A file system can be thought of as a process communicating with other processes. In addition to the user-mode processes that need to read and write data from/to files, a file system also interacts with a storage system that manages a block-based storage device (e.g., a disk or tape), and possibly with other operating system processes. The different processes communicate with each other through message passing. Processes and their communication channels are modeled as blue model elements in Fig. 2.
Their dependencies on stateful objects and computations are modeled as dashed arrows. A process can call a computational function, and can also query or update the state of an object. Computations and stateful objects do not interact directly, though. (Fig. 2: A simple file system example. The diagram is split into three sub-models: stateful classes/objects (yellow), processes (blue), and computations (green).) The stateful objects managed by the file system are modeled as a UML class diagram in Fig. 2 (yellow model elements). Stateful objects include files, directories, and a list of storage blocks that can be allocated to different files. A file exclusively owns the set of blocks where its contents are stored. A directory contains a set of files and possibly sub-directories. Stateful objects have to preserve a set of invariants. For example, a block is either free or belongs to strictly one file. Also, the size of a file is a function of the number of blocks it contains. Structurally, the directory structure has to be acyclic, with each file/directory having at most one parent (i.e., directories form a tree). In addition to communication between processes and state transitions of stateful objects, a file system needs to compute the values that are passed across processes or used to determine which state an object should transition to. Computations are modeled as green elements in Fig. 2. For example, when a file is created or appended to, the number of blocks needed has to be computed using the BlockCount function. Files do not necessarily occupy contiguous blocks on a storage device, so, given a per-file block table, accessing a particular byte within a file requires translating that byte's index into a pair of values: a block identifier, and an index within that block. These are computed using the IndexToBlock function.
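As an illustration, the IndexToBlock translation and the block-ownership invariant described above can both be expressed as small, terminating computations. The block size and the block tables below are hypothetical values chosen for the sketch:

```python
BLOCK_SIZE = 4096  # bytes per storage block (assumed for illustration)

def index_to_block(block_table, byte_index):
    """Translate a byte index within a file into (block id, offset),
    in the spirit of the IndexToBlock computation: the per-file block
    table maps logical block positions to physical block identifiers,
    since files need not occupy contiguous blocks."""
    logical, offset = divmod(byte_index, BLOCK_SIZE)
    if logical >= len(block_table):
        raise IndexError("byte index beyond end of file")
    return block_table[logical], offset

def blocks_disjoint(files):
    """Check the invariant that a block belongs to strictly one file,
    given a mapping from file name to its block table."""
    seen = set()
    for table in files.values():
        for block in table:
            if block in seen:
                return False
            seen.add(block)
    return True

# A file stored in physical blocks 7, 3 and 9: byte 5000 falls in the
# second logical block (physical id 3), 904 bytes in.
print(index_to_block([7, 3, 9], 5000))  # (3, 904)
print(blocks_disjoint({"a.txt": [7, 3, 9], "b.txt": [4, 5]}))  # True
```

Because both functions are pure and obviously terminating, they belong to the computation model and can be verified against contracts independently of any process that calls them.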
The Access Control List (ACL) of a file encodes access permissions to that file, and determining whether a user, a group, or a process has access to that file typically involves a computation (HasPermission). Whenever possible, storing file contents in contiguous blocks improves access time due to locality patterns, especially for sequential file access. Defragmentation is a process in which file contents are moved to unused blocks that are physically closer to the other blocks used by the file. Deciding whether to defragment a volume is usually subject to several metrics that also need to be computed (DefragMetrics). File systems, much like other operating system components, are usually cited as examples of non-terminating software systems. A file system has to continuously respond to requests from user processes. This is typically modeled as an event loop, where the system waits indefinitely for an external event; when an event arrives it is processed, and the system then goes back to the waiting state. Read/write requests are examples of events processed by a file system. Modeling this event loop as a process rather than a computation makes it easier to assure the safety and correctness of the system. Coordination properties of processes (e.g., deadlock/livelock freedom) can be checked using a model checker without having to include computational states in the model. This can greatly reduce the state space of the model, improving the scalability of existing model checkers. At the same time, contract-based assurance techniques, or axiomatic tools based on Hoare logic, can be used to check the correctness of sequential computations without having to take the inherent concurrency of the system into consideration. Related Work Systematic decomposition of software systems into smaller units has been the driving force behind several software engineering paradigms. Seminal work by Parnas suggests hiding each design decision in a separate module.
Decomposing systems statically into functions, objects, or deployment-time services are among the most commonly used paradigms, and hybrid decomposition techniques have also been suggested. In addition, cross-cutting concerns inspired multi-dimensional decomposition techniques, such as Aspect-Oriented Programming (AOP) and Feature-Oriented Programming (FOP). The aforementioned techniques and paradigms base their decomposition decisions upon either problem-domain abstractions or the encapsulation of design decisions. This paper, on the other hand, suggests an orthogonal dimension of decomposition, taking assurance techniques and their abstractions into consideration. This can be thought of as a generalization of multi-dimensional separation of concerns, adding an explicit assurability dimension. Separation of the computation and coordination aspects of software systems was argued for by Gelernter back in the early 1990s. In this paper we follow that argument, and it is one of the inspirations behind the 4-dimensional meta-architecture. It is unfortunate, though, that almost 30 years later, monolithic system architectures are still the norm. Conclusion and Future Work In this paper we argued that different software quality assurance techniques operate on different aspects of software systems. To facilitate assurance, and for a smooth integration of assurance tools into the Software Development Lifecycle (SDLC), we presented a 4-dimensional meta-architecture that separates computational, coordination, and stateful software artifacts early in the design stage. We enumerated some of the challenges that can be addressed by this meta-architecture, and demonstrated it on a simple file system design. For future work, we plan to study the adequacy of existing software modeling tools, and potentially to provide tool support for the 4-dimensional meta-architecture.
Integrating modeling with logical reasoning tools (e.g., model checkers, theorem provers, SMT solvers), and effectively combining the results computed by reasoning tools are two future research directions as well. Tooling support might involve the design of notations/languages suitable for the different aspects of the meta-architecture. Integration of results from different reasoning tools would involve proving that this integration preserves soundness.
Assessment of the outburst hazard of little-studied lakes in the Mongun-Taiga massif As a result of climate warming, the glaciated area of mountain massifs is shrinking, which leads to the formation of lake-glacial complexes in areas of glacier degradation. These complexes are dynamic systems that change rapidly over time and are therefore unstable and potentially prone to outburst. Outbursts of moraine and periglacial lakes are dangerous hydrological phenomena: they can produce catastrophic floods and mudflows, causing serious damage to the infrastructure of settlements located downstream and often leading to loss of human life. The study of outburst-hazardous lakes is therefore necessary and constitutes an important applied problem associated with forecasting natural hazards. In this paper, the outburst hazard of little-studied moraine and periglacial lakes of the Mongun-Taiga mountain massif (Tyva Republic, Russian Federation) was assessed using a scoring method, supplemented to take regional characteristics into account and based on Earth remote sensing data. The assessment performed from satellite images showed that most of the massif's lakes have a high outburst hazard. Based on the assessment results, a group of lakes located in the upper reaches of the right branch of the Tolaity River was selected for a more detailed field survey (hydrological and geophysical studies were carried out). The field work on the selected group of lakes allowed us to correct the assessment, and the applicability of the method was estimated by comparing the field data with the data obtained from satellite images.
Complexity Around the Edges In today's higher education environment, the question of how to assess the value of what we as professors do should engage us all. Within this context, Donald R. Bacon and Kim A. Stewart's essay, "Why Assessment Will Never Work at Many Business Schools," is a laudable effort with important insights. Indeed, I am for the most part in substantial agreement with the authors' analysis as far as it goes. My belief, though, is that the assessment picture is more complex around the edges than Bacon and Stewart describe. There is complexity both at the micro level (that of the individual instructor) and at the macro level (that of our collective ability as business professors to articulate measurable learning goals). While it is tempting to assume otherwise, this complexity needs to be an ever-present part of our assessment discourse. Bacon and Stewart's thesis is that business pedagogical research is often statistically problematic, mainly because of the frequent use of small student samples. Along with small sample sizes, they identify numerous practical issues such as low reliability, variable effect sizes, and impractically long learning cycles. Their proposed solution is to turn to the discipline of
α-Tocopheryl Succinate Inhibits Osteoclast Formation by Suppressing Receptor Activator of Nuclear Factor-kappaB Ligand (RANKL) Expression and Bone Resorption Objective Osteoclasts are bone-resorbing multinucleated cells derived from the monocyte/macrophage lineage during normal and pathological bone turnover. Recently, several studies have revealed that alpha-tocopheryl succinate (TP-suc) has potent anti-cancer activities in vitro and in vivo. However, the effects of TP-suc on osteoclast formation and bone resorption remain unknown. Thus, in this study, we examined the effects of TP-suc on osteoclast differentiation and bone-resorbing activity in an inflammatory bone loss model. Methods Osteoclast differentiation assays were performed in cocultures of mouse bone marrow cells and calvarial osteoblasts in culture media containing interleukin-1 (IL-1). Osteoclasts were stained for tartrate-resistant acid phosphatase (TRAP). The level of receptor activator of nuclear factor-kappaB ligand (RANKL) mRNA was determined by reverse transcriptase-polymerase chain reaction (RT-PCR). ICR mice were administered intraperitoneal injections of TP-suc or dimethyl sulfoxide (DMSO) 1 day before the implantation of a freeze-dried collagen sponge loaded with phosphate-buffered saline (PBS) or IL-1 over the calvariae, and every other day for 7 days thereafter. The whole calvariae were obtained, analyzed by micro-computed tomography (µCT) scanning, and stained for TRAP. Results TP-suc inhibited osteoclast formation in cocultures stimulated by IL-1 and decreased the level of RANKL mRNA expression in osteoblasts. In addition, intraperitoneal injections of TP-suc prevented IL-1-mediated osteoclast formation and bone loss in vivo. Conclusion Our findings suggest that TP-suc may have therapeutic value for treating and preventing bone-resorptive diseases, such as osteoporosis.
INTRODUCTION Bone tissue forms the cartilages and the skeletal system and, through its mechanical functions, supports and anchors the muscles. Overactive osteoclasts unbalance bone remodeling and cause metabolic bone diseases, including periodontal disease from bacterial infection, osteoporosis, tumor-driven metastatic bone disease, rheumatoid arthritis, and degenerative arthritis. The receptor activator of nuclear factor-kappaB ligand (RANKL) is produced by osteoblasts in response to inflammatory cytokines such as interleukin-1 (IL-1); it is a crucial factor in generating and activating osteoclasts and plays a crucial role in the differentiation of osteoclast precursors into mature osteoclasts in the presence of macrophage colony-stimulating factor (M-CSF). Recently, Xiong et al. reported that RANKL produced by osteocytes and chondrocytes is essential for both osteoclast formation and activation. In addition, RANKL activates various signaling pathways by binding to RANK expressed on osteoclast precursors and is known to be required for differentiating and activating osteoclasts. Vitamin E was first reported in 1922 and is known to play an important role in reproduction. Vitamin E comprises tocopherols and tocotrienols, depending on the saturation of the side chain, each occurring as alpha, beta, gamma, and delta types; the four types are distinguished by the number and locations of methyl groups on the chromanol ring. Alpha (α)-tocopherol is the most abundant form in the human body and the representative vitamin E compound. It has therefore been studied extensively, although its biological functions have not been fully elucidated.
For example, gamma (γ)-tocopherol decreases cyclooxygenase-2 (COX-2) activation in macrophages treated with lipopolysaccharide (LPS) and in epithelial cells treated with IL-1, and inhibits the production of prostaglandin E2 (PGE2), whereas α-tocopherol has little such effect. Aggarwal et al. reported that the various tocotrienols exert anti-cancer and neuroprotective effects not shared by α-tocopherol. Meanwhile, Prasad and Edwards-Prasad reported that α-tocopheryl succinate (TP-suc), an esterified compound of vitamin E, had the most potent anti-cancer activity of the vitamin E derivatives, including α-TP acetate and α-TP nicotinate. In addition, it was reported that cancer treatment outcomes improved when TP-suc was used as an adjuvant to radiation and chemotherapy. Recently, we found that α-tocotrienol strongly inhibited bone resorption and osteoclast formation by suppressing RANKL expression in osteoblasts, whereas α-tocopherol had no such effect. However, despite the advantages of TP-suc mentioned above, studies of its effects on osteoclast formation and the resulting bone loss have been rare. In this study, we sought to discover how TP-suc affects osteoclast generation, its relation to RANKL expression in osteoblasts (known to be an essential factor for osteoclast generation), and the effect of TP-suc on IL-1-stimulated bone destruction in vivo. Osteoclast differentiation Macrophages from mouse bone marrow were obtained by the method mentioned in previous studies and were treated with M-CSF (30 ng/mL) and RANKL (100 ng/mL) for 4 days in α-minimal essential medium. Reverse transcriptase-polymerase chain reaction (RT-PCR) The RNA was prepared using the TRIzol reagent. IL-1-induced bone loss in vivo All animal experiments were reviewed and approved by the Seoul National University School of Dentistry Animal Care Committee (Seoul, Korea).
A lyophilized collagen sponge soaked with IL-1 (2 µg) or phosphate-buffered saline (PBS) was implanted over the calvarial bone of 5 mice in the test group and 5 mice in the control group (5-week-old, male, ICR) to induce IL-1-mediated bone loss. TP-suc (80 mg/kg body weight) was administered intraperitoneally one day before implantation of the collagen sponge and every other day for 7 days after the operation. The same volume (30 µL) of DMSO solution was administered intraperitoneally to the control group. All mice were anesthetized 7 days after the operation; the calvarial bones were washed with PBS and fixed in 4% paraformaldehyde for one day. The calvariae were analyzed with the methods mentioned in previous studies to obtain 3-dimensional (3D) images. Statistical analysis All data are expressed as mean ± standard deviation, and differences were analyzed by Student's t-test for 2 groups and one-way ANOVA (SPSS statistical software version 12; SPSS Inc., Chicago, IL, USA) for 3 or more groups. P values less than 0.05 were considered significant and are marked with an asterisk (*). The effect of TP-suc on osteoclast differentiation induced by co-culture of osteoblasts and bone marrow cells To investigate the effect of TP-suc on osteoclast differentiation, osteoblasts and bone marrow cells isolated from mice were co-cultured in the presence of IL-1 to induce osteoclast formation. TRAP staining after 7 days of culture with different concentrations of TP-suc showed that TP-suc significantly inhibited osteoclast formation in a concentration-dependent manner compared with the control group (Fig. 1A, 1B). Such inhibition of osteoclast differentiation was not observed with α-tocopherol or α-TP acetate (Fig. 1B).
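The two-group comparison named in the statistical-analysis paragraph (Student's t-test) can be sketched in a few lines of pure Python; the sample values below are hypothetical and are not data from the study:

```python
from math import sqrt
from statistics import mean, stdev

def students_t(a, b):
    """Two-sample Student's t statistic (equal-variance, pooled form),
    the two-group test named in the statistical-analysis paragraph.
    Declaring significance at P < 0.05 would additionally require
    comparing |t| against the t distribution with
    len(a) + len(b) - 2 degrees of freedom."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))

# Hypothetical osteoclast counts for control vs. TP-suc-treated wells:
control = [52.0, 48.0, 50.0, 55.0, 47.0]
treated = [20.0, 25.0, 18.0, 22.0, 24.0]
print(round(students_t(control, treated), 2))  # 14.87
```

A large |t| with 8 degrees of freedom, as here, corresponds to P well below 0.05, matching the kind of group difference marked with an asterisk in the figures.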
Thus, unlike the other vitamin E derivatives, TP-suc strongly inhibits osteoclast differentiation induced by co-culture. When osteoclast precursors were treated with different concentrations of TP-suc, osteoclast formation was comparable to that seen with α-tocopherol (Fig. 2A, 2B). These results indicate that TP-suc does not act directly on osteoclast precursors, and that it may instead inhibit osteoclast differentiation by acting on osteoblasts. The effect of TP-suc on RANKL expression in osteoblasts RANKL is a member of the tumour necrosis factor family. The effect of TP-suc on IL-1-induced bone destruction in vivo Having confirmed that TP-suc inhibited osteoclast formation in vitro, we next investigated its effect in vivo. IL-1 is a strong inflammation-related cytokine, and treatment with IL-1 led to severe calvarial bone loss, as shown by TRAP staining (Fig. 4A, top) and micro-CT images (Fig. 4A, bottom), whereas mice injected with TP-suc showed strong inhibition of the bone loss caused by IL-1 (Fig. 4A). In addition, the bone mineral content (BMC) lost to IL-1 was significantly restored by TP-suc (Fig. 4B), and the IL-1-driven increase in the number of TRAP-positive osteoclasts per unit area was significantly decreased by TP-suc (Fig. 4B). These results show that TP-suc inhibited osteoclast formation and the ensuing bone loss in vivo, as it did at the cellular level.
In addition, the inhibition by tocotrienol of the differentiation of osteoclast precursors into mature osteoclasts was completely restored by overexpression of c-Fos. It was also found that tocotrienol strongly inhibited bone resorption by mature osteoclasts on calcium-phosphate apatite-coated OAAS plates. However, the other member of the vitamin E series, α-tocopherol, did not inhibit osteoclast differentiation or bone resorption. Other study groups reported that vitamin E is required for bone calcification and remodelling, and tocotrienol was found to be more effective in bone protection than α-tocopherol. Meanwhile, it was interesting that Fujita et al. reported that, in a genetic mouse model, α-tocopherol promoted osteoclast fusion. (Figure legend: The indicated protein amounts were determined using enzyme immunoassay (ELISA) kits in cell lysates (RANKL) and in cell culture media (osteoprotegerin and prostaglandin E2). *P < 0.05.) The differences in results suggest that they are due to the methods of each experiment and the ages of the mice, meaning that additional studies are required to investigate the relation between vitamin E and bone metabolism. This study found that TP-suc, an esterified compound of vitamin E, strongly inhibited osteoclast differentiation and bone loss, unlike TP acetate and α-tocopherol, other derivatives of vitamin E. First, to investigate how TP-suc so strongly inhibited osteoclast differentiation in the co-culture of osteoblasts and bone marrow cells, we treated osteoclast precursors with different concentrations of TP-suc in the presence of M-CSF and RANKL. However, we found that TP-suc did not affect RANKL-induced osteoclast formation in this system. 
This suggests that TP-suc inhibited osteoclast differentiation by affecting the osteoblasts, which induce osteoclast differentiation, rather than by acting directly on the osteoclast precursors. The effect of TP-suc on RANKL expression in osteoblasts was then investigated. RANKL is known to be required for osteoclast differentiation, and various stimulating factors of osteoclast differentiation, including IL-1, TNF-α and 1,25(OH)2D3, increase RANKL expression and induce osteoclast differentiation. In particular, IL-1 is considered one of the most important mediators in osteoporosis accompanying chronic bone resorption caused by estrogen deficiency or various inflammatory factors. Lorenzo et al. reported that mice deficient in IL-1 receptors did not show the decrease in bone mass caused by ovariectomy, and Abramson and Amin found that blocking the IL-1 signalling system decreased bone destruction and cartilage loss in an animal model of rheumatoid arthritis. Interestingly, TP-suc strongly inhibited the increase in RANKL expression induced by IL-1 in osteoblasts (Fig. 3). IL-1 stimulated PGE2 generation, but the PGE2 production induced by IL-1 was not affected by TP-suc (Fig. 3). In addition, the inhibition of osteoclast differentiation and RANKL expression by TP-suc was not restored by adding PGE2 directly to the cells (not published). Therefore, the results suggest that TP-suc acts on factors other than PGE2. Additionally, the present study showed that the increase in RANKL expression induced by vitamin D was also inhibited by TP-suc (Fig. 3). Therefore, additional studies are required to discover whether the activation of RANKL promoters induced by IL-1 or vitamin D is inhibited by TP-suc. Lastly, we investigated whether the inhibition of osteoclast differentiation by TP-suc in vitro was reproduced in vivo using a mouse model. 
Injection of TP-suc strongly inhibited the excessive bone loss and osteoclast formation induced by IL-1 on the calvarial bone. These results coincide with the in vitro findings and suggest the possibility that RANKL expression in osteoblasts may also be inhibited by TP-suc in vivo. However, additional studies are required to investigate whether RANKL expression in vivo is inhibited by TP-suc. In addition, it has been reported that RANKL produced by chondrocytes and osteocytes is required to form and activate osteoclasts, meaning that the effect of TP-suc on RANKL expression in these cells should also be investigated. CONCLUSION TP-suc inhibited osteoclast formation in the co-culture system of osteoblasts and bone marrow cells but did not inhibit the RANKL-induced differentiation of osteoclast precursors into mature osteoclasts. TP-suc also decreased RANKL expression in osteoblasts. This study confirmed the inhibition of osteoclast formation and bone destruction by TP-suc in vivo using a mouse model. Therefore, these results indicate that TP-suc may be usefully applied to prevent and treat metabolic bone diseases involving excessive bone loss, including osteoporosis.
Co-evaluation of IT value as an activity for effective project appraisal at ex-ante stage The business benefits of IT projects are becoming the main determining factor for selecting projects at the ex-ante justification stage. At this stage, the identification and measurement of benefits is usually delegated to business management, with IT professionals supporting as technical advisors. However, there is still ongoing evidence that organisations have not been able to evaluate IT benefits appropriately. This paper highlights the importance of close collaboration between business managers and IT managers for effective and appropriate IT benefit evaluation at the ex-ante justification stage. Activity theory was applied as an analytical model to understand and explain the dynamics of the activity in pursuit of an improved outcome. The activity analysis views IT project benefit evaluation as a systemic entity consisting of elements with a shared motive for effective identification and measurement of IT value. The paper presents a case study in a large academic institution to assess the nature of joint participation of evaluators for effective IT project appraisal and to identify the roles and responsibilities needed for effective IT benefit evaluation. Close collaboration and partnership between users and IT professionals is shown to be a crucial component of the justification process. The roles and responsibilities of IT management extend beyond the task of technical advisors. New roles and responsibilities are proposed to resolve some of the challenges faced with the current justification process in the organization. The paper provides plausible insights for IT project evaluation research and for practitioners aiming to improve their benefit evaluation. 
Modeling of optical gain in a quantum cascade laser subjected to a strong magnetic field We present a comprehensive rate-equation-based model for calculating the optical gain in the active region of a quantum cascade laser in a magnetic field perpendicular to the structure layers, which takes into account all the relevant carrier relaxation processes. The magnetic field causes the electron energy subbands to split into series of discrete Landau levels whose arrangement depends strongly on the magnitude of this field. This makes it possible to control the population inversion in the active region, and hence the laser output properties, in particular the optical gain. A numerical illustration is provided for a GaAs/AlGaAs-based structure designed to emit radiation in the mid-infrared part of the spectrum.
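The Landau splitting the abstract refers to follows the textbook relation E_{i,n} = E_i + (n + 1/2)·ħeB/m*; a small sketch (using the standard GaAs conduction-band effective mass m* = 0.067 m_e, not parameters from the paper) shows how the fan spacing scales with the field B:

```python
# Illustrative sketch (not the paper's model): each subband E_i splits into a
# fan of Landau levels E_{i,n} = E_i + (n + 1/2) * hbar*e*B/m*, whose spacing
# grows linearly with B. m* = 0.067 m_e is the usual GaAs effective mass.
HBAR = 1.054571817e-34       # J*s
E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg

def landau_levels_meV(subband_meV, B_tesla, n_levels=3, m_eff=0.067 * M_E):
    """Landau fan (in meV) derived from one subband at field B (tesla)."""
    # hbar*e*B/m* in joules, converted to meV
    hw_c_meV = HBAR * E_CHARGE * B_tesla / m_eff / E_CHARGE * 1e3
    return [subband_meV + (n + 0.5) * hw_c_meV for n in range(n_levels)]

# Spacing is ~1.73 meV per tesla in GaAs, so the relative alignment of levels
# in adjacent subbands (and hence the inversion) can be tuned with B.
fan = landau_levels_meV(0.0, 10.0)
```

This linear-in-B spacing is why the abstract says the level arrangement, and therefore the gain, depends strongly on the field magnitude.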
Shifting Conceptions of Gender Justice in EU Policy on Women, Peace and Security United Nations Security Council Resolution 1325 on Women, Peace and Security was a landmark resolution passed in 2000 that, for the first time, recognised the differing impact of conflict on women and girls and called for greater participation of women in conflict resolution and peacebuilding. This Resolution has now been followed by a further eight Resolutions, which together make up the Women, Peace and Security Agenda. This paper examines United Nations Security Council Resolution 1325 on Women, Peace and Security through the lens of the three GLOBUS conceptions of justice: justice as non-domination, justice as impartiality and justice as mutual recognition. The paper argues that in the process and adoption of the 2018 EU Strategic Approach to UNSCR 1325, the EU has demonstrated a shift towards an approach to gender justice more closely aligned with justice as mutual recognition than in its previous approach to Women, Peace and Security, found in the 2008 Comprehensive Approach to UNSCR 1325 and 1820. Using empirical data from meeting notes and qualitative interviews, this paper demonstrates how the EEAS engaged in a surprisingly inclusive process in the consultations that led to the adoption of the new 2018 Strategic Approach, which featured extensive inclusion of a diversity of civil society voices. As a result, the document shows a shift towards a greater focus on diversity, civil society inclusion and gendered analysis. From the perspective of gender justice, this marks a shift from an approach previously aligned with justice as impartiality in the text of the 2008 document (but, in actuality, through a lack of implementation, an approach more akin to non-domination) towards an approach to gender justice that can be more firmly associated with justice as mutual recognition.
Virulence of geographically different Cryptosporidium parvum isolates in experimental animal model Cryptosporidium parvum is a coccidian parasite which causes gastrointestinal disease in humans and a variety of other mammalian species. Several studies have reported different degrees of pathogenicity and virulence among Cryptosporidium species and isolates of the same species, as well as evidence of variation in host susceptibility to infection. The study aimed to investigate the infectivity and virulence of two Cryptosporidium parvum isolates: the Iowa isolate (CpI) and a local water isolate (CpW). Thirty-three Swiss albino mice were divided into three groups: a negative control group (C), the CpI group infected with the Iowa isolate, and the CpW group infected with C. parvum oocysts isolated from a local water supply. Infectivity and virulence were measured by evaluating clinical, parasitological and histological aspects of infection. Significant differences were detected in oocyst shedding rate, clinical outcomes, and the histopathological picture of the intestine, lung, and brain. It was concluded that the local water isolate is significantly more virulent than the imported one.
Society for Social Medicine fifth annual meeting, 1961 I M LECK (Department of Social Medicine, University of Birmingham) In the United Kingdom the birth rate fluctuates between a peak in spring and a trough in autumn. From a study of data for England and Wales and for Birmingham, it is concluded that the amplitude of this seasonal fluctuation is relatively low for first births to mothers under 25 and that the time of the seasonal peak may be related to birth order. According to the same data the fluctuation is especially marked for illegitimate births and those of high socio-economic status, but not for multiple births. Analysis of a series of abortions from Belfast suggests that the abortion rate is not increased among winter conceptions. These findings may indicate that the seasonal fluctuation in births is due to variations in human behaviour rather than in the frequency of ovulation or abortion.
Robin Turner Donald Robin Turner Donald was born in Inverurie, Aberdeenshire. He remained deeply attached to that part of the world all his life. A scholarship took him to Fettes College. At medical school in Aberdeen he was a key figure in his class and was chairman of the Medical Society. Later, his organisation of the class reunions was
Reversible interactions between smooth domains of the endoplasmic reticulum and mitochondria are regulated by physiological cytosolic Ca2+ levels The 3F3A monoclonal antibody to autocrine motility factor receptor (AMFR) labels mitochondria-associated smooth endoplasmic reticulum (ER) tubules. siRNA down-regulation of AMFR expression reduces mitochondria-associated 3F3A labelling. The 3F3A-labelled ER domain does not overlap with reticulon-labelled ER tubules, the nuclear membrane or perinuclear ER markers and only partially overlaps with the translocon component Sec61. Upon overexpression of FLAG-tagged AMFR, 3F3A labelling is mitochondria associated, excluded from the perinuclear ER and co-distributes with reticulon. 3F3A labelling therefore defines a distinct mitochondria-associated ER domain. Elevation of free cytosolic Ca2+ levels with ionomycin promotes dissociation of 3F3A-labelled tubules from mitochondria and, judged by electron microscopy, disrupts close contacts (<50 nm) between smooth ER tubules and mitochondria. The ER tubule-mitochondria association is similarly disrupted upon thapsigargin-induced release of ER Ca2+ stores or purinergic receptor stimulation by ATP. The inositol 1,4,5-trisphosphate receptor (IP3R) colocalises to 3F3A-labelled mitochondria-associated ER tubules, and conditions that induce ER tubule-mitochondria dissociation disrupt continuity between 3F3A- and IP3R-labelled ER domains. RAS-transformed NIH-3T3 cells have increased basal cytosolic Ca2+ levels and show dissociation of the 3F3A-labelled, but not IP3R-labelled, ER from mitochondria. Our data indicate that regulation of the ER-mitochondria association by free cytosolic Ca2+ is a characteristic of smooth ER domains and that multiple mechanisms regulate the interaction between these organelles.
Pathophysiology of ovarian steroid secretion in polycystic ovary syndrome. The ovary in polycystic ovary syndrome (PCOS) produces markedly increased amounts of steroids in response to gonadotropin stimulation. Because FSH secretion is under tight long-loop negative-feedback control and LH is not, hyperandrogenism is the primary clinical manifestation of excess steroid production in PCOS. However, estrogen production by multiple, small follicles may inhibit FSH secretion sufficiently to prevent selection of a single, dominant follicle. Ovarian stimulation testing has suggested that ovarian hyperandrogenism is a result of dysregulation of the androgen-producing enzyme P450c17. ACTH stimulation testing is consistent with dysregulation of adrenal P450c17 in about two-thirds of hyperandrogenic women. In most cases dysregulation appears to be due to an intrinsic abnormality of P450c17, or to an abnormality of the autocrine/paracrine factors which regulate P450c17. Both LH and insulin hypersecretion are most often a result of the steroid secretory abnormalities. Once present, they may amplify the underlying cause of dysregulation of P450c17.
Synthetic biology of natural products is the design and construction of new biological systems by transferring a metabolic pathway for products of interest into a chassis. Large-scale production of natural products is achieved by coordinated expression of the multiple genes involved in the genetic pathway of the desired products. Promoters are cis-elements and play important roles in balancing metabolic pathways controlled by multiple genes by regulating gene expression. A detection plasmid for Saccharomyces cerevisiae was constructed based on the DsRed-Monomer gene, which encodes a red fluorescent protein. This plasmid was used for screening efficient promoters for application to multi-gene-controlled pathways. First, eight pairs of primers specific to the DsRed-Monomer gene were synthesized. Rapid cloning of the DsRed-Monomer gene was performed by step-by-step extension of a short region of the gene through a series of PCR reactions. All cloned sequences were confirmed by DNA sequencing. A vector named pEASYDs-M containing the full-length DsRed-Monomer gene was constructed and used as the template for the construction of an S. cerevisiae expression vector named pYeDP60-Ds-M. pYeDP60-Ds-M was then transformed into S. cerevisiae for heterologous expression of the DsRed-Monomer gene. SDS-PAGE, Western blot and fluorescence microscopy results showed that the recombinant DsRed-Monomer protein was expressed successfully in S. cerevisiae. The well-characterized DsRed-Monomer gene was then cloned into the yeast expression vector pGBT9 to obtain a promoter detection plasmid, pGBT9Red. To determine the efficacy of pGBT9Red, six promoters (four inducible and two constitutive) were amplified by PCR from the S. cerevisiae genome and separately cloned into pGBT9Red upstream of the DsRed-Monomer gene. 
The fluorescence microscopy results indicated that all six promoters (GAL1, GAL2, GAL7, GAL10, TEF2 and PGK1) can regulate the expression of the DsRed-Monomer gene. The successful construction of pGBT9Red lays the foundation for further analysis of promoter activity and for screening promoter element libraries.
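The step-by-step extension scheme can be pictured as repeatedly widening the cloned window, one primer pair per PCR round; the following toy sketch (invented sequence and coordinates, not the actual DsRed-Monomer gene or primers) reproduces the idea of eight successive rounds rebuilding the full gene from a short core region:

```python
# Toy illustration of step-by-step extension cloning: each round's primer pair
# anneals to the ends of the current fragment and extends it outward by `step`
# bases, until the full gene is covered. Sequence and coordinates are invented.
def stepwise_extension(gene, core_start, core_end, step):
    """Extend a cloned core region outward by `step` bases per PCR round."""
    fragment = gene[core_start:core_end]
    lo, hi, rounds = core_start, core_end, 0
    while (lo, hi) != (0, len(gene)):
        new_lo, new_hi = max(0, lo - step), min(len(gene), hi + step)
        # this round's primer pair adds the flanking bases on either side
        fragment = gene[new_lo:lo] + fragment + gene[hi:new_hi]
        lo, hi, rounds = new_lo, new_hi, rounds + 1
    return fragment, rounds

gene = "ATG" + "GCTTACGGATCCGATTAAGGCTAG" * 8 + "TAA"  # 198-base toy "gene"
full, rounds = stepwise_extension(gene, core_start=90, core_end=108, step=12)
```

With these invented numbers the core is recovered to the full toy gene in eight rounds, mirroring the eight primer pairs mentioned above.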
Speed control of switched reluctance motor using sliding mode control strategy A robust speed drive system for a switched reluctance motor (SRM) using a sliding mode control strategy (SLMC) is presented. After reviewing the operation of an SRM drive, an SLMC-based scheme is formulated to control the drive speed. The scheme is implemented using a microcontroller and a high-resolution position sensor. The parameter-insensitive characteristics are demonstrated through computer simulations and experimental verification.
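As a rough illustration of what such a sliding-mode speed loop involves (not the paper's actual SRM model, gains, or implementation), the sketch below uses a generic first-order mechanical model and a boundary-layer saturation in place of the discontinuous sign() term to limit chattering; all constants are illustrative assumptions:

```python
# Rough sliding-mode speed-loop sketch. Sliding variable s = speed error;
# control u = K * sat(s / phi), a boundary-layer version of K * sign(s).
# Mechanical model: J*dw/dt = T - B*w - T_load. All values are illustrative.
def simulate(w_ref=100.0, K=8.0, J=0.01, B=0.05, T_load=0.2,
             dt=1e-4, steps=20000, phi=2.0):
    """Simulate the closed loop; returns the final speed (rad/s)."""
    w = 0.0
    for _ in range(steps):
        s = w_ref - w                          # sliding variable (speed error)
        u = K * max(-1.0, min(1.0, s / phi))   # sat() boundary layer
        torque = max(0.0, u)                   # motoring torque only
        w += dt * (torque - B * w - T_load) / J
    return w
```

With these gains the speed settles just inside the boundary layer (a steady error of roughly 1 rad/s); shrinking phi approaches ideal sliding at the cost of chattering, which is the usual trade-off in SLMC designs.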
Influence of Local Incidence Angle Effects on Ground Cover Estimates An often neglected source of uncertainty on estimated cover percentage is caused by local view angle effects, where parts of bare soil patches are not visible due to vegetation blocking the sensor line-of-sight. When estimating the fractional cover of pixels far off-nadir or on slopes facing away from the sensor, plant cover can be overestimated by more than 50%, seriously decreasing the accuracy. This problem is inherent when using wide field of view (FOV) sensors, or satellite sensors tilted off-nadir. In the following, results of a study using airborne HyMap hyperspectral data with a wide FOV of ±32° are presented. Ground cover fractions for bare soil, green (PV) and dry (NPV) vegetation were derived using an iterative multiple endmember spectral mixture analysis (MESMA) approach. Using field measurements of geometric canopy parameters as well as terrain information, typical error margins of local incidence angle effects on unmixing results were calculated. These results are further included as a component in a per-pixel accuracy estimate introduced here.
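The core of such a spectral mixture analysis is a per-pixel linear unmixing step; a minimal sketch is given below, with made-up four-band endmember reflectances and plain least squares standing in for the full iterative MESMA endmember search:

```python
# Minimal per-pixel linear unmixing sketch (the core step inside MESMA; the
# full method iterates over candidate endmember sets). The 4-band endmember
# reflectances below are invented for illustration.
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares endmember fractions, clipped and renormalised to sum to 1."""
    fractions, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    return fractions / fractions.sum()

E = np.array([[0.30, 0.35, 0.40, 0.45],   # bare soil
              [0.05, 0.08, 0.45, 0.50],   # green vegetation (PV)
              [0.20, 0.25, 0.30, 0.35]])  # dry vegetation (NPV)
pixel = 0.5 * E[0] + 0.5 * E[1]           # synthetic 50/50 soil-PV mixture
fractions = unmix(pixel, E)               # close to [0.5, 0.5, 0.0]
```

The view-angle bias discussed above enters afterwards: when vegetation occludes soil along an oblique line-of-sight, the soil fraction recovered here is an underestimate, which is what the per-pixel error margins are meant to capture.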
Tips and tricks of durable cryoballoon based left atrial appendage isolation To the Editor, We have just read an interesting small-sized study by Chen et al presenting the efficacy and safety of cryoballoon (CB) based left atrial appendage isolation (LAAi) in patients with persistent atrial fibrillation (PsAF) who had a history of at least two previous ablations for AF. The authors preferred to apply a 240 msec bonus freeze in all participants. The durability of LAAi, assessed during LAA occlusion, was found to be 100% at a median 6-month follow-up, with 80% atrial tachyarrhythmia-free survival at the same time point. Although the sample size was too small to support any suggestion for clinical practice, the study was well designed to show the durability of CB for LAAi. We know well that the myocardial sleeves extending from the LA onto the pulmonary veins (PVs) are composed of circularly and longitudinally oriented bundles of cardiomyocytes of variable thickness and length. However, the LAA muscular tissue is a continuation of LA tissue rather than an extension, and there is thicker tissue to isolate around the LA-LAA junction than in the antral region. Thus, we believe longer CB applications with a good occlusion grade are needed to create durable lesions. In a previous study, our team also reported that empirical CB-based LAAi in addition to PV isolation was an effective and safe method compared to PVI alone in PsAF. We observed high variability in time to LAAi (115.5 msec). 
Although we performed cryo applications of 450 msec in the first 20 patients, we thereafter changed our protocol according to time-to-LAAi and concluded that if the time-to-LAAi was shorter than 150 msec, we apply cryoablation for 300 msec without a bonus freeze, and if the time-to-LAAi was longer than 150 msec, we apply a bonus freeze of 300 msec. In our experience, the occlusion grade of the LAA and the time-to-LAAi were much more important than the nadir temperature during LAAi. To our knowledge, we were the first in the literature to report the occurrence of left circumflex artery vasospasm, in 4% of patients after LAAi, without any symptoms or signs of ischemia, in whom high-dose intracoronary nitrate administration was required. Therefore, we suggest routine coronary angiography in patients undergoing LAAi because of the close anatomical proximity. Additionally, in a study by Mohanty et al on recurrence of AF after multiple previous ablations, non-PV triggers were shown to be responsible for AF maintenance in the majority of patients, and ablation of these triggers enhanced ablation success. Furthermore, in the presence of permanent PVI and no non-PV triggers on isoproterenol, empirical isolation of the LAA and CS provided a high rate of arrhythmia-free survival. Thus, the authors may comment on how they decided on the cutoff time for freezing and bonus freeze; how can we be sure about the absence of coronary vasospasm without invasive imaging? And finally, why did the authors prefer to perform cryoballoon-based empirical LAAi in patients with a history of at least two previous ablations, given that other non-PV triggers may also be responsible for the tachyarrhythmia?
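The duration rule described above can be restated compactly; the function below is only an illustration of that decision logic as written (durations kept exactly as reported in the letter, cutoff behaviour at exactly 150 msec assumed), not clinical guidance:

```python
# Restatement of the letter's application-duration rule, for illustration only
# (not clinical guidance). Durations are kept exactly as reported; behaviour
# at exactly 150 msec is an assumption, as the letter does not specify it.
def laa_freeze_plan(time_to_isolation_msec):
    """Return (freeze_msec, bonus_msec) from the time-to-LAAi cutoff."""
    if time_to_isolation_msec < 150:
        return 300, 0      # short time-to-LAAi: single application, no bonus
    return 300, 300        # long time-to-LAAi: add a bonus freeze
```

Making the rule explicit like this also highlights the question posed to the authors: on what basis was the cutoff itself chosen?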
Copulatory behavior of Microstigmatidae (Araneae: Mygalomorphae): a study with Xenonemesia platensis from Argentina Abstract. Microstigmatidae are small ground-dwelling and free-living spiders. The present study reports on the copulatory behavior of Xenonemesia platensis Goloboff 1989, constituting the first report on sexual behavior in the Microstigmatidae. Our findings in X. platensis did not show evidence of pheromones associated with silk. The courtship behavior of males consisted of quivers of legs I and II, brusque movements of the palps, and leg tapping with legs II. During mating, a novel courtship behavior by males was observed that consisted of tapping and scraping with legs II on the female's legs. The present study not only gives the first description of mating behavior in Microstigmatidae, but also reports strong evidence of nongenital copulatory courtship activity in mygalomorph spiders. 
Keywords: Argentinean spider, South America, courtship, mating, reproductive biology Many spider species could be compelling targets for evolutionary studies due to their unusual reproductive biology (Eberhard 2004); it appears that a species of microstigmatid provides just such a target. Microstigmatidae are small ground-dwelling and free-living spiders (Griswold 1985) restricted to habitats offering constant high humidity and even temperature (Lawrence 1953). This family comprises 15 species, nine of them distributed in the New World (Platnick 2011). Members of this family are characterized by rounded book-lung openings and extremely shortened posterior lateral spinnerets (Goloboff 1995). Microstigmatid species in particular have long been overlooked, both because of their rarity in collections and their extremely small size (adult males are 1-3 mm in total length) (Raven & Platnick 1981). The spiders are not known to construct burrows or retreats and are supposed to make minimal use of silk. They readily attack and feed upon small insects (Griswold 1985). There are few published records of either the natural history or the ecology of microstigmatid species (Griswold 1985; Old World species; Brazilian species; Argentinean and Uruguayan species). Here we report on the copulatory behavior of Xenonemesia platensis Goloboff 1989, constituting the first report on sexual behavior of a microstigmatid. We collected three adult males and three adult females on Martín García Island, Buenos Aires, Argentina (34°11′25″S, 58°15′38″W), in August 2009. Voucher specimens are still alive and will be deposited in the Museo de La Plata, División Entomología, La Plata, Buenos Aires, Argentina. All the females molted before we made observations, so they did not have stored sperm. In the laboratory we kept them individually in plastic Petri dishes (9 cm diameter × 1.5 cm high), with soil as substrate and wet cotton wool moistened daily. 
These containers allowed us to follow their behavior as they constructed their burrows. We fed all individuals weekly with cockroaches (Blattella germanica) of approximately 10 mm length. We used a 12 h light/dark cycle, and the room temperature during breeding and observations was 26.7 °C ± 1.52 SD. In order to observe mating, we placed each female's dish inside a larger cylindrical glass container (19 cm diameter and 10 cm high) with a layer of soil approximately 6 cm deep. A depression excavated in the center of the larger container for the female's Petri dish avoided the destruction of the female's shelter during the transfer. The mating arena was illuminated with artificial fluorescent light. For each encounter, we removed the male from his Petri dish and carefully introduced him into the larger container housing the female's dish, at quite a distance from the female. We performed nine male-female pairings of X. platensis in all combinations, and both males and females were given three possible mating opportunities. We considered only the first pairing for the description of behavioral units during courtship and mating sequences, because female behavior in particular may change after a first successful insemination, and since these spiders are very rare, they probably never encounter potential mates at such high frequencies. We recorded copulations with a Panasonic SDR-S7 Handycam and analyzed the video records with a PC program (Sony Vegas 9.0) in order to describe behavioral patterns accurately. We used slow-motion and single-frame advance modes. Durations and frequencies are given as averages ± standard deviations. We present the frequency and duration of behavioral units during the three mating exposures and five copulations in Table 1. When X. platensis engaged in courtship and mated, a common pattern occurred (Fig. 1a). All males began courtship when they directly contacted the female's body. 
During this initial contact, females remained largely motionless. The male did not start courtship when he contacted female silk, but did so only after contacting the female herself. Early studies proposed that mygalomorph spiders lacked chemical cues in sexual communication (Baerg 1958; Platnick 1971). However, more recent studies have reported the presence of pheromones associated with female silk threads (Costa & Pérez-Miles 2002; Ferretti & Ferrero 2008). Our findings in X. platensis could indicate the absence of pheromones associated with silk, but obviously more detailed studies are needed to confirm this. After initial contact, the male quivered with the first and second pairs of legs, followed by fast upward and downward movements of the pedipalps. The male made nine behavioral bouts with an average duration of 0.52 s ± 0.06 SD (range = 0.44-0.60 s). At first glance, the quivers observed in the courtship of X. platensis could be similar to the body vibrations observed in some theraphosids (Costa & Pérez-Miles 2002; Ferretti & Ferrero 2008), but in X. platensis the quiver is generated by the first and second pairs of legs instead of pair III as observed in theraphosids. After approximately 46 s, the female raised her body up to an angle of almost 60° relative to the substrate, with the first pair of legs elevated and legs III and IV on the substrate. At this instance, the male made alternating movements of the pedipalps, touching the genital zone of the female (palpal boxing). We usually observed palpal boxing alternated with quivers. Palpal boxing occurred six times, with an average duration of 1.92 s ± 0.95 SD (range = 0.96-3.40 s). Subsequently, the male vigorously hit legs I and II of the female with the tarsi of his legs II extended. This behavior consisted of high-frequency leg tapping in an alternating or synchronous phase. 
The male made seven leg tappings with a mean duration of 1.00 s ± 0.39 SD (range = 0.68-1.80 s). The brusque movements of the palps and the scraping with legs II during courtship have not been reported in any other mygalomorph spider. These abrupt palpal movements could be similar to the "twitching" observed in a diplurid (Coyle & O'Shields 1990), which consisted of distinct, sudden flexions or extensions of one or more legs or palps. Next, the male clasped the female's palps and chelicerae between his first pair of legs (Fig. 1b). The distal portion of each male tibia, which lacks tibial apophyses or megaspines, was placed against the prolateral surface of each female pedipalp base. The male placed his second pair of legs against the female's first pair of legs, as if pushing them, and then palpal insertion attempts began. From the nine encounters, we obtained five successful matings. All of the first copulations were successful during the first three pairings. In the second pairings, we observed two successful copulations, and no matings occurred in the third set of pairings. In one case, the female rejected the male with vigorous lateral abdominal oscillations while raising her body. In three cases, males never initiated courtship. During copulation, the male positioned himself under the female, facing her sternum. The female's pedicel was flexed upwards so that the cephalothorax-abdomen angle was 30-50°. This mating position continued during the palpal insertion attempts, and the copulation lasted 4.61 min. The male made three palpal insertions with a mean duration of 25.25 s ± 12.97 SD (range = 11.96-37.88 s). During palpal insertion attempts, the male continued tapping with legs II and quivering. Afterward, while the male was inserting his palp into the female's genital opening, he added a new behavioral unit. He raised the second pair of legs to an angle of 90° between the femur and patella and quickly moved the legs upward and downward. 
The male's tibiae, metatarsi, and tarsi remained extended, and the tarsi beat and scraped the second and third female coxae. The male performed seven repetitions of this leg-beating behavior with a mean duration of 9.40 s ± 5.03 SD (range = 4.36–18.88 s) and a rate of 14 beats per second. The male's tapping with his second legs during copula could be interpreted as courtship in copula. This behavior, as far as we know, is unique to X. platensis and has not been previously reported in mygalomorphs (Costa & Pérez-Miles 1998, 2002; Ferretti & Ferrero 2008; Jackson & Pollard 1990). Finally, when the spiders separated, the male quickly moved backwards. In the observed matings of X. platensis, the copulation position achieved was similar to that of most mygalomorphs (Costa & Pérez-Miles 2002), and the behavior displayed by this species during mating is conspicuous and unusual among mygalomorph species. The female's apparent unresponsiveness throughout courtship and copulation may be a test of the male's quality (Eberhard 1985); she may be monitoring his overall performance, not only genital stimulation. The sexual selection by female choice hypothesis predicts selective cooperation in which males perform luring behavior and females choose a mate according to the male's courtship display (Thornhill 1983; Eberhard 1985, 1996, 1997). One way a male may prevail in this competition is by courting the female during copulation (copulatory courtship) (Eberhard 1994, 1996) and thereby inducing her to use his sperm. Males of hundreds of species of animals perform nongenital behavior during copulation that appears to be courtship; this behavior includes biting, tapping, rubbing, squeezing, shaking, vibrating, singing to, and feeding the female (Eberhard 1994, 1996) (Table 1). Few studies have directly tested the possibility that copulatory courtship affects paternity.
In insects, copulatory courtship can result in decreased female mobility during copulation (Humphries 1967) and increased resistance to subsequent matings (King & Fischer 2005). These effects could be operating in the mating behavior of X. platensis, given the female's largely motionless state during courtship, copulation, and post-copulation. They could also lead to some form of resistance to subsequent mating, given that all three females accepted a first male, two accepted a second male, and none accepted a third. These observations are preliminary, however, and more data are needed to test these hypotheses. In conclusion, the present study not only gives the first descriptive overview of mating behavior in the Microstigmatidae, but also reports strong evidence of nongenital copulatory courtship in mygalomorph spiders, both of which offer a promising field of research in the context of sexual selection.
Stationary-Wavelet-Based Despeckling of SAR Images Using Two-Sided Generalized Gamma Models In this letter, a stationary-wavelet-based despeckling algorithm built on the two-sided generalized gamma distribution (GΓD) model is proposed. We first introduce the two-sided GΓD as a flexible and efficient model for the wavelet coefficients of logarithmically transformed synthetic aperture radar intensity or amplitude. The strength of the model is highlighted in terms of its fit to the data, its low computational cost, and the ease of parameter estimation. Using empirical results, we then motivate the GΓD as a model for the wavelet coefficients of the noise-free signal. The GΓD model parameters are estimated with moment methods, using absolute central moments of the wavelet coefficients of both the noisy signal and the noise. Finally, we exploit the prior information contained in the model by designing a Bayesian maximum a posteriori (MAP) estimator for the noise-free wavelet coefficients. Experimental results demonstrate the superiority of our method in terms of simultaneously reducing speckle and preserving structural details.
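The overall pipeline the abstract describes can be illustrated with a deliberately simplified sketch: log-transform the intensity so multiplicative speckle becomes additive, apply a stationary (undecimated) wavelet transform, estimate signal and noise statistics from moments, shrink the detail coefficients, and invert. The sketch below substitutes a one-level undecimated Haar transform for the full stationary wavelet transform and a Wiener-style Gaussian shrinkage gain for the letter's two-sided GΓD MAP estimator; the function name and all numerical choices are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def despeckle_haar_swt(intensity):
    """Simplified despeckling sketch: log domain + undecimated Haar +
    moment-based Wiener shrinkage (a Gaussian stand-in for the GGD MAP step)."""
    log_img = np.log(intensity + 1e-12)        # multiplicative speckle -> additive noise

    # One-level undecimated (stationary) Haar transform via circular shifts.
    def lo(a, axis): return (a + np.roll(a, -1, axis)) / 2.0
    def hi(a, axis): return (a - np.roll(a, -1, axis)) / 2.0

    ll = lo(lo(log_img, 0), 1)                 # approximation band, left untouched
    details = [lo(hi(log_img, 0), 1),          # three detail bands
               hi(lo(log_img, 0), 1),
               hi(hi(log_img, 0), 1)]

    shrunk = []
    for band in details:
        sigma_n = np.median(np.abs(band)) / 0.6745      # robust (MAD) noise std estimate
        var_sig = max(band.var() - sigma_n**2, 0.0)     # method-of-moments signal variance
        gain = var_sig / (var_sig + sigma_n**2 + 1e-12) # Wiener-style shrinkage gain in [0, 1)
        shrunk.append(gain * band)

    # The redundant Haar bands sum back to the log image exactly, so
    # reconstruction is a plain sum; exp() returns to the intensity domain.
    return np.exp(ll + sum(shrunk))
```

Because the log-domain processing estimates the geometric rather than arithmetic mean, the output is biased low; the letter's GΓD MAP estimator and proper parameter estimation address this far more carefully than this two-band Gaussian surrogate.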