link: https://f1000research.com/articles/6-231/v1
date: 07 Mar 17
{ "type": "Opinion Article", "title": "Evidence-informed capacity building for setting health priorities in low- and middle-income countries: A framework and recommendations for further research", "authors": [ "Ryan Li", "Francis Ruiz", "Anthony J. Culyer", "Kalipso Chalkidou", "Karen J Hofman" ], "abstract": "Priority-setting in health is risky and challenging, particularly in resource-constrained settings. It is not simply a narrow technical exercise, and involves the mobilisation of a wide range of capacities among stakeholders – not only the technical capacity to “do” research in economic evaluations. Using the Individuals, Nodes, Networks and Environment (INNE) framework, we identify those stakeholders, whose capacity needs will vary along the evidence-to-policy continuum. Policymakers and healthcare managers require the capacity to commission and use relevant evidence (including evidence of clinical and cost-effectiveness, and of social values); academics need to understand and respond to decision-makers’ needs to produce relevant research. The health system at all levels will need institutional capacity building to incentivise routine generation and use of evidence. Knowledge brokers, including priority-setting agencies (such as England’s National Institute for Health and Care Excellence, and the Health Intervention and Technology Assessment Program, Thailand) and the media can play an important role in facilitating engagement and knowledge transfer between the various actors. Especially at the outset but at every step, it is critical that patients and the public understand that trade-offs are inherent in priority-setting, and careful efforts should be made to engage them, and to hear their views throughout the process. There is thus no single approach to capacity building; rather a spectrum of activities that recognises the roles and skills of all stakeholders.
A range of methods, including formal and informal training, networking and engagement, and support through collaboration on projects, should be flexibly employed (and tailored to specific needs of each country) to support institutionalisation of evidence-informed priority-setting. Finally, capacity building should be a two-way process; those who build capacity should also attend to their own capacity development in order to sustain and improve impact.", "keywords": [ "health technology assessment", "evidence-informed priority setting", "health policy", "institutions", "universal health coverage", "knowledge transfer and exchange", "capacity development", "INNE framework" ], "content": "Introduction\n\nSetting priorities in health is demanding, risky and fraught with fearsome challenges. One can be caught out by getting them right, for instance when an influential person sees them as a threat to their interests; and one can be caught out by getting them wrong, which often results in the country’s resources being wasted by not having the biggest impact possible on people’s health.\n\nThe international Decision Support Initiative (iDSI, www.idsihealth.org) is a practitioner-led partnership that facilitates priority-setting (Chalkidou et al., 2016b; Li et al., 2016). Its mission is to guide decision-makers towards effective and efficient healthcare resource allocation strategies for improving people’s health. It aims to achieve this by providing a combination of practical support (hands-on technical assistance and institutional strengthening) (Glassman et al., 2012) and knowledge products (high-quality, policy-relevant research and tools).\n\nAs part of iDSI’s inception phase in 2014-15, iDSI committed to scoping out an ‘evidence-informed capacity building programme’ that would shed light on the capacity gaps in low- and middle-income countries (LMIC) when it comes to setting health priorities, and explore how these gaps might begin to be addressed.
The programme included an in-depth review of priority-setting capacity in Sub-Saharan Africa (Doherty, 2015). This paper draws on that review and on broader literature and frameworks concerning capacity building, in an attempt to provide some generalizable insights that could be applied by iDSI in future, and indeed by other stakeholders for priority-setting in LMICs.\n\nCapacity for setting health priorities can be addressed at different levels. Within the broader health policy and political environment, this means examining the central agencies and governmental structures that direct and govern the system and their capacity to deliver whatever has been determined to be their tasks in priority-setting. It also means ensuring that there is effective communication and control that makes the system a functioning network rather than just an assembly of unconnected parts. At the organisational and individual levels, one must address the capacities of specific players or stakeholders in the system and whether these fulfil their purposes. The goal of this capacity building should include transitioning from resource allocation strategies that are historically based on disease burden, “expert opinion” or global advocacy (Chalkidou et al., 2016b). A more strategic approach to priority-setting would be informed by evidence (that is, evidence on cost-effectiveness and social values as well as disease burden) and deliberative processes (Baltussen et al., 2016; Chalkidou et al., 2016a; Chalkidou et al., 2016b; Culyer & Lomas, 2006; Lomas et al., 2005).\n\n\nAims, objectives, and scope\n\nIn this paper we outline the kinds of capacity needed to support decision makers when setting health priorities, where such capacity can be found, and how best it can be created.
We set out a framework for understanding the key elements of capacity building, how iDSI partners are currently involved in supporting capacity development, and finally a research and action agenda that seeks to inform any future capacity building strategy, adopted by iDSI or other development initiatives. We do not provide an exhaustive map of all possible stakeholders and solutions in priority-setting, but offer a starting point for thinking about who the most important stakeholders are and how best they might be approached.\n\n\nA framework for understanding capacity building\n\nThe United Nations Development Programme INNE Model is one way in which thinking about capacity can be organised (UNESCO International Institute for Capacity Building in Africa, 2006). This model covers four general categories of capacity building: Individual, Node, Network and Enabling Environment, each of which has distinctive characteristics and requires different approaches to building capacity further, especially to deliver what is required for universal health coverage (UHC). Each category also entails different segments of the population, whom we conventionally term ‘stakeholders’ (Thaiprayoon & Smith, 2015; UNESCO International Institute for Capacity Building in Africa, 2006). Figure 1 gives examples of how existing and future planned activities of the iDSI partnership fit within the INNE framework.\n\niDSI’s practical support in Indonesia provides an example of how the INNE framework can be applied to inform capacity building in health technology assessment (HTA, Figure 2) (HITAP International Unit, 2015).
During an HTA workshop for policymakers and researchers in priority-setting, participants identified relevant stakeholders and populated the framework with activities that would enable Indonesia to reach the end goal of institutionalising HTA for sustainable and equitable UHC.\n\nThe iDSI Reference Case for Economic Evaluation, which details principles, methods and reporting standards for the planning and conduct of economic evaluation, has a specific focus on LMIC decision-makers (Wilkinson et al., 2016) and is an intervention at the Environmental level of INNE. Its preparation involved high-level, global stakeholder engagement ranging from the Bill and Melinda Gates Foundation, who initially commissioned the work, to researchers and policymakers from LMICs as well as high-income countries.\n\nCapacity building activities at one level within the INNE framework can have an impact on, and be influenced by, interventions at other levels (UNESCO International Institute for Capacity Building in Africa, 2006). For example, the development of regional HTA ‘hubs’ is an important aspect of the iDSI approach to capacity building, which is an intervention at the Network level of the INNE. The two country hubs, one in South Africa (Priority Cost Effective Lessons for System Strengthening, PRICELESS-SA) and another in China (China National Health and Development Research Center), are focal points for networks of academic institutions and government-aligned think tanks, aimed at eventually supporting neighbouring countries in using evidence in policymaking (Hofman et al., 2015; Zhao, 2016) in areas such as health benefit package design or updating formularies (Li et al., 2016).
The creation of these regional hubs will always involve the strengthening of existing or nascent processes and methods for HTA within the hub countries themselves – in other words, capacity building at the Node level of the INNE framework (Figure 1).\n\nLocal and regional capacity strengthening can occur in parallel, through collaboration between institutions on specific projects. An example is the ongoing collaboration between PRICELESS-SA at the University of the Witwatersrand and the University of KwaZulu-Natal to support the refinement of the Essential Medicines List in Tanzania. This technical assistance project not only provides a service to the client country, Tanzania, but it supports the hub’s own capacity development and helps establish the relationships needed to support HTA use and development within South Africa and the region (Hofman et al., 2015).\n\nThe framework makes it clear that, in capacity building, there is a broad range of stakeholder groups to be targeted at country, regional and global levels. Some of these groups operate across INNE levels. For instance, the National Institute for Health and Care Excellence (NICE) in the UK and Health Intervention and Technology Assessment Program (HITAP) in Thailand can be thought of as ‘knowledge brokers’, whose core function is to support the translation of evidence into policy in priority-setting, through convening and interfacing between researchers and decision-makers (Jongudomsuk et al., 2012; Lomas, 2007). Thus NICE and HITAP are a special example of Nodes that have significant functions across the Network of academic, clinical and policy institutions.\n\nIt also follows that there is no single approach to capacity building to support effective priority-setting, but rather a spectrum of activities that identifies the different roles and skill sets of all involved in the process.
Focusing on narrowly defined ‘technical’ or research-related activities will not address the reality that priority setting in health takes place within a broader institutional and political framework (Hawkins & Parkhurst, 2016). This reinforces the value of viewing capacity within the INNE framework and of adopting a tailored approach to building it that addresses the different needs of actors within the system. However, it does require identification and categorisation of all relevant stakeholders. We therefore recommend that a tool for mapping stakeholder groups be developed that can be adapted to the context of different countries.\n\n\nTypes of capacity\n\nTable 1 lists the principal target stakeholders and capacity needs. It is not intended to be an exhaustive list, but rather a starting point for clarifying the types of capacity that may have to be built and their related activities.\n\nHTA = health technology assessment; INNE = Individual, Node, Network, Environment\n\nExerting direct influence on the Environment may be difficult. Thus, most capacity-building activities will target specific stakeholders at the lower levels, as a means of fostering a broader, friendlier Environment for evidence-informed priority setting. This especially applies when engaging with the media, professional organisations, and with funders and supra-governmental bodies, who are well positioned to influence the broader Environment. Capacity building activities for other stakeholders could operate at the Network level, for example through supportive conferences or regionally based researcher/policy-maker meetings (Hofman et al., 2015).\n\nDepending on local needs, targeting certain groups such as agencies newly tasked with evidence-informed priority setting could be part of a strategy to support Nodes. Nodes include units that produce evidence to inform priority-setting (e.g.
the HTA Committee and its secretariat in Indonesia responsible for generating HTA recommendations), and groups who demand evidence to inform priority-setting (e.g. policymakers in health ministries who will consider HTA recommendations in their decision-making), as well as the knowledge brokers at the interface between the two and with patients and the general public (Lomas, 2007).\n\nFinally, all capacity-building activities ultimately involve Individuals; the potential impact of empowering individuals to become champions and leaders within their respective organisations and networks should not be underestimated (West et al., 2015).\n\nEach of these stakeholders needs different levels of understanding and skills, beyond the purely technical, and will therefore need different methods of training, including formal and informal approaches. Suitable training resources will also need to be arranged and, if necessary, created. New institutions may be needed and the existing ones need to be adapted, and capacities currently spread across poorly connected individuals or institutions within a given country or region need to be identified and consolidated, and brought together into the network. It is also important to stress that inadequate attention to the capacity needs of any one target stakeholder can easily undermine efforts to build priority setting mechanisms that function effectively at other levels (see Figure 1). This is the keystone of the INNE approach.
However, capacity building should never be done in isolation, but rather be an ongoing interdisciplinary and multiprofessional process involving knowledge transfer and exchange between stakeholders.\n\nTo develop the capacities for any target stakeholder group in any particular context, it would be necessary to assess the following:\n\nAdequacy of existing capacity\n\nCapacities that target stakeholders think are needed\n\nKey outcomes that target stakeholders want to achieve from capacity building\n\nThe best strategy needed to address these capacity gaps\n\nPractical constraints that have been identified, such as human resource pipeline issues.\n\nSuch a baseline assessment will help ensure that all capacity building activities are appropriately targeted.\n\n\nUnpacking capacity needs at each level of the INNE framework\n\nCapacities of the health system. Developing the capacity of a health care system to support priority-setting, and the associated capacities across the various levels of INNE, requires institutionalising priority-setting agencies at provincial, national and regional levels, ensuring that appropriate structures, processes and incentives are in place. There are several major examples of agencies responsible for setting health priorities in entire countries, or parts of them in the case of federal systems of governance (Dittrich & Asifiri, 2016). However, the analytical advantages and weaknesses of the various models are only beginning to be exposed, and their sustainability has yet to be fully tested (Dittrich & Asifiri, 2016). Prescriptive guidance would therefore be premature, and any useful guidance is unlikely to be one-size-fits-all.\n\nNevertheless, factors that may support institutionalisation of explicit priority setting in the context of LMICs have recently been identified in a policy brief co-authored by members of HTA agencies belonging to HTAsiaLink, a regional network (Chootipongchaivat et al., 2016).
Its recommendations (see Box 1) are based on the experience of seven settings: China, Taiwan, Indonesia, the Republic of Korea, Malaysia, Thailand and Vietnam. The authors identify five conducive factors for HTA development and provide a practical step-by-step guide, including a checklist for monitoring the progress of HTA introduction and development (Chootipongchaivat et al., 2016). Although the policy brief focuses on the use of HTA to inform coverage decisions under UHC, these recommendations could also be applied to HTA in general resource allocation.\n\n1. Human resource development within HTA research organizations as well as decision-making bodies and other relevant stakeholders using HTA.\n\n2. Development of a core team or HTA institute committed to HTA that will coordinate HTA activities and gain the trust of partners.\n\n3. Linking HTA to policy decision-making mechanisms including the pharmaceutical reimbursement list/essential drug lists, immunization programs, high-cost medical devices package, and public health programmes.\n\n4. Implementing HTA legislation to ensure sustainability through participation, transparency, and systematic application of HTA in the policy process rather than focusing on technical issues.\n\n5. International collaboration, especially in the formative stages, for financial and technical capacity building support and sustained international knowledge exchange across agencies in the longer term.\n\nAny priority-setting frameworks that simply generate evidence of what works and what represents good value for money are inadequate (Chootipongchaivat et al., 2016; Rutter, 2012). “Good value” policy options may have no direct bearing on financial protection or on the distribution of financial and disease burden, which are important issues for UHC (Voorhoeve et al., 2016).
In addition, the “right” decisions, even when evidence-based, do not always get implemented for a number of practical, political or other reasons. It may be entirely rational for policymakers to make decisions against evidence-based recommendations, if by doing so they serve their own political interests, for instance to win electoral support from the ‘median voter’ (whose particular concerns may differ from what would benefit the population as a whole) (Hauck & Smith, 2015). This underscores the importance of developing a robust, principled process that considers such constraints, within which explicit methods for evidence-informed priority-setting can be institutionalised (Chalkidou et al., 2016b).\n\nWhen we refer to ‘institutionalising’ priority-setting and HTAs, we seek to emphasise the importance of developing accepted norms and rules, and sustaining effective working relationships between relevant policymakers and research institutions (Hawkins & Parkhurst, 2016; March & Olsen, 2008). Norms and rules based around notions of transparency, accountability, citizen engagement, openness, deliberation, and contestability are valuable beyond having intrinsic moral merits, because they improve both the quality and credibility of decisions arising from evidence-informed priority-setting (Culyer, 2012; Daniels, 2000). Relevant processes that should be built in when institutionalising priority-setting and HTA include (Culyer, 2012; Daniels, 2000):\n\nThe possibility of external comment so that interested parties may see what there is to comment on;\n\nConsultation, through which external parties are invited both to engage with decision makers and their advisers and to enter into discussion about whatever aspects of the process may be underway at the time.
These include assumptions, comparators, model building, literature review, and matters to do with the process itself;\n\nAppraisal of evidence, including evidence about publicly held values, evidence brought to the deliberation process by clinical and other professional participants, and discussions on how best to proceed when evidence is poor, second hand, irrelevant (as may be the case with evidence from high-income settings that is being considered in a LMIC context), or completely absent;\n\nDeliberation, the most complete form of engagement, in which relevant stakeholders participate in the actual decision making themselves. The final determination or conclusion of the process may be excluded from this process, since that responsibility most likely lies with those having political accountability.\n\nThese processes contribute to good governance in evidence-informed priority-setting, which enables it to become more resilient to vested interests and political change (Hawkins & Parkhurst, 2016).\n\nCapacities of funders and development partners. Global funders and development partners, including supra-governmental organisations like the World Health Organisation (WHO) and the World Bank, have significant power in shaping health priorities at the country level in LMICs (especially in low-income countries) (Chalkidou et al., 2016b; Glassman & Chalkidou, 2012). 
This can operate directly through their purchasing or provision of specific health care interventions, delivery platforms, and investment into research and technical assistance activities related to the above; or indirectly through their role as setters of global standards and norms, for example with the iDSI Reference Case (Wilkinson et al., 2016) and WHO CHOICE (Chalkidou et al., 2016b).\n\nFunders and development partners need the specific capacity to commission, receive, interpret and use HTA and priority-setting research to inform not only their own choices in global health, but also the global standards and norms which client countries look to. Health system strengthening efforts could also be targeted towards the multitude of stakeholders and capacity gaps identified here, with the broader objective of supporting effective, evidence-informed and sustainable priority-setting that is country-owned (Chalkidou et al., 2016b). There should be a shared understanding within and between funders, delivery partners and LMIC country partners of the goals or outcomes of aid investment, in terms of funding, research outputs and technical assistance. This shared understanding could take the form of a common theory of change, that is a framework outlining the preconditions, causal linkages and assumptions underlying the desired investment goals (Li, 2016).\n\nAt an internal iDSI Board meeting in Bangkok in January 2016, we asked four funder representatives who were present (from the Bill and Melinda Gates Foundation, UK Department for International Development, Rockefeller Foundation and the World Bank, respectively) what internal capacity-building they felt would be useful in their organisations in order to support priority-setting better. 
Three funders felt that their organisations should develop rapid response services for country partners requesting technical assistance, both in terms of being able to direct them to relevant and useful evidence sources and of identifying international experts capable of providing immediate short-term technical support. The fourth funder reiterated the importance of having the capacity to use value for money in guiding investment decisions, pointing to the iDSI Reference Case (Wilkinson et al., 2016) and other ongoing efforts to incorporate components of HTA in the grant-making process.\n\nCapacities of policy and professional decision makers. There seems to be considerable variation in the extent to which both policy and professional groups possess the capacities detailed in Table 1, and there has been little research thus far that documents it (though see Hailey & Juzwishin, 2006, highlighting that the inability of policymakers to formulate appropriate questions risks diminishing the policy-relevance of HTA programmes). Routine follow-up and monitoring of the impact of HTA research by decision makers as an integral part of evidence-informed priority-setting is rare, as is evidence of any matching training programmes targeted at developing such capacities among policymakers. Fundamentally, there needs to be political commitment among policy leaders to progress to UHC and use evidence and tools such as HTA to help achieve that aim (Li et al., 2016).\n\nCapacities of health service managers. NICE in the UK engages service managers in their HTA processes to select healthcare interventions and clinical guideline recommendations at the national level (National Institute for Health and Care Excellence, 2015). In clinical research and health services research more generally, however, health service managers are rarely included.
This is possibly both symptomatic of and perpetuating the phenomenon that most HTA has focused on comparing individual interventions, as opposed to service delivery platforms or different organisational modes for human resources (Morton et al., 2016). HTA may therefore not provide sufficient information on the broader financial and organisational implications of competing resource allocation strategies that health service managers need in order to make fully informed decisions (MacDonald et al., 2008). The use of HTA to support planning by local health service managers in the UK has also arguably been hindered because of the relative inaccessibility of, and concerns over the acceptability of, the specialist methods employed (Airoldi et al., 2014).\n\nIrrespective of the scope and complexity of HTA, the practical implementation of evidence-informed policies and practices crucially depends on managers’ ability to set and enforce clinical standards, gain local adoption of good practice in both primary and secondary care settings, arrange funding, and bring local communities along through supportive and constructive local engagement. In the UK the National Health Service (NHS) has a relatively well-established tradition of clinical governance (Scally & Donaldson, 1998; Swage, 2003), and routine performance measures of healthcare managers and providers now include how successful they are in implementing clinical governance (National Institute for Health Research, 2017). In LMICs on the road to UHC, the capacity of health service managers to understand the implications of evidence-informed developments and competing spending options, and to manage resources accordingly, will require specific training and ongoing support.\n\nCapacities of patients and the public. Setting priorities in health implies that some interventions and some patient groups will be covered and others will not.
There is a risk that because of this, those who do not see themselves as privileged, along with their carers and supporters, lose whatever enthusiasm they may have had for developing UHC. Their continuing engagement, and understanding of the process and decisions, are important both morally and for the success of the strategy (Clark & Weale, 2012).\n\nPatients and the general public need to understand the implications of policy and clinical decisions and of the decision-making process, the extent to which specific decisions are evidence-informed and represent efficient and ethical use of public or private money, and they need to participate with an active voice in decision shaping that affects their interests. Capacity development activities could include training of health workers, patients and the public in research projects in the field, and other forms of patient involvement (e.g. HTA appraisal panels, citizens’ juries) (Littlejohns & Rawlins, 2009) in conjunction with the development of tools to facilitate stakeholder engagement in priority-setting (Bolsewicz Alderman et al., 2013; Makundi et al., 2007; Weale et al., 2016). Any such tools will be context-sensitive, if not context-specific, and take into account the socio-cultural values and political environment of the country or region (Bolsewicz Alderman et al., 2013).\n\nCapacities of academic institutions, researchers and research managers. Healthcare researchers for LMICs tend to regard research capacity development in terms of the acquisition of research skills (e.g. World Health Organization, 2015), mainly through masters and PhD programmes offered by major centres in high-income countries. They measure success in terms of the various kinds of training received and in authorship in so-called ‘high-impact’ journals, which are predominantly published in English. 
Equally important, however, is how skilled local research communities are in engaging with policy and professional end-users, discerning their decision-related needs for evidence, and identifying what is researchable, and in translating those needs into research projects and programmes that can be implemented locally (with or without assistance from elsewhere). For low-income countries the key lessons to be learned may lie not with high-income countries, but with middle-income countries.\n\nNetworks that link the research community to policy decision-makers, professional regulators and professional colleges (Ezeh et al., 2010), as well as institutional and personal relationships, all need to be explored further. These relationships exist to some extent in all countries but may not focus particularly on the development of strategic commitments to the provision of timely and relevant evidence and analysis, or their institutionalisation into established practices through standing committees, routine communication (e.g. electronic) and other standard operating procedures (Hawkins & Parkhurst, 2016; March & Olsen, 2008).\n\nWith respect to technical capacities for research in LMICs, while there are relatively abundant resources in public health and epidemiology (although Ezeh et al., 2010, highlighted particular gaps in Sub-Saharan Africa), there is an even greater shortage of skills in high quality economic evaluation that would enable research teams to offer evidence of cost-effectiveness to achieve better and more equitably distributed health outcomes (Doherty, 2015; World Health Organization, 2015). In Sub-Saharan Africa, there is also a shortage of competency in systematic reviewing, especially in reviewing designed to economise on the need for new research by appropriate and critical translation of results from previous studies (Doherty, 2015). 
Networks between African institutions for research collaboration in health economic evaluation are also limited; the collaborations that do exist tend to be with North American or European institutions (Doherty, 2015; Hernandez-Villafuerte et al., 2016). The significant health economic research activity, capacity, and capacity-building initiatives in related disciplines that already exist in South Africa suggest that it is well placed as a hub country for catalysing South-South collaborations with other African countries (Ezeh et al., 2010; Hernandez-Villafuerte et al., 2016).\n\nA more comprehensive and strategic approach to capacity building might embody the following additional features (see also Ezeh et al., 2010):\n\nLeadership, management and administration\n\nA general commitment to creating training opportunities for research managers and trainers of technical, management and leadership skills in research, and developing local centres of excellence without creating lasting dependencies on foreign centres of excellence;\n\nA formal system for identifying local training needs for multi-disciplinary and professional competencies and the recipients of training, with a particular focus on South-South engagements\n\nA formal system for training in skills required for middle and senior research managers\n\nA comprehensive attempt to match training courses of all kinds (full time, part time, short, long, with or without internships, in workplaces or at special centres, with a range of certificated competence or one, etc.)
and for various purposes (single discipline, exposure to cognate or complementary disciplines);\n\nTraining in research grant application and management\n\nParticipation in a strategy for increasing the ability of universities and research centres (public and private) to train junior researchers and take on leadership roles.\n\nTechnical and research skills\n\nA strategic assessment of the multi-disciplinary skills required in each context, including professional skills in economic evaluation and application of the iDSI Reference Case, and consideration of equity and other ethical objectives where relevant (Norheim, 2016)\n\nRecruitment of researchers into disciplines where a more skilled workforce is needed\n\nTraining in interpretation of transferability (sometimes termed generalisability) of research evidence developed outside the country of potential application\n\nTraining in systematic reviewing.\n\nKnowledge transfer and exchange\n\nTraining in knowledge transfer and exchange (other than communication to fellow academics) to ensure that research is timely, understandable and useful for the target audience. This involves engaging decision-makers in research processes, synthesising interdisciplinary knowledge into key actionable messages for relevant decision-makers, and disseminating plain language research summaries via a range of channels other than academic publications, including social media and face-to-face exchanges between researchers and end-users (Lavis, 2016; Lomas, 2007; see also section on Capacities of knowledge brokers)\n\nTraining in fit-for-purpose publication plans with specific readerships in mind\n\nNon-self-serving clarity as to the meaning of “high quality” research and “high quality” research outlets.
What this refers to is research that is rigorously conducted and reported, genuinely novel, and relevant to policy and clinical practice, and research outlets which have transparent, rigorous editorial and peer-review policies, and are trusted by and influential among academic and policy leaders in the given field, but not necessarily restricted to so-called ‘high impact’ journals in English.\n\nCapacities of knowledge brokers. Knowledge brokers and knowledge brokering agencies are intermediaries between worlds of research and action (Lomas, 2007). Their role involves “all the activity that links decision makers with researchers, facilitating their interaction so that they are able to better understand each other’s goals and professional cultures, influence each other’s work, forge new partnerships, and promote the use of research-based evidence in decision-making.” (Canadian Health Services Research Foundation, 2003). Capacity building is part of their philosophy: for researchers to be able to do applied research and decision-makers to be able to use it (Lomas, 2007).\n\nKnowledge brokers can push for improvements on the evidence-supply side, for instance by packaging it better and by disseminating it in a more organised way (Lavis, 2016). They can also work on the evidence-demand side, for instance by advocating for the creation of institutional mechanisms that privilege the use of research evidence and building capacity to find and use research evidence efficiently (Chalkidou et al., 2016b; Lavis, 2016).\n\nTo achieve all of this, knowledge brokers must have the capacity to understand the cultures of both the research and decision-making environments. They need to be able to identify the ‘right’ stakeholders from both sides, and achieve meaningful knowledge transfer between them. 
Stakeholders from the different environments include researchers and decision makers, government agencies and local hospitals, professional organisations and community workers, and so on.\n\nOf particular relevance to LMICs, where capacities on both demand- and supply-sides may be sparse, is focusing capacity-building efforts on existing agencies or groups of individuals with some formal linkage between the research and decision-making circles, including those who themselves function as a research unit (for example, the technical unit within a ministry of health) (Li, 2016). HITAP in Thailand is a good example of an institution with a dual function as a generator of primary research in health economics and health policy, and as a knowledge broker through HTA processes that convene stakeholders including policymakers, clinicians, and civil society (Jongudomsuk et al., 2012).\n\nCapacities of media organisations and journalists. The competing claims made by different stakeholders on a finite budget lie at the crux of priority-setting in health, and in many countries the media wield significant power to influence how these claims are understood by the general public and acted upon by policymakers, an issue perhaps more important than ever in the so-called “post-truth” era (Marmot, 2017). We mean media in the broadest sense, so we are including journalists and editors in TV and print and those who communicate primarily through electronic media such as Twitter and blogs, in particular those with a specialist interest in health, government policy, science, or development.\n\nWhile the role of the media varies from country to country, there will be technical, political and ethical issues in priority-setting that are shared across settings (Briggs, 2016; Hauck & Smith, 2015; Kieslich et al., 2016; Rumbold et al., 2017).
There will also be general principles and common challenges to overcome in understanding and communicating notions such as priority-setting, rationing and fair access to services, for example the fact that evidence-informed priority-setting decisions are made with the whole population in mind but will inevitably lead to winners and losers among individual patients. We are not suggesting any compromise to editorial independence or the need for journalists to hold key stakeholders accountable. Instead, the aim is to encourage a greater understanding of the complexity of the priority setting process and to enable better informed and impartial reporting.\n\n\nDiscussion\n\nSetting explicit priorities in health is not simply a narrow technical exercise. It involves the mobilisation of a wide range of skills and experience. There are many types of “capacity” required – not only the capacity to “do” research. If the aim is to get research translated into policy, in a procedurally legitimate manner, a strategy for capacity building needs to take into account the various stakeholders involved in the evidence-to-policy continuum.\n\nWe have outlined the kinds of capacity needed to support decision makers when setting health priorities, where such capacity can be found, and how it can best be created. We have set out a framework for understanding the key elements of capacity building, and how iDSI partners are currently involved in supporting capacity development. Application of the INNE framework highlights the broad range of stakeholder groups that need to be targeted in capacity building when setting health priorities, particularly in LMICs. It follows therefore, that there is no single approach to capacity building, but rather a spectrum of activities that recognise the different roles and skill sets of all those involved in the process. 
It will require dedicated resources, and nurturing of traditional academic expertise will be one of many important components.\n\nIn Table 2 we propose a set of research recommendations addressing the capacity needs of different stakeholder groups in priority-setting, in order to inform any future capacity building strategy adopted by iDSI or other development initiatives. Given the focus on targeting different stakeholders, we also recommend that a tool for mapping relevant stakeholder groups be developed that can adapt to different national contexts (Li, 2016).\n\niDSI = International Decision Support Initiative; LMIC = low- and middle-income country\n\nCapacity building should be a two-way process; those who engage in capacity building should also reflect on their own capacity development to ensure their activities have the impact desired in the short and long term (Itad & NICE International, 2016). iDSI has a Monitoring, Evaluation and Learning framework to track ongoing implementation, collect evidence of iDSI contributions to stated aims, enhance accountability to members, stakeholders and funders, and encourage ongoing reflection and learning (Li, 2016). In addition, iDSI and its core partners have subjected themselves to independent reviews in order to reflect on progress, achievements, and operational arrangements (Health Intervention and Technology Assessment Program, 2009; Itad & NICE International, 2016). A Mid-Term Learning Review has been conducted to ensure iDSI remains fit-for-purpose and to help identify potential capacity gaps and how these can be addressed (International Decision Support Initiative, in preparation).", "appendix": "Author contributions\n\n\n\nAJC developed the initial draft working paper which formed the basis for the current manuscript. RL and FR prepared the first draft of the manuscript.
All authors revised the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis paper was produced as part of iDSI (www.idsihealth.org), a global initiative to support decision makers in priority-setting for UHC. The work received funding from Bill & Melinda Gates Foundation (grant OPP1087363, “Establishing Priority Setting Institutions in Developing Countries”), the UK Department for International Development, and the Rockefeller Foundation.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe thank Jane Doherty for her work on capacity-building in Sub-Saharan Africa (Doherty, 2015) which inspired the current paper, and Yot Teerawattananon for suggesting the use of the INNE framework.\n\n\nReferences\n\nAiroldi M, Morton A, Smith JA, et al.: STAR--people-powered prioritization: a 21st-century solution to allocation headaches. Med Decis Making. 2014; 34(8): 965–975. PubMed Abstract | Publisher Full Text\n\nBaltussen R, Jansen MP, Mikkelsen E, et al.: Priority Setting for Universal Health Coverage: We Need Evidence-Informed Deliberative Processes, Not Just More Evidence on Cost-Effectiveness. Int J Health Policy Manag. 2016; 5(11): 1–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBolsewicz Alderman K, Hipgrave D, Jimenez-Soto E: Public engagement in health priority setting in low- and middle-income countries: current trends and considerations for policy. PLoS Med. 2013; 10(8): e1001495. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBriggs A: A View from the Bridge: Health Economic Evaluation - A Value-Based Framework? Health Econ. 2016; 25(12): 1499–1502. PubMed Abstract | Publisher Full Text\n\nCanadian Health Services Research Foundation: The Theory and Practice of Knowledge Brokering in Canada’s Health System.
A report based on a CHSRF national consultation and a literature review. 2003. Reference Source\n\nChalkidou K, Glassman A, Marten R, et al.: Priority-setting for achieving universal health coverage. Bull World Health Organ. 2016a; 94(6): 462–467. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChalkidou K, Li R, Culyer AJ, et al.: Health Technology Assessment: Global Advocacy and Local Realities; Comment on “Priority Setting for Universal Health Coverage: We Need Evidence-Informed Deliberative Processes, Not Just More Evidence on Cost-Effectiveness.” Int J Health Policy Manag. 2016b; 6(4): 233–236. Reference Source\n\nChootipongchaivat S, Tritasavit N, Luz A, et al.: Policy Brief and Working Paper. Conducive Factors to the Development of Health Technology Assessment in Asia. Nonthaburi: HITAP. 2016. Reference Source\n\nClark S, Weale A: Social values in health priority setting: a conceptual framework. J Health Organ Manag. 2012; 26(3): 293–316. PubMed Abstract | Publisher Full Text\n\nCulyer AJ: Hic sunt dracones: the future of health technology assessment--one economist’s perspective. Med Decis Making. 2012; 32(1): E25–32. PubMed Abstract | Publisher Full Text\n\nCulyer AJ, Lomas J: Deliberative processes and evidence-informed decision making in healthcare: do they work and how might we know? Evidence & Policy: A Journal of Research, Debate and Practice. 2006; 2(3): 357–371. Publisher Full Text\n\nDaniels N: Accountability for reasonableness. BMJ. 2000; 321(7272): 1300–1301. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDittrich R, Asifiri E: Adopting Health Technology Assessment. A report on the socio-cultural, political, and legal influences on health technology assessment adoption. Four case studies: England and Wales, Japan, Poland & Thailand. WORKING PAPER Version 1. 2016. [Accessed: 2 February 2016].
Reference Source\n\nDoherty J: Effective Capacity-Building Strategies For Health Technology Assessment: A Rapid Review Of International Experience. 2015. Reference Source\n\nEzeh AC, Izugbara CO, Kabiru CW, et al.: Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob Health Action. 2010; 3(1). PubMed Abstract | Publisher Full Text | Free Full Text\n\nGlassman A, Chalkidou K, Giedion U, et al.: Priority-setting institutions in health: recommendations from a center for global development working group. Global Heart. 2012; 7(1): 13–34. PubMed Abstract | Publisher Full Text\n\nGlassman A, Chalkidou K: Priority-Setting in Health: Building Institutions for Smarter Public Spending. 2012. Reference Source\n\nGlassman A, Giedion U, Sakuma Y, et al.: Defining a Health Benefits Package: What Are the Necessary Processes? (Special Issue: Prince Mahidol Award Conference 2016: Priority Setting for Universal Health Coverage). Health Systems and Reform. 2016; 2(1): 39–50. Publisher Full Text\n\nHailey D, Juzwishin D: Managing external risks to health technology assessment programs. Int J Technol Assess Health Care. 2006; 22(4): 429–435. PubMed Abstract | Publisher Full Text\n\nHauck K, Smith PC: The Politics of Priority Setting in Health: A Political Economy Perspective. SSRN Electronic Journal. 2015. Reference Source\n\nHawkins B, Parkhurst J: The “good governance” of evidence in health policy. Evidence & Policy: A Journal of Research, Debate and Practice. 2016; 12(4): 575–592. Publisher Full Text\n\nHealth Intervention and Technology Assessment Program: First Step. Evaluating HITAP: 2 years on HITAP’s responses to key recommendations. Comments on Evaluating HITAP: 2 years on. 2009. Reference Source\n\nHernandez-Villafuerte K, Li R, Hofman KJ: Bibliometric trends of health economic evaluation in Sub-Saharan Africa. Global Health. 2016; 12(1): 50.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHITAP International Unit: INDONESIA MISSION REPORT. Advancing Health Technology Assessments (HTA) Development in Indonesia. 2015; 21–23, [Accessed: 18 January 2017]. Reference Source\n\nHofman KJ, McGee S, Chalkidou K, et al.: National Health Insurance in South Africa: Relevance of a national priority-setting agency. S Afr Med J. 2015; 105(9): 739–740. PubMed Abstract | Publisher Full Text\n\nItad, NICE International: NICE International’s Engagement in India and China. 2016. Reference Source\n\nJongudomsuk P, Limwattananon S, Prakongsai P, et al.: Evidence-Based Health Financing Reform in Thailand. In: Coady D, Clements BJ, and Gupta S. eds. The Economics of Public Health Care Reform in Advanced and Emerging Economies. Washington DC : International Monetary Fund, 2012; 307–326. Reference Source\n\nKieslich K, Bump JB, Norheim OF, et al.: Accounting for Technical, Ethical, and Political Factors in Priority Setting. Health Systems & Reform. 2016; 2(1): 51–60. Publisher Full Text\n\nLavis JN: Report prepared for the International Decision Support Initiative (iDSI). Supporting evidence informed priority setting. 2016. Reference Source\n\nLi R: Enhancing knowledge transfer and exchange: Reflections from the Seattle workshop on evidence-informed policymaking [Online]. 2016, [Accessed: 2 February 2017]. Reference Source\n\nLi R, Hernandez-Villafuerte K, Towse A, et al.: Mapping Priority Setting in Health in 17 Countries Across Asia, Latin America, and sub-Saharan Africa. Health Systems & Reform. 2016; 2(1): 71–83. Publisher Full Text\n\nLittlejohns P, Rawlins M: Patients, the Public and Priorities in Healthcare. illustrated. Littlejohns P and Rawlins M eds. Radcliffe Publishing. 2009. Reference Source\n\nLomas J, Culyer T, McCutcheon C, et al.: Conceptualizing And Combining Evidence For Health System Guidance. Ontario: Canadian Health Services Research Foundation. 2005. 
Reference Source\n\nLomas J: The in-between world of knowledge brokering. BMJ. 2007; 334(7585): 129–132. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMacDonald J, Bath P, Booth A: Healthcare Services Managers: What Information do They Need and Use? Evid Based Libr Inf Pract. 2008; 3(3). Publisher Full Text\n\nMakundi E, Kapiriri L, Norheim OF: Combining evidence and values in priority setting: testing the balance sheet method in a low-income country. BMC Health Serv Res. 2007; 7: 152. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarch JG, Olsen JP: Elaborating the “new institutionalism”. Oxford University Press, 2008. Publisher Full Text\n\nMarmot M: Post-truth and science. Lancet. 2017; 389(10068): 497–498. Publisher Full Text\n\nMorton A, Thomas R, Smith PC: Decision rules for allocation of finances to health systems strengthening. J Health Econ. 2016; 49: 97–108. PubMed Abstract | Publisher Full Text\n\nNakamura R, Lomas J, Claxton K, et al.: Assessing the Impact of Health Care Expenditures on Mortality Using Cross-Country Data. CHE Research Paper 128, 2016. Reference Source\n\nNational Institute for Health and Care Excellence: Developing NICE Guidelines: The Manual [Internet]. London: National Institute for Health and Care Excellence (NICE). 2015. PubMed Abstract\n\nNational Institute for Health Research: Performance information on the initiation and delivery of clinical research [Online]. 2017, [Accessed: 8 February 2017]. Reference Source\n\nNorheim OF: Ethical priority setting for universal health coverage: challenges in deciding upon fair distribution of health services. BMC Med. 2016; 14: 75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRumbold B, Baker R, Ferraz O, et al.: Universal Health Coverage, Priority Setting and the Human Right to Health. 2017. Reference Source\n\nRutter J: Evidence and evaluation in policy making. A problem of supply or demand? 2012.
Reference Source\n\nScally G, Donaldson LJ: The NHS’s 50th anniversary. Clinical governance and the drive for quality improvement in the new NHS in England. BMJ. 1998; 317(7150): 61–65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwage T: Clinical Governance in Healthcare Practice. 2nd ed. Oxford: Butterworth-Heinemann. 2003. Reference Source\n\nThaiprayoon S, Smith R: Capacity building for global health diplomacy: Thailand’s experience of trade and health. Health Policy Plan. 2015; 30(9): 1118–1128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUNESCO International Institute for Capacity Building in Africa: Capacity Building Framework. 2006. Reference Source\n\nVoorhoeve A, Ottersen T, Norheim OF: Making fair choices on the path to universal health coverage: a précis. Health Econ Policy Law. 2016; 11(1): 71–77. PubMed Abstract | Publisher Full Text\n\nWeale A, Kieslich K, Littlejohns P, et al.: Introduction: priority setting, equitable access and public involvement in health care. J Health Organ Manag. 2016; 30(5): 736–750. PubMed Abstract | Publisher Full Text\n\nWest M, Armit K, Loewenthal L, et al.: Leadership and Leadership Development in Health Care: The Evidence Base. 2015. Reference Source\n\nWilkinson T, Sculpher MJ, Claxton K, et al.: The International Decision Support Initiative Reference Case for Economic Evaluation: an aid to thought. Value Health. 2016; 19(8): 921–928. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: 2015 Global Survey on Health Technology Assessment by National Authorities. Main findings. 2015. Reference Source\n\nZhao K: HTA Development and Health Care Decision-making in China. Presentation to the ISPOR 7th Asia-Pacific Conference. 3–6 September, 2016; Singapore, 2016. Reference Source
[ { "id": "21566", "date": "19 Apr 2017", "name": "Clara Richards", "expertise": [ "Reviewer Expertise knowledge to policy", "individual and organisational development" ], "suggestion": "Approved", "report": "Approved\n\nI think the article takes a comprehensive approach to capacity building, its importance, opportunities and how to conduct it. However, I would suggest to include a definition of capacity development - it is not very clear how the authors understand capacity at the environmental level for example.\n\nI think that the frameworks that the authors refer to are good however I would suggest using those that take a complexity lens as well, since the problem that is discussed is complex and involves multiple stakeholders. I recommend taking a look at the ITAD framework http://www.itad.com/knowledge-and-resources/capacity-development-2/ or an even more comprehensive one 'Context Matters'\n\nhttp://www.politicsandideas.org/contextmatters/\n\nRelated to the above I feel that the article could take a systems approach that takes complexity into account. What does it take to change and improve systems? Usually, the problem lies beyond technical capacity but it is more about understanding and sharing a common problem, being able to identify and relate to different points of views and realities, ability to find solutions jointly, an ability to work with different parts of the system, etc.
Some of these are addressed in the article (like leadership skills) but drawing on further systemic thinking could help (see Ken Wilber).\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes", "responses": [] }, { "id": "24030", "date": "04 Jul 2017", "name": "Lilian Dudley", "expertise": [ "Reviewer Expertise Public health and health systems research" ], "suggestion": "Approved", "report": "Approved\n\nThe article discusses the important topic of capacity building for priority setting in health in LMIC’s, by a research collaboration which draws on broad experience and expertise. The article reviews current knowledge on the topic, uses a framework to organise capacity building for different levels and stakeholders, and provides a useful way of documenting the wide range of groups and activities that are required to strengthen capacity for priority setting in health.\n\nComments:-\nThe title is a bit misleading as it suggests that the topic is about evidence informed approaches to capacity building, but then presents very limited evidence of approaches to capacity building for priority setting.
Perhaps the title should be ‘Capacity building for evidence informed health priority setting in LMIC’s?\n\nThe scope of priority setting in health in the context of this article is not clearly defined. The article starts and ends by broadly discussing priority setting in health in LMIC’s, and links this loosely to Universal Health Coverage (UHC). The body of the article and the framework however focuses almost entirely on capacity building for Health Technology Assessment. It is therefore not clear whether the intention was to review capacity building needs for HTA specifically or more broadly for priority setting for programmes and other activities in the health sector. It would be useful to clarify this for readers, as HTA does require a specific set of technical skills and understanding both by producers and users of the knowledge, which may differ from other forms of priority setting.\n\nCapacity building is discussed broadly as a process, and the article focuses on stakeholder mapping to identify the levels and groups that require capacity building. The ‘capacities required’ again focuses more on HTA as a ‘content’ area, than a broad approach to priority setting in health, for which a more comprehensive approach would be required to include assessment of need, societal preferences, participatory processes etc.\n\nVery little is said about capacity for good governance and leadership in priority setting, and ways in which these should be strengthened as a critical component of improving priority setting in health systems in LMIC’s. It would be useful to reflect on these and how the proposed framework could include these components of the health system.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? 
Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly", "responses": [] } ]
1
https://f1000research.com/articles/6-231
https://f1000research.com/articles/6-230/v1
07 Mar 17
{ "type": "Research Article", "title": "A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient treatment (Protocol GALS-C)", "authors": [ "Mark Kindy", "Paul Lupinacci", "Raymond Chau", "Tony Shum", "Dorothy Ko", "Mark Kindy", "Paul Lupinacci", "Raymond Chau", "Tony Shum" ], "abstract": "Background Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease that lacks effective treatment options. Genervon has discovered and developed GM604 (GM6) as a potential ALS therapy. GM6 has been modeled upon an insulin receptor tyrosine kinase binding motoneuronotrophic factor within the developing central nervous system.\nMethods This was a 2-center phase 2A, randomized, double-blind, placebo-controlled pilot trial with 12 definite ALS patients diagnosed within 2 years of disease onset. Patients received 6 doses of GM604 or placebo, administered as slow IV bolus injections (3x/week, 2 consecutive weeks). Objectives were to assess the safety and efficacy of GM604 based on ALSFRS-R, FVC and selected biomarkers (TDP-43, Tau and SOD1, pNFH). This report also includes results of compassionate treatment protocol GALS-C for an advanced ALS patient.\nResults Definite ALS patients were randomized to one of two treatment groups (GM604, n = 8; placebo, n = 4). 2 of 8 GM604-treated patients exhibited mild rash, but otherwise adverse event frequency was similar in treated and placebo groups. GM604 slowed functional decline (ALSFRS-R) when compared to a historical control (P = 0.005). At one study site, a statistically significant difference between treatment and control groups was found when comparing changes in respiratory function (FVC) between baseline and week 12 (P = 0.027). GM604 decreased plasma levels of key ALS biomarkers relative to the placebo group (TDP-43, P = 0.008; Tau, P = 0.037; SOD1, P = 0.009). 
The advanced ALS patient in compassionate treatment demonstrated improved speech, oral fluid consumption, and mouth suction with GM604 treatment, as well as biomarker improvements.\nConclusions We observed favorable shifts in ALS biomarkers and improved functional measures during the Phase 2A study as well as in an advanced ALS patient. Although a larger trial is needed to confirm these findings, the present data are encouraging and support GM604 as an ALS drug candidate.", "keywords": [ "ALS", "ALSFRS-R", "FVC", "ALS Biomarkers", "neurodegeneration", "signaling" ], "content": "Introduction\n\nAmyotrophic lateral sclerosis (ALS) is a devastating disease for which no effective treatment has been discovered1. During the last twenty years, dozens of ALS drug candidates have been tested but have unfortunately failed during clinical trials2. This astounding record of uniform failure may be attributed to the fact that the classic drug development model – which aims to design single-target drugs – is simply inadequate for rapid, complex and multifactorial diseases like ALS3.\n\nGenervon sought out and discovered endogenous regulators of the developing nervous system, and hypothesized that such regulators may have the capacity to monitor and repair neurological diseases4,5. Genervon’s approach was to base drug design on these regulatory proteins, leading to development of GM604 (GM6)6. GM604 is a peptide with a sequence identical to one of the active sites of human motoneuronotrophic factor (MNTF)7. MNTF is an endogenous human embryonic stage neural regulatory and signaling peptide that controls the development, monitoring and correction of the human nervous system4,5. This activity of MNTF is replicated by GM604 to provide a potent disease-modifying drug candidate that modulates many processes including inflammation, apoptosis, and hypoxia4,5,7. In pre-clinical studies, we have shown that GM604 acts as a neuro-protective agent in animal models of neurological disease7.
In these studies, GM604 was found to promote neuroprotection, neurogenesis, neural development, neuronal signaling, neural transport, and other processes4–7. Recently, we have demonstrated that GM604 modulates many ALS-associated genes, promoting decreased expression of superoxide dismutase (SOD1), repression of genes associated with the intrinsic apoptosis pathway, and increased expression of genes associated with mitosis and cell division8.\n\nThis paper reports findings from a multi-center Phase 2A, double-blind, randomized, placebo-controlled pilot trial in 12 patients with Familial or Sporadic ALS diagnosed as definite ALS according to the El Escorial Criteria9,10. Objectives of the trial were to assess proof of principle; i.e., to determine whether a 2-week IV bolus treatment with GM604 can (i) be safely used and tolerated without significant adverse effects, (ii) favorably alter ALS biomarkers, and (iii) delay progression based upon key clinical indices. This report also includes results of protocol GALS-C for an advanced ALS patient who has been quadriplegic and on a ventilator since 2008 (IND number 120052).\n\n\nMethods\n\nThis was a multi-center Phase 2A, double-blind, randomized, placebo-controlled pilot trial in 12 patients with Familial or Sporadic ALS. Objectives were to test the safety, tolerability and efficacy of GM604 and to assess changes in clinical disease progression and selected ALS biomarkers. GM604 has received Orphan Drug Designation 14-4247 by the FDA Office of Orphan Products Development for treatment of ALS and Orphan Designation (EU/3/16/1662) from the European Medical Commission. Genervon received Fast Track Designation for GM604 to treat ALS (IND number 118,420) by FDA Office of Drug Evaluation I, CDER. 
Genervon also received Fast Track designation for GM604 to treat ischemic stroke (IND number 77,789).\n\nProtocol GALS-C (IND number 120052) covered an advanced ALS patient who has been quadriplegic and on a ventilator since 2008. It is an Expanded Access Use, applied for by a physician to treat an individual patient. The physician submits a new IND request to FDA on Form 1571, including the treatment protocol, CV, IRB approval, Informed Consent Form, and Medical License, together with a Letter of Authorization (LOA) signed by the sponsor permitting reference to the sponsor’s IND for information regarding the investigational drug in the Investigator’s Brochure, the Chemistry, Manufacturing and Controls (CMC) information, and the pharmacology and toxicology data. After FDA approved the physician’s Expanded Access Treatment request, IND number 120052 was assigned for treatment of the GALS-C patient with GM604. GM604 was shipped to the physician only after the physician received FDA’s Study May Proceed letter; all components required by FDA must be fulfilled before FDA will assign an IND number and allow treatment to proceed. Since GALS-C is not a clinical trial, it is not registered with clinicaltrials.gov. FDA now has a simpler form for Individual Patient Expanded Access Applications (FDA Form 3926).\n\nPatients who qualified for the study were enrolled and assigned a unique patient number. The patient’s initials and identification number were written on all source documents. Only the site number and patient’s study ID number were written on CRF pages, documents sent to central readers, and CSF and blood samples sent to the central lab for processing.\n\nPatients fulfilling the eligibility criteria were assigned randomization codes, starting with number 0101, with the 0100 series for Site 001 and the 0200 series for Site 002. Patient numbers were assigned in sequential order as patients enrolled. Six patients were enrolled at each site.
Eight patients were randomized to receive GM604 and four patients were randomized to receive placebo. The statistical analysis team generated a list of randomization codes and sent the list to the pharmacist at each site. The study site pharmacist retained the original treatment randomization schedule in a secure location. All activities of this study were conducted in a double-blind, randomized, placebo-controlled manner.\n\nThe Phase 2A study was performed in compliance with the current International Conference on Harmonization (ICH) Good Clinical Practice (GCP) guidance and the current version of the Declaration of Helsinki of the World Medical Association11. The final protocol and informed consent form were reviewed and approved by the Columbia University Institutional Review Board (CU IRB) for Site 001 (Columbia University Medical Center) and by the Partners Human Research Committee (PHRC) for Site 002 (Massachusetts General Hospital). All patients who participated were fully informed about the study in accordance with GCP guidelines, federal regulations, HIPAA, and local requirements12. The trial was posted on clinicaltrials.gov on May 8, 2013 (NCT01854294)13. GALS-C is not a clinical trial but an Expanded Access compassionate treatment; IRB approval was received from the Bay Area Regional IRB of Dignity Health.\n\nSubject Population. There were two study sites: Columbia University Medical Center, New York, and Massachusetts General Hospital. At each site, definite ALS patients were randomized four to GM604 and two to placebo. Eligible patients met the El Escorial criteria for ALS9,10. At screening, symptom onset had occurred within the previous 24 months and forced vital capacity (FVC) was ≥65% of predicted capacity based upon age, height, and gender. Mean disease duration was 8.15 months, ranging from 2.7 to 16.5 months across treatment groups.
Patients in the placebo group reported a slightly longer duration of disease, with a median of 8.90 months, compared with a median of 5.24 months for patients in the GM604 treatment group. The demographic profile of the placebo and GM604 treatment groups was matched in terms of age, with medians of 54.5 and 56.0 years, respectively. The mean age of patients was 55.7 years, ranging from 45 to 68 across treatment groups. The majority of patients (66.7%; 8/12) were male. Gender distribution differed slightly between the two treatment groups, with an equal number of males and females in the placebo group (2/2) and a majority of males in the GM604 treatment group (75%; 6/8). All 4 females were at least 2 years post-menopausal. All 12 patients were Caucasian.\n\nPatients were excluded if they had a bleeding disorder, allergy to local anesthetics, or medical or surgical conditions in which lumbar puncture was contraindicated, e.g., elevated cerebrospinal fluid (CSF) pressure. Prohibited medications included anti-platelet or anticoagulant drugs such as clopidogrel (Plavix), non-steroidal anti-inflammatory drugs (NSAIDs), ticlopidine (Ticlid), and warfarin (Coumadin). Patients were permitted to remain on a stable dose of riluzole, provided the dose had been stable for at least a month before screening; riluzole was not initiated during the trial.\n\nProcedures. Following screening, patients were randomized to receive GM604 (n=8) or placebo (n=4). Patients received 6 doses of 320 mg GM604 or placebo, administered as slow IV bolus injections on Monday, Wednesday, and Friday of weeks 1 and 2. Clinical assessments included the ALS Functional Rating Scale – Revised (ALSFRS-R)14, FVC15–17, timed up & go (TUG)18, and hand-held dynamometry (HHD)19.
Assessments were conducted at screening, before the first dose (baseline), after the last (6th) dose at week 2, and at weeks 6 and 12. Safety and tolerability were evaluated based on the frequency of adverse events, vital signs, electrocardiography (ECG) measurements, physical and neurological examinations, safety laboratory monitoring, and hypersensitivity and injection site reactions20. The following visit windows were allowed: visits 1 (baseline and first dosing) to 6 (last dosing, 2 weeks): ± 1 day; visit 7 (4 weeks after last dosing, 6 weeks total): ± 7 days; visit 8 (10 weeks after last dosing, 12 weeks total): ± 14 days. We note that one patient (in the GM604 treatment group) returned to Germany, where he resides, and did not attend the week 12 assessment, although he did contact investigators to provide ALSFRS-R scores by phone. A total of 11 patients received all 6 doses of the study drug, and one patient received 5 doses.\n\nBiomarkers. The biomarkers SOD1, phosphorylated neurofilament heavy chain (pNFH)21, total tau, and TDP-43 were assessed at baseline, after the 4th dose at the start of week 2 (plasma only), after the last (6th) dose (also in week 2), and at weeks 6 and 12. TDP-43 (TAR DNA-binding protein 43, transactive response DNA binding protein 43 kDa) is a protein encoded by the TARDBP gene. Mutations in the TARDBP gene are associated with neurodegenerative disorders including ALS20,22–25. We note that some biomarker data were missing due to hemolysis of samples, technical issues with sample processing, or patients who missed clinical appointments for sample collection. These missing data were excluded from analyses.\n\nEfficacy assessments. The ALSFRS-R is used to assess disability in ALS patients. It is a total score derived from sub-scores in the following categories: speech, salivation, swallowing, handwriting, cutting food, dressing and hygiene, turning in bed, walking, climbing stairs, dyspnea, orthopnea, and respiratory insufficiency.
The score decreases as the disease progresses14.\n\nThe FVC, measured as a percentage of predicted, is used to assess respiratory function and is an indicator of disease progression. FVC also decreases with disease progression15–17.\n\nTUG is used to predict falls in ALS. In this study, TUG in ambulatory participants without assistance was measured and recorded with videotaping18. The TUG was measured in seconds, rounded to 1 decimal place, with shorter times indicating faster walking. As ALS progresses, however, the walking pace may slow, or the patient may become unable to perform the TUG. In the present study, TUG performed with assistance was excluded and treated as missing data.\n\nHHD is used to measure muscle strength. HHD measures depend on the ability of the evaluator to overpower the subject's strength19. In this study, the clinician stabilized the limb segment while encouraging the patient to exert as much force as possible against an isometric HHD, and the maximum force was recorded by the HHD. Each muscle site was tested in duplicate (in triplicate if the first 2 results differed by more than 15%) and the result was recorded in pounds to 1 decimal place. The average of replicates for each muscle site was calculated and used in the analysis at each time point.\n\nStatistical Analyses. The percentage change from baseline of each biomarker in plasma and CSF was compared between treatments using a 2-sample t-test and a Wilcoxon Rank Sum test. Progressive changes in clinical endpoints (ALSFRS-R, FVC, TUG, grip strength and HHD scores) were examined using mixed effects modeling. Rates of disease progression were compared between GM604- and placebo-treated patients.
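The between-treatment comparison just described (per-patient percentage change from baseline, compared with a two-sample t-test and a Wilcoxon Rank Sum test) can be sketched in Python with scipy; all values below are hypothetical illustrations, not trial data:

```python
import numpy as np
from scipy import stats

def pct_change_from_baseline(baseline, followup):
    """Per-patient percentage change of a biomarker from baseline."""
    return 100.0 * (followup - baseline) / baseline

# Hypothetical plasma biomarker values (pg/ml); NOT trial data.
gm604_base = np.array([10.0, 12.0, 11.0, 9.0])
gm604_wk2  = np.array([ 7.0,  8.0,  8.0, 6.0])
plac_base  = np.array([10.0, 11.0])
plac_wk2   = np.array([11.0, 12.0])

gm604_pct = pct_change_from_baseline(gm604_base, gm604_wk2)
plac_pct  = pct_change_from_baseline(plac_base, plac_wk2)

t_stat, t_p = stats.ttest_ind(gm604_pct, plac_pct)   # two-sample t-test
w_stat, w_p = stats.ranksums(gm604_pct, plac_pct)    # Wilcoxon Rank Sum
```

The non-parametric rank-sum test is reported alongside the t-test because, with groups this small, normality of the percentage changes cannot be assumed.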
Additionally, we made comparisons with placebo-treated patients from the Northeast ALS Consortium (NEALS) database showing stable rates of decline (https://www.alsconsortium.org/).\n\n\nResults\n\nThe study initiation date was 16 May 2013 (first subject pre-screened); the first subject was screened on 03 September 2013; the study completion/termination date (last subject completed) was 11 April 2014.\n\nSafety. Of 12 patients enrolled in the study, 9 reported at least one adverse event. Overall, 5 of 8 patients in the GM604 treatment group and 4 of 4 patients in the placebo group experienced at least one treatment-emergent adverse event (TEAE). No unexpected findings were observed. Consistent with protocol-defined expected adverse reactions, the most frequently reported AEs by GM604-treated patients in the present study were falls (4 patients, 50%), puncture site pain (3 patients, 37.5%), rash (2 patients, 25%) and headache (2 patients, 25%). Of these most commonly reported TEAEs in GM604-treated patients, falls (1 patient, 25%), puncture site pain (1 patient, 25%) and headache (2 patients, 50%) were also reported in placebo-treated patients.\n\nAdverse events in the ‘general disorders and administration site conditions’ system organ class (SOC) were the most frequently experienced adverse events (7 patients and 61 total events across the GM604 and placebo groups). A serious adverse event that required inpatient hospitalization, shortness of breath 24 days after the first dose of GM604 (12 days after the last dose), was experienced by one patient in the GM604 treatment group. This patient received the full 6 doses of GM604 treatment and then left the study site and flew back to Germany.
No additional GM604 was administered to this patient during the hospital stay in Germany that could have affected the outcome of the results.\n\nInvestigators determined that this serious adverse event was most likely due to the natural progression of ALS and thus unrelated to the investigational product. No deaths or withdrawals due to adverse events occurred.\n\nThere were no clinically meaningful differences between patients who received GM604 and those who received placebo in changes over time in clinical laboratory tests, hematology parameters, or urinalysis results, nor in ECGs, vital signs, physical findings, neurological examinations, or other observations related to safety.\n\nGrade 1 hypersensitivity reactions were reported by one patient receiving placebo (visit 2 during week 1) and one patient receiving GM604 (visit 5 during week 2). All other patients reported an absence of hypersensitivity (Grade 0) reactions. There was no indication of QT prolongation, as no patient receiving GM604 had a QT or QTcB (QT corrected using Bazett’s formula) result above 450 msec.\n\nBiomarker findings. Previous clinical studies in patients with ALS have suggested that biomarker concentrations in plasma, serum, and CSF can be predictive of disease progression26–33. Therefore, a primary endpoint of the present study was the percentage change of each biomarker between baseline and week 12.\n\nIn plasma samples, the percentage change from baseline in plasma SOD1 at visit 6 (end of week 2) was lower following GM604 treatment than following placebo, which did not lower SOD1 (p=0.0550, two-sample t-test; Table 1, Figure 1, Dataset 134 and Dataset 1335).
Percentage change in plasma total tau was significantly decreased, approximately 28% below baseline (p=0.0369, Wilcoxon Rank Sum test), at week 6 (visit 7) after active GM604 treatment compared with placebo (Table 1, Figure 3, Dataset 236 and Dataset 1437). The percentage change in slope by treatment interaction in plasma TDP-43 from baseline (visit 1) through week 12 (visit 8) was -34% in the GM604-treated group and +6% in the placebo group (p=0.0078). The p-value of 0.0078 indicates a significant difference in slopes between GM604 and placebo up to week 12 (Table 1, Figure 2, Dataset 338 and Dataset 1539).\n\nNotes to Table 1. For plasma SOD1 and plasma total tau: (1) The P-value was obtained from a two-sample t-test for the difference in the change from baseline values between placebo and GM604. (2) The P-value was obtained from a Wilcoxon Rank Sum Test for the difference in the change from baseline values between placebo and GM604. For plasma TDP-43: results were obtained from a mixed model repeated measures analysis with the change from baseline as the response variable and explanatory variables for week and the week-by-treatment interaction. The y-intercept was removed from the model and forced to be 0, as the percentage change from baseline at baseline must be 0. The unstructured covariance structure was used to model the intra-subject correlation. (3) The P-value indicates the significance of the difference in slopes between GM604 and placebo.\n\nFigure 1 legend: SOD1 was measured in cerebrospinal fluid (CSF) and plasma at baseline (visit 1) and following 6 doses of GM6 over 2 weeks (visit 6). In (A) and (B), estimated SOD1 levels (pg/ml) are plotted (log10-transformed scale), with each point representing a single ALS patient. Patients below the diagonal showed decreased SOD1 post-treatment.
P-values (lower right) were generated from the comparison of SOD1 measurements between visits 1 and 6 (p=0.009, one-tailed paired t-test performed using log10-transformed SOD1 estimates).\n\nFigure 2 legend: The mean change in slope for plasma TDP-43 from baseline to week 12 in the GM6-treated group was -3.513 pg/ml, a decrease of 34%, while in the placebo group the mean change in slope was 0.493 pg/ml, an increase of 6% from baseline (p=0.0078, test for the significance of the difference between the slopes, GM604 vs. placebo).\n\nFigure 3 legend: The mean percentage change from baseline for plasma total tau in GM6-treated patients was -27.69%, while the mean percentage change from baseline for the placebo group was 13.23% (p = 0.0369, Wilcoxon Rank Sum Test). This tests the significance of the difference in percentage change between the GM604-treated group and the placebo group from baseline to week 6 (Dataset 1437).\n\nWe observed suggestive trends but no statistically significant changes in CSF biomarker levels (Table 1).
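The no-intercept slope model used for the plasma biomarkers (percentage change from baseline as the response, with week and a week-by-treatment interaction, and the intercept forced to 0 because the change at baseline is 0 by definition) can be sketched as a simplified least-squares fit. This sketch ignores the repeated-measures covariance structure of the actual analysis, and the data are hypothetical:

```python
import numpy as np

def slope_by_treatment(week, treat, pct_change):
    """Fit pct_change ~ 0 + week + week:treat (no intercept, since the
    percentage change from baseline must be 0 at baseline).
    Returns (placebo_slope, slope_difference_for_treated)."""
    X = np.column_stack([week, week * treat])
    coef, *_ = np.linalg.lstsq(X, pct_change, rcond=None)
    return float(coef[0]), float(coef[1])

# Hypothetical data: 2 placebo and 2 treated subjects, each measured at
# weeks 2, 6 and 12; placebo drifts upward, treated declines.
week  = np.tile([2.0, 6.0, 12.0], 4)
treat = np.repeat([0.0, 0.0, 1.0, 1.0], 3)
pct   = 0.5 * week * (1 - treat) - 3.0 * week * treat

b_placebo, b_diff = slope_by_treatment(week, treat, pct)
# b_placebo recovers +0.5 %/week; the treated slope is b_placebo + b_diff.
```

In the actual analysis, the significance of `b_diff` (the treatment-by-week interaction) is what the reported p-values on slope differences refer to.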
SOD1 levels decreased at week 6 (visit 7) following treatment with GM604 but increased following placebo treatment30. Total CSF tau was decreased after the end of week 2 (visit 6, final dose) of active treatment with GM604, whereas tau increased following placebo treatment26. Cystatin C was increased after the end of week 2 (visit 6, final dose) and at week 12 (visit 8) following treatment with GM604, and was decreased following placebo treatment27,28.\n\nFigure 1 compares CSF and plasma SOD1 levels at baseline (visit 1) and at the end of week 2 (visit 6, final dose) in the GM604-treated and placebo groups. In Figure 1A and 1B, each point represents a single ALS patient, such that patients below the diagonal exhibit decreased SOD1 at visit 6 compared with visit 1. There was a trend towards decreased SOD1 in the CSF, but it was not statistically significant (p=0.123; one-tailed t-test; Figure 1A, Dataset 134, Dataset 440). For plasma measurements, however, SOD1 abundance was significantly lower at visit 6 compared with visit 1 (p=0.009, paired one-tailed t-test; Figure 1B).\n\nFigure 2 shows the percentage change in slope by treatment interaction for plasma TDP-43 over time, from baseline (visit 1) through week 12 (visit 8). The mean change in slope for the GM604-treated group was -3.513 pg/ml, a 34% decrease, while the mean change in slope for the placebo group was 0.493 pg/ml, a 6% increase (p=0.0078 for the difference between the slopes, -34% vs 6%, GM604 vs. placebo). To analyze disease progression, the results of the biomarker assays were analyzed using a mixed model repeated measures analysis. Commensurate with the design of the study, a mixed effects model was used to examine differences in the percentage change from baseline over time for each of the biomarkers. The unstructured covariance structure was used to model the intra-subject correlation. Since the percentage change from baseline is zero for all subjects at baseline, the y-intercept was removed from the model, forcing it to be 0. The explanatory variables in the model were week (2, 6, 12) as a numerical variable, treatment (GM604, placebo) and the treatment-by-week interaction. The model was run using all results through week 6 and then again using all results through week 12. The p-value indicates a significant difference in slopes between GM604 and placebo up to week 12 (Dataset 1539).\n\nFigure 3 shows the percentage change in plasma total tau over time, from baseline (visit 1) through week 6 (visit 7). The mean percentage change from baseline for plasma total tau in GM604-treated patients was -27.69%, while that for the placebo group was 13.23% (p=0.0369, Wilcoxon Rank Sum Test; Dataset 1437).\n\nEfficacy assessments\n\nTUG, grip strength and HHD scores. For weeks 2, 6 and 12, no significant treatment difference was observed between the placebo and GM604 treatment groups with respect to TUG, grip strength or HHD scores18,19.\n\nALSFRS-R. Rates of change in ALSFRS-R are usually linear for any individual patient (without intervention), but are highly variable among patients, ranging from rapid (1 year) to slow (>10 years)41. Thus, to be able to measure any change in disease progression before and after treatment, ALSFRS-R was analyzed using mixed model analysis. The model allowed for differences in slopes before and after treatment in an attempt to observe disease modification. The slope for ALSFRS-R in the placebo group changed minimally before and after treatment, going from 0.037/day to -0.034/day. The slope for the GM604 group changed noticeably but not significantly, going from -0.046/day before treatment to -0.032/day after treatment.
It appeared that the GM604 group had slowing of disease progression compared with pre-treatment (Dataset 1642). At week 12, there was no statistically significant difference in ALSFRS-R between the GM604- and placebo-treated groups (Dataset 1043).\n\nOutcomes were also compared with baseline features of placebo-treated definite ALS patients from recent clinical trials by NEALS29,30. In our GM604-treated patients, the rate of decline per 30 days was -1.047. The rate of decline per 30 days among historical controls was significantly greater (-1.97 per month; p = 0.0047, -1.047/mo vs -1.97/mo, mixed model, Dataset 1744), indicating slower decline in GM604-treated patients compared with an independent historical control cohort.\n\nForced vital capacity (FVC). At week 12, the total number of placebo- and GM604-treated patients was 4 and 7, respectively (one patient was excluded from week 12 assessments, see above). There was no statistically significant difference in the change in FVC from baseline between subjects who received placebo and those who received GM604 at week 12 (Table 2, -11.5 vs -4.7, p=0.5393, two-sample t-test, Dataset 1145).\n\nNotes to Table 2. N = number of patients. (1) P-values were calculated by two-sample t-test. (2) The P-value was obtained from a Wilcoxon Rank Sum Test (Dataset 11).\n\nThere were two sites included in this study (Table 3). The screening visit and baseline assessment were separated by approximately 2 weeks. Intra-site variability was quite small for the placebo group at Site 001 and at both sites for the GM604 group (ranging from 0.3 to 3.0) and not statistically significant.
While some variability between visits is expected, the drop of 15 points between the screening visit and baseline assessment at Site 002 for the placebo group appeared very different from what was seen at the other site (Table 3).\n\nOnly at Site 001 was there a statistically significant difference between the placebo and GM604-treated groups when using FVC data from baseline to week 12 (Table 4, -28 vs -4.8, p=0.0268, two-sample t-test).\n\nNotes to Table 4. (1) P-values were calculated by two-sample t-test. (2) The P-value was obtained from a Wilcoxon Rank Sum Test.\n\nThe GALS-001 trial used restrictive inclusion criteria of definite ALS onset within 24 months and FVC ≥65%. As a follow-on study to investigate how an advanced ALS patient would respond to GM604, a single compassionate patient case study under protocol GALS-C, outside these restrictive inclusion criteria, was initiated.\n\nThe patient was a 46-year-old male diagnosed 10 years previously, quadriplegic for over eight years and on a ventilator. The patient received GM604 in a dosing regimen identical to that in GALS-001. The patient was too advanced to perform any of the clinical endpoint assessments of GALS-001, such as the ALSFRS-R and FVC, but clinical observations were recorded according to the patient’s condition.\n\nClinical observations revealed small but beneficial improvements from baseline to week 12. At week 2, the patient showed clearer articulation compared with the baseline assessment. At week 4, the patient's swallow volume had increased by 150%–200%. Oral fluid consumption reported by the patient was improved, measuring 250 cc total without leakage. Mouth suction, as measured by water column height, increased from 5–8 cm to 10–15 cm with both 1/8 and 1/4 inch drinking straws.
Speech, swallowing, and suction were used as primary metrics, based upon the rationale that the relatively short motor neurons serving the tongue and lips would show improvements first.\n\nIn this advanced patient, the CSF biomarkers SOD1, Cystatin C and total tau were all below the normal range at baseline. After 2 weeks of treatment with GM604, all 3 biomarkers were upregulated towards their normal ranges (SOD1: 50–200 ng/ml; Cystatin C: 3.0–8.0 μg/ml; total tau: 100–350 pg/ml; see Table 5). In contrast, patients treated in the Phase 2A GALS-001 trial, diagnosed within 2 years of disease onset, had CSF SOD1 and total tau at the high end of the normal range at the start of the trial, and at week 2 both of these biomarkers were downregulated towards their normal range. Cystatin C values were at the low end of the normal range at the start of the trial and were upregulated towards their normal range by week 2. Table 5 summarizes biomarker changes in patients after GM604 treatment in the GALS-001 and GALS-C studies.\n\nNotes to Table 5. GALS-C = single compassionate patient treatment; GALS-T = GALS-001 treated group; GALS-P = GALS-001 placebo group. ↑ = upregulation, ↓ = downregulation, DM* = disease modification, DP** = disease progression.\n\n\nDiscussion\n\nThis GALS-001 Phase 2A, multi-center, randomized, double-blind, placebo-controlled pilot trial was performed as part of the development program for GM604. The study was designed to test proof of principle, with the objectives of testing the safety, tolerability and efficacy of GM604 in a small cohort of ALS patients, based upon changes in ALS biomarkers and measures of clinical progression29.\n\nOur findings show that GM604 was safe and well tolerated at the doses administered in this study (i.e., 320 mg by IV bolus injection 3X/week for two consecutive weeks).
Ad hoc analysis revealed that the GM604-treated group demonstrated improvements in disease outcomes, achieving statistical significance in the FVC clinical data at week 12 at Site 001. GM604 also changed the expression levels of three ALS plasma biomarkers (SOD1, total tau, and TDP-43). The GM604-treated group exhibited a trend towards slower disease progression compared with placebo-treated patients. Although ALSFRS-R at week 12 did not show a statistically significant difference between the GM604-treated and placebo groups, ad hoc analysis showed trends towards improvement.\n\nPrevious clinical studies in patients with ALS have suggested that biomarker concentrations in plasma, serum, and CSF can be predictive of disease progression32,33. Therefore, a primary endpoint of the present study was the percentage change of each biomarker between baseline and week 12. Although changes in CSF biomarker levels were observed over time, from baseline through week 12, no statistically significant changes were observed in the CSF biomarkers SOD1, total tau, Cystatin C, and pNFH15–19,26–28. Plasma biomarkers, in contrast, showed stronger differences between GM604-treated and placebo-treated patients. For example, plasma TDP-43 was significantly reduced, 34% below baseline, at week 12 (Figure 2). Consistent with this, the slope in plasma TDP-43 from baseline to week 12 in GM604-treated patients (-3.513 pg/ml, representing a change of -34%) was lower than that in placebo patients (0.493 pg/ml, representing a change of 6%) (p = 0.0078, -34% vs 6%, mixed model; Figure 2). Plasma SOD1 in the GM604-treated group also showed a significant reduction at week 2 compared with baseline (p = 0.009; one-tailed paired t-test; Figure 1B).
Finally, the reduction in plasma total tau achieved statistical significance in percentage change at week 6 between the treated and placebo patients (p = 0.0369, -27.69% vs 13.23%, Wilcoxon Rank Sum Test, Dataset 14, Figure 3).\n\nThe biomarker results in GALS-001 suggest that GM6 modulates ALS disease through multiple pathways. Our findings suggest a tentative mechanism of action (MOA) by which GM6 could prolong motor neuron survival in ALS patients. We propose a “tripartite mechanism”8. First, by reducing SOD1 expression, GM6 may block accumulation of pathologic SOD1 aggregates in motor neurons. Second, by reducing mitochondrial gene expression and potentially mitochondrial abundance (decreasing total tau), GM6 may disrupt the mitochondrial (intrinsic) apoptotic pathway. Third, GM6 appears to activate developmental/mitotic pathways (Cystatin C), which may promote cellular repair, axonogenesis, and neuron projection.\n\nWe did not observe significant changes in some clinical efficacy measures (HHD, TUG, grip strength). Early changes in muscle strength are difficult to measure accurately by HHD because the accuracy of HHD decreases with higher muscle strength19. Grip strength and HHD assessments showed substantial variability, owing to the different handedness of the patients and to the disease potentially affecting one side of the body slightly differently from the other. TUG may also not be an ideal clinical measurement for ALS trials because, as ALS progresses, many patients become unable to perform it. In this trial, 50% of the patients receiving placebo were unable to perform TUG at week 12.\n\nThe GALS-C patient is an unusual case, having survived 10 years when the average life expectancy is 2 to 5 years (http://www.alsa.org/about-als/facts-you-should-know.html).
The GALS-C patient’s SOD1 and total tau biomarkers were below the normal range, and GM604 upregulated them towards the normal range, whereas the SOD1 and total tau biomarkers of GALS-001 trial patients were above the normal range and GM604 downregulated them towards the normal range. While it is difficult to establish strong conclusions from a single patient, these results suggest that GM604 may have homeostatic effects on biomarker abundance (i.e., decreasing biomarkers when abnormally elevated and increasing biomarkers when abnormally repressed). In this respect, GM604 may not strictly act as an agonist or antagonist, but may instead have more complex and patient-specific effects depending on baseline status. Further studies and analyses of larger patient cohorts will be needed to address this possibility.\n\nFor some analyses, patients in the present study were compared with placebo-treated patients from a clinical study designed to evaluate the safety and efficacy of ceftriaxone treatment in definite ALS patients (Dataset 1744)46,47. The use of historical placebo data may increase the clinical relevance of efficacy and safety information that can be gleaned from the current trial21,48. This may reduce type I error and improve statistical power for evaluating outcomes and endpoints in a small study49. However, inherent differences between study populations (for example, in diagnostic criteria, the underlying patient population, and concomitant standards of care) may confound such comparisons. The comparison with historical placebo data therefore needs to be interpreted with caution.\n\nAll data reported here have been submitted to the FDA. The FDA has since encouraged Genervon to conduct a Phase 3 study under the special protocol assessment process.
Genervon is planning the Phase 3 clinical trial for 2017.\n\n\nConsent\n\nWritten informed consent for participation in the trial and publication of patient information was obtained from each patient.\n\n\nData availability\n\nDataset 1. Plasma SOD1 measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15329834\n\nDataset 2. Plasma total tau measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15329936\n\nDataset 3. Plasma TDP-43 measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330038\n\nDataset 4. CSF SOD1 measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330150\n\nDataset 5. CSF total tau measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330240\n\nDataset 6. CSF Cystatin C measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330351\n\nDataset 7. CSF pNFH measurements (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330452\n\nDataset 8. Adverse events data (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330553\n\nDataset 9. Serious adverse event data (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330654\n\nDataset 10. ALS Functional Rating Scale – Revised (ALSFRS-R) data (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330743\n\nDataset 11. FVC data (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15330845\n\nDataset 12. Biomarker data for GALS-C.\n\nDOI: 10.5256/f1000research.10519.d15330955\n\nDataset 13. Source table for calculating the percentage change from baseline to week 2 for plasma SOD1.\n\nDOI: 10.5256/f1000research.10519.d15331035\n\nDataset 14. Source table for calculating the percentage change from baseline to week 6 for plasma total tau.\n\nDOI: 10.5256/f1000research.10519.d15331137\n\nDataset 15. Comparison of disease progression determined by changes in plasma TDP-43.\n\nDOI: 10.5256/f1000research.10519.d15331239\n\nDataset 16. Source table for ALSFRS-R before and after treatment (GALS-001).\n\nDOI: 10.5256/f1000research.10519.d15331342\n\nDataset 17.
Source table comparing ALSFRS-R data in GM604-treated patients with data from the historical control cohort46,47.\n\nDOI: 10.5256/f1000research.10519.d153314", "appendix": "Author contributions\n\n\n\nDK designed the study. MK prepared the first draft of the manuscript. PL, RMWC and TS contributed to manuscript preparation. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nDorothy Ko is an executive of the company and has ownership interest in Genervon Biopharmaceuticals, LLC, the sponsor of this trial.\n\n\nGrant information\n\nThis study was funded by Genervon Biopharmaceuticals, LLC.\n\n\nAcknowledgments\n\nEditorial assistance was provided by WCCT Global, LLC, funded by Genervon Biopharmaceuticals, LLC.\n\n\nSupplementary material\n\nSupplementary file 1: Article manuscript showing where each component of the CONSORT checklist has been adhered to. Clinical trials must adhere to the CONSORT reporting guidelines.\n\nSupplementary file 2: Completed CONSORT flow diagram. Clinical trials must adhere to the CONSORT reporting guidelines.\n\n\nReferences\n\nDeLoach A, Cozart M, Kiaei A, et al.: A retrospective review of the progress in amyotrophic lateral sclerosis drug discovery over the last decade and a look at the latest strategies. Expert Opin Drug Discov. 2015; 10(10): 1099–118. PubMed Abstract | Publisher Full Text\n\nKatz JS, Barohn RJ, Dimachkie MM, et al.: The Dilemma of the Clinical Trialist in Amyotrophic Lateral Sclerosis: The Hurdles to Finding a Cure. Neurol Clin. 2015; 33(4): 937–47. PubMed Abstract | Publisher Full Text\n\nEisen A: Amyotrophic Lateral Sclerosis is a Multifactorial Disease. Muscle Nerve. 1995; 18(7): 741–752. 
PubMed Abstract | Publisher Full Text\n\nXinyu D, Weiquan H: Localization and morphometric study on motoneuronotrophic factor 1 and its receptor in developing chorionic villi of human placenta. Acta Anatomica Sinica. 1998; 29: 86–89. Reference Source\n\nChau R, Ren F, Huang W, et al.: Muscle neurotrophic factors specific for anterior horn motoneurons of rat spinal cord. Recent Adv Cell Mol Biol. 1992; 5: 89–94.\n\nLu H, Le WD, Xie YY, et al.: Current Therapy of Drugs in Amyotrophic Lateral Sclerosis. Curr Neuropharmacol. 2016; 14(4): 314–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu J, Zhu H, Ko D, et al.: Motoneuronotrophic factor analog GM6 reduces infarct volume and behavioral deficits following transient ischemia in the mouse. Brain Res. 2008; 1238: 143–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwindell WR, Bojanowski K, Kindy M, et al.: GM604 down-regulates SOD1 and alters expression of 89 genes associated with amyotrophic lateral sclerosis [version 1; not peer reviewed]. F1000Res. 2016; 5: 2836. Publisher Full Text\n\nBrooks BR: El Escorial World Federation of Neurology criteria for the diagnosis of amyotrophic lateral sclerosis. Subcommittee on Motor Neuron Diseases/Amyotrophic Lateral Sclerosis of the World Federation of Neurology Research Group on Neuromuscular Diseases and the El Escorial “Clinical limits of amyotrophic lateral sclerosis” workshop contributors. J Neurol Sci. 1994; 124 Suppl: 96–107. PubMed Abstract | Publisher Full Text\n\nBrooks BR, Miller RG, Swash M, et al.: El Escorial revisited: revised criteria for the diagnosis of amyotrophic lateral sclerosis. Amyotroph Lateral Scler Other Motor Neuron Disord. 2000; 1(5): 293–299. PubMed Abstract | Publisher Full Text\n\nhttp://www.ich.org/home.html\n\nNagata E, Ogino M, Iwamoto K, et al.: Bromocriptine Mesylate Attenuates Amyotrophic Lateral Sclerosis: A Phase 2a, Randomized, Double-Blind, Placebo-Controlled Research in Japanese Patients. PLoS One. 
2016; 11(2): e0149509. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClinicalTrials.gov registry Identifier: NCT01854294: GM60404 Phase 2A Randomization Double-blind Placebo Controlled Pilot Trial in Amyotrophic Lateral Disease (ALS) (GALS-001). Genervon Biopharmaceuticals, LLC, 2013. Reference Source\n\nCedarbaum JM, Stambler N, Malta E, et al.: The ALSFRS-R: a revised ALS functional rating scale that incorporates assessments of respiratory function. BDNF ALS Study Group (Phase III). J Neurol Sci. 1999; 169(1–2): 13–21. PubMed Abstract | Publisher Full Text\n\nTraynor BJ, Zhang H, Shefner JM, et al.: Functional outcome measures as clinical trial endpoints in ALS. Neurology. 2004; 63(10): 1933–5. PubMed Abstract | Publisher Full Text\n\nLunetta C, Lizio A, Sansone VA, et al.: Strictly monitored exercise programs reduce motor deterioration in ALS: preliminary results of a randomized controlled trial. J Neurol. 2016; 263(1): 52–60. PubMed Abstract | Publisher Full Text\n\nRuiz-López FJ, Guardiola J, Izura V, et al.: Breathing pattern in a phase I clinical trial of intraspinal injection of autologous bone marrow mononuclear cells in patients with amyotrophic lateral sclerosis. Respir Physiol Neurobiol. 2016; 221: 54–8. PubMed Abstract | Publisher Full Text\n\nMontes J, Cheng B, Diamond B, et al.: The Timed Up and Go test: predicting falls in ALS. Amyotroph Lateral Scler. 2007; 8(5): 292–5. PubMed Abstract | Publisher Full Text\n\nBeck M, Giess R, Würffel W, et al.: Comparison of maximal voluntary isometric contraction and Drachman's hand-held dynamometry in evaluating patients with amyotrophic lateral sclerosis. Muscle & Nerve. 1999; 22(9): 1265–1270. PubMed Abstract | Publisher Full Text\n\nNoto Y, Shibuya K, Sato Y, et al.: Elevated CSF TDP-43 levels in amyotrophic lateral sclerosis: specificity, sensitivity, and a possible prognostic value. Amyotroph Lateral Scler. 2011; 12(2): 140–3. 
PubMed Abstract | Publisher Full Text\n\nBoylan KB, Glass JD, Crook JE, et al.: Phosphorylated neurofilament heavy subunit (pNF-H) in peripheral blood and CSF as a potential prognostic biomarker in amyotrophic lateral sclerosis. J Neurol Neurosurg Psychiatry. 2013; 84(4): 467–472. PubMed Abstract | Publisher Full Text\n\nKasai T, Tokuda T, Ishigami N, et al.: Increased TDP-43 Protein in Cerebrospinal Fluid of Patients with Amyotrophic Lateral Sclerosis. Acta Neuropathol. 2009; 117(1): 55–62. PubMed Abstract | Publisher Full Text\n\nEgawa N, Kitaoka S, Tsukita K, et al.: Drug screening for ALS using patient-specific induced pluripotent stem cells. Sci Transl Med. 2012; 4(145): 145ra104. PubMed Abstract | Publisher Full Text\n\nLing SC, Polymenidou M, Cleveland DW: Converging mechanisms in ALS and FTD: disrupted RNA and protein homeostasis. Neuron. 2013; 79(3): 416–38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLing JP, Pletnikova O, Troncoso JC, et al.: TDP-43 repression of nonconserved cryptic exons is compromised in ALS-FTD. Science. 2015; 349(6248): 650–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPijnenburg YA, Verwey NA, van der Flier WM, et al.: Discriminative and prognostic potential of cerebrospinal fluid phosphoTau/tau ratio and neurofilaments for frontotemporal dementia subtypes. Alzheimers Dement (Amst). 2015; 1(4): 505–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRen Y, Zhu W, Cui F, et al.: Measurement of cystatin C levels in the cerebrospinal fluid of patients with amyotrophic lateral sclerosis. Int J Clin Exp Pathol. 2015; 8(5): 5419–26. PubMed Abstract | Free Full Text\n\nWilson ME, Boumaza I, Lacomis D, et al.: Cystatin C: a candidate biomarker for amyotrophic lateral sclerosis. PLoS One. 2010; 5(12): e15133. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPasinetti GM, Ungar LH, Lange DJ, et al.: Identification of potential CSF biomarkers in ALS. Neurology. 2006; 66(8): 1218–22. 
PubMed Abstract | Publisher Full Text\n\nWiner L, Srinivasan D, Chun S, et al.: SOD1 in cerebral spinal fluid as a pharmacodynamic marker for antisense oligonucleotide therapy. JAMA Neurol. 2013; 70(2): 201–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGanesalingam J, An J, Shaw CE, et al.: Combination of neurofilament heavy chain and complement C3 as CSF biomarkers for ALS. J Neurochem. 2011; 117(3): 528–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGanesalingam J, An J, Bowser R, et al.: pNfH is a promising biomarker for ALS. Amyotroph Lateral Scler Frontotemporal Degener. 2013; 14(2): 146–9. PubMed Abstract | Publisher Full Text\n\nOeckl P, Jardel C, Salachas F, et al.: Multicenter validation of CSF neurofilaments as diagnostic biomarkers for ALS. Amyotroph Lateral Scler Frontotemporal Degener. 2016; 17(5–6): 404–13. PubMed Abstract | Publisher Full Text\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 1 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 13 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 2 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. 
Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 14 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 3 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 15 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 5 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nRavits J, La Spada AR: ALS motor phenotype heterogeneity, focality, and spread: deconstructing motor neuron degeneration. Neurology. 2009; 73(10): 805–811. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 16 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 10 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. 
Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 17 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 11 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nBerry JD, Shefner JM, Conwit R, et al.: Design and initial results of a multi-phase randomized trial of ceftriaxone in amyotrophic lateral sclerosis. PLoS One. 2013; 8(4): e61177. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCudkowicz ME, Titus S, Kearney M, et al.: Safety and efficacy of ceftriaxone for amyotrophic lateral sclerosis: a multi-stage, randomised, double-blind, placebo-controlled trial. Lancet Neurol. 2014; 13(11): 1083–1091. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGordon PH, Moore DH, Miller RG, et al.: Efficacy of minocycline in patients with amyotrophic lateral sclerosis: a phase III randomised trial. Lancet Neurol. 2007; 6(12): 1045–53. PubMed Abstract | Publisher Full Text\n\nViele K, Berry S, Neuenschwander B, et al.: Use of historical control data for assessing treatment effects in clinical trials. Pharm Stat. 2014; 13(1): 41–54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 4 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. 
Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 6 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 7 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 8 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 9 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source\n\nKindy M, Lupinacci P, Chau R, et al.: Dataset 12 in: A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient trial (Protocol GALS-C). F1000Research. 2017. Data Source" }
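As context for the percent-change summaries behind Datasets 13–15, the underlying arithmetic is a standard percent-change-from-baseline calculation plus a comparison of biomarker trajectories over time. The following is a minimal, self-contained sketch: the biomarker values are hypothetical illustrations, not trial data, and the simple least-squares slope merely stands in for the mixed-model slope analysis described in the paper.

```python
# Sketch of the percent-change-from-baseline calculation used for biomarker
# tables such as Datasets 13-15. All values below are hypothetical, NOT data
# from the GALS-001 trial.

def percent_change_from_baseline(baseline, value):
    """Percent change of a follow-up measurement relative to baseline."""
    return (value - baseline) / baseline * 100.0

def least_squares_slope(weeks, values):
    """Ordinary least-squares slope of biomarker level over time (units/week)."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_v = sum(values) / n
    num = sum((w - mean_w) * (v - mean_v) for w, v in zip(weeks, values))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Hypothetical plasma biomarker series at baseline, week 2, week 6, week 12:
weeks = [0, 2, 6, 12]
treated = [100.0, 92.0, 72.0, 66.0]     # declining under treatment
placebo = [100.0, 101.0, 104.0, 106.0]  # slowly rising under placebo

print(percent_change_from_baseline(treated[0], treated[2]))  # -28.0
print(least_squares_slope(weeks, treated))   # negative slope
print(least_squares_slope(weeks, placebo))   # positive slope
```

With these hypothetical series, the treated slope is negative and the placebo slope is positive, mirroring only the direction (not the magnitude or statistical inference) of the slope comparisons reported for the trial biomarkers.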
[ { "id": "24163", "date": "13 Jul 2017", "name": "Walter G. Bradley", "expertise": [ "ALS", "clinical trials", "biomarkers", "neurology" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nGM604 is a potentially interesting molecule for the treatment of ALS. Therefore, the results of this Phase IIa trial need to be carefully reviewed. There have been many similar Phase IIa trials of potentially interesting drugs for ALS, which have subsequently failed in large Phase III trials for 3 main reasons: too small numbers; the variability of ALS leads to cohort effects [better patients entering one arm of the study will make the treatment in that arm appear more favorable]; over-enthusiastic interpretation of results of a preliminary Phase II trial.\nI am very surprised that clinical investigators from the 2 trial sites are not included in the authorship of the paper. At the least, I would like to see a reason, and letters from the heads of those clinical sections that did the study, Drs. Cudkowicz and Mitsumoto, stating that they have read the paper and support its contents and conclusions. 
Without this, the current paper is a sponsor-derived report.\nThe authors use the term \"multi-center\", and this should be changed to \"two-center\".\nRegarding safety: The authors should provide the FVC values for the patient who experienced respiratory distress leading to hospitalization, in order to allow assessment of whether this SAE could have been due to the drug. They should also provide more information about the rash in one patient - type, distribution, severity, response to treatment, and any resultant change in the trial.\nRegarding efficacy: The trial showed no significant change in the clinical parameters studied. This is not surprising since the trial only lasted 12 weeks. The differences between FVC values at the 2 sites are probably related to variability of the disease, not to a treatment effect. Since the study had a placebo arm, there is no reason to include reference to historical controls, and that section of the results should be deleted. The authors comment that \"to be able to measure any change in disease progression before and after treatment, the ALSFRS-R was analyzed using mixed model analysis.\" However, since they had pre-screening values on only 3 of their 12 subjects, no statistical model can provide such an assessment. Therefore, I do not support the assertion that they were able to examine pre-treatment progression, and recommend deletion of the statement in the Abstract that [we] \"observed .... improved functional measures.\"\nRegarding biomarkers: Most of this paper is devoted to changes in potential biomarkers. Though the changes are interesting and worthy of report, the editors and the readers should understand that these changes do not prove efficacy. 
The gold standard remains demonstrating efficacy in clinically relevant measures of disease.\nRegarding the compassionate use patient: Since no significant clinical effect was seen in the 8 patients receiving active treatment and followed for 12 weeks, it is difficult to treat as reliable the unblinded clinical observations reported. This case is of interest to the sponsors of the study, but should not be included in a scientific report of this Phase IIa trial.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly", "responses": [ { "c_id": "2977", "date": "29 Aug 2017", "name": "Mark Kindy", "role": "Author Response", "response": "We appreciate Dr. Bradley’s review of 7/17/2017. The following are responses to the questions in his review.\n\nEfficacy: A 12-week pilot trial with 8 treated and 4 placebo patients would not be expected by anyone to show any trend or result. Genervon was willing to do the proof-of-concept trial hoping that the unique endogenous regulator peptide therapy drug candidate GM6 could prove to be efficacious despite the obstacles of a trial that is too small and too short. Although a larger trial is needed to confirm these findings, the present data are encouraging and support GM604 as an ALS drug candidate. Genervon is planning a Phase 3 ALS trial in the US under Special Protocol Assessment, as suggested by the FDA in 2017. Enrollment details will be announced later. 
Biomarkers: Biomarkers are not the gold standard, but they are highly significant for proving not only the MOA but also the modulation of critical, corrupted, disease-specific gene/protein expression in treated ALS patients (not in vitro or in vivo studies). Not only is Merit intrigued, she encouraged us to move forward because of these unique data. Most pharma companies are talking to us because of the biomarker data from the treated ALS patients.\n\nEfficacy: The FDA approved Genervon's use of historical controls because there were only 4 placebo patients in the pilot trial.\n\nFVC data: For the patient who experienced respiratory distress leading to hospitalization, the information can be found in the Datasets. Dataset 11 lists all FVC data for each subject. Dataset 9 reports that Subject 203 had an SAE: dyspnea, which was unrelated to GM604; he had moderate respiratory disorders NEC and shortness of breath, but no breathing abnormalities. Subject 203 lived in Germany and flew back to Germany soon after receiving 6 doses. The MGH PI determined in the SAE report that the shortness of breath was unrelated to GM604. This SAE occurred in Germany on 5-Dec-2013, after the patient flew back from Boston to Germany, 14 days after he received his last dose (6th dose) on 22-Nov-2013 in Boston. He did not complete any other assessment after Visit 6, except that he gave the ALSFRS-R by phone for Visit 12. He should not have been allowed to enroll, since he was not planning to comply with the protocol requirements. Subject 203's FVC data are as follows: pre-screening 6-Aug-13, 96%; screening 5-Nov-13, 79%; Visit 1 (baseline, dose 1) 11-Nov-13, 75%; Visit 6 (Week 2, dose 6) 22-Nov-13, 84%. Please note that his FVC decreased from screening (79%) to baseline (75%) before the first dose. His FVC improved to 84% at Visit 6 after 6 doses compared to baseline and screening.\n\nRash: Dataset 8 lists all the AEs, including rash. 
Subject 203 had a rash on the forehead and along the crease from nose to mouth on 15-Nov-2013, 5 days from the start of dosing; it was mild, not an SAE, unlikely related to GM604 according to the PI, and resolved completely. Subject 207 had a rash below the injection site on the left arm on 6-Feb-2014 (Visit 5), 11 days from first dosing; the rash was local, moderate, not an SAE, possibly related to GM604, and resolved completely. The subject did not receive the last dose (Visit 6).\n\nStatistical analysis: For the analysis of ALSFRS-R pre- and post-treatment between the GM604-treated and placebo groups, our statistician used the ALSFRS-R from screening to Visit 1 (dose 1), not from pre-screening to Visit 1. Screening was supposed to be within two weeks of the first dose. The ALS Phase 2A study start date was August 2013. Pre-screening data are not part of the study. The pre-screening data were taken from the subjects' clinical records before study start as a guide for the PI in considering whether to recruit the patient, although pre-screening data were reported in Dataset 10. As you can see in Dataset 10, the pre-screening dates were two to three months before the study start in August: Subject 102 had pre-screening on 6/24/2013, Subject 201's pre-screening was on 5/16/2013, and Subject 202 had pre-screening on 6/4/2013. Therefore, there were data from all twelve subjects (not three) from screening to baseline to be used in the mixed model analysis of pre-treatment progression.\n\nInclusion of data from the compassionate use patient: The compassionate use patient had had ALS for 10 years, is quadriplegic, and is on a ventilator full time. His ALS has progressed much further than that of any of the subjects in the trial. GM6 was discovered as an endogenous multi-target regulator based on our innovative hypothesis and paradigm. Based on many pre-clinical studies, Genervon has confidence in the efficacy of GM6 and was willing to treat even very advanced ALS patients under an expanded access IND. 
Reporting the findings of the very advanced ALS patient's functional responses and his biomarker responses to GM604 is of interest as a footnote, because most clinical trials would not have included patients at such an advanced state. We hope this addresses the concerns." } ] }, { "id": "29817", "date": "12 Mar 2018", "name": "Nitin Saksena", "expertise": [ "Human genomics", "miRNA regulation of genes", "neurodegenerative diseases", "infectious diseases", "human genome and microbiome mapping", "gene expression" ], "suggestion": "Approved", "report": "Approved\n\nIn this paper, Kindy M et al. of Genervon conducted a Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS).\nGenervon discovered an endogenous embryonic stage regulator of the human nervous system named Motoneuronotrophic Factor (MNTF). GM604 is a peptide with a sequence identical to one of the active sites of human motoneuronotrophic factor (MNTF)7. MNTF is an endogenous human embryonic stage neural regulatory and signaling peptide that controls the development, monitoring and correction of the human nervous system. Genervon has taken advantage of these functional attributes of the MNTF peptide, termed GM604, in this paper as a potential therapy for ALS. GM6 has been modeled upon an insulin receptor tyrosine kinase binding motoneuronotrophic factor within the developing central nervous system. 
GM604 modulates many ALS-associated genes, promoting decreased expression of superoxide dismutase (SOD1), repression of genes associated with the intrinsic apoptosis pathway, and increased expression of genes associated with mitosis and cell division.\n\nThis study relates to a 2-center, phase 2A, randomized, double-blind, placebo-controlled pilot trial with 12 definite ALS patients who were diagnosed within 2 years of disease onset. During the trial, patients received 6 doses of GM604 or placebo, administered as slow IV bolus injections (3x/week, 2 consecutive weeks). The main objectives of this Phase 2A trial were to assess the safety and efficacy of GM604 based on ALSFRS-R, FVC and selected biomarkers (TDP-43, Tau, SOD1, and pNFH). In addition, this clinical trial was also extended to a compassionate inclusion and treatment protocol (GALS-C) for an advanced ALS patient. Thus, this protocol includes patients with early onset of ALS and 1 advanced ALS patient.\nIn this study, definite ALS patients were randomized to one of two treatment groups (GM604, n = 8; placebo, n = 4). During the trial, 2 of 8 GM604-treated patients exhibited a mild rash, but otherwise adverse event frequency was similar in the treated and placebo groups. GM604 slowed functional decline (ALSFRS-R) when compared to a historical control (P = 0.005). At one study site, a statistically significant difference between treatment and control groups was found when comparing changes in respiratory function (FVC) between baseline and week 12 (P = 0.027). GM604 decreased plasma levels of key ALS biomarkers relative to the placebo group (TDP-43, P = 0.008; Tau, P = 0.037; SOD1, P = 0.009). SOD1 showed considerable reduction in plasma, with increased levels in the placebo arm. The percentage change in plasma total tau was significantly decreased, approximately 28% below baseline (P = 0.0369), at week 6 (visit 7) after active GM604 treatment compared to placebo. 
The percentage change in slope by treatment interaction in plasma TDP-43 from baseline (visit 1) through week 12 (visit 8) was -34% in the GM604-treated group and +6% in the placebo group (P = 0.0078). The p-value of 0.0078 indicates a significant difference in slopes between GM604 and placebo up to week 12. The advanced ALS patient under compassionate treatment demonstrated improved speech, oral fluid consumption, and mouth suction with GM604 treatment, along with biomarker improvements.\n\nComments: First, although the trial was small in nature, with the inclusion of only 12 patients (8 in the treatment arm and 4 in the placebo arm), it clearly demonstrates that the drug GM604 is completely safe to use, with only minor adverse events that could be clinically managed. The trial has been very well managed, scrutinized, and clinically and experimentally planned/executed, along with thorough experimentation for the analysis of biomarkers. The statistical analysis within the realm of the study is to-the-point, and the statistical assessments of plasma and CSF-derived biomarker data between baseline and treatment time points are rigorous. Overall, there is also considerable experimental rigor in the study, which provides an unbiased viewpoint of safety, coupled with the modulation of gene expression in response to the drug. The procedures for the ALSFRS efficacy measurements, which include FVC, TUG, HHD, etc., have been performed with clinical rigor.\n\nSecond, the trial also clearly demonstrates that the genes involved in ALS, and the biomarkers of the disease (TDP-43, Tau, SOD1), showed a favourable expression shift upon treatment with GM604, suggesting that these genes are its possible targets. 
This also shows evidence that GM604 was able to alter the expression of certain ALS genes and biomarkers in the positive direction, evidence not shown before with any drug on the market for ALS treatment.\n\nAlthough improved functional measurements were observed during the Phase 2A study and in the advanced ALS patient, they failed to meet statistical significance. A larger trial is needed in the light of these findings. It is also understandable from the authors' point of view that clinical improvement measures in ALS are highly variable and difficult to achieve with a drug that is mechanistically different, acting on gene modulation. In addition, most drugs trialled for ALS encounter the same problem, and the drug GM604 not only concurs with other drugs, but also is in line with the natural progression and history of the disease. One significant advantage GM604 offers is the multi-target effect, which none of the ALS drugs have achieved. Thus, it is likely that with repeated use of GM604, which modulates gene expression, patients will start showing improvement once the impaired genes involved in ALS are restored to their normal function.\n\nIn the CSF, there were suggestive trends with no statistically significant changes in CSF biomarker levels. SOD1 levels decreased at week 6 (visit 7) following treatment with GM604, but increased following placebo treatment.\n\nTotal CSF tau was decreased after the end of week 2 (visit 6, final dose) of active treatment with GM604. Tau increased following placebo treatment. Cystatin C increased after the end of week 2 (visit 6, final dose) and week 12 (visit 8) following treatment with GM604, and was decreased following placebo treatment.\nThese are positive and very interesting indications at the gene expression level for these highly ALS-related biomarkers. What other genes did the authors encounter alongside these biomarkers? 
This is key in unravelling functionally related genes.\n\nAt the level of gene analysis, I believe the authors have done an excellent job with the key ALS genes in the plasma and CSF compartments, but more functional analysis of genes is needed to guide us towards the genes that are mechanistically related to the ones the authors have analysed, and the genes that fit in the bigger schema of neurodegeneration.\n\nOverall comments: It is a tightly controlled study with a logical set of clinical and experimental parameters. Interesting to note is the multi-target nature of GM604, which has not been shown by any other drug for ALS. Mechanistically, the drug is tentative, but larger clinical trials can evaluate a more precise mechanism. As it stands currently, it is clear that this neuromodulator acts on a large number of genes that are involved in neurodegeneration; therefore, a clear definition of genes and pathways is needed to address its precise mechanism. Its utility in other neurodegenerative diseases also appears likely.\nThe clinical improvement scores were minimal and transitory, but this is expected and is in line with the clinical history of the patients, the duration of the study (12 weeks), and the nature of ALS as such. As the drug works on gene expression, more time is needed for improvement in clinical scores, along with repeated use.\nOverall, the authors have provided an excellent discussion in relation to previously published studies, and this study is unique and exhaustive in terms of assessments of various clinical and experimental parameters. All in all, a bigger study is warranted to conclude on statistical improvement in clinical parameters, in addition to involving a more exhaustive panoply of genes that can critically demonstrate the relationship with the biomarkers SOD1, Tau, TDP-43, Cystatin C, etc. This is critical!\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] }, { "id": "35982", "date": "17 Jul 2018", "name": "Héctor R. Martínez", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nI am sending my comments and criticism on the research article entitled A Phase 2A randomized, double-blind, placebo-controlled pilot trial of GM604 in patients with Amyotrophic Lateral Sclerosis (ALS Protocol GALS-001) and a single compassionate patient treatment (Protocol GALS-C), written by Kindy M, Lupinacci P, Chau R, Shum T and Ko D.\nThis article describes GM604 as a peptide with a sequence identical to one of the active motoneuronotrophic factors. This peptide modulates many ALS-associated genes, promoting decreased expression of SOD1, repression of genes associated with the intrinsic apoptosis pathway, and increased expression of genes associated with mitosis and cell division.
It is a very interesting peptide that may be investigated in this neurologic disorder as well as in other neurodegenerative and neurovascular disorders, including ischemic stroke.\n\nAccording to this paper, GM604 is a safe drug; I therefore consider that it should be applied in a larger number of ALS patients, since this trial includes only 12 ALS patients (8 treated with this peptide and 4 treated with placebo).\n\nThis article is described as a multi-center phase 2A, double-blind, randomized, placebo-controlled pilot trial; however, in the abstract it is reported as a 2-center trial.\n\nDiagnosis of definite ALS was performed according to the El Escorial criteria. However, it is not described whether clinical and neurophysiologic criteria, including the Awaji criteria, were used.\n\nThe procedures used for patient evaluation in this trial were performed with scientific rigor. However: a) there is a small number of ALS patients (8 in the treatment and 4 in the placebo group); b) the authors describe objective measurements including the ALSFRS-R, Timed Up and Go, Hand-Held Dynamometry and FVC, although most of these objective measurements are not very sensitive for follow-up in a drug trial; I consider that these measurements, especially the ALSFRS-R, should be performed every 4 weeks for at least 24 weeks.\n\nThe authors did not describe in the methods section the type of CSF analysis performed. However, they stated in the statistical analysis section that biomarkers in plasma and CSF were compared between treatment groups.\n\nBiomarkers in blood and CSF are not considered objective measurements of clinical outcome; rather, some biomarkers may be used to assess the possible effect of experimental drugs or cell therapy on the motor neuronal environment.
The biomarkers so far reported in the literature are considered potential candidates for diagnosis or prognosis in ALS patients, including Cystatin C, SOD1, pNfH (neurofilament heavy chain), the Tau/tau ratio, and the CSF cytokine pathway recently reported by our research group.\n\nA table reporting adverse events in ALS treated patients and controls has to be integrated into the paper.\n\nThis is a research article that involves treated and control ALS subjects; therefore, I believe that historical controls are not needed.\n\nThe authors did not report in the methods what type of CSF analysis was done in treated and control ALS patients. However, in the statistical analysis they compared biomarker results in blood and CSF between treatments.\n\nAll patients included in this research are in an early stage of ALS. There is a short time between clinical onset and inclusion in the trial (8.9 and 5.2 months in the treated and placebo groups, respectively). I believe that these patients should be followed for at least 6 months after inclusion in this research protocol.\n\nIt was previously reported that the average decline in the ALSFRS-R score in ALS patients is -1.1 points per month and -13.32 points per year. Therefore, in this trial a comparison has to be made between treated ALS patients and those in the placebo group.\n\nMerit Cudkowicz described a decline in ALSFRS-R score of -3.6 points for the placebo group and -2.2 points in patients using dexpramipexole after a three-month follow-up period. Since the observation period in this research trial (12 weeks) is similar to that of the Cudkowicz report, a comparison of ALSFRS-R scores between them is worthwhile.\n\nRecently, slow and fast progressors have been identified in ALS patients using a probabilistic model of the ALSFRS-R score over a 12-month follow-up under placebo treatment.
I judge this analysis may be useful in this research protocol as well as in a future trial with GM604.\n\nIn this research article, objective measurements were performed, including the ALSFRS-R, Timed Up and Go and Hand-Held Dynamometry at screening, baseline, 2, 6 and 12 weeks, as well as safety and tolerability, which was assessed in only eleven patients. However, in TUG, grip strength and HHD scores, no significant treatment differences were observed between placebo and treatment groups. No statistically significant changes were found in FVC between the ALS treated and placebo groups. The slope of the ALSFRS-R for the placebo group changed minimally before and after treatment, and patients treated with GM604 had no statistically significant changes at 12 weeks. I believe this lack of significant differences between the treated and placebo groups is due to: a) the small number of patients and controls included in this trial; b) the short period of follow-up; and c) the objective measurements used in this trial, which need to be strengthened.\n\nAdditional objective measurements in this trial could include tractography, measuring the fractional anisotropy of the pyramidal tract, which originates mostly in the upper motor neurons of the frontal motor strip, and motor evoked potentials at baseline and 3 to 6 months after inclusion. These objective measurements may be more useful to objectively determine the potential protective effect of this peptide on motor neurons.\n\nFinally, I consider that the authors have presented an interesting paper, to be published as a short communication with the modifications I have described above, which include enrolling a higher number of patients in future trials as well as improving the objective measurements of safety and efficacy.
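The decline-rate comparison proposed in this review can be sketched in a few lines. The following is a minimal illustration, using only the figures quoted in the review (a literature average of -1.1 ALSFRS-R points per month, and the dexpramipexole trial's 3-month changes of -3.6 for placebo and -2.2 for treatment); it is not derived from the trial's source data.

```python
# Minimal sketch of the ALSFRS-R decline comparison proposed in the review.
# All figures come from the review text (literature averages), not from the
# trial's source data.

LITERATURE_DECLINE_PER_MONTH = -1.1  # average ALSFRS-R points/month

def projected_decline(months, rate=LITERATURE_DECLINE_PER_MONTH):
    """Expected ALSFRS-R change over a follow-up period at a constant rate."""
    return rate * months

# Over the ~3-month (12-week) observation window of this trial:
expected_3mo = projected_decline(3)  # about -3.3 points

# Dexpramipexole trial figures quoted in the review (3-month change):
placebo_3mo = -3.6
treated_3mo = -2.2

# A treated arm outperforms natural history if its decline is smaller
# in magnitude than the projected decline.
for label, change in [("placebo", placebo_3mo), ("dexpramipexole", treated_3mo)]:
    print(label, "vs projection:", round(change - expected_3mo, 1))
```

Note that the annual figure quoted in the review (-13.32 points/year) is slightly steeper than 12 × -1.1 = -13.2, so the monthly and yearly literature rates are only approximately consistent with each other.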
Combining the results obtained in this trial with others (the advanced ALS patient under compassionate treatment) is not suggested, since those results were already reported in another publication.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-230
https://f1000research.com/articles/6-222/v1
06 Mar 17
{ "type": "Research Article", "title": "Mitogenomes of Giant-Skipper Butterflies reveal an ancient split between deep and shallow root feeders", "authors": [ "Jing Zhang", "Qian Cong", "Xiao-Ling Fan", "Rongjiang Wang", "Min Wang", "Nick V. Grishin" ], "abstract": "Background: Giant-Skipper butterflies from the genus Megathymus are North American endemics. These large and thick-bodied Skippers resemble moths and are unique in their life cycles. Grub-like at the later stages of development, caterpillars of these species feed and live inside yucca roots. Adults do not feed and are mostly local, not straying far from the patches of yucca plants. Methods: Pieces of muscle were dissected from the thorax of specimens and genomic DNA was extracted (also from the abdomen of a specimen collected nearly 60 years ago). Paired-end libraries were prepared and sequenced for 150bp from both ends. The mitogenomes were assembled from the reads, followed by a manual gap-closing procedure, and a phylogenetic tree was constructed using a maximum likelihood method from an alignment of the mitogenomes. Results: We determined mitogenome sequences of the nominal subspecies of all five known species of Megathymus and of Agathymus mariae to confidently root the phylogenetic tree. Pairwise sequence identity indicates high similarity, ranging from 88-96%, among the coding regions for 13 proteins, 22 tRNAs and 2 rRNAs, with a gene order typical for mitogenomes of Lepidoptera. Phylogenetic analysis confirms that Giant-Skippers (Megathymini) originate within the subfamily Hesperiinae and do not warrant a subfamily rank. The genus Megathymus is monophyletic and splits into two species groups. M. streckeri and M. cofaqui caterpillars feed deep in the main root system of yucca plants and deposit frass underground. M. ursus, M. beulahae and M.
yuccae feed in the yucca caudex and roots near the ground, and deposit frass outside through a \"tent\" (a silk tube projecting from the center of the yucca plant). M. yuccae and M. beulahae are sister species, consistent with the morphological similarities between them. Conclusions: We constructed the first DNA-based phylogeny of the genus Megathymus from their mitogenomes. The phylogeny agrees with morphological considerations.", "keywords": [ "phylogeny", "mitochondria", "sequence assembly", "Hesperiidae", "Megathymini" ], "content": "\n\nGiant-Skippers (Lepidoptera: Hesperiidae: Megathymini) are large, fat-bodied butterflies endemic to the North American continent1–3. Their caterpillars are adapted to feeding inside the large roots and fleshy leaves of Yucca and Agave plants and their relatives. Protected from many predators by living within their nutrition-rich food sources, Megathymini are larger in size than most other skippers, and do not feed as adults. The genus Megathymus is characterized by root-feeding caterpillars, mostly in Yucca plants, that build a \"tent\" (a silk tube projecting above the ground) at least prior to pupation. Caterpillars of the genus Agathymus live inside Agave leaves and make a \"trap-door\" (a round, hardened disk of silk) to close the entrance to their leaf chamber before pupation.\n\nTo better understand the evolution and phylogeny of Megathymus, we sequenced complete mitogenomes of all five known species of the genus: M. yuccae, M. beulahae, M. ursus, M. streckeri, and M. cofaqui (http://www.butterfliesofamerica.com/L/Hesperiidae.htm). For most species, nominotypical subspecies from or near the type localities were used (see Figure 1 for specimen data; collected under permit #08-02Rev). The M. beulahae specimen, a male paratype, was from the National Museum of Natural History collection (Smithsonian Institution, Washington, DC, USA). To confidently root the Megathymus tree, we also sequenced a complete mitogenome of Agathymus mariae as an outgroup.
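The pairwise identity figures reported in the abstract (88–96% among coding regions) can be illustrated with a toy computation. The sketch below shows one common way to compute pairwise identity from two aligned sequences; it is not the authors' actual assembly or comparison pipeline, and the sequence fragments are invented for illustration.

```python
def pairwise_identity(seq_a, seq_b):
    """Fraction of identical positions between two aligned, equal-length
    sequences; alignment gaps ('-') are skipped in both numerator and
    denominator."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue  # ignore gapped columns
        compared += 1
        if a == b:
            matches += 1
    return matches / compared if compared else 0.0

# Toy aligned fragments (invented, not real Megathymus data):
print(round(pairwise_identity("ATGACC-TTA", "ATGACC-TCA"), 3))  # 0.889
```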
Methods for genomic DNA extraction, library construction, next-generation sequencing, and computational procedures followed those we reported previously4–14. The sequences have been deposited in GenBank and received accessions KY630500–KY630505.\n\nSpecies names for the mitogenomes reported here are colored red. Numbers by the nodes show bootstrap support values; branches with bootstrap support less than 70% are collapsed. GenBank accessions for sequences and data for specimens with mitogenomes reported here are: Achalarus lyciades NC_030602.1; Agathymus mariae mariae KY630504, voucher NVG-1647, female, USA: New Mexico, Eddy County, 22-Sep-2013; Ampittia dioscorides KM102732.1; Burara striata KY524446; Carterocephalus silvicola NC_024646.1; Celaenorrhinus maculosa NC_022853.1; Choaspes benjaminii NC_024647.1; Ctenoptilum vasava NC_016704.1; Daimio tethys NC_024648.1; Euschemon rafflesia KY513288; Erynnis montanus NC_021427.1; Hasora anura NC_027263.1; Hasora vitta NC_027170.1; Heteropterus morpheus NC_028506.1; Lerema accius NC_029826.1; Lobocla bifasciatus NC_024649.1; Megathymus beulahae beulahae KY630505, voucher 11-BOA-13385G05, paratype, male, Mexico, Hidalgo, near Ixmiquilpan, highway 85, klm. 176, 19-Aug-1957; Megathymus cofaqui cofaqui KY630503, voucher NVG-1536, female, USA: Georgia, Burke County, 2-Aug-2013; Megathymus streckeri streckeri KY630501, voucher NVG-1461, male, USA: Arizona, Apache County, southeast of Holbrook, 19-May-2013; Megathymus ursus violae KY630502, voucher NVG-1504, male, USA: Texas, Pecos County, Glass Mountains, 7-Jun-2013; Megathymus yuccae yuccae KY630500, voucher NVG-1185, male, USA: South Carolina, Aiken County, 25-Feb-2013; Papilio glaucus NC_027252; Parnara guttata NC_029136.1; Potanthus flavus NC_024650.1; Pyrgus maculatus NC_030192.1.\n\nAll specimens but one were collected in 2013, and pieces of muscle cut out of the thorax were preserved in 100% ethanol to ensure the best DNA quality. However, M.
beulahae paratype specimen was collected in 195715 and stored pinned, spread and dry in a museum drawer for 60 years. DNA was extracted from its abdomen prior to genitalia dissection and produced good quality genomic reads, resulting in a complete mitogenome assembly. In line with results reported previously16, we find that dry insect collections are an invaluable source of specimens for DNA studies; that DNA can be extracted from Lepidoptera without damaging specimens beyond the standard genitalia dissection procedure; and that good quality DNA sequences can be obtained from specimens collected many decades ago.\n\nSequence comparison revealed that the mitogenomes of all six Megathymini species were very similar, about 15K base pairs in length, coding for 13 proteins, 22 transfer RNAs and 2 ribosomal RNAs, with a gene order typical for mitogenomes of Lepidoptera. The A+T-rich control region is the most variable in sequence and length, and contains several direct repeats of about 360 bp present in all six species. Among Hesperiidae with available mitogenomes6,10,11,13,14,17–24, these repeats are unique to Megathymini. The repeats cause difficulty with mitogenome assembly, and their number remains uncertain.\n\nTo obtain the first DNA-based phylogeny of Megathymus, we constructed a maximum likelihood tree with RAxML25 (version 8.2.8, model GTRGAMMA, 100 bootstrap replicates) from the available high-quality mitogenomes of Hesperiidae6,10,11,13,14,17–24, rooted with the Pterourus glaucus (Papilionidae) sequence10 (Figure 1). While not giving confident resolution to the relationships between the subfamilies Eudaminae and Pyrginae, the tree confirms the placement of Megathymini within the subfamily Hesperiinae26–28 and argues against the historical treatment of Giant-Skippers at subfamily level. The tree resolves the Megathymini phylogeny with 100% bootstrap support, supports the monophyly of the genus Megathymus, and reveals a split between two species groups. The first group is formed by M. streckeri and M. cofaqui.
Caterpillars of these species feed deep in the main root system of yucca plants and deposit frass underground2,3. They build a tent only prior to pupation, and the tent usually projects from the ground surface. Males of these two species possess hair-like scales, particularly prominent on the dorsal hindwing. M. streckeri and M. cofaqui are the closest sister species among Megathymus (Figure 1). Due to their apparently close relationship and allopatric distribution, Scott has suggested that M. streckeri and M. cofaqui may be subspecies of the same biological species2. However, the COI barcode sequences we obtained show about 4% divergence between them, revealing significant differences and supporting the two taxa as distinct species; COI barcode divergence between different populations of the same species mostly falls within 2%29.\n\nThe second species group consists of M. yuccae, M. beulahae and M. ursus. Caterpillars of these species feed in the yucca caudex and in roots close to the ground, maintaining the tent throughout development and depositing frass outside the tent2,3. Males of these three species lack hair-like scales. M. yuccae and M. beulahae are sister species, as expected from their close morphological similarities. However, their COI barcodes show a very large divergence of 9%. This pronounced divergence was unexpected, because the two species are quite similar in appearance and some males are difficult to tell apart (http://www.butterfliesofamerica.com/L/t/Megathymus_a.htm). The most noticeable difference between M. yuccae and M. beulahae is the larger white ventral hindwing spots in the latter species, frequently fused to form a band, especially in females. However, these spots may be significantly reduced in males, particularly in the northern populations. Interestingly, M. beulahae is the only Megathymus species that feeds in the yucca-like Agave plant1,15, yet it is a confident sister of the Yucca-feeding M. yuccae. M. ursus is a sister to these two species. M.
ursus has a rather different wing shape and pattern. The wings are narrower, with a more extended apex; the forewing spots, which are well-separated in M. yuccae, form a band-like arrangement; and the hindwings lack the spots that female M. yuccae and M. beulahae possess.\n\nIn conclusion, we sequenced the mitochondrial genomes of all five known species of Megathymus and of one species of Agathymus as an outgroup, and constructed the first DNA-based phylogeny of Megathymus. The phylogeny is fully consistent with the morphological and behavioral similarities between species. Our results support the phylogenetic placement of Megathymini within the subfamily Hesperiinae and clarify the relationships between Megathymus species. In particular, the major phylogenetic split is between the shallow and deep yucca root feeders, and the significant mitochondrial DNA divergences between M. yuccae and M. beulahae and between M. streckeri and M. cofaqui support the species status of these allopatric and similar-looking taxa.", "appendix": "Author contributions\n\n\n\nJ.Z. assembled and analyzed the mitogenomes. Q.C. extracted DNA and prepared sequencing libraries. N.V.G. conceived the project and drafted the manuscript. J.Z., Q.C., X-L.F., RJ.W., M.W. and N.V.G. analyzed the data and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Institutes of Health [GM094575] and the Welch Foundation [I-1505].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe acknowledge the Texas Parks and Wildlife Department (Natural Resources Program Director David H. Riskind) for permit #08-02Rev, which makes research based on material collected in Texas State Parks possible. We are grateful to Robert K. Robbins, John M.
Burns, and Brian Harris (National Museum of Natural History, Smithsonian Institution, Washington, DC) for granting access to the collections under their care. We thank Lisa N. Kinch for critical suggestions and proofreading of the manuscript. This work was supported in part by the National Institutes of Health (GM094575 to NVG) and the Welch Foundation (I-1505 to NVG).\n\n\nReferences\n\nFreeman HA: Systematic review of the Megathymidae. J Lepid Soc. 1969; 23: 1–62. Reference Source\n\nScott JA: The Butterflies of North America: A Natural History and Field Guide. (Stanford University Press); 1986. Reference Source\n\nRoever K: The Butterflies of North America. (ed W. H. Howe); (Doubleday and Co.). 1975; 411–422.\n\nCong Q, Borek D, Otwinowski Z, et al.: Tiger Swallowtail Genome Reveals Mechanisms for Speciation and Caterpillar Chemical Defense. Cell Rep. 2015. pii: S2211-1247(15)00051-0. PubMed Abstract | Publisher Full Text\n\nCong Q, Borek D, Otwinowski Z, et al.: Skipper genome sheds light on unique phenotypic traits and phylogeny. BMC Genomics. 2015; 16: 639. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Grishin NV: The complete mitochondrial genome of Lerema accius and its phylogenetic implications. PeerJ. 2016; 4: e1546. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Shen J, Borek D, et al.: When COI barcodes deceive: complete genomes reveal introgression in hairstreaks. Proc Biol Sci. 2017; 284(1848): pii: 20161735. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Shen J, Borek D, et al.: Complete genomes of Hairstreak butterflies, their speciation, and nucleo-mitochondrial incongruence. Sci Rep. 2016; 6: 24863. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Shen J, Warren AD, et al.: Speciation in Cloudless Sulphurs Gleaned from Complete Genomes. Genome Biol Evol. 2016; 8(3): 915–931.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nShen J, Cong Q, Grishin NV: The complete mitochondrial genome of Papilio glaucus and its phylogenetic implications. Meta Gene. 2015; 5: 68–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShen J, Cong Q, Grishin NV: The complete mitogenome of Achalarus lyciades (Lepidoptera: Hesperiidae). Mitochondrial DNA B Resources. 2016; 1(1): 581–583. Publisher Full Text\n\nShen J, Cong Q, Kinch LN, et al.: Complete genome of Pieris rapae, a resilient alien, a cabbage pest, and a source of anti-cancer proteins [version 1; referees: 2 approved]. F1000Res. 2016; 5: 2631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang J, Cong Q, Shen J, et al.: The complete mitogenome of Euschemon rafflesia (Lepidoptera: Hesperiidae). Mitochondrial DNA B Resources. 2017; 2(1): 136–138.Publisher Full Text\n\nZhang J, Cong Q, Shen J, et al.: The complete mitochondrial genome of a skipper Burara striata (Lepidoptera: Hesperiidae). Mitochondrial DNA B Resources. 2017; in press.\n\nStallings DB, Turner JR: A review of the Megathymidae of Mexico, with a synopsis of the classification of the family. The Lepidopterists' News. 1958; 11: 113–137. Reference Source\n\nTimmermans MJTN, Viberg C, Martin G, et al.: Rapid assembly of taxonomically validated mitochondrial genomes from historical insect collections. Biol J Linn Soc. 2016; 117(1): 83–95. Publisher Full Text\n\nCao L, Wang J, James John Y, et al.: The complete mitochondrial genome of Hasora vitta (Butler, 1870) (Lepidoptera: Hesperiidae). Mitochondrial DNA A DNA Mapp Seq Anal. 2016; 27(4): 3020–3021. PubMed Abstract | Publisher Full Text\n\nHao J, Sun Q, Zhao H, et al.: The Complete Mitochondrial Genome of Ctenoptilum vasava (Lepidoptera: Hesperiidae: Pyrginae) and Its Phylogenetic Implication. Comp Funct Genomics. 2012; 2012: 328049. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim MI, Baek JY, Kim MJ, et al.: Complete nucleotide sequence and organization of the mitogenome of the red-spotted apollo butterfly, Parnassius bremeri (Lepidoptera: Papilionidae) and comparison with other lepidopteran insects. Mol Cells. 2009; 28(4): 347–363. PubMed Abstract | Publisher Full Text\n\nKim MJ, Wang AR, Park JS, et al.: Complete mitochondrial genomes of five skippers (Lepidoptera: Hesperiidae) and phylogenetic reconstruction of Lepidoptera. Gene. 2014; 549(1): 97–112. PubMed Abstract | Publisher Full Text\n\nShao L, Sun Q, Hao J: The complete mitochondrial genome of Parara guttata (Lepidoptera: Hesperiidae). Mitochondrial DNA. 2015; 26(5): 724–725. PubMed Abstract | Publisher Full Text\n\nWang AR, Jeong HC, Han YS, et al.: The complete mitochondrial genome of the mountainous duskywing, Erynnis montanus (Lepidoptera: Hesperiidae): a new gene arrangement in Lepidoptera. Mitochondrial DNA. 2014; 25(2): 93–94. PubMed Abstract | Publisher Full Text\n\nWang J, James John Y, Xuan S, et al.: The complete mitochondrial genome of the butterfly Hasora anura (Lepidoptera: Hesperiidae). Mitochondrial DNA A DNA Mapp Seq Anal. 2016; 27(6): 4401–4402. PubMed Abstract | Publisher Full Text\n\nWang K, Hao J, Zhao H: Characterization of complete mitochondrial genome of the skipper butterfly, Celaenorrhinus maculosus (Lepidoptera: Hesperiidae). Mitochondrial DNA. 2015; 26(5): 690–1. PubMed Abstract | Publisher Full Text\n\nStamatakis A: RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics. 2006; 22(21): 2688–2690. PubMed Abstract | Publisher Full Text\n\nWarren AD, Ogawa JR, Brower AVZ: Phylogenetic relationships of subfamilies and circumscription of tribes in the family Hesperiidae (Lepidoptera: Hesperioidea). Cladistics. 2008; 24(5): 642–676. 
Publisher Full Text\n\nWarren AD, Ogawa JR, Brower AVZ: Revised classification of the family Hesperiidae (Lepidoptera: Hesperioidea) based on combined molecular and morphological data. Syst Entomol. 2009; 34(3): 467–523. Publisher Full Text\n\nYuan X, Gao K, Yuan F, et al.: Phylogenetic relationships of subfamilies in the family Hesperiidae (Lepidoptera: Hesperioidea) from China. Sci Rep. 2015; 5: 11140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuemer P, Mutanen M, Sefc KM, et al.: Testing DNA barcode performance in 1000 species of European lepidoptera: large geographic distances have small genetic impacts. PLoS One. 2014; 9(12): e115774. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "21230", "date": "23 Mar 2017", "name": "John A. Shuey", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper is an interesting contribution to our understanding of the evolution of Megathymini and the phylogenetic placement of the tribe within the Hesperiidae.\n\nUsing mitogenomes of all recognized species of Megathymus, the authors confirm morphologically based species and species groups. Moreover, they identify a deep mitogenomic divergence between lineages that corresponds with the important life history traits that define the species groups.", "responses": [] }, { "id": "21272", "date": "27 Mar 2017", "name": "Jiasheng Hao", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis work determined mitogenome sequences of the nominal subspecies of all five known species of Megathymus and of Agathymus mariae, and constructed phylogenetic trees of the main Hesperiinae groups. The results confirm that Giant-Skippers (Megathymini) do not warrant a subfamily rank and that the monophyletic Megathymus
splits into two species groups. In addition, the reconstructed mitogenomic phylogeny is fully consistent with the morphological and behavioral similarities between the closely related species of the genus Megathymus. Overall, this interesting work deserves to be published and indexed.\n\nThe manuscript’s writing is good, with an appropriate title and content; the design, methods and analysis are generally correct, though some phylogenetic analyses should be more robust.\n\nIf the work considered the molecular dating of the Megathymus divergence by the molecular clock method, and incorporated an analysis of relevant Earth environmental factors, the results and their significance would be more remarkable.", "responses": [] } ]
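The molecular dating suggested in the review above can be roughed out under a strict molecular clock. The sketch below applies a commonly cited arthropod mitochondrial rate of roughly 2.3% pairwise divergence per million years to the COI divergences quoted in the article; that rate is an assumption of this illustration, not a value from the article, and a proper analysis would use calibrated Bayesian dating.

```python
# Back-of-envelope strict-clock dating.
# ASSUMPTION: ~2.3% pairwise divergence per million years is a commonly
# cited arthropod mtDNA figure, not a value taken from the article.
PAIRWISE_RATE_PER_MY = 0.023

def divergence_time_my(pairwise_divergence, rate=PAIRWISE_RATE_PER_MY):
    """Approximate split age in millions of years under a strict clock."""
    return pairwise_divergence / rate

# COI divergences quoted in the article:
print(round(divergence_time_my(0.04), 1))  # M. streckeri vs M. cofaqui: 1.7
print(round(divergence_time_my(0.09), 1))  # M. yuccae vs M. beulahae: 3.9
```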
1
https://f1000research.com/articles/6-222
https://f1000research.com/articles/6-220/v1
06 Mar 17
{ "type": "Systematic Review", "title": "Effect of citrus-based products on urine profile: A systematic review and meta-analysis", "authors": [ "Fakhri Rahman", "Ponco Birowo", "Indah S. Widyahening", "Nur Rasyid" ], "abstract": "Background. Urolithiasis is a disease with a high recurrence rate of 30-50% within 5 years. The aim of the present study was to determine the effects of citrus-based products on the urine profile in healthy persons and people with urolithiasis, compared with a control diet and with potassium citrate. Methods. A systematic review was performed, which included interventional, prospective observational and retrospective studies comparing citrus-based therapy with standard diet therapy, mineral water, or potassium citrate. A literature search was conducted using PUBMED, COCHRANE, and Google Scholar with “citrus or lemonade or orange or grapefruit or lime or juice” and “urolithiasis” as search terms. For statistical analysis, a fixed-effects model was used when the heterogeneity test gave p > 0.05, and a random-effects model when p < 0.05. Results. In total, 135 citations were found through database searching, with 10 studies found to be consistent with our selection criteria. However, only 8 studies were included in the quantitative analysis, due to data availability. The present study showed a greater increase in urine pH for citrus-based products (mean difference, 0.16; 95% CI 0.01-0.32) and in urinary citrate (mean difference, 124.49; 95% CI 80.24-168.74) compared with the control group. However, no differences were found in urine volume, urinary calcium, urinary oxalate, or urinary uric acid. From subgroup analysis, we found that citrus-based products consistently increased the urinary citrate level more than controls in both healthy and urolithiasis populations. Furthermore, there was a lower urinary calcium level among people with urolithiasis. Conclusions.
Citrus-based products can increase the urinary citrate level significantly more than controls. These results should encourage further research exploring citrus-based products as a urolithiasis treatment.", "keywords": [ "Citrus", "citrate", "potassium citrate", "urolithiasis", "urine profile" ], "content": "Introduction\n\nHumans have suffered from urinary tract stones for centuries1. The incidence and prevalence of urolithiasis differ between geographic locations, depending on age and sex distribution, stone composition and stone location2. The risk of stone development has been shown to be 5–10%, with a higher prevalence in men than women3. Urolithiasis is a common disease with significant morbidity and cost worldwide4–6. Based on the National Health and Nutrition Examination Survey, kidney stones affect 1 in 11 people in the United States, and an epidemiological increase was found in 2012 compared to 19947. Additional data from Dr. Cipto Mangunkusumo National General Hospital, Indonesia’s national referral hospital, showed an increase in stone disease prevalence from 182 patients in 1997 to 847 patients in 20028. Moreover, the burden is further worsened by a high recurrence rate, reaching 30–50% within 5 years7.\n\nCalcium-based urinary tract stones are the most common stone composition found in urolithiasis9,10. Supersaturation is believed to be the mechanism behind calcium stone formation11. One factor determining urine stone formation or stone recurrence is the urine profile, defined as urine volume and its composition. Hypercalciuria and hypocitraturia are the most common urine abnormalities found among calcium stone-formers12. A high fluid intake could prevent stone formation by lowering supersaturation, whereas citrate could prevent stone formation by binding ionized urinary calcium13,14. Citrus fruits are a food rich in citrate.
There is a wide variety of citrus fruits and derived products that can be easily obtained, such as lemonade, grapefruit, orange, lime, and citrus-based juices. Several studies have already been conducted to examine the effect of citrus-based products on the urine profile. However, the results of those studies were contradictory. Therefore, our study aimed to systematically review and quantify the available studies regarding the effects of citrus-based products on the urine profile, in comparison to a control diet and potassium citrate.\n\n\nMethods\n\nWe included both healthy people and patients with a history of urolithiasis in our selection criteria. Study subjects must have consumed citrus fruits, such as orange, lime, or grapefruit, or juices made from these fruits. Eligible study designs were interventional, prospective observational, or retrospective, with standard diet therapy (any kind of mineral water) or potassium citrate as the control therapy. We included studies with the urine profile as the outcome. We only included articles written in English or Indonesian, and those with the full text available. We excluded non-systematic review articles. We did not restrict studies by the year in which they were conducted.\n\nA literature search was conducted using PUBMED, COCHRANE, and Google Scholar in August 2016. The terms “citrus OR lemonade OR orange OR grapefruit OR lime OR juice” AND “urolithiasis” were used as search terms. We also searched the reference lists of included studies. No limits were applied during the search.\n\nAll studies were screened for duplication using EndNote X6 software. After deduplication, articles underwent title and abstract screening using the predetermined inclusion and exclusion criteria mentioned above. Studies were selected by two authors independently. Discrepancies of opinion were resolved by discussion. All studies which fulfilled the inclusion and exclusion criteria underwent full text review. 
For every eligible full text, we extracted the following data, if available: subjects’ specific condition, the citrus-based product used in the study, the number of patients consuming the citrus-based product, citrate content or concentration, the control intervention, and the number of individuals under the control intervention. For the outcomes, we extracted the following urine profile data: volume, pH, calcium level, citrate level, oxalate level, and uric acid level. The measurement units used in this study are L/day for urine volume and mg/day for urinary calcium, urinary citrate, urinary oxalate, and urinary uric acid levels. All numerical data were extracted manually as mean and standard deviation for each variable.\n\nThis study used the Cochrane Risk of Bias assessment tool15 and the Newcastle-Ottawa scale16 to assess the quality of interventional and retrospective studies, respectively. These assessments of study quality were done by two authors independently. Quantitative synthesis of included studies was performed using Review Manager (RevMan) 5.0 software, with the mean difference as the effect size measure. Heterogeneity of studies was assessed using a chi-square test. A fixed-effects model was used when p > 0.05, whereas a random-effects model was used when p < 0.05. We also conducted subgroup analyses to differentiate between healthy and urolithiasis populations.\n\nStudies which could not be included in the quantitative analysis were described qualitatively.\n\n\nResults\n\nWe found 135 citations through database searching. Searching the reference lists of included studies identified only studies that were already included. 
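The fixed- vs random-effects decision rule described in the Methods (fixed-effects pooling when the chi-square heterogeneity test gives p > 0.05, random-effects otherwise) can be sketched in code. The authors performed the actual synthesis in RevMan 5.0; the stdlib-only Python sketch below is purely illustrative, using inverse-variance pooling of mean differences, Cochran's Q for heterogeneity, and the DerSimonian-Laird estimator for the random-effects model. A small table of chi-square critical values stands in for an exact p-value; the function name and example data are hypothetical.

```python
import math

# Chi-square critical values at alpha = 0.05 for df = 1..10; for larger df
# a real chi-square survival function (e.g. scipy.stats.chi2.sf) is needed.
CHI2_CRIT = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070,
             6: 12.592, 7: 14.067, 8: 15.507, 9: 16.919, 10: 18.307}

def pool_mean_differences(studies):
    """Pool mean differences across studies.

    studies: list of (mean_t, sd_t, n_t, mean_c, sd_c, n_c) tuples.
    Returns (pooled MD, 95% CI lower, 95% CI upper, model used).
    """
    mds = [mt - mc for mt, _, _, mc, _, _ in studies]
    variances = [st**2 / nt + sc**2 / nc for _, st, nt, _, sc, nc in studies]
    w = [1.0 / v for v in variances]
    pooled = sum(wi * mi for wi, mi in zip(w, mds)) / sum(w)

    # Cochran's Q heterogeneity statistic (df = k - 1); Q above the critical
    # value corresponds to p < 0.05, triggering the random-effects model.
    q = sum(wi * (mi - pooled) ** 2 for wi, mi in zip(w, mds))
    df = len(studies) - 1
    model = "fixed"
    if 1 <= df <= 10 and q > CHI2_CRIT[df]:
        model = "random"
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)              # DerSimonian-Laird tau^2
        w = [1.0 / (v + tau2) for v in variances]  # re-weight with tau^2
        pooled = sum(wi * mi for wi, mi in zip(w, mds)) / sum(w)

    se = math.sqrt(1.0 / sum(w))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, model
```

With two homogeneous hypothetical urine-pH studies, Q stays below the critical value and the fixed-effects estimate is returned; strongly conflicting studies push Q past the critical value and switch the model to random effects.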
Ten studies were found to be consistent with our selection criteria (Figure 1).\n\nTwo of the ten studies had to be excluded from the quantitative analysis for the following reasons: (1) Penniston et al.17 only published baseline data and the maximal change following intervention; and (2) Tosukhowong et al.19 used medians as their outcome measurement, and due to the non-uniform distribution, we were unable to convert these to means. Therefore, eight studies were analyzed to find the effect of citrus-based products on the urine profile compared to controls. However, not all eight studies contributed to every urine profile outcome, due to data availability. Characteristics of the included studies and their risk of bias assessments can be seen in Table 1 and Figure 2/Supplementary Table 1, respectively.\n\nRS – retrospective study; RCT – randomized controlled trial; CBAS – controlled before-after study; CS – crossover study. *Also included in qualitative synthesis for comparison between citrus-based products and potassium citrate.\n\nThe data show that citrus-based products increased urine pH (mean difference, 0.16; 95% confidence interval [CI] 0.01-0.32) and urinary citrate (mean difference 124.49; 95% CI 80.24-168.74) to a greater extent than the control treatment (Figure 3).\n\nHowever, there was no statistically significant difference in urine volume (mean difference -0.09; 95% CI -0.20-0.02), urinary calcium (mean difference -5.45; 95% CI -18.89-7.98), urinary oxalate (mean difference 0.76; 95% CI -0.47-1.98) or urinary uric acid (mean difference 2.15; 95% CI -23.96-28.27) between the two groups (Figure 4).\n\nSubgroup analysis showed a significantly higher urinary citrate level in both the healthy population and the population with a history of urolithiasis who received citrus-based therapy compared to controls. 
However, urine pH, which showed a statistically significant increase compared to controls overall, did not demonstrate any differences in the subgroup analysis. On the other hand, urinary calcium was lower after consumption of citrus-based products compared to controls in the urolithiasis population. Furthermore, this study demonstrated a lower urine volume in the healthy population after drinking citrus-based products compared to controls (Figure 5). We did not find any differences in the other urine profile variables in either the healthy population or the population with a history of urolithiasis (Supplementary Figure 1 and Supplementary Figure 2).\n\nWe conducted a further analysis excluding Aras et al.20 from the quantitative analysis, due to its different study method (RCT). We still found a significant increase in urine citrate level in both the mixed population (mean difference 132.46; 95% CI 70.48-194.44) and the population with a history of urolithiasis (mean difference 159.22; 95% CI 47.05-271.40), as well as no statistically significant difference in urine pH (mean difference 0.16; 95% CI -0.02-0.33). Furthermore, the other variables still demonstrated similar outcomes after exclusion of Aras et al.\n\nFor the reasons stated above, we decided to discuss the comparisons between citrus-based products and potassium citrate in a qualitative manner.\n\nThree studies showed that both citrus-based products (lemon juice and lime powder) and potassium citrate increased the level of urinary citrate significantly17,19,20. Even though no significant difference in post-treatment urine profile was found between citrus-based products and potassium citrate, the post-treatment citrate level in the potassium citrate group showed a 3.5-fold increase from the pre-treatment level, compared with only 2.5-fold in the lemon juice group20. Furthermore, Penniston et al. 
exhibited a greater maximum increase in urinary citrate level with lemonade therapy combined with potassium citrate compared to lemonade therapy alone17.\n\nTwo studies also showed a significant increase in urine pH in both treatment arms17,19. However, the study by Aras et al. only exhibited a significant increase in urine pH for the potassium citrate group20. In terms of side effects, patients in the potassium citrate group suffered gastric and oropharyngeal discomfort, although they did not require drug discontinuation20. Furthermore, potassium citrate had lower compliance compared to citrus-based therapy19.\n\n\nDiscussion\n\nThis study showed that citrus-based products, such as lemonade, orange juice and grapefruit juice, can increase urinary citrate levels and urine pH. Low citrate excretion, as in type I renal tubular acidosis, is associated with an increased incidence of nephrolithiasis27. Therefore, the presence of citrate in urine is important, since it is a well-known preventive factor in calcium stone formation, with an increase in calcium salt solubility and inhibition of calcium oxalate crystal growth as its primary mechanisms. It can also reduce bone resorption and increase calcium reabsorption in the kidneys. Furthermore, citrate restores the inhibitory properties of Tamm-Horsfall protein28. Citrate and Tamm-Horsfall protein are involved in the inhibition of calcium oxalate agglomeration29. The increase in urine pH is due to the metabolism of citrate into bicarbonate13. Moreover, an increase in urine pH could reduce the reabsorption of citrate30, thereby inducing greater citrate excretion. A study conducted by Curhan et al.31 found an increased risk of stone formation associated with grapefruit juice consumption, although the exact mechanism is still unclear. One theory suggests that grapefruit juice contains sugar, which can increase calcium excretion31. 
However, the present study showed that citrus-based products could increase urinary citrate levels, which could be a protective factor against urinary tract stone formation.\n\nPotassium citrate has been used as a urolithiasis treatment for more than two decades. Its effectiveness in urolithiasis treatment has been established in several studies32,33. In a meta-analysis conducted by Phillips et al., potassium citrate significantly reduced stone size, reduced new stone formation, and increased citrate levels34. The stone prevention mechanism of potassium citrate is thought to be due to alkali loading and its citraturic effect35. In this study, potassium citrate showed a significant increase in urinary citrate and was superior to citrus-based products in elevating urinary citrate. However, the use of potassium citrate is limited by its side effects during long-term use, such as epigastric discomfort and frequent large bowel movements, and it requires the consumption of many tablets daily to reach sufficient therapeutic doses, which could dramatically decrease patient compliance36. Therefore, citrus-based products could be an alternative therapy, with lower cost and higher urinary citrate levels than control therapy.\n\nThis is the first systematic review and meta-analysis that focuses on citrus-based products and their effect on the urine profile compared to standard therapy. However, this study only searched for published articles, which could lead to publication bias. Moreover, most of the included studies were not conducted using the strongest method for interventional studies, the randomized controlled trial. 
Therefore, given the positive results this study has shown, we encourage other researchers to conduct randomized controlled trials to provide stronger evidence of the beneficial effects of citrus-based products on urinary stone disease.\n\n\nConclusions\n\nCitrus-based products increase urinary citrate and urine pH significantly compared to control treatments. Compared to standard potassium citrate therapy, there was a smaller increase in urine pH and urine citrate using citrus-based products. However, given the side effects of potassium citrate and patients’ poor compliance, citrus-based products could be an alternative therapy for preventing stone formation. This study’s results should encourage further research to explore citrus-based products as a urolithiasis treatment.\n\n\nData availability\n\nDataset 1: Characteristics of studies included for urine pH. doi, 10.5256/f1000research.10976.d15305637\n\nDataset 2: Characteristics of studies included for urinary citrate. doi, 10.5256/f1000research.10976.d15305738\n\nDataset 3: Characteristics of studies included for urine volume. doi, 10.5256/f1000research.10976.d15305839\n\nDataset 4: Characteristics of studies included for urinary calcium. doi, 10.5256/f1000research.10976.d15305940\n\nDataset 5: Characteristics of studies included for urinary oxalate. doi, 10.5256/f1000research.10976.d15306041\n\nDataset 6: Characteristics of studies included for urinary uric acid. doi, 10.5256/f1000research.10976.d15306142", "appendix": "Author contributions\n\nPB and NR developed the concept of this study. IW designed the research methodology. FR did the literature searching. FR and PB did the selection of studies. FR prepared the draft manuscript. 
All authors contributed to the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study received grants from the Directorate of Research and Community Service (DRPM), Universitas Indonesia in 2013 (2739/H.R12/HKT.05.00/2013).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Table 1. Newcastle-Ottawa scale for retrospective studies’ risk of bias assessment.\n\nSupplementary Table 2: PRISMA checklist.\n\nSupplementary Figure 1. Urine pH, urinary calcium, urinary oxalate, and urinary uric acid in the healthy subject population.\n\nSupplementary Figure 2. Urine volume, urine pH, urinary oxalate, and urinary uric acid in the population with a history of urolithiasis.\n\n\nReferences\n\nEknoyan G: History of urolithiasis. Clinical Reviews in Bone and Mineral Metabolism. 2004; 2(3): 177–85. Publisher Full Text\n\nTrinchieri A: Epidemiology of urolithiasis. Arch Ital Urol Androl. 1996; 68(4): 203–49. PubMed Abstract\n\nTürk C, Knoll T, Petrik A, et al.: Pocket Guidelines on urolithiasis. Eur Urol. 2014; 40(4): 362–71.\n\nFukuhara H, Ichiyanagi O, Kakizaki H, et al.: Clinical relevance of seasonal changes in the prevalence of ureterolithiasis in the diagnosis of renal colic. Urolithiasis. 2016; 44(6): 529–537. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuslumanoglu AY, Binbay M, Yuruk E, et al.: Updated epidemiologic study of urolithiasis in Turkey. I: Changing characteristics of urolithiasis. Urol Res. 2011; 39(4): 309–14. PubMed Abstract | Publisher Full Text\n\nEdvardsson VO, Indridason OS, Haraldsson G, et al.: Temporal trends in the incidence of kidney stone disease. Kidney Int. 
2013; 83(1): 146–52. PubMed Abstract | Publisher Full Text\n\nScales CD Jr, Smith AC, Hanley JM, et al.: Prevalence of kidney stones in the United States. Eur Urol. 2012; 62(1): 160–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIndonesia IAU: Tatalaksana Batu Saluran Kemih. 2007.\n\nSingh P, Enders FT, Vaughan LE, et al.: Stone Composition Among First-Time Symptomatic Kidney Stone Formers in the Community. Mayo Clin Proc. 2015; 90(10): 1356–65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoses R, Pais VM Jr, Ursiny M, et al.: Changes in stone composition over two decades: evaluation of over 10,000 stone analyses. Urolithiasis. 2015; 43(2): 135–9. PubMed Abstract | Publisher Full Text\n\nPark S, Pearle MS: Pathophysiology and management of calcium stones. Urol Clin North Am. 2007; 34(3): 323–34. PubMed Abstract | Publisher Full Text\n\nMaalouf N: Approach to the adult kidney stone former. Clin Rev Bone Min Metab. 2012; 10(1): 38–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTracy CR, Pearle MS: Update on the medical management of stone disease. Curr Opin Urol. 2009; 19(2): 200–4. PubMed Abstract | Publisher Full Text\n\nSiener R: Can the manipulation of urinary pH by beverages assist with the prevention of stone recurrence? Urolithiasis. 2016; 44(1): 51–6. PubMed Abstract | Publisher Full Text\n\nHiggins JP, Altman DG, Gøtzsche PC, et al.: The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011; 343: d5928. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWells GA, Shea B, O’Connell D, et al.: The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Ottawa, ON: Ottawa Hospital Research Institute; [Accessed September 1, 2016]. 2011. Reference Source\n\nPenniston KL, Steele TH, Nakada SY: Lemonade therapy increases urinary citrate and urine volumes in patients with recurrent calcium oxalate stone formation. Urology. 
2007; 70(5): 856–60. PubMed Abstract | Publisher Full Text\n\nSumorok NT, Asplin JR, Eisner BH, et al.: Effect of diet orange soda on urinary lithogenicity. Urol Res. 2012; 40(3): 237–41. PubMed Abstract | Publisher Full Text\n\nTosukhowong P, Yachantha C, Sasivongsbhakdi T, et al.: Citraturic, alkalinizing and antioxidative effects of limeade-based regimen in nephrolithiasis patients. Urol Res. 2008; 36(3–4): 149–55. PubMed Abstract | Publisher Full Text\n\nAras B, Kalfazade N, Tuğcu V, et al.: Can lemon juice be an alternative to potassium citrate in the treatment of urinary calcium stones in patients with hypocitraturia? A prospective randomized study. Urol Res. 2008; 36(6): 313–7. PubMed Abstract | Publisher Full Text\n\nGoldfarb DS, Asplin JR: Effect of grapefruit juice on urinary lithogenicity. J Urol. 2001; 166(1): 263–7. PubMed Abstract | Publisher Full Text\n\nHönow R, Laube N, Schneider A, et al.: Influence of grapefruit-, orange- and apple-juice consumption on urinary variables and risk of crystallization. Br J Nutr. 2003; 90(2): 295–300. PubMed Abstract | Publisher Full Text\n\nKoff SG, Paquette EL, Cullen J, et al.: Comparison between lemonade and potassium citrate and impact on urine pH and 24-hour urine parameters in patients with kidney stone formation. Urology. 2007; 69(6): 1013–6. PubMed Abstract | Publisher Full Text\n\nOdvina CV: Comparative value of orange juice versus lemonade in reducing stone-forming risk. Clin J Am Soc Nephrol. 2006; 1(6): 1269–74. PubMed Abstract | Publisher Full Text\n\nSeltzer MA, Low RK, McDonald M, et al.: Dietary manipulation with lemonade to treat hypocitraturic calcium nephrolithiasis. J Urol. 1996; 156(3): 907–9. PubMed Abstract | Publisher Full Text\n\nTrinchieri A, Lizzano R, Bernardini P, et al.: Effect of acute load of grapefruit juice on urinary excretion of citrate and urinary risk factors for renal stone formation. Dig Liver Dis. 2002; 34(Suppl 2): S160–3. 
PubMed Abstract | Publisher Full Text\n\nKhanniazi MK, Khanam A, Naqvi SA, et al.: Study of potassium citrate treatment of crystalluric nephrolithiasis. Biomed Pharmacother. 1993; 47(1): 25–8. PubMed Abstract | Publisher Full Text\n\nFuselier HA, Ward DM, Lindberg JS, et al.: Urinary Tamm-Horsfall protein increased after potassium citrate therapy in calcium stone formers. Urology. 1995; 45(6): 942–6. PubMed Abstract | Publisher Full Text\n\nLaube N, Jansen B, Hesse A: Citric acid or citrates in urine: which should we focus on in the prevention of calcium oxalate crystals and stones? Urol Res. 2002; 30(5): 336–41. PubMed Abstract | Publisher Full Text\n\nHeilberg IP, Goldfarb DS: Optimum nutrition for kidney stone disease. Adv Chronic Kidney Dis. 2013; 20(2): 165–74. PubMed Abstract | Publisher Full Text\n\nCurhan GC, Willett WC, Rimm EB, et al.: Prospective study of beverage use and the risk of kidney stones. Am J Epidemiol. 1996; 143(3): 240–7. PubMed Abstract | Publisher Full Text\n\nRobinson MR, Leitao VA, Haleblian GE, et al.: Impact of long-term potassium citrate therapy on urinary profiles and recurrent stone formation. J Urol. 2009; 181(3): 1145–50. PubMed Abstract | Publisher Full Text\n\nAllie-Hamdulay S, Rodgers AL: Prophylactic and therapeutic properties of a sodium citrate preparation in the management of calcium oxalate urolithiasis: randomized, placebo-controlled trial. Urol Res. 2005; 33(2): 116–24. PubMed Abstract | Publisher Full Text\n\nPhillips R, Hanchanale VS, Myatt A, et al.: Citrate salts for preventing and treating calcium containing kidney stones in adults. Cochrane Database Syst Rev. 2015; 10(10): CD010057. PubMed Abstract | Publisher Full Text\n\nEttinger B, Pak CY, Citron JT, et al.: Potassium-magnesium citrate is an effective prophylaxis against recurrent calcium oxalate nephrolithiasis. J Urol. 1997; 158(6): 2069–73. 
PubMed Abstract | Publisher Full Text\n\nLee YH, Huang WC, Tsai JY, et al.: The efficacy of potassium citrate based medical prophylaxis for preventing upper urinary tract calculi: a midterm followup study. J Urol. 1999; 161(5): 1453–7. PubMed Abstract | Publisher Full Text\n\nRahman F, Birowo P, Widyahening IS, et al.: Dataset 1 in: Effect of citrus-based products on urine profile: A systematic review and meta-analysis. F1000Research. 2017. Data Source\n\nRahman F, Birowo P, Widyahening IS, et al.: Dataset 2 in: Effect of citrus-based products on urine profile: A systematic review and meta-analysis. F1000Research. 2017. Data Source\n\nRahman F, Birowo P, Widyahening IS, et al.: Dataset 3 in: Effect of citrus-based products on urine profile: A systematic review and meta-analysis. F1000Research. 2017. Data Source\n\nRahman F, Birowo P, Widyahening IS, et al.: Dataset 4 in: Effect of citrus-based products on urine profile: A systematic review and meta-analysis. F1000Research. 2017. Data Source\n\nRahman F, Birowo P, Widyahening IS, et al.: Dataset 5 in: Effect of citrus-based products on urine profile: A systematic review and meta-analysis. F1000Research. 2017. Data Source\n\nRahman F, Birowo P, Widyahening IS, et al.: Dataset 6 in: Effect of citrus-based products on urine profile: A systematic review and meta-analysis. F1000Research. 2017. Data Source" }
[ { "id": "21950", "date": "02 May 2017", "name": "David S Goldfarb", "expertise": [ "Reviewer Expertise Nephrolithiasis", "kidney stones", "calculi", "renal", "chronic kidney disease", "end stage renal disease", "electrolytes and acid-base balance", "gout" ], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\n1. The authors are correct in stating that no previous meta-analysis of the effects of citrus fruits has been performed. The results are not surprising as stone clinicians consider citrus supplementation (or potassium citrate) of important utility. However, the meta-analysis is reasonable to perform as the intervention is commonly administered. 2.The studies are similar enough to consider a meta-analysis worth doing. One study used diet orange soda which is not citrus juice (I am the senior author of that study). There is otherwise an appropriate selection of studies, based on availability of the requisite data which the authors explain in detail.\n\n3. The limitations of the data include the relatively small sample size, so that the meta-analysis is also underpowered but shows the expected increase in urine citrate and the increase in urine pH only in stone formers and not in non-stone forming controls. No studies actually assess stone formation as an outcome, addressing only urinary chemistry.  4. The conclusions appear reasonable, and are well-stated, if not surprising.  5. Table 1 could be improved by including the dose of juice for all the interventions. 6. 
I did not find a legend for figure 2 about bias assessment. The figure is hazy, not of perfect resolution. There is no interpretation of the bias assessment in the manuscript. It is worth noting that it is probably not possible to blind participants to citrus juice vs water. The other criteria, such as blinding to sequence allocation, may also not be critical to a study of urine chemistry rather than of kidney stone outcomes. 7. Regarding Curhan’s finding that grapefruit juice was associated with higher risk for stones, mentioned in the discussion, that finding was not confirmed in a later study1. 8. In the introduction, the authors state “whereas citrate could prevent stone formation by ionizing urinary calcium”: this is not quite correct. Citrate binds to ionic calcium and prevents it from binding to oxalate or phosphate.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Yes\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Yes\n\nIs the statistical analysis and its interpretation appropriate? Yes\n\nAre the conclusions drawn adequately supported by the results presented in the review? Yes", "responses": [] }, { "id": "22426", "date": "08 May 2017", "name": "Bhaskar Kumar Somani", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors present a comprehensive systematic review and meta-analysis of the effect of citrus-based products on urine profile. 
The paper is supplemented by the PRISMA flow chart and forest plots to present the results.\nAlthough their search strategy is good, they should have used other terms such as 'kidney stones', 'stones', 'ureteric stones' and 'calculi' too. They only used the term 'urolithiasis', which could potentially miss other relevant studies.\nAs with any systematic review, the authors are correct in acknowledging that there is likely to be a publication bias. Similarly, the long-term effect of the use of citrate-based products is not known from this study, and whether the results translate into a reduction in stone recurrences remains unknown.\n\nA recent Cochrane review on the use of citrate salts for prevention and treatment of calcium containing kidney stones in adults showed a reduction in new stone formation and stone recurrences in these patients1.\nOverall the paper reads well and is a nice summary of the use of citrate-based products on urine profile.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Yes\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Yes\n\nIs the statistical analysis and its interpretation appropriate? Yes\n\nAre the conclusions drawn adequately supported by the results presented in the review? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-220
https://f1000research.com/articles/5-1385/v1
15 Jun 16
{ "type": "Research Note", "title": "Finger stick blood collection for gene expression profiling and storage of tempus blood RNA tubes", "authors": [ "Darawan Rinchai", "Esperanza Anguiano", "Phuong Nguyen", "Damien Chaussabel", "Esperanza Anguiano", "Phuong Nguyen", "Damien Chaussabel" ], "abstract": "With this report we aim to make available a standard operating procedure (SOP) developed for RNA stabilization of small blood volumes collected via a finger stick. The anticipation that this procedure may be improved through peer-review and/or readers public comments is another element motivating the publication of this SOP. Procuring blood samples from human subjects can, among other uses, enable assessment of the immune status of an individual subject via the profiling of RNA abundance using technologies such as real time PCR, NanoString, microarrays or RNA-sequencing. It is often desirable to minimize blood volumes and employ methods that are the least invasive and can be practically implemented outside of clinical settings. Finger-stick blood samples are increasingly used for measurement of levels of pharmacological drugs and biological analytes. It is a simple and convenient procedure amenable for instance to field use or self-collection at home using a blood sample collection kit. Such methodologies should also enable the procurement of blood samples at high frequency for health or disease monitoring applications.", "keywords": [ "Blood collection", "Fingerstick", "Gene expression", "RNA", "Tempus", "Transcriptome" ], "content": "Introduction\n\nUse of sample collection methods that are least invasive and that can be practically implemented outside of clinical settings, Finger-stick blood collection is used for a wide range of applications in routine clinical practice. 
It is, for instance, by this means that millions of individuals collect small blood volumes daily to monitor blood sugar levels.\n\nMore recently, the availability of high-throughput profiling technologies has made it possible to measure simultaneously the abundance of tens of thousands of analytes. For instance, transcriptome profiling, which measures the abundance of RNA on a genome-wide scale, has become a mainstay in biomedical research settings1–6. This approach can be implemented through the use of technologies such as microarrays and, more recently, RNA-sequencing. Robust and more cost-effective “meso-scale” profiling technologies, relying for instance on PCR or NanoString probes, can profile the abundance of hundreds of genes7. Blood transcriptome profiling has proven useful in generating high-resolution molecular phenotypes: to investigate the pathogenesis of a wide range of diseases8–11; to develop biomarker signatures12–15; and to assess response to vaccines or therapies7,16–20. More recently, an approach consisting of correlating serial blood transcriptome signatures with the clinical course of disease was described as a means to guide development and selection of novel therapeutic modalities in patients with systemic lupus erythematosus1,21.\n\nTranscriptome profiling studies initially employed peripheral blood mononuclear cells (PBMCs)11,22,23. PBMCs are isolated by fractionation and are enriched in blood leukocytes. PBMCs are also a type of sample from which high-quality RNA can be reliably obtained, which at the time was not the case for whole blood. However, the PBMC preparation procedure involves multiple steps, and important variations are introduced between the time of blood draw and preparation of the cell lysates24. Furthermore, it is a time-consuming process that requires trained personnel and equipment and is not straightforward to implement in most clinical settings. Whole blood RNA stabilization systems have been adopted as they became available and are now widely used. 
However, the vast majority of the studies carried out to date use relatively large volumes of venous blood12,13,19,20,25. Collection of small volumes of blood via finger sticks is especially indicated for high-frequency sample collection to enable monitoring of the immune status of individuals in health and disease. The advantages of this collection modality stem from the fact that it is less invasive, faster, and does not require a trained phlebotomist. Therefore, it is more amenable to field applications and in-home self-collection for proximity testing. A study by Obermoser et al. employed this collection method to investigate transcriptome responses elicited by influenza and pneumococcal vaccines at 8 different time points in the 48 hours following vaccine administration16. A methods development article has also been published by Robinson et al., demonstrating that RNA quality and gene expression data obtained from blood collected via finger stick (70 μL) and venipuncture (2.5 mL) are highly comparable26.\n\nWith this report, we aim to share our standard operating procedure for stabilization of RNA from 50 μl of blood collected via a finger stick. This SOP will be used specifically in a pregnancy monitoring study that will be conducted on the Thai-Myanmar border. This study will consist of measuring changes in blood transcript abundance in 400 women during the second and third trimesters of their pregnancy. A complete description of this study will be provided elsewhere.\n\nThe procedure described in this article can be employed for serial blood collection in clinical or research laboratory settings, as well as for in-home self-collection. A narrative is provided here, along with general remarks and considerations. A detailed point-by-point SOP follows.\n\nNarrative: Tempus RNA tubes are designed for the collection of 3 ml of blood via venipuncture and contain 6 ml of a proprietary RNA stabilizing reagent. 
For the collection of 50 μl blood samples, 100 μl of the RNA stabilizing reagent is aliquoted into microfuge tubes. Blood is collected with a plastic capillary straw. Immediately after collection, the tube is shaken vigorously to disrupt the blood cells. Lysis of blood cells occurs upon thoroughly mixing the blood drawn into the tube with the stabilizing reagent. Furthermore, RNases are inactivated and the freed RNA is selectively precipitated and thus further protected from degradation. Effective stabilization of the RNA ensures that the transcriptional profile is maintained and will accurately reflect the physiological state of the patient at the time of the blood draw. RNA properly collected in Tempus solution and stored at -20°C or -80°C will remain stable for a minimum of 6 years27.

General remarks: After over 10 years of use across a wide range of clinical settings, RNA stabilization using Tempus solution has in our hands proven robust and reliable. There are nonetheless a few things that we have learned that are worth sharing:

1) Finger stick: The finger is usually the preferred site for capillary testing in an adult patient. When samples are collected serially, it is recommended to choose a different finger from the one used for the last procedure to prevent bruising. The sides of the heel are only used in pediatric and neonatal patients. The guidance given in Section 7.1 of the WHO guidelines on drawing blood: best practices in phlebotomy, can help decide whether to use a finger or heel stick, and with the selection of an appropriately sized lancet28.

2) Blood volumes: The volume can be adjusted depending on the application. The typical yield from 50 μl of blood is about 500 ng of total RNA. Procedures for RNA extraction and quality control will be shared in a separate publication (Anguiano E., Rinchai D., Tomei S., Chaussabel D., unpublished report).
A study was conducted in which as little as 15 μl of blood was collected, which was sufficient to run a high-throughput Fluidigm PCR assay (Speake C., Whalen E., Gersuk V., Chaussabel D., Odegard JM., and Greenbaum CJ., unpublished report). Such small blood volumes can also be obtained serially from mice, which allows longitudinal monitoring of individual animals. In human studies, instead of using a capillary straw, small blood volumes can also be collected and measured with a micropipette. The blood is then placed into the microfuge tube containing the Tempus solution. This can be done when collecting small volumes of blood from a finger stick or when obtaining a small aliquot of blood from a larger venous blood draw.

3) Volume of RNA stabilization solution: The appropriate [blood : RNA stabilizing reagent] ratio is 1 volume of blood to 2 volumes of Tempus solution (in our case, 50 μl of blood in 100 μl of RNA stabilizing reagent). A loss in RNA quality and quantity will be observed if this ratio is not respected; collecting more blood will actually result in decreased yields and RNA quality. In cases where the amount of blood collected is lower, the volume of Tempus solution can be adjusted accordingly when feasible. The same ratio can be used when working with mouse blood collected from the tail vein using a similar procedure (as mentioned above, blood volumes can be lowered to 15 μl). The volumetric ratio is usually lower when working with non-human primate species (e.g. 1:3, 1:4) and should be determined on a species-by-species basis (a 1:3 ratio is used when collecting blood from macaques29).

4) Sample mixing: After maintaining an appropriate blood:Tempus solution ratio, this is the second most critical aspect of the procedure, and a potential cause of sample failure. As mentioned above, samples must be homogenized by thorough mixing in order to disrupt cells and release their RNA cargo.
The RNA will precipitate in the Tempus solution and in this form is protected from degradation by the RNases that are present in the sample.

5) Temperature: RNA should remain in a precipitated state at “room temperature”. Although refrigeration and freezing at the earliest possible time are recommended, based on our observations keeping the blood lysates at room temperature (25°C) for up to 24 hours should not affect RNA quality. Samples can be stored at 4°C (refrigerator or cold packs) for up to 48 hours, which can simplify the logistics associated with temporary storage, transfer and shipping of samples post-collection. Based on information provided by the manufacturer, RNA should remain in a precipitated state as long as temperatures remain below 30°C. It may therefore be necessary to take precautions when working in warm climates.

6) Storage and shipping: By default, samples are stored in the lab at -20°C. We have observed that the RNA yield for samples stored at -80°C is generally about half the yield of the same blood samples stored at -20°C. Furthermore, we have observed that the plastic that Tempus tubes are made of becomes brittle at temperatures lower than -20°C. Shipments are made on dry ice, although for overnight shipping in cooler climates ice packs should be sufficient (testing with mock samples is nonetheless recommended). When shipping “off the shelf” Tempus tubes, direct contact with dry ice should be avoided to prevent breakage. When shipping on dry ice, the thickness of the walls of the polystyrene container holding the tubes along with the dry ice matters: the thinner the walls, the faster the shipment will run out of dry ice. This is especially important to consider when contemplating longer transit times and/or warm weather conditions.
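The volume ratios in remark 3 and the holding-time limits in remarks 5 and 6 can be summarized in a short planning helper. This is an illustrative sketch only; the function and parameter names are ours and are not part of the published SOP.

```python
# Illustrative helper encoding the blood:reagent ratios (General Remark 3)
# and post-collection holding limits (General Remarks 5 and 6) of this SOP.

SPECIES_RATIO = {
    "human": 2,    # 1 volume blood : 2 volumes Tempus solution
    "mouse": 2,    # same ratio for tail-vein collection in mice
    "macaque": 3,  # 1:3 for macaques; determine per species for other NHPs
}

def reagent_volume_ul(blood_ul, species="human"):
    """Volume of Tempus stabilizing reagent (ul) to pre-aliquot for a draw."""
    if species not in SPECIES_RATIO:
        raise ValueError(f"ratio not established for species: {species}")
    return blood_ul * SPECIES_RATIO[species]

def holding_ok(temp_c, hours):
    """Check a temporary holding condition against the SOP limits:
    up to 24 h at room temperature (~25 C), up to 48 h at 4 C,
    and never at or above 30 C (RNA may leave its precipitated state)."""
    if temp_c >= 30:
        return False
    if temp_c <= 4:
        return hours <= 48   # refrigerator or cold packs
    return hours <= 24       # room temperature

# 50 ul finger stick: 100 ul of reagent pre-aliquoted in the microfuge tube
assert reagent_volume_ul(50) == 100
```

A 15 μl draw would, by the same rule, call for 30 μl of reagent, matching the adjustment described above.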
Regarding biosafety, we have found the Tempus solution to prevent growth of bacteria known for their resilience, such as Burkholderia pseudomallei (Rinchai D, unpublished report), and thus conclude that the threat of contamination via Tempus blood lysates is likely to be low. However, appropriate testing should be carried out on a case-by-case basis, and all procedures in the laboratory involving Tempus lysates should be consistent with standard blood handling procedures.

Materials and methods

Tempus™ blood RNA tubes (Thermo Fisher Scientific, Waltham, MA, USA; Product number 4342792; https://www.thermofisher.com/order/catalog/product/4342792)

Capillary blood and tube assembly, untreated, 50 μl, thin design (Kabe Labortechnik, Nümbrecht, Germany; Product number GK100, http://www.kabe-labortechnik.de/download/kapillarblut_en.pdf)30,31

Sterile blood lancets

Alcohol pads

Biohazard container

Lab coat

Gloves

Ziploc-type biohazard bag and freezer box

Adhesive bandages

Sample collection tube labels

Precautions

Personal protective equipment must be worn to prevent accidental exposure to blood and bloodborne pathogens [http://www.cdc.gov/niosh/topics/emres/ppe.html].

Discard all blood collection materials and “sharps” in properly labeled biohazard containers approved for their disposal.

Check that the liquid preservatives and anticoagulants in the collection tubes are clear and colorless. Do not use any tubes if they are discolored or contain precipitates.

Procedures

The procedures below are illustrated in Figure 1, and a demonstration video is available here: https://www.youtube.com/watch?v=NjY-OqjrzbY

Figure 1 illustrates the different steps involved in capillary blood sampling via finger stick.

1.
Assemble equipment and supplies, then complete the Fingerstick Information Log by recording relevant information about the blood collection, such as patient name, patient identity number (patient ID), date of blood collection, and blood collection time point (Day 1, Day 7... Day 90,…). Double-check that the label on the collection tube matches the patient ID.

2. Put on well-fitting gloves.

3. Choose one of the subject’s fingers from which blood will be collected. The middle or the ring finger is the best choice for finger stick collection. Avoid the thumb and pinkie finger, fingers with thick calluses, fingers that are injured or swollen, and fingers with tight rings, as these may constrict blood flow.

4. Prepare the puncture site by warming the area. If the subject is particularly cold, have the subject wash their hands under warm water to stimulate blood flow. In addition, it may be necessary to warm the area with a moist towel for five to ten minutes.

5. Wipe the fingertip with the alcohol pad and allow it to air-dry completely without blowing on it or wiping off the alcohol.

6. To stimulate blood flow, you may shake or gently knead the subject’s hand from palm to fingertip. Blood will also flow better if the hand is kept lower, approximately at the level of the subject’s waist.

7. Hold the finger and press the lancet firmly against the side of the center of the finger, with the lancet oriented perpendicular to the fingerprint grooves.

8. Discard the lancet in an appropriate container.

9. Release pressure and allow a full drop of blood to collect on the finger. If necessary, gently knead the palm only to stimulate blood flow.

10. Wipe away the first drop of blood with a sterile gauze pad because it may be contaminated with tissue fluid or debris (sloughing skin).

11. Collect the blood sample into the capillary tube.

a. Hold the capillary and micro-tube assembly horizontally, and touch the tip of the capillary to the blood drop.

b.
The blood will be pulled into the tube via capillary action.

c. Be sure to allow the capillary to fill end-to-end to ensure collection of an accurate blood volume.

d. To expel the sample from the capillary, place the capillary and micro-tube assembly vertically and firmly tap the bottom of the tube. Remove the capillary tube together with the cap assembly system and discard it in the appropriate biohazard container.

e. It is important to maintain the appropriate blood sample to Tempus solution ratio. A volume of 50 μl of blood should be added to the 100 μl of Tempus solution. If necessary, the volume of solution can be adjusted to the available or desired volume of blood; e.g. for 15 μl of blood use 30 μl of Tempus solution.

f. Close the micro-tubes, making sure that the cap is pressed down firmly to avoid any spillage during sample homogenization.

g. To prevent clotting, blood samples should be collected within 30 seconds of performing the finger stick. Clotted samples will not be usable.

12. Have the subject apply pressure to the puncture site using a sterile gauze pad until the bleeding has stopped, and apply a bandage. Do not use the alcohol pad, as contact of an open wound with alcohol would be painful for the subject.

13. Mix the blood sample and preservative thoroughly by holding the top of the tube between the thumb and index finger of one hand and flicking the tube vigorously for 20 seconds with the index finger of the other hand (Figure 1).

14. If not already in place, stick the pre-printed label with the sample information on the sample tube.

15. Place the sample tube in an appropriate container (e.g. freezer box).

16. The sample should be kept cold at 4°C and transferred to a -20°C freezer as soon as possible for long-term storage. Note that RNA integrity is preserved when samples are kept at “room temperature” for a few hours as long as the temperature does not rise above 30°C.

17.
For local transportation, samples can be kept in a freezer box containing ice or ice packs. For international shipping, samples can be kept on dry ice in a freezer box.

Author contributions

DR: participated in the development and testing of the protocol and in the shooting of the demonstration video; drafted the manuscript. EA: participated in the development and testing of the protocol; edited the manuscript. PN: participated in testing the protocol; reviewed the manuscript. DC: participated in the development of the protocol and the shooting of the demonstration video; assisted with the drafting of the manuscript.

Competing interests

No competing interests were declared.

Grant information

DR and DC received support from the Qatar Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Acknowledgements

We would like to thank Dr. David Furman for his participation during the shooting of the demonstration video, Dr. Sara Tomei for her help with obtaining the necessary supplies, and Benaroya Research Institute members (DCRP, Genomics and bioinformatics teams) for their input.

References

1. Banchereau R, Hong S, Cantarel B, et al.: Personalized Immunomonitoring Uncovers Molecular Networks that Stratify Lupus Patients. Cell. 2016; 165(3): 551–65.
2. Chaussabel D, Pulendran B: A vision and a prescription for big data-enabled medicine. Nat Immunol. 2015; 16(5): 435–9.
3. Joshi AD, Andersson C, Buch S, et al.: Four Susceptibility Loci for Gallstone Disease Identified in a Meta-analysis of Genome-wide Association Studies. Gastroenterology. 2016; pii: S0016-5085(16)30110-X.
4. Linsley PS, Chaussabel D, Speake C: The Relationship of Immune Cell Signatures to Patient Survival Varies within and between Tumor Types. PLoS One. 2015; 10(9): e0138726.
5. Linsley PS, Speake C, Whalen E, et al.: Copy number loss of the interferon gene cluster in melanomas is linked to reduced T cell infiltrate and poor patient prognosis. PLoS One. 2014; 9(10): e109760.
6. Furman D, Davis MM: New approaches to understanding the immune response to vaccination and infection. Vaccine. 2015; 33(40): 5271–81.
7. Nielsen T, Wallden B, Schaper C, et al.: Analytical validation of the PAM50-based Prosigna Breast Cancer Prognostic Gene Signature Assay and nCounter Analysis System using formalin-fixed paraffin-embedded breast tumor specimens. BMC Cancer. 2014; 14: 177.
8. Ramilo O, Allman W, Chung W, et al.: Gene expression patterns in blood leukocytes discriminate patients with acute infections. Blood. 2007; 109(5): 2066–77.
9. Bennett L, Palucka AK, Arce E, et al.: Interferon and granulopoiesis signatures in systemic lupus erythematosus blood. J Exp Med. 2003; 197(6): 711–23.
10. Pascual V, Chaussabel D, Banchereau J: A genomic approach to human autoimmune diseases. Annu Rev Immunol. 2010; 28: 535–71.
11. Tang BM, McLean AS, Dawes IW, et al.: Gene-expression profiling of peripheral blood mononuclear cells in sepsis. Crit Care Med. 2009; 37(3): 882–8.
12. Mejias A, Dimo B, Suarez NM, et al.: Whole blood gene expression profiles to assess pathogenesis and disease severity in infants with respiratory syncytial virus infection. PLoS Med. 2013; 10(11): e1001549.
13. Berry MP, Graham CM, McNab FW, et al.: An interferon-inducible neutrophil-driven blood transcriptional signature in human tuberculosis. Nature. 2010; 466(7309): 973–7.
14. Martinez-Llordella M, Lozano JJ, Puig-Pey I, et al.: Using transcriptional profiling to develop a diagnostic test of operational tolerance in liver transplant recipients. J Clin Invest. 2008; 118(8): 2845–57.
15. Newell KA, Asare A, Kirk AD, et al.: Identification of a B cell signature associated with renal transplant tolerance in humans. J Clin Invest. 2010; 120(6): 1836–47.
16. Obermoser G, Presnell S, Domico K, et al.: Systems scale interactive exploration reveals quantitative and qualitative differences in response to influenza and pneumococcal vaccines. Immunity. 2013; 38(4): 831–44.
17. Gaucher D, Therrien R, Kettaf N, et al.: Yellow fever vaccine induces integrated multilineage and polyfunctional immune responses. J Exp Med. 2008; 205(13): 3119–31.
18. Hecker M, Hartmann C, Kandulski O, et al.: Interferon-beta therapy in multiple sclerosis: the short-term and long-term effects on the patients' individual gene expression in peripheral blood. Mol Neurobiol. 2013; 48(3): 737–56.
19. Oswald M, Curran ME, Lamberth SL, et al.: Modular analysis of peripheral blood gene expression in rheumatoid arthritis captures reproducible gene expression changes in tumor necrosis factor responders. Arthritis Rheumatol. 2015; 67(2): 344–51.
20. Querec TD, Akondy RS, Lee EK, et al.: Systems biology approach predicts immunogenicity of the yellow fever vaccine in humans. Nat Immunol. 2009; 10(1): 116–25.
21. Jourde-Chiche N, Chiche L, Chaussabel D: Introducing a New Dimension to Molecular Disease Classifications. Trends Mol Med. 2016; 22(6): 451–53, pii: S1471-4914(16)30009-0.
22. Kaizer EC, Glaser CL, Chaussabel D, et al.: Gene expression in peripheral blood mononuclear cells from children with diabetes. J Clin Endocrinol Metab. 2007; 92(9): 3705–11.
23. Allantaz F, Chaussabel D, Stichweh D, et al.: Blood leukocyte microarrays to diagnose systemic onset juvenile idiopathic arthritis and follow the response to IL-1 blockade. J Exp Med. 2007; 204(9): 2131–44.
24. Debey-Pascher S, Eggle D, Schultze JL: RNA stabilization of peripheral blood and profiling by bead chip analysis. Methods Mol Biol. 2009; 496: 175–210.
25. Pankla R, Buddhisa S, Berry M, et al.: Genomic transcriptional profiling identifies a candidate blood biomarker signature for the diagnosis of septicemic melioidosis. Genome Biol. 2009; 10(11): R127.
26. Robison EH, Mondala TS, Williams AR, et al.: Whole genome transcript profiling from fingerstick blood samples: a comparison and feasibility study. BMC Genomics. 2009; 10: 617.
27. Duale N, Lipkin WI, Briese T, et al.: Long-term storage of blood RNA collected in RNA stabilizing Tempus tubes in a large biobank--evaluation of RNA quality and stability. BMC Res Notes. 2014; 7: 633.
28. WHO Guidelines Approved by the Guidelines Review Committee: WHO Guidelines on Drawing Blood: Best Practices in Phlebotomy. Geneva, 2010.
29. Skinner JA, Zurawski SM, Sugimoto C, et al.: Immunologic characterization of a rhesus macaque H1N1 challenge model for candidate influenza virus vaccine assessment. Clin Vaccine Immunol. 2014; 21(12): 1668–80.
30. RAM Scientific: SAFE-T-FILL® Capillary Blood Collection Tubes.
31. KABE Labortechnik: The capillary blood collection sets.
Referee report — Angela E Vinturache, 29 Sep 2016
Status: Approved With Reservations

Thank you for the opportunity to review the manuscript “Finger stick blood collection for gene expression profiling and storage of tempus blood RNA tubes” by Rinchai D, Anguiano E, Nguyen P and Chaussabel D. In this paper, the authors are proposing an SOP for finger stick blood collection for RNA profiling. While the objective of this manuscript is worthwhile for the scientific world in proposing standardized lab methods, the paper would benefit from some improvement.

There is a disconnect between the title and the objective of the paper. Is the focus on the finger stick collection or on the Tempus tube storage? This should be revised, especially considering that the authors intend to publish the RNA extraction in a separate paper.

Introduction: The first sentence is unclear. The introduction could be shortened considerably. A more concise and to-the-point argument on why we need this method, its advantages and disadvantages, can be made. Also, the authors should add supporting reasons for why their method should be published and how it is different from the others. The authors mention that the method will be used in the future. Is this to be understood as meaning they have no experience with the proposed method? Why is it important to mention where, and in what study, this method will be used? Do the authors have any experience with non-pregnant patients?
Does pregnancy bear any weight in sample collection and transportation?

Narrative of the procedure: The figure is very useful for following the procedure step-by-step. Considering that a follow-up paper proposed by the authors will discuss RNA extraction, all the practical points shared under 'General Remarks' could be moved to the respective manuscript. Since this paper proposes to discuss only the finger stick procedure, the discussion about the volumes of blood drawn is exhaustive and unwarranted for this particular manuscript. However, it is not clear if the tubes for the blood collection are the same as the tubes pre-prepared with the volume of RNA stabiliser for 50 μl of blood. Please provide some practical explanation of the procedure when the capillary straw is not filled (i.e. how to measure the blood volume). What type of capillary is to be used?

Precautions: There is no mention of the possibility of sample contamination. For points 3 and 5, I suggest the authors add the reasoning for the selected choices in addition to the drawbacks of the other possibilities. Eliminate repetitions about blood flow and the blood-to-solution ratio. Point 16 is quite vague. As the matter is essential for RNA quality, I invite the authors to expand on the topic and share their experience, with numbers rather than “as soon as possible”. How long could those samples be kept during transportation? Some information on the authors’ experience with the bio-repository would be very welcome. The authors mention somewhere using the method for in-home self-collection. Have they experimented with that? It would be particularly interesting if they would provide details about how they plan quality assurance for these samples.
Also, any particularities of this method of collection should be explained.

Author response — Darawan Rinchai, 03 Mar 2017

We thank the reviewer for the precious time spent reviewing our manuscript. Please see our point-by-point response below.

Reviewer: There is a disconnect between the title and the objective of the paper. Is the focus on the finger stick collection or on the Tempus tube storage? This should be revised, especially considering that the authors intend to publish the RNA extraction in a separate paper. Introduction: The first sentence is unclear. The introduction could be shortened considerably. A more concise and to-the-point argument on why we need this method, its advantages and disadvantages, can be made. Also, the authors should add supporting reasons for why their method should be published and how it is different from the others. The authors mention that the method will be used in the future. Is this to be understood as meaning they have no experience with the proposed method? Why is it important to mention where, and in what study, this method will be used? Do the authors have any experience with non-pregnant patients? Does pregnancy bear any weight in sample collection and transportation?

Authors: The introduction provides the rationale for adopting such a sample collection procedure. The benefits of this approach are mentioned in the third paragraph of the Introduction. We could not find a major disadvantage of this method, except that RNA quality and quantity may be affected if the protocol were adapted to collect less than 15 μl of blood.
As per the reviewer’s suggestion: 1) We have edited the first paragraph to make it clearer and more concise. 2) A detailed standard operating procedure has not been provided elsewhere before, which is the reason why we are publishing this report (and to gather feedback from other users/reviewers). We first used this approach 6 years ago, with ensuing publications (Obermoser et al., 2013), and more recently in a study in which samples were collected from diabetics and control subjects at home on a weekly basis (Speake C., et al., Clin Exp Immunol 2016); however, those details have not been published before. The intent was to publish a ready-to-use SOP that could be readily incorporated into a clinical protocol. We have also added a sentence to clarify these points. 3) Pregnancy has no particular bearing on this procedure; it is only mentioned here because the paper is published as part of our “molecular profiling of pregnancy” channel. We have added more information to this sentence: “The standard operating procedure that we are sharing with this report will be published as part of our molecular profiling of pregnancy channel. Indeed, it was specifically developed for collection and stabilization of 50 μl of blood collected via a finger stick in a pregnancy monitoring study currently being conducted on the Thai-Myanmar border.”

Reviewer: Narrative of the procedure: The figure is very useful for following the procedure step-by-step. Considering that a follow-up paper proposed by the authors will discuss RNA extraction, all the practical points shared under 'General Remarks' could be moved to the respective manuscript. Since this paper proposes to discuss only the finger stick procedure, the discussion about the volumes of blood drawn is exhaustive and unwarranted for this particular manuscript. However, it is not clear if the tubes for the blood collection are the same as the tubes pre-prepared with the volume of RNA stabiliser for 50 μl of blood.
Please provide some practical explanation of the procedure when the capillary straw is not filled (i.e. how to measure the blood volume). What type of capillary is to be used?

Authors: The discussion of volume is extensive and warranted, as this is in our experience one of the main causes of failure during the extraction step. We now provide a procedure detailing the preparation of the sample collection tubes (please see “Materials and methods”); the capillary needs to be filled in order to collect the appropriate volume. Alternative methods, such as the use of micropipettes, are possible if preferred; other products are also available and could be tested for that purpose. We describe the method that in our hands produces the best results under our particular conditions.

Reviewer: Precautions: There is no mention of the possibility of sample contamination.

Authors: The Tempus™ blood RNA stabilizing reagent immediately lyses whole blood and stabilizes RNA in a single step. We have tested lysates in the solution by bacterial culture and found that the samples were still sterile after blood collection; we have therefore never found contamination after blood collection. The only possible source of contamination is the sterility/hygiene of the blood collection itself, and personal protective equipment (PPE) is key to sterile sample collection. We have therefore added more detail on sample contamination to point 1 of “Precautions”: “Personal protective equipment (PPE) must be worn to prevent accidental exposure to blood and bloodborne pathogens, and to help reduce contamination during sample collection [http://www.cdc.gov/niosh/topics/emres/ppe.html].”

Reviewer: For points 3 and 5, I suggest the authors add the reasoning for the selected choices in addition to the drawbacks of the other possibilities. Eliminate repetitions about blood flow and the blood-to-solution ratio.

Authors: Ideally, the finger chosen for finger stick blood collection is a finger of the non-dominant hand, which is generally less calloused.
We also need to choose fingers that are less painful for subjects. Therefore, the best choices are the middle and ring fingers. We do not recommend the thumb, which may be calloused and has a pulse, or the pinkie or index finger, which are often calloused and potentially more sensitive to pain due to additional nerve endings. We have added this information to the manuscript.

Reviewer: Point 16 is quite vague. As the matter is essential for RNA quality, I invite the authors to expand on the topic and share their experience, with numbers rather than “as soon as possible”. How long could those samples be kept during transportation? Some information on the authors’ experience with the bio-repository would be very welcome.

Authors: Following the reviewer’s comment, we have edited point 16: “The sample should be kept cold at 4°C no longer than 48 hours and transferred to a -20°C or -80°C freezer for long-term storage.”

Reviewer: The authors mention somewhere using the method for in-home self-collection. Have they experimented with that? It would be particularly interesting if they would provide details about how they plan quality assurance for these samples. Also, any particularities of this method of collection should be explained.

Authors: Since version 1 of our SOP came out in F1000Research (F1000Research 2016, 5:1385; doi: 10.12688/f1000research.8841.1), we have published a paper describing the results of a study in which weekly in-home self finger stick blood collection was undertaken by 13 subjects with type 1 diabetes and 14 controls for a period of 6 months [Speake C., et al., Clin Exp Immunol 2016]. Subjects returned an average of 24 out of 26 total weekly samples, and transcript data were successfully obtained for >99% of the samples returned. A high degree of correlation between finger stick data and data from a standard 3 ml venipuncture sample was observed.
RNA yields obtained from blood volumes of 10, 15, 20, and 25 μl indicated that those volumes were sufficient to generate the 100 ng of RNA needed for downstream high-throughput real-time PCR [Speake C., et al., Clin Exp Immunol 2016]. However, the detailed procedure for finger stick blood collection and RNA stabilization employed in this and other studies has never been published. The paragraph above was included in the manuscript (as part of the Introduction).

This point was also addressed in response to Dr. Cliff’s comment: in our initial experiments with RNA extraction using the 50 μl blood collection protocol in Tempus solution, we obtained an average RNA yield of 1 μg, with an average RNA integrity number (RIN) of ~7.5 and an A260/A280 ratio of ~2.14. Such yields and quality are sufficient for downstream assays, including RNA sequencing, microarray or Fluidigm. These data will be included in our manuscript describing RNA extraction procedures.

Referee report — Jacqueline M Cliff, 04 Oct 2016
Status: Approved With Reservations

Thank you for the opportunity to review the manuscript by Rinchai et al., entitled “Finger stick blood collection for gene expression profiling and storage of tempus blood RNA tubes”.

Blood gene expression profiling has led to rapid advances in our understanding of a range of pathological conditions, including cancer, autoimmune disease and infectious disease.
Blood RNA stabilisation tubes have greatly advanced the ease and standardisation of blood collection for such studies. Developing this methodology further by enabling reduced blood volume collection by non-trained phlebotomists or even study participants themselves would facilitate more detailed investigations at greater frequency and more pertinent time points, for example during times of disease exacerbation, and by removing the necessity to visit healthcare settings.\nThis article describes such a procedure, describing the collection of 50µl blood in a capillary tube, collected into Tempus Tube stabilisation reagent. This paper therefore provides an important contribution to the blood transcriptomics field. However, the manuscript could be improved if the following points are addressed.\nThe authors are suggesting that this sample collection method could be rolled out for home-based testing. However, the Tempus Tube RNA stabilisation reagent is hazardous according to the MSDS. This should be mentioned in the manuscript under the precautions section, whilst also addressing this limitation in terms of suitability for extensive roll-out for personal sample collection.\n\nWhilst the manuscript, including the figure and the accompanying video, are very explicit about the blood sample collection into the capillary tube, the preparation of the sample collection tubes in advance is not described. Including this under the preparation steps would be very helpful, as from the video it is difficult to see the Tempus reagent already in the tube.\n\nThe paper would be substantially improved by the inclusion of some RNA quality data, although the authors indicate RNA extraction will be published in a separate manuscript. 
But it would be reassuring to see some quantity and quality indicators in this manuscript, or the inclusion of some downstream analysis data to demonstrate that reasonable quality RNA can be generated.\nI agree with reviewer Angela Vinturache that the paper does not show much data regarding the storage of Tempus tubes as described in the paper title, except for a note about storage at -20°C being better than -80°C. In our experience, we get good yields of RNA from Tempus Tubes when samples have been stored at -80°C, as normally recommended for long-term RNA storage, believed to be due to inactivation of RNases.  A comment on why -20°C is better would be useful.\n\nOverall I think this is an important manuscript which could stimulate the expansion of blood-based transcriptomics for disease analysis and treatment monitoring, and recommend that it is indexed, subject to addressing the reservations (particularly point 1) described above.", "responses": [ { "c_id": "2518", "date": "03 Mar 2017", "name": "Darawan Rinchai", "role": "Author Response", "response": "We thank the reviewer for their precious time spent reviewing our manuscript. Please see our point-by-point response below. Blood gene expression profiling has led to rapid advances in our understanding of a range of pathological conditions, including cancer, autoimmune disease and infectious disease. Blood RNA stabilisation tubes have greatly advanced the ease and standardisation of blood collection for such studies. Developing this methodology further by enabling reduced blood volume collection by non-trained phlebotomists or even study participants themselves would facilitate more detailed investigations at greater frequency and more pertinent time points, for example during times of disease exacerbation, and by removing the necessity to visit healthcare settings. This article describes such a procedure, describing the collection of 50µl blood in a capillary tube, collected into Tempus Tube stabilisation reagent. 
This paper therefore provides an important contribution to the blood transcriptomics field. However, the manuscript could be improved if the following points are addressed. 1. The authors are suggesting that this sample collection method could be rolled out for home-based testing. However, the Tempus Tube RNA stabilisation reagent is hazardous according to the MSDS. This should be mentioned in the manuscript under the precautions section, whilst also addressing this limitation in terms of suitability for extensive roll-out for personal sample collection.  Authors: We thank the reviewer for this suggestion. The RNA stabilizing reagent is indeed potentially hazardous and designed for research use only. We therefore added the paragraph below under the “precaution” and “general remarks” sections, respectively, and as suggested we discuss this point as well. “Tempus Tube RNA stabilization reagent is a potential health hazard; acute oral toxicity, skin corrosion/irritation and serious eye damage/eye irritation can occur upon contact (see MSDS for details).” The hazardous nature of the Tempus solution would make extensive roll-out of the collection procedure described in this manuscript problematic; it is at the moment clearly intended for research use under well-controlled conditions, with collection preferably carried out by trained personnel. However, it should be noted that it has been field-tested for in-home self-collection in a limited number of subjects over a period of 6 months without incident [Speake C., et al., Clin Exp Immunol 2016]. The fact that small volumes of solution are used may alleviate some concerns (30 microliters of solution for 15 microliters of blood in the above-mentioned study, vs 6 mL of solution for 3 mL of blood using “off the shelf” Tempus collection tubes). However, other technical solutions in which liquids are better contained may indeed be preferable (e.g. 
microfluidics cards, sponges), with one of the best examples being the recently developed “DxCollect” system (DxTerity, Rancho Dominguez, CA). 2. Whilst the manuscript, including the figure and the accompanying video, is very explicit about the blood sample collection into the capillary tube, the preparation of the sample collection tubes in advance is not described. Including this under the preparation steps would be very helpful, as from the video it is difficult to see the Tempus reagent already in the tube. Authors: As suggested by the reviewer, we added a new section and figure (new Figure 1), “Preparation of collection tubes”, to the “Materials and Methods” section. We provide the step-by-step procedure for aliquoting the Tempus RNA stabilization reagent and preparing the blood collection tubes.  3. The paper would be substantially improved by the inclusion of some RNA quality data, although the authors indicate RNA extraction will be published in a separate manuscript. But it would be reassuring to see some quantity and quality indicators in this manuscript, or the inclusion of some downstream analysis data to demonstrate that reasonable quality RNA can be generated. Authors: A proof of principle has been obtained with a study that we recently published, in which weekly in-home self-finger stick blood collection (15 uL) was implemented for 13 subjects with type 1 diabetes and 14 controls for a period of 6 months [Speake C., et al., Clin Exp Immunol 2016]. A high degree of correlation between results obtained via finger stick and a standard 3 mL venipuncture sample was observed. RNA yields obtained from blood volumes of 10, 15, 20, and 25 μL indicated that those volumes were sufficient to generate the 100 ng of RNA needed for downstream high-throughput real-time PCR. Furthermore, it was found that equivalent quantities of RNA were obtained whether tubes were flicked, pipetted, or vortexed, but yield was reduced when samples were not mixed at all.   
Several sentences were added to the Introduction and the work is referenced throughout, where appropriate.   In our initial experiment with RNA extraction from the 50 uL blood collection protocol in Tempus solution, the average RNA yield was 1 ug; purity was good, with an average RNA integrity number (RIN) of ~7.5 and an A260/A280 ratio of ~2.14. This concentration was enough for downstream assays such as RNA sequencing, microarray or Fluidigm assays. These data will be included in our manuscript describing the RNA extraction procedures.   I agree with reviewer Angela Vinturache that the paper does not show much data regarding the storage of Tempus tubes as described in the paper title, except for a note about storage at -20°C being better than -80°C. In our experience, we get good yields of RNA from Tempus Tubes when samples have been stored at -80°C, as normally recommended for long-term RNA storage, believed to be due to inactivation of RNases.  A comment on why -20°C is better would be useful. Authors: We thank the reviewer for helping us to make our manuscript clearer. The following sentences have been added to the manuscript (section (6) Storage and shipping):   “After collection, the blood sample should be kept cold at 4°C no longer than 48 hours and transferred to a -20°C or -80°C freezer for long-term storage. Data obtained using a limited set of samples frozen overnight showed that the RNA yield for samples stored at -80°C was about half the yield of the same blood samples stored at -20°C, but was nevertheless still amply sufficient for downstream analyses. It should also be noted that the plastic that Tempus tubes are made of will become brittle at temperatures lower than -20°C. 
The effect of storage temperature on RNA yield and quality will have to be evaluated further, especially over extended periods of time where storage at lower temperatures might show benefits (see also the referees’ comments and our responses for more details).”   We intend to publish standard operating procedures for RNA extraction from Tempus blood, which will include extensive QA/QC analyses under different storage conditions. The preliminary data that we are referring to when comparing the yield of RNA from samples stored at -20 °C or -80 °C are shown below.

Table 1: Preliminary results for comparison of -20 °C and -80 °C storage conditions

Sample ID   Storage Temp (°C)   RNA yield (ug)   RNA RIN
S1          -20                 8.59             8.2
S2          -20                 8.69             8.0
S3          -80                 3.04             9.0
S4          -80                 2.81             8.9
NC          N/A                 0.05             N/A

Note: Two whole blood samples collected in Tempus RNA Tubes were combined into 50 mL conical tubes and then mixed thoroughly by inversion. Two aliquots of 4 mL each were stored at -80°C or -20°C for 3 days. Samples were subsequently processed using a modified MagMAX for Stabilized Blood Tubes RNA Extraction Kit (adjusted for input volume during the homogenization step only). Purified RNA was assessed for concentration and integrity using NanoDrop and BioAnalyzer, respectively. The negative control is 1X PBS processed through the RNA extraction workflow.   The results show that samples stored at -20°C had greater RNA yield compared to those stored at -80°C, but that quality was similar under both storage conditions. 
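As a quick arithmetic companion to the preliminary table above, the per-condition mean yields can be computed directly from the listed values. This comparison is our own illustration; only the numbers come from Table 1:

```python
# Preliminary RNA yields (ug) from Table 1, grouped by storage temperature (C).
yields = {-20: [8.59, 8.69], -80: [3.04, 2.81]}

# Mean RNA yield for each storage condition.
means = {temp: sum(vals) / len(vals) for temp, vals in yields.items()}

# How the -80 C mean compares to the -20 C mean.
ratio = means[-80] / means[-20]

print(f"-20 C mean yield: {means[-20]} ug")
print(f"-80 C mean yield: {means[-80]} ug")
print(f"-80 C / -20 C ratio: {ratio:.2f}")  # roughly a third on these four samples
```

Note that on these two pairs of 3-day samples the -80 °C mean is roughly a third of the -20 °C mean; the "about half" figure quoted elsewhere in the responses refers to a separate, limited set of samples frozen overnight.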
We feel that sharing these preliminary results as part of our point-by-point response could be helpful to some readers, although more extensive investigation is required, especially for long-term storage, for which yields and QC results will obviously not be available for some time." } ] }, { "id": "16392", "date": "14 Oct 2016", "name": "Eiliv Lund", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis manuscript describes an important approach to transcriptomics that could be important for future research. It is remarkable that modern technology can give mRNA profiles based on a microgram of mRNA. The SOP is important, but some parts of it confuse me.\nFirst of all, this company is related to Tempus. Can the method be extended to other products? This is important since PAX is as much used as Tempus.\nSecondly, I am a bit surprised that -20 degrees is the primary choice versus -70 degrees since all medical biobanks use -70 degrees. The explanation of plastic that is brittle does not sound convincing for future use of the technology.\nThird, how long is the optimal mixing time before freezing? You state 20 seconds in the illustration while you also write that it can be stored for 24 hours at normal temperatures.\nThe technology is very interesting. 
Since there are no comparisons of outcomes for different procedures nor any quality measures of mRNA it may look more like a recipe.", "responses": [ { "c_id": "2519", "date": "03 Mar 2017", "name": "Darawan Rinchai", "role": "Author Response", "response": "We thank the reviewer for their precious time spent reviewing our manuscript. Please see our point-by-point response below. This manuscript describes an important approach to transcriptomics that could be important for future research. It is remarkable that modern technology can give mRNA profiles based on a microgram of mRNA. The SOP is important, but some parts of it confuse me.   First of all, this company is related to Tempus. Can the method be extended to other products? This is important since PAX is as much used as Tempus. Authors: Our choice of the Tempus system over PAXgene is based on side-by-side comparisons we performed over 10 years ago. We have checked in the literature whether PAXgene tubes were used for small-volume blood collection. We found only one publication, by Carrol ED. et al., BMC Immunol 2007, where the PAXgene Blood RNA System kit protocol was modified for use with small, sick children. These investigators aliquoted 860 uL of PAXgene reagent into microtubes and then added 300 uL whole blood to maintain the same recommended proportions as in the PAXgene evacuated tube system. Total RNA yield was between 1,114 and 2,950 ng. This blood volume is not compatible with finger stick blood collection, but it is possible that a protocol very similar to the one we are publishing here could be derived for use with PAXgene solution instead of Tempus.   However, many studies have compared the differences in RNA quality and yield between PAXgene™ and Tempus™. Here are some selected publications: Asare AL. 
et al., BMC Genomics 2008 reported that the Tempus system has “higher mean yields, improved RNA purity based on OD 260/230 ratios, less degradation based on GAPDH 3'/5' ratios, and a higher number of expressed transcripts based on Percent Present Calls” [PMID:18847473]. Nikula T. et al., Transl Res 2013 compared the performance of the Tempus and PAXgene systems using 2 RNA amplification protocols and high-density microarrays: “The microarray analysis showed acceptable correlation within and between the RNA preservation methods, but altogether 443 transcripts were differentially expressed between RNA samples preserved in TEMPUS and PAXgene tubes”. However, the TEMPUS gene expression profile more closely resembles that of PBMCs than does the PAXgene profile [PMID:23138105]. Häntzsch M. et al., PLoS One 2014 compared 3 PAXgene and 3 Tempus tube samples collected from participants of the LIFE study with and without acute myocardial infarction (AMI). They extracted RNA with 4 manual protocols from Qiagen (PAXgene Blood miRNA Kit), Life Technologies (MagMAX for Stabilized Blood Tubes RNA Isolation Kit), and Norgen Biotek (Norgen Preserved Blood RNA Purification Kit I and Kit II), and 2 (semi-)automated protocols on the QIAsymphony (Qiagen) and MagMAX Express-96 Magnetic Particle Processor (Life Technologies). The results showed that RNA yields were highest using the Norgen Kit I with Tempus tubes and lowest using the Norgen Kit II with PAXgene [PMID:25469788].  Thus, the choice of RNA stabilizing reagent used to preserve samples can indeed be important and affect subsequent RNA quantity and quality [(1)-(3)]. We have added the sentences below to our introduction:   “Whole blood RNA stabilization systems, PAXgene™ (Qiagen) and Tempus™ (Life Technologies), have been adopted as they became available and are now widely used. 
Several studies have compared the performance of these 2 commercial kits and found differences in gene expression profiles, RNA quality and yield [Asare AL. et al., BMC Genomics 2008; Nikula T. et al., Transl Res 2013; Häntzsch M. et al., PLoS One 2014]. Reported yields and quality of RNA stabilized in Tempus solution were generally greater. Thus, the choice of RNA stabilizing reagent used to preserve samples can indeed be important and affect subsequent RNA quantity and quality. Our choice of the Tempus system over PAXgene dates from side-by-side comparisons we performed over 10 years ago.”   Secondly, I am a bit surprised that -20 degrees is the primary choice versus -70 degrees since all medical biobanks use -70 degrees. The explanation of plastic that is brittle does not sound convincing for future use of the technology. Authors: This point was also addressed in response to Dr. Cliff’s comment regarding the optimal temperature for long-term RNA storage. The preliminary data that we are referring to, comparing the yield of RNA from samples stored at -20 °C or -80 °C, are shown below.

Table 1: Preliminary results for comparison of -20 °C and -80 °C storage conditions

Sample ID   Storage Temp (°C)   RNA yield (ug)   RNA RIN
S1          -20                 8.59             8.2
S2          -20                 8.69             8.0
S3          -80                 3.04             9.0
S4          -80                 2.81             8.9
NC          N/A                 0.05             N/A

Note: Two whole blood samples collected in Tempus RNA Tubes were combined into 50 mL conical tubes and then mixed thoroughly by inversion. 
Two aliquots of 4 mL each were stored at -80°C or -20°C for 3 days. Samples were subsequently processed using a modified MagMAX for Stabilized Blood Tubes RNA Extraction Kit (adjusted for input volume during the homogenization step only).  Purified RNA was assessed for concentration and integrity using NanoDrop and BioAnalyzer, respectively.  The negative control is 1X PBS processed through the RNA extraction workflow.   The results show that samples stored at -20°C had greater RNA yield compared to those stored at -80°C, but that quality was similar under both storage conditions. It is conceivable, however, that over the long term integrity would be better maintained at the lower temperature. We feel sharing these preliminary results at this point could be helpful to some readers, although more extensive investigation is required, especially for long-term storage, for which yields and QC results will obviously not be available for some time.   However, when using off-the-shelf Vacutainer Tempus tubes it has indeed been our experience that tubes in direct contact with dry ice can break during transportation and will shatter if dropped to the ground. Recommendations for -20°C storage were made by the manufacturer on this basis. However, this is not a concern when using microtubes in which small volumes of blood in Tempus solution are stored. The appropriate sentences were added to the manuscript (section (6) Storage and shipping).   Third, how long is the optimal mixing time before freezing? You state 20 seconds in the illustration while you also write that it can be stored for 24 hours at normal temperatures. Authors: The 20-second mixing step is necessary and should always be performed. It should occur immediately after blood collection. This is important because it allows precipitation of the RNA, which is then protected from degradation. We have clarified this point in the manuscript.   
Furthermore, in a recent paper by Speake C, et al., Clin Exp Immunol 2016, in which we report the implementation of a procedure using 15 uL of finger stick blood for in-home collections, we demonstrated that the quality of RNA was similar when tubes were flicked, pipetted, or vortexed, but was reduced when samples were not mixed at all.   Subsequent to this homogenization step we recommend that samples be stored: at room temperature for a few hours; at 4°C for no longer than 48 hours; or in a -20°C or -80°C freezer for long-term storage. It should be noted that per the manufacturer samples could be stored for 5 days at room temperature, but in our experience this can lead to significant loss in RNA integrity and it is generally best to refrigerate or freeze as soon as feasible. The technology is very interesting. Since there are no comparisons of outcomes for different procedures nor any quality measures of mRNA it may look more like a recipe. Authors:  We indeed tried to be pragmatic and publish a detailed SOP with a sufficient level of detail that it could be directly incorporated in a clinical protocol. Outcome comparisons with RNA quality and quantity measurements are presented in a paper that was published very recently and is now referred to in version 2 of our manuscript." } ] } ] }
1
https://f1000research.com/articles/5-1385
https://f1000research.com/articles/6-210/v1
02 Mar 17
{ "type": "Research Article", "title": "Cause and age-related mortality trends in Bangladesh (2000-2008)", "authors": [ "Aziza Sultana Rosy Sarkar", "Nurul Islam", "Aminul Hoque" ], "abstract": "Background The purpose of this study was to analyze mortality trends in Bangladesh from 2000 to 2008, to identify the main causes of death, and categorize them by sex and age group. Methods This study used vital registration, maternal and child health data collected from Matlab, a rural area of Bangladesh, in 2000, 2004 and 2008. The data were collected and published by the Health and Demographic Surveillance System of ICDDR,B. Results This study indicates a downward trend in communicable disease, neonatal and maternal, injury and miscellaneous mortality. Only non-communicable diseases (NCDs) showed a rising trend, for both males and females. Among the NCDs, circulatory system related diseases were most common in Bangladesh. The second major cause of death was neoplasm. The risk of death from non-communicable diseases increased with age. The overall death rates were higher for males than females. Males of ages 45 and above were greatly affected by circulatory system related diseases and neoplasm. Circulatory system related deaths were highest (34.01%) in the 70-79 age group. Neoplasm related deaths were highest (34.38%) in the 60-69 age group. Similar patterns were observed for females. Circulatory system related diseases, respiratory related diseases and neoplasms greatly affected females of the 45-59 and above age groups. The highest percentage (38.65%) of circulatory system related deaths was found in the 70-79 age group; neoplasm related deaths were highest (29.41%) in the 45-59 age group; and the highest percentage (32.69%) of respiratory related diseases was found in the 60-69 age group. Conclusions It was observed that a large portion of the population died because of non-communicable diseases. 
Public awareness about common NCDs and the risk factors involved should be raised. Promoting health-related content in both male and female education can bring improvements in reducing NCDs.", "keywords": [ "non-communicable diseases", "NCDs", "mortality rate", "percentage distribution" ], "content": "Abbreviations:\n\nNCDs: noncommunicable diseases, ICDDR,B: International Centre for Diarrhoeal Disease Research, Bangladesh, D1: maternal and neonatal cause, D2: communicable cause, D3: noncommunicable cause, D4: miscellaneous cause.\n\n\nIntroduction\n\nMortality trends are important to demographers because they present a useful way of examining mortality differentials and their principal causes across populations. It has been reported that mortality rates in Bangladesh have generally declined notably over recent decades1. However, deaths caused by chronic diseases are rising at an alarming rate1. A rapid rise has been observed in the burden of non-communicable diseases (NCDs) worldwide. Demographic transition and changing lifestyles among people are important factors for these kinds of health problems2. The World Health Organization (WHO) has predicted that, by 2020, two-thirds of the world’s global burden of disease will be caused by non-communicable conditions3. In 2005 it was reported that non-communicable diseases such as heart disease, stroke, diabetes mellitus, cancer, and chronic respiratory diseases were responsible for 59% of the 57 million deaths yearly and 46% of the total burden of disease, globally3.\n\nThe burden of NCDs has been showing an increasing trend in South Asia, where almost half of all deaths in Asia and 46% of the global burden of disease are attributable to these diseases5. It was observed in much of sub-Saharan Africa that the leading risks were those associated with poor quality of life6. Cardiovascular disease is a major non-communicable disease, taking almost 17 million lives each year7. 
It has been observed that decreasing primary risk factors such as inadequate nutrition, physical inactivity, smoking etc. can decrease death rate significantly7. Alam et al.8 investigated total deaths of adults with increasing age in Bangladesh and found communicable diseases responsible for 18% of overall deaths and NCDs responsible for 66%8. The NCDs included those caused by the circulatory system (35%), respiratory system (10%), digestive system (6%), neoplasms (11%) and endocrine and metabolic disorders (6%)8.\n\nThere are relatively few published studies about mortality, especially for NCDs, in developing countries like Bangladesh. It is therefore a timely necessity to categorize the country’s mortality data by cause of death, sex and age group. The aim of this study is to analyze mortality trends in Bangladesh. These will help in the development of strategies regarding the approach of the health sector to disease control. It is also important to increase awareness about which diseases will cause further burden in Bangladesh, in order to supply the suitable drugs.\n\n\nMaterials and methods\n\nThis study used vital registration, maternal and child health data collected from Matlab, a rural area of Bangladesh, in 2000, 2004 and 2008. The data were gathered and published by Health and Demographic Surveillance System of ICDDR,B. In 2000, 2004 and 2008, the surveys counted 218579, 224476 and 222218 individuals, respectively. In 2000, the surveys counted 106370 male individuals and 112209 female individuals. In 2004, the surveys counted 107439 male individuals and 117037 female individuals. In 2008, the surveys counted 103579 male individuals and 118639 female individuals.\n\nMortality rate is a measure of the number of deaths in a population. It is expressed as number of deaths per 1000 individuals per year. 
Cause-specific mortality rate is the number of deaths from a particular cause of disease in a population during a fixed time period.\n\nCause-specific mortality rate = (Number of deaths from a particular cause / Total population) × 100\n\nThe study was approved on 12/06/2012 by the University Research Ethics Committee, University of Rajshahi, Bangladesh.\n\nAll participants were informed about the study and gave their written consent to participate.\n\n\nResults\n\nTable 1 shows total deaths and death rates in Bangladesh in 2000, 2004, and 2008. Regarding causes of death, neonatal and maternal diseases (D1) showed a decreasing trend in both males and females. Communicable diseases (D2) also showed a decreasing trend. Non-communicable diseases (D3) showed an increasing trend, almost doubling their victim count between 2000 and 2004. Injuries and miscellaneous causes (D4) showed a statistically significant declining trend. The overall male death rate rose from 7.52 in 2000 to 7.86 in 2004, falling back to 7.49 in 2008. Similar trends can be seen for female death rates.\n\nMales had a higher mortality rate than females in 2008. Also, the total number of deaths from non-communicable diseases was significantly higher than in the rest of the disease categories for both sexes. Table 2 shows that the percentage of male deaths was higher than that of female deaths across all years.\n\nTable 3 provides age-specific death rates for males in Bangladesh in 2000, 2004, and 2008. Infant mortality was highest in 2000, at 15 per 1,000 children. There was a gradual decline in the rate of infant mortality from 2000 to 2008, with 11 per 1,000 children in 2004 and 9 per 1,000 children in 2008. The death rate was also declining for the 0–14 age category from 2000 to 2008. 
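The cause-specific mortality rate defined in the Methods is simple arithmetic; a minimal sketch in Python. The pairing of example numbers is our illustration (the 2004 surveyed population with the 232 male circulatory deaths reported later), not a rate computed in the paper:

```python
def cause_specific_mortality_rate(deaths_from_cause, total_population):
    """Cause-specific mortality rate as defined in the Methods section:
    deaths from a particular cause divided by the total population, times 100."""
    return deaths_from_cause / total_population * 100

# Illustrative pairing only: 232 male circulatory deaths against the
# 224,476 individuals counted in the 2004 Matlab survey.
rate = cause_specific_mortality_rate(232, 224476)
print(round(rate, 4))  # roughly 0.1034 per 100 population
```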
On the contrary, the 15–59 age group showed increasing death rates from 2000 to 2008, with 2.81 per thousand in 2000, 2.99 per thousand in 2004 and 3.04 per thousand in 2008. Finally, natural trends were observed in the 60+ age groups. Most people died at that age. Table 4 shows that for the 60+ age groups, female mortality was lower than male mortality in Bangladesh. Female death rates at ages of 60+ were 39 per thousand in 2000 and 42 per thousand in 2004. Among males, death rates at ages of 60+ were 48 per thousand males in 2000 and 54 per thousand males in 2004. The trends remained the same for 2008. This was also true for the age group 15–59. There was no statistically significant difference in death rates between males and females of the age group 0–14. Infant mortality was highest in 2000 amongst this age group, at 15 per thousand both in male and female infants. Female infant mortality exhibited a gradual decline over the years from 2000 to 2008, similar to male infant mortality. Infant mortality rate was 11 per thousand in 2004 and 6 per thousand in 2008.\n\nTable 5 shows that among the total male NCD related deaths in year 2004, 232 (43%) fall under the category of circulatory related disease, 82 (15%) of them fall in the neoplasm group, and 78 (14%) of them were respiratory related. Then, 59 (11%) male NCD related deaths fall under the category of digestive disease, 47 (9%) under endocrine disorder, 16 (3%) under neuro-psychiatric, 17 (3%) under genito-urinary and 9 (2%) under other NCDs. In the year 2008, 297 (54%) of NCD related deaths fall under the category of circulatory related disease, 96 (18%) fall in the neoplasm group, and 52 (10%) of them were respiratory related. 
Then, 40 (7%) fall under the category of digestive disease, 30 (5%) under endocrine disorder, 8 (1%) under neuro-psychiatric, 16 (3%) under genito-urinary, and 4 (1%) under other NCDs.\n\n(ICDDR, B).\n\nIn Table 6, it can be observed that among the total female NCD related deaths in 2004, 234 of them (54%) fall under the category of circulatory related disease, 46(10%) of them fall in the neoplasm group, and 34 (8%) of them were respiratory related. Then, 51 (12%) female NCD related deaths fall under the category of digestive disease, 29 (6%) under endocrine disorder, 13 (3%) under neuro-psychiatric, 17 (4%) under genito-urinary and 12 (3%) under the other non-communicable disease category. Among the total female respondents, 282 (61%) of NCD related deaths in 2008 fall under the category of circulatory related disease, 51 (11%) fall in the neoplasm group and 52 (11%) of them were respiratory related. Then, 21 (4%) female NCD related deaths fall under the category of digestive disease, 31 (7%) under endocrine disorder, 8 (2%) under neuro-psychiatric, 13 (3%) under genito-urinary and 5 (1%) under the other non-communicable disease category.\n\nTable 7 shows the distribution by age group of male circulatory and neoplasm related deaths. Circulatory system related diseases and neoplasms greatly affected the age groups 45–59 and above. Circulatory system disease related deaths were highest (34.01%) in the age group 70–79, and neoplasm related deaths were highest (34.38%) in the age group 60–69. The asymptotic significance level was 0.000. Given that the null hypothesis is rejected when the p-value is less than 0.05, this indicates a strong relationship between age and incidence of disease in men. (Table 8).\n\nTable 9 shows the distribution by age group of female deaths caused by three major NCDs: circulatory system related diseases, neoplasms, and respiratory related diseases. 
Similar to what was observed in the male population, circulatory system related diseases, respiratory related diseases, and neoplasms had a greater effect on the age groups 45–59 and above. Circulatory system disease related deaths were highest (38.65%) in the age group 70–79, neoplasm related deaths were highest (29.41%) in the age group 45–59 and respiratory related deaths were highest (32.69%) in the age group 60–69. The asymptotic significance level was less than 0.05; since the null hypothesis is rejected when the p-value is less than 0.05, this indicates a significant association between age and cause of death in females (Table 10).\n\n\nDiscussion\n\nIn 1990, worldwide and regional cause-of-death patterns were measured across age groups. It was found that 98% of all deaths in children below 15, 83% of all deaths in the 15–59 age group, and 59% of all deaths in the 70+ age group occurred in the developing world9. The disease mortality pattern in elderly patients of a Nigerian teaching hospital was studied from January 2007 to December 2011. A total of 3,002 elderly (>65 years) people were admitted, of whom 561 died. Among them, 317 were male and the rest were female. Cerebrovascular disease was the top cause of death (25.1%). The second and third major causes of death were malignancies (15.2%) and diabetes mellitus (8%)10. A cross-sectional study involving 535 inhabitants of Sokoto, Nigeria, examined the prevalence and pattern of non-communicable diseases. Overweight, obese and morbidly obese participants represented 12.3%, 6.7% and 0.9% of the population, respectively. The prevalence of pre-hypertension and hypertension was 8.5% and 30.2%, respectively11.\n\nDeaths from non-communicable diseases show a rising trend. Our results support the finding that non-communicable diseases are imposing a sizeable and growing public health burden globally12–18. 
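The chi-squared tests of association between age group and cause of death reported above (Tables 8 and 10) can be sketched as follows. Note that the counts in this snippet are illustrative placeholders, not the study's data, since the full contingency tables are not reproduced in the text.

```python
# A minimal sketch of the Pearson chi-squared test of association between
# age group and cause of death (as in Tables 8 and 10).

def chi_square_statistic(observed):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Illustrative placeholder counts, NOT the study's data.
# Rows: age groups 45-59, 60-69, 70-79, 80+; columns: circulatory, neoplasm.
observed = [
    [10, 8],
    [40, 33],
    [70, 25],
    [45, 15],
]
dof = (len(observed) - 1) * (len(observed[0]) - 1)  # (r-1)(c-1) = 3
chi2 = chi_square_statistic(observed)
CRITICAL_0_05 = 7.815  # chi-squared critical value for dof = 3, alpha = 0.05
print(f"chi2 = {chi2:.2f}, dof = {dof}")  # chi2 = 9.65 for these counts
# The null hypothesis of no association is rejected when chi2 exceeds the
# critical value (equivalently, when the p-value is below 0.05).
print("association significant at 0.05:", chi2 > CRITICAL_0_05)
```

The same computation (plus an exact p-value) is available as `scipy.stats.chi2_contingency` when SciPy is an option.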
Vital registration and maternal and child health data were collected from Matlab, Bangladesh, in 2000, 2004 and 2008, and published by the Health and Demographic Surveillance System of ICDDR,B. Among the total male NCD related deaths in the year 2004, 232 (43%) fall under the category of circulatory related disease, 82 (15%) fall in the neoplasm group, and 78 (14%) were respiratory related. Then, 59 (11%) male NCD related deaths fall under the category of digestive disease, 47 (9%) under endocrine disorder, 16 (3%) under neuro-psychiatric, 17 (3%) under genito-urinary and 9 (2%) under other NCDs. In the year 2008, 297 (54%) of male NCD related deaths fall under the category of circulatory related disease, 96 (18%) fall in the neoplasm group, and 52 (10%) were respiratory related. Then, 40 (7%) deaths fall under the category of digestive disease, 30 (5%) under endocrine disorder, 8 (1%) under neuro-psychiatric, 16 (3%) under genito-urinary, and 4 (1%) under other NCDs.\n\nAmong total female NCD related deaths in 2004, 234 (54%) fall under the category of circulatory related disease, 46 (10%) fall in the neoplasm group, and 34 (8%) were respiratory related. Then, 51 (12%) female NCD related deaths fall under the category of digestive disease, 29 (6%) under endocrine disorder, 13 (3%) under neuro-psychiatric, 17 (4%) under genito-urinary and 12 (3%) under the other non-communicable disease category. Among total female NCD related deaths in 2008, 282 (61%) fall under the category of circulatory related disease, 51 (11%) fall in the neoplasm group and 52 (11%) were respiratory related. Then, 21 (4%) female NCD related deaths fall under the category of digestive disease, 31 (7%) under endocrine disorder, 8 (2%) under neuro-psychiatric, 13 (3%) under genito-urinary and 5 (1%) under the other non-communicable disease category. 
There were more male deaths due to neoplasms and more female deaths due to circulatory related disease.\n\nIt is recognized that a large proportion of the population dies of non-communicable diseases, and the number of such deaths increases rapidly year by year12,19. Males aged 45 and above were greatly affected by circulatory system related diseases and neoplasms. Circulatory system related deaths were highest (34.01%) in the 70–79 age group. Neoplasm related deaths were highest (34.38%) in the 60–69 age group. Similar patterns were observed for females. Circulatory system related diseases, respiratory related diseases and neoplasms greatly affected females of the 45–59 age group and above. The highest percentage (38.65%) of circulatory system related deaths was found in the 70–79 age group; neoplasm related deaths were highest (29.41%) in the 45–59 age group; and the highest percentage (32.69%) of respiratory related deaths was found in the 60–69 age group.\n\n\nConclusions\n\nThis study found that a large number of people die of non-communicable diseases, and that this number increases substantially year by year. Deaths from circulatory related diseases were significantly higher than from other non-communicable diseases, and in females the mortality rate from these diseases was very high. Neoplasms were the second major cause of death for the male population in 2008. Circulatory system related diseases and neoplasms greatly affected the age group 45–59 and above. For females, the death rate was also very high for respiratory related diseases. Females were affected by non-communicable diseases at a younger age than males. Circulatory system related diseases, neoplasms and respiratory related diseases are the top three NCDs with a massive impact on the health of the population, and should therefore be given the utmost attention. 
These three NCDs and their associated risk factors should be targeted in all public health awareness programs.\n\nThe national policy and action plan should take these points into consideration, and focus on improving basic education and expanding public health systems to raise awareness. Mass media outlets such as television, newspapers, radio, and the internet can play an effective role in raising awareness and alerting people to the dangers posed by NCDs. Awareness campaigns can positively modify attitudes. Finally, the Ministry of Health and Family Welfare should train more personnel, achieve national coverage and promote more research on the subject, thus ensuring that high standards are kept.\n\n\nData availability\n\nRaw datasets have not been made available at the request of the ethics committee in order to maintain participant confidentiality. These data are stored at the Department of Statistics, University of Rajshahi, and are available upon request. Please contact the first author (Aziza Sultana Rosy Sarkar, email: asrosy2012@gmail.com) for further information.", "appendix": "Author contributions\n\n\n\nASRS participated in the design of the study and performed the statistical analysis. ASRS and MNI conceived the study, participated in its design and coordination, and helped draft the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nMinistry of Health and Family Welfare: Strategic plan for surveillance and prevention of non-communicable diseases in Bangladesh 2011–2015. 2011.\n\nNissenen A, Berrios X, Puska P: Community-based noncommunicable disease interventions: lessons from developed countries for developing ones. Bull World Health Organ. 2001; 79(10): 963–970. 
World Health Organization: The world health report 2002 – Reducing Risks, Promoting Healthy Life. Geneva: WHO Press, 2002.\n\nWorld Health Organization: 2008–2013 Action plan for the global strategy for the prevention and control of noncommunicable diseases. Geneva: WHO Press, 2008.\n\nGhaffar A, Reddy KS, Singhi M: Burden of non-communicable diseases in South Asia. BMJ. 2004; 328(7443): 807–10.\n\nLim SS, Vos T, Flaxman AD, et al.: A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet. 2012; 380(9859): 2224–2260.\n\nFehér J, Lengyel G: [Nutrition and cardiovascular mortality]. Orv Hetil. 2006; 147(32): 1491–1496.\n\nAlam N, Chowdhury HR, Bhuiyan MA, et al.: Causes of death of adults and elderly and healthcare-seeking before death in rural Bangladesh. J Health Popul Nutr. 2010; 28(5): 520–528.\n\nMurray CJ, Lopez AD: Mortality by cause for eight regions of the world: Global Burden of Disease Study. Lancet. 1997; 349(9061): 1269–1276.\n\nUchendu OJ, Forae GD: Diseases mortality patterns in elderly patients: A Nigerian teaching hospital experience in Irrua, Nigeria. Niger Med J. 2013; 54(4): 250–253.\n\nMakusidi MA, Liman HM, Yakubu A, et al.: Prevalence of non-communicable diseases and its awareness among inhabitants of Sokoto metropolis: outcome of a screening program for hypertension, obesity, diabetes mellitus and overt proteinuria. Arab J Nephrol Transplant. 2013; 6(3): 189–191.\n\nWorld Health Organization: Preventing chronic diseases: a vital investment. WHO global report. Geneva: WHO, 2005.\n\nStrong K, Mathers C, Leeder S, et al.: Preventing chronic diseases: how many lives can we save? Lancet. 2005; 366(9496): 1578–1582.\n\nMathers CD, Loncar D: Projections of global mortality and burden of disease from 2002 to 2030. PLoS Med. 2006; 3(11): e442.\n\nBeaglehole R, Epping-Jordan J, Patel V, et al.: Improving the prevention and management of chronic disease in low-income and middle-income countries: a priority for primary health care. Lancet. 2008; 372(9642): 940–9.\n\nThacker SB, Stroup DF, Carande-Kulis V, et al.: Measuring the public’s health. Public Health Rep. 2006; 121(1): 14–22.\n\nJemal A, Ward E, Hao Y, et al.: Trends in the leading causes of death in the United States, 1970–2002. JAMA. 2005; 294(10): 1255–1259.\n\nICDDR,B: Health and Demographic Surveillance System – Matlab, v. 42. Registration of health and demographic events 2008, Scientific Report No. 109. Dhaka: ICDDR,B; 2010.\n\nAnderson GF, Chu E: Expanding priorities – confronting chronic disease in countries with low income. N Engl J Med. 2007; 356(3): 209–211." }
[ { "id": "22133", "date": "24 Apr 2017", "name": "Gonghuan Yang", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe manuscript's objective is to analyze mortality trends in Bangladesh from 2000 to 2008, to identify the main causes of death, and to categorize them by sex and age group using vital registration and maternal and child health data collected from Matlab, a rural area of Bangladesh, in 2000, 2004 and 2008 through the Health and Demographic Surveillance System of ICDDR,B.\nIt is very significant to directly report the mortality trend in developing countries like Bangladesh. The data are from original vital registration and maternal and child health records. However, the authors do not describe the vital registration and maternal and child health data collection systems. How do these collection systems work? Who reports the status of the deceased, and to whom? How many cases are diagnosed in hospital? This basic information is essential in the manuscript. What is the quality of these collection systems with respect to underreporting or misreporting? The authors should supplement this information.\nAs the number of deaths per year is only about 800, it is understandable why the authors used broad age spans for mortality. I still suggest that mortality for ages 15–59 be divided into two groups: 15–39 and 40–59. 
In addition, the authors do not state whether the total mortality rate is a crude death rate or a standardized death rate; a standardized rate should be used when comparing mortality across different years.\nThird, the causes of death. The authors define these only very briefly: D1: neonatal and maternal diseases, D2: communicable diseases, D3: non-communicable diseases and D4: injuries and miscellaneous causes. These definitions are too coarse to support a cause-of-death analysis. The authors should refer to the International Classification of Diseases (ICD) so that the results of the manuscript can be understood by international colleagues. In particular, the authors list the subcategories of chronic non-communicable diseases in Table 5 without ICD codes, so it is hard to tell whether these correspond to the same diseases in the international classification.", "responses": [] }, { "id": "22406", "date": "02 May 2017", "name": "Nowrozy Kamar Jahan", "expertise": [ "Reviewer Expertise" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nIt is important to examine the trends of age-specific mortality rates and to identify their causes. I suggest that this manuscript be indexed after revision.\nMy major comments:\nTitle: Matlab DHSS is located in rural Bangladesh and does not represent urban Bangladesh. 
In this case, my suggestion is to add either “rural Bangladesh” or “evidence from Matlab DHSS” to the title.\nAbbreviations: My suggestion is to add “WHO: World Health Organization”, which I found in the 1st paragraph of the Introduction, and to delete D1 to D4, as these are not abbreviations, i.e. shortened forms of different causes of death; D1 to D4 are operational symbols for the different causes of death.\n\nIntroduction: In 2017, citing Reference 3, “WHO: world health report 2002”, is not acceptable. The authors should cite the current WHO report on NCDs, which is available online. In the 2nd paragraph, it is unclear how Reference 5 is linked to Reference 6, as the authors did not mention whether NCD is also a burden in sub-Saharan Africa. They could mention “why NCD is increasing in South Asia” and “what is the prevalence of NCD related risk factors”. In the last paragraph, instead of mentioning “Bangladesh”, they should mention “rural Bangladesh”, and in order to emphasize why it is important to study the mortality trends of rural Bangladesh, they should add that “In Bangladesh, the population is mainly rural, with almost 80 percent of the population living in rural areas”, with an appropriate reference.\n\nMaterials and methods: D4 is “Injuries and miscellaneous causes”, which was not mentioned properly under the “Abbreviations” section; “D1 to D4” should be moved from the “Abbreviations” section to this section. In Reference 8, Alam et al., who used Matlab DHSS data, mentioned that verbal autopsy (VA) was conducted to identify the causes of death. The authors should check whether VA was done in their case. If yes, they should mention it in this section and add “VA” to the abbreviations section. Demographers normally get the total number of deaths from vital registration, and to determine the causes of death, VA is the most appropriate method. 
In this section, the authors should mention that they ran a chi-squared test to examine the relationship between age and different causes of death for both sexes, which they presented in Tables 8 and 10. In the 1st paragraph of the results section, the authors mentioned “Injuries and miscellaneous causes showed a statistically significant declining trend” and, in the 2nd paragraph, “the total number of deaths from noncommunicable diseases was significantly higher than in (delete “in”) the rest of the disease categories for both sexes”. The authors should describe the statistical analysis here, before presenting the findings in the results section.\nCause-specific mortality rate:\nDefinition: delete “of disease”, as the particular cause can be a disease, an injury, an accident, etc. Formula: it should be “per 1000”, not 100.\n\nConsent to participate: This article is based on secondary data analysis. My suggestion is: “All participants gave their written consent when ICDDR,B collected data for vital registration”.\nResults: As the authors presented their study findings in tables (mainly Tables 5 & 6), they do not need to mention the findings of each row and column in the text. They should mention only the important findings which they will interpret in the discussion section. Example: the main focus of this manuscript is NCD; in this case, congenital malformations, neuro-psychiatric, digestive disease and genito-urinary causes are not relevant to highlight in the results section.\nDiscussion: The authors repeated their study findings in the 2nd, 3rd and last part of the 4th paragraph, which they should not do. In this section, they should interpret their results and describe the significance of their study findings by comparing them with the findings of other studies. They should critically analyze their study findings. In the 1st paragraph, they compared their study findings with a hospital-based study where the respondents were elderly patients; this is not relevant, as Matlab DHSS is population based. 
In the last part of the 1st paragraph, where they mention a cross-sectional study, it is unclear how that study's findings help interpret their own.\nLimitations of the study: The authors should add this section. In it, they can mention that they do not have detailed and precise information about the causes of NCD-related death; e.g. whether “respiratory disease” represents only COPD, which is an NCD, or other respiratory diseases. Similarly, whether “endocrine disorder” includes only diabetes or other endocrine problems such as thyroid disease. In Tables 5 & 6, I did not find the information for 2000, and in Table 7 I did not find the information on “respiratory”. Does this mean that the information was not available? If so, the authors should mention it in this section.\n\nMy minor comments:\nThe authors should avoid repetition; e.g.\nIn the last paragraph of the Discussion and the first paragraph of the Conclusions, “huge portion …..year by year” carry more or less the same meaning. Table 5 is for males, so there is no need to mention “male 2004” under “Frequency and Percent”. The same comment applies to Table 6.\n\nTable 1: could be re-organized.\nAfter the column “Total deaths”, the authors can add the column “Overall rate per thousand”. D1 and Rate: these two columns under D1 (the 1st column for the total number of deaths due to D1, the 2nd for the death rate per 1000 due to D1) can be reorganized as:\n\n| D1 | D2 | D3 | D4 |\n| Total deaths | Rate per 1000 | Total deaths | Rate per 1000 | Total deaths | Rate per 1000 | Total deaths | Rate per 1000 |\n\nTable 2: the authors should delete the two digits (.00) after the number of total deaths. 
They can add one more row for “Total percentage” in addition to “Total deaths”.\n\nTables 3 & 4: the headings can be re-arranged as:\n\n| Age Group | 2000 | 2004 | 2008 |\n| | Midyear population | Total deaths | Rate/1000 | Midyear population | Total deaths | Rate/1000 | Midyear population | Total deaths | Rate/1000 |\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-210
https://f1000research.com/articles/6-209/v1
02 Mar 17
{ "type": "Research Article", "title": "Engaging bioscientists in science communication: Evidence from an international survey", "authors": [ "Andrea Boggio", "Giorgio Scita", "Carmen Sorrentino", "David Hemenway", "Andrea Ballabeni", "Giorgio Scita", "Carmen Sorrentino", "David Hemenway", "Andrea Ballabeni" ], "abstract": "Background: Exchanges between scientists and nonscientists are critical to realizing the social value of basic research. These exchanges rest in part on the willingness and ability of scientists to engage effectively in science communication activities. In this paper, we discuss the perception and willingness of basic scientists in the biological and biomedical fields to engage in science outreach. Methods: The analysis is based on qualitative data collected as part of a survey on the social value of basic research and is framed by the theory of planned behavior. This is a well-established theory of human behavior that relies on the premise that a person’s intention to engage in a behavior is the single best predictor of whether that person will in fact engage in that behavior. Results: Our data show that, while bioscientists maintain a positive attitude towards science communication, their intentions are influenced by some negative feelings with regard to how nonscientists react to science communication efforts. Interactions with institutional actors, governmental bodies and the public are particularly problematic. On the other hand, interactions with clinicians and patients are framed in positive terms. Finally, some study participants raised concerns as to their ability to communicate science effectively, the availability of time and resources, and the lack of proper rewards, particularly in terms of career advancement, for those who engage in science efforts. 
Conclusions: Our findings suggest that bioscientists' intentions to engage in science communication efforts must be better studied to develop empirically-informed interventions to increase scientists’ participation in science outreach efforts.", "keywords": [ "Science communication", "publicly-funded research", "biomedical research", "theory of planned behavior", "qualitative data", "usability gap", "social value of science", "interaction experts/publics" ], "content": "Introduction\n\nScience communication refers to the “use of appropriate skills, media, activities and dialogue to produce one or more of the following personal responses to science: awareness, enjoyment, interest, opinion-forming and understanding” (Burns et al., 2003: 183). Effective science communication is seen as an important science policy tool, as it enables exchanges between scientists and nonscientists that are critical to transforming basic research into social outcomes (Bozeman, 2007; Bozeman & Sarewitz, 2011). After reviewing the empirical literature on the connections between research and societal application, Sarewitz & Pielke (2007: 7) conclude that “one feature that invariably characterizes successful innovation is ongoing communication between the producers and users of knowledge.” Producing valuable knowledge alone is in fact not sufficient to realize the public value of basic research; to transform basic knowledge into concrete social outcomes, basic research findings must be properly communicated to nonscientists and the latter must use these findings to generate social outcomes.\n\nUnfortunately, studies show that effective communication between scientists and nonscientists is not the norm. This has resulted in a ‘usability gap’ (Kirchhoff et al., 2013) that prevents basic research from becoming fully realized into concrete social outcomes. Scientists’ behavior is certainly an important determinant of this usability gap. 
Scientists can in fact contribute to closing this gap by engaging in science communication. Unfortunately, we know that scientists’ engagement in science communication activities is far from optimal and that scientists could be engaged more frequently and more effectively (Bruine de Bruin & Bostrom, 2013; Bucchi, 1998; Hilgartner, 1990; Pielke, 2012; Rödder et al., 2012; Suldovsky, 2016). Consequently, understanding scientists’ attitudes towards science communication activities is critical to designing policies that enhance their commitment to science communication activities.\n\nIn this paper, we analyze qualitative data gathered in conjunction with a large survey of scientists who are researchers in Italy, the United Kingdom, and the United States, using the theory of planned behavior to understand scientists’ perception of science communication activities. The goal is to understand which scientists’ perceptions influence their decision not to engage in science communication activities and to suggest policy interventions that would modify those perceptions and increase scientists’ participation in public engagement of science.\n\n\nMethods\n\nThe theory of planned behavior is a well-established theory of human behavior, which relies on the premise that a person’s intention to engage in a behavior is the single best predictor of whether that person will in fact engage in that behavior. According to the theory, intentions, and consequently, behavior are influenced by three sets of variables: attitude towards the behavior, subjective norms, and perceived behavioral control (Ajzen, 1991). Attitude towards the behavior is the degree to which performance of the behavior is positively or negatively valued by an individual. This attitude is in turn determined by behavioral beliefs—the subjective probability that the behavior will produce a given outcome—that link the behavior of interest to expected outcomes. 
Behavioral beliefs are based on personal experience, information sources and inferences. Subjective norms reflect an individual’s perception of social pressures to engage or not to engage in a behavior. This variable is linked to the degree to which certain groups of people approve or disapprove of the individual performing the specific behavior and how social pressure informs subjective perception about the particular behavior. Perceived behavioral control refers to a person’s perception of her ability to perform a given behavior. This variable is linked to the perceived ability to engage in a behavior and to the ability to do so effectively (Ajzen, 1991).\n\nPlanned behavior theory in particular has provided a fertile framework for empirical research in scientists’ participation in public engagement activities (Besley et al., 2013; Dudo et al., 2014; Dudo & Besley, 2016; Poliakoff & Webb, 2007). Poliakoff & Webb (2007: 254) have applied this approach to investigate the determinants of scientists’ intentions to participate in public engagement activities. Based on an ‘augmented’ version of the theory of planned behavior, the two authors show that scientists’ intentions are determined by past experiences with public engagement, attitudes towards engaging with the public, perceived control that scientists can exercise on public engagement activities, and beliefs about colleagues’ participation in public engagement. Poliakoff and Webb also conclude that career recognition, availability of time and other constraints do not significantly predict intentions to engage in public engagement of science activities. Using data from two large surveys of scientists from the United States and the United Kingdom, Besley et al. (2013) found that a personal commitment to the public good and feelings of personal efficacy and professional obligation can predict scientists’ involvement in outreach activities. Dudo et al. 
(2014) found that predictors of nanoscientists’ involvement in science communication include perceiving public communication as important for the welfare of society, seeing professional benefits from generating publicity about their research, and spending more time using online tools. These results were in great part replicated by Aykurt (2016), who conducted a similar study among nonscientists in Denmark. Interestingly, the study also found that a lack of time is an obstacle to engaging in science communication using online tools (Aykurt, 2016: 44). Finally, in researching how scientists prioritize science communication objectives, Dudo & Besley (2016) found that informing the public, exciting the public, building public trust in science, and defending science from public misinformation are the objectives that received the largest support among study participants. The fact that these objectives are important to scientists certainly has implications for their intention to engage in science communication: scientists may be more willing to engage if they perceive that science communication efforts prioritize the right objectives.\n\nWith this framework in mind, we analyze how the three variables deployed by the theory of planned behavior contribute to our understanding of scientists’ intentions to participate in science communication activities. Our analysis is based on qualitative data that we gathered in conjunction with a large survey of scientists conducting basic research in the biomedical field in Italy, the United Kingdom, and the United States1. The survey focused on the attitudes and beliefs of scientists towards policies that could incentivize the types of basic research with the highest likelihood of creating public health benefits2. While the survey did not focus directly on science communication, many study participants shared their perception of science communication activities in a Comments section that they could fill out after completing the survey. 
The section reads as follows:3\n\nCOMMENTS: Do you have ideas about other policies that can increase both the societal benefit potential of basic research and the scientist satisfaction, without affecting the fundamental nature of basic investigation? You can use this space to tell us about them or for any additional considerations related to the themes of this survey.\n\nParticipants were recruited by email and, after consenting to participate, were asked to confirm their status as basic researchers and identify their position/role within their respective organizations. Respondents could indicate their geographical location, but not their institution. They could skip any questions, including the Comments field4. Out of the 7,786 invitations that were sent between August 24, 2015 and October 10, 2015, 885 recipients filled out the entire survey and, among them, 145 participants filled out the Comments field5. Comments are presented, along with all responses to the entire survey, in the Final report in the Data availability section. The majority of the 145 respondents were male (64%) and PIs (71%). In total 60% were based in the US (50 in Los Angeles-San Diego and 34 in New York City), with the remaining split between the UK (36) and Italy (20) (Table 1).\n\nLA = Los Angeles-San Diego; LN = London-Cambridge; MI = Milan; NY = New York City.\n\nAll Comments were imported into MAXQDA (ver. 12.1; http://www.maxqda.com/), a qualitative data analysis software package, for analysis. We then conducted a three-step analysis of the data. In Step 1, we coded segments ‘in vivo’ based on the ‘first-step coding,’ a code system that we had used to analyze qualitative data in a previous study exploring similar issues (Boggio et al., 2016). The ‘first-step coding’ schema is available as Supplementary File 1. During Step 1, we discarded 12 of the 145 answers because respondents either commented on the survey design or merely thanked us for conducting the study. 
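The sample figures reported above imply the following rates; this is a quick arithmetic sketch (the rates themselves are derived here, not stated in the text):

```python
# Arithmetic check of the sample figures reported in the text.
invited = 7786     # email invitations sent (Aug 24 - Oct 10, 2015)
completed = 885    # recipients who filled out the entire survey
commented = 145    # participants who also filled out the Comments field
discarded = 12     # comments dropped during Step 1 of the coding

analyzed = commented - discarded       # comments retained for analysis
completion_rate = completed / invited  # share of invitees who completed
comment_rate = commented / completed   # share of completers who commented

print(f"completion rate: {completion_rate:.1%}")          # ~11.4%
print(f"comment rate among completers: {comment_rate:.1%}")  # ~16.4%
print(f"comments analyzed: {analyzed}")                   # 133
```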
In Step 2, we reviewed the results of the first-step coding to identify clusters of data that could yield further insights when analyzed in conjunction with the theory of planned behavior. In Step 3, we proceeded to analyze the data clusters identified in Step 2 along the three main parameters of the theory of planned behavior: attitudes, subjective norms, and (perceived and actual) behavioral control. This analysis was conducted by sifting through the material, highlighting text, terms, and phrases, writing notes, and reorganizing coded segments along the three parameters. Finally, we reviewed the results, reflected on the data, and constructed narratives that would convey study respondents’ attitudes towards science communication activities. While focused primarily on the qualitative data, the analysis is mindful of the quantitative findings of the survey, and we occasionally refer to those findings to provide the appropriate research context for our analysis.

Results

Initially, we analyzed responses that provide insights into the degree to which scientists positively or negatively value engaging in science communication activities. Overall, respondents expressed positive attitudes towards science communication.

With regard to the content of science communication, many respondents indicated that science communication should be concerned with correcting misunderstandings about the social value of science and with fostering a deeper understanding of the public value of science. Since society often fails to fully appreciate basic science’s potential to contribute to social outcomes, science communication should firstly aim at “increasing people awareness of the contributions of basic research to our daily lives” (#191)6 and foster “much more interaction between society, scientists and sufferers” (#160).
As one respondent put it, science communication must answer the question “What is the point of this research?” so that “everyone can understand why they are doing this research and why it will be a benefit to us” (#222). Secondly, science communication is also an opportunity to address negative perceptions of science. “With the current challenges about data replication in the popular press, good communication has never been more vital” (#116). Patients, some scientists worried, “do not understand the real role of bioscience for the general health [and think negatively of] animal experimentation, drugs in general and natural cures (that there are not miraculous cure of cancer)” (#133). Thirdly, many respondents also indicated that, if science were better understood, the public would be more willing to support basic research. To foster public support of science, it is essential that “the benefits of basic research are fed back to the people who in the end are funding the research” (#216). Others favored communication efforts that focus on “non-instrumental” aspects of science (#118, mentioning astronomy as “very popular, even though it has no direct economic benefit”), the complexity and mystery of scientific inquiries (#176: “science is a process that takes time and hard work and great thoughtfulness, not just a received body of ‘facts’ that one is supposed to memorize”), and science as “awesome and fun” (#150). The last respondent also added that “we need to get away from the pure health focus of research: in neuroscience all the promises of the decade of the brain were bad public policy. Let us not repeat that mistake over and over again!” (#150). “Good science is similar to painting or composing music, but the tools are different” (#166).

Our respondents expressed different attitudes depending on the audience of science communication. Exchanges with clinicians and patients received a more favorable assessment.
By contrast, relationships with policymakers, politicians, and the media were mostly framed in negative terms (we discuss policymakers, politicians, and the media under ‘perceived behavioral control’). With regard to clinicians, in more than 20 of the 133 comments respondents remarked that interactions between basic scientists and clinicians are beneficial in two ways. First, scientists can benefit from clinicians’ feedback. Due to the “constant patient and clinician contact,” the clinical environment may generate “a good amount of feedback into the direction of the science” (#184). In addition, interactions with basic scientists may reorient clinicians’ focus from “treatment of symptoms … to understanding disease [since] basic research tends to want to understand how things work” (#139). Second, clinicians can become agents of science when communicating with other users (colleagues and patients) (#215: “MDs have key role to play in supporting/communicating basic research”). Similarly, interactions with patients are perceived as positive. To stimulate these exchanges, study participants supported the proposal to relocate some basic researchers into hospitals and suggested that scientists should get actively involved with clinical care to generate more opportunities for knowledge exchanges7. They also noted, though, that more needs to be done for science communication to happen in clinical environments. “Simply having basic scientists at hospitals does not help if they are not actively involved with clinical problems” (#244). Cultural barriers may also be an obstacle to knowledge exchange, since clinicians and scientists come from different cultural worlds, and their exchanges would improve if their educational paths—e.g. a PhD in biology or an MD for clinicians—overlapped more.

There is huge knowledge gap between basic science and medical or clinical research although we start our education with same knowledge of biology, chemistry etc.
To tackle this problem, syllabus for basic science or medicine should have some overlapping subjects. (#112)

A different picture emerges from the analysis of comments involving the media and policymakers. The media are perceived as agents of distortion: the information that the media feed to the public does not accurately portray how scientific research is conducted and how it contributes to the betterment of society. Some imputed this state of affairs to the fact that “the media are for the most part corporate and can say/show what they want” (#146) and to the fact that the media engage in “false equivalence giving equal weight to dubious if not downright false opposing viewpoints” (#141). Against this background, the media are nonetheless perceived to play a critical role in science communication efforts.

I think the media could really help by reporting the origins of progress in terms of the basic science and scientists that initiated and brought to fruition a finding, even if it is not world-shattering progress. The public needs to be educated about the scientific method and how failed hypotheses lead to new insights that can change the world. For example, the discovery of miRNA – how it occurred and where it has led would help people understand the value of basic research. I am a glycobiologist and have determined many functions for sugars attached to glycoproteins. Even most scientists do not appreciate the importance of sugars on glycoproteins. Understanding how they are useful in recombinant therapeutics that treat disease would be helpful to the general public as it would be a lesson in how to think scientifically. (#219)

The media should thus “promote well-informed discussion of basic research discoveries and their contribution to knowledge in the media” (#230) with “accuracy of portrayal” (#108). Policymakers are also perceived negatively. Respondents questioned policymakers’ ability and willingness to understand and embrace science.
“The ignorance of a large portion of the general public is terrible, especially members of Congress” (#208).

Overall, negative attitudes seem to be grounded in scientists’ perception that the public lacks sufficient scientific literacy. In fact, many respondents expressed the wish to have policymakers—politicians in particular—with better science credentials and more willingness to understand science (“Need more policy makers who are scientifically literate, with some experience in basic science not simply owning medical degrees” #107). To address the scientific literacy deficit, respondents proposed various improvements that could change their attitudes towards communicating knowledge to nonscientists. Improving education in science and technology seems to be a clear target for enhancing science communication. Curriculum reform at all levels of education was proposed by numerous respondents. Curricula, some participants suggested, should emphasize how the scientific process works and how scientific findings benefit society. Scientific literacy would improve “if future generations are more aware of the scientific process” (#234). Scientific education should start at an early age—“get kids into science. That’s the only way to change the mindset of the generations” (#220). Also, “make sure all children have introduction to biology, chemistry, physics and human physiology; emphasize scientific process and benefit of scientific research” (#111). Curricula should also “showcase how basic science has paved the way for translational science” (#129). Children’s interest should also be fostered outside the classroom by having the right kind of programs and initiatives (“Bring back Bill Nye, The Science Guy, to public television; and/or promote TV/movie with scientists as protagonists to encourage more children to pursue STEM education and careers” #120).
In addition to triggering interest in science, these initiatives would produce more informed citizens and better policymakers (“An unprepared mind will be unprepared for the proposed policies” #199). To some, curriculum reform is arguably more important than having career scientists engage the public, since “much promotion of research by even highly regarded scientists does not serve the knowledge/long term interests of the public that well” (#235).

We then turned to analyzing responses that address the normative beliefs and subjective norms held by bioscientists. These are comments that concern whether bioscientists are expected to engage in science communication activities and how individuals perceive those expectations. Participants seem to be in agreement that bioscientists are expected to engage with the public. Furthermore, many argued that bioscientists ought to become more engaged with science communication efforts and take greater ownership of them. This is part of their professional obligation, especially where research is publicly funded. “Since the public funds the research in many cases, scientists have an obligation to educate the public and to deliver new knowledge to them” (#111). Bioscientists must get involved “in the training of media-relations personnel (i.e. journalists), of both trainees in the discipline (e.g. classroom settings of journalism schools) and of active journalist (e.g. through seminars or workshops)” (#177).

Respondents also identified various institutional and professional opportunities for greater engagement with science communication. Organizing meetings and open houses, an option directly asked about in our survey, is one. Institutions could organize “periodic open house type of event where basic scientists could explain their science to general public, entrepreneurs, clinicians” (#170).
These events are also seen as opportunities for scientists to reflect on their work and “think about potential societal benefits just by interacting with people in different background and educational qualification” (#170). Professional organizations are seen as science communication tools: “professional societies could do more societal outreach” (#154). However, some argued, these organizations need to change their approach to be effective: “Scientific societies need PR and even something like political operatives to expose industry and anti-science groups for their agendas and questionable credentials. Instead of not engaging or expecting some sort of even playing field, scientific associations need to get media savvy and do more to reclaim what science stands for and represents” (#141). Finally, respondents mentioned publication outlets as valid communication tools. Open access journals in particular offer opportunities for disseminating science and for the public to appreciate that public funds are used in ways that lead to beneficial social outcomes—or “to see their tax dollars at work,” as one respondent put it (#200). The idea of an online forum “where people feel safe to ask ‘stupid’ questions” (#210) was also brought up.

Finally, we analyzed the answers for insights concerning bioscientists’ behavioral control. This variable refers to the perceived and actual ability to engage in science communication activities. Bioscientists’ lack of the skills needed to communicate science effectively was mentioned several times as a barrier. As a solution, respondents proposed offering proper training to scientists, but also choosing only the most “effective communicators to engage with the public and media” (#243). “Train early career scientists to communicate with the public [and] include training in communication tools and public speaking during postdoctoral training” (#196).
More importantly, respondents felt unable to participate in public engagement of science because of perceived environmental constraints, such as time and resources. To them, these constraints constitute actual barriers to greater involvement with science communication. “Overburdened schedules” (#154) force bioscientists to struggle to keep up with the day-to-day work of scientific research. Demands for greater commitment to science communication efforts mean asking them to reduce the time and energy spent responding to the demands of today’s academic environment, where ‘publish or perish’ is the norm. Scientists’ perception is that this is simply impossible, especially considering that engaging with the public does not lead to immediate returns in terms of productivity and career outcomes. “You are raising the possibility of more sessions, more discussions, more meetings (etc.) for persons who already do research into the late hours, are involved in classroom teaching, do university administration, and instruct people in the lab” (#181). This cannot be done without increasing the funds available to scientists, labs, and institutions to pay for outreach and communication efforts, or without setting up direct rewards (in the form of career advancement, credit recognition in grant reviews, or prizes) for scientists who engage in such efforts. Otherwise, the risk is that further demands may “drive good scientists away from science” (#181).

Discussion

In this study, we investigate how bioscientists perceive science communication activities, with the understanding that science communication is instrumental to realizing the social value of basic research, and that perceptions influence the decision to engage in these activities. Interpreted through the theory of planned behavior, our data show that bioscientists’ perception of science communication may influence their decision to get involved with public engagement activities.
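For readers unfamiliar with how coded qualitative data map onto the theory’s three parameters, the reorganization of coded segments in Step 3 can be sketched schematically. The actual analysis was done in MAXQDA; the respondent IDs and excerpts below are quotes from the text, but their assignment to parameters here is purely illustrative:

```python
from collections import defaultdict

# Hypothetical coded segments: (respondent ID, TPB parameter, excerpt).
# The excerpts are quotes from the article; the parameter labels are an
# illustrative assumption, not the authors' actual coding.
segments = [
    ("191", "attitude", "increasing people awareness of the contributions of basic research"),
    ("111", "subjective_norm", "scientists have an obligation to educate the public"),
    ("154", "behavioral_control", "overburdened schedules"),
    ("243", "behavioral_control", "effective communicators to engage with the public and media"),
]

# Group the segments under the three parameters of the theory of planned behavior.
by_parameter = defaultdict(list)
for respondent, parameter, excerpt in segments:
    by_parameter[parameter].append((respondent, excerpt))

for parameter in ("attitude", "subjective_norm", "behavioral_control"):
    print(f"{parameter}: {len(by_parameter[parameter])} segment(s)")
```

Grouping segments this way makes visible which of the three parameters a given cluster of comments speaks to, which is essentially what the narrative construction in Step 3 relies on.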
Consistent with previous research (Corrado et al., 2001; Poliakoff & Webb, 2007), bioscientists maintain a positive attitude towards science communication as a tool that can enhance how science is understood and valued in society. Contrary to previous studies (Poliakoff & Webb, 2007), we found that subjective norms play a role as determinants of scientists’ participation: bioscientists appear to feel required to engage in exchanges with nonscientists. Furthermore, direct previous experiences with science communication activities were barely discussed, suggesting that they do not play an important role in determining participation, again contrary to Poliakoff & Webb (2007). Occasionally, respondents referred to indirect experiences, such as observing how nonscientists react to science communication efforts. Reported reactions are mostly negative: bioscientists’ perception is that nonscientists are ordinarily not positively impacted by science communication activities.

This leads to another important finding, which is currently not documented in the literature: interactions with certain nonscientists are valued more than those with others. In particular, exchanges with clinicians and patients are valued more than interactions with institutional actors, governmental bodies, and the public. The latter are perceived as lacking sufficient scientific literacy to appreciate science, which makes bioscientists feel powerless. Engaging with this group of actors may be of little value because bioscientists feel they cannot change their understanding of and appreciation for science.
On the other hand, interactions with clinicians and patients are perceived as more promising, since these exchanges can lead to mutual benefit: scientists would gain a better understanding of how science is translated into application; clinicians would be less focused on symptom treatment and more on understanding the underlying causes of disease; and patients would better understand the knowledge behind a given treatment.

Finally, perceptions relating to perceived behavioral control matter. Some bioscientists feel that they are not able to contribute effectively to science communication activities because they lack either the skills needed to communicate science effectively or the necessary time and resources. In addition, bioscientists feel that involvement with science communication is not properly rewarded, particularly in terms of career advancement. Perceptions of the lack of adequate training are not new (Corrado et al., 2001; Gascoigne & Metcalfe, 1997; Poliakoff & Webb, 2007). Similarly, many accounts of contemporary experience in academia focus on well-documented and debated systemic problems (Alberts et al., 2014; Alberts et al., 2015), which relate to how basic science is funded (Martinson et al., 2009), scientific merit is evaluated (Siler et al., 2015), scientific findings are published (Vale, 2015), and younger scientists are offered opportunities for career growth (Daniels, 2015; McDowell et al., 2014). Not surprisingly, these issues were brought up in the survey as challenges that prevent bioscientists from engaging further in science communication activities.

Conclusions

Exchanges between scientists and nonscientists are critical to realizing the social value of basic research. These exchanges rest in part on the willingness and ability of scientists to engage effectively in science communication activities.
Our study suggests that bioscientists are supportive of greater participation in public engagement of science and that they particularly value interactions with clinicians and patients. However, their commitment appears to be constrained by their limited confidence in their ability to communicate science effectively, as well as by the perceived lack of time and resources to engage with nonscientists. These problems are not new, and they are not insurmountable. Solutions must take the form of empirically informed interventions that reduce the obstacles bioscientists perceive as preventing them from getting more involved with science communication efforts.

Data availability

Questions and responses of the survey are available on F1000Research: doi: 10.5256/f1000research.7683.d110888 (Scita et al., 2016a)

Author contributions

Study design: Andrea Ballabeni, David Hemenway, Giorgio Scita. Data collection: Andrea Ballabeni, Stefano Confalonieri, Carmen Sorrentino. Data analysis and first draft of the manuscript: Andrea Boggio. All authors were involved in the revision of the draft manuscript and have agreed to the final content.

Competing interests

The authors declare no competing interests.

Grant information

Andrea Ballabeni and Carmen Sorrentino were funded by Cariplo Foundation (grant # 2015.0081).

Notes

1Data were collected as part of a study involving bioscientists working in four geographical locations—Los Angeles-San Diego (CA-USA), London-Cambridge (UK), Milan (Italy), and New York City (NY-USA). For a list of the institutions where the invited scientists work, see Carmen Sorrentino et al., Increasing Both the Public Health Potential of Basic Research and the Scientist Satisfaction. An International Survey of Bio-Scientists [Version 2; Referees: 2 Approved] (5, 2016).

2In this paper, we analyze the qualitative data collected using the Comments field (see Final report).
The overall results of the survey are published separately. See ibid.

3The survey can be accessed at https://f1000researchdata.s3.amazonaws.com/datasets/7683/e2cce3ea-006d-4143-befe-10b5dd86c3e2_Data.zip.

4The Harvard T.H. Chan School of Public Health IRB (IRB15-2787) and the FIRC Institute of Molecular Oncology Ethics Committee reviewed and approved the study.

5The overall results of the survey are published separately. See Sorrentino et al., Increasing Both the Public Health Potential of Basic Research and the Scientist Satisfaction. An International Survey of Bio-Scientists [Version 2; Referees: 2 Approved].

6These are the last three digits of a random number assigned by the software at the time of data collection. We used them as unique and anonymous identifiers of participants.

7Please see Question 15 of the survey: Please evaluate the following policy: “Locate more basic research laboratories inside or in close proximity of hospitals”

References

Ajzen I: The theory of planned behavior. Organ Behav Hum Dec. 1991; 50(2): 179–211.

Alberts B, Kirschner MW, Tilghman S, et al.: Rescuing US biomedical research from its systemic flaws. Proc Natl Acad Sci U S A. 2014; 111(16): 5773–77.

Alberts B, Kirschner MW, Tilghman S, et al.: Opinion: Addressing systemic problems in the biomedical research enterprise. Proc Natl Acad Sci U S A. 2015; 112(7): 1912–13.

Aykurt B: Surveying Nanoscientists' Communication Activities and Online Behavior. (Center for Science Studies, University of Aarhus), 2016.

Besley JC, Oh SH, Nisbet M: Predicting scientists’ participation in public life. Public Underst Sci. 2013; 22(8): 971–87.

Boggio A, Ballabeni A, Hemenway D: Basic Research and Knowledge Production Modes: A View from the Harvard Medical School.
Sci Technol Human Values. 2016; 41(2): 163–93.

Bozeman B: Public values and public interest: counterbalancing economic individualism. (Public Management and Change Series; Washington, D.C.: Georgetown University Press), 2007.

Bozeman B, Sarewitz D: Public value mapping and science policy evaluation. Minerva. 2011; 49(1): 1–23.

Bruine de Bruin W, Bostrom A: Assessing what to address in science communication. Proc Natl Acad Sci U S A. 2013; 110(Suppl 3): 14062–68.

Bucchi M: Science and the media: alternative routes in scientific communication. 1998.

Burns TW, O'Connor DJ, Stocklmayer SM: Science communication: a contemporary definition. Public Underst Sci. 2003; 12(2): 183–202.

Corrado M, Pooni K, Hartfree Y: The Role of Scientists in Public Debate: Full Report. (London: The Wellcome Trust), 2001.

Daniels RJ: A generation at risk: young investigators and the future of the biomedical workforce. Proc Natl Acad Sci U S A. 2015; 112(2): 313–18.

Dudo A, Besley JC: Scientists’ Prioritization of Communication Objectives for Public Engagement. PLoS One. 2016; 11(2): e0148867.

Dudo A, Kahlor L, AbiGhannam N, et al.: An analysis of nanoscientists as public communicators. Nat Nanotechnol. 2014; 9(10): 841–44.

Gascoigne T, Metcalfe J: Incentives and impediments to scientists communicating through the media. Sci Commun. 1997; 18(3): 265–82.

Hilgartner S: The Dominant View of Popularization: Conceptual Problems, Political Uses. Soc Stud Sci. 1990; 20(3): 519–39.
Kirchhoff CJ, Lemos MC, Dessai S: Actionable knowledge for environmental decision making: broadening the usability of climate science. Annu Rev Environ Resour. 2013; 38(1): 393–414.

Martinson BC, Crain AL, Anderson MS, et al.: Institutions' expectations for researchers' self-funding, federal grant holding, and private industry involvement: manifold drivers of self-interest and researcher behavior. Acad Med. 2009; 84(11): 1491–99.

McDowell GS, Gunsalus KT, MacKellar DC, et al.: Shaping the Future of Research: a perspective from junior scientists [version 2; referees: 2 approved]. F1000Res. 2014; 3: 291.

Pielke R Jr: Basic research as a political symbol. Minerva. 2012; 50(3): 339–61.

Poliakoff E, Webb TL: What factors predict scientists' intentions to participate in public engagement of science activities? Sci Commun. 2007; 29(2): 242–63.

Rödder S, Franzen M, Weingart P: The Sciences’ Media Connection – Public Communication and its Repercussions. (Springer Netherlands), 2012; 28.

Sarewitz D, Pielke RA Jr: The neglected heart of science policy: reconciling supply of and demand for science. Environ Sci Policy. 2007; 10(1): 5–16.

Scita G, Sorrentino C, Boggio A, et al.: Dataset 1 in: Increasing the public health potential of basic research and the scientist satisfaction. An international survey of bioscientists. F1000Research. 2016a.

Scita G, Sorrentino C, Boggio A, et al.: Increasing the public health potential of basic research and the scientist satisfaction. An international survey of bioscientists [version 1; referees: 1 approved, 1 approved with reservations]. F1000Res. 2016b; 5: 56.
Siler K, Lee K, Bero L: Measuring the effectiveness of scientific gatekeeping. Proc Natl Acad Sci U S A. 2015; 112(2): 360–65.

Suldovsky B: In science communication, why does the idea of the public deficit always return? Exploring key influences. Public Underst Sci. 2016; 25(4): 415–26.

Vale RD: Accelerating scientific publication in biology. Proc Natl Acad Sci U S A. 2015; 112(44): 13439–46.
Open Peer Review

Reviewer Report, 09 Mar 2017: Vickie Curtis

Status: Not Approved

This paper utilizes the results of a previous survey that explores opinions relating to funding and policy among a group of biomedical scientists in the US, the UK and Italy. The authors have used a section in the questionnaire that asks for further comments to isolate views relating to ‘science communication’. Conclusions are drawn about the attitudes of those surveyed toward science communication and public engagement more generally.

Unfortunately, there are some serious methodological problems with this work. The original purpose of the survey (which I have read, including all the feedback) was to explore views relating to funding and public policy issues directly affecting basic research in the biomedical sciences. There were no questions that asked specifically about science communication or public engagement. The feedback analysed by the authors was obtained from a final section that asked for comments as follows: Do you have ideas about other policies that can increase both the societal benefit potential of basic research and the scientist satisfaction, without affecting the fundamental nature of basic investigation?
You can use this space to tell us about them or for any additional considerations related to the themes of this survey.

Again, this is not directly related to public engagement, although a small proportion of the statements made by 145 of the respondents do contain some general statements relating to science communication, media coverage of science, communication with clinicians and patient groups, and formal science education in schools. It is possible for new insights to be gained from work exploring a different research question. However, after reading the entirety of the feedback, it is evident that much of the individual feedback has been taken out of context, and that the substance and meaning of the data has been greatly overstated. The statements don’t feel like they have been made to communicate views about PE, but rather within the context of the wider funding and policy issues that the survey was designed to address. I can’t help feeling that the authors are squeezing this data for something substantive when there isn’t really much there – certainly not enough to draw some of the conclusions that they have made.

Sweeping statements are made about the views of bioscientists toward public engagement, and statements such as ‘many believe…’ simply cannot be backed up by the sparse data. The authors need to be more specific and open about their dataset. Actual numbers need to be given (e.g. 10 respondents expressed this view, etc.). It’s not enough to generalise or to extrapolate this feedback to bioscientists in general – this is a very diverse body of scientists.

The authors need to give more detail about their methods of analysis. What specific approaches were taken? Did they undertake a thematic analysis? Use grounded theory? Some supporting references are needed here to signpost their approach to qualitative analysis for the reader.
It is good that they attached their coding scheme, but some explanation of each theme would have been helpful. Also, I would like to know who coded the data. Was it coded by more than one individual? How was the reliability of the coding ascertained? Did they account for inter-coder reliability? If only one person coded the data, was this checked by someone else? Again, values for inter-coder reliability, or even an acknowledgement that this needs to be addressed in qualitative analysis, would have been welcome.

The use of the theory of planned behaviour as a framework for analysis is questionable. At no point are scientists questioned about their intent to take part in public engagement activities. Looking at the raw data, there is nothing that stands out as an expression of intent. It is unclear why this framework has been applied to the data.

In addition to methodological problems, there are some other concerns about the contextualisation of the data within current work on public engagement. The authors appear to have focused on ‘science communication’ yet have used the terms ‘outreach’ and ‘public engagement’ interchangeably. The first paragraph of the paper gives a definition of science communication from 2003 that appears to encapsulate a rather one-way movement of information from informed scientists to an uninformed public. This is quite different to more recent definitions of public engagement, which reflect a move from a deficit model of communication to one that is based on dialogue and a greater understanding of the social and cultural context of communication.

By presenting this definition of science communication in the opening paragraph, I am unsure whether the authors are aware of the cultural shifts taking place in the public engagement landscape. The use of the term ‘outreach’ is also problematic.
Outreach can (but does not always) refer specifically to public engagement work in a school setting, often with the goal of increasing diversity in those who choose to study science at a tertiary level. While it is true that terminology in the public engagement literature can be confusing, there has been some work on this, and I would encourage the authors to think about what public engagement means to them and also how the scientists they survey define it. Some of the references cited are a little out of date, and much work has been done on public engagement and the greater emphasis on dialogic approaches. This work needs to be consulted to address this underlying issue, which detracts greatly from the work.\n\nThere are other issues which need attention.\n\nOn several occasions, the authors make reference to the ‘social value’ of research without defining what that means, or how it can be measured. I am not sure how this relates to public engagement or the views expressed in the survey.\n\nUsing the terms ‘scientists’ and ‘non-scientists’ is problematic. For example, the authors refer to clinicians as being non-scientists. It is likely that many clinicians would disagree with that assessment. It may be more helpful to talk about ‘specialists’ and ‘non-specialists’. This allows for the fact that some groups who aren’t professional scientists can be specialists in a given area – for example, patient groups. Also, scientists have different specialisations, so may not understand another discipline any better than someone who isn’t a professional scientist.\n\nThe geographical categories don’t make sense. Why not just divide them into country groups? I am assuming N/A in Table 1 means these scientists come from somewhere else – where? 
Is geographical origin relevant? There is no evidence that this has been considered in the analysis.\n\nDespite the above criticisms, the authors allude to an interesting question about public engagement and basic research which could be addressed in more detail in a new study. Are views about public engagement related to whether a scientist carries out basic science or applied science? Given their previous contact with a substantial pool of scientists, the authors could re-design a more rigorous study. They could question scientists about the public engagement work they have actually carried out (rather than their intention) and explore whether activities and views were related to the kind of research that was carried out – bearing in mind that there are many other factors that influence public engagement activity. As a public engagement professional working with scientists in basic research, I would find such a study of great interest.\n\nAs it stands, this study has too many methodological and theoretical flaws to be indexed. The body of data is insubstantial and the analysis is problematic. Too many sweeping generalisations are made which simply are not supported by the data. Within the work, however, is the kernel of an interesting research question about public engagement within applied vs. basic research. I would urge the authors to consider this question and design a new study that can address it, paying attention to adequate data collection and a rigorous analysis. The following references may be helpful.", "responses": [ { "c_id": "2824", "date": "22 Jun 2017", "name": "Andrea Boggio", "role": "Author Response", "response": "We want to thank Dr. Curtis for her insightful review. Her points are well taken, and the appended reference list is also a useful and thoughtful resource that would help us strengthen the research. We certainly appreciate her encouragement to pursue some of the questions we raise in the paper. 
This would entail designing a new study. Unfortunately, at the moment, we lack the resources to design and conduct such a study. For this reason, we have decided not to further develop the ideas proposed in the review." } ] } ]
https://f1000research.com/articles/6-209
https://f1000research.com/articles/6-208/v1
02 Mar 17
{ "type": "Research Article", "title": "The impact factor of an open access journal does not contribute to an article’s citations", "authors": [ "SK Chua", "Ahmad M Qureshi", "Vijay Krishnan", "Dinker R Pai", "Laila B Kamal", "Sharmilla Gunasegaran", "MZ Afzal", "Lahiru Ambawatta", "JY Gan", "PY Kew", "Than Winn", "Suneet Sood" ], "abstract": "Background Citations of papers are positively influenced by the journal’s impact factor (IF). For non-open access (non-OA) journals, this influence may be due to the fact that high-IF journals are more often purchased by libraries, and are therefore more often available to researchers, than low-IF journals. This positive influence has not, however, been shown specifically for papers published in open access (OA) journals, which are universally accessible, and do not need library purchase. It is therefore important to ascertain if the IF influences citations in OA journals too. Methods 203 randomized controlled trials (102 OA and 101 non-OA) published in January 2011 were included in the study. Five-year citations for papers published in OA journals were compared to those for non-OA journals. Source papers were derived from PubMed. Citations were retrieved from Web of Science, Scopus, and Google Scholar databases. The Thomson Reuters IF was used. Results OA journals were found to have significantly more citations overall compared to non-OA journals (median 15.5 vs 12, p=0.039). The IF did not correlate with citations for OA journals (Spearman’s rho=0.187, p=0.060). The increase in the citations with increasing IF was minimal for OA journals (beta coefficient = 3.346, 95% CI -0.464, 7.156, p=0.084). In contrast, the IF did show moderate correlation with citations for articles published in non-OA journals (Spearman’s rho=0.514, p<0.001). 
The increase in the number of citations was also significant (beta coefficient = 4.347, 95% CI 2.42, 6.274, p<0.001). Conclusion It is better to publish in an OA journal for more citations. It may not be worth paying high publishing fees for higher IF journals, because there is minimal gain in terms of an increased number of citations. On the other hand, if one wishes to publish in a non-OA journal, it is better to choose one with a high IF.", "keywords": [ "bibliometrics", "bibliometric analysis", "information science", "publications", "literature based discovery", "open access", "Web of Science", "Google Scholar" ], "content": "Introduction\n\nA journal’s impact factor (IF) has long been used as a measure of the quality of a journal1. Today, the IF is used as a tool to assess researchers for employment, career promotion, and funding2–4.\n\nIn the past, most libraries could possess only a limited number of journals, and librarians used the IF to decide which journals to buy3,5–7. Consequently, high IF journals were more likely to be purchased, read, and cited. With low IF journals, availability was a constraint. Scientists, wanting a greater audience for their research, preferred to publish in high IF journals. There was plenty of evidence that publishing in a higher IF journal resulted in more citations8–13.\n\nIn contrast, at present, open access (OA) journals are universally available. Libraries have no need to subscribe, and researchers can access OA articles freely. Expectedly, OA publication is associated with increased citations14–19, so researchers are likely to prefer this path. What is not known is whether, within OA journals, increasing IF is associated with increasing citations, as it is for non-OA journals. Yet this information is important, since the cost of publishing in an open access journal is high and increases with the journal’s IF. 
Should a researcher, or a sponsor, pay good money for publication in a higher IF OA journal if the IF will not influence citations?\n\nWe conducted a study to determine whether an OA journal’s IF influences citations.\n\n\nMethods\n\nWe first conducted a pilot study to estimate the required sample size. For this purpose, 57 randomized controlled trials (RCTs) were extracted from PubMed, and scanned for citations as listed in the Web of Science. PubMed was chosen to look for source articles because most researchers start their search on PubMed20. Within this pilot group, for OA articles the mean citations were 12.0±8.81; for non-OA articles the mean citations were 7.14±6.89. The estimated sample size, at α=0.05 and β=0.2, was 58 articles per group, which we rounded up to an intended 100 articles per group.\n\nIn order to have a 5-year follow-up for citations, we chose 2011 as the publication year of articles included in this study, and restricted our source articles to those published in January 2011. We found 3,742 RCTs, and saved them into a Microsoft Excel file. The IFs of their journals were derived from the Thomson Reuters Web of Science database.\n\nFrom these 3,742 articles, we extracted titles until at least 100 articles met the criteria for OA, and 100 for non-OA. Articles were picked at random, using MS Excel’s RANDBETWEEN function.\n\nArticles were considered OA if the journal title was present in PubMed’s OA subset list as open access, and open access was allowed immediately upon publication.\n\nArticles were considered non-OA if the following three conditions were all fulfilled:\n\n1. The publishing journal was not listed in PubMed’s OA subset list;\n\n2. The article was never made freely available by the journal;\n\n3. The article was not self-archived (as determined by a careful web search for the article).\n\nIn other words, the non-OA article could, in theory, only be read by someone with a subscription. 
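The statistical pipeline the Methods go on to describe (a normality check, a non-parametric group comparison, rank correlation, and linear regression before and after log10 transformation) can be sketched in a few lines. The following is an illustrative reconstruction using scipy on synthetic, hypothetical data; it is not the authors' actual SPSS analysis or dataset, and the distributions and parameters are assumptions chosen only to make the sketch runnable:

```python
# Illustrative sketch (NOT the authors' SPSS workflow or data): the same
# statistical steps applied to synthetic citation counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical 5-year citation counts and journal impact factors
oa_citations = rng.poisson(lam=15, size=102).astype(float)
non_oa_citations = rng.poisson(lam=12, size=101).astype(float)
impact_factor = rng.gamma(shape=3.0, scale=0.75, size=101)

# 1. Normality check (Kolmogorov-Smirnov against a fitted normal)
ks_stat, ks_p = stats.kstest(
    non_oa_citations,
    "norm",
    args=(non_oa_citations.mean(), non_oa_citations.std()),
)

# 2. Non-parametric comparison of the two groups (Mann-Whitney U test)
u_stat, u_p = stats.mannwhitneyu(
    oa_citations, non_oa_citations, alternative="two-sided"
)

# 3. Correlation between IF and citations (Spearman's rho, robust to skew)
rho, rho_p = stats.spearmanr(impact_factor, non_oa_citations)

# 4. Linear regression of citations on IF, before and after
#    log10 transformation (adding 1 to avoid log10(0))
fit = stats.linregress(impact_factor, non_oa_citations)
fit_log = stats.linregress(impact_factor, np.log10(non_oa_citations + 1))

print(f"Mann-Whitney p={u_p:.3f}")
print(f"Spearman rho={rho:.3f} (p={rho_p:.3f})")
print(f"beta={fit.slope:.3f}, beta_log={fit_log.slope:.3f}")
```

The non-parametric choices mirror the paper's reasoning: because citation counts are heavily skewed, rank-based tests (Mann-Whitney, Spearman) are used instead of t-tests and Pearson correlation, and the regression is repeated on log-transformed counts as a robustness check.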
Within non-OA journals, we excluded articles if their journals allowed free access to all articles any time after publication. We further excluded articles published in hybrid non-OA journals if over 20% of their articles were freely available (for this, we counted 100 successive 2011 articles in that journal, and ensured that fewer than 20 were marked as freely accessible). In other words, we attempted to ensure that the non-OA journal was truly non-OA, and that its IF would properly represent the IF of a non-OA journal (Figure 1). Finally, we also excluded articles if their journal did not have a measurable Web of Science IF for 2011.\n\n1 We entered the PubMed search command: “Randomized Controlled Trial [ptyp] AND (“2011/01/01“[PDAT] : “2011/01/31“[PDAT])”. ptyp = publication type, pdat = publication date.\n\nThe articles were scanned for citations as listed in Web of Science, Scopus, and Google Scholar databases. The search period was extended up to 2016, allowing for five years of publication time, with the assumption that citations over five years provide a better estimate of the impact of a paper than citations over two years5. Only journal citations were included in the counts; citations in books, theses, and government documents were excluded to conform with the Web of Science policy21. We exported citation data from the three databases into .csv files, and imported these into a Microsoft Excel sheet. Duplicates were excluded. Citations that appeared in two language versions of the same paper were counted as one.\n\nIBM® SPSS® Statistics (version 22.0) software was used to conduct the statistical data analyses on the dataset (Dataset 1, doi: 10.7910/DVN/XR6MR922). OA journals were compared to non-OA journals for overall IF and citations over 5 years. Normality for each independent variable and dependent variable was assessed using the Kolmogorov-Smirnov test, which showed that citations were not normally distributed (p<0.05). 
Consequently, non-parametric univariate analysis was carried out using the Mann-Whitney test. Linear regression was performed before and after logarithmic transformation of the data.\n\n\nResults\n\n203 articles (101 non-OA and 102 OA) fulfilled the criteria for inclusion. The IFs of their journals for 2011 (IF-2011) ranged from 0.121 to 10.111 (median 2.083, mean 2.285±1.323). The median number of citations was 15 (range 0–92).\n\nThere were significantly more citations in OA publications than in non-OA publications. The IFs were almost identical (Table 1).\n\nIQR: Interquartile range; SD: standard deviation.\n\naMann-Whitney test\n\nbIndependent samples t-test\n\nWe assessed the correlation between IF-2011 and citations. Since the data were skewed, we used Spearman’s rho (rs). The rs value for all papers was 0.387 (p<0.001).\n\nThe correlation was assessed separately for OA and non-OA publications. For OA publications the correlation was very small (rs=0.187, p=0.060). In contrast, it was significant (rs=0.514, p<0.001) for non-OA publications.\n\nWe calculated the linear regression coefficient between IF and citations. The crude β regression coefficient was 0.297 (Table 2). We then calculated the regression values separately for OA and non-OA publications. There was very little correlation between IF and citations for OA publications. The five-year citations increased by 3.3 for every unit increase in IF. There was, however, a significant correlation between citations and IF in non-OA publications, which showed a rise in five-year citations by 4.3 for every unit increase in IF (Table 2).\n\nFinal model equation for all citations: 4.093 × (IF-2011) + 10.904. Final model equation for OA citations: 3.346 × (IF-2011) + 14.648. Final model equation for non-OA citations: 4.347 × (IF-2011) + 8.291. β: crude regression coefficient. SE: Standard error.\n\nIn view of the skew, we repeated the regression analysis after log10 transformation of the citation data. 
The data became normally distributed after transformation. The outcome was roughly similar to the pre-transformation results (Table 3).\n\nFinal model equation for all log10 citations: 0.097 × (IF-2011) + 0.926. Final model equation for log10 OA citations: 0.066 × (IF-2011) + 1.055. Final model equation for log10 non-OA citations: 0.109 × (IF-2011) + 0.839. β: crude regression coefficient. SE: Standard error.\n\n\nDiscussion\n\nThe IF served an important function in the pre-internet era. Libraries needed to decide which journals to buy. With limited budgets, especially in poorer countries, they purchased only a few of the highest IF journals7,23–25. In a self-propagating mechanism, the higher IF journals continued to be better read and cited, and were purchased more often. To quote Peter Suber24, “Prestige even feeds prestige. Journal prestige attracts readers, and helps justify library decisions to spend part of their limited budget on a subscription. The growth in readers and subscribers directly boosts prestige.”\n\nWith time, the IF became widely used as a measure of the quality of a journal, author, and paper21,24. Universities rewarded faculty who published in high-IF journals. Promotion and tenure committees, as well as funding agencies, preferred authors who had published papers in high-IF journals24. Researchers thus were driven to publish their best papers in high-IF journals. Instead of the content identifying the journal, the journal began to identify the content.\n\nToday, the game has changed and the efficiency of the internet has led to the proliferation of OA journals. Libraries do not need to make any choices at all; the reader just needs to decide which paper is relevant and read it. This has diminished at least one purpose served by the IF: to help institutions decide which journals to buy. It also raises two questions. The first is: Are publications in OA journals more likely to be cited than those in non-OA journals? 
The second is: Will a higher IF lead to more citations?\n\nOA journals are always available to all—this is their advantage over non-OA journals. Consequently, one would expect that an article published in an OA journal would be more easily accessible, more widely read, and therefore more often cited. Research has shown that this is indeed true14,18,26.\n\nOur data have also shown that articles published in OA journals are associated with more citations than those published in non-OA journals—by a factor of 1.3. Although statistically significant, this increase in citations was slightly lower than that shown by others. Antelman14 found that open access publications in various specialties (philosophy, political science, engineering, mathematics) were associated with increased citation rates by a factor of 1.45–1.9, and that freely accessible articles had 1.5 times higher citation rates than non-OA articles. Kousha and Abdoli18 showed that citation rates of OA publications were higher by a factor of 1.9, giving them a clear advantage. However, these other authors compared OA articles and non-OA articles, rather than OA journals and non-OA journals. Our data are different in that they compare the number of citations of publications in OA journals with citations of publications in non-OA journals.\n\nThis leads us on to the next question: Is the expectation of more citations with a higher IF being fulfilled?\n\nAt the start of the study we had expected to see a significant correlation between IF and the number of future citations, believing that increasing IF indicated improved quality of journal and article. For OA journals the correlation, however, was poor and insignificant (rs=0.187, p=0.060). We believe that it is safe to say an OA journal’s IF contributes little to an article’s future citations.\n\nIn contrast, the relationship between citations and IF was strong for non-OA publications. 
Our correlation coefficient for non-OA publications (0.514) closely matched the values reported by Judge et al. (0.44)12, Piwowar and Vision (0.45)27, Vanclay (0.56)11, and Leimu and Koricheva (0.62)28. Thus, despite using different databases, particularly Google Scholar, the citation rate in our study showed a moderate (yet statistically significant) correlation with the IF. This validates our methods, and strengthens the findings about OA publications.\n\nLinear regression analysis indicated a very real relationship between citations and IF for non-OA publications. The expected citations rise at an approximate rate of one citation per year per rise in impact factor—a change that is consistent with the very definition of the impact factor. This result was quite similar to the findings reported by Vanclay11 and by Perneger29. In contrast, publishing in an OA journal with a higher IF did not result in significantly increased citations. For every 1 unit rise in IF, the data showed a rise of just over 3 citations in five years; using the log10 transformed data the rise was even lower at low IFs. We could not compare our results to those of other authors, as we were unable to find a publication that correlated IF with citations exclusively for OA journals.\n\nWe are unable to comment on whether any other variable is a better predictor of an article’s citations than the IF, since we did not analyze other factors. Nevertheless, it is reasonable to presume that the article’s quality and relevance will influence the citations much more than IF will. Even for non-OA publications, the citations of an article are likely to be strongly influenced by other factors including the quality of the article, and not by the IF alone. This, of course, is well established4,11,30.\n\nSince OA publications are cited more often, it seems logical that a researcher should publish in an OA journal. Should an author search for a high-IF OA journal? 
An author may reasonably expect about 14 citations in five years, regardless of the IF, and these would rise to about 20 if the OA journal’s IF was 2 (from 11 to 15 if we use the log10 transformed data). With a rise in IF from 0 to 4, the total citations would not even double. And unlike non-OA journals, OA journals charge the author, and, in general, the higher the journal’s IF, the higher the cost. BioMed Central journals with IFs higher than 2 typically charge article-processing fees of about 2000 euros. Even if the journal’s IF contributes to a higher readership and citation rate — which is questionable, considering the low r2 value — it is doubtful whether the few extra citations are worth the cost.\n\nIn contrast to OA journals, the number of citations for an article published in a non-OA journal with IF of 4 will be thrice as many as those published in a non-OA journal with an IF of 0. So it makes sense to select as high an IF as possible when publishing an article in a non-OA journal, particularly since non-OA journals charge their readers, and not their contributors.\n\nWe have tried to minimize confounders by selecting RCTs published across one specific month, so that all studies have had the same period of citation. Our other strength was to analyze citations in more than one database: Web of Science, Scopus, and Google Scholar. The inclusion of Google Scholar allowed us to include results from a much larger database31, and thus to provide a better representation of citations than would have been possible if we had depended solely on Web of Science or Scopus. We also ensured that non-OA articles were truly non-OA by excluding those that were self-archived and those that were made freely available by the journals. The journals themselves could also be considered truly non-OA, and consequently their IFs could be considered representative of non-OA journals, because we excluded journals that allowed significant numbers of articles to be freely available. 
We took care to adhere closely to the Web of Science definition of “Impact Factor”21, by manually examining every Google Scholar citation and excluding citations in books, theses, and government documents. We also included citations over the following 5 years, which we believe provides a better estimate of a paper’s impact5.\n\nThe main weakness of our study lies in our inability to evaluate the quality of the papers. In ideal circumstances we would have ensured that all papers were of equivalent quality. However, this was not feasible. The other potential issue is that inclusion of citations from Google Scholar might allow entry of poor quality publications and predatory publications31. Despite this possibility, we believe that Google Scholar represents an important database, and must not be excluded.\n\n\nConclusions\n\nOA journals attract more citations than non-OA journals. If all other considerations are equal, a researcher should prefer an OA journal to a non-OA journal for publication. If a researcher publishes in an OA journal, the IF does not matter. It is reasonable to select a journal that will publish quickly and cheaply. If a non-OA journal is selected, the researcher should aim to publish in a journal with a high IF.\n\n\nData availability\n\nDataset 1: Impact factor data. doi: 10.7910/DVN/XR6MR922", "appendix": "Author contributions\n\n\n\nSS conceived the study. SS, AMQ, DRP, and VK designed the study details, and supervised the data collection. TW was the statistician, and was involved in the study design. The study was conducted as a BMedSc thesis by SKC, who was primarily responsible for the data collection and writing of the first draft; SS and AMQ were her supervisors. LBK, SG, MZA, LA contributed significantly to collecting the data. JYG, PYK participated in writing the paper and rechecking the draft for errors. 
The paper was largely written by SKC, SS, AMQ, DRP, and VK.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nWe wish to thank Prof Rusli bin Nordin, epidemiologist, for guidance during the study.\n\n\nReferences\n\nWáng YX, Arora R, Choi Y, et al.: Implications of Web of Science journal impact factor for scientific output evaluation in 16 institutions and investigators' opinion. Quant Imaging Med Surg. 2014; 4(6): 453–61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith D, Bissell G, Bruce-Low S, et al.: The effect of lumbar extension training with and without pelvic stabilization on lumbar strength and low back pain. J Back Musculoskelet Rehabil. 2011; 24(4): 241–9. PubMed Abstract | Publisher Full Text\n\nBrink PA: Article visibility: journal impact factor and availability of full text in PubMed Central and open access. Cardiovasc J Afr. 2013; 24(8): 295–6. PubMed Abstract | Free Full Text\n\nSmith DR: Historical development of the journal impact factor and its relevance for occupational health. Ind Health. 2007; 45(6): 730–42. PubMed Abstract | Publisher Full Text\n\nKamat PV, Schatz GC: Journal Impact Factor and the Real Impact of Your Paper. J Phys Chem Lett. 2015; 6(15): 3074–5. PubMed Abstract | Publisher Full Text\n\nGarfield E: The Thomson Reuters Impact Factor. 1994; [cited 2016 14/06]. Reference Source\n\nDeSart M, Bailey D, Powers A, et al.: Metrics and More: How Librarians Decide to Purchase or Cancel Your Journal. Science Editor. 2007; 30(1): 15. Reference Source\n\nFilion KB, Pless IB: Factors related to the frequency of citation of epidemiologic publications. Epidemiol Perspect Innov. 2008; 5(1): 3. PubMed Abstract | Free Full Text\n\nNieminen P, Carpenter J, Rucker G, et al.: The relationship between quality of research and citation frequency. BMC Med Res Methodol. 2006; 6(1): 42. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nEtter JF, Stapleton J: Citations to trials of nicotine replacement therapy were biased toward positive results and high-impact-factor journals. J Clin Epidemiol. 2009; 62(8): 831–7. PubMed Abstract | Publisher Full Text\n\nVanclay JK: Factors affecting citation rates in environmental science. J Informetr. 2013; 7(2): 265–71. Publisher Full Text\n\nJudge TA, Cable DM, Colbert AE, et al.: What Causes a Management Article to Be Cited-Article, Author, or Journal? Acad Manage J. 2007; 50(3): 491–506. Publisher Full Text\n\nFalagas ME, Zarkali A, Karageorgopoulos DE, et al.: The impact of article length on the number of future citations: a bibliometric analysis of general medicine journals. PLoS One. 2013; 8(2): e49476. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAntelman K: Do Open-Access Articles Have a Greater Research Impact? Coll Res Libr. 2004; 65(5): 372–82. Publisher Full Text\n\nLawrence S: Free online availability substantially increases a paper's impact. Nature. 2001; 411(6837): 521. PubMed Abstract | Publisher Full Text\n\nHajjem C, Harnad S, Gingras Y: Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. 2006. Reference Source\n\nNorris M, Oppenheim C, Rowland F: The citation advantage of open‐access articles. J Am Soc Inf Sci Technol. 2008; 59(12): 1963–72. Publisher Full Text\n\nKousha K, Abdoli M: The Citation Impact of Open Access Agricultural Research: A Comparison between OA and Non-OA Publications. Online Information Review. 2010; 34(5): 772–85. Publisher Full Text\n\nEysenbach G: Citation advantage of open access articles. PLoS Biol. 2006; 4(5): e157. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Leo G, LeRouge C, Ceriani C, et al.: Websites most frequently used by physician for gathering medical information. AMIA Annu Symp Proc. 2006; 2006: 902. 
PubMed Abstract | Free Full Text\n\nGarfield E: The history and meaning of the journal impact factor. JAMA. 2006; 295(1): 90–3. PubMed Abstract | Publisher Full Text\n\nSood S: Impact factor data 2017-02-11. Harvard Dataverse, V1, UNF:6:z/aZ/8XtsnATQ+GAPxm7WA==, 2017. Data Source\n\nCollins T: The current budget environment and its impact on libraries, publishers and vendors. J Libr Adm. 2012; 52(1): 18–35. Publisher Full Text\n\nSuber P: Thoughts on prestige, quality, and open access. LOGOS: The Journal of the World Book Community. 2010; 21(1): 115–28. Publisher Full Text\n\nKale R: Health information for the developing world. BMJ. 1994; 309(6959): 939–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrody T, Harnad S: Comparing the impact of Open Access (OA) vs. non-OA articles in the same journals. D-Lib Magazine. 2004; 10(6). Publisher Full Text\n\nPiwowar HA, Vision TJ: Data reuse and the open data citation advantage. PeerJ. 2013; 1: e175. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeimu R, Koricheva J: What determines the citation frequency of ecological papers? Trends Ecol Evol. 2005; 20(1): 28–32. PubMed Abstract | Publisher Full Text\n\nPerneger TV: Citation analysis of identical consensus statements revealed journal-related bias. J Clin Epidemiol. 2010; 63(6): 660–4. PubMed Abstract | Publisher Full Text\n\nMontori VM, Wilczynski NL, Morgan D, et al.: Systematic reviews: a cross-sectional study of location and citation counts. BMC Med. 2003; 1(1): 2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Winter JC, Zadpoor AA, Dodou D: The expansion of Google Scholar versus Web of Science: a longitudinal study. Scientometrics. 2014; 98(2): 1547–65. Publisher Full Text" }
[ { "id": "20913", "date": "15 Mar 2017", "name": "Eleftherios P Diamandis", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is a very interesting article which compares the effect of impact factor on collected citations of papers published in open access and non-open access journals. The major findings of this paper are that open access journals attract more citations than non-open access journals, and that the impact factor of an open access journal does not significantly affect the number of citations received. On the other hand, it was found that the number of citations increases with impact factor in non-open access journals. These data have implications for authors who wish to publish their research in either open access or non-open access journals, after considering their associated costs.\n\nI have no suggestions for further changes to this manuscript, except that recently the findings of the authors have been discussed and/or corroborated in additional publications, which should be cited.\n\nIn a paper published in Clin Chem Lab Med 20091, it has been suggested that the impact factor will go away soon, effectively in full agreement with the finding that, at least for open access journals, the impact factor does not affect citations. 
In a subsequent paper in Clin Chem Lab Med 20132, it has been speculated that journals will act as repositories of information and the impact factor will be irrelevant, further corroborating the findings of this paper that the impact factor of open access journals does not affect citations. Also, in a recent paper in BMC 20173, it is mentioned that the journal impact factor is under attack, and another factor is proposed to assess journal quality. I believe that these 3 papers are very relevant to this contribution and should be briefly mentioned in the discussion and cited appropriately. Otherwise this is a nice contribution to the discussion related to the journal impact factor and related themes.", "responses": [] }, { "id": "23330", "date": "07 Jun 2017", "name": "Samiran Nundy", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is an interesting approach to the vexed question of whether or not it is worthwhile publishing in the now ubiquitous, and commonly regarded as inferior, open access journals.\nThe authors have convincingly demonstrated that to get more citations it is. Whether or not this will translate into more prestige points towards selection and promotion for e.g. faculty positions has to be evaluated later. However, these are early days to judge this rather novel open access experiment.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] } ]
https://f1000research.com/articles/3-157/v1
10 Jul 14
{ "type": "Review", "title": "Aspiration in injections: should we continue or abandon the practice?", "authors": [ "Yasir Sepah", "Lubna Samad", "Arshad Altaf", "Nithya Rajagopalan", "Aamir Javed Khan" ], "abstract": "Aspiration during any kind of injection is meant to ensure that the needle tip is at the desired location during this blind procedure. While aspiration appears to be a simple procedure, it has generated a lot of controversy concerning the perceived benefits and indications. Advocates and opponents of aspiration both make logically sound claims. However, due to scarcity of available data, there is no evidence that this procedure is truly beneficial or unwarranted. Keeping in view the huge number of injections given worldwide, it is important that we draw attention to key questions regarding aspiration that, up till now, remain unanswered. In this review, we have attempted to gather and present literature on aspiration both from published and non-published sources in order to provide not only an exhaustive review of the subject, but also a starting point for further studies on more specific areas requiring clarification. A literature review was conducted using the US National Institutes of Health’s PubMed service (including Medline), Google Scholar and Scopus. Guidelines provided by the World Health Organization, Safe Injection Global Network, International Council of Nursing, Centers for Disease Control and Prevention, US Food and Drug Administration, UK National Health Service, British Medical Association, Nursing and Midwifery Council, Public Health Agency of Canada and Pakistan Medical Association, together with International Organization for Standardization (ISO) recommendation 7886 parts 1–4 for sterile hypodermics, were reviewed for relevant information. 
In addition, data from the WHO Program for International Drug Monitoring network regarding adverse events resulting from not aspirating prior to injection delivery were reviewed. Curricula of selected major medical/nursing schools in India, Nigeria and Pakistan, national therapeutic formularies (including the US Pharmacopeia), product inserts of the most commonly used drugs and other possible sources of information regarding aspiration and injections were consulted as well.", "content": "Introduction\n\nAn injection is defined by the World Health Organization (WHO) as parenteral administration of medication through a skin puncture via a syringe, while aspiration is defined as the pulling back of the plunger of a syringe (for 5–10 seconds) prior to injecting medicine1–4. Aspiration is most commonly performed during an intramuscular [IM] or subcutaneous [SC] injection, and is meant to ensure that the needle tip is located at the desired site, and has not accidentally punctured a blood vessel.\n\nDespite the growing wealth of medical knowledge in recent decades, the simple procedure of aspiration is still generating much controversy concerning its perceived benefits and indications5. Advocates of aspiration contend that it is a technically easy maneuver that is rapidly performed and well tolerated by patients with no increase in costs incurred. 
However, due to a paucity of available data, there is no evidence that this procedure is essential or truly beneficial. This issue has been widely debated with specific regard to vaccination; there are no studies that have assessed the need for aspiration prior to IM injection of vaccines in relation to vaccine safety. The widespread use of auto-disable (AD) syringes – most of which are not designed to aspirate6 – has not been linked to adverse effects due to the elimination of the aspiration procedure prior to injection of vaccines7. This finding has intensified the debate and raised doubts over the necessity of aspiration in non-vaccine medication administration as well.\n\nConventional syringes are also used to aspirate materials other than blood – synovial fluid, amniotic fluid, cells (via fine needle cytology), pericardial fluid, peritoneal fluid and cerebrospinal fluid (CSF) are examples8–19. This wide spectrum of applications for conventional syringes is all the more interesting in view of the fact that although used for both aspiration and injection, the syringe is actually designed only for injection20. A number of studies have concluded that a conventional syringe is a poorly controlled and non-ergonomic device during aspiration21,22. Possible lack of precision may result in local trauma and pain, prolonged procedure time, failed or incomplete procedures, accidental puncture of blood vessels or nerve bundles, poor sample retrieval and delayed diagnosis23–33. The ingrained use of the conventional syringe for injection and aspiration is to a large extent attributable to its low cost, widespread availability and lack of an effective alternative21.\n\nThe huge volume of injections being given worldwide – an estimated 16 billion injections per year are administered in the developing and transitional countries alone34 – necessitates that this aspect of injection technique be given due attention. 
This review aims to collate English-language literature on aspiration from all published and non-published sources in order to provide an overview of the subject. In particular, this review aims to highlight areas of debate and draw attention to key questions that remain unanswered, thus providing a starting point for controlled studies on specific areas requiring clarification.\n\n\nMethodology\n\nA literature review was conducted using the US National Institutes of Health’s PubMed service (including Medline), archives of SIGNpost, the weekly electronic newsletter of the World Health Organization’s (WHO) Safe Injection Global Network (SIGN), and International Organization for Standardization (ISO) recommendation 7886 parts 1–4 for sterile hypodermics. Clarification on points of debate was sought by direct communication with ISO. Google Scholar was also used to search for relevant information. Relevant search terms for PubMed and Google Scholar literature searches are listed below.\n\nGuidelines from the WHO, International Council of Nursing (ICN), US Centers for Disease Control and Prevention (CDC), US Food and Drug Administration (FDA), UK National Health Service (NHS), British Medical Association, UK Nursing and Midwifery Council (NMC), Australian Nursing and Midwifery Accreditation Council, Public Health Agency of Canada and the Pakistan Medical Association (PMA) were extensively searched for information. Data from the WHO Program for International Drug Monitoring network regarding adverse events resulting from not aspirating prior to injection delivery were reviewed. 
Curricula of selected major medical/nursing schools in India, Nigeria and Pakistan were also reviewed for relevant information to document the inclusion (or otherwise) of aspiration in teaching guidelines for injection technique.\n\nNational therapeutic manuals and formularies such as British National Formulary (BNF), European Pharmacopeia (EP), United States Pharmacopeia (USP) and Pakistan Pharma Guide (PPG) were also consulted for information regarding aspiration before injection. Product inserts for all injectable drugs on the WHO Essential Drug List (EDL) were collected to determine if the manufacturer had provided instructions on aspiration prior to injecting the drug. These product inserts were collected from local pharmacies and the international manufacturers for each drug. Drug inserts from multi-nationals were acquired either directly from their websites or from other online resources including the Drug Index (www.Rxlist.com), Australian Prescription Products Guide (www.appgonline.com.au/default.asp) and from (http://www.rxmed.com/).\n\n\nResults\n\nOur review was conducted between March 2008 and March 2014. Table 1 summarizes the resources searched.\n\nPublished literature on injection technique advises aspiration before injecting a drug through different routes, i.e. IM35, intravascular (IV)36 or SC37. However, it is important to note that emphasis has been placed on negative pressure being applied for 5–10 seconds for aspiration to be of benefit1,3,4. During the administration of an IV injection, the presence of “flashback” (return of blood into the syringe or cannula) is a passive process and active aspiration is usually not necessitated; hence, this particular route of administration has not been emphasized in the review below.\n\nIM injections. Aspiration prior to injection of medication through the IM route remains a part of most guidelines4,35,38–40. 
Nursing curricula and guidelines4,38,39 clearly recommend aspiration as an essential step in IM injection technique. Guidelines originating in the UK recommend aspiration prior to IM injection of medications35, as well as specifically as part of the Z-track technique of administering IM injections. Training curricula for community health workers in Nigeria recommend aspiration prior to IM, SC and intradermal [ID] injections40.\n\nSC injections. It is apparent that there are opposing schools of thought when it comes to aspiration prior to SC injections. There are those that insist that aspiration should continue to be part of SC injection techniques for medication administration, and those who are convinced that aspiration is not necessary and has no real advantage; in fact, several disadvantages may be attributed to this step.\n\nSome nursing curricula do not include aspiration as part of the recommended technique38 for SC injection. One nursing guideline highlights the debate existing over aspiration prior to a SC injection, concluding that while the likelihood of piercing a vessel is slim, local guidelines should be followed in determining individual practices. Others recommend routine aspiration prior to injection of medications through the SC route42.\n\nThe WHO/ICN43 combined guidelines do not mention aspiration. Similarly, the WHO/SIGN document44 “A Guide For Supervising Injections” makes no recommendations related to aspiration. Both documents are primarily concerned with infection control practices in relation to injection administration, overlooking aspiration entirely.\n\nA recent debate in relation to SC injection of immunotherapy has highlighted this controversy. Waibel recommended that aspiration before SC injection of immunotherapy be abandoned since there were no positive aspirates in 36,000 immunotherapy injections given at his practice45. 
While other specialists agreed that aspiration prior to immunotherapy injection in SC tissue is very rarely positive, rare anecdotes were quoted when positive aspiration has been documented46,47, even in the hands of experienced specialists and nurses. Given the potentially fatal adverse reactions of immunotherapy injected into blood vessels, it is logical to recommend that aspiration be performed as part of the standard technique. However, fatal and near fatal adverse reactions have been reported following immunotherapy injection despite precautions, including aspiration, being taken45.\n\nEpinephrine. Epinephrine is given through the SC or IM route to treat allergic reactions. Geller48 has reported the observation of a positive aspiration prior to epinephrine injection for asthma; if aspiration had not been performed in that instance, epinephrine would have been injected into the blood vessel with potentially hazardous consequences. On the other hand, the preloaded auto injector commonly used for administering epinephrine in emergency situations does not allow for aspiration49. In this form, epinephrine is designed to be administered via IV injection, via intracardiac injection or via the endotracheal route into the bronchial tree where aspiration is superfluous.\n\nInsulin. The NMC guidelines50,51 do not mention aspiration in relation to insulin injection. Aspiration prior to insulin injection is rarely positive36 and hence not indicated. This recommendation is supported by drawing a parallel with heparin administration, where increased hematoma formation has been associated with aspiration4.\n\nDental procedures. A study looking at dental anesthetic injections showed positive aspiration rates ranging from 3.2–8% depending on the type of syringe system used52 and the type of nerve block. Accuracy of needle position combined with mechanical ease at the time of dental injections are important considerations when choosing an appropriate device53. 
To this end, different self-aspirating devices have been tested in dental practice54. An understanding of vascular anatomy57 is all the more important in view of the potential toxicity of anesthetic agents and the possibility of embolization to the ophthalmic artery58.\n\nThe US CDC screening form55 for device specifics notes whether a dental syringe is capable of aspiration.\n\nImmunization. Vaccinations form an important subset of all injections given worldwide. Most government programs worldwide follow UNICEF/WHO recommendations in their Expanded Programs on Immunization (EPI). At present, the WHO does not recommend aspiration prior to administering a vaccine7,56. Current guidelines published by the American Academy of Pediatrics (AAP)57 state that aspiration prior to IM vaccinations may not be necessary, while similar Canadian guidelines continue to recommend aspiration58. The US Advisory Committee on Immunization Practices (ACIP)59 does not make any recommendations on aspiration at the time of vaccine administration. Without data indicating the need for aspiration during vaccination, ACIP effectively leaves this decision to the person giving the vaccine. A similar stance is taken by the US Immunization Action Coalition guideline40, where aspiration is not mentioned in its recommendations for SC and IM injections in adults, and which states that there are “no data to document the necessity of aspiration” in children.\n\nA different approach to this issue was taken by Ipp et al.2 in a survey in which the actual practice of end users was evaluated. This survey established that 74% of respondents aspirated prior to IM vaccine administration. However, of these only 3% aspirated for the recommended 5–10 seconds; the remainder applied negative pressure for <5 seconds. 
The same group went on to conduct a randomized controlled trial in which they compared two injection techniques: the standard approach, which included aspiration for 5–10 seconds, and the pragmatic approach, which excluded aspiration entirely50. They concluded that IM vaccinations using the pragmatic approach were less painful and there were no benefits to following the standard approach. Jablecki60 has suggested a technique for choosing a site for administering IM injection that is relatively pain free by understanding the anatomy of cutaneous innervation at the selected site. This may mitigate the effect of increased pain in the standard approach. Similarly, Philippe Duclos, WHO/Vaccines and Biologicals, has recommended against aspiration prior to injection with a view to minimizing pain61. More recently, a 2007 study of 113 infant vaccinations compared rapid IM injection without aspiration with slow IM injection with aspiration, and found the non-aspiration method to be associated with less pain based on behavioral pain ratings41,62. Similarly, in 2009, a systematic review of 19 randomized controlled trials involving 2,814 infants and children found that immunization pain can be decreased by performing a rapid IM injection without aspiration41.\n\nIn actual practice, AD syringes are recommended worldwide for vaccinations. While this is a small proportion of all injections given worldwide, it is an important component given that the target population is healthy children, and the risks have to be minimized as much as possible. In general, AD syringes do not permit health workers to aspirate for blood. This inability to aspirate with AD syringes has generated a heated debate. 
In theory, some devices like the BD Soloshot allow for limited aspiration63, but this does not meet the recommended criteria for the amount of negative pressure and duration of aspiration.\n\nA summary of the rationale behind the current recommendation of not aspirating during the administration of IM or SC vaccines is given below.\n\n1. Recommended sites for immunizations do not have major blood vessels; hence the risk of accidentally injecting the vaccine into a blood vessel is thought to be minimal63.\n\n2. AD syringes have been used in mass campaigns for IM injections without any reported adverse effects7,63 or injury from failure to aspirate7,64,65. All complications reported in the literature of intra-arterial injection involved penicillin and other medications and not vaccines1. “It is safe to assume that immunization as a class of IM injection poses less risk to the patient” than other medications, particularly antibiotics1,7,66,67. Hence, according to Clements7, “the practice of aspiration during vaccinations is not evidence-based”.\n\n3. Aspiration can result in wastage of vaccine64.\n\n4. Aspiration prolongs the time that the needle is inside the patient hence increasing the pain experienced by the recipient50.\n\n5. Less control is exercised during two-handed aspiration using a conventional syringe, which may lead to local injury. During a one handed vaccination without aspiration, the vaccinator can use the other hand to control the child7.\n\n6. At present, at the public health level, the use of AD syringes represents best practice to protect the health of the public despite the fact AD syringes do not allow aspiration for the recommended 5–10 seconds. 
The increased risk presented by eliminating aspiration from routine vaccine administration technique can be mitigated to an extent by a thorough understanding of the anatomy and landmarks of recommended injection sites66.\n\nThe WHO appreciates that there is not enough evidence to support the exclusion of aspiration1,7 at present. As a result, “WHO is neither able to support nor offer alternative actions in relation to aspiration undertaken during the administration of vaccines. Until such time as clear evidence becomes available to indicate which method is preferable, vaccinators should make locally appropriate choices7”. In addition, it is suggested that in individual clinical practice using non-AD syringes, aspiration should continue to be a part of the standard technique for IM injection administration66.\n\nThe realization that the information available to the WHO may not be comprehensive is reflected in disclaimers that are incorporated in WHO documents/publications. The joint statement on AD syringes in immunization says, “The World Health Organization does not warrant that the information contained in this publication is complete and correct and shall not be liable for any damages incurred as a result of its use”. All WHO publications state, “All reasonable precautions have been taken by the World Health Organization to verify the information contained in this publication. However, the published material is being distributed without warranty of any kind, either expressed or implied. The responsibility for the interpretation and use of the material lies with the reader. In no event shall the World Health Organization be liable for damages arising from its use. The named authors alone are responsible for the views expressed in this publication”.\n\nSpecial areas. The conventional syringe, primarily designed for injection, is widely used for aspiration. 
Sibbitt et al.68 have found the conventional syringe to be unsuited for aspiration during Fine-Needle Aspiration Cytology (FNAC). Robinson et al.69 reported a similar experience using conventional syringes for amniocentesis. Aspiration was found to be unreliable in reducing the risk of IV penetration during intraforaminal cervical and lumbosacral epidural steroid injections36,70. Loss of control during joint aspirations can result in serious complications23–25,27–31,33,71–80, as was noted during other invasive procedures like pericardiocentesis, amniocentesis and thoracocentesis77. Precision is important where critical organs are involved. Improved control was seen with the FDA-approved one-handed reciprocating syringe21.\n\nVaccination. According to the Red Book57 published by the American Academy of Pediatrics (AAP), there is no need for aspiration before injection of vaccines or toxoids. Similarly, the US CDC guidelines for administration of vaccines65 have clear instructions not to aspirate before injection (for both IM and SC routes), as no large vessels exist in the recommended injection sites. No recommendations were found in the Pink Book81 from the CDC in this regard.\n\nNone of the documents dealing with immunization (including Immunization in Practice modules 1–11 from the WHO) suggest that aspiration is required before injection of a vaccine82. The WHO Fact Sheet No 231 on Injection Safety, revised in October 200683, focuses primarily on injection safety. Technical details, including aspiration, are not touched upon in this document.\n\nInjection of medication. The UK NHS84 and Public Health Agency of Canada6 recommend aspiration before IM injection of medication.\n\nNeither the ICN nor the Nursing and Midwifery Council50,51 (Europe and British Chapters) has made any kind of recommendation in their guidelines on administration of medication. 
The official website of the WHO’s Uppsala Monitoring Centre (UMC)85 does not list any warnings related to aspiration.\n\nNational nursing curricula in Nigeria86 and Pakistan87 do not mention aspiration before injection as a necessary step for IM and SC injections. Similarly, the syllabus for MSc Nursing in India does not elaborate on injection technique. Curricula for primary health workers in Nigeria40 and nursing students in Pakistan’s foremost nursing school (Aga Khan University School of Nursing) do advocate aspiration38 before injection. Similarly, the IndiaCLEN Model Injection Center Program advises aspiration prior to IM injection88. None of the curricula mentioned above make any comment on the duration of aspiration.\n\nThe United States Pharmacopeia-National Formulary89, British National Formulary90 and Pakistan Pharma Guide91 make no mention of aspiration before injection.\n\nThe ISO 7886 recommendations for sterile hypodermic syringes (acquired via personal communication) were reviewed. Relevant sections from parts 1, 3 and 4 are reproduced below. Part 2 relates to syringes for use with power-driven pumps and is therefore beyond the scope of this review.\n\nPart 1: Sterile hypodermic syringes for single use - specifies requirements for sterile single-use hypodermic syringes made of plastic materials and intended for the aspiration of fluids or for the injection of fluids immediately after filling.\n\nPart 3: Auto-disable syringes for fixed dose immunization - specifies the properties and performance of sterile single-use hypodermic syringes, with or without needle, made of plastic materials and stainless steel and intended for the aspiration of vaccines or for the injection of vaccines immediately after filling. 
Upon delivering a fixed dose of vaccine, the syringe is automatically rendered unusable.\n\nPart 4: Syringes with re-use prevention feature - specifies requirements for sterile single-use hypodermic syringes made of plastic materials with or without needle, and intended for the aspiration of fluids or for the injection of fluids immediately after filling, and of design such that the syringe can be rendered unusable after use.\n\nISO Section 5.3: Intended use/application. The intended use/application shall be categorized as follows:\n\nType A: single aspiration and injection.\n\nType B: multiple plunger aspirations prior to the final intended single use.\n\nThe term aspiration used in these guidelines indicates the drawing up of the vaccine or medication into the syringe prior to injection. Aspiration as defined in the context of this review is not directly referred to in these guidelines. It appears that withdrawal of the plunger (so that blood becomes visible in the syringe if the needle tip is placed in a vessel, with this function possible at any position of the piston within the graduated range) was considered for inclusion in the ISO guidelines at some point92. However, this is not included in ISO 7886 at all.\n\nProduct inserts for injectable drugs on the Essential Drug List were obtained from over 20 pharmacies across the city of Karachi, Pakistan93.\n\nEach insert was checked and the level of evidence available was categorized as follows:\n\n1. The leaflet clearly instructs to aspirate before injection\n\n2. No mention of aspiration on the leaflet, but advocates a particular route of administration because of the dangers of side effects\n\n3. Suggests adhering to a particular route regardless of the outcome of aspiration before injection\n\nOnly 3 drugs out of a total of 108 studied had level-1 evidence (bupivacaine, lidocaine and Pneumococcal 7-valent conjugate vaccine). Level-3 evidence was available for only 1 drug (Dactinomycin). 
The remaining essential drugs had level-2 evidence (Table 2).\n\n\nDiscussion\n\nAspiration prior to injection is just one part of the process of performing vaccinations, therapeutic injections and diagnostic/therapeutic procedures. The debate over its inclusion as an essential part of recommended techniques has driven this review, and is likely to continue in the absence of findings from randomized controlled trials. In most instances, general clinical or vaccination experiences guide global recommendations for aspiration. In others, anecdotal reports of adverse events form the basis for inclusion or exclusion of aspiration in standard injection techniques. The sheer number of injections given globally in the preventive and therapeutic sectors makes this omission even more surprising. This appraisal of current guidelines and literature has made it clear that the need for aspiration prior to administering an injection is dependent upon multiple factors, as elaborated below.\n\nInjections given for the purpose of routine immunizations are different from injections for medications. The minimal risk of side effects combined with defined sites for immunization form one basis of the existing recommendations for eliminating aspiration during immunization. The fact that most AD devices currently in use do not allow for aspiration also appears to have been a major factor in the decision to eliminate aspiration as an essential step prior to IM or SC injection of vaccines. We argue that clinical needs should dictate the development of new devices and not the other way around. Relevant recommendations must be evidence-based and ISO guidelines must be modified to reflect evolving needs. This would drive the device industry to meet the criteria laid down based on scientific rationale.\n\nThe drug that is being injected has a direct bearing on the decision to aspirate or not to aspirate. 
If the drugs to be given have potentially fatal consequences in the event of systemic administration (as in the case of immunotherapy), all possible precautions must be taken. This is all the more important in cases where the drug is being administered electively by specialist staff. On the other hand, if there are no serious known sequelae to a drug being injected systemically – as in the case of vaccines – an argument can be made not to aspirate, especially since a huge number of immunizations are performed globally by vaccinators and health workers. Product inserts for 104 injectables on the WHO Essential Drug List were reviewed. Of these, only 3 inserts specified that aspiration should be performed prior to injection. Two of these inserts were for local anesthetic agents and the third was for Pneumococcal 7-valent conjugate vaccine. Other product inserts mentioned the importance of injecting into the desired site, but did not specify aspiration as a way of ensuring this. Clearer instructions must be stated if indeed potentially serious complications may occur if a drug or vaccine is inadvertently administered at a site other than that recommended.\n\nAs is apparent from ISO 7886 part 4 for curative injection devices, a global shift towards increased use of re-use prevention syringes in the curative sector is imminent. Devices manufactured to meet these criteria incorporate the function of aspiration. Newer devices are coming into the market in order to address the issues of control over the syringe during aspiration and to increase patient safety. One such device recently approved by the US Food and Drug Administration (FDA) is the highly controllable one-handed reciprocating procedure syringe21. 
Specific procedures where aspiration is performed for diagnostic or therapeutic purposes would benefit from newer devices that are custom-designed to aspirate rather than inject.\n\nA systematic approach would be to conduct randomized controlled trials of the device to reach an unbiased conclusion on the benefits and necessity for aspiration using therapeutic re-use prevention syringes and AD syringes for vaccinations; the appropriate duration of aspiration that yields best results also needs to be determined. If such trials deem that aspiration should be part of the recommended therapeutic and vaccination technique, this would act as the driving force for the device industry to develop appropriate tools to meet these requirements.\n\n\nLiterature search terms\n\n“Aspiration”, “injection”, “technique”, “procedure”, “guidelines”, “standards”, “efficacy”, “complications”, “pain”, “trauma”, “administration”, “intramuscular”, “intravascular”, “intradermal”, “subcutaneous”, “syringe”, “auto-disable syringe”, “Z-track”, “immunotherapy”, “epinephrine”, “insulin”, “dental”, “immunization”, “vaccination”, “medication”, “rapid”, “fine-needle”, “pain”, “trauma”.", "appendix": "Author contributions\n\n\n\nL Samad: Primary author, contributed to literature search and overall supervision. YJ Sepah: Literature review, collation and draft of results and report. A Altaf: Contributed to literature review. AJ Khan: Provided overall supervision and final review of manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis literature review was funded by Star Syringe Ltd UK. All decisions regarding the content of the manuscript and final responsibility for submission of the manuscript belong to the authors.\n\n\nReferences\n\nKamlesh RL, Lala MK: Intramuscular Injection: review and guidelines. Indian Pediatr. 2003; 40(9): 835–845. PubMed Abstract\n\nIpp M, Sam J, Patricia PC: Needle aspiration and intramuscular vaccination. 
Arch Pediatr Adolesc Med. 2006; 160(4): 451. PubMed Abstract | Publisher Full Text\n\nMallet J, Christopher B: The Royal Marsden NHS Trust Manual of Clinical Nursing Procedure. 4th ed. London: Blackwell Science. 1996. Reference Source\n\nWorkman B: Safe injection techniques. Nurs Stand. 1999; 13(39): 47–53. PubMed Abstract | Publisher Full Text\n\nCrawford CL, Johnson JA: To aspirate or not: an integrative review of the evidence. Nursing. 2012; 42(3): 20–5. PubMed Abstract | Publisher Full Text\n\nCanada, PHAo, Canadian Immunization Guide. 7th ed. Ottawa. Publishing and Depository Services Public Works and Government Services Canada: Ontario. 2006. Reference Source\n\nJ CC: Aspiration before injection, in SIGN. 2003; 1–2.\n\nAceves-Avila FJ, Delgadillo-Ruano M, Ramos-Remus C, et al.: The first descriptions of therapeutic arthrocentesis: a historical note. Rheumatology (Oxford). 2003; 42(1): 180–3. PubMed Abstract | Publisher Full Text\n\nGuggi V, Calame L, Gerster JC: Contribution of digit joint aspiration to the diagnosis of rheumatic diseases. Joint Bone Spine. 2002; 69(1): 58–61. PubMed Abstract | Publisher Full Text\n\nLane JG, Falahee M, Wojtys EM, et al.: Pyarthrosis of the knee. Treatment considerations. Clin Orthop Rel Res. 1990; 45(252): 198–204. PubMed Abstract\n\nJohnson MW: Acute knee effusions: a systematic approach to diagnosis. Am Fam Physician. 2000; 61(8): 2391–400. PubMed Abstract\n\nManadan AM, Block JA: Daily needle aspiration versus surgical lavage for the treatment of bacterial septic arthritis in adults. Am J Ther. 2004; 11(5): 412–5. PubMed Abstract | Publisher Full Text\n\nDooley P, Martin R: Corticosteroid injections and arthrocentesis. Can Fam Physician. 2002; 48: 285–92. PubMed Abstract | Free Full Text\n\nBrown PW: Arthrocentesis for diagnosis and therapy. Surg Clin North Am. 1969; 49(6): 1269–78. PubMed Abstract\n\nLee AH, Chin AE, Ramanujam T, et al.: Gonococcal septic arthritis of the hip. J Rheumatol. 1991; 18(12): 1932–3. 
PubMed Abstract\n\nKesteris U, Wingstrand H, Forsberg L, et al.: The effect of arthrocentesis in transient synovitis of the hip in the child: a longitudinal sonographic study. J Pediatr Orthop. 1996; 16(1): 24–9. PubMed Abstract | Publisher Full Text\n\nWeidner S, Keller W, Kellner H: Interventional radiology and the musculoskeletal system. Best Pract Res Clin Rheumatol. 2004; 18(6): 945–56. PubMed Abstract | Publisher Full Text\n\nBureau NJ, Ali S, Chhem RK, et al.: Ultrasound of musculoskeletal infections. Semin Musculoskelet Radiol. 1998; 2(3): 299–306. PubMed Abstract | Publisher Full Text\n\nGrassi W, Farina A, Filippucci E, et al.: Sonographically guided procedures in rheumatology. Semin Arthritis Rheum. 2001; 30(5): 347–53. PubMed Abstract | Publisher Full Text\n\nFeldmann H: [2000-year history of the ear syringe and its relationship to the enema. Images from the history of otorhinolaryngology, represented by instruments from the collection of the Ingolstadt Medical History Museum]. Laryngorhinootologie. 1999; 78(8): 462–7. PubMed Abstract | Publisher Full Text\n\nSibbitt W Jr, Sibbitt RR, Michael AA, et al.: Physician control of needle and syringe during aspiration-injection procedures with the new reciprocating syringe. J Rheumatol. 2006; 33(4): 771–78. PubMed Abstract\n\nDraeger HT, Twining JM, Johnson CR, et al.: A randomised controlled trial of the reciprocating syringe in arthrocentesis. Ann Rheum Dis. 2006; 65(8): 1084–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoberts WN, Hayes CW, Breitbach SA, et al.: Dry taps and what to do about them: a pictorial essay on failed arthrocentesis of the knee. Am J Med. 1996; 100(4): 461–4. PubMed Abstract | Publisher Full Text\n\nLobo A, Lightman S: Vitreous aspiration needle tap in the diagnosis of intraocular inflammation. Ophthalmology. 2003; 110(3): 595–9. 
PubMed Abstract | Publisher Full Text\n\nQublan HS, Al-Jader K, Al-Kaisi NS: Fine needle aspiration cytology compared with open biopsy histology for the diagnosis of azoospermia. J Obstet Gynaecol. 2002; 22(5): 527–31. PubMed Abstract | Publisher Full Text\n\nJauniaux E, Holmas A, Hyett J, et al.: Rapid and radical amniodrainage in the treatment of severe twin-twin transfusion syndrome. Prenat Diagn. 2001; 21(6): 471–6. PubMed Abstract | Publisher Full Text\n\nCederholm M, Haglund B, Axelsson O: Maternal complications following amniocentesis and chorionic villus sampling for prenatal karyotyping. BJOG. 2003; 110(4): 392–9. PubMed Abstract | Publisher Full Text\n\nFarran I, Sánchez M, Mediano C, et al.: Early amniocentesis with the filtration technique: neonatal outcome in 123 singleton pregnancies. Prenat Diagn. 2002; 22(10): 859–63. PubMed Abstract | Publisher Full Text\n\nPapp C, Papp Z: Chorionic villus sampling and amniocentesis: what are the risks in current practice? Curr Opin Obstet Gynecol. 2003; 15(2): 159–65. PubMed Abstract\n\nMoore KP, Wong F, Gines P, et al.: The management of ascites in cirrhosis: report on the consensus conference of the International Ascites Club Hepatology. 2003; 38(1): 258–66. PubMed Abstract | Publisher Full Text\n\nMurthy SV, Hussain ST, Gupta S, et al.: Pseudoaneurysm of inferior epigastric artery following abdominal paracentesis. Indian J Gastroenterol. 2002; 21(5): 197–8. PubMed Abstract\n\nWebster ST, Brown KL, Lucey MR, et al.: Hemorrhagic complications of large volume abdominal paracentesis. Am J Gastroenterol. 1996; 91(2): 366–8. PubMed Abstract\n\nNettleman MD, Bock MJ, Nelson AP, et al.: Impact of procedure-related complications on patient outcome on a general medicine service. J Gen Intern Med. 1994; 9(2): 66–70. PubMed Abstract | Publisher Full Text\n\nWHO, Global Facts & Figures. 
A safe injection does not harm the recipient, does not expose the health care worker to any risk and does not result in waste that is dangerous for the community, in SAFETY OF INJECTIONS. 2006; 33: 4.\n\nRodger MA, King L: Drawing up and administering intramuscular injections: a review of the literature. J Adv Nurs. 2000; 31(3): 574–582. PubMed Abstract | Publisher Full Text\n\nFurman MB, Giovanniello MT, O’Brien EM: Incidence of intravascular penetration in transforaminal cervical epidural steroid injections. Spine (Phila Pa 1976). 2003; 28(1): 21–25. PubMed Abstract\n\nHiggins D: Subcutaneous Injections. Nursing times. 2004; 100(50): 32–3. PubMed Abstract\n\nRozani N: Aga Khan University School of Nursing enrichment program: Skills checklist manual, ed. AHN. Vol. 1. 2007, Karachi. 10.\n\nClinical Skills: Intramuscular Injections. Nursing times. 2003; 99(26): 27.\n\nNigeria C.h.p.r.b.o: Practical assessment record for community health extension workers. Instructors Guide Book. 2006.\n\nTaddio A, et al.: Physical interventions and injection techniques for reducing injection pain during routine childhood immunizations: systematic review of randomized controlled trials and quasi-randomized controlled trials. Clin Ther. 2009; 31(Suppl 2): S48–76. PubMed Abstract | Publisher Full Text\n\nHayes C: Injection technique subcutaneous. Nursing times. 1998; 94(41): suppl 1–2. PubMed Abstract\n\nWHO, I SIGN. Best Infection Control Practices for Skin-Piercing Intradermal, Subcutaneous, and Intramuscular Needle Injections. 2001; [cited 2008 June 23]. Reference Source\n\nWHO. A guide for supervising injections. Feb 2004 April 2008 [cited 2013 March 15]; Feb 12. 2004: [1–16]. Reference Source\n\nWaibel KH: Aspiration before immunotherapy injection is not required. J Allergy Clin Immunol. 2006; 118(2): 525–6. PubMed Abstract | Publisher Full Text\n\nMiller JD, Bell JB, Lee RJ, et al.: Blood return on aspiration before immunotherapy injection. J Allergy Clin Immunol. 
2006; 119(2): 512. PubMed Abstract | Publisher Full Text\n\nGuarneri F: Aspiration before subcutaneous immunotherapy injection: Unnecessary or advisable? J Allergy Clin Immunol. 2007; 119(2): 512–513. PubMed Abstract | Publisher Full Text\n\nGeller M: Aspiration before immunotherapy injection is required. J Allergy Clin Immunol. 2007; 120(1): 220–1. PubMed Abstract | Publisher Full Text\n\nH WK: Reply: Letter to the editor. J Allergy Clin Immunol. 2006; 120(1).\n\nCouncil NM: Guidelines for the administration of medicine. 2002. [cited 2014 March 15]. Reference Source\n\nCouncil NM: Standards for medicines management. 2004. [cited 2014 March 15]. Reference Source\n\nCorkery PF, Barrett BE: Aspiration using local anesthetic cartridges with an elastic recoil diaphragm. J Dent. 1973; 2(2): 72–74. PubMed Abstract | Publisher Full Text\n\nMeechan JG, Ramacciato JC, McCabe JF: A comparison of the aspirating abilities of re-usable and partly disposable dental cartridge syringes in vitro. J Dent. 2006; 34(1): 41–47. PubMed Abstract | Publisher Full Text\n\nPetersen JK: Efficacy of a self-aspirating syringe. Int J Oral Maxillofac Surg. 1987; 16(2): 241–4. PubMed Abstract | Publisher Full Text\n\nControl, C.o.D. Sample Screening Form Dental Safety Syringes and Needles. 2002. [cited 2014 March 15].\n\nCalin MA, Parasca SV, Savastru R, et al.: Optical techniques for the noninvasive diagnosis of skin cancer. J Cancer Res Clin Oncol. 2013; 139(7): 1083–104. PubMed Abstract | Publisher Full Text\n\nPickering LK, EG V: Ill. Active and Passive Immunization: Report of the Committee on Infectious Diseases. 26th ed. Red book. 2003: American Academy of Pediatrics. Reference Source\n\nImmunization, N.A.C.o., Canadian Immunization Guide. H. Canada, Editor. 2002, Public Health Agency of Canada, Infectious Disease and Emergency Preparedness Branch, Centre for Infectious Disease Prevention and Control: Ontario. 38–40. Reference Source\n\nCDC. Vaccine Administration. 
In: General recommendations on Immunisation. Eds. William LA, L.P., Benjamin S, Bruce W, John Iskander, John Watson, Atlanta, USA. MMWR. 2002; 51(RR02): 1–36. Reference Source\n\nJablecki CK: Letter to the Editor. Nursing Res. 2000; 49(5): 244. Reference Source\n\nDuclos P: WHO/V&B. SIGN January 2003 July 2008 [cited 2014 March 15]. 1–2.\n\nIpp M, Taddio A, Sam J, et al.: Vaccine-related pain: randomised controlled trial of two injection techniques. Arch Dis Child. 2007; 92(12): 1105–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCatlin M, Crook B: Giving safe injections: introducing auto-disable syringes. PATH Seattle, WA U.S.A. 2000. Reference Source\n\nProgram NI: Epidemiology and Prevention of Vaccine-Preventable Diseases. 2007; [cited 2014 March 15]. Reference Source\n\nAPPENDIX D. Vaccine Administration. 2007; [cited 2008 July 1]. Reference Source\n\nNicoll LH: IM injection: updated information. SIGNpost September. 2002; 1–2. [cited 2014 March 15].\n\nNicoll LH, Hesby A: Intramuscular injection: an integrative research review and guideline for evidence-based practice. Appl Nurs Res. 2002; 15(3): 149–162. PubMed Abstract | Publisher Full Text\n\nSibbitt RR, Sibbitt WL Jr, Nunez SE, et al.: Control and performance characteristics of eight different suction biopsy devices. J Vasc Interv Radiol. 2006; 17(10): 1657–1669. PubMed Abstract | Publisher Full Text\n\nRobinson JN, Loeffler HH, Norwitz ER: A syringe adapter to facilitate aspiration at amniocentesis. Obstetrics & Gynecology. 2000; 96(1): 138–140.\n\nFurman MB, O'Brien EM, Zgleszewski TM: Incidence of intravascular penetration in transforaminal lumbosacral epidural steroid injections. Spine (Phila Pa 1976). 2000; 25(20): 2628–2632. PubMed Abstract | Publisher Full Text\n\nYankelevitz DF, Hayt D, Henschke CI: Transthoracic needle biopsy. What size syringe? Clin Imaging. 1995; 19(3): 208–209. 
PubMed Abstract | Publisher Full Text\n\nCastellote J, Xiol X, Cortés-Beut R, et al.: Complications of thoracentesis in cirrhotic patients with pleural effusion. Rev Esp Enferm Dig. 2001; 93(9): 566–75. PubMed Abstract\n\nDoyle JJ, Hnatiuk OW, Torrington KG, et al.: Necessity of routine chest roentgenography after thoracentesis. Ann Intern Med. 1996; 124(9): 816–20. PubMed Abstract | Publisher Full Text\n\nSassoon CS, Light RW, O'Hara VS, et al.: Iatrogenic pneumothorax: etiology and morbidity. Results of a Department of Veterans Affairs Cooperative Study. Respiration. 1992; 59(4): 215–20. PubMed Abstract | Publisher Full Text\n\nCallahan JA, Seward J: Pericardiocentesis Guided by Two-Dimensional Echocardiography. Echocardiography. 1997; 14(5): 497–504. PubMed Abstract | Publisher Full Text\n\nBastian A, Meissner A, Lins M, et al.: Pericardiocentesis: differential aspects of a common procedure. Intensive Care Med. 2000; 26(5): 572–6. PubMed Abstract | Publisher Full Text\n\nSalem K, Mulji A, Lonn E: Echocardiographically guided pericardiocentesis - the gold standard for the management of pericardial effusion and cardiac tamponade. Can J Cardiol. 1999; 15(11): 1251–5. PubMed Abstract\n\nTsang TS, Barnes ME, Gersh BJ, et al.: Outcomes of clinically significant idiopathic pericardial effusion requiring intervention. Am J Cardiol. 2003; 91(6): 704–7. PubMed Abstract | Publisher Full Text\n\nTsang TS, El-Najdawi EK, Seward JB, et al.: Percutaneous echocardiographically guided pericardiocentesis in pediatric patients: evaluation of safety and efficacy. J Am Soc Echocardiogr. 1998; 11(11): 1072–7. PubMed Abstract\n\nDuvernoy OBJ, Borowiec J, Helmius G, et al.: Complications of percutaneous pericardiocentesis under fluoroscopic guidance. Acta Radiol. 1992; 33(4): 309–13. PubMed Abstract | Publisher Full Text\n\nAtkinson W, Hamborsky J, McIntyre L, et al.: The Pink Book. 10th ed. 2nd printing ed. Epidemiology and Prevention of Vaccine-Preventable Diseases. 
Washington DC: Public Health Foundation: Centers for Disease Control and Prevention. 2008. Reference Source\n\nKaic B: Aspiration before injection. SIGNpost January 2003 July 2008. Reference Source\n\nWHO. Fact sheet N°231 Injection safety. 2002. [cited 2008 March 21]. Reference Source\n\nNHS. Administration of medicines through intramuscular injections - Guidelines. 2006. [cited 2014 March 15]. Reference Source\n\nCenter, T.U.M. Adverse reactions newsletter. 1996. Reference Source\n\nNigeria, C.h.p.r.b.o., Curriculum for higher diploma in community health. Instructors Guide Book. 2006.\n\nCurriculum, P.N.C.B.N. 1990; 50–51.\n\nIndiaCLEN. Model Injection Centres (MICs): A Program to Improve Injection Practices in the Country (2005–2006). Reference Source\n\nNguyen QD, Tatlipinar S, Shah SM, et al.: Vascular endothelial growth factor is a critical stimulus for diabetic macular edema. Am J Ophthalmol. 2006; 142(6): 961–9. PubMed Abstract | Publisher Full Text\n\nBritish National Formulary. 2007; [cited 2014 March 15]. Reference Source\n\nPakistan Pharma Guide. [cited 2014 March 15]. Reference Source\n\nBattersby A: “To aspirate or not to aspirate” that is the question. SIGN January 2003 July 2008 [cited 2014 March 15]; July 2008: [1–2]. Reference Source\n\nSafety, C.o.I. Supplementary Material. 2009 [cited 2014 March 15]. Reference Source
[ { "id": "5402", "date": "04 Sep 2014", "name": "Cees Smit Sibinga", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors have compiled important information based on both literature review and the analysis of guidelines and medication inserts on the issue of pre-injection aspiration once a needle is inserted in or under the skin. However, the review might be improved by bringing in a more systematic approach in which the results follow the description in the Methodology. That would lead to a consistent and orderly review of the different aspects identified. It is also recommended to include in the introduction a definition of aspiration in the context of the review.\n\nTable 2 shows several spelling errors that need to be corrected.\n\nOn page 6/11 (second column, last paragraph) the sentence ‘…. the drawing up of the vaccine or medication into the syringe prior to aspiration.’ erroneously uses the word aspiration where this should be injection.\n\nA clear conclusion with feasible recommendations to come to evidence pro or con, or at least a nuancing of the pros and cons of pre-injection aspiration, would certainly contribute, as this is not really expressed.", "responses": [ { "c_id": "2438", "date": "01 Feb 2017", "name": "Yasir Sepah", "role": "Reader Comment", "response": "The errors identified by the reviewer have been addressed. A conclusion section has been added to the manuscript." 
} ] }, { "id": "11456", "date": "30 Dec 2015", "name": "Ankit Balani", "expertise": [], "suggestion": "Approved", "report": "The manuscript is intelligently written and the authors have provided a literature review of the need for pre-injection aspiration and discussed its utility in clinical practice. However, we would like to make a few pertinent observations:\n\nThe authors have not concluded the review article and it would be appreciated if they could summarize their observations from the literature review and provide an appropriate conclusion giving the readers an insight into the necessity of pre-injection aspiration.\n\nGrammatical error in abstract: medical/-nursing schools... needs to be replaced with medical/nursing.\n\nGrammatical error in literature review (methodology) - first line - An literature review to be replaced by A literature review.\n\nGrammatical error in Results - Immunization - Page 5: AD syringes do not permit health workers to aspirate for blood to be replaced to aspirate blood. 
Grammatical error in Findings of guidelines and recommendation (injection of medication) - Page 6 - ..Msc Nursing in India does not elaborated on injection techniques needs to be replaced with did not elaborate injection techniques.\n\nIn review of ISO guidelines, ISO section 5.3 - Page 6 - ...vaccine or medication into the syringe prior to aspiration to be replaced by ....prior to injection.\n\nSpelling errors in Table 2 including Diphtheria and Tetanus in second row need to be rectified.", "responses": [ { "c_id": "2437", "date": "01 Feb 2017", "name": "Yasir Sepah", "role": "Reader Comment", "response": "All the grammatical and spelling errors have been addressed as identified by the reviewer." } ] } ]
1
https://f1000research.com/articles/3-157
https://f1000research.com/articles/5-673/v1
13 Apr 16
{ "type": "Software Tool Article", "title": "dbVar structural variant cluster set for data analysis and variant comparison", "authors": [ "Lon Phan", "Jeffrey Hsu", "Le Quang Minh Tri", "Michaela Willi", "Tamer Mansour", "Yan Kai", "John Garner", "John Lopez", "Ben Busby" ], "abstract": "dbVar houses over 3 million submitted structural variants (SSV) from 120 human studies including copy number variations (CNV), insertions, deletions, inversions, translocations, and complex chromosomal rearrangements. Users can submit multiple SSVs to dbVar that are presumably identical but were ascertained by different platforms and samples, to calculate whether the variant is rare or common in the population and to allow for cross-validation. However, because SSV genomic location reporting can vary – including fuzzy locations where the start and/or end points are not precisely known – analysis, comparison, annotation, and reporting of SSVs across studies can be difficult. This project was initiated by the Structural Variant Comparison Group for the purpose of generating a non-redundant set of genomic regions defined by counts of concordance for all human SSVs placed on RefSeq assembly GRCh38 (RefSeq accession GCF_000001405.26). We intend that the availability of these regions, called structural variant clusters (SVCs), will facilitate the analysis, annotation, and exchange of SV data and allow for simplified display in genomic sequence viewers for improved variant interpretation. Sets of SVCs were generated by variant type for each of the 120 studies as well as for a combined set across all studies. Starting from 3.64 million SSVs, 2.5 million and 3.4 million non-redundant SVCs with count >=1 were generated by variant type for each study and across all studies, respectively. 
In addition, we have developed utilities for annotating, searching, and filtering SVC data in GVF format, for computing summary statistics, exporting data for genomic viewers, and annotating the SVCs using external data sources.", "keywords": [ "NCBI", "dbVar", "Structural Variation Cluster", "GVF", "Genomics", "Open-Source", "Genome Annotation", "Education", "Software" ], "content": "Introduction\n\nThere is a growing body of evidence suggesting that genomic structural variants play an important role in the etiology of human disease and in determining individuals’ characteristics and phenotypes1,2. Structural variants are also important for understanding the evolution of species3. dbVar is a database of large structural genomic variants that catalogs millions of records from both small and large studies and makes them freely available to the public4. The data are organized by submitted study, which makes for convenient comparisons between cases and controls. dbVar online search and browser tools make it easy to search and retrieve the data.\n\nIt is difficult to annotate novel SVs or to compute summary data without a reference record or exemplar when multiple SSV choices are available in the same genomic region, and there has been no publicly available resource to date that combines variants from all studies for integration into a bioinformatic pipeline for search, analysis, and comparison. We created structural variant clusters (SVCs) to overcome these problems. Structural variant clusters (Figure 1) are smaller discrete genomic features that include counts of the features shared between SSVs. In regions with fuzziness between overlapping SSVs, SVCs allow the calculation of annotation and frequency by either consensus overlapping regions or by user-defined limits.\n\nReference genomic regions SVC1-SVC4 (yellow box) are demarcated by overlapping and non-overlapping positions (P1-P2, P2-P3, etc.) between SSVs. 
The observed SVC counts and the genes are shown on the bottom.\n\nAdditional benefits of having a defined set of SVCs include:\n\nimproved data exchange, data mining, computation, and reporting;\n\nbetter searching and matching of genomic coordinates across studies;\n\neasier aggregation of annotations such as disease and phenotype, frequency, and genomic features that co-locate with an SVC;\n\na simplified display in the Sequence Viewer as an aggregated histogram or density track from all studies (currently dbVar displays each study as a track, which can be slow to render and difficult to display on small screens); and\n\nthe ability to measure SSV concordance regions and validate across studies.\n\nThe Structural Variation Cluster project aimed to accomplish a number of goals. First, we generated a Genome Variant Format (GVF) file of SVC regions as defined above, based on RefSeq GRCh381. Each region is assigned a unique ID (SVC1, SVC2, etc.). The SVC GVF file is used as the basis for generating aggregated data, filtering, generating sequence viewer tracks, and for comparison with user data. We also generated a histogram track to show the frequency of the regions across studies in genomic context for the Sequence Viewer. In addition, we annotated SVC regions with Gene, colocated dbSNP reference SNPs, ClinVar, and other colocated features. We aimed to create a tool for filtering SVC GVFs by variant type, region size, region count, chromosome, and additional user-defined splitting and filtering parameters. This tool would allow users to compare their data with SVC GVFs and report matching regions of overlap.\n\n\nMethods\n\nSVCs are defined as the union set of overlapping and non-overlapping regions for all SSVs aligned to the genome using HTSeq version 0.6.05, based on the genomic coordinates in RefSeq human genome assembly GRCh38 (RefSeq accession GCF_000001405.26)1 (Figure 1).\n\nFigure 2 demonstrates the workflow for this analysis. 
dbVar SSV data by studies were obtained in tab-delimited format from the FTP site (ftp://ftp.ncbi.nlm.nih.gov/pub/dbVar/data/Homo_sapiens/by_study/) and used as input. The study files were combined and sorted by chromosome positions into a single file using the script merge_data.py. SVC regions, including counts as shown in Figure 1, were generated from the merged file using the script make_gvf_and_bedgraph.py, which outputs SVC GVF and BED files. Since the approach in Figure 1 is similar to finding consensus regions or overlapping features between aligned reads, make_gvf_and_bedgraph.py uses the HTSeq.GenomicInterval class to store SSV chromosome, start, and stop coordinates as genomic features, and the HTSeq.GenomicArrayOfSets class to identify overlapping positions and generate SVCs and counts.\n\nAdditional tools are available as scripts that use SVC GVF as input to compute summary statistics, to search and filter, to generate WIG files for viewing in a sequence viewer, and to annotate using external data sources. All scripts and examples are available on GitHub (https://github.com/NCBI-Hackathons/Structural_Variant_Comparison/). For this study all coordinates reported are based on GRCh38.\n\n\nResults\n\nAs shown in Figure 1, SVCs were created from overlapping and non-overlapping regions of two or more SSVs using the HTSeq.GenomicArrayOfSets class and output in GVF file format. Each SVC is counted for the number of times it is present as a subregion of an SSV, providing a total SVC count across studies. A single SSV by itself, without any overlap with another SSV in the region, constitutes a single SVC with a feature count of 1. The 3.6 million dbVar SSVs generated 3.4 million SVCs for all dbVar data (combined-set) by variant type (Table 1).\n\nThe most common variant type was deletion followed by CNV. All CNV types combined (rows 1, 7, and 8 in Table 1) total 972,335. We also generated SVCs for each variant type (i.e. CNV, indel, etc.) 
and by individual studies (study-set) for QA/QC and analysis between types and studies of interest. The study-set generated a total of 2.5 million SVCs versus 3.4 million SVCs from the combined-set.\n\nWIG files were generated from SVC GVF files to allow loading into a sequence viewer for quick visual inspection, as shown in Figure 3. The SVC sets used for inspection were the combined-set, which includes 1000 Genomes6 as well as other large studies and thus provides frequently occurring or “common” SVCs, and, for comparison, presumed curated variants with clinical significance from the study-set (dbVar:nstd37) submitted by ClinGen7. The Variation Viewer8 allows for quick navigation by genes, chromosome positions, and variations for visual comparison (Figure 3, Figure 4, and Figure 5). Figure 3 and Figure 4 show a hotspot peak A in ClinVar (track 4) that corresponds with an SVC peak from nstd37, suggesting that this region is critical for function and that variations in this region are rare. These conclusions are supported by the lack of corresponding SVC peaks in the combined-set “common” tracks 7 and 8. However, tracks 7 and 8 also contain peaks B and C that flank the ClinVar peak, which may demarcate the boundaries for the critical region peak A. In contrast, Figure 5 shows that there are corresponding SVC peaks in the nstd37 (rare) and in the combined-set (common), suggesting that variants in this region may have minimal or no clinical impact by themselves.\n\nStarting from the top: (1) chr 1 sequence, (2) Gene track, (3) ClinVar short variation for dbSNP SNV, (4) ClinVar large variation, (5) ClinGen SVC study-set (dbVar:nstd37) copy number gain, (6) ClinGen SVC study-set (dbVar:nstd37) copy number loss, (7) SVC combined-set for copy number gain with count >= 100, and (8) SVC combined-set for copy number loss with count >= 100. The red box highlights an SVC hotspot region found in ClinGen (dbVar:nstd37) tracks 5 and 6 that corresponds with the variants in ClinVar. 
The scales for the SVC count histograms are 1–90 (track 5), 1–20 (track 6), 1–4618 (track 7), and 1–10885 (track 8).\n\nThe track and histogram scales are as described in Figure 5.\n\nThe tracks and histogram scales are as described in Figure 3.\n\n\nConclusions\n\nThe software tools we developed and provide here compute SVCs and provide counts of concordance regions across SSVs. We also developed tools to search, filter, annotate, and graphically view the results in sequence viewers, or to incorporate them into custom analysis pipelines. Using these tools, we provide examples (Figure 3) for comparing different SVC data sets with other annotation (such as genes and ClinVar). Such comparisons will allow users to investigate across the genome - or near a gene of interest - and to look for concordance and conflicts between data, which may help users form hypotheses regarding the biological impact of observed variation in SVC regions. In the future, we will conduct the work and analysis required for SVC data quality assurance. We believe that SVC data promise to improve the analysis and the elucidation of the biological impact of structural variants and, in the future, will probably have uses beyond those described here. 
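The concordance-count computation described above (SSV intervals partitioned into non-redundant subregions, each carrying the number of SSVs that cover it, as in Figure 1) can be sketched in plain Python. This is an illustrative stand-in, not the authors' make_gvf_and_bedgraph.py: the real pipeline does the equivalent bookkeeping with HTSeq.GenomicArrayOfSets, and the function name and sample coordinates below are invented for the example.

```python
# Sketch of the Figure 1 partitioning: overlapping SSVs on one chromosome
# are split at every SSV start/stop boundary into SVC subregions, and each
# subregion is counted once per SSV that covers it.

def cluster_ssvs(ssvs):
    """ssvs: list of (start, stop) tuples, 1-based inclusive coordinates.
    Returns a list of (start, stop, count) SVC subregions."""
    # Each SSV start opens a coverage interval; each stop + 1 closes it.
    events = []
    for start, stop in ssvs:
        events.append((start, +1))
        events.append((stop + 1, -1))
    events.sort()

    svcs, depth, prev = [], 0, None
    for pos, delta in events:
        # Emit the subregion between consecutive boundaries with its count.
        if depth > 0 and prev is not None and pos > prev:
            svcs.append((prev, pos - 1, depth))
        depth += delta
        prev = pos
    return svcs

# Three overlapping SSVs yield four SVC subregions (cf. SVC1-SVC4, Figure 1).
print(cluster_ssvs([(100, 300), (200, 400), (200, 250)]))
# → [(100, 199, 1), (200, 250, 3), (251, 300, 2), (301, 400, 1)]
```

Run per chromosome and per variant type, emitting the (start, stop, count) triples in GVF or BED order would reproduce the shape of the SVC sets described here; a single SSV with no overlaps falls out naturally as one subregion with count 1.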
Potential uses for SVC data could include:\n\nthe evaluation of other SVC hotspot regions to determine if they occur biologically or are due to genome problem regions;\n\nthe use of study metadata to validate SVCs that are in concordance with regions across studies and different assay platforms;\n\nthe validation of rare SVCs (count <= 2) and common SVCs (count > 2);\n\nidentification of evidence of variations in all public SRA data;\n\ncombined analysis and annotation of SVCs to ClinVar, dbSNP, and other variation resources;\n\nthe creation of a reference dbVar “SV” number based on SVCs, which would be the equivalent of dbSNP’s RS number;\n\nidentification of population-specific SVCs to gain insight into the functional significance of structural variants and their evolution; and\n\ndetermination of high-priority SVCs with significant functional impact and effects.\n\nIn addition, a “dbVar Beacon Service” could be developed to allow users to query dbVar as to whether variants exist for a genomic location of interest using combined SVC data. The results would report the number of SVCs and associated SSV IDs and study IDs. Users could then download the study or SSV of interest from dbVar.\n\n\nSoftware availability\n\nLatest source code: https://github.com/NCBI-Hackathons/Structural_Variant_Comparison/\n\nArchived source code as at time of publication: http://dx.doi.org/10.5281/zenodo.482019\n\nAccompanying wiki: https://github.com/NCBI-Hackathons/Structural_Variant_Comparison/wiki\n\nManual: https://docs.google.com/document/d/1WBnEnShnw28ZFg17A3xUpWOyvxXjb2q-h1kF-XYVWEw/edit?usp=sharing\n\nLicense: CC0 1.0 Universal", "appendix": "Author contributions\n\n\n\nAll of the authors participated in designing the study, carrying out the research, and preparing the manuscript. 
All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nLon Phan, John Garner, John Lopez, and Ben Busby’s work on this project was supported by the Intramural Research Program of the National Institutes of Health (NIH)/National Library of Medicine (NLM)/NCBI.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank Lisa Federer, NIH Library Writing Center, for manuscript editing assistance.\n\n\nReferences\n\nSaeed S, Bonnefond A, Manzoor J, et al.: Genetic variants in LEP, LEPR, and MC4R explain 30% of severe obesity in children from a consanguineous population. Obesity (Silver Spring). 2015; 23(8): 1687–95. PubMed Abstract | Publisher Full Text\n\nRoss JS, Badve S, Wang K, et al.: Genomic profiling of advanced-stage, metaplastic breast carcinoma by next-generation sequencing reveals frequent, targetable genomic abnormalities and potential new treatment options. Arch Pathol Lab Med. 2015; 139(5): 642–9. PubMed Abstract | Publisher Full Text\n\nRadke DW, Lee C: Adaptive potential of genomic structural variation in human and mammalian evolution. Brief Funct Genomics. 2015; 14(5): 358–68. PubMed Abstract | Publisher Full Text\n\nHome - dbVar - NCBI [Internet]: Home - dbVar - NCBI. [cited 2016 Feb 24]. Reference Source\n\nAnders S, Pyl PT, Huber W: HTSeq--a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015; 31(2): 166–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nestd214 - 1000 Genomes Consortium Phase 3 - dbVar Study - NCBI [Internet]: estd214 - 1000 Genomes Consortium Phase 3 - dbVar Study - NCBI. [cited 2016 Feb 24]. Reference Source\n\nClinGen - ClinGen Clinical Genome Resource [Internet]: ClinGen - ClinGen Clinical Genome Resource. [cited 2016 Feb 24]. 
Reference Source\n\nVariation Viewer - NCBI [Internet]: Variation Viewer - NCBI. [cited 2016 Feb 24]. Reference Source\n\nJohn G, TriLe965, Hsu J, et al.: Structural_Variant_Comparison: Initial Post-Hackathon Release. Zenodo. 2016. Data Source" }
[ { "id": "13373", "date": "05 May 2016", "name": "Lihua Julie Zhu", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\ndbVar is a database hosted by NCBI for archiving all types of genomic structural variants (GSV) in all species, including copy number variations (CNV), insertions, deletions, inversions, translocations, and complex chromosomal rearrangement. It accepts data submissions from researchers and exchanges data on a regular basis with the European Database of Genomic Variants Archive (DGVa). To facilitate the exchange, annotation, computation, visualization, reporting and interpretation of user submitted structural variants (SSVs), overlapping SSVs are merged to form a non-redundant set of genomic regions called structural variant clusters (SVCs), and utilities have been developed for annotating, searching, summarizing, filtering and visualizing SVC data in GVF format. However, in light of the previous publication of the same database (Lappalainen et al., 2013), it is unclear to the reviewers about the additional contribution of this manuscript. 
It would be important if the authors could cite the previous publication and clearly describe the detailed updates made to the database, and how these updates improve the existing software, to help reviewers understand what is new. It would be helpful if the authors could clarify why Table 1 does not add up to 100%, and describe where the 2.5 million SVCs are derived from, as stated in \"The study-set generated a total of 2.5 million SVCs versus 3.4 million SVCs from the combined-set\". In addition, the resolution of Figures 2–5 needs to be improved.", "responses": [] }, { "id": "15157", "date": "09 Aug 2016", "name": "Justin M. Zook", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors create a set of scripts that take SVs submitted to dbVar and find how many calls cover each region of the genome. I expect these will be useful for understanding locations in the genome where multiple SV calls have been made. The methods appear to be straightforward to use, so that they can be applied to new callsets as they are submitted to dbVar and potentially to other repositories as well. I have a few minor suggestions below:\nFig 4 caption seems to refer to the red box in Fig 3, not Fig 1.\n\nIt appears that the output wig files for the current dbVar are on the GitHub site, and it would be useful to make clear in the paper that these are available. Are the output bed files also available? 
Are the outputs available as a track in any NCBI browser?\n\nWhy did the authors choose gvf as the output format?  Although no format is great for SVs, would the authors consider adding vcf as an output format since vcf seems to be increasingly adopted by SV callers?\n\nThis is implied in the future work proposed, but it may be useful to state explicitly that dbVar entries are not curated for accuracy, so regions with many SVs may be enriched for artifacts or true SVs or both.", "responses": [] } ]
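The reviewed scripts, as the second reviewer summarizes, take SVs submitted to dbVar and count how many calls cover each region of the genome. A minimal sketch of that interval-coverage computation is below; the function name and the (start, end) end-inclusive tuple representation are illustrative assumptions, not the authors' actual code:

```python
def coverage_segments(calls):
    """Collapse overlapping SV calls into maximal segments of constant
    call depth, i.e. the per-region counts a wig-style coverage track
    would report. `calls` is a list of (start, end) 1-based,
    end-inclusive intervals on one chromosome."""
    events = []
    for start, end in calls:
        events.append((start, 1))     # depth rises where a call begins
        events.append((end + 1, -1))  # depth falls just past its end
    events.sort()
    segments, depth, prev = [], 0, None
    for pos, delta in events:
        # emit the segment that ended at pos-1, if any calls covered it
        if prev is not None and pos != prev and depth > 0:
            segments.append((prev, pos - 1, depth))
        depth += delta
        prev = pos
    return segments
```

Each emitted (start, end, depth) triple maps directly onto one run of a coverage track, so converting the output to wig or BED records is a formatting step only.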
1
https://f1000research.com/articles/5-673
https://f1000research.com/articles/5-2119/v1
31 Aug 16
{ "type": "Research Article", "title": "Predictors and brain connectivity changes associated with arm motor function improvement from intensive robotic practice in chronic stroke", "authors": [ "George F. Wittenberg", "Lorie G. Richards", "Lauren M. Jones-Lush", "Steven R. Roys", "Rao P. Gullapalli", "Suzy Yang", "Peter D. Guarino", "Albert C. Lo", "Lorie G. Richards", "Lauren M. Jones-Lush", "Steven R. Roys", "Rao P. Gullapalli", "Suzy Yang", "Peter D. Guarino", "Albert C. Lo" ], "abstract": "Background and Purpose: The brain changes that underlie therapy-induced improvement in motor function after stroke remain obscure. This study sought to demonstrate the feasibility and utility of measuring motor system physiology in a clinical trial of intensive upper extremity rehabilitation in chronic stroke-related hemiparesis. Methods: This was a substudy of two multi-center clinical trials of intensive robotic arm therapy in chronic, significantly hemiparetic, stroke patients. Transcranial magnetic stimulation was used to measure motor cortical output to the biceps and extensor digitorum communus muscles. Magnetic resonance imaging (MRI) was used to determine the cortical anatomy, as well as to measure fractional anisotropy, and blood oxygenation (BOLD) during an eyes-closed rest state. Region-of-interest time-series correlation analysis was performed on the BOLD signal to determine interregional connectivity. Functional status was measured with the upper extremity Fugl-Meyer and Wolf Motor Function Test. Results: Motor evoked potential (MEP) presence was associated with better functional outcomes, but the effect was not significant when considering baseline impairment. Affected side internal capsule fractional anisotropy was associated with better function at baseline. Affected side primary motor cortex (M1) activity became more correlated with other frontal motor regions after treatment. 
Resting state connectivity between affected hemisphere M1 and dorsal premotor area (PMAd) predicted recovery. Conclusions: Presence of motor evoked potentials in the affected motor cortex and its functional connectivity with PMAd may be useful in predicting recovery. Functional connectivity in the motor network shows a trend towards increasing after intensive robotic or non-robotic arm therapy. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00372411 & NCT00333983.", "keywords": [ "Predictors", "brain connectivity", "robotic", "motor function" ], "content": "Introduction\n\nThe development of new methods for rehabilitation of deficits after stroke has enabled research into the brain mechanisms of improved function after such therapy. This has been accomplished in Constraint Induced Therapy1,2, Bilateral Arm Training3 and in one form of robotic hand training4. Some common themes in these studies include: 1. Changes in motor task-related brain activation after therapy (although both positive and negative changes have been reported) and, 2. Expansion of shrunken motor maps as measured by transcranial magnetic stimulation (TMS)5,6. However, there remain ambiguities and even controversies regarding the effects of repetitive task practice on brain activity and whether modern, well-defined therapeutic methods differ in their brain effects.\n\nRobotic rehabilitation has certain mechanistic advantages over other therapeutic methods7. Robotic therapy is a better option for more severely affected stroke patients who may not be able to practice certain movements without external assistance. In such patients the mechanisms of recovery may be qualitatively different, and they have the most to gain from an improved understanding of the mechanisms of improvement of any particular therapy. 
In addition, although robotic therapy is well defined by the algorithms it uses for training, the therapy is flexible enough to train patients in various types of movements.\n\nWe had the opportunity to perform a multi-center investigation of the brain mechanisms underlying robotic rehabilitation by studying a subset of participants in two multi-center VA studies that compared robotic rehabilitation to both an intensity-matched non-robotic therapy regimen and usual care. The hypotheses for this study related to both prognosis (e.g. greater cortical motor excitability and reduction in transcallosal inhibition will predict greater functional improvement) and treatment effects (e.g. intensive rehabilitation will more effectively increase the ability to activate multiple muscles through motor cortical activity, partly through reduced interhemispheric inhibition to the affected motor cortex). It was also an opportunity to test a connectivity-based approach that has shown promise in studies of recovery of function8,9.\n\n\nMethods\n\nThis was a substudy of two multi-center clinical trials whose methods and results have been published10–12. It was originally intended to enroll approximately 40 participants across four sites, but due to regulatory and staffing issues at two sites, 13 subjects across two sites were enrolled. Briefly, all participants were chronic hemiparetic stroke patients with a significant degree of impairment (Upper Extremity Fugl-Meyer scale 7–38). Figure 1 shows the basic design of the substudy, with TMS and magnetic resonance imaging (MRI) measures bracketing a 6–12 week intervention. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00372411 & NCT00333983.\n\nThe timeline of baseline measures and the therapy interventions are shown graphically. 
MRI and TMS measurements were performed before or after the intervention.\n\nStimulation of the motor cortex responsible for upper-extremity impairment was performed using a MagStim 200 or 200² Magnetic Stimulator (MagStim Ltd., Wales, UK) and a 70 mm double circular coil. Motor evoked potentials (MEP) were recorded unilaterally by surface electrodes fixed over the biceps and extensor digitorum communis (EDC) muscles in bipolar montage with 3 cm spacing. Responses were amplified by a battery-powered surface electromyography (EMG) integrated electrode and amplifier (B&L Engineering, Tustin, CA or DelSys, Boston, MA), and fed into a personal computer through a multifunctional I/O board and LabView acquisition/analysis software (National Instruments, Austin, TX). A 100 ms period after stimulus was examined, with the time window adjusted to capture only the MEP. Amplitudes were measured peak-to-peak. Bandpass was 30–1000 Hz and digitization 2000 Hz.\n\nThe target muscle was kept at rest during the entire procedure, as confirmed through audio and visual monitoring of muscle activity. Motor threshold was determined using International Federation of Clinical Neurophysiology criteria13, except that a 25 µV limit was used because of the bipolar montage. The coil was localized on the frontoparietal region contralateral to the target muscle in the examined limb and moved until each muscle’s hot-spot, where the response threshold was the lowest, was found. Exact position of stimulation was recorded using a stereotactic system (BrainSight, Rogue Research, Montreal, QC, Canada) and guided by a 1 cm Cartesian coordinate system projected onto the subject’s own MRI. 
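The peak-to-peak MEP measurement described above can be sketched as follows. This is a hypothetical helper, not the study's LabView analysis; the 2000 Hz sampling rate is from the text, but the specific window bounds here are illustrative stand-ins for the per-trial adjusted window:

```python
import numpy as np

def mep_peak_to_peak(emg, fs=2000, window=(0.015, 0.050)):
    """Peak-to-peak MEP amplitude from one post-stimulus EMG sweep.

    `emg`    -- recorded trace starting at the TMS pulse (same units in, same out)
    `fs`     -- digitization rate in Hz (2000 Hz in the paper)
    `window` -- (start, end) in seconds; an assumed stand-in for the
                manually adjusted window that captures only the MEP
    """
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    segment = np.asarray(emg, dtype=float)[i0:i1]
    return float(segment.max() - segment.min())
```

Restricting the search to the adjusted window keeps the stimulus artifact and any late voluntary activity out of the amplitude estimate, which is the point of adjusting the window per trial.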
If a different hotspot was found on a subsequent visit, threshold and recruitment curves were obtained at both the original and new hotspot, but the original hotspot data were used for group analysis.\n\nRecruitment curves were measured by stimulating at a range of intensities, from 10% below threshold, increasing in increments of 10% of threshold until the response plateaued or the maximum output of the stimulator was reached. Ten stimuli at each intensity were delivered.\n\nThe ipsilateral silent period14 was measured by stimulation of the unaffected (defined here as contralesional) cortex and voluntary activation of the affected arm. The maximum force that the subject could sustain in each target muscle was determined. For the biceps the weight was placed on the wrist and for the extensor digitorum communis (EDC) on the proximal interphalangeal joints. The coil was placed over the hand knob (hand representation within M1) of the unaffected hemisphere. After the subject stabilized the weight against gravity, a TMS pulse at 100% of the maximal stimulator output was delivered. The subject was allowed to rest for several seconds and the procedure was repeated two further times. The EMG signal was integrated, and a ratio of post- relative to pre-stimulation activity was computed, making the appropriate adjustment for the length of each period.\n\nAnatomical and Functional Magnetic Resonance Imaging was performed at each center on a Tim Trio 3T scanner (Siemens AG, Erlangen, Germany) equipped with an 8-channel receive-only head coil. Anatomical Imaging: This consisted of a high-resolution three-dimensional sagittal T1-weighted magnetization-prepared rapid gradient echo (MP-RAGE) image, and oblique proton density and T2-weighted images acquired with 2 mm slice thickness. Diffusion-tensor images were acquired using a single-shot echo-planar technique and 65 directions. A b value of 1000 s/mm² was used with an average of six images acquired to increase signal-to-noise ratio. 
A fractional anisotropy (FA) map was created from these data. A 5 mm radius spherical Region of Interest (ROI) was centered on the posterior limb of the left and right internal capsules (IC) on the FA images, and the mean, standard deviation, and ratio (affected/unaffected) were computed.\n\nFunctional Imaging. Two eyes-closed rest scans were obtained, each with 128 coronal blood oxygenation-level dependent (BOLD) weighted volumes (echo planar imaging; 3 sec TR, 30 ms TE, 4 mm slice thickness with no gap, flip angle = 90°, 36 axial slices, 1.8 × 1.8 mm² in-plane resolution, FOV = 23 cm). These were separated in time by at least 5 minutes. A tape and cushion technique was employed to reduce head motion and remind the subjects of the need to keep their head as still as possible. We examined head motion parameters within the analysis and rejected runs with absolute head movement greater than 2 voxels. Images were corrected for head motion by realignment, and Independent Component Analysis (ICA) was used to remove movement-related signal15.\n\nRegion of Interest (ROI) resting state correlation analysis. ROI-based analysis was performed without spatial normalization in AFNI16 and MATLAB (MathWorks Inc., Natick, MA). All of a participant’s resting state scans were corrected for slice timing and spatially registered to the first resting state scan from their first session. The structural image was skull-stripped and also spatially registered to the subject’s first functional scan. A 6-mm FWHM Gaussian blur was then applied to all spatially registered EPI scans. Nine ROIs were selected by manually identifying the following anatomical landmarks: medial part of the precentral gyrus, postcentral gyrus, cerebellar hemispheres, supramarginal gyrus, supplementary motor area (caudal supplementary motor area between medial precentral gyrus and a coronal plane through the anterior commissure17), and superior, middle, and inferior frontal gyri (Figure 2). 
Pairwise ROI correlations were computed on the time series for each ROI and Z-transformed.\n\nA. The 11 regions of interest (ROIs) are shown on representative axial slices of an example brain MRI. The top slices show the cerebellar ROI, then the PMAv on the left bottom slice, and PMAd, M1, and superior parietal regions from anterior to posterior. The SMA is represented by a single midline ROI. B. Correlation matrix with correlations at baseline in two resting state scans in the same participant. C. Example correlation in a single slice with an affected side M1 ROI as the seed.\n\nSAS (SAS Institute, Cary, North Carolina) was used for all analyses. Pearson correlation was used to calculate recruitment curve slope and associations between variables. Mixed models (REML with compound symmetry) were used to analyze predictive factors such as presence of TMS responses and recruitment curve slope.\n\n\nResults\n\nFourteen subjects were enrolled between 2008 and 2010 at the Baltimore and North Florida/South Georgia VAMC. Thirteen were stroke patients, three of whom were randomized to intensive comparison therapy and ten of whom were randomized to robot therapy, and one subject was a healthy control who received no therapy. The relationship between initial and follow-up Fugl-Meyer (FM) impairment score and presence of MEP is shown in Figure 3. While MEP were absent in most of the lower-functioning participants initially, there were both low- and high-functioning participants with absent and present MEP. Controlling for the effects of baseline FM in a fixed effects analysis, presence of an MEP at baseline was associated with a mean 3.3 ± 6.2 S.E. (N.S.) higher change in FM across all post-baseline visits. (There were up to four post-baseline visits.) 
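The resting-state analysis described in Methods computes pairwise Pearson correlations between ROI time series and Fisher Z-transforms them. A minimal NumPy sketch of that step is below; it is illustrative only, since the study performed this in AFNI and MATLAB:

```python
import numpy as np

def roi_connectivity(timeseries):
    """Pairwise Pearson correlations between ROI time series,
    Fisher Z-transformed (arctanh), as in ROI resting-state analyses.

    `timeseries` -- array of shape (n_rois, n_volumes), one BOLD
                    time course per ROI.
    Returns an (n_rois, n_rois) symmetric Z matrix with zeros on the
    diagonal (the self-correlation of 1 would map to infinity).
    """
    r = np.corrcoef(timeseries)   # Pearson r for every ROI pair
    np.fill_diagonal(r, 0.0)      # avoid arctanh(1) = inf
    return np.arctanh(r)          # Fisher Z transform
```

The Fisher Z transform makes correlation values approximately normally distributed, which is what justifies averaging them across runs and comparing them before and after treatment.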
A biceps MEP was never present without an EDC MEP, but not vice versa, and the predictive value of these two measures was approximately equal.\n\nThe baseline and change in Fugl-Meyer score are shown for each participant, grouped by whether there was an initial motor evoked potential as measured by TMS. Change 1 is across the intervention; Change 2 is between the end of the intervention and the last outcome measurement (12 weeks). Negative changes are always shown below the baseline.\n\nOne of the hypotheses regarding recruitment curves was that steeper recruitment curves would correlate with better function. However, there were no significant correlations between either EDC or biceps recruitment curves and function, within the population in which recruitment curves could be evaluated (N=6). Almost all participants had very shallow recruitment curves, with one moderately affected individual (FM=35) being the exception (Figure 4). There was a non-significant trend for higher recruitment curve slope at baseline correlating with functional improvement in a mixed effects model that controlled for baseline FM.\n\nA. Single participant (#1) recruitment curves in the EDC at the second baseline and two follow-up measurements. Stimulation strength is indicated on the x-axis as a percentage of resting motor threshold. B. The slope of the recruitment curve between 100 and 120% of resting motor threshold stimulation strength was extracted for each participant with measurable recruitment curves on the affected side EDC. The first two measurements are both baseline periods. Changes in the recruitment curve slope were not significant. C. Recruitment curve slope in biceps (otherwise, as in B).\n\nA long-lasting stimulus artifact contaminated too many cases to allow group analysis. One example of change in silent period is noted in Figure 5. 
In this case the subject had an increase in voluntary activity after the intervention, despite the same force requirement, and demonstrated a clearer iSP only after the intervention. However, in most other subjects, there was no visible iSP.\n\nAn example of ipsilateral silent period measured at baseline (left) and immediately after the 12-week intervention (right) in participant RB5. EMG is measured in mV in the EDC muscle contralateral to the TMS stimulator in the upper trace and ipsilateral in the lower trace, which is offset by 2 mV. Before the intervention there is little activation of the muscle and also no apparent silent period. There is much more activation of the muscle after the intervention and also a visible short ipsilateral silent period.\n\nPredictive measures. Resting state analysis resulted in a correlation matrix for the chosen ROI. While there was not an age-matched control population, there were clear asymmetries in the correlation matrices, as within-hemisphere connections were decreased on the affected side as compared to the unaffected, but with exceptions such as the parietal area (data not shown). We were particularly interested in exploring the changes in correlation of the affected side motor cortex (M1) with other brain areas. Correlation of the affected M1 with all frontal lobe motor regions increased over the course of treatment, but there was no change in correlation of M1 with the cerebellum (Figure 6). The changes in the unaffected M1, SMA, and the unaffected side superior parietal area were most striking.\n\nThe mean change and S.D. of the Z-transformed correlation coefficient of the affected primary motor cortex (AM1) with each of the other regions is shown. Note that all regions showed an increase in connectivity except the parietal regions. 
Region names include ‘A’ for affected side hemisphere (the side opposite to the affected hemisphere in the case of the cerebellum, ‘CER’) and ‘U’ for unaffected.\n\nCorrelative measures. Improvement in Fugl-Meyer score was associated with a trend towards reduction in two pairwise correlations. Greater FM increase was associated with a decrease in affected M1-unaffected M1 connectivity (r2 = 0.31, p = 0.07), and unaffected M1-affected superior parietal area (r2 = 0.34, p = 0.06). All other changes in functional connectivity correlated less well with changes in motor function.\n\nFA of the affected internal capsule was correlated with baseline motor ability (r2 = 0.48, p < 0.01), but did not predict motor recovery, although the trend was for greater FA to be associated with better recovery.\n\nThere were no significant differences in any outcome measure for robotic vs non-robotic comparison therapy.\n\n\nDiscussion\n\nThe purpose of this study was to obtain feasibility data on the use of TMS and MRI to provide predictive and mechanistic information about the motor functional response to intensive arm rehabilitation. It was not expected to provide definitive results in a field that has been marked by inconsistency. Some of the lessons learned in this study are that the lack of TMS responses in a majority of the moderate-to-severe population limits the utility of TMS for measuring change, although when MEP are present this predicts a better response to intervention, as has been demonstrated previously24. The ipsilateral silent period, a measure of transcallosal inhibition that can be performed even when MEP cannot be elicited, has limitations as well, and was not useful in this particular study, partly for technical reasons. 
MRI measures of resting state connectivity were more revealing, demonstrating both the deficits and changes with therapy, although in a purely exploratory manner.\n\nWhile there were technical limitations to the use of silent periods, as shown in Figure 5, the silent period could become more apparent after therapy. Since the appearance of a silent period depends on cortical activation of an affected muscle, a silent period could appear to be absent if there is little such cortical activation. Increased corticomotor effectiveness can thus result in the appearance of a silent period, and give a misleading impression that more intercortical inhibition is related to better function. The role of intercortical inhibition in shaping motor function is complex, and interpretation of tests that require activity in interhemispheric networks needs to be sophisticated.\n\nIn the resting state connectivity analysis, connectivity with the affected motor cortex was generally negatively correlated with impairment. The two exceptions were the superior parietal areas, in which increased connectivity was correlated with greater impairment. There have been a number of reports of the role of the superior parietal area in recovery of function after stroke18–22. However, its correlation here would suggest an association with more impairment and a role in compensation in only the more severely affected individuals.\n\nThe correlation matrix for even a small number of ROIs is a large amount of data and causes a multiple comparison problem. We focused on connectivity with the affected motor cortex as being most relevant to recovery of function. Out of the ten other regions, correlation with eight of them increased over the course of therapy. The largest increases were in the contralesional superior parietal area, ipsilesional dorsal premotor area, and supplementary motor area. 
These regions have strong bilateral connections and are good candidates for brain regions that would be engaged by the practice involving visual motor feedback and proximal arm movements. There were no significant associations between change in interregional correlation and a change in motor function. The best correlation with recovery was in connections of the unaffected M1 with two areas in the affected hemisphere: affected M1 and affected superior parietal area. The fact that this was a negative correlation suggests that intensive unimanual therapy may be decreasing the importance of the unaffected primary motor area in movement of the affected side. But other changes, not measured in this study, may be related to recovery of function, and the measured network changes, whether or not they are a significant effect of the intervention, may not be necessary for recovery or may represent compensatory changes. Likely because of the relatively small size of the study, we were not able to find significant differences between such changes in the two types of treatment, if any differences exist. One might speculate that superior parietal activity would be more involved in robotic rehabilitation, with its visuomotor component, and the SMA in the intensive comparison treatment, which involved more self-initiated activity.\n\nSubject numbers started at 1 in Baltimore, and at 50 in Florida. All had anterior circulation ischemic strokes except for #1, who had a thalamic hemorrhage. Therapy assignment included either robotic or intensive comparison (comp.) therapy. When length of therapy was 6 weeks, there was no therapy with the wrist robot, only planar and vertical robots.\n\n\nConclusions\n\nMeasurement of brain changes related to motor recovery in moderate-to-severely affected stroke patients is complicated by difficulties in measuring brain function noninvasively. 
But our study showed that simple MEP presence might be useful in predicting response to rehabilitation in chronic stroke, while resting state connectivity appears to be responsive to treatment, with an increase in affected primary motor cortical connectivity to other frontal motor areas. Motor cortical functional connectivity with the superior parietal cortex may be a marker for compensatory changes that do not respond to affected side intensive practice.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for predictors and brain connectivity changes associated with arm motor function improvement from intensive robotic practice in chronic stroke, 10.5256/f1000research.8603.d13317525", "appendix": "Author contributions\n\n\n\nGFW and ACL conceived of the study, SRR and RPG did MRI protocol design and analysis, PDG and SY statistical analysis, GFW, LGR, and LMJL performed TMS studies.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by a VA Rehabilitation Research and Development Merit Review award to GF Wittenberg.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank Jaime Lush for expert technical assistance and Jui Panda for preliminary analysis. Dr. Jodie Haselkorn and Dr. Skip Rodriguez were involved in setting up a third site but regulatory issues prevented enrolling participants.\n\n\nReferences\n\nLiepert J, Bauder H, Wolfgang HR, et al.: Treatment-induced cortical reorganization after stroke in humans. Stroke. 2000; 31(6): 1210–1216. PubMed Abstract | Publisher Full Text\n\nWittenberg GF, Chen R, Ishii K, et al.: Constraint-induced therapy in stroke: magnetic-stimulation motor maps and cerebral activation. Neurorehabil Neural Repair. 2003; 17(1): 48–57. 
PubMed Abstract | Publisher Full Text\n\nLuft AR, McCombe-Waller S, Whitall J, et al.: Repetitive bilateral arm training and motor cortex activation in chronic stroke: a randomized controlled trial. JAMA. 2004; 292(15): 1853–1861. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTakahashi CD, Der-Yeghiaian L, Le V, et al.: Robot-based hand motor therapy after stroke. Brain. 2008; 131(Pt 2): 425–437. PubMed Abstract | Publisher Full Text\n\nSawaki L, Butler AJ, Leng X, et al.: Constraint-induced movement therapy results in increased motor map area in subjects 3 to 9 months after stroke. Neurorehabil Neural Repair. 2008; 22(5): 505–513. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSawaki L, Butler AJ, Leng X, et al.: Differential patterns of cortical reorganization following constraint-induced movement therapy during early and late period after stroke: A preliminary study. NeuroRehabilitation. 2014; 35(3): 415–426. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLum PS, Burgar CG, Shor PC, et al.: Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke. Arch Phys Med Rehabil. 2002; 83(7): 952–959. PubMed Abstract | Publisher Full Text\n\nCarter AR, Shulman GL, Corbetta M: Why use a connectivity-based approach to study stroke and recovery of function? Neuroimage. 2012; 62(4): 2271–2280. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSilasi G, Murphy TH: Stroke and the connectome: how connectivity guides therapeutic intervention. Neuron. 2014; 83(6): 1354–1368. PubMed Abstract | Publisher Full Text\n\nConroy SS, Whitall J, Dipietro L, et al.: Effect of gravity on robot-assisted motor training after chronic stroke: a randomized trial. Arch Phys Med Rehabil. 2011; 92(11): 1754–1761. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLo AC, Guarino P, Krebs HI, et al.: Multicenter randomized trial of robot-assisted rehabilitation for chronic stroke: methods and entry characteristics for VA ROBOTICS. Neurorehabil Neural Repair. 2009; 23(8): 775–783. PubMed Abstract | Publisher Full Text\n\nLo AC, Guarino PD, Richards LG, et al.: Robot-assisted therapy for long-term upper-limb impairment after stroke. N Engl J Med. 2010; 362(19): 1772–1783. PubMed Abstract | Publisher Full Text\n\nRossini PM, Barker AT, Berardelli A, et al.: Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application. Report of an IFCN committee. Electroencephalogr Clin Neurophysiol. 1994; 91(2): 79–92. PubMed Abstract | Publisher Full Text\n\nWassermann EM, Fuhr P, Cohen LG, et al.: Effects of transcranial magnetic stimulation on ipsilateral muscles. Neurology. 1991; 41(11): 1795–9. PubMed Abstract | Publisher Full Text\n\nMcKeown MJ: Detection of consistently task-related activations in fMRI data with hybrid independent component analysis. Neuroimage. 2000; 11(1): 24–35. PubMed Abstract | Publisher Full Text\n\nCox RW: AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 1996; 29(3): 162–173. PubMed Abstract | Publisher Full Text\n\nGeyer S, Matelli M, Luppino G, et al.: Functional neuroanatomy of the primate isocortical motor system. Anat Embryol (Berl). 2000; 202(6): 443–474. PubMed Abstract | Publisher Full Text\n\nWang LE, Tittgemeyer M, Imperati D, et al.: Degeneration of corpus callosum and recovery of motor function after stroke: a multimodal magnetic resonance imaging study. Hum Brain Mapp. 2012; 33(12): 2941–2956. PubMed Abstract | Publisher Full Text\n\nLotze M, Markert J, Sauseng P, et al.: The role of multiple contralesional motor areas for complex hand movements after internal capsular lesion. J Neurosci. 
2006; 26(22): 6096–6102. PubMed Abstract | Publisher Full Text\n\nGerloff C, Bushara K, Sailer A, et al.: Multimodal imaging of brain reorganization in motor areas of the contralesional hemisphere of well recovered patients after capsular stroke. Brain. 2006; 129(Pt 3): 791–808. PubMed Abstract | Publisher Full Text\n\nCramer SC, Moore CI, Finklestein SP, et al.: A pilot study of somatotopic mapping after cortical infarct. Stroke. 2000; 31(3): 668–671. PubMed Abstract | Publisher Full Text\n\nWittenberg GF, Lovelace CT, Foster DJ, et al.: Functional neuroimaging of dressing-related skills. Brain Imaging Behav. 2014; 8(3): 335–45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoski L, Mernar TJ, Dobkin BH: Immediate and long-term changes in corticomotor output in response to rehabilitation: correlation with functional improvements in chronic stroke. Neurorehabil Neural Repair. 2004; 18(4): 230–249. PubMed Abstract | Publisher Full Text\n\nStinear CM, Barber PA, Smale PR, et al.: Functional potential in chronic stroke patients depends on corticospinal tract integrity. Brain. 2007; 130(Pt 1): 170–180. PubMed Abstract | Publisher Full Text\n\nWittenberg GF, Richards LG, Jones-Lush LM, et al.: Dataset 1 in: Predictors and brain connectivity changes associated with arm motor function improvement from intensive robotic practice in chronic stroke. F1000Research. 2016. Data Source" }
[ { "id": "15971", "date": "12 Sep 2016", "name": "Argye E. Hillis", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well-written and informative substudy of a thoughtfully designed clinical trial to improve motor function. The authors report imaging predictors of recovery before and after controlling for initial severity.\n\nMain criticism:\nIt was not clear that resting state sequences masked infarct. How many patients had infarct in M1. It would be unsurprising that there would be lower connectivity between M1 on the affected side and other areas if it is partially infarcted, and that would predict recovery.\n\nConclusions about M1 connectivity predicting recovery are too strong, since the results were not statistically significant, but only showed a trend (e.g. p = 0.06). Furthermore, it seems that they did not control for multiple comparisons, so these could have been found just by chance.", "responses": [ { "c_id": "2183", "date": "14 Sep 2016", "name": "George Wittenberg", "role": "Author Response", "response": "The resting state analysis was a very simple ROI based one. The ROI were registered by hand to M1 and if it was infarcted, the ROI included the infarct. In the the group, the strokes were predominantly subcortical and the analysis measured changes in connectivity, so this method was appropriate. The comment about the limitations are well-taken. 
We also introduced confusion on many levels by labeling two paragraphs \"predictive\" and \"correlative\". The predictive section presents data on longitudinal changes but doesn't state statistics, although it shows S.E. in the graph. It also perpetuates a common misuse of the term \"predictive.\" In fact, we did not find RSC data that predicted response to the intervention, although that was a goal of the study. This can be corrected in a version after other reviews." } ] }, { "id": "15975", "date": "19 Sep 2016", "name": "Rudiger Seitz", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors present a well-designed, multimodal study on the effect of robotic practice in 13 chronic stroke patients using transcranial magnetic stimulation (TMS), high resolution magnetic resonance imaging (MRI), diffusion tensor imaging and resting state BOLD imaging. Investigated variables were motor evoked potentials (MEPs) on the affected side, silent period after TMS of the contralesional motor cortex, fractional anisotropy (FA) of the ipsilesional internal capsule, and functional connectivity of motor cortex in the affected hemisphere using anatomically based regions of interest (ROIs). 
The main results were that MEPs were associated with better functional outcome, FA of the internal capsule was associated with better function at baseline, and BOLD in motor cortex was more correlated with other motor areas after training, of which resting connectivity between motor cortex and dorsal premotor cortex predicted recovery. There are some issues that need clarification.\n\nTable 1 shows that 6 patients received robot training, while 7 received intensive comparison therapy. At no other point is it said what the intensive comparison treatment is. In fact, the entire manuscript including the title argues for robot training and presents the data as if all patients received robot training. It is not stated if the two treatments resulted in the same or different motor function. Further, a formal comparison of the two groups with respect to the studied variables is lacking.\n\nMoreover, patient 1 differed from the other patients as he was the only one with an intracerebral hemorrhage and a subcortical location of the lesion. It should be added how many of the other patients also had subcortical and cortical infarct lesions, respectively.\n\nWere the imaging data analyzed in one centre or in the participating centres? Were the ROIs drawn by one of the authors or by different authors? What was the interobserver reliability?\n\nHow many MRI slices did the ROIs listed in the methods include?\n\nWhat is meant with \"predictive values of the MEP measures were approximately equal (page 5)\"? Please, be specific and provide the data.\n\nThe increase of connectivity with the superior parietal area is noteworthy, since this is a brain area with profound somatosensory function. 
The authors should provide information about the sensory deficits of their patients.\n\nFigure 4 should provide information about which patients are presented in parts B and C.", "responses": [ { "c_id": "2198", "date": "21 Sep 2016", "name": "George Wittenberg", "role": "Author Response", "response": "Thank you for the careful reading of this manuscript. We should be able to respond to all of the comments directly in the manuscript. But to immediately answer two of the comments: The intensive comparison therapy was described in other papers, and we will make that clearer and add a summary. We in no way wanted to give the impression that any changes were specifically related to the type of intensive therapy. That was a possibility, of course, but we did not find that to be true, and had too small a sample to have the power to do so unless it was a truly dramatic difference. All MRI analysis was done at one center and ROI drawn by the first author. It was a simple, consistent approach to the data." }, { "c_id": "2496", "date": "28 Feb 2017", "name": "George Wittenberg", "role": "Author Response", "response": "Please see the overall comments to readers. The specific responses to your comments include: Because of the size of the two groups and the lack of a significant difference between the two active treatments, the data were pooled. While we did take a look to see if there were intergroup differences, as expected, this was too small a sample. But we have clarified that all participants received \"intensive repetitive task practice\" rather than robot therapy.   A note about the outlier was made. Unfortunately it has been hard to track down all of the data regarding stroke location.   All ROI were drawn in one centre, by one author.   
Slice numbers per ROI can be calculated and so were not added to the paper: Since slice thickness was 4 mm for functional imaging and 2 mm for anatomical and DTI imaging, the 5 mm radius included 5 slices for anatomical scans and 3 slices for functional scans. The cerebellar ROI was four times larger, and so had 10 slices for the functional image.   \"What is meant with \"predictive values of the MEP measures were approximately equal (page 5)\"? Please, be specific and provide the data.\" – I don't see this in the manuscript - maybe in an earlier version?   Unfortunately we don't have detailed sensory information.   In Figure 4, Participant #1 is the reference and the other data are the mean." } ] }, { "id": "16609", "date": "24 Oct 2016", "name": "Sean Dukelow", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors present an interesting paper that examines neurophysiologic measurements in 13 chronic stroke survivors who completed robotic therapy or intensive interventions lasting either 6 weeks or 12 weeks. The study explores the use and predictive capabilities of MEPs, FA, and resting state fMRI. Further, there is some discussion of the ipsilateral silent period, but this was difficult to obtain in a number of subjects for technical reasons. 
Although we are enthusiastic about the study, we have some concerns over the manuscript in its existing format and put forward a number of questions/suggestions for the authors below.\n\nThe authors used a number of different measures – it would have been nice to see a hypothesis associated with each of these measures.\n\nCould the authors please be more clear on the rehabilitation intervention? We recognize that they do cite the trials from which the data was taken, but even a line or two discussing what went on in the robot vs the comp. groups would be helpful. It would also be helpful to demarcate which individual received what therapy in Figure 3.\n\nThe authors state that the ROI analysis for the resting state fMRI was performed without normalization. Those unfamiliar with this type of analysis may not understand why normalization is not required. Please add a sentence or two to provide justification.\n\nHow often did the authors note a change in the motor hotspot using TMS?\n\nWhy use a 5mm spherical region of interest for the PLIC as the PLIC is not a spherical structure? Some explanation of the reasoning for this would be helpful.\n\nWhat was the role of the single control subject who is mentioned in the methods? Please clarify.\n\nExactly how many subjects did the authors find/not find an ipsilateral silent period in?\n\nRe: Fractional Anisotropy (FA): R-squared = 0.48 is reported for the relationship of FA to baseline motor ability. It would be nice to see this in a scatter plot as the relationship is quite strong.\n\nThe manuscript might benefit from some discussion of why steeper MEP recruitment does not lead to better function.\n\nThe authors mention the technical limitations of using silent periods. For readers who are not familiar with this technique, a sentence or two briefly describing what those limitations are would be helpful. 
Further, the discussion of the importance of the silent period relative to what is known in the literature would be helpful.\n\nWe would like to see slightly more discussion about the limitations of an n=13 sample size. For instance, the result that those subjects with an MEP at baseline tended to do better appears to be driven by 3 subjects, based on Figure 3.\n\nMinor Concerns:\nMany abbreviations go undefined in the paper:\n\nLast paragraph of introduction: It would be appreciated if the authors provided the references for the two multicenter VA studies that they discuss. They should also define VA for readers.\n\nFigure 2 – the authors need to state what the abbreviations are in the text for the correlation matrices.\n\nFigure 6 – Please label your abbreviations.\n\nDataset 1 label: Please check your spelling “cronic strocke”\n\nIn the PDF version of the manuscript, the quality of the correlation matrices in Figure 2B appears to be low resolution.\n\nIntroduction: “In such patients the mechanisms of recovery may be qualitatively different and who have the most to grain with improved understanding of the mechanisms…” – this sentence appears to be incomplete.", "responses": [ { "c_id": "2495", "date": "28 Feb 2017", "name": "George Wittenberg", "role": "Author Response", "response": "Please see the overall comments to readers. The specific responses to your comments include: The hypotheses are stated in the introduction - added the hypothesis about functional connectivity. We elaborated a little more in response to this comment.   even a line or two discussing what went on in the robot vs the comp. groups would be helpful - added.   Demarcate which individual received what therapy in Figure 3 - still don't want to have comparisons between types of therapy; participants 9, 54, and 57 received conventional therapy.   Normalization issue explained.   Addressed hotspot with comment in methods.   This was a limitation of AFNI (now addressed in the paper).   Spherical ROI explained.   
Addressed numbers of silent periods found.   FA correlation with impairment can be added as a supplemental graph.   Thanks for the suggestion about recruitment curves. This has been added to the discussion.   There already was discussion of the requirement for cortical activation, but this has been increased.   MEP presence wasn't even a significant predictor, just a trend, so it didn't seem worthwhile pointing out. But we did need to add more about limitations. Minor concerns: The references are provided in the first part of the Methods. VA is now defined.   Abbreviations in Fig. 2 are explained.   Fig. 6 abbreviations are labelled.   \"Cronic strocke\" was a filename, but we can correct it if needed.   The source data for the correlation matrices had that resolution, but it didn't limit the information.   Thanks for noting the run-on sentence. This error was carried forward over many versions but was an awkward construction/splicing job that has now been fixed by splitting the sentence." } ] } ]
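As a side note on the slice-count arithmetic quoted in the author response above (2 mm anatomical/DTI slices, 4 mm functional slices, 5 mm ROI radius): the slice counts follow from dividing the ROI diameter by the slice thickness and rounding up. A minimal sketch, assuming the "four times larger" cerebellar ROI means four times the radius (20 mm); the function name is illustrative, not from the paper:

```python
import math

def slices_spanned(roi_radius_mm: float, slice_thickness_mm: float) -> int:
    """Number of slices a spherical ROI of the given radius spans."""
    return math.ceil(2 * roi_radius_mm / slice_thickness_mm)

# 5 mm ROI radius: anatomical/DTI slices are 2 mm, functional slices are 4 mm
print(slices_spanned(5, 2))   # 5 slices (anatomical/DTI)
print(slices_spanned(5, 4))   # 3 slices (functional)
# Cerebellar ROI, assumed here to be 4x the radius (20 mm), on functional slices
print(slices_spanned(20, 4))  # 10 slices
```

This reproduces the counts stated in the response (5, 3, and 10 slices) under those assumptions.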
1
https://f1000research.com/articles/5-2119
https://f1000research.com/articles/5-1008/v1
26 May 16
{ "type": "Opinion Article", "title": "The referential brain: why do some neurons learn and some do not?", "authors": [ "Vishal Bharmauria", "Lyes Bachatene", "Lyes Bachatene" ], "abstract": "Brain is phenomenally plastic and exhibits this capacity well into adulthood. Neuronal plasticity can be studied by using different adaptation protocols. Post-adaptation neurons typically show attractive and repulsive shifts even though challenged by the same adapter. Using orientation columns as a paradigm, we argue and suggest that repulsive shifts are essentially fundamental to preserve the functional organization of the cortex, and thus, maintaining the functional homeostasis of the brain.", "keywords": [ "referential brain", "repulsive shifts", "attractive shifts", "plasticity", "visual cortex" ], "content": "\n\nIn daily life we use reference points to evaluate and analyse options around. Brain exhibits phenomenal plasticity during youth and even adulthood that helps animals adapt to different experiences (Bachatene et al., 2015b; Dragoi et al., 2001; Hensch, 2005; Kohn, 2007; Turrigiano & Nelson, 2004).\n\nIn general, neurons in the brain are selective to certain features. For example, a primary visual neuron (V1) is selective to a range of orientations (Hubel & Wiesel, 1962; Hubel & Wiesel, 1968; Swindale, 1998). Neuronal plasticity can be studied by employing several techniques and protocols (Kohn, 2007; Turrigiano & Nelson, 2004) such as visual deprivation and adaptation. Adaptation refers to the imposition of a non-optimal stimulus (adapter) within neuronal receptive fields for specific periods of time (usually several minutes) to alter their response behaviour (Kohn, 2007). 
Indeed, using various techniques in different brain areas, the effects of adaptation on various neuronal properties have been investigated, such as orientation selectivity (Bachatene et al., 2015b; Dragoi et al., 2000; Gutnisky & Dragoi, 2008; Ghisovan et al., 2009), motion (Kohn & Movshon, 2003), spatial frequency (Bouchard et al., 2008; Marshansky et al., 2011), and contrast (Baccus & Meister, 2002). After an adaptation protocol, neurons typically show two types of behavioural shift patterns: attraction and repulsion. An attractive shift is the displacement of a tuning curve toward the adapter following adaptation, whereas a repulsive shift corresponds to the movement of the tuning curve away from the adapter. Some neurons refract the adapter and do not change their selectivity (Jeyabalaratnam et al., 2013).\n\nInterestingly, contingent upon the duration of the stimulus, neurons may predominantly shift in one characteristic fashion. For example, a 3-min adaptation (Dragoi et al., 2000) of visual neurons leads to a majority of repulsive shifts, whereas a prolonged adaptation (> 6 min) potentiates attractive shifts (Bachatene et al., 2015b; Cattan et al., 2014; Ghisovan et al., 2009). Why do some neurons learn and some do not, even though challenged by the same adapter? Does this apply to all brain regions?\n\nHere we put forth a concept through our recent results on the functional reprogramming of orientation columns in the visual cortex (Bachatene et al., 2015b). In that report, we showed that neurons at the adapted and non-adapted cortical sites exhibited a similar pattern of shifts. It is particularly interesting that the non-adapted neurons (not challenged by the adapter) also displayed changes in their orientation selectivity (they exhibited both types of shifts). This is illustrated as an example in Figure 1. The upper row displays a hypothetical layout of orientation columns in control conditions. 
The activities of nine neurons under observation, tuned distinctly at each location, were recorded simultaneously (neurons at each location are linked by a black triangle; all three sites had non-overlapping receptive fields). After the adaptation procedure, neurons at each site changed their selectivity, irrespective of the fact that only neurons tuned to 90° (pre-adaptation, red-columned neurons, middle triangle) were challenged with an adapter (157.5° orientation, purple bar). Notably, post-adaptation, two neurons (middle triangle, purple neurons) at the adapted site exhibited an attractive shift whereas one neuron displayed repulsion. In other words, only two neurons learnt the adapter whereas the third neuron swayed away from this behaviour. Interestingly, non-adapted neurons in other columns (left and right triangles) also displayed orientation selectivity shifts following adaptation.\n\nThe upper row corresponds to the orientation layout of columns in the control (pre-adaptation) condition. The triangles show three distinct groups of neurons (with non-overlapping receptive fields) under observation within different columns. After an adaptation procedure (neurons in the red column, 90°, are adapted to 157.5°), the orientation layout of the columns is reconfigured (lower row). It is to be underlined that, although only neurons corresponding to the 90° column were challenged by an adapter, neurons in other columns (non-adapted) also changed their selectivity. Two out of three neurons at the adapted site changed their selectivity toward the adapter, whereas one neuron shifted its selectivity away from the adapter. The repulsive neurons may have an important role to play in maintaining the functional dogma of orientation processing in the visual cortex. 
Note: Each colored sphere represents a neuron.\n\nMany reports (Bachatene et al., 2013; Jia et al., 2010; Wertz et al., 2015) have shown and suggested that neuronal dendrites contain synapses corresponding to all the orientations. Within this framework, after a prolonged adaptation, the synapses representing the adapter would strengthen and become active, thus giving rise to a novel selectivity for the neuron. Therefore, new local networks are framed, potentiating a changed column. Within this dynamic interplay, most local neurons may wire together and shift their responses in conjunction with each other toward the adapter, whereas a minority may deflect away (repulsion) to participate in conservation of the columnar dogma. A few neurons, termed refractory neurons, may remain unaffected. Once this reference is set, other columns would systematically tilt to achieve their ultimate destiny without leaving an orientation hole. Therefore, repulsive neurons play a role equal to that of attractive neurons in the organizing principles of functional sensory processing. Biologically, this implies a homeostatic phenomenon allowing a sensory feature to conserve a basic state that will allow further plastic modifications (Bachatene et al., 2015a; Turrigiano & Nelson, 2004). Thus, the cortical column is reframed with an equal representation of each optimal stimulus.\n\nFrom the above paradigm, it is suggested that the brain, especially the cortex, functions on such organizing principles. In fact, similarly to the visual cortex, distinct functional maps are present in other brain regions too. For example, the auditory cortex may also be functionally reorganized in such a fashion (Nahum et al., 2013). Although neurons are arranged in a salt-and-pepper fashion in lower vertebrates, they still exhibit selectivity to stimulus properties and may also reorganize through connectivity principles similar to those of higher vertebrates. 
Therefore, we suggest that neuronal functioning is referential in nature. This eventually facilitates the brain’s ability to modify itself easily (plasticity), to form novel networks, and most importantly, to maintain the functional homeostasis.", "appendix": "Author contributions\n\n\n\nVB wrote the manuscript. LB helped with critical remarks and manuscript writing. Both authors agreed to the final content of the article.\n\n\nCompeting interests\n\n\n\nThe authors declare no competing financial interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nBaccus SA, Meister M: Fast and slow contrast adaptation in retinal circuitry. Neuron. 2002; 36(5): 909–919. PubMed Abstract | Publisher Full Text\n\nBachatene L, Bharmauria V, Cattan S, et al.: Fluoxetine and serotonin facilitate attractive-adaptation-induced orientation plasticity in adult cat visual cortex. Eur J Neurosci. 2013; 38(1): 2065–2077. PubMed Abstract | Publisher Full Text\n\nBachatene L, Bharmauria V, Cattan S, et al.: Summation of connectivity strengths in the visual cortex reveals stability of neuronal microcircuits after plasticity. BMC Neurosci. 2015a; 16: 64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBachatene L, Bharmauria V, Cattan S, et al.: Reprogramming of orientation columns in visual cortex: a domino effect. Sci Rep. 2015b; 5: 9436. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBouchard M, Gillet PC, Shumikhina S, et al.: Adaptation changes the spatial frequency tuning of adult cat visual cortex neurons. Exp Brain Res. 2008; 188(2): 289–303. PubMed Abstract | Publisher Full Text\n\nCattan S, Bachatene L, Bharmauria V, et al.: Comparative analysis of orientation maps in areas 17 and 18 of the cat primary visual cortex following adaptation. Eur J Neurosci. 2014; 40(3): 2554–2563. 
PubMed Abstract | Publisher Full Text\n\nDragoi V, Rivadulla C, Sur M: Foci of orientation plasticity in visual cortex. Nature. 2001; 411(6833): 80–86. PubMed Abstract | Publisher Full Text\n\nDragoi V, Sharma J, Sur M: Adaptation-induced plasticity of orientation tuning in adult visual cortex. Neuron. 2000; 28(1): 287–298. PubMed Abstract | Publisher Full Text\n\nGhisovan N, Nemri A, Shumikhina S, et al.: Long adaptation reveals mostly attractive shifts of orientation tuning in cat primary visual cortex. Neuroscience. 2009; 164(3): 1274–1283. PubMed Abstract | Publisher Full Text\n\nGutnisky DA, Dragoi V: Adaptive coding of visual information in neural populations. Nature. 2008; 452(7184): 220–4. PubMed Abstract | Publisher Full Text\n\nHensch TK: Critical period plasticity in local cortical circuits. Nat Rev Neurosci. 2005; 6(11): 877–888. PubMed Abstract | Publisher Full Text\n\nHubel DH, Wiesel TN: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol. 1962; 160(1): 106–154. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHubel DH, Wiesel TN: Receptive fields and functional architecture of monkey striate cortex. J Physiol. 1968; 195(1): 215–243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJeyabalaratnam J, Bharmauria V, Bachatene L, et al.: Adaptation shifts preferred orientation of tuning curve in the mouse visual cortex. PLoS One. 2013; 8(5): e64294. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJia H, Rochefort NL, Chen X, et al.: Dendritic organization of sensory input to cortical neurons in vivo. Nature. 2010; 464(7293): 1307–1312. PubMed Abstract | Publisher Full Text\n\nKohn A: Visual adaptation: physiology, mechanisms, and functional benefits. J Neurophysiol. 2007; 97(5): 3155–3164. PubMed Abstract | Publisher Full Text\n\nKohn A, Movshon JA: Neuronal adaptation to visual motion in area MT of the macaque. Neuron. 2003; 39(4): 681–691. 
PubMed Abstract | Publisher Full Text\n\nMarshansky S, Shumikhina S, Molotchnikoff S: Repetitive adaptation induces plasticity of spatial frequency tuning in cat primary visual cortex. Neuroscience. 2011; 172: 355–365. PubMed Abstract | Publisher Full Text\n\nNahum M, Lee H, Merzenich MM: Principles of neuroplasticity-based rehabilitation. Prog Brain Res. 2013; 207: 141–171. PubMed Abstract | Publisher Full Text\n\nSwindale NV: Orientation tuning curves: empirical description and estimation of parameters. Biol Cybern. 1998; 78(1): 45–56. PubMed Abstract | Publisher Full Text\n\nTurrigiano GG, Nelson SB: Homeostatic plasticity in the developing nervous system. Nat Rev Neurosci. 2004; 5(2): 97–107. PubMed Abstract | Publisher Full Text\n\nWertz A, Trenholm S, Yonehara K, et al.: PRESYNAPTIC NETWORKS. Single-cell-initiated monosynaptic tracing reveals layer-specific cortical network modules. Science. 2015; 349(6243): 70–74. PubMed Abstract | Publisher Full Text" }
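The attractive/repulsive/refractory taxonomy used throughout the article above can be made concrete with a small classifier over preferred orientations. This is an illustrative sketch, not the authors' analysis code; orientation is treated as circular with a 180° period, and the example values (a neuron tuned to 90° with an adapter at 157.5°) come from the figure description:

```python
def orientation_distance(a_deg: float, b_deg: float) -> float:
    """Shortest angular distance in orientation space (180-degree period)."""
    return abs((a_deg - b_deg + 90) % 180 - 90)

def classify_shift(pref_before: float, pref_after: float, adapter: float) -> str:
    """Label a tuning-curve shift relative to the adapter orientation."""
    before = orientation_distance(pref_before, adapter)
    after = orientation_distance(pref_after, adapter)
    if after < before:
        return "attractive"   # tuning moved toward the adapter
    if after > before:
        return "repulsive"    # tuning moved away from the adapter
    return "refractory"       # preferred orientation unchanged

# Neuron tuned to 90 deg, adapter at 157.5 deg (as in the figure)
print(classify_shift(90, 110, 157.5))  # attractive
print(classify_shift(90, 80, 157.5))   # repulsive
print(classify_shift(90, 90, 157.5))   # refractory
```

The modular distance handles wrap-around (e.g. 170° and 10° are only 20° apart in orientation space), which a plain absolute difference would get wrong.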
[ { "id": "14455", "date": "20 Jun 2016", "name": "Jose Fernanado Maya-Vetencourt", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript by Bharmauria & Bachatene is a provocative opinion article that claims a role for a particular set of visual cortex neurons in mediating/maintaining some functional aspects of the organization of the visual cortex in response to external stimuli. The script is interesting, well written, and reviews the pertinent literature. It is has potential implications in terms of neural circuitries computation of sensory information in the brain.\n\nUsing orientation columns as an experimental model, the authors propose that repulsive neurons may play an important role in the orientation processing by visual cortical neurons. The discussion is based on the emerging view that neuronal dendrites possess synapses corresponding to all orientation columns. The authors, however, fail to provide additional anatomical insights that may support their hypothesis. For instance, when thinking about neurons that learn orientation or repulsions shifts after adaptation, horizontal connections between spatially separated cortical areas represented by inhibitory interneurons that may ultimately influence synapses of dendrites of single units, immediately come to one’s mind. Can the authors rule out this possibility? 
The work might also gain insights from a description of experimental models designed to address the potential role of repulsive shifts in the preservation of the functional organization in the visual cortex.", "responses": [ { "c_id": "2516", "date": "28 Feb 2017", "name": "Vishal Bharmauria", "role": "Author Response", "response": "We thank the reviewer for his constructive comments on the manuscript. As suggested by the reviewer, we have added a paragraph that relates to the inhibitory neurons and horizontal connections in the cortex. Moreover, the role of repulsive neurons in maintaining the columnar organization after adaptation protocols has also been added. We have also added a few references that suggest the role of repulsive neurons." } ] }, { "id": "19937", "date": "06 Feb 2017", "name": "Stanislaw Glazewski", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nBharmauria and Bachatene propose that the neurones termed “repulsive” (as they change their preferred orientation while challenged with the adaptor but do not follow it) are fundamental “to preserve the functional organisation of the cortex” or, as stated elsewhere, “to participate in conservation of the columnar dogma”. I recognise neither the arguments for this concept nor the evidence that it is based on. Why would the presence of refractory neurones not be sufficient if any “columnar dogma” or “the reference” is required? 
Additionally, as the preferred orientation of neurones changes with the duration of adaptor presentation, it is plausible that the repulsive and even refractory neurones are still shifting to the adaptor’s orientation. If the shifting is never absolute, the remaining refractory and repulsive neurones could just have no function (noise). In any case, the eventual function of refractory and repulsive neurones can at present be tested in a small-scale experiment, as the appropriate technology is available. On the other hand, it would be much easier to test at first whether the repulsive and refractory neurones really exist (a long-adaptation experiment) and, if they do, whether they are fixed at these functions (a series of presentations of different adaptors to the same population of neurones). The results should aid the concepts, which are premature at present.", "responses": [ { "c_id": "2515", "date": "28 Feb 2017", "name": "Vishal Bharmauria", "role": "Author Response", "response": "We thank the reviewer for his comments on the manuscript. The comments of the reviewer have helped to clarify the manuscript further. We have added a paragraph on the role of repulsive shifts. In fact, we have suggested this concept based on our recent findings (Bachatene et al. 2015b) on the visual cortex of the anaesthetized cat that such a phenomenon may be occurring after adaptation procedures. We would like to emphasize that in addition to the neurons at the adapted site, neurons at the non-adapted sites also changed their selectivity. In fact, many reports (these papers have been cited in the article) have shown that refractory neurons exist in the cortex; however, refractory neurons and repulsive neurons could even reverse/change their orientation selectivity shift-directions toward the adapter after repetitive adaptations (Ghisovan et al. 2008). 
Indeed, neurons may be fixed at these functions, but only for a certain period of adaptation, as they may change their properties when the adapter duration is changed (Ghisovan et al. 2008). We have even tested shift amplitudes from 6 min to 24 min of adaptation (Bachatene et al. 2015), and the tested neurons always fell into three categories: attractive, repulsive and refractory neurons. However, longer adaptation protocols may potentiate attractive shifts in general. Although the concept may be premature at present, if one were to imagine the simultaneously recorded neurons from a column as an ensemble of neurons, the role of refractory neurons could be hypothesized as follows: there is plenty of evidence nowadays that in an ensemble of neurons, some neurons are strongly embedded in the circuits whereas others change their properties due to synaptic changes occurring at their dendrites. Such strongly embedded neurons may be compared to the refractory neurons, whereas others undergo change in their properties through short-term plasticity. Indeed, further experiments could be designed to explore the exact roles of refractory neurons in circuits. However, within the framework of the columnar organization, they seem to be important in preserving this functional dogma. References: Ghisovan et al. (2008) Visual cells remember earlier applied target: plasticity of orientation selectivity. Bachatene et al. (2015) Summation of connectivity strengths in the visual cortex reveals stability of neuronal microcircuits after plasticity. BMC Neurosci. 16:64. doi:10.1186/s12868-015-0203-1. Cossell et al. (2015) Functional organization of excitatory synaptic strength in primary visual cortex. Barth and Poulet (2012) Experimental evidence for sparse firing in the neocortex." } ] } ]
1
https://f1000research.com/articles/5-1008
https://f1000research.com/articles/5-2456/v1
05 Oct 16
{ "type": "Research Note", "title": "Kv4.2 knockout mice display learning and memory deficits in the Lashley maze", "authors": [ "Gregory D. Smith", "Nan Gao", "Joaquin N. Lugo" ], "abstract": "Background: Potassium channels have been shown to be involved in neural plasticity and learning. Kv4.2 is a subunit of the A-type potassium channel. Kv4.2 channels modulate excitability in the dendrites of pyramidal neurons in the cortex and hippocampus. Deletion of Kv4.2 results in spatial learning and conditioned fear deficits; however, previous studies have only examined deletion of Kv4.2 in aversive learning tests. Methods: For the current study, we used the Lashley maze as an appetitive learning test. We examined Kv4.2 wildtype (WT) and knockout (KO) mice in the Lashley maze over 4 days during adulthood. The first day consisted of habituating the mice to the maze. The mice then received five trials per day for the next 3 days. The number of errors and the time to the goal box were recorded for each trial. The goal box contained a weigh boat with an appetitive reward (gelatin with sugar). There was an intertrial interval of 15 minutes. Results: We found that Kv4.2 KO mice committed more errors across the trials compared to the WT mice, p<0.001. There was no difference in the latency to find the goal box over the period. Discussion: Our finding that deletion of Kv4.2 resulted in more errors in the Lashley maze across 15 trials contributes to a growing body of evidence that Kv4.2 channels are significantly involved in learning and memory.", "keywords": [ "Kv4.2", "A type current", "hippocampus", "Lashley maze", "learning", "potassium ion channel" ], "content": "Introduction\n\nKv4.2 is a subunit of the A-type potassium channel, which mediates the excitability of pyramidal neurons in the cortex and hippocampal dendrites1–3. A-type currents regulate cell firing by attenuating action potentials and reducing excitation4–8. 
Kv4.2 localization in the pyramidal cell dendrites is dependent on the membrane-associated guanylate kinase protein (PSD-95)9,10. When the Kv4.2 subunit is genetically deleted, the A-type current in the CA1 pyramidal cell dendrites of the hippocampus is almost entirely removed11. Disruption of Kv4.2 has been associated with both epilepsy12,13 and autism spectrum disorder3.\n\nKv4.2 knockout (KO) mice have a reduction in the A-type current and their threshold for long term potentiation (LTP) is also lowered, resulting in changes in synaptic plasticity11. Previous research has shown that Kv4.2 KO mice have impaired spatial learning in the Morris water maze (MWM) and a deficit in contextual learning in fear-conditioning14,15. However, these tasks are aversive and stress could contribute to some of the learning deficits initially found. For this experiment, we used appetitive learning to examine Kv4.2 KO performance in the Lashley maze, which is a low-stress learning task that does not rely on aversive stimuli16,17.\n\n\nMaterials and methods\n\nAnimals: The mice used for this study were Kv4.2 wildtype (WT) and KO adult males (postnatal day 60) that were generated on the 129S6/SvEv background, which had been bred for over 10 generations. All mice were bred in the Baylor University animal facility. For this study, heterozygous parents were bred to obtain both KO and WT mice and both male and female mice were used. All animals were housed in Baylor University’s animal facility on a 14-hour light/10-hour dark cycle at 22°C. Mice were all housed with sex-matched littermates following weaning. All mice were given ad libitum access to food and water. All testing and housing complied with the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals. 
All protocols were approved by the Baylor University Animal Care and Use Committee (Animal Assurance Number A3948-01).\n\nMaze and procedure: The details of the maze construction and procedure can be found in a previous study16. The maze was constructed out of 0.25 cm thick black acrylic plastic and is 60 cm × 28 cm with 16 cm tall walls. The maze had four lanes that were evenly spaced with an additional start (area A) and goal box (area N). The start and goal boxes were 12 cm × 7.25 cm and the entrance began 12 cm from the edge of the maze. The entrance to the boxes was 6 cm wide. Doors 1, 2, and 3 were all 4 cm wide and began 12 cm from the edge of the maze. A 5% gelatin solution in double distilled water with 1.25% sucrose was prepared. The mixture was stored at 5°C until use on training and testing days. The testing procedure for each day is shown in Figure 1.\n\nOn day 1, the test mice were habituated to each of the chambers of the maze. For each mouse, a weigh boat containing a small amount of the gelatin was placed in the goal box (area N). The mice began habituation in section BCD with door 1 and door A blocked and were allowed to explore for 3 minutes. The mice were then moved to section GFE with doors 1 and 2 blocked, and again allowed to explore for 3 minutes. The same was then repeated in area HIJ for another 3 minutes. Finally, the mice were moved to area MLK with door 3 and door N blocked and allowed to explore for 5 minutes. The apparatus was cleaned using 30% isopropanol between each mouse and a new weigh boat with fresh gelatin was used for each mouse. On day 2, a fresh weigh boat containing a small amount of gelatin was placed in area N and the test mouse was placed in area A. The time taken and the path used to reach the goal box were recorded. The number of repeated sections the mouse entered on the way to the goal was recorded. 
If the mouse did not reach the end after 5 minutes, it was guided to the goal using a piece of acrylic plastic used to block the doors, to prevent backtracking and wrong turns. Each mouse received 5 trials, one every 15 minutes. The same procedures were then repeated on days 3 and 4 for a total of 15 trials per mouse.\n\nStatistical analysis: All statistical analyses were done using Prism 6 (GraphPad Software, La Jolla, CA). Two-way repeated-measures ANOVAs were used to analyze these data. Separate independent t-tests were performed when an interaction was found.\n\n\nResults\n\nThe WT mice committed fewer errors when compared to the KO mice in the maze over the 15 trials, F(1, 18) = 11.9, p<0.001 (Figure 2A). There was a significant effect of maze learning over the trials, F(14, 252) = 12.9, p < 0.001. There was no interaction between groups over time, F(14, 252) = 0.9, p = 0.52. There was no difference between Kv4.2 WT and KO mice in their time to complete the maze, F(1, 18) = 0.01, p = 0.92 (Figure 2B). There was a significant decrease for both groups in the time to find the end of the maze across trials, F(14, 252) = 4.8, p < 0.001. There was a group over time interaction, F(14, 252) = 3.1, p < 0.01. We ran separate t-tests over the 15 trials and found a significant difference only on the first trial, t(18) = 2.2, p < 0.05.\n\nKO and WT mice had no difference in time to completion of the Lashley maze, but there was a significant effect of genotype on the number of errors, with the WT mice committing fewer errors than the KO mice. A. Number of errors committed by the mice across the 15 trials. B. Time to completion of the Lashley maze across the 15 trials; there was a group × time interaction, and an independent t-test revealed a significant difference on the first trial only. No other differences were found. WT n = 11, KO n = 9. 
* = p < 0.05; *** = p < 0.001.\n\n\nDiscussion\n\nKv4.2 wildtype and knockout mice demonstrated improvement in the Lashley maze by showing a reduction in the number of errors to find the goal box. However, the Kv4.2 KO mice committed more errors across the 15 trials compared to WT mice. One important consideration is that there was no difference between groups when examining the time to complete the maze. Kv4.2 KO mice required more time at the first trial, but the time to complete the maze was the same between groups for the remainder of the trials. The latency data suggest that Kv4.2 KO mice were not less active, which is in line with our previous study where we did not observe a difference in locomotor activity in the open field test14.\n\nThe results from the Lashley maze complement previous studies that reported spatial learning deficits in the MWM and contextual learning deficits in the delay fear conditioning test for Kv4.2 KO mice14,15. One of the benefits of the Lashley maze is that the impact of age and sensory abilities is reduced. Impaired vision will reduce the ability of the subject to find the hidden platform in the MWM and impaired hearing can attenuate the ability of the subject to associate a tone with an aversive shock. This is important as there have been several reports that suggest ion channels may contribute to aging-related impairment18,19. Additional sensory tests would need to be performed to examine baseline sensory abilities if older subjects are used in behavioral experiments, or another approach would be to use the Lashley maze. The low induction of stress makes the maze a beneficial test in models that have alterations in anxiety or age-related impairments which could account for differences seen in other more aversive learning tests.\n\n\nData availability\n\nF1000Research: Dataset 1. 
Data for Kv4.2 knockout mice displaying learning and memory deficits in the Lashley maze, 10.5256/f1000research.9664.d13719320", "appendix": "Author contributions\n\n\n\nGS, NG, and JNL were involved in the project design. GS and NG collected the data; GS, NG, and JNL analyzed the data; GS and JNL wrote the paper, all authors reviewed the paper for submission.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nSupported by intramural funds from Baylor University Research Council.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nCarrasquillo Y, Burkhalter A, Nerbonne JM: A-type K+ channels encoded by Kv4.2, Kv4.3 and Kv1.4 differentially regulate intrinsic excitability of cortical pyramidal neurons. J Physiol. 2012; 590(16): 3877–3890. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBirnbaum SG, Varga AW, Yuan LL, et al.: Structure and function of Kv4-family transient potassium channels. Physiol Rev. 2004; 84(3): 803–833. PubMed Abstract | Publisher Full Text\n\nGuglielmi L, Servettini I, Caramia M, et al.: Update on the implication of potassium channels in autism: K+ channelautism spectrum disorder. Front Cell Neurosci. 2015; 9: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAmberg GC, Koh SD, Imaizumi Y, et al.: A-type potassium currents in smooth muscle. Am J Physiol Cell Physiol. 2003; 284(3): C583–C595. PubMed Abstract | Publisher Full Text\n\nHoffman DA, Magee JC, Colbert CM, et al.: K+ channel regulation of signal propagation in dendrites of hippocampal pyramidal neurons. Nature. 1997; 387(6636): 869–875. PubMed Abstract | Publisher Full Text\n\nMartina M, Schultz JH, Ehmke H, et al.: Functional and molecular differences between voltage-gated K+ channels of fast-spiking interneurons and pyramidal neurons of rat hippocampus. J Neurosci. 1998; 18(20): 8111–8125. 
PubMed Abstract\n\nJohnston D, Hoffman DA, Magee JC, et al.: Dendritic potassium channels in hippocampal pyramidal neurons. J Physiol. 2000; 525(Pt 1): 75–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCai X, Liang CW, Muralidharan S, et al.: Unique roles of SK and Kv4.2 potassium channels in dendritic integration. Neuron. 2004; 44(2): 351–364. PubMed Abstract | Publisher Full Text\n\nWong W, Schlichter LC: Differential recruitment of Kv1.4 and Kv4.2 to lipid rafts by PSD-95. J Biol Chem. 2004; 279(1): 444–452. PubMed Abstract | Publisher Full Text\n\nWong W, Newell EW, Jugloff DG, et al.: Cell surface targeting and clustering interactions between heterologously expressed PSD-95 and the Shal voltage-gated potassium channel, Kv4.2. J Biol Chem. 2002; 277(23): 20423–30. PubMed Abstract | Publisher Full Text\n\nChen X, Yuan LL, Zhao C, et al.: Deletion of Kv4.2 gene eliminates dendritic A-type K+ current and enhances induction of long-term potentiation in hippocampal CA1 pyramidal neurons. J Neurosci. 2006; 26(47): 12143–12151. PubMed Abstract | Publisher Full Text\n\nMonaghan MM, Menegola M, Vacher H, et al.: Altered expression and localization of hippocampal A-type potassium channel subunits in the pilocarpine-induced model of temporal lobe epilepsy. Neuroscience. 2008; 156(3): 550–562. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSingh B, Ogiwara I, Kaneda M, et al.: A Kv4.2 truncation mutation in a patient with temporal lobe epilepsy. Neurobiol Dis. 2006; 24(2): 245–253. PubMed Abstract | Publisher Full Text\n\nLugo JN, Brewster AL, Spencer CM, et al.: Kv4.2 knockout mice have hippocampal-dependent learning and memory deficits. Learn Mem. 2012; 19(5): 182–189. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLockridge A, Yuan LL: Spatial learning deficits in mice lacking A-type K+ channel subunits. Hippocampus. 2011; 21(11): 1152–6. 
PubMed Abstract | Publisher Full Text\n\nBressler A, Blizard D, Andrews A: Low-stress route learning using the Lashley III maze in mice. J Vis Exp. 2010; (39): pii: 1786. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLashley KS: Brain mechanisms and intelligence: A quantitative study of injuries to the brain. University of Chicago Press, 1929. Reference Source\n\nSimkin D, Hattori S, Ybarra N, et al.: Aging-Related Hyperexcitability in CA3 Pyramidal Neurons Is Mediated by Enhanced A-Type K+ Channel Function and Expression. J Neurosci. 2015; 35(38): 13206–13218. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOh MM, Simkin D, Disterhoft JF: Intrinsic Hippocampal Excitability Changes of Opposite Signs and Different Origins in CA1 and CA3 Pyramidal Neurons Underlie Aging-Related Cognitive Deficits. Front Syst Neurosci. 2016; 10: 52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith G, Gao N, Lugo J: Dataset 1 in: Kv4.2 Knockout Mice Display Learning and Memory Deficits in the Lashley Maze. F1000Research. 2016. Data Source" }
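The analysis pipeline described in the note (a repeated-measures ANOVA over trials, followed by separate independent t-tests when an interaction is found) can be sketched as follows. The error counts below are simulated, not the study's data, and only the per-trial follow-up t-test step is shown, using scipy's standard independent-samples test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical error counts: rows = mice, columns = 15 trials.
# KO mice are simulated with a higher mean error rate, mirroring the
# reported genotype effect; these are NOT the study's data.
wt_errors = rng.poisson(lam=np.linspace(6, 2, 15), size=(11, 15))
ko_errors = rng.poisson(lam=np.linspace(8, 4, 15), size=(9, 15))

# Follow-up step: independent-samples t-test on a single trial,
# as done in the paper once the ANOVA shows a group x time interaction.
t_stat, p_value = stats.ttest_ind(wt_errors[:, 0], ko_errors[:, 0])
print(f"trial 1: t = {t_stat:.2f}, p = {p_value:.3f}")
```

With n = 11 and n = 9 per group, this comparison has 18 degrees of freedom, matching the t(18) reported for the first-trial difference.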
[ { "id": "17911", "date": "24 Nov 2016", "name": "Peter Backx", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nNice succinct paper. Relevant.\n\nSuggested changes for the authors to consider:\nIn the methods, the authors should mention how errors were identified and quantified.\n\nFigure 2B. Is there any reason that the WT mice have such a pattern in the “time to completion” data?\n\nThe authors should provide a background on the expression patterns for Kv4.2 in mouse brains.", "responses": [ { "c_id": "2368", "date": "14 Dec 2016", "name": "Joaquin Lugo", "role": "Author Response", "response": "We would like to thank the reviewer for their comments. We have included our replies below but will include a revised version of the manuscript when we have received both reviews. Comment 1: We defined an error as an entry into a dead-end cul-de-sac zone (e.g., going from arm H to zone I; Fig. 1) or when the mouse travels back through a previously traveled arm of the maze (e.g., going from arm L to arm I; Fig. 1). Comment 2: We are not sure why we observed this pattern. It may be that the wildtype were more motivated on the first trial of each block. We will have to explore this behavioral change more in future studies. Comment 3: The highest levels of Kv4.2 are found in the CA1 of the hippocampus with less expression in the CA3 and dentate gyrus1. The channels are localized to the somatodendritic regions of the hippocampal dendrites2-4. References: Rhodes, K. J. et al. 
KChIPs and Kv4 alpha subunits as integral components of A-type potassium channels in mammalian brain. J Neurosci 24, 7903-7915 (2004). Sheng, M., Tsaur, M.-L., Jan, Y. N. & Jan, L. Y. Subcellular segregation of two A-type K+ channel proteins in rat central neurons. Neuron 9, 271-284 (1992). Maletic-Savatic, M., Lenn, N. J. & Trimmer, J. S. Differential spatiotemporal expression of K+ channel polypeptides in rat hippocampal neurons developing in situ and in vitro. J Neurosci 15, 3840-3851 (1995). Serodio, P. & Rudy, B. Differential expression of Kv4 K+ channel subunits mediating subthreshold transient K+ (A-type) currents in rat brain. J Neurophysiol 79, 1081-1091 (1998)." } ] }, { "id": "19280", "date": "13 Jan 2017", "name": "Richard Brown", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper tests male Kv4.2 KO mice in the Lashley type III maze at 60+ days of age.\n\nIt is well-written and I have only a few comments.\nWere mice pre-exposed to the gelatin-sugar reward before the tests to prevent neophobia?\n\nThe figure caption for Figure 2 is garbled and should be reorganized.\n\nBoth groups of mice reach the same asymptote in terms of errors by day 15, so although the KO mice made more errors during training, they reached the same endpoint. What does this mean? What type of errors did they make?\n\nThe first trial effect in Figure 2B occurs only for the WT mice but not the KO mice. Why should this occur? 
What was different about the first trial each day?\n\nWere mice food deprived at all?\n\nHow much of the reward did they eat?\n\nWhen I look at references 14 and 15, there is the suggestion that the KO mice are slower to develop a spatial search strategy and use more procedural strategies. Is there any evidence of this in the Lashley III maze?", "responses": [] } ]
1
https://f1000research.com/articles/5-2456
https://f1000research.com/articles/6-27/v1
09 Jan 17
{ "type": "Research Article", "title": "Age-specific acceleration in malignant melanoma", "authors": [ "Brian L Diffey", "Steven A Frank" ], "abstract": "Background: The overall incidence of melanoma has increased steadily for several years. The relative change in incidence at different ages has not been fully described. Objective: To describe how incidence at different ages has changed over time and to consider what aspects of tumour biology may explain the observed pattern of change in incidence. Methods: The slope of incidence vs age measures the acceleration of cancer incidence with age. We described the pattern of change over time in the overall incidence of melanoma, as well as in acceleration. We used data for males and females from 3 different countries in the 17 sequential 5-year birth-cohort categories from 1895-99 to 1975-79, from which we derived the incidence patterns. Results: Over time, there has been a tendency for the overall incidence of melanoma to increase and for the acceleration (slope) of the age-incidence curves to decline. The changing patterns of melanoma incidence and acceleration differ between males and females and between the countries analysed. Conclusions: The observed pattern in melanoma of rising incidence and declining acceleration occurs in other cancers in response to genetic knockouts of mechanisms that protect against cancer. Perhaps some protective mechanism with respect to melanoma may be less effective now than in the past, possibly because of more intense environmental challenges.", "keywords": [ "melanoma epidemiology", "age-period-cohort effects", "sun exposure", "age-specific incidence" ], "content": "Introduction\n\nThe incidence of malignant melanoma has increased steadily over the past 50 years in predominantly fair-skinned populations1. 
The trends in incidence probably reflect changing prevalence of risk factors such as increased leisure time in sunny destinations, changing fashion and sunbed use, coupled with increased surveillance, early detection and changes in diagnostic criteria2,3.\n\nThe purpose of this paper is to study the particular ways in which incidence has changed over time. By analysing the 17 sequential 5-year birth cohorts from 1895–99 to 1975–79, we show that incidence has indeed increased steadily over time. Our analysis also shows that the particular patterns of increase in incidence differ between males and females and between different countries.\n\nIn addition to the overall increase in incidence, the relationship between age and incidence has also changed over time. We show that more recent cohorts typically have a disproportionate increase in cases at earlier ages.\n\nTo quantify the age-incidence relationship and its change over time, we study the rate of change of melanoma incidence with age4–6, which is the acceleration of cancer7. The patterns of acceleration provide interesting information about the forces acting on cancer progression at different ages8.\n\n\nMethods\n\nAge-specific incidence data on malignant melanoma (ICD-10; C43) for males and females were obtained for Great Britain9–11 for the period 1975–2014, the USA12 for the period 1975–2013 and Australia13 for the period 1982–2012. Incidence data for the USA relate to white people only.\n\nBecause the incidence of melanoma is increasing over time, age-specific rates are heavily influenced by the year of birth. To allow for this effect, we separated the 17 sequential 5-year birth-cohort categories from 1895–99 to 1975–79. For each cohort, we computed the 5-year average age-specific incidences for males and females aged 25 years and over.\n\nThe analyses were done with Microsoft Excel 2003.\n\n\nResults\n\nTable 1 shows the age-specific incidence for British males born during different time periods. 
The risk of malignant melanoma within each cohort rises consistently throughout life, as is true for most other cancers8. Figure 1 shows the age-incidence curves for both genders from Great Britain, the USA, and Australia for successive birth cohorts from 1895–99 to 1975–79.\n\nThe plots show the incidence for males (left) and females (right) in Great Britain (top row), USA (middle row), and Australia (bottom row) for the birth cohorts shown in the top legend. The plots do not show the intermediate decadal cohorts because of visual limitations in plotting the data. The plots are based on the summary given in Dataset 1, derived from the data and analyses in Dataset 2–Dataset 5.\n\nFrom Figure 1, it appears that, over time, there has been a tendency for the acceleration (slope) of the incidence curves to decline. The decline in acceleration over time seems particularly strong for certain datasets shown in Figure 1, for example, for British males. Other datasets, such as Australian females, seem not to show a clear trend. Thus, it is helpful to make a more direct analysis for the changing acceleration patterns between the different datasets.\n\nTo describe the tendency for age-specific acceleration to decline over birth cohorts, we calculated the following summary statistics separately for each of the 6 datasets represented by the 6 panels in Figure 2. In each successive pair of the 17 cohorts, we used data only for the common ages shared by the two cohorts. For those common ages, we estimated by linear regression the slope of the log-log age-incidence data, which estimates the age-specific acceleration. We then calculated the ratio of the accelerations for the more recent cohort relative to the prior cohort, and used the logarithm base 2 value of that ratio. 
A negative value means the more recent cohort has a lower slope.\n\nThe plots show the acceleration for males (left) and females (right) in Great Britain (top row), USA (middle row), and Australia (bottom row) for the birth cohorts shown in the top legend. The plots do not show the intermediate decadal cohorts because of visual limitations in plotting the data. The plots are based on the summary given in Dataset 6, derived from the data and analyses in Dataset 2–Dataset 5.\n\nThe average of the logarithms over the successive pairs of cohorts describes the geometric mean of the slopes, capturing the multiplicative tendency of the slope to change over cohorts. A negative value expresses an overall tendency for the slope to decline over time.\n\nTo gain a sense of the trend in acceleration over the successive cohorts, Table 2 shows, for each of the 6 datasets, the average logarithm for the ratio of successive slopes, and the standard error of that average. We also calculated the average logarithm divided by the standard error of that average, which gives the deviation from zero in terms of the number of standard errors of the mean.\n\nThe overall trends suggest that acceleration has declined over time, consistent with the general visual pattern shown in Figure 2. However, Table 2 shows that there is significant variation in the trends between genders and countries, also apparent from Figure 1 and Figure 2.\n\nIn every case the overall tendency over the cohorts has been for incidence to increase and acceleration (slope) to decline.\n\n\nDiscussion\n\nWe analysed the incidence of malignant melanoma in 6 separate datasets representing males and females from Great Britain, the United States, and Australia, locations with large differences in ambient solar ultraviolet radiation, which is regarded as a major aetiological factor in the disease. 
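The acceleration summary described above can be sketched numerically: fit the slope of log incidence against log age for each cohort, then take the base-2 logarithm of the ratio of successive slopes. The incidence values below are illustrative power laws, not the registry data.

```python
import numpy as np

def loglog_slope(ages, incidence):
    """Acceleration: slope of log incidence vs log age, by linear regression."""
    return np.polyfit(np.log(ages), np.log(incidence), 1)[0]

ages = np.array([30.0, 35.0, 40.0, 45.0, 50.0, 55.0, 60.0])
# Illustrative power-law cohorts (incidence ~ age^k); NOT registry data.
# The more recent cohort has higher incidence but a lower exponent --
# the pattern the paper reports.
older_cohort = 1e-6 * ages ** 5.0
recent_cohort = 5e-6 * ages ** 4.0

s_old = loglog_slope(ages, older_cohort)   # ~5.0
s_new = loglog_slope(ages, recent_cohort)  # ~4.0
log2_ratio = np.log2(s_new / s_old)        # negative => slope declined
print(round(s_old, 2), round(s_new, 2), round(log2_ratio, 3))
```

Averaging these log ratios over successive cohort pairs, as the paper does, gives the geometric-mean tendency of the slope to change over time; a negative average expresses an overall decline in acceleration.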
Because the incidence of melanoma has tended to increase over time, we calculated the patterns of incidence separately for 17 successive 5-year birth cohorts between 1895 and 1979 in each of the 6 datasets.\n\nIn our analysis, we calculated the age-specific incidence separately for each cohort. We also calculated the acceleration of cancer incidence with age for each cohort, in which acceleration is the rate of increase in incidence with age described by the slope of the log incidence vs log age plots.\n\nThe tendency over the cohorts has been for incidence to increase and acceleration to decline over time. Figure 1 summarizes the incidence patterns, in which the higher position of the curves with the passing of time expresses the rise in incidence. In that figure, one can also see a tendency for the slope to decline with the passing of time, which corresponds to a decline in acceleration. Figure 2 and Table 2 provide a more detailed summary of the way in which acceleration has tended to decline with the passing of time. The variation between sexes and between countries is clear but unexplained.\n\nIt is evident that observed incidence data on melanoma over time are subject to the influence of many factors that include period effects and cohort effects.\n\nPeriod effects can be regarded as resulting from external factors that affect all age groups equally at a particular calendar time and could be a consequence of economic, environmental or social factors; for example, educational awareness and prevention campaigns or depletion of the ozone layer resulting in higher levels of ambient ultraviolet radiation. 
Also, methodological changes in outcome definitions, classifications, or method of data collection, such as increased surveillance, early detection and changes in diagnostic criteria, could also lead to period effects in data.\n\nCohort effects, on the other hand, result from the unique experience/exposure of a particular group, or cohort, of subjects as they move across time leading to differences in the risk of outcome based on birth year. For example, following the widespread introduction of sunbeds for cosmetic tanning in the 1980s and their popularity amongst younger people, it would be expected that cohorts born after 1960 would be greater users of this form of UV exposure than cohorts born in earlier years.\n\nWe suggest here another possible contributory factor to the observed higher incidence and lower acceleration over time. In other cancer types, genetic mutations that predispose to cancer tend to cause that same coupling of rising incidence and declining acceleration8,14. In the multistage theory of cancer progression, a genetic mutation causes a rise in incidence and decline in acceleration because disease arises only after a certain number of restraining processes have broken down. By that theory, an inherited mutation moves an individual ahead one step at birth, reducing the number of restraining processes that must break down before disease develops15,16.\n\nFewer restraining processes mean faster progression to disease and a rise in incidence. Additionally, multistage theory predicts that the rise in incidence with age (acceleration) goes up with the number of restraining steps. Thus, a reduction in the number of restraining steps after mutation leads to a lower acceleration.\n\nIn the case of melanoma, it would be interesting to study whether a particular restraining process has become less effective over time, perhaps because of a change in environmental exposure patterns. 
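The multistage argument in the preceding paragraphs can be made concrete with a toy calculation. Under the classical Armitage–Doll approximation, incidence rises roughly as age^(n-1) for n rate-limiting (restraining) steps, so removing one step both raises incidence at every age and lowers the log-log slope, exactly the coupling described above. The rate parameter u below is arbitrary, chosen only for illustration.

```python
import numpy as np

def multistage_incidence(age, n, u=0.01):
    # Armitage-Doll approximation: incidence ~ n * u^n * age^(n-1)
    # for n rate-limiting steps each occurring at rate u (illustrative).
    return n * u**n * age ** (n - 1)

ages = np.array([40.0, 50.0, 60.0, 70.0])
i6 = multistage_incidence(ages, n=6)  # six restraining steps intact
i5 = multistage_incidence(ages, n=5)  # one step already abrogated

# One fewer step: higher incidence at every age shown...
assert np.all(i5 > i6)
# ...and a lower log-log slope (acceleration n-1: 4 instead of 5).
slope6 = np.polyfit(np.log(ages), np.log(i6), 1)[0]
slope5 = np.polyfit(np.log(ages), np.log(i5), 1)[0]
print(round(slope6, 2), round(slope5, 2))
```

This is the same qualitative signature, rising incidence with declining acceleration, that the paper observes across the successive birth cohorts.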
The abrogation of a protective process would, in theory, lead to the observed rise in incidence and decline in acceleration.\n\nA contributory factor to the uncertainties highlighted by our analysis could be linked to the limitations of the study. For example, information on body site, tumour thickness and stage, and histological subtype was absent. Although we selected three countries with well-established cancer registries, we cannot exclude the impact of long-term melanoma prevention strategies, especially in Australia, on incidence trends and, as acknowledged above, the major variation in acceleration found between countries could be the result of environmental and social influences rather than tumour biology.\n\n\nData availability\n\nDataset 1. Summary data for Figure 1, age-specific incidence of melanoma in different time periods and different countries. doi, 10.5256/f1000research.10491.d14874817\n\nDataset 2. Raw age-specific incidence data for Australia for different age groups in different years. Data obtained from Australian Institute of Health and Welfare (AIHW) 2016, Australian Cancer Incidence and Mortality (ACIM) books: Melanoma of the skin, Canberra: AIHW. Available at http://www.aihw.gov.au/acim-books. doi, 10.5256/f1000research.10491.d14874918\n\nDataset 3. Raw age-specific incidence data for Great Britain for different age groups in different years. Data obtained from (1) Office for National Statistics, Cancer Registration Statistics, England, available at http://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/datasets/cancerregistrationstatisticscancerregistrationstatisticsengland, (2) Welsh Cancer Intelligence and Surveillance Unit, Cancer in Wales, available at: http://www.wcisu.wales.nhs.uk/cancer-in-wales-1, and (3) Information and Statistics Division Scotland, Cancer Statistics, available at: http://www.isdscotland.org/Health-Topics/Cancer/Cancer-Statistics/Skin/. 
doi: 10.5256/f1000research.10491.d14875019\n\nDataset 4. Raw age-specific incidence data for USA for different age groups in different years. Data obtained from Surveillance Research Program of the Division of Cancer Control and Population Sciences, National Cancer Institute, available at: http://seer.cancer.gov/seerstat. doi: 10.5256/f1000research.10491.d14875120\n\nDataset 5. Transformation of raw data in Dataset 2–Dataset 4 into summary statistics used in the figures and analyses and in Table 1 and Table 2. doi: 10.5256/f1000research.10491.d14875221\n\nDataset 6. Summary data for Figure 2, age-specific acceleration of melanoma in different time periods and different countries. doi: 10.5256/f1000research.10491.d14875322", "appendix": "Author contributions\n\n\n\nBLD initiated the project, collected the data from public databases, did the analyses, and contributed to writing the manuscript. SAF contributed to the design of the analyses, the framing of the work in terms of age-specific acceleration, and the writing of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by National Science Foundation (USA) grant DEB-1251035 to SAF.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nErdmann F, Lortet-Tieulent J, Schüz J, et al.: International trends in the incidence of malignant melanoma 1953–2008 – are recent generations at higher or lower risk? Int J Cancer. 2013; 132(2): 385–400. PubMed Abstract | Publisher Full Text\n\nDennis LK: Analysis of the melanoma epidemic, both apparent and real: data from the 1973 through 1994 surveillance, epidemiology, and end results program registry. Arch Dermatol. 1999; 135(3): 275–80. PubMed Abstract | Publisher Full Text\n\nde Vries E, Coebergh JW: Cutaneous malignant melanoma in Europe. Eur J Cancer. 2004; 40(16): 2355–66. 
PubMed Abstract | Publisher Full Text\n\nDoll R: The age distribution of cancer: implications for models of carcinogenesis. J Roy Stat Soc: Series A (General). 1971; 134(2): 133–166. Publisher Full Text\n\nCook PJ, Doll R, Fellingham SA: A mathematical model for the age distribution of cancer in man. Int J Cancer. 1969; 4(1): 93–112. PubMed Abstract | Publisher Full Text\n\nMoolgavkar SH: Commentary: Fifty years of the multistage model: remarks on a landmark paper. Int J Epidemiol. 2004; 33(6): 1182–1183. PubMed Abstract | Publisher Full Text\n\nFrank SA: Age-specific acceleration of cancer. Curr Biol. 2004; 14(3): 242–246. PubMed Abstract | Publisher Full Text\n\nFrank SA: Dynamics of Cancer: Incidence, Inheritance, and Evolution. Princeton and Oxford: Princeton University Press; 2007. PubMed Abstract\n\nOffice for National Statistics: Cancer Registration Statistics, England. Reference Source\n\nWelsh Cancer Intelligence and Surveillance Unit: Cancer in Wales. Reference Source\n\nInformation and Statistics Division Scotland: Cancer Statistics. Reference Source\n\nSurveillance Research Program of the Division of Cancer Control and Population Sciences. National Cancer Institute. Reference Source\n\nAustralian Institute of Health and Welfare (AIHW): Australian Cancer Incidence and Mortality (ACIM) books: Melanoma of the skin. Canberra: AIHW; 2016. Reference Source\n\nFrank SA: Age-specific incidence of inherited versus sporadic cancers: a test of the multistage theory of carcinogenesis. Proc Natl Acad Sci U S A. 2005; 102(4): 1071–1075. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAshley DJ: The two “hit” and multiple “hit” theories of carcinogenesis. Br J Cancer. 1969; 23(2): 313–328. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKnudson AG Jr: Mutation and cancer: statistical study of retinoblastoma. Proc Natl Acad Sci U S A. 1971; 68(4): 820–823. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDiffey BL, Frank SA: Dataset 1 in: Age-specific acceleration in malignant melanoma. F1000Research. 2017. Data Source\n\nDiffey BL, Frank SA: Dataset 2 in: Age-specific acceleration in malignant melanoma. F1000Research. 2017. Data Source\n\nDiffey BL, Frank SA: Dataset 3 in: Age-specific acceleration in malignant melanoma. F1000Research. 2017. Data Source\n\nDiffey BL, Frank SA: Dataset 4 in: Age-specific acceleration in malignant melanoma. F1000Research. 2017. Data Source\n\nDiffey BL, Frank SA: Dataset 5 in: Age-specific acceleration in malignant melanoma. F1000Research. 2017. Data Source\n\nDiffey BL, Frank SA: Dataset 6 in: Age-specific acceleration in malignant melanoma. F1000Research. 2017. Data Source" }
[ { "id": "20144", "date": "13 Feb 2017", "name": "Antony Young", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI found this paper very interesting but hard to follow at times. The 2nd column of the discussion seems to be a bit contradictory. The authors give possible mechanistic explanations for their observations then draw attention to the many uncertainties in the final paragraph. My own view is that these uncertainties are so large that it is too speculative to comment on biological mechanisms. The sex and country differences are apparent in the figures but it would be interesting to comment on these in the discussion. Can anything be explained by aggressive public health campaigns in Australia?\n\nSpecific points\nWhat is the scaling of the x axis of Figure 1?\n\nDifferent colour codes are used in Figures 1 and 2 which I found confusing\n\nAre educational and prevention campaigns period effects? I would have thought that they are cohort effects, because the age group that they affect is likely to be important\n\nShould sunscreens be mentioned? Their possible role in melanoma has been analyzed in several studies\n\nCan anything be deduced about latitude effects?", "responses": [ { "c_id": "2508", "date": "24 Feb 2017", "name": "Steven Frank", "role": "Author Response F1000Research Advisory Board Member", "response": "We thank Antony Young for his comments and helpful criticisms.   
In response to the criticism that our discussion of mechanism raised too many uncertainties, we have deleted those paragraphs. In the revision, we replace that part of the discussion with a brief summary of the key result: the observed rise in incidence and decline in acceleration over time. We then add a couple of sentences about the range of factors that may be involved and how those factors may act via the normal genetic and physiological processes that protect against cancer, by analogy with other cancers for which there is more genetic and physiological information about the relation between mechanism and patterns of incidence.   With regard to specific comments:   We added a description of the scaling of axes to the legend of Figure 1.   We clarified that education and prevention campaigns could act as either period or cohort effects, depending on the targeting of age groups.   The suggestions about public health campaigns in Australia, sunscreens, and latitude are all interesting possibilities. We do not have sufficient data or insight to say anything compelling about those issues, so we have not added any new analyses or discussion to the revision." } ] }, { "id": "19972", "date": "17 Feb 2017", "name": "Robert J. Noble", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis well presented study describes an interesting trend in melanoma incidence in three industrialised nations and proposes an explanatory hypothesis. 
The major caveat is that, as the authors acknowledge, this trend could be due to any number of cohort effects (i.e. changing environmental and social factors that unequally affect different age groups and cohorts). The decrease in acceleration of melanoma incidence could, for example, be due to changes in skin cancer prevention campaigns, foreign travel, fashion, sunbathing behaviour, sunbed use, or frequency of dermatological examination. As the authors do not control for any such factors and provide no evidence for or against any particular hypothesis, they can do no more than speculate about how the observed pattern arose.\n\nAlso importantly, it is unclear to me exactly what the authors mean by a “restraining process” or “protective mechanism” that might have become less effective over time, nor how such a process might be affected by “a change in environmental exposure patterns” or “more intense environmental challenges”. I suspect they mean to suggest that a germline mutation that inactivates a tumour suppressor gene has become more prevalent due to natural selection or genetic drift, but I am unsure if this interpretation is correct. In any case, it would be useful to have more details of the hypothesis and its testable predictions. Would the authors expect to see a similar trend in other cancer types? Why might some countries be more affected than others? And why might men and women be affected differently?\n\nThe study might also benefit from using a mathematical model (as, for example, in reference 14) to estimate how much decrease in prevalence of the protective mechanism would be necessary to explain the observed trends.", "responses": [ { "c_id": "2507", "date": "24 Feb 2017", "name": "Steven Frank", "role": "Author Response F1000Research Advisory Board Member", "response": "We thank Robert Noble for his comments and helpful criticisms.   The main comment in Robert Noble’s review suggested that we add more detail about our mechanistic hypothesis. 
The other referee, Antony Young, made the opposite comment, suggesting that we delete our discussion of mechanism because it is too speculative.   After considering these opposing suggestions, we decided to delete our previous discussion of possible mechanisms that could explain the observed rise in incidence and decline in acceleration. The entire focus of our analysis and presentation concerned the patterns of incidence over time in the available data. We have no profound insight into the possible mechanistic causes of the interesting patterns that we observed. Thus, we have chosen in the revision to keep the focus on the data and the patterns that emerged from our analysis.   We remain very interested in the puzzle that has emerged from our analysis. However, we now think it best to defer any mechanistic discussions and model development until we have a more compelling argument that could be published as a follow up study to the data in this article." } ] } ]
1
https://f1000research.com/articles/6-27
https://f1000research.com/articles/5-2521/v1
14 Oct 16
{ "type": "Case Report", "title": "Case Report: Emergency awake craniotomy for cerebral abscess in a patient with unrepaired cyanotic congenital heart disease", "authors": [ "Corinne D’Antico", "André Hofer", "Jens Fassl", "Daniel Tobler", "Daniel Zumofen", "Nicolai Goettel" ], "abstract": "We report the case of a 39-year-old male with complex cyanotic congenital heart disease undergoing emergency craniotomy for a cerebral abscess. Maintenance of intraoperative hemodynamic stability and adequate tissue oxygenation during anesthesia may be challenging in patients with cyanotic congenital heart disease. In this case, we decided to perform the surgery as an awake craniotomy after interdisciplinary consensus. We discuss general aspects of anesthetic management during awake craniotomy and specific concerns in the perioperative care of patients with congenital heart disease.", "keywords": [ "Awake craniotomy", "congenital heart disease", "conscious sedation" ], "content": "Introduction\n\nCongenital heart disease (CHD) affects about 0.6% of newborns with a stable incidence over time1,2. Advances in surgical and medical treatment have shifted mortality largely to adulthood3. Numbers of adult patients with CHD are steadily increasing, except in cohorts with Eisenmenger syndrome and unrepaired cyanotic defects4. Therefore, surgeons and anesthesiologists are now facing more repaired survivors of CHD for noncardiac surgery5. CHD patients are at high risk for long-term cardiac and noncardiac complications, and the perioperative management of these patients may be challenging6. 
In this case report, we present the multidisciplinary management of a nighttime emergency awake craniotomy (AC) for stereotactic evacuation of an intracerebral abscess in an adult with unrepaired tricuspid atresia (TA) with palliative shunts.\n\n\nCase description\n\nA 39-year-old man (weight 75 kg; height 180 cm; body mass index 23 kg m-2) presented to the emergency department at 7 p.m. with right frontal headache, fever, and paresthesia of the left side of the body. Nine days earlier, he underwent diode laser surgery for hypertrophic nasal turbinates under local anesthesia. The patient’s medical history revealed cyanotic CHD – a complex form of unrepaired TA. The patient received bilateral palliative Blalock-Taussig shunts in early childhood. The shunt on the left side was reported to be stenotic, and the right one was secondarily closed. A detailed illustration of the underlying cardiovascular anatomy is shown in Figure 1. In the past, the patient had suffered from bacterial endocarditis, pulmonary hemorrhage, renal and splenic infarctions, transient ischemic attack and recurrent supraventricular tachycardia that were considered to be complications of his CHD. Regular oral medication consisted of metoprolol 50 mg, torasemide 10 mg and isotretinoin 10 mg once daily. An allergy to cephalosporins was noted.\n\nBaseline peripheral oxygen saturation (SpO2) on room air was 80%. Examination of the patient’s hands revealed clubbed fingers with Hippocratic nails. Blood analysis showed secondary erythrocytosis (hemoglobin 210 g l-1; hematocrit 0.62) and mild leukocytosis (10.65 × 10^9 l-1). Serum C-reactive protein concentration was 47.8 mg l-1. He was in sinus rhythm, and left ventricular function was mildly decreased with an ejection fraction of 46%.\n\n(Figure credit: Copyright © 2014 New Media Center, University of Basel. All Rights Reserved.)\n\nEmergency contrast-enhanced computed tomography of the brain showed a ring-enhancing lesion within the right superior temporal gyrus. 
Subsequent Gadolinium-enhanced magnetic resonance imaging supported the differential diagnosis of an acute intracerebral abscess (Figure 2). Based on these findings, emergency surgical evacuation of the abscess by computer-assisted stereotactic craniotomy was indicated. After interdisciplinary consensus involving the anesthetic and neurosurgical team, as well as the treating cardiologist, we decided to perform the procedure as an AC.\n\nFigure 2. (A) Contrast-enhanced computed tomography (CT) scan and (B) gadolinium-enhanced magnetic resonance imaging (MRI) revealed a 2.7 × 2.9 × 3.2 cm ring-enhancing lesion within the right superior temporal gyrus with significant surrounding edema and a small area of central hemorrhage. (C) Diffusion-weighted imaging (DWI) and (D) apparent diffusion coefficient (ADC) MRI showed a diffusion-restricted core, supporting the differential diagnosis of acute cerebral abscess.\n\nUpon arrival in the operating room, the patient was comfortably placed in the supine position with routine anesthesia monitoring (5-lead electrocardiogram, pulse oximetry, noninvasive blood pressure monitoring). An arterial line was inserted in the left radial artery. The peripheral intravenous line was equipped with an air-eliminating filter to prevent paradoxical embolism. Supplemental oxygen at 4 l min-1 was administered via nasal cannula to the spontaneously breathing patient. Expiratory carbon dioxide and respiratory rate were measured. Fentanyl 50 µg and midazolam 1 mg IV were administered during preparation for surgery. Prior to fixing the head in the Mayfield frame, conscious sedation was initiated using a target-controlled infusion (TCI, Injectomat TIVA Agilia, Fresenius Kabi AG, Oberdorf, Switzerland) of propofol and remifentanil with target effect-site concentrations (Cet) of 0.5 µg ml-1 and 1.0 ng ml-1, respectively. 
After increasing the Cet of propofol to 1.0 µg ml-1 due to patient discomfort during head pinning, the patient lost consciousness for a short period of time. Bradypnoea and oxygen desaturation to an SpO2 of 80% occurred, and assisted bag-mask ventilation was required temporarily. The neurosurgeon then applied local anesthesia to the incision site using 20 ml of a 1:1 mixture of 0.5% bupivacaine and 1% lidocaine with 1:100,000 epinephrine. For the remainder of the procedure, Cet of propofol (0.5 µg ml-1) and remifentanil (0.5–1.5 ng ml-1) were adjusted according to the patient’s clinical level of sedation and pain, guided by bispectral index monitoring. The patient was hemodynamically stable throughout the intervention. Respiratory rate stayed at 12–15 breaths min-1, and SpO2 ranged between 80 and 88%.\n\nThe patient’s left hemiparesthesia improved immediately following craniotomy and abscess decompression. Postoperatively, the patient was admitted to the intensive care unit. Some residual paresthesia was still present at discharge from the intensive care unit 10 hours later; however, neurological symptoms completely ceased by the second postoperative day. Bacteriological culture of the abscess fluid confirmed the diagnosis of a cerebral abscess and revealed Streptococcus intermedius. This was interpreted as hematogenous spread in the context of the previous turbinate surgery. The patient had received 2 g of meropenem IV intraoperatively. Under specific antibiotic treatment consisting of penicillin IV and oral metronidazole for 6 weeks, the abscess radiologically regressed. The patient was discharged home 14 days after the operation in stable condition.\n\n\nDiscussion\n\nIt was important to understand the complex anatomy and underlying pathophysiology of the CHD for optimal anesthetic management of this patient. The predominant defect in this case is TA (Figure 1). The tricuspid valve is absent, and the right ventricle is hypoplastic, leading to a single-ventricle physiology. 
Transposed great arteries (TGA), as in this patient, are an associated finding in about 30% of cases7. In children with TA and TGA, pulmonary blood flow is usually elevated, unless the pulmonary valve is stenotic or atretic. Because no direct communication exists between the right atrium and the right ventricle, systemic venous return to the right atrium must be shunted to the left atrium through an atrial septal defect or patent foramen ovale (right-to-left shunt). Oxygen saturation values are equal in the aorta and the pulmonary artery due to complete mixing of systemic and pulmonary venous blood in the left ventricle. Pulmonary blood flow determines the degree of cyanosis and is influenced by the interplay of several factors such as the size of the ventricular septal defect, the presence or absence of pulmonary stenosis, as well as the patency of the ductus arteriosus. Most infants with TA require a palliative procedure (e.g. Blalock-Taussig shunt) before definitive surgery can be performed8. In this case, a Fontan palliation could not be performed, and survival was only possible due to decreased pulmonary blood flow.\n\nPatients with CHD and chronic cyanosis may present a number of secondary pathophysiologic phenomena and are prone to cardiac and extracardiac complications, such as cardiac arrhythmias, thrombotic events, or bleeding disorders6,9. There is an increased risk for any kind of infection, including those of the central nervous system. Patients with new-onset headaches should be screened for cerebral abscess, which is a well-described complication of cyanotic CHD. The purpose of emergency surgery is to reduce the infectious burden, to decompress the adjacent brain, and to provide bacteriological samples that may guide antimicrobial therapy.\n\nPatients with CHD, especially those with complex defects, have increased perioperative morbidity10,11. 
Additional risk factors for poor outcome in noncardiac surgery are emergencies and procedures involving the respiratory or central nervous system6,9–11. The main objective in the management of this patient undergoing emergency craniotomy was to maintain pulmonary blood flow through the aorto-pulmonary anastomosis (Blalock-Taussig shunt) in order to provide optimal oxygen delivery, and to maintain systemic and pulmonary vascular resistance as well as myocardial contractility9,12. We decided that these goals might best be achieved using a conscious sedation technique for AC. We considered that the myocardial depression and the drop in systemic vascular resistance associated with large doses of anesthetic agents during a general anesthetic could have compromised intraoperative hemodynamic stability in this high-risk patient.\n\nAC has evolved into a standard of care for neurosurgical procedures that require awake functional mapping of the motor, sensory, visual, or language cortex when tumors are located in close proximity to eloquent areas of the brain, as well as for functional neurosurgery and epilepsy surgery. However, the practice of AC has spread to include routine procedures that do not involve awake functional cortical mapping or electrophysiological recording, e.g. stereotactic brain biopsy, ventriculostomy, or the evacuation of subdural hematomas. This is in part due to the implementation of refined anesthetic management protocols and the use of effect-site controllable intravenous anesthetic agents such as propofol, dexmedetomidine and ultra-short acting opioids (e.g. remifentanil). A recent systematic review showed that awake brain tumor resection led to a better perioperative neurological outcome compared with surgery under general anesthesia; moreover, AC was consistently associated with shorter hospital stay, less resource utilization, and high patient satisfaction13. 
Major intraoperative complications during AC include respiratory depression, arterial hypertension, nausea and vomiting, air embolism, brain swelling, seizures and loss of patient cooperation14. Cautious patient selection focusing on airway assessment, ability to cooperate, risk of sedation failure and intraoperative surgical complications, as well as adequate preoperative psychological preparation of the patient are key elements for successful AC14.\n\nThe evidence from the literature regarding the use of AC in cardiac patients is scarce. A recent case report describes the anesthetic management of an AC in a patient with cardiomyopathy and low cardiac output15. The maintenance of intraoperative hemodynamic stability, indicated by a reduced use of vasopressors, seems to be facilitated during AC16. In our patient, we favored AC primarily because of the underlying complex cyanotic CHD, and in order to preserve as much functional cardiovascular capacity as possible.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s details and images was obtained from the patient.", "appendix": "Author contributions\n\n\n\nC. D’A., A. H., and N. G. were the anesthesiologists involved in the case, and drafted and approved the final manuscript. J. F. reviewed and approved the final manuscript. D. T. is the treating cardiologist of the patient, and drafted and approved the final manuscript. D. Z. was the neurosurgeon involved in the case, and drafted and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors thank Allison Dwileski for proof-reading this manuscript.\n\n\nReferences\n\nHoffman JI, Kaplan S: The incidence of congenital heart disease. J Am Coll Cardiol. 2002; 39(12): 1890–900. 
PubMed Abstract | Publisher Full Text\n\nvan der Linde D, Konings EE, Slager MA, et al.: Birth prevalence of congenital heart disease worldwide: a systematic review and meta-analysis. J Am Coll Cardiol. 2011; 58(21): 2241–7. PubMed Abstract | Publisher Full Text\n\nKhairy P, Ionescu-Ittu R, Mackie AS, et al.: Changing mortality in congenital heart disease. J Am Coll Cardiol. 2010; 56(14): 1149–57. PubMed Abstract | Publisher Full Text\n\nGreutmann M, Tobler D, Kovacs AH, et al.: Increasing mortality burden among adults with complex congenital heart disease. Congenit Heart Dis. 2015; 10(2): 117–27. PubMed Abstract | Publisher Full Text\n\nZomer AC, Verheugt CL, Vaartjes I, et al.: Surgery in adults with congenital heart disease. Circulation. 2011; 124(20): 2195–201. PubMed Abstract | Publisher Full Text\n\nCannesson M, Collange V, Lehot JJ: Anesthesia in adult patients with congenital heart disease. Curr Opin Anaesthesiol. 2009; 22(1): 88–94. PubMed Abstract | Publisher Full Text\n\nHo SY, Baker EJ, Rigby ML, et al.: Color atlas of congenital heart disease. Morphologic and clinical correlations. London: Mosby-Wolfe Times Mirror international, 1995. Reference Source\n\nBacker CL, Mavroudis C: Mastery of cardio-thoracic surgery. Philadelphia: Lippincott-Raven, 1997.\n\nLovell AT: Anaesthetic implications of grown-up congenital heart disease. Br J Anaesth. 2004; 93(1): 129–39. PubMed Abstract | Publisher Full Text\n\nHennein HA, Mendeloff EN, Cilley RE, et al.: Predictors of postoperative outcome after general surgical procedures in patients with congenital heart disease. J Pediatr Surg. 1994; 29(7): 866–70. PubMed Abstract | Publisher Full Text\n\nBaum VC, Barton DM, Gutgesell HP: Influence of congenital heart disease on mortality after noncardiac surgery in hospitalized children. Pediatrics. 2000; 105(2): 332–5. PubMed Abstract\n\nCannesson M, Earing MG, Collange V, et al.: Anesthesia for noncardiac surgery in adults with congenital heart disease. Anesthesiology. 
2009; 111(2): 432–40. PubMed Abstract | Publisher Full Text\n\nBrown T, Shah AH, Bregy A, et al.: Awake craniotomy for brain tumor resection: the rule rather than the exception? J Neurosurg Anesthesiol. 2013; 25(3): 240–7. PubMed Abstract | Publisher Full Text\n\nChui J: Anesthesia for awake craniotomy: An update. Rev Colomb Anestesiol. 2015; 43(Supplement 1): 22–8. Publisher Full Text\n\nMeng L, Weston SD, Chang EF, et al.: Awake craniotomy in a patient with ejection fraction of 10%: considerations of cerebrovascular and cardiovascular physiology. J Clin Anesth. 2015; 27(3): 256–61. PubMed Abstract | Publisher Full Text\n\nRajan S, Cata JP, Nada E, et al.: Asleep-awake-asleep craniotomy: a comparison with general anesthesia for resection of supratentorial tumors. J Clin Neurosci. 2013; 20(8): 1068–73. PubMed Abstract | Publisher Full Text" }
[ { "id": "17294", "date": "01 Nov 2016", "name": "Christopher Lysakowski", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nConcerning a case report of awake craniotomy in a patient with cyanotic congenital heart disease (CHD).\n\nThe authors describe an unusual case of a rather complicated and rare pathology that needs an emergency surgical intervention during a night. I have no specific scientific remarks concerning the content or the design, but I think that for several reasons this case should be known by the wider anaesthesia community.\nFirst, the number of patients with CHD that can be scheduled for surgery is growing, so our responsibility is to know how they should be handled, why spontaneous ventilation is preferable to mechanical, for which kind of complications we should be prepared, etc.\nSecond, we can always discuss which drugs should be used in a case of awake craniotomy, is dexmedetomidine better than other drugs? 
Personal experience and a local policy should be respected in such a case.\n\nWhat is notable is that the authors were capable of establishing, in this short time and during a night, an interdisciplinary consensus involving all disciplines, which was probably crucial to handling this case in this remarkable way.", "responses": [] }, { "id": "17000", "date": "16 Nov 2016", "name": "Girija Prasad Rath", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nIt is an interesting case report of awake craniotomy with an indication to preserve functional cardiovascular capacity in an adult with congenital cyanotic heart disease. The case was well managed by the authors. The report may be indexed with the following minor changes: hemodynamic responses to various interventions or to painful stimuli should be described in graphical or pictorial format.", "responses": [] } ]
1
https://f1000research.com/articles/5-2521
https://f1000research.com/articles/5-1477/v1
23 Jun 16
{ "type": "Research Article", "title": "Electronic medical records in humanitarian emergencies – the development of an Ebola clinical information and patient management system", "authors": [ "Kiran Jobanputra", "Jane Greig", "Ganesh Shankar", "Eric Perakslis", "Ronald Kremer", "Jay Achar", "Ivan Gayton" ], "abstract": "By November 2015, the West Africa Ebola epidemic had caused 28598 infections and 11299 deaths in the three countries most affected. The outbreak required rapid innovation and adaptation. Médecins sans Frontières (MSF) scaled up its usual 20-30 bed Ebola management centres (EMCs) to 100-300 beds with over 300 workers in some settings. This brought challenges in patient and clinical data management resulting from the difficulties of working safely with high numbers of Ebola patients. We describe a project MSF established with software developers and the Google Social Impact Team to develop context-adapted tools to address the challenges of recording Ebola clinical information. We share the outcomes and key lessons learned in innovating rapidly under pressure in difficult environmental conditions. Information on adoption, maintenance, and data quality was gathered through review of project documentation, discussions with field staff and key project stakeholders, and analysis of tablet data. In March 2015, a full prototype was deployed in Magburaka EMC, Sierra Leone. Inpatient data were captured on 204 clinical interactions with 34 patients from 5 March until 10 April 2015. 85 record “pairs” for 32 patients with 26 data items (temperature and symptoms) per pair were analysed. The average agreement between sources was 85%, ranging from 69% to 95% for individual variables. The time taken to deliver the product was more than that anticipated by MSF (7 months versus 6 weeks). 
Deployment of the tablet coincided with a dramatic drop in patient numbers and thus had little impact on patient care. We have identified lessons specific to humanitarian-technology collaborative projects and propose a framework for emergency humanitarian innovation. Time and effort are required to bridge differences in organisational culture between the technology and humanitarian worlds. This investment is essential for establishing a shared vision on deliverables, urgency, and ownership of the product.", "keywords": [ "Ebola", "electronic medical records", "EMR", "innovation", "humanitarian" ], "content": "Background\n\nBy November 2015, the West Africa Ebola epidemic had caused 28598 infections and 11299 deaths in the three countries most affected1. In response, Médecins sans Frontières (MSF), drawing on over 20 years of experience in responding to Ebola outbreaks, had treated more than 7500 suspected Ebola cases, including more than 4700 confirmed cases. The outbreak, unprecedented in size, required rapid innovation and adaptation. To cope with the number of cases, MSF scaled up its usual 20–30 bed Ebola management centres (EMCs) to 100–300 beds with over 300 workers in some settings. This increased scale brought challenges in patient and clinical data management resulting from the difficulties of working safely with high numbers of Ebola patients (Panel 1). Little had been published on managing clinical documentation and data transfer from filovirus wards, and there were no established, standardised, or widely used approaches2. Here we describe how MSF developed context-adapted tools to address the challenges of recording Ebola clinical information for in-patient management. We share the key issues and lessons learned in innovating rapidly under pressure in difficult environmental conditions and propose a framework for emergency humanitarian innovation.\n\nEMC=Ebola management centre. 
PPE=personal protective equipment.\n\n*Information on the ETC is at: http://www.etcluster.org/about-etc\n\n\nMethods\n\nIn September 2014, MSF medical staff established a collaborative project with a small group of software developers and the Google Social Impact Team. The aim was to develop a tool to enable the collection, visualisation, and sharing of standardised information on Ebola patients and treatment programmes, and thereby make the most efficient use of the limited time that staff are able to remain in personal protective equipment (PPE) to treat patients in the high-risk zone of an EMC.\n\nInformal discussions with current and returning MSF field staff enabled a first set of requirements to be drawn up. Initial scoping, conducted by Google over 2 days, showed that several data collection devices were already under development for Ebola, some of which promised to be rapidly deployable3,4. However, it was determined that none of these would sufficiently meet the requirements that had been identified. These requirements were further refined through consultation with current MSF field staff, operations managers, and medical specialists as well as public health specialists from Harvard Medical School and The London School of Hygiene and Tropical Medicine. A product specification was generated to meet these requirements (Panel 2) based on the following principles: the solution should be available quickly (within 6 weeks); it should be \"just enough\" to meet the requirements; it should be useable without programming knowledge; and both the product and its intellectual property (IP) should be freely or cheaply available.\n\nPPE=personal protective equipment.\n\nTo speed up development and ensure that the final tools would be available at low cost, the development team used pre-existing open-source platforms where possible, rather than developing from scratch. 
The data model and database from OpenMRS (an open-source Medical Record System; platform v 1.10.1 on an Edison server running Yocto Linux + Debian GNU/Linux 7) were combined with data entry elements from OpenDataKit (open-source mobile data collection tools), together with a bespoke user interface (made with code forked from ODK Collect v1.4.4, running on Android 4.4.2 [KitKat]), designed for a high-risk zone. Panel 3 outlines the tool that was finally developed. A full, open-source, technical specification of the product is available5.\n\nEMC=Ebola management centre. NOS=not otherwise specified. IV=intravenous.\n\nThe client-server architecture developed for this project involves a low-energy server implemented on a 36 × 25 × 4 mm Intel Edison computer, built into an enclosure full of batteries and a custom charging circuit to ensure all-hours reliability. Rather than relying on the internet for updates, backups, and maintenance, the backup and updating system relies solely on USB sticks. The client software installed on the tablets can be replaced with any other application, which can then benefit from the hardware and server capacity of the system.\n\nApproximately US$1.9 million was spent on development, of which $1.8 million was provided by Google. This included a team of five full-time engineers for 6 months, as well as contractors and consultants. Approximately $500,000 was spent on custom manufacturing of hardware, often at above-market costs due to the urgency of the project. For instance, a factory in California was commissioned to run for 72 hours straight during the Christmas of 2014 to manufacture tablet enclosures. In all, the project took 7 months from concept to deployment.\n\nIn January 2015, an alpha version was field-piloted in the MSF EMC in Magburaka, Sierra Leone; feedback from the field team was incorporated and bugs were fixed. In March 2015, a full prototype was deployed in Magburaka. 
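The USB-stick-based backup and update approach described above can be illustrated with a short sketch. This is an assumption-laden illustration only (the file names and layout are hypothetical, and this is not MSF's actual implementation): the key idea is that a copy to removable media should be verified by checksum before it is trusted as a backup in a setting with no reliable internet.

```python
import hashlib
import shutil
from pathlib import Path

def backup_to_usb(db_dump: Path, usb_mount: Path) -> Path:
    """Copy a database dump onto a mounted USB stick and verify the
    copy by SHA-256 checksum before trusting it as a backup.

    Hypothetical sketch: `db_dump` and `usb_mount` are illustrative
    paths, not part of the system described in the article."""
    dest = usb_mount / db_dump.name
    shutil.copy2(db_dump, dest)  # copy2 also preserves timestamps
    src_sum = hashlib.sha256(db_dump.read_bytes()).hexdigest()
    dst_sum = hashlib.sha256(dest.read_bytes()).hexdigest()
    if src_sum != dst_sum:
        raise IOError(f"Checksum mismatch for {dest}; backup unreliable")
    return dest
```

A routine like this can run unattended whenever a stick is inserted, which suits an environment where servers must operate for long periods without network or IT support.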
Inpatient data were captured on 204 clinical interactions with 34 patients from 5 March until 10 April 2015 (equating to 95% of those admitted in this relatively quiet period). In this initial deployment, for each clinician-patient interaction the routine paper record system was maintained in parallel; clinical observations and treatments were recorded on paper in the high-risk zone by the clinician wearing PPE, then shouted over the fence to a colleague standing in the low-risk zone (an area of an EMC where there is low risk of Ebola transmission and staff do not wear full PPE), who transcribed them to ‘clean’ paper patient charts; these were later entered into an Excel database by a data encoder.\n\nInformation on adoption, maintenance, and data quality was gathered through review of project documentation, discussions with field staff and key project stakeholders, and analysis of data from the tablets. We carried out a rapid informal mixed methods evaluation to look at adoption/acceptance by health workers, implementation and maintenance challenges, and data quality and usefulness. Observation of staff and six unstructured staff interviews were carried out by a member of the implementation team with experience of implementing and evaluating e-health innovations, and were recorded in note form. Six semi-structured interviews with key project stakeholders (Google, MSF, Harvard) were carried out using a topic guide by an MSF administrator with experience in qualitative research, who recorded and transcribed the interviews. Project documentation from September 2014 to April 2015, including situation reports, vision and requirements documents, was included as an additional data source. 
Thematic analysis of project documentation, observations and field interview notes, and stakeholder interview transcripts was performed by the administrator supported by a senior team member with experience in health-care evaluation.\n\nData from the tablets were analysed via a semi-automated match of record “pairs” (clinician-patient interactions where a record for the same interaction existed in both the tablet dataset and the paper→Excel dataset). Briefly, for data items in both tablet and Excel datasets (temperature and symptoms), the simple data (raw data rather than OpenMRS codes) were extracted from the tablet dataset and multiple records for interactions within 30 minutes were manually merged. If a symptom was recorded as present in any record in a set of multiple records within 30 minutes, the symptom was marked as present. The Excel dataset was checked and corrected against scanned paper records. The aligned tablet and cleaned Excel datasets were combined into a single list, sorted by patient and date/time of interaction, such that the single daily record in Excel was paired with the first tablet record that day. This is a potential limitation, as the paper/Excel observations may have been recorded only when a second or later set of observations was recorded on the tablet. The record pairs were then compared by simple Excel calculations (with temperature match using rounded-down integers), and the total number of matches calculated for all parts of an observation record, and for a symptom across all observations. The proportion of matches was calculated including only items with an entry in both records, so excluding missing items, which were relatively common on the tablet for graded symptoms (e.g. 24 hour count of diarrhoea or vomiting episodes, extent of weakness). 
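The matching and agreement procedure described above (merging tablet records taken within 30 minutes, counting a symptom as present if present in any record of the set, and matching temperatures on rounded-down integers) could be sketched in code as follows. This is an illustrative reimplementation under assumptions — the field names are hypothetical, and the study itself used a semi-automated Excel workflow with manual merging, not this code:

```python
from datetime import timedelta

def merge_within_window(records, window=timedelta(minutes=30)):
    """Merge (timestamp, observations) tablet records taken within
    `window` of the start of a group. A boolean symptom counts as
    present if present in any record of the merged set; the first
    temperature recorded is kept. Field names are illustrative."""
    merged = []
    for ts, obs in sorted(records):
        if merged and ts - merged[-1][0] <= window:
            _, prev = merged[-1]
            for k, v in obs.items():
                if k == "temp":
                    prev.setdefault("temp", v)   # keep first temperature
                else:
                    prev[k] = prev.get(k, False) or v  # any-present rule
        else:
            merged.append((ts, dict(obs)))
    return merged

def agreement(tablet_obs, paper_obs):
    """Percentage agreement over items present in BOTH records
    (missing items excluded); temperatures match on rounded-down
    integers, as in the study's comparison."""
    matches, total = 0, 0
    for k in tablet_obs.keys() & paper_obs.keys():
        total += 1
        if k == "temp":
            matches += int(tablet_obs[k]) == int(paper_obs[k])
        else:
            matches += tablet_obs[k] == paper_obs[k]
    return 100 * matches / total if total else None
```

Under the rounded-down-integer rule, a tablet temperature of 38.6 and a paper temperature of 38.2 count as a match, while 39.2 versus 34.2 (the ambiguous handwriting example later in the article) do not.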
Observations of vital signs were not part of the Excel dataset, but the presence of these items in the scanned paper records was documented and compared to whether these signs were also recorded in the tablet record. The quality of the scanned paper charts was subjectively assessed.\n\nThis evaluation met the criteria of the MSF Ethics Review Board for exemption from ethics review.\n\n\nResults\n\nThe time taken to deliver the product was substantially more than that anticipated by MSF (7 months, as opposed to 6 weeks). Transfer of knowledge of the hardware from Google to MSF did not occur. The initial specification was not fully delivered. A patient localising feature (radio-frequency identification tags for patients and a network of readers) was developed but never completed. In addition, exported data require significant work to clean and analyse outside of the application interface.\n\nDeployment of the tablet coincided with a dramatic drop in patient numbers. As a result, the full prototype had little impact on patient care.\n\nAdoption. The final product deployed in Magburaka was well received by medical staff, some of whom had no previous experience with computers or touch-screen devices. Staff described the system as intuitive and reliable for data entry and visualisation, even when power and internet connections were interrupted.\n\nImplementation and maintenance. Tablets remained functional after frequent dipping in chlorine disinfectant, and the system remained active for periods of up to 24 hours without electricity.\n\nData quality and usefulness. 
85 record “pairs” for 32 patients with 26 data items (temperature and symptoms) per pair were available to be analysed.\n\nThe average agreement between both sources was 85%, ranging from 69% to 95% for individual variables.\n\nThe tablet contained more unique patient encounters than the paper records – the paper chart usually showed one set of observations per day, while the tablet was used to record additional encounters that were (at best) otherwise written only in patient progress notes (which were not standardised and thus difficult for a data encoder to enter into a standardised database format). However, it is unknown if this recording would still occur if the EMC were very busy.\n\nThe tablet contained data fields for vital signs (BP, HR, RR) that were not always recorded on the paper vital signs charts, but rather only on the free-text clinician progress notes (which had not been entered into the Excel database at the time of matching). At least one of these vital signs was entered in the tablet for about 40% of 204 interactions, usually matching the paper record, and sometimes providing data that did not exist in the paper record. In a few instances (11/85 interaction pairs), one or more vital signs in the paper record were not in the tablet dataset.\n\nThe paper charts were sometimes hard to read, leading to errors in the Excel database such as a temperature recorded in the paper record that could be read as either 39.2 or 34.2 (the tablet record showed 34.2). In addition, there was variable use in the symptoms chart of symbols such as yes, y, no, n, ✓, ✘, -, |, ?. 
About 20% of paper-recorded encounters had some ambiguity, including no or only partial zero reporting (that is, symptoms were not explicitly recorded as absent, leaving it unclear whether an unmarked symptom was absent or simply not assessed).\n\nInterviews with field staff and key stakeholders revealed the following common themes around challenges faced in the project.\n\nLack of agreed vision from start: The team’s haste to deploy something quickly that was “just enough” meant that insufficient time was taken to establish internal (MSF) project vision. Instead, the project team adopted a more perfection-driven Google vision (which extended the timeframe), and the user (MSF) questioned only late why something so complex had been made that was potentially useful in the long term, but not in this outbreak.\n\nNo explicit approach to project management: MSF project management structure, including governance and an organogram, was defined only when the project started to falter. As a result, the software developers were never sure who was the MSF lead, or who to go to when things went wrong, so at times asked the wrong question of the wrong person and lost focus on the minimum viable product requirements. At a late stage, efforts were made to develop an evaluation plan, but the number of cases dropped very substantially soon after.\n\nBusiness pressures: The pressure to start meant that reflection on team composition happened too late. For example, the team approached numerous medical people with different backgrounds, experiences, and expectations for user input. Since the epidemic evolved rapidly, this focus on “the last medic returning” as the optimal source of information meant that the user stories became rapidly out of date. A consistent medical focal person was appointed to the team to provide ongoing guidance, but this was done too late. 
Likewise, Google’s haste to move onto the next project meant that they did not give the time required for knowledge transfer to a technologically-naïve partner.\n\nLack of understanding between the humanitarian and technology fields: MSF expected to be able to describe what was needed quickly and then leave the software developers to deliver, without realising the time commitment required from them to enable the developers to make the right product. In parallel, the developers expected that health programmes could be measured in terms of number of patients accessed, and struggled to incorporate a health outcome perspective in their objectives and notion of impact.\n\n\nDiscussion\n\nThe tablet captured more frequent and slightly more detailed data than the fence→paper→Excel database routine, and there was high consistency between the results from the two systems. Given that the tablet data record is essentially real-time (in terms of data entry and opportunity for correction), it is likely to be the more accurate record. This assumption requires validation, but there is anecdotal evidence of the errors and uncertainties of the non-electronic data system from several EMCs.\n\nTo our knowledge, this is the first time a portable real-time electronic medical record (EMR) has been developed that specifically meets the needs of an extreme biohazard environment in a resource-limited setting with erratic power and internet supply. A client-server architecture normally necessitates a reliable, high-bandwidth internet connection, or a local server with very reliable electrical energy, IT support, and the availability of specialized parts. These requirements make a client-server architecture difficult to implement in rural sub-Saharan Africa.\n\nMSF has gained experience and understanding of the process of technological innovation, including the limits and challenges of deploying new technology and working with technology partners6. 
The positive collaboration between MSF and Google has sparked interest in the potential for humanitarian-technology collaborations7. Successful deployment of reliable client-server architecture in a rural sub-Saharan African environment represents a proof of concept, such that it is already being deployed by other agencies (The Wellbody Alliance, in collaboration with Harvard Medical School).\n\nHowever, there were major challenges with the project. Because the product took so long to deliver, in the context of an evolving Ebola outbreak, it arrived too late to have an impact on patient care or the efficiency of EMC management. Lack of knowledge transfer meant that MSF had a product that they did not fully know how to support or adapt for future use.\n\nThe challenges we describe are echoed throughout the literature on humanitarian and technological innovation, such as best practice principles for humanitarian innovation defined by ALNAP and lessons learnt from IT innovation as outlined in the Chaos Manifesto8,9. Our experience has led us to identify some concrete lessons specific to humanitarian-technology collaborative projects. Most importantly, time and effort are required to bridge differences in organisational culture between the technology and humanitarian worlds. This investment is essential for establishing a shared vision on deliverables, urgency, and ownership of the product. To guide this process, there is a need for a framework for innovation in humanitarian emergencies. The overwhelming priority for medical teams is immediate delivery of care, and innovation projects must fit around this reality; careful consideration must be given to whether there are sufficient resources to deliver the project. Therefore, emergency innovation projects need to be agile, iterative, and free from heavy project management processes. 
Yet if we are serious about innovation, we need to push the boundaries to ensure it has impact not only on our current patients, but also for future patients. We have outlined the key components of such a framework in Panel 4. This framework should be used as a flexible and practical tool; we caution against adapting systems that could lead to stifling of innovation, especially in complex and challenging environments where fast solutions are needed.\n\nAn MSF team is in the process of adapting the software code-base of the Ebola data management solution to develop a generic emergency EMR that can be rapidly modified for different types of emergency. Our vision is to build a modifiable Open-MRS backbone for which new disease-specific apps can be developed within 5 days by a non-coder. The first ‘test-case’ is nutrition, for which we are working on apps for inpatient and ambulatory therapeutic feeding centres. In this project, having learnt from our experience with the Ebola EMR, we have taken the time to apply our framework (Panel 4) from the start.\n\nA complete kit of open-source software, hardware, and documentation for the Ebola EMR is available on-line5. A consortium has been established to support other users to modify and improve the system for future outbreaks, and develop the software for new-use cases10. Harvard Medical School, MSF, and Google are all represented in this consortium. Harvard Medical School have carried out successful deployments of the hardware, which they have combined with new software for community surveillance, triage, and inpatient care. 
The consortium is focused upon new-use cases, the economics of the solution and driving economies of scale, the incorporation of additional software and hardware toolsets, and the enabling of clinical research during outbreak settings.\n\n\nConclusion\n\nThe fundamental innovation of our project, in the end, was not the new technology that was developed, but rather the adaptation of existing technology so that it would work in an environment that is generally hostile for complex systems. We hope that the lessons learned and the tools developed in this collaboration will help others involved in innovation in humanitarian crises to strike the right balance between speed, impact, and sustainability.\n\n\nData availability\n\nData are deposited in a secure server in MSF Operational Centre Amsterdam and are available via a managed access process under the terms of the MSF Data Sharing Policy11. For contact details and explanation of process please visit http://fieldresearch.msf.org/msf/handle/10144/306501 or email data.sharing@msf.org.", "appendix": "Author contributions\n\n\n\nThe innovation project was led by IG, with support and input from all authors. KJ and JG wrote the article, with contributions from all authors.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors would like to acknowledge Jesse Berns (MSF Switzerland) and Karl Blanchet (London School of Hygiene and Tropical Medicine) for their valuable input into the initial development of the Ebola Clinical Information System. We thank Sarah Venis (MSF UK) for editing assistance; Isobel Aiken for key informant interviews; and Philipp du Cros for project oversight and input into the article.\n\n\nReferences\n\nWHO: Ebola Situation Report - 18 November 2015. (accessed 18/11/15). 
Bühler S, Roddy P, Nolte E, et al.: Clinical documentation and data transfer from Ebola and Marburg virus disease wards in outbreak settings: health care workers' experiences and preferences. Viruses. 2014; 6(2): 927–937.\n\nGallego MS: Project ELEOS: a barcode/handheld computer based solution for Ebola Management Centres. Presentation at: MSF Scientific Day 2015, London. (accessed 11/11/15).\n\nNunes L, Areias M: How ThoughtWorks Brasil Fought Ebola. (accessed 11/11/15).\n\nProject Buendia. (accessed 11/11/15).\n\nGayton I, Achar J, Greig J, et al.: Tablet-based clinical management tool: building technology for Ebola management centres. Presentation at: MSF Scientific Day 2015, London. (accessed 11/11/15).\n\nMetz C: Google Builds a New Tablet for the Fight Against Ebola. Wired. 2015; (accessed 11/11/15).\n\nRamalingham B, Scriven K, Foley C: Innovations in International Humanitarian Action. In: ALNAP Review of Humanitarian Action (ch 3). 2014.\n\nThe Standish Group: Chaos Manifesto 2013: Think big, act small. (Accessed 11/11/15).\n\nThe consortium website is in preparation; for further details contact: eperakslis@gmail.com.\n\nKarunakara U: Data Sharing in a Humanitarian Organization: The Experience of Médecins Sans Frontières. PLoS Med. 2013; 10(12): e1001562." }
[ { "id": "14552", "date": "11 Jul 2016", "name": "James Whitworth", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors present an honest evaluation of an operational research project to develop an innovative tool for recording, visualising and sharing medical data in response to the challenging needs of a high-risk infectious situation in a humanitarian emergency, namely the outbreak of Ebola in West Africa 2014-15. This is an interesting and valuable paper about developing a clinical information and patient management system based on tablet computers that can be used in Ebola Treatment Centres and similar high-risk environments. It is mainly a narrative paper with little in the way of quantitative results. The lessons learned are fairly generic to any project management challenge, and go beyond those of innovation in humanitarian settings.\nThey describe the process of developing a tablet and new software to respond to the complex requirements of high bio-hazard infection control, and compare this tool with standard practice in this setting. 
They note the urgency of the project, which was intended to improve clinical care, and by implication outcomes, for patients by helping medical staff make more efficient use of the limited time they could spend in the red zone due to the constraints of personal protective equipment.\n\nThe title is appropriate and the abstract is generally clear, although I felt that the description of the comparison of tablet and paper-based data was not clear. It was not just tablet data that were analysed, and what is meant by ‘record “pairs”’ is not clear just from reading the abstract.\nThe background and methods are mostly clear and appropriate. I am not an expert on computer hardware or software, so cannot comment on the appropriateness of the systems selected. The researchers used a mixed methods approach appropriate for the different aspects of the project they wished to evaluate: namely need identification, buy-in, implementation, tool functionality and maintenance, and a measurement of data quality. They have identified key issues that delayed the process of development and the limited testing of the tool they were able to achieve.\nI am astonished at the cost of development of the system, which seems to have been driven largely by the salary costs and person-hours involved, as well as by the production of customised hardware.\nIt was not clear to me quite how the data collected in the high risk zone were exported for further analysis and storage. Was this by disinfecting the tablets in chlorine and physically removing them from the high risk zone, or was it through connection to some form of local area network that allowed the data collected in the high risk zone to be electronically transmitted outside? This could be made clearer.\nI wondered about issues of confidentiality of patient data being transferred from Sierra Leone to the MSF headquarters in Europe (panel 2). 
How were these addressed, approved and overseen?\nI was surprised that no data were collected on clinical management according to panel 3. This seems like a missed opportunity, as further research into optimal management of cases of Ebola is still required. There is comment about this at the end of the discussion (page 7), but was this an oversight when the system was being developed?\nDid this study undergo any ethical scrutiny in Sierra Leone? I would imagine that it should have done, but this is not mentioned.\nMethods para 6:\nSuggest rephrasing the second sentence to make it clear that this part of the paragraph refers to data extraction for the tablet, and the latter to the paper/Excel database. Is it a correct understanding that only the first tablet entry of the day was taken into account? If so, could you mention why it was not possible to take account of subsequent entries, given the relatively small number of records under analysis? Did practitioners entering data in the high risk zone review their own entries once out? Our question refers to the comment in the first para of the discussion about opportunity for correction. It might be useful to include a paragraph on the procedure for using the tablet.\n\nThe results are very interesting, and it is good to see an honest description of the problems encountered with this project. Their results highlight areas of tension in the process and detail the successes (a functioning tool with good data capture) and failures (implementation too late to have impact on patient care, and lacking one of the initial requirements – a way to find patients in a large facility) of the project. From both they draw out lessons for future innovation between humanitarian and technological enterprises. They also detail the technical specifications and overview costings for the tool.\n\nI was not clear what was meant by ‘transfer of knowledge of the hardware from Google to MSF did not occur’ (page 5). 
Further on in the discussion section of the paper there is a comment about the transfer of knowledge, but this appears to be related to the software rather than the hardware. Indeed, it seems that this problem has been overcome and the software has been adapted for use in other emergencies (page 6).\nI would have appreciated more information about the ‘significant work needed to clean and analyse the exported data’ from the tablet. Why was that necessary?\nThe results are generally clearly described, and suggest that the tablet may be at least as good as paper records, though the analysis currently does not appear to take into account the role of chance, and a proportion of records (17%?) had to be excluded. The qualitative study is quite weak. It appears that different data were collected systematically using the two different systems, so only some of the data were collected using both methods. Further comments on the quantitative study are:\nIf we understand correctly, the agreement between the two methods was calculated by simple percentage agreement: would it be possible to perform a Kappa test to take into account chance? What was the range of interaction pairs per patient? Was any difference seen in the recording by either method related to the severity of the patient’s illness, or the number of people admitted at the time? Is it possible to state the percentage of missing data overall and/or by recording method and comment on the implications for the agreement results? On rough calculation overall it seems to be 17% (204-(85x2) = 34, 34/204) missing data in one or other system.\n\nThe agreement between data sources ranged between 69 and 95%. This is rated as ‘high consistency’ but I was not sure on what basis. It was not clear to me which collection method was felt to be the gold standard, or even if this was determined before the comparison. 
I sense from the discussion that the tablet was thought to perform better than the paper method, but I am not convinced about this from the results.\n\nThere was clearly a tension between MSF wanting a ‘good-enough’ product, and Google being perfection-driven. I suspect this was not just a question of communication; there are likely to be some underlying differences in corporate culture, which might be hard to bridge even with all the time in the world. What is meant by ‘Google’s haste to move onto the next project’ (page 6)? Was this Ebola-related, or simply reflecting that this department at Google was busy with a range of different projects?\nThe discussion was well written and interesting. The authors claim that this is the first such device to be developed. How did they search for other evidence of similar projects? I heard anecdotally of other projects, but am not aware whether they reached fruition, or have ever been published in any form. It would be good to have a thorough systematic search to establish what else has been developed.\nIs it correct to assume that the more frequent and detailed data available with the tablet related to the recording of vital signs, which were not entered from the paper record at the time of analysis?\nCould you comment on the value gained from the additional encounters recorded in the tablet and whether users found recording real-time information was clinically useful?\n\nIt is not clear to what extent this system can be made available to others. The reference to the consortium being established is to a website that is being prepared.\nWhat data are available from MSF in Amsterdam? Are they data related to the development of this tablet computer based system or the patient data from this Ebola Treatment Centre? This is not clear to me. 
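The chance-corrected agreement statistic the reviewer asks about (a Kappa test) could be computed along these lines for any of the binary symptom variables. This is an illustrative sketch only, not an analysis performed in the study, and the input format is hypothetical:

```python
def cohens_kappa(pairs):
    """Cohen's kappa for paired binary ratings.

    `pairs` is a list of (tablet_value, paper_value) booleans for one
    symptom across all record pairs — an assumed input shape for
    illustration. Kappa corrects raw percentage agreement for the
    agreement expected by chance given each rater's marginal rates."""
    n = len(pairs)
    observed = sum(a == b for a, b in pairs) / n
    p_tablet = sum(a for a, _ in pairs) / n
    p_paper = sum(b for _, b in pairs) / n
    # Chance agreement: both record "present", or both record "absent",
    # as if the two sources marked symptoms independently.
    expected = p_tablet * p_paper + (1 - p_tablet) * (1 - p_paper)
    if expected == 1:
        return 1.0  # degenerate case: no variation in either source
    return (observed - expected) / (1 - expected)
```

Kappa ranges from -1 to 1, with 0 meaning agreement no better than chance, which is why it can give a sterner picture than the 69-95% raw agreement reported for individual variables.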
In our view, however, the authors present a well-targeted piece of operational research undertaken in challenging circumstances with transparent insights into where the development process fell short. They make it clear that their results are limited, but provide a foundation for moving forward.", "responses": [ { "c_id": "2200", "date": "30 Sep 2016", "name": "kiran jobanputra", "role": "Author Response", "response": "The authors would like to thank the reviewer for their thorough and constructive critique of this manuscript. We have organised our response by theme: Record pairs: All interactions continued to be captured on paper during the pilot of the tablet-based EMR, and the comparison included all those clinical entries that had been captured by both means (referred to as “record pairs”). Data transfer: Data was transferred wirelessly to a server outside the high risk zone via a secure local area network, which indeed was one of the key advantages of this technology for Ebola Management Centres. Confidentiality: Tablet data included only an ID code and approximate date of birth based on current age, so although data was linked to an individual, the individual was not identifiable. Treatment module: A module to record treatments administered was planned and worked on, but was not completed and was put on hold when it was clear that the outbreak was reducing in scale and thus the opportunity to field test the tablet was highly time sensitive. National oversight and ethics approval: This was not a study as such, but an ad hoc evaluation of an operational innovation that was introduced at the peak of the emergency. The evaluation had no impact on patient experience, since it did not involve subjecting patients to any process additional to that which they would already be undergoing; no additional patient data was collected for this evaluation. Interviews with staff were informal and focused on experience of using the tablets. 
As such, it met the MSF ERB criteria for exemption from formal ethics review and was approved by the MSF medical director. The tablet-based EMR was discussed with and demonstrated to health authorities prior to implementation, who were supportive of this innovation and did not suggest additional ethical scrutiny. Data checking and validation: Practitioners were able to review the data at any time, including within the high-risk zone, so were able to check for consistency. Since the implementation team regularly checked for consistency between data entered into the tablet, and data on the server, it was not deemed necessary to ask the clinicians to do this themselves. Why was knowledge transfer not successful?: Due to the hasty decommissioning of the team, there was insufficient time for a comprehensive handover (from Google to MSF) of the information and understanding required to operate and maintain the software and hardware that had been developed. In the case of the software and key hardware, MSF was able to hire an ex-Google engineer who worked on this project to further develop the software and complete the process of handover to MSF. However some parts of the hardware (e.g. polycarbonate casing) were deemed too expensive to warrant further investment by MSF, so little attempt was made to obtain the knowledge necessary to produce these. What was the 'significant work needed to clean the data'? The database was not configured to create a single record for a patient encounter if the user moved in and out of the record, resulting in up to five partial records within a 10-minute period that all related to a single encounter and thus needed to be merged into a single record of that encounter. In addition, new users entered a variety of practice data which was retained in the database, and needed to be identified and removed. 
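The merging step described here (collapsing up to five partial records created within a 10-minute window into one encounter record) can be sketched as follows. The field names, patient IDs and timestamps are hypothetical, and the real OpenMRS export format will differ:

```python
from datetime import datetime, timedelta

def merge_partials(records, window=timedelta(minutes=10)):
    """Merge partial records for the same patient whose timestamps fall
    within `window` of the encounter's first fragment; later non-empty
    fields fill gaps in the earlier ones."""
    records = sorted(records, key=lambda r: (r["patient"], r["time"]))
    merged = []
    for rec in records:
        last = merged[-1] if merged else None
        if (last and last["patient"] == rec["patient"]
                and rec["time"] - last["time"] <= window):
            for k, v in rec.items():
                if k not in ("patient", "time") and v is not None:
                    last[k] = v
        else:
            merged.append(dict(rec))
    return merged

# Hypothetical fragments: two partials of one encounter, then a later encounter
recs = [
    {"patient": "P1", "time": datetime(2015, 3, 1, 9, 0), "temp": 38.2, "pulse": None},
    {"patient": "P1", "time": datetime(2015, 3, 1, 9, 6), "temp": None, "pulse": 110},
    {"patient": "P1", "time": datetime(2015, 3, 1, 14, 0), "temp": 37.5, "pulse": 96},
]
print(len(merge_partials(recs)))  # the 9:00/9:06 fragments collapse into one encounter
```

This is only a sketch of the general technique; the actual cleaning also had to strip practice data and reorder exported fields, as the authors note.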
Finally, records were exported with content and order based on OpenMRS coding, resulting in 3 data fields per data element and an order that was not linked to the logical grouping and order on the tablet forms. How reliable was the comparison of tablet and paper data?: Of 204 tablet records, 119 did not have an equivalent record in the Excel database because only a single set of observations was recorded each day on paper, whereas on the tablet additional observations were sometimes recorded later the same day. Therefore only one record pair per day was possible. Of 111 encounters recorded on paper, 28 were not matched to a tablet record. The median number of record pairs per patient was 2.0, with a maximum of 19 for a patient who was admitted for 19 days. There were relatively few patients admitted during the implementation, many of whom were later confirmed as not having Ebola, so while the influence of severity of illness or patient load on data quality is interesting in theory, it could not be validly assessed. To what extent can this system be made available to others? The software is available online and can be downloaded for free; the code is also freely available for developers to use. Several of the developers involved have formed an open-source community that is supporting this code, which has replaced the consortium in this regard." } ] }, { "id": "15182", "date": "23 Aug 2016", "name": "Benjamin O. 
Black", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nResearch and development of technology in the humanitarian arena is a rapidly growing field that encompasses many specialities. There are international working groups, peer-reviewed journals and conferences dedicated to this field. This paper is a valuable addition to the body of evidence openly available to the academic, technician and aid worker. My review of this paper relies on my field experience of working in the humanitarian setting, particularly through the West African Ebola outbreak of 2014 to 2016.\n\nThis paper describes the process, implementation and challenges of introducing a novel way of recording patient data electronically in a high biohazard environment. The aim was to improve data collection and the efficiency of patient interactions and communication.\n\nIt is bold and brave of the authors to openly state the errors made through the design and implementation of this project. While there is substantial discussion around the lessons identified from their experience, there could also have been more highlighting of some of the possible successes, not least that this project was rolled out at a time of unprecedented pressures on the organisation responsible, Médecins Sans Frontières (MSF).\n\nIn their peer review, Whitworth and Bower (2016) have critically appraised the methodology and data analysis described in this paper. 
I am therefore not going to repeat what has already been expertly written, focussing instead on the patient care and field worker practicalities of this tool.\n\nInvolvement from the field is raised on several occasions in the article; however, it remains unclear to what extent the designers and implementers used the returnees’ knowledge of field realities to make this tool suitable for the task in hand, beyond “informal discussions”. All volunteers returning from Ebola missions were routinely interviewed in the European headquarters by the organisation; to what extent this was used as an opportunity by the developing team is unclear. If it was not utilised, was this a missed opportunity that could be added to their list of “lessons identified”? It would be useful to know how many returnees were interviewed, the style of interview used and how the information gained guided the designers. Furthermore, given that all returnees were debriefed at head office, it would be worth stating whether the returnees were all systematically questioned in a standardised manner, or whether specific persons were chosen and how this choice was made.\n\nThe field staff who used both the alpha version and full prototype were also interviewed; it would be informative to the reader to understand how many staff were involved in these interviews, whether they were expatriate or local staff and what their specific job role was. The experience of a European doctor may be different to that of a local nurse or health care assistant.\n\nThe authors rightly discuss the difficulty of working in the full Personal Protective Equipment (PPE) that is necessary to remain safe inside the “high risk” area of the Ebola Management Centre (EMC); it would be interesting to gain a deeper insight into how much this was factored into the overall program design. For example, visual difficulties and clumsiness were often described as part of the challenge of working in PPE. 
Were large buttons or colour coding on the tablet part of the design to assist the healthcare worker in overcoming these difficulties?\n\nDuring very busy times at the peak of the outbreak a numbered system was used to quickly identify the patients with the most immediate needs, with a score from 1 (very well) to 5 (critically ill) assigned to each patient. It would have been informative to know if this or other field-designed approaches were considered or incorporated into the program. Whilst the focus of the tablet appeared to be the transfer of medical information, there is much communication that must also be made between the EMC workers. Identification and location of deceased patients must be efficiently passed to hygiene teams, and safety concerns to logistical teams; was inter-disciplinary communication considered as part of this program?\n\nSimilarly, whilst communication on patient symptomatology was traditionally shouted across the fence and then transcribed into folders outside the “high-risk” area, treatment decisions also needed to be remembered or shouted over. It is not entirely clear how much the tablet was utilised for recording treatments given, for example intravenous fluid administration, hygiene procedures or palliative care medications. This would be valuable from a patient care perspective and would likely reduce prescribing errors and missed or duplicated doses. Retrospectively this could also have added to data on what treatment and care was being given and how often. Commonly a generic recipe of medications was administered to all patients from admission into the EMC; this could therefore have been pre-programmed into the tablets as an electronic prescription.\n\nShouting information across the fence in the EMC, whilst often a part of high- to low-risk communication, was not the only method used. 
Digital photography that could later be transcribed was often used, particularly during times of high activity or when transferring patients from one location to another. Similarly, only those patients that were bedbound or too unwell to speak with healthcare workers from across the fence required examination and questioning in full PPE. Hence for many inpatient interactions the “high-risk” tablet would not be required. Explanation of how these other interactions were incorporated into the data gathering would be useful, particularly for the completeness of the information generated. The benefits of being able to talk with a patient across the fence, where the clinician was recognisable (no PPE needed) and there were no time constraints due to heat stress, meant that this was a desirable way of working whenever possible. However, it would not negate the use of the tablet, as information could still be recorded on the hand-held device.\n\nThe stated aim of making the most effective use of the limited time in PPE to treat patients has not been quantified. Did the tablet assist the healthcare worker in treating the patients? Did the number of patients receiving clinical review or treatment change as a result of the tablet, or were the health workers able to achieve the same quality of work in a shorter period of time? I think these would have been important questions to ask; rather than the focus being on the ease of data gathering and transfer, it could have been on the quality and availability of patient care, perhaps including patient outcomes.\n\nI found the concept of using radio-frequency identification tags particularly interesting; this would have had great value both for finding and identifying patients for examination, but also importantly for identification and tracking of the deceased. 
Perhaps because this intervention came during the tail-end of the outbreak there is little discussion of how the tablet could be used for the management of deceased patients, though this was a major issue particularly during the peak of the outbreak when there was a high mortality rate. Unfortunately, the radio-frequency tag aspect of this project did not come to fruition; however, a discussion on how this could be moved forward would be worthwhile.\n\nThe potential benefits in the efficiency and quality of data collection gained through the use of the electronic tablet system are clear. How this translated into improving patient care seems less apparent. The balance between patient benefit and data for the sake of research and publication remained controversial throughout the Ebola outbreak. Whilst I don’t question the motives of the team behind this innovation, it would be interesting to know how they managed the balance of this innovation being primarily for patient rather than organisational benefit.\n\nThe honesty of the authors and implementing team is to be applauded, though deeper discussion on how to move forward would have been of value. This venture came at a high price, and though it was not the success they had hoped for it is certainly not an entirely failed project either. Whilst the authors have identified lessons, it is not clear how they have been learnt. The conclusion would benefit from a wider discussion of what comes next, and where this technology can continue to be used and developed to meet its ultimate aim: saving lives.\n\nThe technology is presented as being developed for the specific situation of the EMC, however it would be easily transferable to many humanitarian settings where large numbers of patients or biohazard risks limit the accuracy and efficiency of traditional clinical record keeping. 
Measles, yellow fever, hepatitis E, cholera and Lassa fever outbreaks would all be well suited to the continued use and development of a potentially very useful field tool. The authors do refer to their vision of the future, and the plan to test a modifiable version of this technology in a nutrition project; a greater emphasis on this vision and how the lessons identified will be learned for the future would have been of use.\n\nAs previously stated, humanitarian technology is a rapidly growing and developing phenomenon. The challenges, lessons and future plans identified by the authors are a useful reminder of how best intentions can go askew without the correct planning and inter-agency communication. These lessons are unlikely to be unique to this situation, being broadly adaptable. The honesty of the authors on the difficulties in collaborating with Google is commendable; the next question, though, is who they will work with next and how. MSF has an opportunity to work within and alongside others in the humanitarian technology field, perhaps with partners with the experience to assist them in avoiding these common pitfalls again.", "responses": [ { "c_id": "2199", "date": "30 Sep 2016", "name": "kiran jobanputra", "role": "Author Response", "response": "The authors would like to thank the reviewer for their thorough and constructive critique of this manuscript. We have organised our response by theme: User involvement: extensive user consultation was carried out, as well as user testing with national staff, international staff returning from the field and clinicians who had worked in comparable settings. Several design features addressed the challenges of working in PPE, including large buttons. We have clarified this in the text. RFID: Rapid identification of patients, including the sickest patients, was part of the original scope of the project; RFID identification would have helped address this. 
This was however dropped from the scope as the patient numbers declined and the priority shifted towards transfer of patient data outside the high risk area. Treatment module: a module to record treatments administered was planned and worked on, but was not completed and was put on hold when it was clear that the outbreak was reducing in scale and thus the opportunity to field test the tablet was highly time sensitive. Using these tools for research: The impetus to develop the EMR was to support improved patient care, including through more efficient use of staff time within the high-risk zone. There were no plans to use the tablet-based EMR for research, although this would be a potential application of this technology." } ] } ]
1
https://f1000research.com/articles/5-1477
https://f1000research.com/articles/6-182/v1
23 Feb 17
{ "type": "Research Note", "title": "A cross sectional study to determine the prevalence and risk factors of low back pain among public technical institute staff in Kurdistan Region, Iraq", "authors": [ "Karwan Mahmood Khudhir", "Kochr Ali Mahmood", "Kochar Khasro Saleh", "Mosharaf Hossain", "Karwan Mahmood Khudhir", "Kochr Ali Mahmood", "Kochar Khasro Saleh" ], "abstract": "There is a lack of quantitative data regarding exposure-response relationships between low back pain (LBP) and associated risk factors among institute staff in Kurdistan Region, Iraq. This study explored such associations in an analytic cross-sectional study. Data collection was carried out with a self-administered questionnaire. A total of 70 (90%) institute staff from Koya Technical Institute (KTI) participated in this study. The findings indicated that 61.4% of KTI staff report LBP. Independent variables significantly associated with reporting LBP (P value <0.05) during the past 12 months were smoking (OR=10.882; 95%CI=1.301-90.995) and job tenure (OR=3.159; 95% CI=1.072-9.312). In conclusion, LBP is significantly associated with smoking and years worked; therefore, workers should be educated on the effects of smoking, not only as it relates to LBP but also as it affects the whole body, and on how to quit. This can be done through health promotion campaigns and programs sponsored by the university.", "keywords": [ "prevalence", "low back pain", "risk factors", "institute staffs", "Kurdistan Region" ], "content": "Introduction\n\nLow back pain (LBP) has become one of the major public health issues in recent years. It is one of the most prevalent work-related musculoskeletal disorders; about 70% of adults in the United Kingdom experience at least one episode of LBP in their lifetime, which leads to modifications in their work and lifestyle1. LBP can be defined as pain or discomfort located below the margin of the 12th rib and above the inferior gluteal fold, with or without leg pain2. 
The economic loss due to LBP does not only affect the quality of a worker’s social life, but also the organisation and society as a whole, due to reduction of working capacity, decreased production, and early retirement3. Estimates of frequency (the proportion of a population with LBP during a specified period) differ between developing and industrially developed countries, among industrially developed countries, and across regions within individual countries, according to diverse studies4. Many international studies among university staff have reported a high prevalence of LBP5. It is well established that LBP is amongst the most common causes of physical disability in the European adult population. At any one time, eight in every ten adults experience LBP6, accounting for long periods of absence from work, and the greater this period the lower the chances of going back to work7.\n\nAccording to Mehra et al. (2011), every year about 3.6 million outpatient visits in the United States are attributed to LBP, and it is the second most common neurological ailment, as well as the fifth most common cause of physician visits8. Demographic, occupational and individual factors are recognised as important risk factors associated with an increasing burden of LBP9–11. In Koya Technical Institute (KTI; Koya district, Erbil Governorate, Iraq), different job positions exist: academicians, administrators, cleaners, bus drivers, and secretaries, as well as various levels of support staff. In these different positions, workers spend much time on their work, which requires standing and sitting for long periods to teach, using computers, driving, lifting heavy loads, etc. Due to the activities the workers engage in, the prevalence of LBP has increased. 
For instance, being a member of academic staff does not only involve teaching students, but also preparing lessons, evaluating and grading students' coursework, examinations and laboratory work, undertaking personal research projects and actively contributing to the institution's research profile, writing up research papers and preparing them for publication, supervising students’ study activities, and managing administrative tasks associated with the department (such as new student admissions; at a senior level this may involve the role of head of department). Furthermore, the administrative staff of KTI are exposed to prolonged use of computers, which predisposes the employees to static and awkward body postures and prolonged sitting; this is because the Kurdistan government has introduced the use of computers into most daily tasks, especially among office workers. Until now no studies have been carried out concerning the factors associated with LBP in a representative sample of institute staff in Kurdistan Region, Iraq. Thus, it is clear that there is a need to identify the risk factors associated with LBP in this country.\n\nKTI has a labour force of approximately 77 academic and non-academic staff, belonging to different areas of specialization in different departments. These employees in their different areas of specialization may be exposed to a number of occupational, individual and psycho-social factors, which may result in LBP. Whereas a significant body of research in developed countries has examined the association of office and university work with LBP, studies of this kind among institute populations in Iraq remain limited. 
With this rationale, the present study was undertaken among public technical institute staff, with the aim of determining the prevalence and risk factors of LBP among workers.\n\n\nMethods\n\nThis cross-sectional study was conducted among KTI staff from 10th February to early March 2016. A list with the names of the workers (academic and non-academic) was obtained from the KTI Office and used for the purpose of this study. The inclusion criterion was all KTI staff who had worked ≥12 months in KTI and had no history of LBP due to accident. Participants who were on leave during the study period or refused to answer the survey questions were excluded. The sampling for this study was done in three phases. In the first phase, information about the whole population of KTI staff, for instance the number of employees in each department and type of occupation, was obtained from each department. In the second phase, the researcher divided workers into two homogeneous subgroups (strata) according to type of occupation: academic and supportive staff, the latter including administrative staff, bus drivers, cleaners and technicians. In the third phase, a simple random sampling technique was used to select 70 participants based on the inclusion criteria.\n\nInitially, a self-administered questionnaire was distributed to all participants by four institute students. After a few days the students returned to collect the completed questionnaires. The average response rate across all staff was 87.5%.\n\nThe survey (Supplementary File 1 and Supplementary File 2) was a self-administered questionnaire composed of three sections to collect information on socio-demographic and occupational factors, as well as on LBP. Section A: Participants’ socio-demographic factors, which included age, gender, education level and smoking status. Section B: A modified Nordic questionnaire was used to collect information about LBP among the participant population12. 
Section C: Participants’ occupational characteristics, which included type of job, duration of employment, and standing and sitting work posture. For participants’ better understanding, the questionnaire was translated from English into Kurdish.\n\nThe collected data were entered into SPSS for Windows version 21. Univariate analysis was used to examine the characteristics of KTI staff and the prevalence of LBP. Bivariate analysis (χ2 test) was used to determine associations between risk factors and LBP, and logistic regression analysis was used to determine the adjusted odds ratio (OR) with a 95% confidence interval (CI).\n\nThis study was approved by the Ethics Committee of Koya Technical Institutes in October 2015 (ref: KTI 24015). Interviews were carried out after obtaining written informed consent of the respondents.\n\n\nResults\n\nQuantitative data (frequency, percentage, median, interquartile range [IQR]) of participants are listed in Table 1. Of 80 self-administered questionnaires distributed, 70 staff agreed to participate, giving an 87.5% response rate. Median cut-off points were used to dichotomize the continuous variables. In total, 50% of respondents who participated in the study were ≤37 years and 50% were >37 years, with a median age of 33.5 years (IQR, 12 years). In all, 54.3% were female, 31.4% had an education level up to college diploma or equivalent, 78.6% were non-smokers, and 82.9% were academic (as opposed to supportive) staff. A total of 48.6% had a duration of employment of >6 years (median, 6; IQR, 6), 52.9% stood for ≤3 hours/working day (median, 3; IQR, 2), and 57.1% sat for ≤2 hours/working day (median, 2; IQR, 2).\n\nThe prevalence of LBP among KTI staff in the past 12 months was 61.4%. The distribution of socio-demographic characteristics and working conditions for the groups with and without LBP is shown in Figure 1. 
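The bivariate step described in the Methods (a Pearson χ² test on dichotomized variables) can be sketched for a 2×2 table; the cell counts below are hypothetical, not taken from Table 1:

```python
def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # expected counts from the row and column margins
    exp = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
           [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    obs = [[a, b], [c, d]]
    return sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))

# Hypothetical smoking-by-LBP table: rows = smoker yes/no, cols = LBP yes/no
stat = chi2_2x2([[13, 2], [30, 25]])
print(round(stat, 3))
```

With 1 degree of freedom, a statistic above the critical value of 3.841 corresponds to P < 0.05, the cut-off the authors used for entry into the logistic regression.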
The statistical analysis revealed that there was a significant association between smoking, job tenure and LBP (P<0.05). However, age, gender, education level, type of job, and standing and static work posture were not significantly associated with LBP.\n\nDistribution of socio-demographic characteristics and occupational factors in staff with and without LBP. **P<0.05 by χ2 test. LBP, low back pain.\n\nVariables with P<0.05 in the bivariate analysis were analysed together by multiple logistic regression to estimate the adjusted effects of the independent variables on LBP (Table 2). The results showed that KTI staff who smoke were 10 times more likely to have LBP (OR, 10.882; 95% CI, 1.301-90.995) than non-smoking staff. In addition, staff with job tenure of >6 years were 3 times more likely to develop LBP (OR, 3.159; 95% CI, 1.072-9.312) compared with staff with job tenure ≤6 years.\n\nSE = standard error; Sig. = significance (P value); OR = odds ratio.\n\n\nDiscussion\n\nThis study found that the prevalence of LBP among KTI staff during the past 12 months was 61.4%. The reported prevalence of LBP among KTI staff in Kurdistan region (Iraq) was slightly higher than other findings, such as the 45.6% prevalence of LBP reported among school teachers in China in 201213. It is also much higher than results reported among Adama Hospital Medical College staff in Ethiopia (41.4%)14, as well as studies conducted in European countries (with a point prevalence of 21.5%)15 and among university staff in Thailand (22.3%)5. However, it is much lower than the annual prevalence of LBP reported among nurses (73.5%) in Nigeria16. The high prevalence of LBP might affect the working life of KTI staff through decreased productivity, absenteeism and increased medical costs. 
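The adjusted ORs and 95% CIs reported above follow directly from the logistic-regression coefficients: OR = exp(β) and 95% CI = exp(β ± 1.96·SE). A small sketch, with β and SE back-derived from the reported smoking OR for illustration (approximate values, not the paper's actual model output):

```python
import math

def or_ci(beta, se, z=1.96):
    """Odds ratio and 95% confidence interval from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# beta and SE back-derived from the reported smoking OR (illustrative)
beta = math.log(10.882)   # ~2.387
se = 1.084                # ~(ln(90.995) - ln(1.301)) / (2 * 1.96)
odds_ratio, lower, upper = or_ci(beta, se)
print(round(odds_ratio, 3), round(lower, 3), round(upper, 3))
```

The very wide interval (roughly 1.3 to 91) reflects the large SE that a sample of 70 produces for a sparse exposure such as smoking, which is worth bearing in mind when interpreting the tenfold OR.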
LBP has been found by numerous researchers to be related to various factors, both occupational (duration of employment, static work posture, awkward posture) and individual (smoking, obesity, exercise)6,12.\n\nAccording to this study, KTI staff who smoke are 10 times more likely to have LBP compared with non-smoking staff. Similar outcomes have been demonstrated by a study conducted among school teachers17. This could be explained by the association scientists have shown between cigarette smoking and musculoskeletal disorders. Firstly, cigarette smoking may cause a decrease in the amount of blood perfusion to bones and to almost all tissues of the human body, which leads to low production of bone-forming cells (osteoblasts)18. Another explanation is that cigarette smoking causes calcium deficiency by reducing the absorption of calcium from the diet, which the body needs for building strong bones19. Moreover, cigarette smoking has a negative effect on the growth of the lungs and is detrimental to their well-being, such that it results in shortness of breath and a decrease in the quantity of oxygen made available to the muscles20. In addition, KTI staff with job tenure >6 years were 3 times more likely to have LBP than those who worked ≤6 years. Similar outcomes have been demonstrated by a study conducted among university workers21. This finding suggests that an increased risk of LBP occurs during advanced (after 6 years) employment in the institute. An association between LBP and job tenure was also reported in a study conducted by Noroozi et al. (2015)22 among medical sciences office workers of Ahvaz Jundishapur University. Thus, the longer staff work in KTI, the higher the risk of developing LBP. LBP is a cumulative ailment and over time may result from frequent exposure to risk factors with insufficient recovery23. 
The same body part is activated for long periods of time if institute staff often repeat similar tasks for several months or years. As a result, the internal tolerance of tissues is exceeded when accumulation of loading occurs, due to exposures of long duration24.\n\nThe results also showed that there was no significant association between LBP and age, gender, level of education, type of job, or static work posture. This was a cross-sectional study, in which the association between risk factors and LBP could be determined only at a particular point in time. The findings relied on a self-reported questionnaire, and there was no medical test to confirm the presence of LBP. Recall bias may also have occurred, and information bias may have arisen during the completion of the questionnaire, due to participants’ differing understanding, opinions and judgment.\n\n\nConclusions\n\nOverall, LBP was found to be common among staff at KTI. Thus, there is a need to create awareness among staff through strategic prevention programs. Workers should be educated on the effects of smoking, not only as it relates to LBP but also on the whole body, and on how to quit. This can be done through health promotion campaigns and programs sponsored by the university, in order to reduce the prevalence and prevent the risk of LBP among KTI staff.\n\n\nData availability\n\nRaw datasets have not been made available at the request of the Ethics Committee in order to maintain participant confidentiality. The data are stored at the Department of Preventive Health, Koya Technical Institute and are available upon request. Please contact the first author Karwan Mahmood Khudhir (karwan85.mahmud@gmail.com) for further information.", "appendix": "Author contributions\n\n\n\nKMK conceived the study. KMK, KAH, KKS designed the experiments. MH performed the data analysis. 
All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors wish to thank all contributors to this research study, especially the Koya Technical Institute staff who participated.\n\n\nSupplementary materials\n\nSupplementary File 1: The questionnaire in English.\n\nSupplementary File 2: The questionnaire in Kurdish.\n\n\nReferences\n\nAndersson GB: Epidemiologic aspects on low-back pain in industry. Spine (Phila Pa 1976). 1981; 6(1): 53–60. PubMed Abstract | Publisher Full Text\n\nFreburger JK, Holmes GM, Agans RP, et al.: The rising prevalence of chronic low back pain. Arch Intern Med. 2009; 169(3): 251–258. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTsuboi H, Takeuchi K, Watanabe M, et al.: Psychosocial factors related to low back pain among school personnel in Nagoya, Japan. Ind Health. 2002; 40(3): 266–271. PubMed Abstract | Publisher Full Text\n\nManek NJ, MacGregor AJ: Epidemiology of back disorders: prevalence, risk factors, and prognosis. Curr Opin Rheumatol. 2005; 17(2): 134–140. PubMed Abstract | Publisher Full Text\n\nKhruakhorn S, Sritipsukho P, Siripakarn Y, et al.: Prevalence and risk factors of low back pain among the university staff. J Med Assoc Thai. 2010; 93(Suppl 7): S142–8. PubMed Abstract\n\nBener A, Verjee M, Dafeeah EE, et al.: Psychological factors: anxiety, depression, and somatization symptoms in low back pain patients. J Pain Res. 2013; 6: 95–101. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTurner JA, Franklin G, Fulton-Kehoe D, et al.: Prediction of chronic disability in work-related musculoskeletal disorders: a prospective, population-based study. BMC Musculoskelet Disord. 2004; 5: 14. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMehra M, Hill K, Nicholl D, et al.: The burden of chronic low back pain with and without a neuropathic component: a healthcare resource use and cost analysis. J Med Econ. 2012; 15(2): 245–452. PubMed Abstract | Publisher Full Text\n\nMacfarlane GJ, Jones GT, Hannaford PC: Managing low back pain presenting to primary care: where do we go from here? Pain. 2006; 122(3): 219–222. PubMed Abstract | Publisher Full Text\n\nBejia I, Younes M, Jamila HB, et al.: Prevalence and factors associated to low back pain among hospital staff. Joint Bone Spine. 2005; 72(3): 254–259. PubMed Abstract | Publisher Full Text\n\nVos T, Flaxman AD, Naghavi M, et al.: Years lived with disability (YLDs) for 1160 sequelae of 289 diseases and injuries 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet. 2013; 380(9859): 2163–2196. PubMed Abstract | Publisher Full Text\n\nKuorinka I, Jonsson B, Kilbom A, et al.: Standardised Nordic questionnaires for the analysis of musculoskeletal symptoms. Appl Ergon. 1987; 18(3): 233–237. PubMed Abstract | Publisher Full Text\n\nYue P, Liu F, Li L: Neck/shoulder pain and low back pain among school teachers in China, prevalence and risk factors. BMC Public Health. 2012; 12(1): 789. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlem DA, Ephrem MG, Seblewengel L, et al.: Prevalence of Low Back Pain and Associated Risk Factors Among Adama Hospital Medical College Staff, Ethiopia. Eur J Prev Med. 2015; 3(6): 188–192. Publisher Full Text\n\nDuthey B: Priority medicines for Europe and the world: “A public health approach to innovation”. WHO Background paper. 2013; 6. Reference Source\n\nSikiru L, Shmaila H: Prevalence and risk factors of low back pain among nurses in Africa: Nigerian and Ethiopian specialized hospitals survey study. East Afr J Public Health. 2009; 6(1): 22–25. 
PubMed Abstract | Publisher Full Text\n\nErick PN, Smith DR: Low back pain among school teachers in Botswana, prevalence and risk factors. BMC Musculoskelet Disord. 2014; 15(1): 359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUS Department of Health and Human Services: Bone Health and Osteoporosis: A Report of the Surgeon General. Rockville, MD: US Department of Health and Human Services, Office of the Surgeon General, 87, 2004. PubMed Abstract\n\nUS Department of Health and Human Services: How Tobacco Smoke Causes Disease: The Biology and Behavioral Basis for Smoking-Attributable Disease: A Report of the Surgeon General. Atlanta, GA: US Department of Health and Human Services, CDC; 2010. National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, 2011. PubMed Abstract\n\nBrot C, Jorgensen NR, Sorensen OH: The influence of smoking on vitamin D status and calcium metabolism. Eur J Clin Nutr. 1999; 53(12): 920–926. PubMed Abstract | Publisher Full Text\n\nKarwan MK, Azuhairi AA, Hayati KS: Predictors of upper limb disorders among a public university workers in Malaysia. Int J Public Health Clin Sci. 2015; 2(3): 133–150. Reference Source\n\nNoroozi MV, Hajibabaei M, Saki A, et al.: Prevalence of Musculoskeletal Disorders Among Office Workers. Jundishapur J Health Sci. 2015; 7(1): 1–5, e27157. Publisher Full Text\n\nMarras WS, Ferguson SA, Lavender SA, et al.: Cumulative spine loading and clinically meaningful declines in low-back function. Hum Factors. 2014; 56(1): 29–43. PubMed Abstract | Publisher Full Text\n\nRadwin RG, Marras WS, Lavender SA: Biomechanical aspects of work-related musculoskeletal disorders. Theo Issues Ergon Sci. 2001; 2(2): 153–217. Publisher Full Text" }
[ { "id": "21214", "date": "24 Mar 2017", "name": "Md Mizanur Rahman", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle: Rewrite the title and make it short.\nAbstract: Add the background of the study. Accepted\nIntroduction: Accepted\nMethodology: Accepted\nAnalysis and results: Multiple logistic regression: it is actually binary logistic regression analysis. Figure 1 is not well fitted; either delete it or present it in a cross-table. Insufficient description of the logistic regression analysis; please add model fitting information, sample size, goodness of fit (GOF), etc. The relationship between LBP and smoking appeared to be spurious because of the very wide 95% confidence interval. 
Please recheck smoking status and LBP in a cross-table to see the optimum cell frequency.\nConclusion: Accepted.", "responses": [] }, { "id": "21076", "date": "18 Apr 2017", "name": "Rusli Bin Nordin", "expertise": [ "Reviewer Expertise Environmental and occupational health" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nTITLE: Too long. Summarize as: Prevalence and risk factors of low back pain among public technical institute staff in Kurdistan Region, Iraq.1\nABSTRACT: The sample size (n=70) is rather small, potentially leading to spurious associations (wide confidence intervals). P values should be presented as exact values, not p<0.05. The limitation of the study due to the small sample size should be included. Add keywords.\nINTRODUCTION: Accepted. Please correct typos.\nMETHODS: The study design is accepted. Sampling was flawed: initially it appeared to be sampling from two populations (academic and support staff), but later only simple random sampling was applied. Sample size estimation was not done; hence, the sample obtained (n=70) reflects considerable undersampling, leading to possible spurious associations. Questionnaire validation was not done properly: back translation from Kurdish to English was not done. Statistical analysis: Too short. The logistic regression modelling approach needs to be elaborated.\nRESULTS: Table 1 is accepted. Figure 1 is incomplete (asterisks for significant associations were not indicated on the figure). 
Table 2 on logistic regression should include Nagelkerke's R square.\n\nDISCUSSION: Generally accepted. However, the limitations of the study and the possibly spurious associations between smoking status, job tenure, and LBP need to be mentioned. Since there was no mention of the variance in LBP explained by the two independent variables, there should be a short discussion of other possible determinants of LBP among these workers.\nCONCLUSIONS: Generally acceptable. However, the importance of ensuring adequate sample sizes in future studies, to avoid undersampling and the associated statistical issues, needs to be emphasized.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Partly", "responses": [] } ]
1
https://f1000research.com/articles/6-182
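Both reviewers ask for fuller reporting of the logistic regression, including Nagelkerke's R square. As a hedged sketch of how that statistic is computed, it can be derived from the log-likelihoods of the null and fitted models; the values below are hypothetical, not the study's actual model output:

```python
import math

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke's R^2 from null and fitted logistic model log-likelihoods.

    Cox & Snell's R^2 = 1 - exp(2 * (ll_null - ll_model) / n) cannot reach 1
    for binary outcomes; Nagelkerke rescales it by its maximum attainable value.
    """
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)
    max_cox_snell = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_cox_snell

# Hypothetical log-likelihoods for a sample of n = 70 (the study's size);
# real values would come from the fitted regression output.
r2 = nagelkerke_r2(ll_null=-48.0, ll_model=-38.0, n=70)
print(round(r2, 3))
```

A value of 0 means the predictors add nothing over the null model; values approach 1 as the fit becomes perfect.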
https://f1000research.com/articles/6-181/v1
23 Feb 17
{ "type": "Data Note", "title": "A curated transcriptomic dataset collection relevant to embryonic development associated with in vitro fertilization in healthy individuals and patients with polycystic ovary syndrome", "authors": [ "Rafah Mackeh", "Sabri Boughorbel", "Damien Chaussabel", "Tomoshige Kino" ], "abstract": "The collection of large-scale datasets available in public repositories is rapidly growing and providing opportunities to identify and fill gaps in different fields of biomedical research. However, users of these datasets should be able to selectively browse datasets related to their field of interest. Here we make available a collection of transcriptome datasets related to human follicular cells from normal individuals or patients with polycystic ovary syndrome, in the process of their development, during in vitro fertilization. After RNA-seq dataset exclusion and careful selection based on study description and sample information, 12 datasets, encompassing a total of 85 unique transcriptome profiles, were identified in NCBI Gene Expression Omnibus and uploaded to the Gene Expression Browser (GXB), a web application specifically designed for interactive query and visualization of integrated large-scale data. Once annotated in GXB, multiple sample groupings were made in order to create rank lists that allow easy data interpretation and comparison. The GXB tool also allows users to browse a single gene across multiple projects to evaluate its expression profiles in multiple biological systems/conditions in web-based, customized graphical views. 
The curated dataset is accessible at the following link: http://ivf.gxbsidra.org/dm3/landing.gsp.", "keywords": [ "Blastocysts", "cumulus cells transcriptomics", "embryos", "Gene Expression Omnibus", "granulosa cells", "in vitro fertilization", "oocytes", "polycystic ovary syndrome" ], "content": "Introduction\n\nOocytes are maternal germ cells developed in ovaries during the fetal phase and kept throughout the female reproductive ages for monthly maturation and subsequent ovulation following the endocrinological regulation associated with menstrual cycles1. Oocyte maturation starts with the monthly resumption of the first meiotic process of one primary oocyte arrested in prophase I (characterized by the germinal vesicle, also classified as the immature or metaphase I (MI) stage)1. After extrusion of the first polar body, the primary oocyte progresses to metaphase II of the second meiosis and becomes the secondary oocyte, which is competent for fertilization by a sperm.\n\nSuch oocyte growth/maturation occurs inside the ovarian follicle, which is concomitantly undergoing a process called folliculogenesis. Folliculogenesis consists of follicular cell proliferation, development and differentiation1. Primordial follicles containing primary oocytes grow into the mature Graafian follicle with the coordinated progression of the germ cells they hold into secondary oocytes2. Ovulation then occurs under the regulation of gonadotropins and sex steroids, resulting in the release of an oocyte into the peritoneal cavity. Upon fertilization by a sperm, the liberated oocyte resumes its second meiotic division to become the zygote, which then develops into a form of embryo called the morula through several mitotic divisions and compaction of its component cells. 
Continuous cell division further transforms the morula into the blastocyst, which has a fluid-filled cavity and is ready to implant into the uterine endometrium3.\n\nThe oocyte in the ovarian follicle is a primary regulator of follicular cell differentiation and function, whereas metabolic cooperation occurs between oocytes and follicular cells to ensure the substrate supply necessary for oocyte growth/maturation4. The follicular cells consist of two cell groups, theca cells (also known as stromal cells) and granulosa cells. Theca cells form the outer layer of the ovarian follicle, while inner granulosa cells make direct contact with the oocyte. These cells also produce steroid hormones, such as progestins and estrogens, under the control of pituitary gonadotropins, which is important for priming the uterine endometrium and other reproductive tissues to support expected implantation and pregnancy5. During folliculogenesis, granulosa cells continuously proliferate to form the follicular antrum, a fluid-filled cavity formed among the granulosa cell cluster. Upon formation of the antrum, two populations of granulosa cells become identifiable: one cell group known as cumulus cells (CCs), which surround the oocyte and remain associated with it even after ovulation, and the other group called mural granulosa cells, which form an inner layer of the follicle. The oocyte and CCs form the cumulus-oocyte complex, in which these cells directly communicate with each other through the gap junctions created between them. This cellular communication plays a central role in the regulation of folliculogenesis and oocyte maturation by enabling nutritional transfer and the traffic of macromolecules between them6.\n\nIn vitro fertilization (IVF) is one type of assisted reproductive technology developed for the treatment of infertility7. 
It is a procedure consisting of (1) harvesting oocytes from the peritoneal cavity of women whose ovulation has been artificially stimulated, (2) fertilization of the oocytes by mixing with sperm in vitro, and (3) implantation of fertilized oocytes into the uterine cavity. Before implantation, fertilized oocytes are typically cultured for 2–6 days in a growth medium allowing their cell division and multiplication. Although many improvements have been made to IVF, its rate of successful live birth is still less than 50% even in younger women, and the main challenge remains the risk of multiple pregnancies, which is directly associated with an increased incidence of fetal morbidity and infant mortality during the maternal, perinatal and neonatal periods8. To prevent multiple IVF-associated pregnancies, single-embryo transfer is considered, for which selection of the most viable and healthy embryo is critical. Morphological inspection of embryos is employed for selecting high-quality embryos9,10, but it is not sufficient to predict the developmental potential of embryos. Therefore, studies have been performed during the last several years to develop better methods of embryo selection by examining the proteomics or metabolomics of embryos11–13. Recently, the emergence of microarray technology has introduced a new approach to studying the genetic aspects of fertility. Initially, studies employing this new technique focused on the role of the surrounding follicular cells in evaluating the quality of the oocytes they carry, and estimated its usefulness by comparing and correlating data from stromal cells with the quality of embryos and with positive or negative IVF outcomes14–19. 
Such studies also included samples obtained from healthy or diseased women, for example women with polycystic ovary syndrome (PCOS), for whom the IVF success rate is known to be reduced compared with healthy subjects20.\n\nTo help identify knowledge gaps in the field of IVF, ovarian function and/or the influence of reproductive diseases, we provide here a resource enabling mainstream researchers in this field to browse transcriptomic datasets relevant to the oocyte and surrounding stromal cells obtained from healthy subjects or those with PCOS, in association with IVF outcome. Such a resource offers a unique opportunity to identify the genes that play key roles in oocyte maturation, embryonic development and crosstalk between oocytes and granulosa cells, eventually contributing to the future improvement of the IVF procedure.\n\n\nMethods\n\nIn order to identify datasets relevant to IVF, we developed queries designed to include conditions such as oocytes, CCs or granulosa cells in humans. Queries were run on NCBI (https://www.ncbi.nlm.nih.gov/) and are as follows:\n\n- Homo sapiens [organism] AND (oocyte OR oocytes) AND (“Expression profiling by array” [gdsType] OR “Expression profiling by high throughput sequencing” [gdsType]).\n\n- Homo sapiens[organism] AND cumulus cells AND (“Expression profiling by array”[gdsType] OR “Expression profiling by high throughput sequencing”[gdsType]).\n\n- Homo sapiens[organism] AND Granulosa cells AND (“Expression profiling by array”[gdsType] OR “Expression profiling by high throughput sequencing”[gdsType]).\n\n- Homo sapiens[organism] AND (in vitro fertilization OR in vitro fecundation) AND (“Expression profiling by array”[gdsType] OR “Expression profiling by high throughput sequencing”[gdsType]).\n\nThese queries retrieved 85 datasets. 
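The queries above share a fixed pattern; as an illustration, a small helper (hypothetical, not part of the authors' workflow) can assemble such GEO DataSets query strings programmatically:

```python
def build_gds_query(topic_terms):
    """Assemble an NCBI GEO DataSets (gds) query string of the form used
    above: human studies, restricted to expression-profiling study types."""
    topic = " AND ".join(topic_terms)
    profiling = ('("Expression profiling by array"[gdsType] OR '
                 '"Expression profiling by high throughput sequencing"[gdsType])')
    return f"Homo sapiens[organism] AND {topic} AND {profiling}"

query = build_gds_query(["cumulus cells"])
print(query)
```

The resulting string can be pasted into the NCBI GEO search box, or passed as the term parameter of an E-utilities esearch request against db=gds.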
After excluding RNA-seq datasets from the collection and examining each dataset carefully based on the study description and the list of samples and their annotations, to verify their direct relevance to the theme of this data compendium, a total of 23 datasets were selected, of which 12 were successfully uploaded into the data browser. Details of these datasets are recapitulated in Table 1.\n\n*: available at http://ivf.gxbsidra.org/dm3/geneBrowser/list.\n\nAfter curation, each dataset was downloaded from the Gene Expression Omnibus of the National Center for Biotechnology Information website (NCBI GEO) using the SOFT file format, and was then uploaded, along with its study information and available samples, to the Gene Expression Browser, version 1.2 (GXB; http://ivf.gxbsidra.org/dm3/geneBrowser/list), an interactive web-based application developed at the Benaroya Research Institute (Seattle, WA, USA), hosted on the Amazon Web Services cloud (https://github.com/BenaroyaResearch/gxbrowser) (https://aws.amazon.com)21. In GXB, we grouped the samples according to the expected future interpretation and comparison of study results. Each group contains samples of biological replicates, such as samples from control patients, and is compared with another group of samples: for example, the control group vs the PCOS group, or the blastocyst group vs poor-quality embryos. Finally, computed ranking lists were created based on each grouping, using the rank list option provided in the GXB software. 
Therefore, GXB provides the users with a means to easily navigate and filter our uploaded and processed dataset collections, which are available at http://ivf.gxbsidra.org/dm3/landing.gsp.\n\nA web tutorial for GXB is available online: http://ivf.gxbsidra.org/dm3/tutorials.gsp#gxbtut and is briefly reproduced here so that readers can use this article as a standalone resource21,22: “datasets of interest can be quickly identified either by filtering criteria from pre-defined lists shown on the left side of the GXB dataset navigation window, or by entering a query term in the search box located at its left top portion. Clicking on one of the studies listed in the dataset navigation window opens a viewer, which is designed to provide interactive browsing and graphic representations of the large-scale data in an interpretable format. This interface is intended to navigate ranked gene lists and displays transcriptomic results graphically in a context-rich environment. Selecting a gene from the rank-ordered list on the left side of the data-viewing window displays its expression values graphically. The drop-down menus directly above the graphical display give the users the following options: a) Change how the gene list is ranked, which allows the user to change the method used to rank the genes, or to include only the genes that are selected based on his/her specific biological interest; b) Change sample grouping (Group Set button), so that in some datasets, a user can switch between groups, based on, for example, the cell types and the diseases of interest; c) Sort individual samples within a group based on the associated categorical or continuous variables (e.g., gender and age); d) Toggle between the histogram and a box plot with expression values, which are demonstrated as a single point for each sample in the graph; e) Paste color legends for sample groups; f) Select categorical information that is to be overlaid at the bottom of the graph. 
For example, the user can display gender or smoking status using this function; g) Provide a color legend for the categorical information overlaid at the bottom of the graph; h) Download the graph in a jpeg format. Generally, raw data of the measurements per se shown in graphs have no intrinsic utility in the absence of their contextual information. It is therefore important to display such information together with the data shown in the graphs, so that viewers are able to interpret demonstrated data and gain new insights from it. In the datasets provided, the contextual information has been organized under different tabs directly above the graphical display. The tabs can be hidden to make more room for displaying the data plots, or revealed by clicking on the blue “Show Info Panel” button in the top right corner of the display window. Information for the gene, which is selected from the list and is shown in the left side of the display, is available under the “Gene” tab. The study information is also available under the “Study” tab. Further, information on individual samples is provided under the “Sample” tab. Rolling the mouse cursor over a histogram bar while displaying the “Sample” tab enables viewing of any clinical, demographic, or laboratory information provided for the selected sample. Finally, the “Downloads” tab allows advanced users to retrieve the original datasets for their future analysis to be performed outside GXB. It also provides all available sample annotation data together with the expression data.”\n\n\nDataset validation\n\nQuality checks for the datasets uploaded to GXB were performed by validating the specific expression of the Xist transcript (X-inactive specific transcript), a non-protein-coding RNA that mediates the inactivation of one of the two X chromosomes in the diploid cells of female mammals23,24. 
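This Xist check amounts to a simple presence/absence test over an expression matrix. A minimal sketch, assuming an illustrative data layout and detection threshold (both hypothetical; actual values depend on the microarray platform and would come from data retrieved via the GXB “Downloads” tab):

```python
# Hypothetical XIST expression values for samples of one dataset. Diploid
# female cells (cumulus/granulosa) should express XIST; haploid oocytes,
# which lack X inactivation, should not.
xist_expression = {
    "granulosa_1": 2350.0,  # illustrative intensities, not real data
    "granulosa_2": 2810.0,
    "oocyte_1": 12.0,
    "oocyte_2": 9.5,
}

DETECTION_THRESHOLD = 100.0  # assumed cutoff; platform-dependent in practice

def xist_expressed(values, threshold=DETECTION_THRESHOLD):
    """Return {sample: True if XIST is called 'expressed'} for each sample."""
    return {sample: value >= threshold for sample, value in values.items()}

flags = xist_expressed(xist_expression)
print(flags)  # diploid female samples should flag True, oocytes False
```

The same thresholding pattern extends to the other marker genes mentioned below (FIGLA, ZP1-3, BMP15), with the expected direction reversed: present in oocytes, absent in non-ovarian control tissues.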
Since all uploaded datasets comprised samples obtained from women, Xist was expected to be present and expressed at high levels in all samples, except for one dataset comprising oocyte transcriptomic data, as haploid oocytes do not undergo X chromosome inactivation. As expected, when the microarrays provided probes for Xist, it was expressed in all datasets comprising cumulus or granulosa cells. While Xist expression was absent in the oocyte samples of the GSE12034 dataset, it was highly expressed in the non-ovarian diploid tissue samples of the same dataset. Additional validation of our datasets was performed by examining the expression of some ovarian-specific genes, such as those encoding the zona pellucida proteins (ZP1, ZP2 and ZP3), FIGLA (folliculogenesis-specific basic helix-loop-helix gene, also known as factor in the germline α), which encodes a transcription factor regulating the expression of multiple oocyte-specific genes25, and BMP15 (bone morphogenetic protein 15), which is functional in folliculogenesis26. FIGLA was selectively expressed in oocyte samples in the GSE12034 dataset, but not in non-ovarian control tissues. The same expression pattern was also confirmed for ZP1, ZP2, ZP3, and BMP15.\n\n\nData availability\n\nAll datasets were cited in our manuscript. They are designated by their GEO accession numbers (e.g. GSE34526), and can also be accessed using this identifier via the NCBI GEO website (https://www.ncbi.nlm.nih.gov/gds/?term=). Users can download all uploaded dataset files and associated sample information through the GXB tool's “Downloads” tab.", "appendix": "Author contributions\n\n\n\nRM and TK conceived the theme of this dataset collection; SB contributed to the loading and curation of datasets. 
RM prepared the first draft of the manuscript; TK and DC edited it.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by the Intramural Grant of the Sidra Medical and Research Center.\n\n\nAcknowledgements\n\nWe thank all the investigators who deposited their datasets used in this study to NCBI GEO.\n\n\nReferences\n\nHutt KJ, Albertini DF: An oocentric view of folliculogenesis and embryogenesis. Reprod Biomed Online. 2007; 14(6): 758–64. PubMed Abstract | Publisher Full Text\n\nvan den Hurk R, Zhao J: Formation of mammalian oocytes and their growth, differentiation and maturation within ovarian follicles. Theriogenology. 2005; 63(6): 1717–51. PubMed Abstract | Publisher Full Text\n\nVaillancourt C, Lafond J: Human embryogenesis: overview. Methods Mol Biol. 2009; 550: 3–7. PubMed Abstract | Publisher Full Text\n\nLi R, Albertini DF: The road to maturation: somatic cell interaction and self-organization of the mammalian oocyte. Nat Rev Mol Cell Biol. 2013; 14(3): 141–52. PubMed Abstract | Publisher Full Text\n\nRider V, Kimler BF, Justice WM: Progesterone-growth factor interactions in uterine stromal cells. Biol Reprod. 1998; 59(3): 464–9. PubMed Abstract | Publisher Full Text\n\nRossello RA, H D: Cell communication and tissue engineering. Commun Integr Biol. 2010; 3(1): 53–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWood C, Trounson A: Clinical in vitro fertilization. Second edition, Springer-Verlag, 1989. Publisher Full Text\n\nLand JA, Evers JL: Risks and complications in assisted reproduction techniques: Report of an ESHRE consensus meeting. Hum Reprod. 2003; 18(2): 455–7. PubMed Abstract | Publisher Full Text\n\nBorini A, Lagalla C, Cattoli M, et al.: Predictive factors for embryo implantation potential. Reprod Biomed Online. 2005; 10(5): 653–68. 
PubMed Abstract | Publisher Full Text\n\nCoticchio G, Bonu MA, Bianchi V, et al.: Criteria to assess human oocyte quality after cryopreservation. Reprod Biomed Online. 2005; 11(4): 421–7. PubMed Abstract | Publisher Full Text\n\nKatz-Jaffe MG, Gardner DK, Schoolcraft WB: Proteomic analysis of individual human embryos to identify novel biomarkers of development and viability. Fertil Steril. 2006; 85(1): 101–7. PubMed Abstract | Publisher Full Text\n\nKatz-Jaffe MG, McReynolds S, Gardner DK, et al.: The role of proteomics in defining the human embryonic secretome. Mol Hum Reprod. 2009; 15(5): 271–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSingh R, Sinclair KD: Metabolomics: approaches to assessing oocyte and embryo quality. Theriogenology. 2007; 68(Suppl 1): S56–62. PubMed Abstract | Publisher Full Text\n\nHuang X, Hao C, Shen X, et al.: Differences in the transcriptional profiles of human cumulus cells isolated from MI and MII oocytes of patients with polycystic ovary syndrome. Reproduction. 2013; 145(6): 597–608. PubMed Abstract | Publisher Full Text\n\nKenigsberg S, Bentov Y, Chalifa-Caspi V, et al.: Gene expression microarray profiles of cumulus cells in lean and overweight-obese polycystic ovary syndrome patients. Mol Hum Reprod. 2009; 15(2): 89–103. PubMed Abstract | Publisher Full Text\n\nFeuerstein P, Puard V, Chevalier C, et al.: Genomic assessment of human cumulus cell marker genes as predictors of oocyte developmental competence: impact of various experimental factors. PLoS One. 2012; 7(7): e40449. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Montfoort AP, Geraedts JP, Dumoulin JC, et al.: Differential gene expression in cumulus cells as a prognostic indicator of embryo viability: a microarray analysis. Mol Hum Reprod. 2008; 14(3): 157–68. 
PubMed Abstract | Publisher Full Text\n\nWood JR, Dumesic DA, Abbott DH, et al.: Molecular abnormalities in oocytes from women with polycystic ovary syndrome revealed by microarray analysis. J Clin Endocrinol Metab. 2007; 92(2): 705–13. PubMed Abstract | Publisher Full Text\n\nPapler TB, Bokal EV, Tacer KF, et al.: Differences in cumulus cells gene expression between modified natural and stimulated in vitro fertilization cycles. J Assist Reprod Genet. 2014; 31(1): 79–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPalomba S, Daolio J, La Sala GB: Oocyte Competence in Women with Polycystic Ovary Syndrome. Trends Endocrinol Metab. 2016; pii: S1043-2760(16)30168-0. PubMed Abstract | Publisher Full Text\n\nSpeake C, Presnell S, Domico K, et al.: An interactive web application for the dissemination of human systems immunology data. J Transl Med. 2015; 13: 196. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRinchai D, Boughorbel S, Presnell S, et al.: A curated compendium of monocyte transcriptome datasets of relevance to human monocyte immunobiology research [version 2; referees: 2 approved]. F1000Res. 2016; 5: 291. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHendrich BD, Plenge RM, Willard HF: Identification and characterization of the human XIST gene promoter: implications for models of X chromosome inactivation. Nucleic Acids Res. 1997; 25(13): 2661–71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrown CJ, Ballabio A, Rupert JL, et al.: A gene from the region of the human X inactivation centre is expressed exclusively from the inactive X chromosome. Nature. 1991; 349(6304): 38–44. PubMed Abstract | Publisher Full Text\n\nHuntriss J, Gosden R, Hinkins M, et al.: Isolation, characterization and expression of the human Factor In the Germline alpha (FIGLA) gene in ovarian follicles and oocytes. Mol Hum Reprod. 2002; 8(12): 1087–95. 
PubMed Abstract | Publisher Full Text\n\nPersani L, Rossetti R, Di Pasquale E, et al.: The fundamental role of bone morphogenetic protein 15 in ovarian function and its involvement in female fertility disorders. Hum Reprod Update. 2014; 20(6): 869–83. PubMed Abstract | Publisher Full Text\n\nKaur S, Archer KJ, Devi MG, et al.: Differential gene expression in granulosa cells from polycystic ovary syndrome patients with and without insulin resistance: identification of susceptibility gene sets through network analysis. J Clin Endocrinol Metab. 2012; 97(10): E2016–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOuandaogo ZG, Haouzi D, Assou S, et al.: Human cumulus cells molecular signature in relation to oocyte nuclear maturity stage. PLoS One. 2011; 6(11): e27179. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzalez-Muñoz E, Arboleda-Estudillo Y, Otu HH, et al.: Cell reprogramming. Histone chaperone ASF1A is required for maintenance of pluripotency and cellular reprogramming. Science. 2014; 345(6198): 822–5. PubMed Abstract | Publisher Full Text" }
[ { "id": "20488", "date": "01 Mar 2017", "name": "Rawad Hodeify", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe manuscript by Mackeh et al. presents a very interesting and novel approach to identifying genes that are potentially linked to embryonic development. The authors introduce a valuable resource collecting gene expression profiling datasets from oocytes and surrounding stromal cells of healthy subjects or those with polycystic ovary syndrome, in correlation with IVF outcome. This resource is beneficial in providing a catalogue of genes that show altered expression in negative IVF outcomes.\n\nThe transcriptomic datasets are presented in an easy-to-use interactive web application that enables users, including those who are not experts in gene expression profiling, to identify altered gene expression in oocytes and associated cells in normal and diseased states. Overall, I would give the manuscript in its current form a high priority to be indexed. I have a minor comment and a suggestion.\nOn page 3, in the first paragraph (line 10) of the introduction, the secondary oocyte is also commonly known as the ‘egg’; I would suggest that both terms be mentioned.\n\nIt would also be interesting for the authors to check whether adding the term ‘egg’ to the queries yields extra datasets.\n\nI have read this submission. 
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.", "responses": [] }, { "id": "20579", "date": "17 Mar 2017", "name": "François J. Richard", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlthough this manuscript presents a web platform for examining genes involved in PCOS patients in oocytes, cumulus cells and granulosa cells, some concerns are not addressed here.\nBias can be introduced by differences in the methods of RNA isolation, purification and RNA amplification between the published papers.\n\nThe datasets were validated using cell-specific gene expression, but nothing is mentioned about cellular contamination, reference genes (housekeeping genes), etc.\n\nBecause the data are classified by study, using raw values for a specific gene for each sample, it becomes highly difficult to grasp meaningful information. 
It would have been helpful to further analyse the data rather than only show the raw data of each sample.\nIn conclusion, it seems to be a good platform for running a quick analysis across several studies.", "responses": [] }, { "id": "20492", "date": "20 Mar 2017", "name": "Rita Singh", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe manuscript by Mackeh et al. is a collection of the gene expression datasets of oocytes, cumulus cells, and granulosa cells of normal and PCOS patients undergoing IVF. It is a good compilation of related datasets already published in public repositories.\nHowever, there are some concerns that are not addressed here.\nHow were the expression values, as shown in the graphs, obtained from the raw data files? The details of the methodology used to analyse the raw data and to generate the ranked gene lists should be given. In the present form, it is difficult to make use of the data for any meaningful scientific analysis.\nThe purpose of this study is to allow browsing a single gene across multiple projects to evaluate its expression profiles in multiple biological systems/conditions in web-based customized graphical views. 
However, the gene expression data are shown as expression values for some datasets and as log2 expression values for others.\nThere is a typographical mistake in the spelling of ‘granulosa cells’.\nThe PubMed articles linked to the datasets are not available.\nAlthough putting together these data is helpful for the analysis of transcriptome data from normal and PCOS patients undergoing IVF, it would be meaningful, though not mandatory, to include the data available from similar platforms for theca cells.\nIt is a good effort by the authors to put together several studies.", "responses": [] } ]
1
https://f1000research.com/articles/6-181
https://f1000research.com/articles/5-598/v1
07 Apr 16
{ "type": "Research Article", "title": "Control of Aedes aegypti in a remote Guatemalan community vulnerable to dengue, chikungunya and Zika virus: Prospective evaluation of an integrated intervention of web-based health worker training in vector control, low-cost ecological ovillantas, and community engagement", "authors": [ "Gerard Ulibarri", "Angel Betanzos", "Mireya Betanzos", "Juan Jacobo Rojas" ], "abstract": "Objective: To study the effectiveness of an integrated intervention of health worker training, a low-cost ecological mosquito ovitrap, and community engagement on Aedes spp. mosquito control over 10 months in 2015 in a remote urban community in Guatemala at risk of dengue, chikungunya and Zika virus transmission.\n\nMethods: We implemented a three-component integrated intervention consisting of: web-based training of local health personnel in vector control, cluster-randomized assignment of ecological ovillantas or standard ovitraps to capture Aedes aegypti mosquito eggs, and community engagement to promote participation of community members and health personnel in the understanding and maintenance of ovitraps for mosquito control. The intervention was implemented in local collaboration with the Ministry of Health’s Vector Control Programme, and in international collaboration with the National Institute of Public Health in Mexico.\n\nFindings: Eighty percent of the 25 local health personnel enrolled in the training programme received accreditation of their improved knowledge of vector control. Significantly more eggs were trapped by ecological ovillantas than by standard ovitraps over the 10 month (42 week) study period (t=5.2577; p<0.05). Among both community members and health workers, the levels of knowledge, interest, and participation in community mosquito control and trapping increased. 
Recommendations for enhancing and sustaining community mosquito control were identified.\n\nConclusion: Our three-component integrated intervention proved beneficial to this remote community at risk of mosquito-borne diseases such as dengue, chikungunya, and Zika. The combination of training of health workers, the low-cost ecological ovillanta to destroy the second generation of mosquitoes, and community engagement ensured the project met local needs and fostered collaboration and participation of the community, which can help improve sustainability. The ovillanta intervention and methodology may be modified to target other species such as Culex, should it be established that such mosquitoes carry Zika virus in addition to Aedes.", "keywords": [ "Zika virus", "mosquito-borne disease", "virology", "viruses", "vector control", "Latin America", "community engagement", "Dengue virus" ], "content": "Introduction\n\nThere is increasing concern about mosquito-borne disease, amplified by the recent Latin American outbreaks of Zika virus, which have raised new alarm about its rapid spread and illness in vulnerable populations1. That globalization increases vector migration was demonstrated by the first finding of mosquitoes infected with the African West Nile virus in 1999 in the New York City region2. 
The Americas have seen invasion by dengue virus, chikungunya virus, and most recently Zika virus, causing dangerous outbreaks and subsequent morbidity and mortality that have proved difficult to control in a sustainable manner.\n\nAll of these viruses, plus the yellow fever virus, are transmitted mainly by mosquitoes of the Aedes genus (subgenus: Stegomyia), more specifically the African species Aedes aegypti, one of the most aggressive vector mosquitoes capable of transmitting these illnesses, and to a lesser extent the Asian tiger mosquito, Aedes albopictus, which prefers to bite during the day and is emerging as one of the most adaptable insects in the world3–5. The rapid rise of virus transmission on the American continent is due partly to the lack of historical immunological experience (the population has not previously been exposed to these viral infections) or known cross-immunization to the different species and viral genotypes6,7 among the people of the Americas. A second factor affecting transmission is the seasonal abundance of the vector Ae aegypti and other species susceptible to viral infection (e.g. possibly Culex spp.). Furthermore, the traditional control method – pesticide – is both indiscriminate and thwarted by the multiple cases of pesticide resistance in Aedes mosquitoes recently reported8,9.\n\nThe rapid spread of Ae aegypti mosquitoes across most of the American continent began around 200010, and dengue fever is now active in more than 128 countries11, chikungunya virus in 40 countries12, and Zika virus currently in 26 countries and rising13.\n\nSo while the globalization of Aedes spp. mosquitoes started more than 15 years ago, we have not been able to stop or control it. What solutions are available?\n\nSeveral methods have been employed for detecting the presence of mosquitoes in the field, but none have proved perfect. 
Standard ovitraps to monitor Aedes mosquitoes are large 1-litre black buckets (Figure 1) filled with plain water or attractant solutions based on natural plant infusions, along with a wooden strip or porous pellon paper on which the mosquito lands and lays its eggs (called oviposition)14,15.\n\na) Classical ovitrap. b) Covered ovillanta. c) Inside an ovillanta, showing the landing strip.\n\nHuman landing techniques have also been used, but less so now due to ethical concerns. The BG-Sentinel trap is a newer mosquito monitoring device, which has proved similar or superior to the human-landing method for the detection of mosquitoes such as Anopheles darlingi16.\n\nOf these monitoring methods, only the standard ovitraps have thus far proved to effectively reduce the number of adult mosquitoes, and only when they are used for extended periods and continually surveyed and maintained, a very expensive and time-consuming exercise17.\n\nTo overcome these limitations, other ovitraps have appeared on the market – some force the larvae into a confined, flooded partition where they die because they cannot reach the surface to breathe. Others contain glue or larvicidal compounds, chemical or biological (e.g. Bacillus thuringiensis, the fish ‘Poecillia maylandi’), to rapidly destroy the larvae. Each of these new traps has monitoring merits, but none has proved efficacious in the control and reduction of mosquitoes in field studies18.\n\nTypically, the water or attractant solution used in these ovitraps is removed and expelled onto the ground each time the ovitrap is surveyed, and as a result any larvae or pupae are destroyed on the dry soil. The collection of the eggs “glued” to the landing strip allows for counting and thus monitoring. Two problems have been identified with this “dumping” of the solution. 
First, the need for clean replacement water is often problematic in remote areas without basic infrastructure, and second, any eggs dislodged from the landing strip into the dumped water or solution represent the possibility of continuous reproduction of mosquitoes. It is known that Aedes eggs can survive on dry ground for several months19, thus creating a compromised situation of perpetual reproduction.\n\nTo address these problems, we determined that filtration and recycling of the solution (either water or attractant) was an optimal alternative, and we set about developing a modified ovitrap. The solution most attractive to Aedes mosquitoes is one that has previously held larvae of the same species (conspecific larvae), which makes recycling the solution more attractive to female mosquitoes20.\n\nThe development of our modified ovitrap at Laurentian University in Sudbury, Canada was initially geared to the reduction of the local West Nile virus vectors, mainly the species Culex pipiens and Culex restuans. A 90% selective reduction of the adult mosquitoes of these species was achieved over three months around the site where the ovitraps were installed in 2008 (unpublished report, Ulibarri et al.). The attractant solution used in these modified ovitraps was based on the fermentation of natural plants spiked with homemade chemicals known to be oviposition-attractive to these species, which we also developed. The modified ovitrap is commercially available from Green-Strike, Canada (http://green-strike.com/products/mosquito-preventer-2) together with the solutions that selectively attract the local Culex spp. (Cu-Lure™) or Aedes spp. mosquitoes (Ae-Lure™) (e.g. Aedes vexans/Ochlerotatus canadiensis). 
The attractant has the advantage of being both non-toxic and environmentally friendly.\n\nThe same principles driving the Canadian ovitrap were later applied in a field study during 2013-2014 in Petatlán, Mexico against the dengue fever mosquitoes Ae albopictus and Ae aegypti. Aedes oviposition was reduced by 71% at the site with the modified ovitraps, compared to oviposition at a control site (unpublished report, Ulibarri et al.).\n\nBuilding on these previous studies, we undertook a field study in Sayaxche, Guatemala beginning in February 2015. We were determined to use an integrated approach that involved educating the community and the health workers responsible for vector control about the mosquito cycle, sustainable strategies for keeping homes and gardens clean and unattractive to mosquitoes, the implementation and maintenance of mosquito traps, and collaboration between community and health sectors to collectively manage vector control and prevention of mosquito-borne disease. Here we describe our project and its results.\n\nWhen we embarked on the collaborative project, two challenges arose immediately. First, we discovered that local health workers required more extensive training in vector control than initially thought. This was promptly solved when the Instituto Nacional de Salud Publica (INSP) in Cuernavaca, Mexico offered to share a beta version of a web-based programme they created with the International Development Research Centre (IDRC) in Canada, to provide vector control and health information in Spanish (see Annex 1).\n\nSecond, we were unable to procure our original ovitraps from Canada, and needed to search for an appropriate alternative. We decided to use recycled tires – partly because tires represent up to 29% of the breeding sites chosen by Ae aegypti mosquitoes, and partly because it gave us the opportunity to recycle some old tires that were littering the local environment21,22. 
Modifying the ovitrap with tires gave birth to the ovillanta, a piece of a tire fitted with a valve to help direct the attractant solution to a filter (Figure 1).\n\n\nMethods\n\nFor this project, we designed the intervention to include three components in an integrated fashion: training health workers in vector control, a low-cost ecological mosquito ovillanta, and community engagement. We sought to determine the effectiveness of this integrated intervention on Aedes spp. mosquito control over 10 months in 2015 in a remote urban community in Guatemala at risk of dengue, chikungunya and Zika virus transmission.\n\nOur project was enabled by intersectoral collaboration between academic researchers, local health authorities from the Ministry of Health Vector Control Programme of Guatemala, international collaborators in Canada, Guatemala and Mexico, and community members. Our broad goal was to empower the community, supported by trained health personnel, to take on the administration, care and maintenance of the health of the community.\n\nEthics. This study was reviewed and authorized by the representatives of the Program on Vector-Borne Transmitted Diseases of the Ministry of Public Health and Social Assistance of Guatemala (Programa de Enfermedades Trasmitidas por Vectores del Ministerio de Salud Publica y Asistencia Social (ETV/MSPAS) de Guatemala). The MSPAS follows guidelines adopted from the Reglamento del Sub-Comité de ética e investigación (www.paho.com)23. We interviewed and invited each of the health workers to participate in the web-based training and encouraged them to voluntarily subscribe online to the program. We personally engaged people within the community, always accompanied by workers from the Health Unit of Sayaxche. It is customary in the region to first engage the representatives of the community with a verbal invitation to sit down and discuss the project. 
Once permission was obtained, we randomized the houses and invited adults to voluntarily participate, assuring them that if anything caused discomfort we would withdraw them immediately from the program. All invitations, focus groups, and interviews were carried out verbally in Spanish, recorded on a portable recorder, and transcribed soon after. When necessary, an interpreter was present for interviews with non-Spanish-speaking community members. We, accompanied by representatives of the community leaders, explained the benefits of their participation and their responsibilities, allowing them to voluntarily join the study project. The protocol for the community engagement/social participation evaluations was also presented to and approved by the ETV/MSPAS representatives.\n\nStudy area. Sayaxche is a remote urban centre and health unit area in the southwestern part of the Peten territory in Guatemala. It borders the states of Chiapas and Tabasco in Mexico. Sayaxche’s name, of Q’eqchi Maya origin, means ‘the Ceiba’s wye or fork.’ Its surface area is 3752 km2, occupying 10.9% of Peten’s territory. It lies 250 m above sea level, with a warm, varied and humid tropical climate, with a rainy season (June to November) and a dry season (December to May). The monthly average temperature varies between 23°C (December/January) and 32°C (in the dry month of May).\n\nIntervention. Our three-component integrated intervention consisted of: a) web-based training of local health personnel in vector control, b) cluster-randomised assignment of ecological ovillantas or standard ovitraps to capture Ae aegypti mosquitoes, and c) community engagement to promote participation of community members and health personnel in the understanding and maintenance of ovitraps for mosquito control.\n\na) Web-based training of local health personnel in vector control (Annex 2). 
As we embarked on our project we consulted with the local health authorities, including the manager of the Vector Control Programme in Sayaxche, to determine the needs of the personnel. These included improved technical skills, liaising with the community (including cultural sensitivity), and promoting and implementing programmes, as well as a strong need to improve the efficacy of preventive measures against vector transmission in the region. As mentioned earlier, the health workers had greater educational needs than we had initially expected.\n\nTo meet these needs, we collaborated with academic leaders at the INSP and with colleagues in the EcoHealth Leadership Initiative on Vector Borne Diseases for Latin America and the Caribbean, which is part of IDRC. We developed learning objectives, a web-based platform and a bibliography of resources to help strengthen the technical skills of the health personnel in the surveillance, prevention and control of vector-borne diseases using an ecosystem approach that aims for sustainability and is tailored to local community needs (see Annex 1 and Annex 2 for more details of the curricular planning). The web-based training modelled a dynamic process of teaching and learning, incorporating both theoretical and practical aspects of how to prevent and control vector-borne diseases. It utilized diverse learning strategies with specific content sessions, directed homework, and literature review. It took place over 5 months (April to September) in 2015, covering 40 hours in total, with 50% of the time devoted to practice and homework on a virtual platform.\n\nWe measured the improved knowledge and skills of local health personnel in vector control by their successful accreditation following the web-based training programme.\n\nb) Control of vector mosquitoes using ovillantas. We developed an ecological ovillanta from recycled tires, and compared this device to standard ovitraps in terms of mosquito oviposition. 
We quantified egg collection from standard ovitraps at sites with ovillantas and compared it to standard ovitraps at sites without ovillantas.\n\nWe modified the standard ovitrap to create ovillantas, made out of a piece of recycled tire with a PVC flanged wash-basin drain-type tube and a valve, for the capture of the vector mosquito’s eggs (Ae aegypti). The attractant solution, Ae-Lure (8 mL; Green-Strike, Canada), was placed in the ovillanta and two litres of clean well water were added. Each ovillanta contained two landing strips – one on each end of the apparatus. For the landing strips we used 15×10 cm pieces of pellon paper, but other porous material might be used.\n\nWe compared the ecological ovillantas with the standard ovitrap, which was a black bucket containing one litre of clean well water with one piece of the same pellon paper (but of a larger size, 30×10 cm) around the rim.\n\nThe urban core of Sayaxche contains 15 neighbourhoods, comprising 14,454 inhabitants within 3,882 houses. We designed the random sampling based on our previous study in Petatlán, Mexico (unpublished report, Ulibarri et al.) and modified it to fit Sayaxche, applied as follows: 14 neighbourhoods (out of 15) were divided in half to make two separate groups, the study group and the control group. Within each neighbourhood, 3 contiguous blocks of houses were randomly chosen and one house at the centre of each side of the block (north, east, south and west) was consulted to set up either 2 ovillantas and 1 standard ovitrap (study site) or 1 standard ovitrap per house (control site). If a chosen house declined to participate, the next house was invited. In the end each neighbourhood had 12 households participating. The total number of sites with ovillantas/ovitraps was 84 for the study group and 84 for the control group (see Table 1 for the names of the neighbourhoods).\n\n* Monitoring with ovitraps model: Ministry of Health, México. 
Programme of Dengue Control: Vigilance, Prevention and Control. CENAPRECE. Annex 3\n\nOnce the ovillantas and ovitraps were installed in the different households, the health workers started cleaning the ovillantas in the presence of household members; later on, as interest increased, household members themselves became responsible for cleaning the ovillantas or ovitraps, in the presence of health workers. At the beginning of our project, the joint care and egg collection was carried out once a week, on a day pre-established with the community. During the high season (June-October), it was necessary to monitor and clean the ovillantas and ovitraps twice a week, in order to avoid the production of mosquitoes due to the rapid development of the larvae at high temperatures. The pellon papers containing the eggs were removed and given to the health workers, and new clean pieces of paper were installed. The papers with the eggs were taken to the laboratory of the local health unit by the health workers, and counted there by specialized personnel.\n\nAt the beginning of the project we started with three different types of mosquito traps: an ecological ovillanta with Ae-Lure, an ecological ovillanta with Malaria-Lure, and the standard ovitrap with water. But shortly after initiation, we discovered Malaria-Lure to be ineffective for the malaria-parasite-carrying Anopheles mosquitoes. Therefore, by week 4 we changed all ecological ovillantas to include Ae-Lure only.\n\nWe measured mosquito control by the mean number of eggs counted in ecological ovillantas versus standard ovitraps during 31 collections over 10 months between Feb and Nov 2015. Egg counting was undertaken once a week.\n\nc) Community engagement. Through training, our intersectoral multidisciplinary approach aimed to improve and sustain the technical competencies of health workers. Through the modification of the ovitrap, we were sensitive to cost, ecological concerns, and local applicability. 
For both of these integrated interventions to work, community involvement was required. Therefore, we simultaneously undertook community engagement activities, recognising the importance of sustained social participation by both community members and health workers.\n\nWe developed strategies to engage the main representatives of the community, in order to motivate participation in the mosquito control programme. We focused on education of the community (in addition to health workers) about the mosquito cycle, sustainable strategies for keeping homes and gardens clean and unattractive to mosquitoes, the implementation and maintenance of mosquito traps, and collaboration between community and health sectors to collectively manage vector control and prevention of mosquito-borne disease. Specifically, community members were taught how to safely keep the ovillantas or ovitraps, including how to clean the traps, change the pellon paper and hand the pellon paper to the health workers. They were incentivised by the education they received on how to reduce the number of mosquitoes and thus keep themselves and the community healthy. Another strong incentive was the absence of fogging, and thus of pesticides, in their neighbourhood, provided they cared for the ovillantas properly and mosquito numbers remained low.\n\nWe measured social participation by surveying community member and health worker perceptions, knowledge and participation in mosquito control using qualitative methods (see Annex 3 for fuller details of the evaluation of social participation). Focus groups were undertaken in eight neighbourhoods, four from the study site and four from the control site. Within each neighbourhood, one person from each of 24 randomly selected households (12 households with either ovillantas or ovitraps and 12 households that had neither, but were located within the neighbourhoods of the study) was invited to participate in a focus group. 
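The stratified random selection of focus-group households described above can be sketched as follows. This is a minimal illustration with hypothetical household IDs and a hypothetical helper name, not the authors' actual sampling procedure:

```python
import random

def select_focus_households(with_traps, without_traps, per_stratum=12, seed=None):
    """Pick `per_stratum` households without replacement from each stratum:
    those with ovillantas/ovitraps and those with neither (hypothetical sketch)."""
    rng = random.Random(seed)
    return rng.sample(with_traps, per_stratum) + rng.sample(without_traps, per_stratum)

# Hypothetical household IDs for one neighbourhood
with_traps = [f"T{i}" for i in range(40)]
without_traps = [f"N{i}" for i in range(60)]
selected = select_focus_households(with_traps, without_traps, seed=42)
# `selected` holds 12 households from each stratum, 24 in total
```

Fixing `seed` makes the draw reproducible; `random.sample` guarantees no household is selected twice within a stratum.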
Each of the sixteen focus groups had between three and eight household members.\n\nIn addition, qualitative interviews were conducted with three health workers who undertook the training and participated in the mosquito control programme, and one programme manager. Evaluation of social participation and analysis of the qualitative data was first carried out at the beginning (February 2015), during the setup of the ovillantas and ovitraps in each household, in order to establish a baseline for evaluation. This was followed by a second interview at the end of the project (December 2015). The interviews were carried out following the protocol validated by the ETV/MSPAS representative and were conducted by an experienced researcher (MB) independent from the other two components of the project (ovillanta study, web-based training).\n\nData analysis. We analysed quantitative variables (egg count) and categorical variables (area, type of trap) for their frequency of distribution, measuring the central tendency with a graphical representation and distribution tests. The software package Stata13® was used to carry out the t-tests, which analyse the difference between two means relative to the dispersion of the scores (see Dataset 1).\n\nWith respect to the qualitative data, we undertook an analysis per topic to understand perceptions, social participation, usefulness of strategies for control and prevention of the diseases, negative and positive factors of the control programme, and recommendations for future strategies.\n\nEntomological poll. A survey of potential breeding sites was conducted in each house of the intervention neighbourhoods, where the ovillantas were placed. Each container capable of accumulating water (drums, tires, cans, etc.) was analysed and larvae/pupae were counted if present. 
The percentage of each container type that was positive was established relative to the total number of containers found (see Dataset 2).\n\n\nResults\n\na) Web-based training of health workers in vector control. Twenty-five health workers subscribed to the web-based training programme (of 35 eligible). Most had obtained elementary school level education (67%); the remainder had achieved junior high school level (18%), high school (10%), or university undergraduate level (5%). Following the web-based training (comprising online lectures, case studies, virtual discussions and homework), teams of students worked on a proposed ecological intervention for the prevention and control of the vector mosquito in Sayaxche using a manual validated by the academic unit of the INSP (Annex 1) and were evaluated (directly by the INSP evaluation personnel, independently from the project’s participants). Eighty percent of the students were accredited, obtaining between 75 and 95 points out of 100, and receiving a certification issued by the INSP.\n\nThe perceptions and evaluations of the course by the students and programme coordinators were remarkably positive, with a recommendation that the course be implemented in all health units across Guatemala. Nevertheless, several difficulties were reported by the students. The most important was the difficulty of accessing computer equipment and the lack of training in the use of computers with internet in remote places. Some students complained about the excess of homework and the little time allocated for it. However, there was a general consensus that the applied benefits generated by the learning process were immediately felt in the technical field work, giving the health workers more confidence.\n\nb) Ecological ovillanta. A total of 84 households (from 2204 in the area) were the focus of study within the 7 study site neighbourhoods. 
All 84 households that agreed to participate in our study were allocated to the study intervention (2 ecological ovillantas and 1 standard ovitrap). Intervention households were no more than 50 metres apart (in three contiguous blocks), which we felt gave sufficient coverage because mosquitoes tend to fly up to 500 m in search of blood or an oviposition site24.\n\nOvillantas were set up on Feb 8, 2015; the monitoring ovitraps were installed only on April 12, 2015. All of the systems, ovillantas and control ovitraps, were monitored and cleaned by the health workers on a weekly basis, with community members present. The health workers filtered and recycled the attractant solution, counting and eliminating any eggs deposited on the landing strips. One interruption of this weekly schedule took place, for 5 weeks in August and September, due to labour and political problems within the Ministry of Health in Guatemala.\n\nThe mean weekly egg count was higher in neighbourhoods with ovillantas, with a mean of 19.26334 (SE 0.4707; 95% CI: 18.34056, 20.18613), than at the control sites using standard ovitraps, with a mean of 13.2787 (SE 0.8249; 95% CI: 11.66214, 14.89748). The difference was statistically significant (t= 5.2577; p< 0.05).\n\nFigure 2 (left) shows the total number of Aedes eggs collected and destroyed per month at the study sites using ovillantas (blue) (181,336 eggs over 10 months) and ovitraps (orange) (27,053 eggs).\n\nFigure 2 (right) shows the number of Aedes eggs collected and destroyed at the study site, per ovillanta (blue) or ovitrap (orange). This shows that ovillantas are almost twice as effective in attracting Aedes oviposition as the standard ovitraps.\n\nLeft: Total eggs counted per site. Right: Eggs per site and per device (ovillanta (blue)/ovitrap (orange)). 
Site with ovillantas = 181,336 total Aedes eggs; control site = 27,053 total Aedes eggs.\n\nFigure 3 shows the mean weekly egg counts for the ovillantas and standard ovitraps over the 42 weeks of the study.\n\nHowever, within the study-site households, there were no statistically significant differences in the number of Aedes eggs collected from ecological ovillantas and standard ovitraps over the 42-week study. We believe this may be because mosquitoes deposit a natural pheromone with each egg when they oviposit; the recycling may therefore have concentrated this pheromone (the initial solution was not discarded, but new solution was added to it). Understanding this further will be the focus of a future study.\n\nWe also observed that, independent of the data obtained per neighbourhood, the number of Aedes eggs collected was greater in standard ovitraps (mean = 20.51; SE 0.62; 95% CI: 19.30–21.73) than in ovillantas at the same sites (mean = 15.85; SE 0.55; 95% CI: 14.78–16.93), a statistically significant difference (t = -5.58; p < 0.05). Furthermore, we observed that standard ovitraps at the same sites as ovillantas (i.e., study sites; mean = 23.99; SE 0.81; 95% CI: 22.40–25.57) collected more Aedes eggs than standard ovitraps at the control sites with no ovillantas present (mean = 13.28; SE 0.82; 95% CI: 11.62–14.90). We speculate that there may be a synergistic attraction effect when ovillantas are in close proximity to standard ovitraps, which could explain why the cluster of ovillantas/ovitraps produced better results per unit than an ovitrap alone (Figure 2, right).\n\nThe number of Aedes eggs observed also differed between neighbourhoods. Based on an entomological poll we conducted (Table 2), the ovillantas/ovitraps also competed with natural breeding places within a given house. 
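The between-site comparison above can be illustrated with a brief, hypothetical sketch (this is not the authors' analysis script; the real weekly counts are in Dataset 1). It draws synthetic Poisson counts whose means roughly match the reported values (≈19.3 eggs/week at ovillanta sites vs. ≈13.3 at control sites) and applies Welch's t-test, which does not assume equal variances; a nonparametric check is added because count data are often not normally distributed.

```python
# Illustrative sketch with synthetic data (assumed means, not the study's raw counts).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical weekly egg counts over a 42-week study.
ovillanta_counts = rng.poisson(lam=19.3, size=42)  # intervention sites
ovitrap_counts = rng.poisson(lam=13.3, size=42)    # control sites

# Welch's t-test: compares means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(ovillanta_counts, ovitrap_counts,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Count data are rarely normal; a rank-based test is one simple robustness check.
u_stat, u_p = stats.mannwhitneyu(ovillanta_counts, ovitrap_counts)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.4f}")
```

A Poisson or negative binomial regression would be a more principled model for such counts; the sketch only shows the shape of the comparison reported in the text.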
Among the study neighbourhoods, El Centro stood out as the site with the highest egg count across the study period (mean = 47.23; 95% CI: 43.12–51.34) and the greatest breeding-container diversity, with 127 different container types (drums: 16.26%; laundry water basins: 21.95%; tubs: 9.76%; other smaller containers: 21.95%), of which 40%, 37.04%, 16.97% and 22.22%, respectively, had a positive larval/pupal count. El Centro produced 23.58% of the total Aedes larvae found among all the study sites. In the neighbourhood of La Esperanza (mean = 41.22), 7.79% of all containers contained larvae/pupae; among laundry water basins and buckets/canisters, 12.50% of each type were positive. In the neighbourhood of San Miguel, 50% of the tires found contained larvae/pupae, followed by 37.50% of laundry water basins. In contrast, the neighbourhoods of La Pista and Hojita Verde showed the lowest levels of larval production among the inspected containers, with 4.55% (mean = 4.05) and 4.44% (mean = 0.14), respectively. In both of these neighbourhoods, drums were the main containers in which larvae were found (21.05% and 16.67%, respectively).\n\nSource: Entomological poll of the Program in Sayaxche, based on the norms of the Ministry of Health, Mexico (CENAPRECE).\n\nN = number of water containers of each type with larvae/pupae.\n\nWe were unable to conduct definitive viral identification tests because the technology is unavailable or inaccessible in these remote locations. The low numbers in the official reports (Table 3) show the limitations of establishing surveillance of confirmed or suspected cases in several communities around Sayaxche. Cases are diagnosed only in extreme clinical conditions of fever, rash, etc.\n\n(Data from: Laboratorio Nacional de Salud, Guatemala City, Guatemala.)\n\n*There were 2 imported cases confirmed in Sayaxche.\n\nc) Social participation. 
Sixteen focus groups with household members were conducted across eight neighbourhoods – four within the study site and four within the control site. Each group comprised three to eight participants (70% women), of the 12 people invited per focus group.\n\nTaken together, several themes of sociocultural issues were identified from the focus groups, which can help guide future strategies geared to strengthening community participation in mosquito control:\n\n1. Belief that mosquitoes breed only in natural ponds, not in backyards.\n\n2. Belief that cleaning the house and garden are tasks for women only.\n\n3. Belief that the Ministry of Health’s services are not efficient.\n\n4. Preference for self-medication using local medicinal plants (e.g. Cat’s Claw (Uncaria tomentosa)) against dengue and chikungunya fever.\n\n5. Very little information is provided to the public about the causes and consequences of illness.\n\n6. Dependency of the community on public services to maintain the cleanliness of the streets.\n\n7. Dependency of the community on public antimalarial control, without participation.\n\n8. Low regard for the advantages of community participation.\n\n9. Communication difficulties between some ethnic groups.\n\n10. Low acceptance of new mosquito control methods and transmission prevention strategies, despite a desire for less reliance on pesticides.\n\nIn terms of the ecological ovillanta as a form of mosquito control, community members provided several views. Overall, the recycling of the attractant solution was welcomed, given the difficulty of obtaining clean water in the region. There was satisfaction with the use of both the ovillantas and ovitraps as preventive methods and as a means to reduce the overall number of adult mosquitoes. 
There was interest in knowing how many Aedes eggs were collected each week. Women in particular were keen to be more involved in health-related activities and showed interest in knowing how the system worked and learning how to maintain it. Among those surveyed who did not have ovillantas, many showed interest in having them installed at their households. In general, the impression was that the ovillantas were effective in reducing adult mosquitoes.\n\nThe health workers and manager interviewed said the training programme motivated and strengthened their social and technical field work. Financial resources, vehicles, fuel and personnel were said to be scarce. They requested more research in the field so they could continue to learn, and stressed that the implementation of any new strategies should be sustained.\n\n\nDiscussion\n\nWe found that our three-component integrated intervention proved beneficial to this remote community at risk of mosquito-borne diseases such as dengue, chikungunya, and Zika. The combination of health worker training, low-cost ecological ovillantas, and community engagement ensured the project met local needs and fostered collaboration and participation by the community, which can help improve sustainability.\n\nThe integrated involvement and web-based certification of local health workers strengthened expertise in the area and generated evidence on ecosystem-based alternatives against Aedes mosquitoes.\n\nThe ecological ovillantas, made from recycled material (garbage that, when left in the field, increases the potential for creating breeding sites), proved effective in capturing Aedes eggs and possibly in reducing the adult mosquito population. Remarkable acceptance and willingness to participate were observed, not only from the community but also from the health workers who monitored implementation. 
This was bolstered by community members learning about and observing the early life stages of the vector, as well as observing the number of Aedes eggs collected in their households, which allowed them to relate these to the presence and numbers of adult mosquitoes potentially produced. There may be a synergistic effect between the motivation to participate in mosquito control, using the innovative strategy of the ovillantas, and the complementary actions needed to maintain an orderly and clean house and backyard. This serves to avoid the creation of artificial breeding sites for the Ae. aegypti mosquito and the transmission of viral infections.\n\nInterestingly, a recent report by Ayres25 from the Brazilian Oswaldo Cruz Foundation described that species of mosquitoes other than Aedes, including Culex spp., can be infected with the Zika virus in laboratory settings. Recent reporting from Brazil has also raised concerns that Zika virus infection of Culex mosquitoes is vastly more common than of Aedes and is being overlooked in prevention and control efforts26. So while the contribution of these other species to Zika transmission has not yet been firmly established, we anticipate that our ovillanta approach could potentially be used to reduce populations of either species. For example, Culex restuans/pipiens mosquitoes were collected in the first Canadian study using a modified ovitrap. In the summer of 2007, approximately 3.2 million eggs (in rafts) were counted and destroyed in the city of Sudbury, Canada in a 90-day study using 150 modified ovitraps, resulting in a 90% reduction of the adult Culex spp. population within the study sites (unpublished results, Ulibarri et al.). 
The only requirement to attract a different species, or to have an effect on both Aedes and Culex mosquitoes in the region, is a change of the attractant solution used in the ovillanta, or the setup of a second ovillanta with a different attractant solution – the equipment stays the same.\n\nIn general, the level of knowledge that community members and health workers held about the different viral infections was minimal and, at times, incorrect. Clearly there is a need for novel strategies aimed at gaining and maintaining the attention of the community; traditional recommendations provided by health workers tend to bore them, they said. More dynamism is necessary, especially with children. In addition, it is important to engage infected patients actively with health centres to avoid further mosquito infection. In this way, infected symptomatic cases can be properly recorded and monitored, and the patient can be followed accordingly.\n\nThe participation of the population in vector-borne disease prevention and control is an area that requires more effort and attention. One aspect requiring sensitivity is the clear extent of individualism within communities in this area of Guatemala, and the evident conflicts among different ethnic groups, mainly around culture and language. These work against social cohesion and participation. All the government groups responsible for ensuring health and safety require coordination and collaboration, including in reducing the number of abandoned lots where mosquitoes breed. It would be beneficial for the government to apply an ecosystem approach27 for the communal benefit, and to establish a mechanism for people not to throw garbage on the streets and to learn to recycle it properly.\n\nThere were several limitations to our study.\n\nFirst, our project was not able to assess the epidemiological impact of the methodology on viral transmission using sequential seroprevalence surveys. 
However, the intensity of dengue transmission while the project was under way was higher in cities close to Sayaxche, such as Las Cruces and La Libertad. During our study period, the reported incidence of dengue in 2015 for La Libertad, Las Cruces and Sayaxche was 3.33, 10.91 and 2.06 per 10,000 population, respectively. The buffering effect possibly associated with the ovillanta intervention was also a direct observation of the Director of the Program in Sayaxche. The Pan American Health Organization (PAHO) reports 18,058 probable cases (1,228 laboratory-confirmed) across Guatemala in 201528 and 1,179 probable cases up to week 11 of 2016. We ask ourselves: how has the city of Sayaxche been spared this national burden? The confirmed incidence in La Libertad was 5.29 times higher than that registered in Sayaxche during the same year. The possible effectiveness of the ovillanta intervention in reducing dengue virus transmission within the city of Sayaxche is suggested by the fact that zero autochthonous cases were reported and only three imported cases were confirmed (unpublished data, Area de Salud Peten-Suroccidental). We remain cautious and will continue to monitor the epidemiology closely.\n\nSecond, while we believe the neighbourhoods and households shared the same or similar features in terms of sociodemographics, climate, and other characteristics, we did not specifically measure these variables. And although we systematically monitored the presence of backyard containers with larvae or potential water collection (in part through the entomological poll), we did not document or measure such containers in households that may also have served as mosquito breeding places (i.e., tubs, drums, etc. that filled with rain water).\n\nThird, we did not implement and complete the health worker training prior to initiating the second and third components of the study. 
The web-based training for health workers was instead provided simultaneously; even so, the trained personnel gained a better understanding of the vector control process and transferred information to the community more effectively. All of the health workers involved in installing and monitoring the ovillantas undertook the web-based training, and only one failed to be accredited. We believe that training the personnel earlier, prior to interacting with the community, might have produced better results. The fact that the community was already acquainted with most of the health workers facilitated acceptance of the community engagement activities and of the ovillantas study.\n\nNonetheless, our project provides evidence for a promising alternative to harmful pesticides and standard ovitraps at a time when the threat of viral outbreaks is increasing. By incorporating ecological and community-oriented elements, this alternative has the potential to be effectively scaled up and sustained.\n\n\nData availability\n\nF1000Research: Dataset 1. Database Sayaxche 2015. The raw data of Aedes spp. egg collection during the study in Sayaxche, Guatemala during 10 months of 2015, 10.5256/f1000research.8461.d11891729\n\nF1000Research: Dataset 2. Raw data of the entomological poll, 10.5256/f1000research.8461.d11891930", "appendix": "Author contributions\n\n\n\nGU developed the technology. GU, AB and JJR designed the study and implemented the methodology. AB modified, administered and supervised the INSP course and evaluation, and performed the statistical analysis. MB designed the community focus group evaluation and carried out the interviews. JJR managed the health personnel group and collected the data. GU and AB wrote the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nGU used to advise the developers of the Green-Strike ovitrap in Canada, and declares that he has not received financial compensation for his contributions. 
All authors declare that they have no competing interests.\n\n\nGrant information\n\nThis research was conducted under project # 0624-01-10, funded by Grand Challenges Canada through their Stars in Global Health Round 7 programme, Phase I.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors would like to thank the Grand Challenges Canada program for financial support of the project, and Laurentian University of Sudbury for their steadfast administrative support. We thank the INSP for allowing us to test their beta programme; Dr. Roy FitzGerald-Alvarado (Director) and the Health Unit of Sayaxche’s technical personnel for their hard work, despite labour difficulties within the Ministry of Health; and Ing. Enrique Ulibarri and Mtra. Beatriz Arrez for their expertise and support with the ovillantas and attractants. We thank Dr. Jocalyn Clark for editorial assistance with this manuscript.\n\n\nSupplementary material (Spanish-language)\n\nAnnex 1. Intensive course to strengthen the surveillance, prevention and control of dengue transmission: an ecosystemic approach. (Curso Intensivo para el fortalecimiento de la vigilancia, prevencion y control del dengue con enfoque ecosistemico, 16 pages).\n\nClick here to access the data.\n\nAnnex 2. Competence and curricular modules of the course offered to the tactical operations personnel in Sayaxche, Peten, Guatemala. (Competencia y Modulos Curriculares del Curso para el Personal Tactico Operativo de Sayaxche, Peten, Guatemala, 1 page, small print).\n\nClick here to access the data.\n\nAnnex 3. Evaluation of the project: Integral participation and use of ovillantas in the prevention and control of malaria and dengue in Sayaxche. 
(Evaluación del proyecto Participación integral y ovitrampas en la prevención y control del vector de la Malaria y Dengue en Sayaxché, 21 pages).\n\nClick here to access the data.\n\n\nReferences\n\nKilpatrick AM, Randolph SE: Drivers, dynamics, and control of emerging vector-borne zoonotic diseases. Lancet. 2012; 380(9857): 1946–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEidson M, Schmit K, Hagiwara Y, et al.: Dead crow density and West Nile virus monitoring, New York. Emerg Infect Dis. 2005; 11(9): 1370–1375. PubMed Abstract | Free Full Text\n\nPaupy C, Delatte H, Bagny L, et al.: Aedes albopictus, an arbovirus vector: from the darkness to the light. Microbes Infect. 2009; 11(14–15): 1177–1185. PubMed Abstract | Publisher Full Text\n\nRezza G: Aedes albopictus and the reemergence of dengue. BMC Public Health. 2012; 12: 72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchaffner F, Medlock JM, Van Bortel W: Public health significance of invasive mosquitoes in Europe. Clin Microbiol Infect. 2013; 19(8): 685–692. PubMed Abstract | Publisher Full Text\n\nCruz-Pacheco G, Esteva L, Vargas C: Multi-species interactions in West Nile virus infection. J Biol Dyn. 2012; 6(2): 281–298. PubMed Abstract | Publisher Full Text\n\nLi J, Gao N, Fan D, et al.: Cross-protection induced by Japanese encephalitis vaccines against different genotypes of Dengue viruses in mice. Sci Rep. 2016; 6: 19953. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFaucon F, Dusfour I, Gaude T, et al.: Identifying genomic changes associated with insecticide resistance in the dengue mosquito Aedes aegypti by deep targeted sequencing. Genome Res. 2015; 25(9): 1347–1359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeming R, Manrique-Saide P, Medina Barreiro A, et al.: Spatial variation of insecticide resistance in the dengue vector Aedes aegypti presents unique vector control challenges. Parasit Vectors. 2016; 9(1): 67. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrathwaite Dick O, San Martín JL, Montoya RH, et al.: The history of dengue outbreaks in the Americas. Am J Trop Med Hyg. 2012; 87(4): 584–593. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWHO: Dengue and severe dengue. 2016. Reference Source\n\nWHO: Dengue control-Chikungunya. Reference Source\n\nECDC: Countries and territories with local Zika transmission. 2013. Reference Source\n\nL’Ambert G, Ferré JB, Schaffner F, et al.: Comparison of different trapping methods for surveillance of mosquito vectors of West Nile virus in Rhône Delta, France. J Vector Ecol. 2012; 37(2): 269–275. PubMed Abstract | Publisher Full Text\n\nGovella NJ, Chaki PP, Killeen GF: Entomological surveillance of behavioural resilience and resistance in residual malaria vector populations. Malar J. 2013; 12: 124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGama RA, Silva IM, Geier M, et al.: Development of the BG-Malaria trap as an alternative to human-landing catches for the capture of Anopheles darlingi. Mem Inst Oswaldo Cruz. Rio de Janeiro, 2013; 108(6): 763–771. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorrison AC, Zielinski-Gutierrez E, Scott TW, et al.: Defining challenges and proposing solutions for control of the virus vector Aedes aegypti. PLoS Med. 2008; 5(3): e68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCornel AJ, Holeman J, Nieman CC, et al.: Surveillance, insecticide resistance and control of an invasive Aedes aegypti (Diptera: Culicidae) population in California [version 1; referees: 1 approved with reservations]. F1000Res. 2016; 5: 194. Publisher Full Text\n\nRezende GL, Martins AJ, Gentile C, et al.: Embryonic desiccation resistance in Aedes aegypti: presumptive role of the chitinized Serosal Cuticle. BMC Dev Biol. 2008; 8(1): 82. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTilak R, Gupta V, Suryam V, et al.: A Laboratory Investigation into Oviposition Responses of Aedes aegypti to Some Common Household Substances and Water from Conspecific Larvae. Med J Armed Forces India. 2005; 61(3): 227–229. Publisher Full Text\n\nMarten GG, Borjas G, Cush M, et al.: Control of Larval Aedes aegypti (Diptera: Culicidae) by Cyclopoid Copepods in Peridomestic Breeding Containers. J Med Entomol. 1994; 31(1): 36–44. PubMed Abstract | Publisher Full Text\n\nPena CJ, Gonzalvez G, Chadee DD: A modified tire ovitrap for monitoring Aedes albopictus in the field. J Vector Ecol. 2004; 29(2): 374–375. PubMed Abstract\n\nReglamento del Sub-Comité de ética e investigación. Reference Source\n\nHonório, NA, Silva Wda C, Leite PJ, et al.: Dispersal of Aedes aegypti and Aedes albopictus (Diptera: Culicidae) in an Urban Endemic Dengue Area in the State of Rio de Janeiro, Brazil. Mem Inst Oswaldo Cruz. Rio de Janeiro, 2003; 98(2): 191–198. PubMed Abstract | Publisher Full Text\n\nAyres CF: Identification of Zika virus vectors and implications for control. Lancet Infect Dis. 2016; 16(3): 278–279. PubMed Abstract | Publisher Full Text\n\nNolen S: WHO may be leading Brazil down wrong path on Zika virus. Globe and Mail. 2016. Reference Source\n\nQuintero J, Carrasquilla G, Suárez R, et al.: An ecosystemic approach to evaluating ecological, socioeconomic and group dynamics affecting the prevalence of Aedes aegypti in two Colombian towns. Cad. Saúde Pública, Rio de Janeiro, 2009; 25(Sup 1): S93–103. PubMed Abstract | Publisher Full Text\n\nPAHO: Dengue. 2016. Reference Source\n\nUlibarri G, Betanzos A, Betanzos M, et al.: Dataset 1 in: Control of Aedes aegypti in a remote Guatemalan community vulnerable to dengue, chikungunya and Zika virus: Prospective evaluation of an integrated intervention of web-based health worker training in vector control, low-cost ecological ovillantas, and community engagement. 
F1000Research. 2016. Data Source\n\nUlibarri G, Betanzos A, Betanzos M, et al.: Dataset 2 in: Control of Aedes aegypti in a remote Guatemalan community vulnerable to dengue, chikungunya and Zika virus: Prospective evaluation of an integrated intervention of web-based health worker training in vector control, low-cost ecological ovillantas, and community engagement. F1000Research. 2016. Data Source" }
[ { "id": "14070", "date": "31 May 2016", "name": "Laith Yakob", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nTitle and Abstract: I think these are both a little misleading. The integrated intervention was not evaluated in such a way that would allow for rigorous evaluation (there were too few trial arms) – I think the presented results can only be used to compare the two different trap types. The sentence in the abstract that indicates the higher mosquito egg collections found using the novel trap is also misleading – analysing total egg counts over the course of the experiment actually indicated higher captures with the standard ovitrap.\n\nArticle content: There are major issues with the study design. Not least of all, it does not allow for the assessment of any additive benefits from community engagement and healthcare worker education. The authors tested their new ‘ovillantas’ trap against standard ovitraps. Unfortunately, they only used lure-baited water in their novel trap, meaning any difference found between trap types could just as easily be a result of the lure as it was the new trap design. 
Although cost was mentioned several times in the study, no indication was provided of the relative costs of the new tested trap (plus lure and construction time) versus the standard trap, and so no conclusions could be drawn pertaining to cost effectiveness.\n\nRecycling the water in the traps is a good idea in the short term because natural chemicals emitted by ovipositing mosquitoes are attractive to future ovipositors. However, there must be an optimum duration over which water can be recycled, beyond which the build-up of algae etc. would actually be unattractive to ovipositing females. This was never mentioned.\n\nLack of internet access in this remote population is something that perhaps should have been identified as a major study limitation before the attempted use of web-based education services.\n\nAfter community participation inevitably wanes and the eggs are no longer collected and routinely removed/destroyed, the ovillantas will provide ideal breeding grounds for disease vectors. Part of the study involved community education on eliminating breeding grounds from participant houses but these ‘traps’ sound as though they themselves will become ideal larval sites in time. The only ways around this would be the removal of traps by trained personnel over a predetermined time period or the continual surveillance of the local communities’ upkeep of the traps – both expensive and unlikely to occur.\n\nCount data were analysed using t-tests. Count data are typically not normally distributed (typically they are Poisson or negative binomial distributed) and would require alternative analysis methods.\n\nMinor issues in the content include the assertion that ovitraps have previously been shown to significantly reduce adult mosquito numbers – to the best of my knowledge this is not the case. Also, the authors mention that the 500m dispersal ability of A. 
aegypti informed the distribution of the intervention households (no more than 50 metres apart) but I neither understand this logic nor agree with this very-much-upper-estimate in A. aegypti mosquito movement.\n\nConclusions: Overall, I’m left with questions over the usefulness of this new tool relative to the standard trap – depending on how the authors analysed their data they showed improved captures with both traps (and neither analysis was conducted rigorously). I also am left unsure of the additional benefits in terms of community participation or education/training because the study design did not allow for these to be tested independently.\n\nData: The data appear to be presented clearly and are in a format that allows for their future use by other scientists.", "responses": [ { "c_id": "2343", "date": "16 Dec 2016", "name": "Gerard Ulibarri", "role": "Author Response", "response": "Title and Abstract: The title has been modified to clarify that these are preliminary results, shared with the public due to the presence of Zika. More trials are planned when funding is granted. In the meantime, this is an approach which is working and should be used to help people protect themselves. One of the novelties here was that people from the community participated actively, using a novel approach made out of items from their backyard. A higher mosquito egg count was achieved at the site with ovillantas than at the site with only a standard ovitrap. The reviewer is right, though, that the ovitrap at the ovillanta site collected more than the ovitrap alone. We are investigating this phenomenon and believe it is due to the ‘skip oviposition’ nature of the Aedes mosquito. Article content: We also compared water-filled ovillantas against standard ovitraps, and at all times the ovillantas collected about 2.5 times more eggs than the standard ovitraps (data not shown). We did find an algae problem, but in very few ovillantas. 
The fact that the solution was filtered and recycled seemed to keep it clean for an extended period of time. When an ovillanta was polluted, the whole solution was replaced with fresh water plus attractant solution. Internet accessibility at the Health Center was identified prior to implementation of the study. What was a problem, and we did not see it at the beginning, was that not everybody has access to the internet in their homes, and the internet is extremely slow in the region. Thus, it was necessary for every person to travel to the Health Center for the ‘lessons’, an inconvenience to those far from the Center. Community participation: Absolutely, the same would happen if standard ovitraps were used. For this reason, the health personnel were trained to supervise the surveillance and cleanliness of the ovillantas on a regular basis while gathering the information on egg count (pellon paper exchange). If a participant failed to maintain the ovillanta, it was removed from their premises or the health personnel took over the maintenance. Very few instances of this happened. t-tests: Other methods will be used and compared when more data are available. For the time being, we considered the t-tests sufficient to show the significant difference provided by the use of ovillantas against standard ovitraps. Conclusions: This has been a very difficult, three-pronged study, and the paper reflects the first data produced; more will be produced in due time. The health personnel showed improvement after their internet-based course, the community accepted the ovillantas very well and participated (something new), and the cluster of ovillantas collected more eggs than the isolated standard ovitraps. Those are the facts. We are still studying how each one of the interventions influenced the outcome of the whole intervention. In this report, we have presented the results of the holistic approach. 
More questions will be answered and data will be presented in due time" } ] }, { "id": "15613", "date": "25 Aug 2016", "name": "Leon Eklund Hugo", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis investigation incorporated three components to improve the control of Aedes aegypti, mosquito vector of dengue, Chikungunya and Zika viruses, in a remote urban community in Guatemala. The three components were: use of web-based training in vector control for local health personnel, the development of mosquito ovitraps made locally out of used tires and evaluated in a trapping survey, and community engagement activities. The authors are commended on the performance of a mosquito trap made from a common waste item that normally contributes to mosquito production. The new traps, referred to as ovillantas, caught more eggs per site, per device, than standard ovitraps.\n\nAlthough the title refers to the control of Ae. aegypti, which is the theme throughout the manuscript, the studies presented are not specifically tailored to measuring the success of the components in the context of Ae. aegypti control (mosquito population reduction). For component one, web-based education of health workers, a knowledge, attitudes and practices (KAP) survey of the knowledge of health workers before they undertook training could have helped determine the changes due to the training module. 
For component two, data for the counts of mosquito larvae/pupae in different mosquito traps over time is presented within treatment communities. While this provides promising data on the effectiveness of ovillantas as trapping tools, there is no indication that the trapping had the effect of reducing the mosquito population. Ecological ovillantas may prove to be excellent tools for monitoring mosquito population numbers in resource-limited settings; however, more work is needed to establish their ability to reduce mosquito population sizes. The authors attempted to assess the effect of their intervention on mosquito control by examining dengue seroprevalence data; however, low case numbers were reported in official records and these are likely to be underestimates, limiting this approach. Adult mosquito trapping would be another avenue for monitoring mosquito population changes. For component three, several themes were identified among socioeconomic issues affecting mosquito control; however, future studies will be required to measure resulting improvements in community engagement.\nIn summary, a multi-component strategy towards the control of Ae. aegypti is presented that incorporates training of health workers in vector control, a new mosquito ovitrap constructed from recycled materials and community engagement activities. While an increase in trapping efficacy is demonstrated for the ovillantas, it is difficult to gauge the effectiveness of these components on Ae. aegypti control from this study alone. The title and theme may be better presented as \"development of strategies towards the control of Ae. aegypti\". I am hopeful that improved Ae. aegypti control can be demonstrated from future application of these interventions.", "responses": [ { "c_id": "2342", "date": "16 Dec 2016", "name": "Gerard Ulibarri", "role": "Author Response", "response": "Although we tend to agree with Mr. Hugo (reviewer), we clarify that this is a report on preliminary data.
Entomological studies are necessary in order to establish with certainty that the viral infection has been reduced. Adult traps do not work very well for Aedes spp. mosquitoes (the reason for the use of standard ovitraps to monitor the presence of Aedes females in the study area). Nevertheless, adult traps are planned for near-future studies, something very difficult to achieve in remote communities where commercial carbon dioxide is not easy to access. A new trap is being designed to overcome these difficulties, and results will be reported in due time. In summary: the title has been modified to express that these are ‘preliminary results’; more detailed data will be provided in due course" } ] }, { "id": "13527", "date": "30 Aug 2016", "name": "Anna M Stewart Ibarra", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nOverall: Good example of intersectoral engagement and innovation in vector surveillance. The ovillanta design, which the authors show to be a highly sensitive vector monitoring tool, is the most important element of this publication. The paper needs to be strengthened through greater review of the literature in the introduction and discussion, clarifying research methods, especially social science methods, and clarifying the data shown in the figures. The title is misleading (and some of the text), because the authors do not demonstrate a reduction in Ae. aegypti abundance.\n\nIntroduction\nParagraph 3. “The rapid spread of Ae.
aegypti mosquitoes… began around 2000.” I’m not sure that this is correct. I believe that Aedes reinvaded the Americas in the 1970s-80s, and dengue reemerged and spread through the region in the 90s and 2000s. Please check other references.\n\nParagraph 4. Strengthen the discussion of Aedes surveillance tools. I suggest discussing currently available Ae. aegypti surveillance techniques, the role of surveillance in disease control, and limitations of current techniques. Are you comparing stationary vector surveillance/monitoring tools, vector control tools, or integrated surveillance and vector control tools? Traditional ovitraps, human landing catches, and BG-Sentinel traps all have slightly different objectives. Be sure to clarify this. I would suggest refocusing on the role of ovitraps, and conducting a more comprehensive literature review of different ovitraps currently on the market, their limitations and strengths.\n\nI suggest incorporating the paragraph about filtration and recycling into the subsequent paragraph so it is easier for the reader to understand that the current study is based on prior well-developed studies conducted in Canada. This will help with the flow of the intro.\n\nHow did you determine the most attractive solution? Lab-based experiments? Please explain or cite prior studies.\n\nIn the introduction, please clarify the objectives of the study, the study design, and the study endpoints (e.g., increased education of health workers, number of Ae. aegypti eggs in XX communities).\n\nMethods I was a bit confused by the study design. Consider creating a diagram or map that depicts the study design. Why were standard ovitraps used in the intervention community?\n\nEgg counts are an indirect measure of adult female mosquito abundance (true entomological risk). Egg counts are affected by the number of alternative breeding sites in the environment.
Did you account for other factors that could affect egg counts during the study period, such as differences in household characteristics between the treatment and control sites, elimination of containers or other vector control interventions?\n\nOverall the social science research methods and results lack sufficient detail to determine the validity of the results. Three interviews is a small sample size. How do you know these are representative? Given the small number of informants, this information could also be included as part of the local context in the discussion section instead of presented as a result. How did you analyze the qualitative data from focus groups and interviews? What social science research methods were used to analyze the transcribed texts? Were the texts coded? Who did the coding? What software was used? Please include an appropriate description of the methods and references.\n\nPlease provide more detail regarding the entomological surveys. Did you empty and collect and count all larvae/pupae from all containers? How were cisterns managed? Did you conduct the entomological surveys in the control neighborhoods to ensure that there were no differences between treatment and control sites? What dates/season did you conduct the surveys? The container positivity will vary greatly by season.\n\nResults Can you summarize the results of the training evaluation survey in a table?\n\nStrongly recommend showing summary statistics (maybe from the national census if not available from study households?) to compare the characteristics of the treatment and control communities or households, to show that there were no major differences in housing conditions, demographics, access to piped water, etc. which could influence differences in the egg count data. In my experience, subtle differences in access to piped water can create completely different vector population dynamics even within the same city.\n\nFig 2, left panel.
This figure is supposed to show monthly egg counts by site (treatment and control), but the legend indicates trap types rather than site types. This is confusing since both standard ovitraps and ovillantas were used in the intervention site. Fig 2, right panel. This figure is supposed to show monthly egg counts by site and by type of trap, but there are 2 lines on the figure rather than 4. Please clarify.\n\nFig 3. Why did you fit a linear curve? There appears to be seasonality in the data, indicating a nonlinear seasonal dynamic.\n\nPlease produce publication quality graphics, with consistent formatting and axes (e.g., epi weeks versus months)\n\nThe investigators report that more eggs were collected from the intervention site with ovillantas.  This clearly shows that ovillantas are a more sensitive surveillance tool. I think it is difficult to say whether they are more effective at reducing the mosquito population, unless you have other entomological indicators to show that the mosquito population was suppressed.\n\nDiscussion The investigators indicate that the level of knowledge of community members about viral infections is low. Please show these results in the results section.\n\nUnder study limitations, the major limitation (if the authors are claiming that the ovillanta is an effective vector control tool) is the lack of data on adult Aedes aegypti abundance or other entomological indicators.\n\nThe investigators should broaden their discussion to discuss their findings in light of other studies published regarding community dengue perception, alternative ovitrap designs that have been published recently by R. Barrera and others, and the effectiveness of community participation in vector surveillance. The discussion and introduction could be strengthened by a deeper review of the literature.\n\nOther comments: Please include a map of your study site. The authors mention that they limited by lack of climate data. This information is available online. 
They could download the data and plot temperature and rainfall with Figure 3, to explain the seasonality.", "responses": [ { "c_id": "2341", "date": "16 Dec 2016", "name": "Gerard Ulibarri", "role": "Author Response", "response": "Title: The title has been changed to express that these are preliminary results on the approach. Introduction: Par. 3: This statement has been modified and a proper reference added. Par. 4: The intention of the paper was to showcase the positive preliminary results of the project. A comparative study of ovitraps on the market should be a full paper in itself, and a new publication is being prepared on that topic. The authors consider this not to be strictly necessary for this paper, because we are only trying to show that ovillantas are slightly better than standard ovitraps when used in clusters of oviposition traps. Attractant solution: We used a commercial formulation, out of Canada, recommended to attract Aedes mosquitoes. Thus, we cannot describe how the solution was developed in this paper; we only give a reference to where we obtained it from. Method: Oviposition counts on standard ovitraps are the standard way of measuring Aedes mosquito presence in a neighbourhood. Thus, having standard ovitraps at the site where the ovillantas were present seemed the right way to determine the number of female mosquitoes present in the area. Of course, the ovillantas could have served to determine the same, but they are not recognized as ‘the standard method’. Egg count: Traditional cleaning of the neighbourhoods is a standard method employed in Latin America to reduce mosquito breeding sites. Since this was already implemented in Sayaxche, we only verified that this was a regular exercise. And it was, throughout the study. Social science: We agree, these are only preliminary results; more in-depth studies are planned and will be published accordingly.
Recommendation: We considered that all the neighbourhoods used for the study around the city of Sayaxche have the same (or statistically similar) conditions of piped water, rainwater containers, garbage clean-up interventions, and Health Unit educational information about mosquito issues and interventions. National census data or data from the local Government is difficult to gather; we are still waiting for data from official sources." } ] }, { "id": "16060", "date": "02 Sep 2016", "name": "Johannes Sommerfield", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis article describes an intervention carried out in rural Guatemala consisting of comparing standard vector control against an integrated delivery of a) web-based health worker training, b) low-cost ovitraps (“ovillantas”) and c) community engagement. I agree with the other reviews and add the following:\n\nThe title is quite long. In the introduction make clearer that the modified ovitraps are called “ovillantas” (the photo shows that but not in the introduction). To what extent do the authors think the ovillantas can be replicated in other settings, e.g., urban? Why did the authors choose to work on Aedes aegypti in a rural area, although Aedes aegypti is mainly an urban vector?
Would ovillantas be applicable in cities, in Guatemala or elsewhere in Latin America?\n\nThis is a pertinent research project in times of arboviral disease epidemics; however, under methods, more clearly describe the study design and the indicators used to measure intervention effectiveness, and detail the methods, including the qualitative research methods. There is now growing evidence that community interventions can in fact have an impact on disease incidence and serological parameters (recent publication on Nicaragua), so there is a need for good research and strong evidence on the effect of community interventions. This project did not do that, as is clearly spelled out in the discussion section, but the methodology section could be strengthened by explaining the analytical (and not only descriptive) elements of the study design.\n\nThe results suggest that ovillantas are more effective than standard ovitraps by comparing both interventions in different neighborhoods (“study sites”). Clarify whether this is by cluster-randomization or not.\n\nFocus group results are presented to show (positive) impact on social participation, but they are not presented by intervention area, so the effect of the intervention on social participation cannot clearly be attributed.\n\nAs the article argues, the FGD results “can help guide future strategies geared to strengthen community participation” but they do not fully indicate an impact of the intervention. Also, while referring to the “community” and social participation the article refers to different ethnic groups and certain “communication difficulties”: it is unclear what this refers to (aren’t all of the communities of Mayan descent? What sub-groups or other ethnic groups is the article referring to?).", "responses": [ { "c_id": "2340", "date": "16 Dec 2016", "name": "Gerard Ulibarri", "role": "Author Response", "response": "Comments from the authors follow the reviewer’s paragraphs. The title has been modified.
Sayaxche is a small city (urban) in an isolated area of Guatemala (rural). The whole study was carried out within the city limits. Sayaxche was chosen because of the high number of dengue cases in previous years. Yes, there are plans to implement the ovillantas in other cities, in Latin America, Asia, the South Pacific, etc. Wherever Aedes mosquitoes are a problem, some people are already doing so. This is a publication explaining the preliminary results from a three-pronged study; a new publication will deal with the technical parts of the study, when data are more abundant. We know that this publication might have been premature when dealing with pure scientific data. We trust that publishing these preliminary results early could help more people, given the urgency. We believe that together we could learn more about the implementation of this simple, affordable way to reduce Aedes populations, and in turn provide tools aimed at protecting vulnerable people. Given the size of the city of Sayaxche, the ovillanta placement was ‘randomized’ in a manner that covered most of the city, although a slight preference was given to sections of the city most affected by dengue, based on previous years’ local data. The aim of the study was to study social participation at the same time as the effect of the ‘new’ ovillantas on the number of mosquitoes produced in the area. The social participation was clearly present; the effect of the ovillantas was clearly demonstrated. Now, we need to study the long-term effect of the combined methodology. In this study in particular, we encountered a cultural ‘division’ among the different members of the community. At times, we required a translator to communicate with the participants of the study, given that their mother tongue was Mayan, not Spanish. This difficulty might or might not be important, but needs to be considered while implementing these types of strategies in other settings.
Cultural values in Mayan culture are different from those of most Western societies; we needed to adapt to this challenge, and the outcome was positive." } ] } ]
https://f1000research.com/articles/5-598
https://f1000research.com/articles/4-1234/v1
09 Nov 15
{ "type": "Research Article", "title": "An adaptable toolkit to assess commercial fishery costs and benefits related to marine protected area network design", "authors": [ "Rémi M. Daigle", "Cristián J. Monaco", "Ashley K. Elgin" ], "abstract": "Around the world, governments are establishing Marine Protected Area (MPA) networks to meet their commitments to the United Nations Convention on Biological Diversity. MPAs are often used in an effort to conserve biodiversity and manage fisheries stocks. However, their efficacy and effect on fisheries yields remain unclear. We conducted a case-study on the economic impact of different MPA network design strategies on the Atlantic cod (Gadus morhua) fisheries in Canada. The open-source R toolbox that we developed to analyze this case study can be customized to conduct similar analyses for other systems. We used a spatially-explicit individual-based model of population growth and dispersal coupled with a fisheries management and harvesting component. We found that MPA networks that both protect the target species’ habitat (particularly the spawning grounds), and were spatially optimized to improve population connectivity had the highest net present value (i.e., were most profitable for the fishing industry). These higher profits were achieved primarily by reducing the distance travelled for fishing and reducing the probability of a moratorium event.
These findings add to a growing body of knowledge demonstrating the importance of incorporating population connectivity in the MPA planning process, as well as the ability of this R toolbox to explore ecological and economic consequences of alternative MPA network designs.", "keywords": [ "Cost-benefit analysis", "Atlantic Cod", "individual based models", "Conservation", "Fisheries management" ], "content": "Introduction\n\nMarine Protected Areas (MPAs) have risen to be among the most popular measures to conserve biodiversity and manage populations subjected to strong fishing pressure. MPAs offer ‘safe zones’ for individuals to breed and grow, thus potentially facilitating population persistence. Biological impacts of MPAs also have been shown to extend beyond their boundaries via “spillover effects”, potentially benefitting adjacent fisheries and thus contributing to existing management strategies (Bellier et al., 2013; Kellner et al., 2007; Sanchirico et al., 2006). Concomitant with the global establishment of MPAs, there has been a growing field of science seeking to objectively quantify the contribution of MPAs to the conservation of commercially and culturally important species. Importantly, studies are highlighting the need for improvements on design (Gaines et al., 2010). Careful consideration should be paid to the placement of MPAs relative to the natural history of focal species, particularly if connectivity is rooted in the objectives and/or design. For example, studies based on genetic analyses have revealed that invertebrate larvae disperse shorter distances than fish species (~50–100 vs. 100–200 km, respectively) (Kinlan & Gaines, 2003; Shanks et al., 2003), suggesting that optimal size and spacing between MPAs should be adjusted according to the target species.\n\nDispersal processes are particularly relevant to MPA design because many marine species under protection by MPAs disperse during different life stages. 
The dynamics and persistence of populations are influenced by connectivity, which in turn is regulated by dispersal during the larval, juvenile or adult phases (Cowen & Sponaugle, 2009; Grüss et al., 2011; Levin, 2006). Environmental variables (e.g. temperature, currents) may also influence these processes. For instance, the temperature dependence of the planktonic larval duration (PLD) may add a latitude-dependent element to dispersion, which is rarely incorporated into MPA design protocols because of a scarcity of dispersal data (Laurel & Bradbury, 2006).\n\nA promising effort to account for environmental complexities has been the development of MPA networks; that is, a series of independent protected areas interconnected by the movement of organisms between them (Avasthi, 2005; Gaines et al., 2010). For these networks to succeed, managers are required to make decisions about the optimal size, location, and number of MPAs to implement (Halpern, 2003). While poorly chosen locations for reserves may have negative effects on populations’ productivity (Costello & Polasky, 2008; Crowder et al., 2000), well-designed networks, with highly connected reserves, may allow for increased reproduction and survival, although much of the evidence is theoretical or indirect.\n\nCurrently, the theoretical importance of connectivity is well established in the ecological literature (e.g., Gaines et al., 2010; Palumbi, 2004). Although one of the four main design principles for MPAs is connectivity (UNEP-CBD, 2011), many networks of MPAs have not adequately considered this factor in their designs (Allison et al., 2003; Hastings & Botsford, 2003; Lester et al., 2009). Globally, only 18 to 49% of MPAs are regarded as part of a connected network depending on which definition of “connected network” is used (Wood et al., 2008). These definitions range from having at least one other MPA greater than 12.5 km2 within 10–20 km to at least one other MPA greater than 3.14 km2 within 20–150 km. 
There is also a bias towards large reserves, as the ten largest MPAs (most of which were created after 2005) comprise 53% of the protected area (Devillers et al., 2015). As of 2011, California is the only jurisdiction that uses both size and spacing in MPA design (Moffitt et al., 2011). Additionally, there is a spatial bias in protection due to jurisdictional, political and logistical concerns. The pelagic environment, which constitutes 99% of biosphere volume and supplies >80% of human fish food supply, may be <0.1% protected (Game et al., 2009) while areas within the exclusive economic zones are 1.5% protected (Wood et al., 2008). However, these estimates are now outdated and the area protected is growing every year. While opinions vary regarding the total proportion of area that should be protected (10–30%), it is clear that more MPAs are needed to meet that goal (Game et al., 2009; Wood et al., 2008). Adding new MPAs is an opportunity to improve the connectivity of existing networks.\n\nMPA research has also revealed that, in addition to the biological and ecological benefits, the success of MPAs needs to be quantified in terms of economic factors. For instance, evidence shows that substantial investment (i.e., USD 5–19 billion; Balmford et al., 2004) would greatly increase the sustainability in the global marine fish catch. In this regard, studies that address the feedbacks between empirical fishery data and human behavior are expected to improve our ability to forecast short-term fisheries (White et al., 2013). One economic tool that is used to assess fisheries is cost-benefit analysis, which quantifies the balance between costs and benefits of a management action (e.g., Zimmerman et al., 2015). A major challenge for cost-benefit analysis for MPAs is that the costs are incurred immediately, yet the benefits may not be fully realized until the distant future. 
Hence, it is necessary to conduct cost-benefit analysis over a multi-year time horizon and apply a social discount factor to account for the fact that costs and benefits are valued less in the future than in the present (Arrow et al., 2013).\n\nDespite the need for more comprehensive analytical models that encompass both biological and economic criteria, few have been developed thus far (White et al., 2013). One such program is the freeware Marxan, which has been mainly used for conservation planning purposes (Ball et al., 2009). There are also add-on programs, such as MultCSync (Moffett et al., 2005) and NatureServe Vista (http://www.natureserve.org/vista) that can be used to evaluate conservation goals in light of social and economic criteria. However, these tools do not incorporate population connectivity into their analyses by default. Here we introduce a flexible and user-friendly toolkit developed in R as an open-source package for stakeholders to explore the effects of varying different biological and economical parameters on the performance of an MPA network. It addresses specifically the interaction between MPA network design and population connectivity. To illustrate how this toolkit can be applied to different scenarios, we address two specific objectives: (1) examine if having a network of connected MPAs provides more resilience/population stability compared to having a single small, isolated MPA; and (2) determine if there is an economic benefit associated with connected networks of MPAs compared to other potential MPA designs.\n\nWe used the Canadian Atlantic cod (Gadus morhua) fishery for our case study because cod is a species with high commercial and cultural value. Cod populations found off the continental shelf of Newfoundland were once one of the world’s richest fishing resources, and shaped Newfoundland society for centuries (Hamilton & Butler, 2001). 
Atlantic cod stocks suffered greatly between 1960 and 1990, mostly due to poorly managed fishing pressure. While northwestern cod annual landings were <300 kt before the 1960s, the introduction of advanced fishing gear (e.g. factory trawlers) boosted catches to >800 kt by 1968. The high fishing pressure dramatically reduced standing stocks, and by 1977 fish landings had dropped significantly. Populations seemed to recover in the 1980s, along with increased fishing efforts, which corresponded with a twofold decline in survival probability (Hutchings & Myers, 1994). As a result, the stocks off Newfoundland and Labrador went commercially extinct (i.e., were no longer economically viable to harvest) in 1992 (Hutchings & Myers, 1994). The fishery has since shifted to one focused on lobster ($19M), snow crab ($121M), and northern shrimp ($179M) and the total value of the fisheries harvest is now worth more than it was before the moratorium (Schrank & Roy, 2013). While some cod stocks have started showing tenuous signs of recovery, they are nowhere near pre-moratorium levels (DFO, 2014); however, protecting a few populations from targeted or bycatch fisheries mortality may improve the probability of recovery. For this reason, we think that exploring the potential benefit of MPA networks for Atlantic Cod would be an informative exercise with implications for management.\n\n\nMethods\n\nWe developed an individual-based population model for Atlantic Cod (Gadus morhua), which includes fisheries harvesting and alternative management strategies. The model is available as an open-source R toolkit (Daigle et al., 2015a), which can be downloaded and adapted to the researcher’s specific questions. We describe the general structure of the model below without going into exhaustive detail. Please refer to the toolkit’s ‘read me’ section for more information on the model mechanics and specific parameter information. 
The model output presented below is available on figshare (Daigle et al., 2015b).\n\nThe model is organized hierarchically into modules and sub-modules (Figure 1) for users to manipulate according to their own requirements. Here we used this model to evaluate four management strategies (Figure 2) in terms of socially discounted net financial benefit.\n\nThis model evaluates contrasting management scenarios and compares them based on socially discounted net benefits.\n\nThe “Status Quo” scenario (purple) has all existing MPAs ≥ 400 km2. The MPAs in the “Maximum Distance” scenario (green) have been placed to maximize the distance between them. The MPAs in the “Fixed Distance” scenario (red) have been placed to optimize the distance between them (~75 km). The cod’s breeding locations in the “Targeted” scenarios (blue) have been protected by default and optimal distance MPAs have been added exclusively in habitat suitable for cod. Scenarios from replicate 1 are used as an example; other replicates are available in the appendix (Figure A2–Figure A100).\n\nThe first module establishes a ‘Spatial Base Layer’ by defining a basic grid and creating specific protection scenarios for the resource. The model then runs three modules in sequence: ‘Growth and Reproduction’, ‘Dispersal’, and ‘Harvesting’, all of which determine the population dynamics of the resource (see section Population dynamics). These modules are contained in two nested recursive loops: an outer loop that iterates over the management strategy scenarios (see section Management strategies scenarios), and an inner loop that defines the time dimension. Here we ran the model between the years 2000 and 2050, with 1 yr time steps.
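The nested loop structure described above (an outer loop over management scenarios, an inner loop over yearly time steps, with the three population-dynamics modules run in sequence) can be sketched as follows. This is an illustrative Python skeleton only: the actual toolkit is written in R, and the module functions and scenario fields here are simplified placeholders, not the authors' code.

```python
# Illustrative skeleton of the toolkit's nested loops (the real toolkit is in R).
# All module bodies below are hypothetical stand-ins for the spatially explicit
# logic described in the Methods.

def growth_and_reproduction(biomass):
    return biomass * 1.1                      # placeholder: 10% net growth

def dispersal(biomass):
    return biomass                            # placeholder: no net movement

def harvesting(biomass, protected_fraction):
    # Only the unprotected fraction of the stock is exposed to fishing.
    catch = 0.1 * biomass * (1 - protected_fraction)
    return biomass - catch, catch

def run_model(scenarios, start_year=2000, end_year=2050):
    results = {}
    for scen in scenarios:                    # outer loop: management scenarios
        biomass, history = scen["initial_biomass"], []
        for year in range(start_year, end_year + 1):  # inner loop: 1-yr steps
            biomass = growth_and_reproduction(biomass)
            biomass = dispersal(biomass)
            biomass, catch = harvesting(biomass, scen["protected_fraction"])
            history.append((year, catch))
        results[scen["name"]] = history
    return results
```

A real run would replace the placeholder modules with the growth, dispersal, and harvesting mechanics described in the Methods, and the ‘Cost of Evaluation’ module would then consume the per-year catch history.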
Finally, the model runs through a ‘Cost of Evaluation’ module, which factors in costs associated with complying with MPA spatial restrictions, the benefits based on catch values, and social discount rates (Figure 1).\n\nThe length and weight of fish are estimated from the von Bertalanffy growth model in Knickle & Rose (2013). It is known that sexual maturity occurs between 2–4 years for coastal cod and 6–9 years for some oceanic stocks (Otterå, 2004); our model approximates this by determining sexual maturity based on a sigmoid logistic curve. The modelled fish begin maturing at 2 y, 50% are mature at 4 y and all are mature at 6 y. Egg production is determined by the weight of spawning females (0.5 million eggs per kg of female). Larval dispersal was approximated with a random walk of 2 cm s-1 over 90 d, which is equivalent to the mean current velocity (Brander & Hurley, 1992). Adult dispersal was also approximated using a random walk, which was calibrated with tagging data from Lawson & Rose (2000). Our model has four sources of mortality: larval, recruitment, adult, and fisheries. In the model, larval and adult mortality vary randomly through time and space, while recruitment, carrying capacity, and fisheries mortality are linked to adult biomass. Larval and adult mortality are estimated yearly from a beta distribution (α=1000 and β=1.2, which approximates a mean larval mortality of 99.88% with a range of 98.98–99.99%; Mountain et al., 2008) and a normal distribution (μ = 0.5938 and SD = 0.0517; Swain & Chouinard, 2008), respectively. Recruitment mortality is estimated using a Beverton-Holt model and carrying capacity is assumed to be 0.43 (±0.38 SD) t km-2, which represents an average value for Canadian cod stocks (Myers et al., 2001). Carrying capacity is also enforced for adult fish, and areas whose biomass exceeds the carrying capacity are subject to increased mortality.
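The maturity schedule and the stochastic mortality draws described above can be sketched as follows (illustrative Python, not the authors' R code). The slope `k` of the logistic curve is our own choice, made to approximately match the stated anchor points (~0 mature at 2 y, 50% at 4 y, ~all at 6 y); it is not a parameter taken from the toolkit.

```python
import math
import random

def maturity_prob(age, a50=4.0, k=2.0):
    """Sigmoid logistic maturity curve: ~2% mature at 2 y, 50% at 4 y,
    ~98% at 6 y. k = 2.0 is an assumed slope fitted to those anchors."""
    return 1.0 / (1.0 + math.exp(-k * (age - a50)))

def draw_larval_mortality(rng):
    # Beta(alpha=1000, beta=1.2): mean ~0.9988, i.e. ~99.88% larval mortality,
    # as quoted from Mountain et al. (2008)
    return rng.betavariate(1000, 1.2)

def draw_adult_mortality(rng):
    # Normal(mu=0.5938, sigma=0.0517), per Swain & Chouinard (2008)
    return rng.normalvariate(0.5938, 0.0517)
```

Each year of the simulation would draw one larval and one adult mortality value per cell and apply the maturity curve to the age structure before computing egg production.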
Fisheries mortality is determined by estimating cod biomass, sampling the fish population in under 0.1% of the entire EEZ (Exclusive Economic Zone), and applying an FMSY (fishing mortality that produces the maximum sustainable yield) of 0.28 (Mountain et al., 2008). Following the precautionary principle, the model sets the target quota to ⅔ of FMSY. Fishermen are assumed to have near-perfect knowledge of the optimal fishing locations, and will travel the smallest distance to catch the most fish. We assume that fish under 38 cm are not targeted or caught by fishermen (Feekings et al., 2013). Fish were valued at a landed price of CAD$1.24 kg⁻¹ (DFO, 2015). We used operating costs from the Mixed Fleet Fishery (DFO, 2015) because cod-specific operating costs are difficult to obtain given that cod is caught as bycatch but not targeted (Schrank & Roy, 2013).\n\nThe “Status Quo” scenario has all existing MPAs in Eastern Canada that are at least as large as the model cell size (cells are 400 km²). We fixed the profitability ratio for the Status Quo scenario to ~1.6 (landed value/operating cost), which represents the current value for the Mixed Fleet Fishery (DFO, 2015). To estimate operating costs for the other scenarios, we identified the distance-dependent operating costs (fuel and labour), determined the average distance traveled under the Status Quo scenario, and then calculated a distance correction factor based on the difference in distance traveled for a scenario relative to the Status Quo.\n\nThe scenarios with MPA protection had 10% of the EEZ closed to fishing, but the criteria for MPA placement differed for each. The MPAs in the “Maximum Distance” scenario have been placed to maximize the distance between them, representing a worst-case scenario in terms of population connectivity. The MPAs in the “Fixed Distance” scenario have been placed to optimize the distance between them (~75 km) relative to the mean adult dispersal distance (Lawson & Rose, 2000).
In the “Targeted” scenario, the cod’s breeding locations have been protected by default and optimally spaced MPAs have been added exclusively in habitat suitable for cod. The size distribution of the newly generated MPAs was based on that of the coastal and marine protected areas in the World Database on Protected Areas (UNEP-WCMC, 2015).\n\nWe calculated net benefits for years 2001–2051 of the model simulation. A social discount rate (SDR) is used to translate future values into present values. The outcome of a cost-benefit analysis can be influenced by the discount rate (Lupi et al., 2003), so we used a range of values (1.5%, 3.0%, and 6.0%) to test this sensitivity. The discount factor βt was calculated from the SDR:\n\nβt = 1 / (1 + SDR)^t\n\nNet present value (NPV) was calculated as the sum of the present values of net revenue (total revenue from catch, Rt, minus total operational costs, Ct) for each year in the simulation:\n\nNPVscenario = ∑t=1..50 (Rt − Ct) × βt\n\nWe used a repeated measures ANOVA to examine the effects of scenario and time on the cumulative present catch value, with repeated measures on replicate. Replicate is nested within scenario and time, since both are considered non-independent. Time is non-independent because we are making repeated measures on the same model run, and scenario is non-independent because scenarios within a replicate are created using the same selection of MPA sizes.\n\n\nResults\n\nCod biomass did not recover to historically high abundances in any of the four management scenarios. Cod biomass ranged from an average of 50 to 74 kt depending on the management scenario (Table 1, Figure 3). While there were no dramatic increases in biomass over time, there was a slight increase in mean biomass for all scenarios except the Status Quo, particularly in the Targeted scenario. Similarly, the pattern for the harvest followed that of biomass, with a few differences.
The Targeted scenario has the second highest mean catch (Table 1, Figure 4) despite having the highest biomass. Conversely, the Fixed Distance scenario has a smaller catch than the Status Quo scenario despite having higher biomass. The probability of a moratorium (triggered when total biomass is below 10 kt) being enforced in a given year actually increases for the Maximum Distance (13.88%) and Fixed Distance (7.25%) scenarios relative to the Status Quo (2.29%). The Targeted scenario (2.25%) had the lowest probability of a moratorium overall (Table 1).\n\nThe solid lines represent the mean of all replicates (n=100) and the shaded regions show the standard error around each mean.\n\nThe mean distance from shore of each fish caught was actually highest in the Status Quo scenario (Table 1, Figure 5). There is a 5 to 10 yr period of adjustment to the management scenario regulations, but the distances in each scenario achieve a stable plateau. On average, fish were caught 30.01 (±0.14) km from shore in the Status Quo scenario, compared with only 27.17 (±0.12) km in the Targeted scenario.\n\nIn any given year, the differences between management scenarios in terms of biomass, catch, or mean distance from shore are not very large (Table 1, Figure 3–Figure 5). However, when compounded over multiple decades, these differences have a meaningful effect on economic value (Figure 6). For the first ~20 y, all scenarios have similar cumulative present catch values (Figure 6); however, over the remaining 30 y, the Targeted and Maximum Distance scenarios diverge significantly from the Status Quo and Fixed Distance scenarios.
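This compounding follows directly from the discounting described in the Methods. A minimal sketch, with invented constant revenue and cost series, shows how the discount factor βt = 1/(1 + SDR)^t shrinks the present value of later net revenues:

```python
# Discount factor and NPV from the Methods:
#   beta_t = 1 / (1 + SDR)^t,  NPV = sum over t of (R_t - C_t) * beta_t
# The revenue and cost series below are invented for illustration.

def discount_factor(sdr, t):
    return 1.0 / (1.0 + sdr) ** t

def npv(revenues, costs, sdr):
    return sum((r - c) * discount_factor(sdr, t)
               for t, (r, c) in enumerate(zip(revenues, costs), start=1))

revenues = [10.0] * 50   # constant yearly revenue over 50 time steps
costs = [4.0] * 50       # constant yearly operating cost
npv_low = npv(revenues, costs, 0.015)   # SDR = 1.5%
npv_high = npv(revenues, costs, 0.06)   # SDR = 6.0%
```

With no discounting this net revenue stream would total 300; at a 1.5% SDR it is worth about 210 in present terms, and at 6.0% only about 95, which is why differences between scenarios are compressed at higher SDR values.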
For all values of SDR, time, scenario, and their interaction had significant effects on NPV (Table 2). With an SDR of 0.015, 0.03, or 0.06, the mean total value of the Status Quo scenario over 51 years is 138, 98, or 58 million CAD, respectively, while that of the Targeted scenario is 198, 138, or 75 million CAD. Additionally, the Targeted scenario was the least likely to produce a negative NPV.\n\nThe figures on the left show cumulative net present value, which represents the progression of net present values for each of the years in the time horizon. The solid lines display the mean of all replicates (n=100) and the shaded regions represent standard error. The figures on the right show the final net present value for the full 50-year time horizon. Here, the replicates are represented using a box plot. The median line is surrounded by the 25th and 75th percentiles, the whiskers show the limits of 1.5 times the inter-quartile range, and points beyond that range are considered outliers.\n\n\nDiscussion\n\nThese findings suggest that well-designed MPA networks can provide net benefits to the fishing industry, while poorly designed networks can have deleterious effects, such as an increased probability of moratoria. This dichotomy highlights the importance of objectively measuring outcomes when evaluating management options. Our R toolbox provides an ideal platform for such studies since it is open source and adaptable. The user can simply input a different set of biological parameters, study area, or fisheries parameters to evaluate similar management scenarios for a different species or area. Given that spatially explicit population models for multiple species are critical to properly evaluate MPA network design (Moffitt et al., 2011), we have designed our toolbox so that the biological parameters are the easiest to modify.
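That parameter-first design can be illustrated schematically. The sketch below is hypothetical Python, not the toolbox's actual R interface; the parameter names and the haddock values are invented purely to show the pattern of overriding a default parameter set:

```python
# Hypothetical parameter-first design: biological parameters live in one
# flat structure, and users override only what differs for a new species.
# Names and values are invented; the real R toolbox is organized differently.

DEFAULT_PARAMS = {
    "species": "Gadus morhua",
    "eggs_per_kg_female": 0.5e6,    # from the cod parameterization above
    "age_at_50pct_maturity": 4.0,
    "adult_mortality_mean": 0.5938,
    "fmsy": 0.28,
}

def make_params(**overrides):
    """Return a new parameter set with selective overrides, rejecting typos."""
    unknown = set(overrides) - set(DEFAULT_PARAMS)
    if unknown:
        raise KeyError(f"unknown parameters: {sorted(unknown)}")
    return {**DEFAULT_PARAMS, **overrides}

# A different species needs only the parameters that actually change:
haddock = make_params(species="Melanogrammus aeglefinus", fmsy=0.35)
```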
At the next level of complexity, the user can create custom functions that generate management scenarios, or input predefined management scenarios by supplying the necessary shapefiles containing geospatial vector data. Finally, the user can edit whole modules or sub-modules to explore specific questions such as the effect of non-random dispersal or interactions between species.\n\nIn terms of network design principles, adding MPAs consistently increased the biomass, but not all MPA networks were equal in terms of net present value. Well-connected MPA network designs are thought to reduce population demographic variability by diversifying the potential source populations (Andrello et al., 2015; Costello & Polasky, 2008; Cowen & Sponaugle, 2009). Correspondingly, our results indicate that abiding by principles of connectivity (Fixed Distance scenario) and protecting productive habitats (Targeted scenario) reduces the variability in the NPV by ensuring well-connected populations. Conversely, the Maximum Distance scenario, where the MPA network had lower connectivity, had the highest variability in total present value of all scenarios with added MPAs, and the highest probability of a moratorium. While it is not surprising that the Maximum Distance scenario provides the least benefits, it is surprising that it nearly doubles the probability of a moratorium relative to the Status Quo. We suspect that this is a side-effect of how a moratorium is triggered in the model: if the total fishable (i.e., outside MPAs) biomass is < 10 kt, a moratorium is triggered. In the scenarios with enhanced MPA networks, much of the biomass is concentrated within the MPAs, making it easier for the biomass in unprotected areas to reach the moratorium threshold. However, in the Targeted scenario, the benefits of population connectivity and increased protection of habitats vital to the cod’s life cycle (e.g.
breeding grounds) more than compensate for this effect, such that the probability of a moratorium is actually lower than in the Status Quo. By combining the positive effects of population connectivity, protection of vital habitats, and near-shore source habitats, the Targeted scenario provides the greatest net benefit to the fishing industry. In fact, all other scenarios have a higher probability of a negative total present value because they are missing at least one of these positive effects.\n\nIntuitively, we expected the mean fishing distance from shore to increase in scenarios with MPAs, since we assumed fishermen would be displaced by the presence of MPAs. Our findings contradicted these expectations. In all scenarios with MPAs, the mean fishing distance from shore decreases relative to the Status Quo because MPAs in our model promote the presence of near-shore source populations that cannot be depleted by fishing. In these scenarios, fishermen spend less on fuel and labour since they do not travel as far. It should be noted that our model currently assumes that fishermen can have a home port anywhere on the shoreline, which minimizes displacement in our model. In reality, there will be much greater social, economic, and logistical constraints on MPA placement. However, our model has the ability to assess the economic and ecological consequences of any arrangement of MPAs that managers determine to be feasible.\n\nThe efficacy of any MPA network is greatly influenced by the level of compliance among commercial fishermen in that region (McClanahan, 1999). For this reason, we conducted this cost-benefit analysis from the operational point of view (i.e., both the costs and benefits are directly connected to the commercial fishery). As a result, the model outcome is most relevant to the fishermen and can help to make a case for compliance.
Mascia (2003) recommended that one measure of performance for an MPA should be how well it enhances the livelihoods of fishermen. Therefore, we describe the net benefits as they relate to the people most immediately affected by the new MPA network.\n\nA complete evaluation of all costs and benefits associated with MPA networks is beyond the scope of this study. The model does not include other benefits associated with expanding the MPA network; indeed, benefits are not limited to increasing fisheries’ profits via increased yields. Other benefits include job creation related to recreational fisheries or tourism, and the maintenance of other ecosystem services, the value of which can exceed that of the commercial fishery itself (Agardi, 1997; Armstrong, 2007; Ghermandi & Nunes, 2013; Lester et al., 2009; Pendleton et al., 2012). In early iterations of the model, we included the costs of establishing and maintaining the MPAs, but the NPV estimates were consistently large and negative. This is to be expected since we were not considering the full suite of benefits listed above. A full treatment of costs and benefits would provide important information for governments seeking to justify the large expense associated with MPAs, and would therefore be a rich area for future development.\n\nOne trend made clear in our results is that the benefits are realized on long time scales. Fishermen must follow the restrictions for up to 20 years before the net benefits of the Targeted scenario clearly surpass those of the Status Quo. This requires patience and a commitment to preserving a resource for future generations, particularly in the face of inevitable uncertainty in how the cod population will respond along the way. The standard market discount rate tends to be impatient and may not be appropriate for valuing future ecosystem services with high cultural value.
If the environmental outcome will affect multiple people and generations, a higher social value is placed on the service (Hardisty & Weber, 2009). It is therefore justified to apply a lower social discount rate when evaluating future costs and benefits. The social discount rate can have a strong influence on the recommendations from a cost-benefit analysis: with increasing discount rates, there is less incentive to preserve fish stocks to generate future profits (Sanchirico et al., 2006). In our study, the net benefits of the enhanced MPA network scenarios are diminished when SDR > 3%. Similarly, Zimmermann et al. (2015) found that the difference in NPV between their fishery models was indistinguishable at discount rates > 5%.\n\nSound reserve design has the potential to directly enhance fisheries and preserve sensitive species. Additionally, evidence shows that paying attention to network design can make populations more resilient to both natural and anthropogenic disturbances. For example, networks can be tailored to better cope with the threats imposed by ongoing climate change (McLeod et al., 2009). Specifically, a system’s resilience could be enhanced by considering the MPAs’ size, shape, risk spreading (i.e., protection of a variety of habitat types), protection of ecologically critical areas, degree of connectivity, maintenance of important functional groups, and external sources of stress such as pollution (Green et al., 2015; McLeod et al., 2009). These aspects can be addressed in the model presented here via some form of the Targeted scenario, which again lends support to this management strategy.\n\nOur modeling results suggest that the capacity for Atlantic cod populations to recover to densities comparable to those observed before the 1960s is limited in every management scenario considered.
Similarly, one study reported that only one of 12 Northwest Atlantic cod stocks analyzed showed ‘substantial recovery’, despite moratoria and fishing quota restrictions established after the mid-1990s (Shelton et al., 2006). Another study, by Swain & Chouinard (2008), predicted extinction of cod within the next 20 years (with fishing) or 40 years (without fishing) given the current levels of productivity. As argued by Shelton et al. (2006), the low productivity documented in cod populations after the collapse, as indicated by higher natural mortalities and reduced growth rates (Swain et al., 2003), might hinder recovery, along with the negative impacts of continuing fishery practices. Although our results are consistent with this previous work, our model makes multiple simplifying assumptions about biotic (e.g. ecological interactions) and abiotic (e.g. oceanographic processes) variables that can drive population dynamics and potentially contribute to ecosystem regime shifts. Additionally, our model does not predict any extinctions using the current parameters, possibly because we did not incorporate an Allee effect at low population densities (Stephens & Sutherland, 1999).\n\nWhile our model currently provides interesting results regarding MPA design, there are some key aspects to consider when interpreting these findings. The model only considers the interaction between the fishing industry and a single commercial species. This removes possibly important interactions with cod predators (Cook et al., 2015), competitors (Minto & Worm, 2012), or prey (Worm & Myers, 2003). Furthermore, it eliminates the possibility of a mixed-species fishery or of switching to alternative species (e.g., crab, shrimp, and lobster). We believe that by choosing a relatively high natural mortality rate that reflects the new ecological reality (Swain & Chouinard, 2008), we have incorporated the bulk of these ecosystem effects.
We have considered both larval and adult dispersal to be entirely random in the model, which neglects potentially important oceanographic, migratory, and homing behaviours (Green et al., 2014; Lawson & Rose, 2000). Similarly, carrying capacity is randomly generated from a realistic distribution, which ignores the true spatial subtleties of actual cod habitat. However, much of the information needed to incorporate such spatially explicit behaviours and habitats is currently unavailable.\n\nIn conclusion, the fishing industry stands to benefit financially from well-designed MPA networks (e.g., the Targeted scenario) through increased yields, lower operating costs for commercial fishermen, and a lower probability of a moratorium. Under the scenarios with new MPAs, some traditional fishing locations were closed, thereby displacing fishing effort. However, the spill-over effects of the well-designed MPA networks more than compensated for any displacement by providing near-shore source populations, ultimately decreasing the mean fishing distance from shore. Further, targeted protection of spawning grounds can produce long-term financial benefits that exceed those associated with the other simulated MPA network scenarios. These findings demonstrate the power and flexibility of our spatially explicit toolbox in assessing the costs and benefits of different MPA network designs.\n\n\nData and Software availability\n\nThe tabulated output for the model is available at http://dx.doi.org/10.6084/m9.figshare.1585146.\n\nThe R toolkit can be accessed and downloaded at https://github.com/remi-daigle/bioeconomic_MPA\n\nhttp://dx.doi.org/10.5281/zenodo.32946\n\nPublished under the MIT license.", "appendix": "Author contributions\n\n\n\nRD designed and built the software toolbox. CM and AB reviewed the literature to parameterize the toolbox. RD conceived the initial concept, but all authors contributed significantly to the development of the study.
All authors were involved in the revision of the draft manuscript, and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis manuscript is the product of a collaboration formed during the Ecological Dissertations in the Aquatic Sciences (Eco-DAS) XI Symposium in October 2014 at the University of Hawai`i in Honolulu. Eco-DAS XI funding was provided by the National Science Foundation (NSF Award #1356792). Eco-DAS XI was sponsored by the Center for Microbial Oceanography: Research and Education (C-MORE), the University of Hawai`i School of Ocean and Earth Science and Technology (SOEST), and the UH Department of Oceanography.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would especially like to acknowledge the organizers of Eco-DAS XI, Paul Kemp and Lydia Baker, for their great work and thoughtful comments on this manuscript. We also thank Ryan Stanley for providing helpful feedback that greatly improved the manuscript.\n\n\nSupplementary material\n\nAppendix Figures A1–99.\n\nMap of the eastern Canadian EEZ (Exclusive Economic Zone) and examples of different scenarios for the planning of Marine Protected Areas (MPAs). The “Status Quo” scenario (purple) has all existing MPAs ≥ 400 km². The MPAs in the “Maximum Distance” scenario (green) have been placed to maximize the distance between them. The MPAs in the “Fixed Distance” scenario (red) have been placed to optimize the distance between them (~75 km). The cod’s breeding locations in the “Targeted” scenarios (blue) have been protected by default and optimally spaced MPAs have been added exclusively in habitat suitable for cod. Scenarios are from replicates 2–100; replicate 1 is available in Figure 2.\n\nClick here to access the data.\n\n\nReferences\n\nAgardi TS: Marine protected areas and ocean conservation.
Academic Press, 1997. Reference Source\n\nAllison GW, Gaines SD, Lubchenco J, et al.: Ensuring persistence of marine reserves: Catastrophes require adopting an insurance factor. Ecol Appl. 2003; 13: 8–24. Publisher Full Text\n\nAndrello M, Jacobi MN, Manel S, et al.: Extending networks of protected areas to optimize connectivity and population growth rate. Ecography. 2015; 38: 273–282. Publisher Full Text\n\nArmstrong CW: A note on the ecological-economic modelling of marine reserves in fisheries. Ecol Econ. 2007; 62(2): 242–250. Publisher Full Text\n\nArrow K, Cropper M, Gollier C, et al.: Environmental economics. Determining benefits and costs for future generations. Science. 2013; 341(6144): 349–350. PubMed Abstract | Publisher Full Text\n\nAvasthi A: Ecosystem management. California tries to connect its scattered marine reserves. Science. 2005; 308(5721): 487–488. PubMed Abstract | Publisher Full Text\n\nBall IR, Possingham HP, Watts M: Marxan and relatives: Software for spatial conservation prioritisation. Chapter 14: In Spatial conservation prioritisation: Quantitative methods and computational tools. Eds Moilanen A, KA Wilson, and HP Possingham. Oxford University Press, Oxford, UK. 2009; 185–195. Reference Source\n\nBalmford A, Gravestock P, Hockley N, et al.: The worldwide costs of marine protected areas. Proc Natl Acad Sci U S A. 2004; 101(26): 9694–9697. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBellier E, Neubauer P, Monestiez P, et al.: Marine reserve spillover: Modelling from multiple data sources. Ecol Inform. 2013; 18: 188–193. Publisher Full Text\n\nBrander K, Hurley PC: Distribution of early-stage Atlantic cod (Gadus morhua), haddock (Melanogrammus aeglefinus), and witch flounder (Glyptocephalus cynoglossus) eggs on the Scotian Shelf: a reappraisal of evidence on the coupling of cod spawning and plankton production. Can J Fish Aquat Sci. 1992; 49(2): 238–251. 
Publisher Full Text\n\nCook RM, Holmes SJ, Fryer RJ: Grey seal predation impairs recovery of an over-exploited fish stock. J Appl Ecol. 2015; 52(4): 969–979. Publisher Full Text\n\nCostello C, Polasky S: Optimal harvesting of stochastic spatial resources. J Environ Econ Manage. 2008; 56(1): 1–18. Publisher Full Text\n\nCowen RK, Sponaugle S: Larval dispersal and marine population connectivity. Ann Rev Mar Sci. 2009; 1: 443–466. PubMed Abstract | Publisher Full Text\n\nCrowder LB, Lyman SJ, Figueira WF, et al.: Source-sink population dynamics and the problem of siting marine reserves. B Mar Sci. 2000; 66(3): 799–820. Reference Source\n\nDaigle R, Monaco CJ, Baldridge A: Bioeconomic MPA network design - R toolkit v0.1.1. Retrieved 15:26, Oct 26, 2015 (GMT). Figshare. 2015a. Publisher Full Text\n\nDaigle R, Monaco CJ, Baldridge A: Bioeconomic MPA network design - Cod case study output. Retrieved 15:26, Oct 26, 2015 (GMT). Figshare. 2015b. Publisher Full Text\n\nDFO: Stock Assessment of NAFO Subdivision 3Ps Cod. Can Sci Advis Sec Sci Advis Rep. 2014; 001. Reference Source\n\nDepartment of Fisheries and Oceans Canada (DFO): Commercial Fisheries Landings Reports. 2015. Reference Source\n\nDevillers R, Pressey RL, Grech A, et al.: Reinventing residual reserves in the sea: are we favouring ease of establishment over need for protection? Aquat Conserv. 2015; 25(4): 480–504. Publisher Full Text\n\nFeekings J, Lewy P, Madsen N, et al.: The effect of regulation changes and influential factors on Atlantic cod discards in the Baltic Sea demersal trawl fishery. Can J Fish Aquat Sci. 2013; 70(4): 534–542. Publisher Full Text\n\nGaines SD, White C, Carr MH, et al.: Designing marine reserve networks for both conservation and fisheries management. Proc Natl Acad Sci U S A. 2010; 107(43): 18286–18293. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGame ET, Grantham HS, Hobday AJ, et al.: Pelagic protected areas: the missing dimension in ocean conservation.
Trends Ecol Evol. 2009; 24(7): 360–369. PubMed Abstract | Publisher Full Text\n\nGhermandi A, Nunes PALD: A global map of coastal recreation values: Results from a spatially explicit meta-analysis. Ecol Econ. 2013; 86: 1–15. Publisher Full Text\n\nGreen AL, Fernandes L, Almany G, et al.: Designing Marine Reserves for Fisheries Management, Biodiversity Conservation, and Climate Change Adaptation. Coast Manage. 2014; 42(2): 143–159. Publisher Full Text\n\nGreen AL, Maypa AP, Almany GR, et al.: Larval dispersal and movement patterns of coral reef fishes, and implications for marine reserve network design: Connectivity and marine reserves. Biol Rev Camb Philos Soc. 2015; 90(4): 1215–47. PubMed Abstract | Publisher Full Text\n\nGrüss A, Kaplan DM, Guénette S, et al.: Consequences of adult and juvenile movement for marine protected areas. Biol Conserv. 2011; 144(2): 692–702. Publisher Full Text\n\nHalpern BS: The impact of marine reserves: do reserves work and does reserve size matter? Ecol Appl. 2003; 13: 117–137. Publisher Full Text\n\nHamilton LC, Butler M: Outport adaptations: Social indicators through Newfoundland’s cod crisis. Hum Ecol Rev. 2001; 8(2): 1–11. Reference Source\n\nHardisty DJ, Weber EU: Discounting future green: money versus the environment. J Exp Psychol Gen. 2009; 138(3): 329–340. PubMed Abstract | Publisher Full Text\n\nHastings A, Botsford LW: Comparing designs of marine reserves for fisheries and for biodiversity. Ecol Appl. 2003; 13: 65–70. Publisher Full Text\n\nHutchings JA, Myers RA: What can be learned from the collapse of a renewable resource? Atlantic cod, Gadus morhua, of Newfoundland and Labrador. Can J Fish Aquat Sci. 1994; 51(9): 2126–2146. Publisher Full Text\n\nKellner JB, Tetreault I, Gaines SD, et al.: Fishing the line near marine reserves in single and multispecies fisheries. Ecol Appl. 2007; 17(4): 1039–1054. 
PubMed Abstract | Publisher Full Text\n\nKinlan BP, Gaines SD: Propagule dispersal in marine and terrestrial environments: a community perspective. Ecology. 2003; 84: 2007–2020. Publisher Full Text\n\nKnickle DC, Rose GA: Comparing growth and maturity of sympatric Atlantic (Gadus morhua) and Greenland (Gadus ogac) cod in coastal Newfoundland. Can J Zool. 2013; 91(9): 672–677. Publisher Full Text\n\nLaurel BJ, Bradbury IR: “Big” concerns with high latitude marine protected areas (MPAs): trends in connectivity and MPA size. Can J Fish Aquat Sci. 2006; 63(12): 2603–2607. Publisher Full Text\n\nLawson GL, Rose GA: Seasonal distribution and movements of coastal cod (Gadus morhua L.) in Placentia Bay, Newfoundland. Fish Res. 2000; 49(1): 61–75. Publisher Full Text\n\nLester SE, Halpern BS, Grorud-Colvert K, et al.: Biological effects within no-take marine reserves: a global synthesis. Mar Ecol Prog Ser. 2009; 384: 33–46. Publisher Full Text\n\nLevin LA: Recent progress in understanding larval dispersal: new directions and digressions. Integr Comp Biol. 2006; 46(3): 282–297. PubMed Abstract | Publisher Full Text\n\nLupi F, Hoehn JP, Christie GC: Using an Economic Model of Recreational Fishing to Evaluate the Benefits of Sea Lamprey (Petromyzon marinus) Control on the St. Marys River. J Great Lakes Res. 2003; 29(Suppl 1): 742–754. Publisher Full Text\n\nMascia MB: The human dimension of coral reef marine protected areas: recent social science research and its policy implications. Conserv Biol. 2003; 17(2): 630–632. Publisher Full Text\n\nMcClanahan TR: Is there a future for coral reef parks in poor tropical countries? Coral Reefs. 1999; 18(4): 321–325. Publisher Full Text\n\nMcLeod E, Salm R, Green A, et al.: Designing marine protected area networks to address the impacts of climate change. Front Ecol Environ. 2009; 7(7): 362–370. Publisher Full Text\n\nMinto C, Worm B: Interactions between small pelagic fish and young cod across the North Atlantic. Ecology.
2012; 93(10): 2139–2154. PubMed Abstract | Publisher Full Text\n\nMoffett A, Garson J, Sarkar S: MultCSync: a software package for incorporating multiple criteria in conservation planning. Environ Model Softw. 2005; 20(10): 1315–1322. Publisher Full Text\n\nMoffitt EA, Wilson White J, Botsford LW: The utility and limitations of size and spacing guidelines for designing marine protected area (MPA) networks. Biol Conserv. 2011; 144(1): 306–318. Publisher Full Text\n\nMountain D, Green J, Sibunka J, et al.: Growth and mortality of Atlantic cod Gadus morhua and haddock Melanogrammus aeglefinus eggs and larvae on Georges Bank, 1995 to 1999. Mar Ecol Prog Ser. 2008; 353: 225–242. Publisher Full Text\n\nMyers RA, MacKenzie BR, Bowen KG, et al.: What is the carrying capacity for fish in the ocean? A meta-analysis of population dynamics of North Atlantic cod. Can J Fish Aquat Sci. 2001; 58(7): 1464–1476. Publisher Full Text\n\nOtterå H: Cultured Aquatic Species Information Programme. Gadus morhua. In: FAO Fisheries and Aquaculture Department. [online]. Rome, 2004. Reference Source\n\nPalumbi SR: Marine reserves and ocean neighborhoods: The spatial scale of marine populations and their management. Annu Rev Environ Resour. 2004; 29: 31–68. Publisher Full Text\n\nPendleton L, Donato DC, Murray BC, et al.: Estimating global “blue carbon” emissions from conversion and degradation of vegetated coastal ecosystems. PLoS One. 2012; 7(9): e43542. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSanchirico JN, Malvadkar U, Hastings A, et al.: When are no-take zones an economically optimal fishery management strategy? Ecol Appl. 2006; 16(5): 1643–1659. PubMed Abstract | Publisher Full Text\n\nSchrank WE, Roy N: The Newfoundland Fishery and Economy Twenty Years after the Northern Cod Moratorium. Mar Resour Econ. 2013; 28(4): 397–413. Publisher Full Text\n\nShanks AL, Grantham BA, Carr MH: Propagule dispersal distance and the size and spacing of marine reserves. Ecol Appl. 
2003; 13: S159–S169. Publisher Full Text\n\nShelton PA, Sinclair AF, Chouinard GA, et al.: Fishing under low productivity conditions is further delaying recovery of Northwest Atlantic cod (Gadus morhua). Can J Fish Aquat Sci. 2006; 63(2): 235–238. Publisher Full Text\n\nStephens PA, Sutherland WJ: Consequences of the Allee effect for behaviour, ecology and conservation. Trends Ecol Evol. 1999; 14(10): 401–405. PubMed Abstract | Publisher Full Text\n\nSwain DP, Chouinard GA: Predicted extirpation of the dominant demersal fish in a large marine ecosystem: Atlantic cod (Gadus morhua) in the southern Gulf of St. Lawrence. Can J Fish Aquat Sci. 2008; 65(11): 2315–2319. Publisher Full Text\n\nSwain DP, Sinclair AF, Castonguay M, et al.: Density-versus temperature-dependent growth of Atlantic cod (Gadus morhua) in the Gulf of St. Lawrence and on the Scotian Shelf. Fish Res. 2003; 59(3): 327–341. Publisher Full Text\n\nUNEP-CBD: Convention on biological diversity - Strategic plan for biodiversity 2011–2020: Further information related to the technical rationale for the aichi biodiversity targets, including potential indicators and milestones. United Nations Environment Programme. 2011. Reference Source\n\nUNEP-WCMC: Sourcebook of opportunities for enhancing cooperation among the Biodiversity-related Conventions at national and regional levels. United Nations Environment Programme. 2015. Reference Source\n\nWhite JW, Scholz AJ, Rassweiler A, et al.: A comparison of approaches used for economic analysis in marine protected area network planning in California. Ocean Coast Manag. 2013; 74: 77–89. Publisher Full Text\n\nWood LJ, Fish L, Laughren J, et al.: Assessing progress towards global marine protection targets: shortfalls in information and action. Oryx. 2008; 42(3): 340–351. Publisher Full Text\n\nWorm B, Myers RA: Meta-analysis of cod-shrimp interactions reveals top-down control in oceanic food webs. Ecology. 2003; 84: 162–173. 
Publisher Full Text\n\nZimmermann F, Jørgensen C, Wilberg M: Bioeconomic consequences of fishing-induced evolution: a model predicts limited impact on net present value. Can J Fish Aquat Sci. 2015; 72(4): 612–624. Publisher Full Text" }
[ { "id": "11198", "date": "03 Dec 2015", "name": "Pedro Peres-Neto", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper reports a quantitative tool that can be used to analyze the economic impact of different network designs for marine protected areas. The framework is quite flexible and will provide a useful toolkit for discussing the pros and cons of different designs. That said, I feel that it offers very little discussion of other potential algorithms, or of how to consider multiple species at the same time. In particular, would the authors suggest modelling different species separately and then building a consensus across different MPA designs? Some discussion on this issue would be particularly important. Another issue is environmental heterogeneity. I think it would be important that the authors at least discuss how environmental heterogeneity should be taken into account in designing MPAs across different scenarios. Large habitat heterogeneity within a given area or across areas would certainly lead to different design decisions than small heterogeneity. How should the matrix and the quality of corridors be considered?
In sum, I think this is a much-needed tool but the authors need to provide some additional information on how they see it being applied considering multiple species and environmental heterogeneity.", "responses": [ { "c_id": "2514", "date": "23 Feb 2017", "name": "Remi Daigle", "role": "Author Response", "response": "In response to multiple species: The modular design of the tool kit could allow users to add this level of complexity if so desired, but the current structure would require higher computational power to run a multiple-species model. We address this at the end of the first discussion paragraph.   Although we are not evaluating multiple species in this paper, we provide some citations and descriptions of approaches tried by other researchers. (See Discussion Paragraph 9.)   We have modified the model so that habitat carrying capacity is now a customizable input. While our paper does not focus on the issue of habitat heterogeneity, the BESTMPA package could now be used to investigate this question." } ] }, { "id": "11831", "date": "25 Jan 2016", "name": "Marco Andrello", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAs a referee, I am asked to comment on \"whether the work has been well designed, executed and discussed, not whether it is of importance or particular novelty\". The toolkit is presented quite clearly. The purpose of this article was also to assess the management of cod. 
A positive point of this article is the ability to assess the economic and ecological consequences of MPA spatial planning. I like this point very much. This article lacks a sensitivity analysis of the model results to all the parameters used in the model. Many parameters are uncertain, so a sensitivity analysis will be helpful. Also, you should not use an ANOVA to examine simulation results (see White et al. 2014 [1]). There are other points that should be addressed:\n\n- Egg production (0.5 million eggs per kg of female) is missing a reference.\n\n- Is the random walk for larval dispersal and adult movement realistic?\n\n- A beta distribution for larval mortality seems fine; but a normal distribution for adult mortality can give values above 1 or below 0, which is problematic.\n\n- Why was the size distribution of the newly generated MPAs based on that of the World Database of Protected Areas? Existing protected areas are not optimally sized.\n\n- The maximum distance scenario is the best in terms of biomass. Is there a \"dramatic\" increase in this scenario? Biomass attains more than 100 kT in 2043 (then decreases). Compared to biomass in the other scenarios, this increase is relevant. What about the historical biomass?\n\n- It is not clear to me how the profitability ratio works.\n\n- Please define the summary statistics used in Table 1. What are \"distance\" and \"moratorium\"? Are values averaged over the fifty years (2000–2050)?\n\nI do not have a sufficient level of expertise to evaluate the cost-benefit analysis.", "responses": [ { "c_id": "2513", "date": "23 Feb 2017", "name": "Remi Daigle", "role": "Author Response", "response": "\"As a referee, I am asked to comment on \"whether the work has been well designed, executed and discussed, not whether it is of importance or particular novelty\". The toolkit is presented quite clearly. The purpose of this article was also to assess the management of cod. 
A positive point of this article is the ability to assess the economic and ecological consequences of MPA spatial planning. I like this point very much. This article lacks a sensitivity analysis of the model results to all the parameters used in the model. Many parameters are uncertain, so a sensitivity analysis will be helpful. Also, you should not use an ANOVA to examine simulation results (see White et al. 2014 [1]).\" Sensitivity Analysis: We agree that, in light of parameter estimate uncertainty, sensitivity analysis was an important addition to this study.  We modified select parameters by ± 10% (or ± 1 y for integer values) for 25 replicate model runs and then compared the mean final net present value for the full 50-year time horizon to that of the full model (n = 100) in order to quantify the influence of each parameter estimate on model output.  We did this using the social discount rate of 0.015 because this is the value for which differences among the scenarios are most pronounced.  We also focused only on the Targeted and Status Quo scenarios because they are the most extreme scenarios.  We discovered that the most sensitive parameters are minimum age of catch and fecundity.   Use of ANOVA: We removed any reference to the ANOVA. Instead, we report error statistics (mean absolute error, root mean square error, and mean absolute percent error). \"There are other points that should be addressed:\" - Egg production (0.5 million eggs per kg of female) is missing a reference. We included a new reference (Otterå, 2004) to support this parameter value.   - \"Is the random walk for larval dispersal and adult movement realistic?\" Not quite: it does not really respect possible home ranges (adherence to an area), but it is scaled to real movement patterns. The random walk is the simplest approximation, and we make it possible for the users to use the best available connectivity information. 
This could be data from tagging, bio-physical models, or in the absence of more accurate data, the user can simply use a random walk.  - \"A beta distribution for larval mortality seems fine; but a normal distribution for adult mortality can give values above 1 or below 0, which is problematic.\" We have added a step that eliminates any values above 1 or below 0 (M <- M[M<=1&M>=0]), which, given the cod parameters, was exceedingly rare.   - \"Why was the size distribution of the newly generated MPAs based on that of the World Database of Protected Areas? Existing protected areas are not optimally sized.\" Andrello brings up an excellent point that the existing MPAs are not optimally sized. However, our intent in drawing from the existing size distribution was to work within the bounds set by previous management decisions. Our assumption here is that decision makers would consider the size of the MPA selected as reasonable based on precedent set in other areas. We revised the text at the end of the “Management strategies scenarios” section to reflect our rationale.     - \"The maximum distance scenario is the best in terms of biomass. Is there a \"dramatic\" increase in this scenario? Biomass attains more than 100 kT in 2043 (then decreases). Compared to biomass in the other scenarios, this increase is relevant. What about the historical biomass?\" We had problems with the “virtual_fish_ratio” of the original model, where 1 ‘virtual’ fish represented 20,000 real fish. This led to unnecessary variability in the results. The model has been redesigned and the “virtual_fish_ratio” has been eliminated, making the new version a truly individual-based model. The biomass for the maximum distance scenario is not as high with the new model.   - \"It is not clear to me how the profitability ratio works.\" We revised the description to clarify how the profitability ratio was calculated (see the first paragraph of the “Management strategies scenario” section).  
In the process, we discovered we had neglected to include one of the necessary citations, which is now added to the References section.     - \"Please define the summary statistics used in Table 1. What are \"distance\" and \"moratorium\"? Are values averaged over the fifty years (2000-2050)?\" We have edited the table legend to address these questions." } ] } ]
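The one-at-a-time sensitivity procedure the authors describe (perturb each parameter by ±10%, or ±1 year for integer values, run 25 replicates, and compare the mean final net present value against the full model at a 0.015 social discount rate) can be sketched as follows. This is a hedged Python illustration only: BESTMPA itself is an R package, and the toy model, parameter names, and noise below are placeholders, not the actual simulation.

```python
import random

DISCOUNT_RATE = 0.015  # social discount rate used in the authors' comparison

def run_model(params, seed):
    """Illustrative stand-in for one 50-year bioeconomic simulation run.

    Returns the net present value (NPV) of a noisy annual profit stream
    that scales with two hypothetical parameters. This is NOT the BESTMPA
    model; it only mirrors the shape of the sensitivity procedure."""
    rng = random.Random(seed)
    annual_profit = params["fecundity"] * 1e-5 / params["min_catch_age"]
    return sum(annual_profit * rng.uniform(0.9, 1.1) / (1 + DISCOUNT_RATE) ** t
               for t in range(1, 51))

def mean_npv(params, n_reps):
    """Mean final NPV over n_reps replicate runs."""
    return sum(run_model(params, seed) for seed in range(n_reps)) / n_reps

baseline = {"fecundity": 500_000.0,  # eggs per kg of female (value from the response)
            "min_catch_age": 4}      # integer parameter: perturbed by +/- 1 year

base_npv = mean_npv(baseline, n_reps=100)  # "full model": n = 100 replicates

# One-at-a-time sensitivity: +/-10% (or +/-1 for integers), 25 replicates each,
# reported as percent change in mean final NPV relative to the full model.
for name, delta in [("fecundity", 0.10), ("min_catch_age", 1)]:
    for sign in (1, -1):
        perturbed = dict(baseline)
        if isinstance(delta, float):
            perturbed[name] = baseline[name] * (1 + sign * delta)
        else:
            perturbed[name] = baseline[name] + sign * delta
        change = 100 * (mean_npv(perturbed, 25) - base_npv) / base_npv
        print(f"{name} {'+' if sign > 0 else '-'}: {change:+.1f}% change in mean NPV")
```

Ranking parameters by the magnitude of this percent change is how the authors identify minimum age of catch and fecundity as the most influential inputs.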
1. https://f1000research.com/articles/4-1234
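The bounds check the authors added for adult mortality (the R line quoted in their response, M <- M[M<=1&M>=0]) can be mirrored in Python. The mean and standard deviation below are illustrative placeholders, not the cod parameter estimates; and unlike the R filter, which simply drops out-of-range values so the vector shrinks, this sketch redraws until the requested number of valid rates is collected.

```python
import random

def bounded_mortality(mean, sd, n, seed=42):
    """Draw n adult mortality rates from a normal distribution, keeping
    only values in [0, 1] -- the same constraint as the authors' R filter
    M <- M[M <= 1 & M >= 0].  mean/sd are illustrative, not cod values."""
    rng = random.Random(seed)
    draws = []
    while len(draws) < n:
        m = rng.gauss(mean, sd)
        if 0.0 <= m <= 1.0:  # keep only valid mortality rates
            draws.append(m)
    return draws

rates = bounded_mortality(mean=0.2, sd=0.05, n=1000)
print(min(rates), max(rates))  # both within [0, 1]
```

For parameters like those above, out-of-range draws are rare, so dropping versus redrawing makes little practical difference; the distinction matters only when the normal distribution places substantial mass outside [0, 1].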
https://f1000research.com/articles/5-2432/v1
04 Oct 16
{ "type": "Research Note", "title": "Validation of syngeneic mouse models of melanoma and non-small cell lung cancer for investigating the anticancer effects of the soy-derived peptide Lunasin", "authors": [ "Bharat Devapatla", "Chris Shidal", "Kavitha Yaddanapudi", "Keith R. Davis" ], "abstract": "Background: Lunasin is a naturally occurring peptide present in soybean that has both chemopreventive and therapeutic activities that can prevent cellular transformation and inhibit the growth of several human cancer types. Recent studies indicate that Lunasin has several distinct potential modes of action including suppressing integrin signaling and epigenetic effects driven by modulation of histone acetylation. In addition to direct effects on cancer cells, Lunasin also has effects on innate immunity that may contribute to its ability to inhibit tumor growth in vivo. Methods: Standard assays for cell proliferation and colony formation were used to assess Lunasin’s in vitro activity against murine Lewis lung carcinoma (LLC) and B16-F0 melanoma cells.  Lunasin’s in vivo activity was assessed by comparing the growth of tumors initiated by subcutaneous implantation of LLC or B16-F0 cells in Lunasin-treated and untreated C57BL/6 mice. Results: Lunasin was found to inhibit growth of murine LLC cells and murine B16-F0 melanoma cells in vitro and in wild-type C57BL/6 mice.  The effects of Lunasin in these two mouse models were very similar to those previously observed in studies of human non-small cell lung cancer and melanoma cell lines. Conclusions: We have now validated two established syngeneic mouse models as being responsive to Lunasin treatment.  
The validation of these two in vivo syngeneic models will allow detailed studies on the combined therapeutic and immune effects of Lunasin in a fully immunocompetent mouse model.", "keywords": [ "Lung cancer", "Melanoma", "Syngeneic tumor model", "LLC", "B16-F0" ], "content": "Introduction\n\nLunasin is a multifunctional bioactive peptide present as a component of the storage protein fraction in soybean seeds and in soy-derived food products1–4. Studies from several laboratories have documented that Lunasin has both chemopreventive activity that inhibits cellular transformation by carcinogens or oncogenes5–7 and chemotherapeutic activity against multiple human cancer types8–15. Taken together, these observations suggest that Lunasin may be one of the factors responsible for the lower cancer rates observed in people who consume high-soy diets1–3. One intriguing aspect of Lunasin is that this 44-amino-acid peptide has at least three potential functional domains: a polyaspartic-acid C-terminal tail that binds Lunasin to the core histones H3 and H47,16–18, a tripeptide Arg-Gly-Asp (RGD) domain that can serve as a recognition signal for specific integrins9,16,19, and a putative helical chromatin binding domain3,11.\n\nOur previous studies found that native Lunasin purified from soybean has therapeutic activity against established human non-small cell lung cancer (NSCLC) and melanoma cell lines both in vitro and in vivo8,15. In the case of NSCLC, in vitro studies suggested that a primary mechanism of action was the inhibition of proliferation caused by inhibition of integrin signaling and decreased retinoblastoma protein phosphorylation15,16,20. In the case of melanoma, Lunasin caused a significant decrease in putative cancer stem cells by causing these cells to switch phenotypes to a cell type expressing higher levels of the transcription factor MITF and one of its downstream targets, tyrosinase. 
In addition, decreased levels of the stemness protein Nanog were also observed8. Our recent unpublished studies suggest that Lunasin effects on melanoma cells are also mediated, at least in part, by effects on integrin signaling (C. Shidal, K. Yaddanapudi, and K.R. Davis, unpublished results). These results, along with a recent report on the effects of Lunasin on colon cancer stem-like cells10, suggest the exciting possibility that Lunasin can be used to target cancer stem cells.\n\nOne of the more recent unexpected and exciting findings regarding Lunasin’s anticancer effects is that Lunasin appears to also have immunomodulatory activity11,21,22. Interestingly, these effects correlate with epigenetic effects and do not require the RGD domain or the polyaspartic-acid tail, thus implicating the putative chromatin-binding domain as being important11. Given that Lunasin has both direct therapeutic effects on cancer cells as well as the ability to affect immunity, we were prompted to determine if syngeneic mouse cancer models could be identified where both of these activities could be studied in concert so that the relative contribution of these two different effects on the potent in vivo activity of Lunasin could be determined. In these studies, we demonstrate that Lunasin has significant in vitro and in vivo activity in syngeneic mouse models for lung cancer and melanoma. These syngeneic models will provide the ability to pursue studies of Lunasin action in an immunocompetent host and use genetic approaches to understand how specific genetic manipulations affect Lunasin’s ability to inhibit tumor growth and metastasis.\n\n\nMethods\n\nLunasin was purified from soybean white flake (Owensboro Grain Company) as previously described23 by Kentucky BioProcessing (Owensboro, KY). Analysis by sodium dodecyl sulfate polyacrylamide gel electrophoresis indicated that this Lunasin preparation had >99% purity8. 
The purified Lunasin was diluted to a concentration of 9.3 mg/ml in sterile 50 mM sodium phosphate buffer, pH 7.4, and stored at 4°C.\n\nLLC (mouse lung carcinoma) and B16-F0 (mouse melanoma) cell lines were obtained from the American Type Culture Collection (ATCC). LLC and B16-F0 cells were cultured in DMEM medium (Invitrogen). Medium was supplemented with 10% fetal bovine serum (Invitrogen), 100 IU/mL of penicillin, and 100 μg/mL of streptomycin (Invitrogen), and cells were grown at 37°C in a humidified incubator containing 5% CO2.\n\nIn vitro cell growth inhibition was measured via a tetrazolium-based [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium] salt (MTS) assay (Promega). Briefly, 2 × 10³ cells were plated into 96-well plates and incubated overnight. The cells were treated with the indicated concentrations of Lunasin for 72 hours in 100 µL fresh medium. Every 24 hours, cell culture medium was replaced with fresh culture medium amended with the indicated concentrations of Lunasin. After 72 hours, 20 µL of CellTiter 96® AQueous One reagent (Promega) was added and incubated with the cells for 1 hour. Absorbance was measured at 490 nm using a Synergy HT plate reader (Biotek). Cell growth was estimated from the absorbance readings and normalized to vehicle-treated control cells. Averages of three replicates per treatment were used for analysis.\n\nColony-forming assays were done as previously described8 except that a 24-well per plate format was used. LLC and B16-F0 cells were plated at a density of 500 and 1,000 cells/well, respectively.\n\nSix-week-old male mice (C57BL/6) were purchased from Harlan Laboratories (Indianapolis, IN). All procedures involving mice were carried out in accordance with the international guidelines of the Association for Assessment and Accreditation of Laboratory Animal Care with the approval of the University of Louisville Institutional Animal Care and Use Committee (Protocol # 12091). 
Mice were maintained in the University of Louisville Health Center animal use facility and maintained by Research Resources Facilities staff using standard approved protocols. Mice were housed in polycarbonate shoebox cages (maximum 5 mice/cage) on a ventilated rack system in a temperature-controlled room operating on a timed 12-hour light/dark cycle. Mice were randomly placed into groups (6–10 mice per group) and received subcutaneous injections of LLC (1 × 10⁵) or B16-F0 (1 × 10⁶) cells suspended in 100 μL of phosphate buffered saline (PBS) in the hind flank. Tumors were measured starting 10 days post-injection up to 22 days post-injection. Tumor size was measured twice weekly using digital calipers (Mitutoyo) with an accuracy of ± 0.02 mm. Tumor volume was calculated as (w² × l)/2, where w = width and l = length. All mice except those in the control group were treated with Lunasin daily starting from the day of injection. Lunasin was administered by intraperitoneal (IP) injections in 50 mM phosphate buffer at a dose of either 10 or 30 mg Lunasin/kg body weight. In some experiments, cells were pretreated with 100 μM Lunasin for 72 hours prior to injection of cells into mice. At the end of the experiments, mice were euthanized by CO2 asphyxiation followed by cervical dislocation.\n\n\nResults and discussion\n\nWe tested the ability of Lunasin to inhibit LLC and B16-F0 growth in both adherent and non-adherent assays. In adherent assays, Lunasin had modest dose-dependent effects on the growth of both LLC and B16-F0 cells (<10% at 30 and 100 μM Lunasin; Figure 1A–B, Table 1). In contrast, Lunasin had substantial inhibitory activity in non-adherent colony-forming assays. Both LLC and B16-F0 exhibited a dose-dependent reduction in colony formation from ~20% to 40% over a Lunasin concentration range of 10 to 100 μM (Figure 1C–D, Table 2). 
The difference in activity observed in adherent versus non-adherent assays recapitulates our previous results using human NSCLC and melanoma cells and likely reflects differences in integrin expression profiles under these distinct culture conditions8,15. The sensitivity of the mouse cell lines was comparable to that observed for human NSCLC and melanoma cells. Growth inhibition under adherent culture conditions was <15% for most NSCLC cell lines and <10% for melanoma cell lines treated with 100 μM Lunasin, whereas inhibition of colony formation by human NSCLC and melanoma cell lines treated with 100 μM Lunasin ranged from ~65% to 85%, and ~20% to 40%, respectively. These results demonstrate that the Lunasin sensitivities of human and mouse lung cancer and melanoma cells are quite similar in vitro.\n\nCells were cultured under adherent (A,B) or non-adherent (C,D) culture conditions and treated with the indicated concentrations of Lunasin. For adherent-culture assays, proliferation was assessed after 72 hours of treatment using an MTS assay. For non-adherent-culture assays, colonies were allowed to form over 10–18 days until colonies grew to approximately 100 μm in diameter. The number of colonies formed was counted after staining with crystal violet. Data from both assays have been normalized to the vehicle-treated control and represent the mean ± S.D. An asterisk (*) indicates that a treatment was significantly different (p < 0.05) from the control as determined by an unpaired Student’s t-test.\n\nWe initially tested the ability of Lunasin to inhibit tumor growth initiated by LLC cells at doses of 10 and 30 mg/kg. Lunasin inhibited tumor growth in mice treated at the 30 mg/kg dose by 55% at day 22, whereas the 10 mg/kg dose had only modest effects that were only statistically significant on days 18 and 20 (Figure 2A, Table 3). We next tested whether pre-treating LLC cells with 100 μM Lunasin for 72 h in vitro prior to implantation further affected tumor growth. 
The results clearly show that pre-treatment did not enhance inhibition of tumor growth by Lunasin at a dose of 30 mg/kg (Figure 2B, Table 4). In this experiment, tumor growth at day 22 was 43% of the control. The inhibition of LLC tumor growth by Lunasin was somewhat less than that observed in xenograft studies of NSCLC H1299, where tumor growth was reduced by 63% at 32 days in mice treated with 30 mg/kg Lunasin15.\n\n(A) Effects of 10 mg/kg and 30 mg/kg Lunasin treatment on LLC tumor growth. (B) Effects of 30 mg/kg Lunasin on the growth of tumors initiated by LLC cells either not pre-treated (Lunasin-, red) or pre-treated with 100 μM Lunasin (Lunasin+, blue) for 72 hours prior to injection of cells into mice. (C) Effects of 30 mg/kg Lunasin on the growth of tumors initiated by B16-F0 cells either not pre-treated (Lunasin-, red) or pre-treated with 100 μM Lunasin (Lunasin+, blue) for 72 hours prior to injection of cells into mice. LLC (1 × 10⁵) or B16-F0 (1 × 10⁶) cells were injected subcutaneously in the hind flanks of mice to initiate tumors. Lunasin treatments were initiated on the same day that cells were injected and continued daily until the end of the experiment. Tumor volumes were determined from caliper measurements. Treatment groups contained 6–10 mice per group. The data shown represent the mean ± SEM and an asterisk (*) indicates that an individual treatment was significantly different (p < 0.05) from the control as determined by an unpaired Student’s t-test.\n\nLunasin at a dose of 30 mg/kg was also found to inhibit tumor growth initiated by B16-F0 melanoma cells, with a reduction in tumor growth of 60% at day 22 (Figure 2C, Table 5). As was the case with LLC, pre-treatment with 100 μM Lunasin for 72 h in vitro did not enhance inhibition of tumor growth. 
These results are quite comparable to our xenograft studies using the human melanoma cell line A375 where we observed a 55% reduction in tumor volume 34 days after implantation8.\n\n\nConclusion\n\nThese studies establish that syngeneic mouse models for lung cancer and melanoma are sensitive to Lunasin and that their sensitivity is comparable to that observed in studies of human NSCLC and melanoma. Thus, these models can be used to apply the power of mouse genetic tools to further elucidate the mechanisms of Lunasin action and provide important new information on the feasibility of using Lunasin to treat these two deadly cancers.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data of validation of syngeneic mouse models of melanoma and non-small cell lung cancer for investigating the anticancer effects of the soy-derived peptide lunasin, 10.5256/f1000research.9661.d13696524", "appendix": "Author contributions\n\n\n\nBD performed the MTS assays and in vivo studies, and edited the manuscript; CS performed the colony-forming assays and edited the manuscript; KY assisted in directing the study and editing the manuscript; KRD directed the study and wrote the manuscript. All authors agreed to the final content of this article.\n\n\nCompeting interests\n\n\n\nKRD is listed as an inventor on two issued patents relating to the expression and purification of Lunasin peptides and may benefit financially if the technologies described in these patents are licensed or sold. BD, CS, and KY declare no conflict of interests.\n\n\nGrant information\n\nThis work was funded by Owensboro Grain Company, Owensboro, Kentucky, USA.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nLule VK, Garg S, Pophaly SD, et al.: \"Potential health benefits of lunasin: a multifaceted soy-derived bioactive peptide\". J Food Sci. 2015; 80(3): R485–94. 
PubMed Abstract | Publisher Full Text\n\nLiu J, Jia SH, Kirberger M, et al.: Lunasin as a promising health-beneficial peptide. Eur Rev Med Pharmacol Sci. 2014; 18(14): 2070–5. PubMed Abstract\n\nHernández-Ledesma B, Hsieh CC, de Lumen BO: Chemopreventive properties of Peptide Lunasin: a review. Protein Pept Lett. 2013; 20(4): 424–32. PubMed Abstract | Publisher Full Text\n\nCavazos A, Morales E, Dia VP, et al.: Analysis of lunasin in commercial and pilot plant produced soybean products and an improved method of lunasin purification. J Food Sci. 2012; 77(5): C539–45. PubMed Abstract | Publisher Full Text\n\nHsieh CC, Hernández-Ledesma B, de Lumen BO: Lunasin-aspirin combination against NIH/3T3 cells transformation induced by chemical carcinogens. Plant Foods Hum Nutr. 2011; 66(2): 107–13. PubMed Abstract | Publisher Full Text\n\nLam Y, Galvez A, de Lumen BO: Lunasin suppresses E1A-mediated transformation of mammalian cells but does not inhibit growth of immortalized and established cancer cell lines. Nutr Cancer. 2003; 47(1): 88–94. PubMed Abstract | Publisher Full Text\n\nGalvez AF, Chen N, Macasieb J, et al.: Chemopreventive property of a soybean peptide (lunasin) that binds to deacetylated histones and inhibits acetylation. Cancer Res. 2001; 61(20): 7473–8. PubMed Abstract\n\nShidal C, Al-Rayyan N, Yaddanapudi K, et al.: Lunasin is a novel therapeutic agent for targeting melanoma cancer stem cells. Oncotarget. 2016. PubMed Abstract | Publisher Full Text\n\nJiang Q, Pan Y, Cheng Y, et al.: Lunasin suppresses the migration and invasion of breast cancer cells by inhibiting matrix metalloproteinase-2/-9 via the FAK/Akt/ERK and NF-κB signaling pathways. Oncol Rep. 2016; 36(1): 253–62. PubMed Abstract | Publisher Full Text\n\nMontales MT, Simmen RC, Ferreira ES, et al.: Metformin and soybean-derived bioactive molecules attenuate the expansion of stem cell-like epithelial subpopulation and confer apoptotic sensitivity in human colon cancer cells. Genes Nutr. 
2015; 10(6): 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChang HC, Lewis D, Tung CY, et al.: Soypeptide lunasin in cytokine immunotherapy for lymphoma. Cancer Immunol Immunother. 2014; 63(3): 283–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDia VP, Gonzalez de Mejia E: Lunasin potentiates the effect of oxaliplatin preventing outgrowth of colon cancer metastasis, binds to α5β1 integrin and suppresses FAK/ERK/NF-κB signaling. Cancer Lett. 2011; 313(2): 167–80. PubMed Abstract | Publisher Full Text\n\nHsieh CC, Hernández-Ledesma B, de Lumen BO: Lunasin, a novel seed peptide, sensitizes human breast cancer MDA-MB-231 cells to aspirin-arrested cell cycle and induced apoptosis. Chem Biol Interact. 2010; 186(2): 127–34. PubMed Abstract | Publisher Full Text\n\nDia VP, Mejia EG: Lunasin promotes apoptosis in human colon cancer cells by mitochondrial pathway activation and induction of nuclear clusterin expression. Cancer Lett. 2010; 295(1): 44–53. PubMed Abstract | Publisher Full Text\n\nMcConnell EJ, Devapatla B, Yaddanapudi K, et al.: The soybean-derived peptide lunasin inhibits non-small cell lung cancer cell proliferation by suppressing phosphorylation of the retinoblastoma protein. Oncotarget. 2015; 6(7): 4649–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nInaba J, McConnell EJ, Davis KR: Lunasin sensitivity in non-small cell lung cancer cells is linked to suppression of integrin signaling and changes in histone acetylation. Int J Mol Sci. 2014; 15(12): 23705–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHernández-Ledesma B, Hsieh CC, de Lumen BO: Relationship between lunasin's sequence and its inhibitory activity of histones H3 and H4 acetylation. Mol Nutr Food Res. 2011; 55(7): 989–98. PubMed Abstract | Publisher Full Text\n\nJeong HJ, Jeong JB, Kim DS, et al.: Inhibition of core histone acetylation by the cancer preventive peptide lunasin. J Agric Food Chem. 2007; 55(3): 632–7. 
PubMed Abstract | Publisher Full Text\n\nCam A, de Mejia EG: RGD-peptide lunasin inhibits Akt-mediated NF-κB activation in human macrophages through interaction with the αVβ3 integrin. Mol Nutr Food Res. 2012; 56(10): 1569–81. PubMed Abstract | Publisher Full Text\n\nDavis KR, Inaba J: Lunasin—a multifunctional anticancer peptide from soybean. Int J Cancer Ther. 2016; 4(2): 4218. Publisher Full Text\n\nYang X, Zhu J, Tung CY, et al.: Lunasin alleviates allergic airway inflammation while increases antigen-specific Tregs. PLoS One. 2015; 10(2): e0115330. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTung CY, Lewis DE, Han L, et al.: Activation of dendritic cell function by soypeptide lunasin as a novel vaccine adjuvant. Vaccine. 2014; 32(42): 5411–9. PubMed Abstract | Publisher Full Text\n\nSeber LE, Barnett BW, McConnell EJ, et al.: Scalable purification and characterization of the anticancer lunasin peptide from soybean. PLoS One. 2012; 7(4): e35409. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDevapatla B, Shidal C, Yaddanapudi K, et al.: Dataset 1 in: Validation of syngeneic mouse models of melanoma and non-small cell lung cancer for investigating the anticancer effects of the soy-derived peptide Lunasin. F1000Research. 2016. Data Source" }
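A worked check of the caliper formula from the Methods of the record above. The half-ellipsoid convention V = (w² × l)/2 is assumed here: it is the common convention for subcutaneous tumor measurements, but the division by 2 is an assumption on our part, not confirmed by the source text.

```python
def tumor_volume_mm3(width_mm, length_mm):
    """Caliper estimate of subcutaneous tumor volume in mm^3.

    Uses the common half-ellipsoid convention V = (w^2 * l) / 2, with
    w the shorter axis.  The /2 factor is an assumption here; it treats
    the tumor as half of an ellipsoid of revolution."""
    return width_mm ** 2 * length_mm / 2

# e.g. a tumor measuring 6 mm (width) by 10 mm (length)
print(tumor_volume_mm3(6, 10))  # 180.0
```

Because width enters squared, caliper error on the short axis dominates the volume estimate, which is why consistent assignment of the shorter measurement to w matters when comparing treatment groups.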
[ { "id": "17429", "date": "07 Nov 2016", "name": "Jean-Pierre Gillet", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis manuscript would benefit from further discussion on:\nThe clinical relevance of the Lunasin concentration used in the mice studies.\n\nThe rationale and the relevance of pretreating the cell lines with Lunasin prior to mouse engraftment.\n\nThe pros/cons of this model compared with GEMM (including regulatory agency recommendations to better appraise the current drug development environment).", "responses": [ { "c_id": "2489", "date": "22 Feb 2017", "name": "Keith Davis", "role": "Author Response F1000Research Advisory Board Member", "response": "We thank Dr. Gillet for his review and suggestions. 1. The clinical relevance of the Lunasin concentration used in the mice studies.   As discussed in our response to Dr. Terabe’s comments, it is difficult to assess the clinical relevance of the 30 mg/kg body weight dose used in our studies beyond the fact that this dose is comparable to that used for other protein and/or peptide drugs and that it is not unusually high when compared to a number of other agents being tested at the preclinical stage.   2.  The rationale and the relevance of pretreating the cell lines with Lunasin prior to mouse engraftment.   
The rationale for pretreating the cells prior to engraftment was based on our recent studies demonstrating the ability of Lunasin to reduce the putative cancer initiating cell population of human melanoma cell lines (identified as being ALDHhigh) [1]. We have subsequently not pursued this avenue of research with the B16-F10 or LLC cells, so we cannot provide any further data on whether any ALDHhigh cells that may be present in these cell lines were affected by Lunasin. We have added the rationale for the pretreatment to the Results and Discussion. 3. The pros/cons of this model compared with GEMM (including regulatory agency recommendations to better appraise the current drug development environment). Dr. Gillet raises a good point, given that the landscape of preclinical cancer research has changed significantly with the development of transgenic mouse models for a variety of cancer types, and in some cases, subtypes of specific cancers. It is well known that traditional xenograft studies using established human cell lines are often not predictive of clinical efficacy, which can also be the case for the B16 and LLC syngeneic models. However, these more traditional models do still have a role in early preclinical studies when one requires a rapid and relatively inexpensive method to obtain an initial assessment of a compound’s anticancer activity in vivo [2, 3]. In the case of xenograft studies, there has been a resurgence in their use, as they represent a convenient system for maintaining and studying patient-derived tumors. Our decision to test the syngeneic B16 and LLC models was based primarily on identifying a model where we could use a fully immunocompetent mouse without establishing the rather costly GEMM and humanized mouse models. Indeed, now that we have significant results in human xenograft and mouse syngeneic models, we hope to be able to extend our studies into appropriate GEMM models as the next logical step leading towards clinical testing.
We have not significantly modified the manuscript to address this point, other than to modify the Conclusion to limit the likely utility of using these syngeneic models. The purpose of this research note is simply to inform researchers interested in Lunasin that the B16 and LLC models do respond and may be appropriate for their studies, depending on the goal. We feel that a more detailed discussion of the pros and cons of these models compared to GEMM and humanized mouse models would be more appropriate for a full research paper. 1. Shidal, C., et al., Lunasin is a novel therapeutic agent for targeting melanoma cancer stem cells. Oncotarget, 2016. 7(51): p. 84128-84141. 2. Richmond, A. and Y. Su, Mouse xenograft models vs GEM models for human cancer therapeutics. Dis Model Mech, 2008. 1(2-3): p. 78-82. 3. Talmadge, J.E., et al., Murine models to evaluate novel and conventional therapeutic strategies for cancer. Am J Pathol, 2007. 170(3): p. 793-804." } ] }, { "id": "17556", "date": "23 Nov 2016", "name": "Masaki Terabe", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nMajor points:\nAre the concentrations of Lunasin used physiologically relevant?\n\nDoes Lunasin have similar effects on the progression of these tumors in immunodeficient mice such as RAG deficient mice and RAG/common gamma chain deficient mice?
Comparing the effect of Lunasin in these two models, immunodeficient mice and immunocompetent mice, will provide information as to whether adaptive and innate immune cells are involved. Since there is a direct toxicity of Lunasin to tumor cells, it would be important to show the contribution of immune cells to the anti-tumor effect of Lunasin seen in the in vivo models.\n\nMinor point:\nAxis labels of Fig 1 may not be correct. Are those really % control?", "responses": [ { "c_id": "2490", "date": "22 Feb 2017", "name": "Keith Davis", "role": "Author Response F1000Research Advisory Board Member", "response": "We thank Dr. Terabe for his review and comments. 1. Are the concentrations of Lunasin used physiologically relevant? Our view is that it is difficult to assess the physiological relevance of potential therapeutics in the absence of detailed pharmacokinetic data, particularly when assessing in vitro assays, which often are not very predictive of in vivo effects. We assume that Lunasin is not particularly stable in cell cultures where a number of proteases are present; thus, it is not surprising that concentrations of 100 µM are required for significant activity. With respect to the in vivo studies, a dose of 30 mg/kg body weight is similar to that of several biologic drugs, as well as the RGD-peptide drug cilengitide [1]. Our initial preclinical data in mice, showing that daily injection of 30 mg/kg Lunasin does not induce any signs of toxicity while providing a therapeutic effect [2], suggest that the dosage of Lunasin used in vivo is potentially physiologically relevant. Clearly, more studies are needed to validate this conclusion in humans, and it is very likely that substantially more work will need to be done to find appropriate formulations and/or modifications of the Lunasin peptide before it would have potential clinical utility. 2.
Does Lunasin have similar effects on the progression of these tumors in immunodeficient mice such as RAG deficient mice and RAG/common gamma chain deficient mice? Comparing the effect of Lunasin in these two models, immunodeficient mice and immunocompetent mice, will provide information as to whether adaptive and innate immune cells are involved. Since there is a direct toxicity of Lunasin to tumor cells, it would be important to show the contribution of immune cells to the anti-tumor effect of Lunasin seen in the in vivo models. We have not done these studies, nor are we aware that they have been done by others. We agree that using mouse genotypes with various levels of immunodeficiency would be an excellent way to address the potential immune modulatory effects of Lunasin. However, this is beyond the scope of this research note. Minor point: 1. Axis labels of Fig 1 may not be correct. Are those really % control? The original graph did use a decimal representation of the data rather than the percentage. We have modified the axis to represent the percentage values to be consistent with the axis label. 1. Nabors, L.B., et al., Two cilengitide regimens in combination with standard treatment for patients with newly diagnosed glioblastoma and unmethylated MGMT gene promoter: results of the open-label, controlled, randomized phase II CORE study. Neuro-Oncology, 2015. 17(5): p. 708-717. 2. Shidal, C., et al., Lunasin is a novel therapeutic agent for targeting melanoma cancer stem cells. Oncotarget, 2016. 7(51): p. 84128-84141." } ] }, { "id": "18085", "date": "30 Nov 2016", "name": "Elizabeth S.
Yeh", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe manuscript would be improved by addressing the following suggestions:\nIs it known how Lunasin is penetrating the cell membrane or whether it is acting on an extracellular receptor?\n\nWith in vivo studies using peptides, it is difficult to assess bioavailability due to lack of stability. The manuscript would benefit from discussing what is known about in vivo delivery. This may take into account why pre-treatment of cells was no more effective than s.c. delivery.", "responses": [ { "c_id": "2491", "date": "22 Feb 2017", "name": "Keith Davis", "role": "Author Response F1000Research Advisory Board Member", "response": "We thank Dr. Yeh for her review and comments. 1. Is it known how Lunasin is penetrating the cell membrane or whether it is acting on an extracellular receptor? We do not currently know all the precise mechanisms whereby Lunasin may enter the cell. There are data that strongly support the hypothesis that Lunasin can enter cells via the recycling of integrin receptors; however, how Lunasin is released internally and makes its way to the nucleus is not known. Our studies suggest that it may not be necessary for Lunasin to enter the cell to exert at least some of its effects, given that the inhibition of integrin signaling likely occurs at the cell surface. We have added additional details in the Introduction to address this question. 2.
With in vivo studies using peptides, it is difficult to assess bioavailability due to lack of stability. The manuscript would benefit from discussing what is known about in vivo delivery. This may take into account why pre-treatment of cells was no more effective than s.c. delivery. There is very little information available on the pharmacokinetics and bioavailability of Lunasin in vitro or in vivo, particularly with regard to the purified Lunasin peptide. Human subjects fed 50 g of soy protein per day were found to have detectable levels of Lunasin in their blood. Based on the amount of Lunasin thought to be present in the soy protein and estimates of loss due to digestion, the authors concluded that the absorption rate of Lunasin in these subjects averaged 4.5% [1]. Another study suggested that Lunasin was also orally bioavailable in mice and that as much as 30% of the Lunasin reached the tested tissues intact [2]. This is clearly an important area that needs significantly more work as we continue assessing the potential of Lunasin as an anticancer agent. We have added the details of the available data on bioavailability in the Introduction. 1. Dia, V.P., et al., Presence of lunasin in plasma of men after soy protein consumption. J Agric Food Chem, 2009. 57(4): p. 1260-6. 2. Hsieh, C.C., et al., Complementary roles in cancer prevention: protease inhibitor makes the cancer preventive peptide lunasin bioavailable. PLoS One, 2010. 5(1): p. e8890." } ] } ]
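As context for the dose-relevance exchange in the reviews above (whether 30 mg/kg in mice is clinically plausible), such mouse doses are conventionally translated to humans by body-surface-area scaling. The sketch below is illustrative only and is not from the authors: it assumes the published default FDA Km conversion factors (mouse = 3, adult human = 37) and an arbitrary nominal 60 kg adult.

```python
# Illustrative human equivalent dose (HED) conversion via body-surface-area scaling.
# Km factors are the standard FDA defaults; the 60 kg body weight is an assumption.

MOUSE_KM = 3    # FDA Km conversion factor for mouse
HUMAN_KM = 37   # FDA Km conversion factor for adult human

def human_equivalent_dose(animal_dose_mg_per_kg: float,
                          animal_km: float = MOUSE_KM,
                          human_km: float = HUMAN_KM) -> float:
    """HED (mg/kg) = animal dose (mg/kg) * (animal Km / human Km)."""
    return animal_dose_mg_per_kg * animal_km / human_km

hed = human_equivalent_dose(30.0)   # the 30 mg/kg mouse dose discussed above
total = hed * 60.0                  # total daily dose for a nominal 60 kg adult
print(f"HED ~= {hed:.2f} mg/kg; ~{total:.0f} mg/day for a 60 kg adult")
```

On these assumptions the 30 mg/kg mouse dose corresponds to roughly 2.4 mg/kg in an adult, which is consistent with the authors' point that the dose is in the range used for other peptide and biologic drugs.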
1
https://f1000research.com/articles/5-2432
https://f1000research.com/articles/6-171/v1
21 Feb 17
{ "type": "Opinion Article", "title": "Evolution, immunity and the emergence of brain superautoantigens", "authors": [ "Serge Nataf" ], "abstract": "While some autoimmune disorders remain extremely rare, others largely predominate the epidemiology of human autoimmunity. Notably, these include psoriasis, diabetes, vitiligo, thyroiditis, rheumatoid arthritis and multiple sclerosis. Thus, despite the quasi-infinite number of \"self\" antigens that could theoretically trigger autoimmune responses, only a limited set of antigens, referred to here as superautoantigens, induce pathogenic adaptive responses. Several lines of evidence reviewed in this paper indicate that, irrespective of the targeted organ (e.g. thyroid, pancreas, joints, brain or skin), a significant proportion of superautoantigens are highly expressed in the synaptic compartment of the central nervous system (CNS). Such an observation applies notably to GAD65, AchR, ribonucleoproteins, heat shock proteins, collagen IV, laminin, tyrosine hydroxylase and the acetylcholinesterase domain of thyroglobulin. It is also argued that cognitive alterations have been described in a number of autoimmune disorders, including psoriasis, rheumatoid arthritis, lupus, Crohn's disease and autoimmune thyroiditis. Finally, the present paper points out that a great majority of the \"incidental\" autoimmune conditions notably triggered by neoplasms, vaccinations or microbial infections target the synaptic or myelin compartments. On this basis, the concept of an immunological homunculus, proposed by Irun Cohen more than 25 years ago, is extended here in a model where physiological autoimmunity against brain superautoantigens confers both: i) a crucial evolutionary-determined advantage via cognition-promoting autoimmunity; and ii) a major evolutionary-determined vulnerability, leading to the emergence of autoimmune disorders in Homo sapiens.
Moreover, in this theoretical framework, the so-called co-development/co-evolution model, both the development (at the scale of an individual) and evolution (at the scale of species) of the antibody and T-cell repertoires are coupled to those of the neural repertoires (i.e. the distinct neuronal populations and synaptic circuits supporting cognitive and sensorimotor functions). Clinical implications and future experimental insights are also presented and discussed.", "keywords": [ "autoantigens", "immune repertoire", "natural autoantibodies", "immunological homunculus", "synapse", "paraneoplastic syndrome", "autoimmunity", "central nervous system" ], "content": "Introduction\n\nThe role of auto-immune mechanisms in a large array of diseases continues to be extensively explored, and the identification of new target autoantigens is still an active field of research. However, despite the quasi-infinite number of potential target autoantigens that human cells bear, the majority of our internal antigenic library somehow remains off-target. Indeed, while many orphan autoimmune diseases have been described, the landscape of human autoimmunity is dominated by a limited number of disorders that include psoriasis, diabetes, vitiligo, thyroiditis, rheumatoid arthritis and multiple sclerosis. Moreover, besides \"idiopathic\" autoimmunity, for which no causative event can be conclusively identified, it is worth noting that \"incidental\" autoimmunity, triggered by neoplasms (paraneoplastic syndromes), vaccination (autoimmune/autoinflammatory syndrome induced by adjuvants [ASIA])1 or microbial infections (post-streptococcal glomerulonephritis and Guillain-Barre syndrome secondary to Campylobacter jejuni infection), does not affect all tissues and organs with an evenly distributed incidence. The great majority of such incidental autoimmune disorders express clinically as neurological pathologies and mainly target myelin or neuronal autoantigens.
Overall, these observations indicate that, independently from the MHC haplotype of an individual, autoantigens are not equal with regard to their potential for autoimmunity. There are what could be called superautoantigens, and in particular neural superautoantigens, toward which TCR and antibody repertoires tend to be skewed in humans.\n\nIt is proposed here that the existence of such superautoantigens is shaped by physiological events that are associated with human brain development and functions. The theory of the immunological homunculus, proposed more than 25 years ago by Irun Cohen2,3, constitutes an ideal framework to explain the emergence of superautoantigens during evolution. The present paper firstly provides a brief description of the somatosensory homunculus, i.e. the brain cortical area that, by analogy-based reasoning, inspired the concept of the immunological homunculus. Secondly, the immune and nervous systems are paralleled with regard to: i) the importance of self-generated inputs in the development of both somatosensory and immunological homunculi; and ii) the mechanisms driving a distorted representation of our body in both homunculi.\n\nThe somatosensory homunculus (Figure 1) essentially relates to the sense of touch and neural connections that are established between i) innervated skin territories where peripheral receptors for touch sensory inputs are located and ii) specific subareas of the brain cortex where neurons that integrate touch sensory input are located. The higher the density of sensory receptors in a given skin territory, the larger the surface covered by the corresponding cortical subarea4,5. As a consequence, depending on their respective densities in sensory receptors, two skin territories covering quantitatively similar surfaces may be connected to cortical areas covering greatly different surfaces. 
In this anatomical and functional segmentation, so-called somatotopy, the topographical heterogeneity of skin territories with regard to the density of sensory receptors is responsible for a distortion of our body representation in the sensory cortex. For example, skin terminal nerves located in the thumb are connected to a much larger brain cortical area than the terminal nerves innervating the whole trunk skin (Figure 1). From a functional point of view, this organization makes sense, since skin sensitivity needs to be highly efficient in anatomical territories requiring a finely tuned motor control, such as thumb, index, lips or tongue. Indeed, the acquisition of motor skills relies on bidirectional sensorimotor connections that allow motor and sensory activities to mutually fuel and integrate. The perception of our own motor activity, a process called sensory reafference (or sensory feedback), greatly participates in sculpting and refining motor programs6,7. Accordingly, in non-human primates, sensory loss in infancy profoundly alters the functional organization of the motor cortex8. Conversely, specific motor programs that are, in part, evolutionary-determined, instruct the use-dependent development of specific subareas of the sensory cortex. Principally, this was shown in experiments where sensory reafferences driven by early primitive motor activity were found to model the sensory cortex of rodents9. Finally, such feedback/feedforward processes between sensory and motor neuronal networks also operate in conditions of post-developmental motor learning10–12.\n\nFigure 1. The higher the density of sensory receptors in a given skin territory, the larger the surface covered by the corresponding cortical subarea that integrates inputs from this skin territory.
In this anatomical and functional segmentation, so-called somatotopy, the topographical heterogeneity of skin territories with regard to the density of sensory receptors is responsible for a distortion of our body representation in the sensory cortex. Thus, skin terminal nerves located in the thumb, lips or tongue are connected to a much larger brain cortical area than the terminal nerves innervating the whole trunk skin. The figure was obtained from Human Anatomy and Physiology, Chapter 14.2: Central Processing. OpenStax, Anatomy & Physiology. OpenStax CNX. Jul 30, 2014 (http://cnx.org/contents/FPtK1zmh@6.27:KcreJ7oj@5/Central-Processing).\n\nFor neuroscientists, the term \"developmental plasticity\" mainly refers to the generation of nascent neuronal networks, which during brain development recruit additional neurons and acquire a higher order of intra- and/or inter-network connectivity13,14. In this specific field of research, the visual cortex has offered a unique experimental paradigm to analyze the impact of sensory inputs on the development of sensory neuronal networks. Thus, in cats, rodents and non-human primates, deprivation of visual inputs during early life stages hampers the formation of a fully functional neuronal circuitry in the visual cortex15–17. In addition, recent studies performed in congenitally blind vs sighted humans demonstrated that both the functionality and connectivity of the visual cortex are, in part, shaped by visual experience18.
Finally, such an experience-dependent development of sensory neuronal networks was also demonstrated in the somatosensory cortex: trimming the whiskers of newborn rodents induces a partial deprivation of touch sensory inputs that is accompanied by profound developmental alterations of the somatosensory cortex19–21.\n\nImportantly, it was demonstrated that sensory experience impacts on developmental plasticity during specific windows of time, so-called critical periods22,23, which vary depending on the sensory input considered. Moreover, besides the existence of critical periods defined by specific time frames of brain development, it is acknowledged that experience-dependent plasticity actually persists in the adult sensory cortices, although to a lower magnitude24,25. The most illustrative example is given by the cortical reorganization of somatosensory neurons that follows hand amputation in adults26,27.\n\nRecent findings show that in addition to exogenous inputs, self-generated inputs shape the somatosensory homunculus. In particular, as mentioned earlier, our own motor actions, conscious or unconscious, are a constant source of self-generated sensory inputs that reinforce the sensory neural circuitry. Thus, twitches, especially frequent in newborns during sleep, trigger robust sensory reafference that shape the sensory cortex28. Moreover, twitches are not randomly generated and have been proposed to provide a sensorimotor experience that helps build motor synergies for goal-directed wake movements, such as walking29,30. Lastly, that human fetuses perform synchronous and coordinated hand/mouth movements despite general motor immaturity31 is thought to reflect an evolutionary-determined imprinting of such a motor behavior31. In this regard, one may consider that nascent motor programs, which will ultimately support the achievement of species-specific motor behaviors (e.g. 
hand grasping, complex language-related oral motor skills and walking), provide a wealth of self-generated sensory inputs that shape a distorted somatosensory homunculus. Overall, the distorted perception of our own body relies, in a complementary manner, on both experience-dependent and experience-independent (i.e. innate) mechanisms that, in turn, are indispensable to the effective development and refinement of major human-specific motor programs.\n\nThe term \"immunological homunculus\" and the associated notion of physiological autoimmunity refer to the development of adaptive immune responses directed against self-generated inputs, i.e. \"self\" antigens2,3,32. Strikingly, human newborns harbor a non-maternally derived IgM repertoire that is directed toward autoantigens33–35. Given the sterile fetal environment, natural autoantibodies in newborns cannot result from mechanisms of cross-reactivity or molecular mimicry between \"non-self\" microbial antigens and \"self\" antigens. Interestingly, such a self-directed antibody repertoire has been proposed to form what could be called a \"stem repertoire\" from which networks of reactive and cross-reactive antibodies are progressively generated35–38. In the same manner, the frequency of T cell receptors (TCRs) recognizing autoantigens is much higher than predicted by the clonal selection theory. Indeed, the negative selection of auto-reactive T-cells via AutoImmune REgulator gene (AIRE)-dependent epithelial expression of autoantigens is far from constituting a stringent process. An abundance of molecular and cellular interactions that do not relate to clonal deletion prevent physiologically-generated auto-reactive T-cells from exerting pathogenic effects39–41. Moreover, in-depth analysis of public TCRs (i.e. TCRs that are shared by a large population of individuals in a given species) has shown that \"self\" peptides are frequently recognized by such public TCRs and could even be their main targets42,43.
Thus, there is now compelling evidence that, as initially proposed by Irun Cohen, physiological autoimmunity does not only reflect incidental errors of the selection/tolerance immune machinery, but fulfills major functions under normal or pathological conditions. Notably, these include two major functions: i) anti-tumoral immune responses targeted toward developmentally-expressed autoantigens, which are re-expressed during the tumoral process33,44; and ii) support to cognition via the finely-tuned intra-central nervous system (CNS) activation of T-cells directed against brain antigens45–48. In addition, physiological autoimmunity against a specific set of \"self\" antigens may also prevent pathological autoimmunity against a distinct set of autoantigens. This was recently demonstrated in patients bearing AIRE mutations and exhibiting at the same time immune self-reactivity, responsible for pathological autoimmunity, and immune self-reactivity, protecting from pathological autoimmunity49. Finally, another unexpected advantage conferred by physiological autoimmunity is to provide extended immune repertoires directed against \"non-self\" antigens. In point of fact, TCRs or antibodies directed against \"self\" antigens cross-react with a large range of \"non-self\" antigens, and physiological autoimmunity is essential for successfully tackling microbial infections. In particular, T-cell clones endowed with high reactivity against \"self\" antigens are major components of the adaptive immune response against infectious agents50–52. 
Thus, overall, the assumption that acquisition of an immunological homunculus represents a major educational step of immune system development is now largely confirmed.\n\nAn important conclusion that needs to be drawn from the concept of immunological homunculus is that autoimmunity is by essence a physiological process that is required for the harmonious maintenance of our tissues and the fine adaptation of the human species to its environment. Accordingly, physiological autoimmune responses against superautoantigens should provide an evolutionary advantage to the human species. Indeed, in providing a distorted representation of our body, the somatosensory homunculus skews the focus of our perceptive competencies toward skin territories that are essential to the execution of major motor functions in humans, for example walking upright, hand grasping and speech. Similarly, one may propose that the immunological homunculus skews the focus of “self”-directed adaptive immunity toward a specific set of autoantigens that, in humans, represent functionally important targets of physiological autoimmunity. Logically, in humans and other species endowed with a developed neo-cortex (the brain area supporting cognitive functions, arising from the most recent evolutionary changes), brain-derived autoantigens should represent a major share of such a set of superautoantigens. Supporting this view, physiological mechanisms of cognition-promoting autoimmunity have now been extensively demonstrated in rodents. In particular, myelin-specific T-cell clones were shown to robustly stimulate neurogenesis in vivo via the synthesis of neurotrophic factors that are captured in situ by neural progenitors53,54. Conversely, T-cell deficient mice harbor profound cognitive alterations that can be reversed by adoptive transfer of CD4+ T-cells55,56.
Interestingly, the cognition-promoting activity of T-cells was shown to specifically rely on a sub-population of memory T-cells recognizing brain-derived antigens and exhibiting homing properties toward meninges, choroid plexus and cervical lymph nodes (i.e. the regional lymph nodes draining cerebrospinal fluid)48,57,58.\n\nMost importantly, the notion of physiological autoimmunity suggests that pathological autoimmunity may not result from the de novo emergence of pathogenic autoreactive clones, but from pre-existing autoreactive clones that have acquired abnormal functional properties2,44,59. In this regard, pathogenic autoimmunity and physiological autoimmunity should be expected to share the same preferential targets, i.e. superautoantigens. Outlined below are several lines of evidence indicating that a major source of superautoantigens, targeted by both physiological and pathological autoimmunity, is provided by the CNS.\n\n\nThe CNS is a major source of superautoantigens\n\nThere are two categories of properties that confer a high immunogenic potential to myelin-derived and synapse-derived antigens:\n\n1) Abundance and high renewal rate: While abundance (amount of antigen) is an important factor that determines our immune system's ability to see and react against an antigen, the renewal rate of an antigen is likely to be at least as important. In most cases, renewal implies degradation by the phagocytic system, which is an indispensable step to antigen presentation by mononuclear phagocytes. Conversely, a highly abundant antigen that is poorly renewed may be predicted to be poorly immunogenic. In this view, two categories of brain antigens fulfill the criteria of being both abundant and highly renewed: synapse-derived and myelin-derived antigens. Indeed, synapses are highly dynamic structures that are constantly subjected to a remodeling process supporting the generation and maintenance of operative and adapted neuronal networks.
Neurons represent roughly half of all CNS cells and, depending on the brain area considered, each human neuron bears ~100,000 synaptic connections (as inferred from electron microscopy analyses of cortical samples)60. In addition, the synaptic circuitry in humans is highly plastic until at least the third decade, which translates into a high rate of both de novo formation and elimination of synapses61. Similarly, myelin (as assessed by measures of white matter volume) occupies nearly 25% of the total human brain volume62 and was recently shown to be renewed at a very high rate in the steady state63.\n\n2) Inflammation-associated development and function: Microglial cells, brain-resident macrophages, play key regulatory functions during brain development and are currently considered the main \"architects\" of nascent neuronal circuits64. Such a unique function rests on the ability of microglia to exert finely tuned phagocytic activity and to synthesize a large range of cytokines, which not only control neuronal cell fate, but also the formation, selection, maintenance and remodeling of interneuronal synapses. Thus, during brain development, microglia successively engage distinct activation programs that are in close synchrony with the stepwise establishment and maturation of neuronal circuits65. Moreover, in the mature brain, microglia constantly operate a complement-dependent phagocytosis of poorly-active synapses, thus preventing inappropriate connectivity. Lastly, several lines of evidence indicate that specific inflammatory cytokines regulate synaptic activity and function under physiological conditions66. TNF-alpha secreted by glial cells preserves the efficacy of excitatory synapses67, and IFN-γ is a key molecular support for excitatory synapses68 and neuronal circuitry involved in social behavior69.
With regard to the myelin compartment, it is also worth noting that myelination of axons is a highly dynamic process that is coupled to synaptic activity70–74, and thus indirectly linked to physiological inflammation. In addition, microglia exert direct effects on the development and myelinating functions of oligodendrocytes (the myelin-forming cells) via the synthesis of H-ferritin75 and M2-type cytokines76. Overall, the formation, maintenance and plasticity of two major brain molecular compartments, namely myelin and interneuronal synapses, involve a set of exquisitely-controlled immune mechanisms. In this regard, physiological inflammation appears to be required to ensure proper CNS functions66. In addition, while inflammatory mechanisms are now recognized as shaping brain development in rodents and humans, a major distinctive feature of the human brain is the duration of its development over a period of time that extends from early embryonic stages until adolescence77 and beyond78. Indeed, the acquisition of a fully-operative neuronal circuitry supporting primary human-specific brain skills (regarding emotional, cognitive and sensory-motor functions) is a decade-long process that is intimately associated with myelination78. Along this line, the proliferation of neuronal progenitors and their differentiation into cortical neurons, usually designated by the term \"corticogenesis\", was shown to be much slower and more complex in humans compared to rodents79. Thus, besides development and maturation, adult synaptic plasticity, allowing the constant remodeling of synaptic connections in order to maintain, extend and/or reorganize neuronal circuits, is likely responsible for a massive exposure of the immune system to synapse- and myelin-related antigens throughout life.
The recent demonstration of a rich lymphatic vasculature, which drains brain antigens to cervical lymph nodes80,81, further supports this view.

While CNS autoimmune diseases are generally relatively infrequent, many autoantigens identified in non-CNS autoimmune pathologies are enriched in the synaptic fraction of the developing and/or mature brain. Below is a non-exhaustive list of such autoantigens:

1) GAD65: Glutamate decarboxylase 65 (GAD2), considered the main autoantigen in type 1 diabetes82, is a synaptic enzyme that catalyzes γ-aminobutyric acid (GABA) synthesis from glutamate. Its synaptic expression in inhibitory terminals (i.e. axon terminals of neurons transmitting inhibitory inputs) is indispensable to the effective functioning of the GABAergic system (all neurons for which the primary neurotransmitter is GABA)83.

2) AchR: Myasthenia gravis is mediated by autoantibodies targeting the AchR (acetylcholine receptor) on the post-synaptic membrane of the neuromuscular junction84. However, acetylcholine is also a key neurotransmitter in the CNS, and AchR-mediated synaptic transmission is essential to crucial cognitive functions, such as memory85.

3) HSPA5 and other heat shock proteins: The human stress protein HSPA5 (also known as BiP or GRP78), belonging to the heat shock protein family A (HSP70), is one of the autoantigens involved in the pathophysiology of rheumatoid arthritis86,87. It is also a major component of the synaptic glutamate receptor complex88.
Similarly, the heat shock protein HSP60, a predominant target of physiological autoimmunity89, is abundantly expressed in axon terminals90, and mutations in the HSP60 gene result in a human disorder affecting motor neurons (autosomal recessive spastic paraplegia 13)91.

4) Small nuclear ribonucleoproteins: Small nuclear ribonucleoproteins (snRNPs) are core components of the spliceosome machinery and the main autoantigens toward which anti-ribonucleoprotein antibodies are directed in systemic lupus erythematosus (SLE) and Sjögren's syndrome92. In neurons, specific families of mRNAs are exported toward axon terminals and synapses in structures called RNA granules or ribonucleoprotein particles. Such structures are essential to the proper trafficking of specific mRNA species at a distance from the soma and to their local translation in the synaptic compartment93. Mutations or deletions in genes coding for RNA-binding proteins (RBPs) are involved in numerous inherited CNS disorders94, including fragile X mental retardation95, spinal muscular atrophy and spinocerebellar ataxia96, as well as familial forms of the following neurological conditions: autism97,98; amyotrophic lateral sclerosis99; and fronto-temporal lobar degeneration100. The whole spectrum of RNAs and proteins complexed with such neuronally-expressed RBPs is currently being extensively explored by systems-level approaches94,101 and includes several snRNPs targeted by autoantibodies in SLE or Sjögren's syndrome. These snRNPs include the La autoantigen Ssb, which binds FMRP (fragile X mental retardation protein)101; the U1 snRNP, which interacts with SMN (survival of motor neurons)102; and the small non-coding Y RNA, a component of the Ro60 ribonucleoprotein particle, which binds the neuronal ELAV-like protein103. Finally, ribosomal proteins are themselves targeted by both pathological autoimmunity in SLE patients104 and physiological autoimmunity in healthy individuals105.
Again, synapses are specifically enriched in ribosomes, which allow crucial synaptic proteins to be synthesized in a timely manner106,107.

5) Basement membrane proteins: Autoimmunity against collagen IV and laminins, two major components of basement membranes, is responsible for the development of anti-glomerular basement membrane glomerulonephritis, Goodpasture's disease108 and several autoimmune skin disorders109. Recent evidence indicates that synapses are embedded in an extracellular matrix microenvironment in which collagen IV and laminins are not only abundant, but also critically involved in synapse morphogenesis and synaptic remodeling110–113.

6) Tyrosine hydroxylase: Vitiligo is a common autoimmune disease characterized by an immune-mediated destruction of melanocytes leading to skin depigmentation114. Interestingly, the biochemical pathways for the synthesis of melanin and dopamine present major similarities115,116, and the intra-CNS grafting of melanocytes was recently proposed as a therapeutic approach for Parkinson's disease, a neurodegenerative disorder affecting dopaminergic neurons115. In this regard, tyrosine hydroxylase, a major enzyme of the dopamine synthesis pathway in neurons, is also essential to melanin synthesis and is targeted by autoantibodies in vitiligo117,118. Also, the melanin-concentrating hormone receptor 1 (MCHR1), another identified autoantigen in vitiligo119, is expressed by a subpopulation of CNS neurons, and its ligand, MCH, is a neuropeptide regulating energy balance, sleep and mood120,121.

7) Thyroglobulin and acetylcholinesterase: Thyroglobulin (TG) and thyroid peroxidase (TPO) are the two main thyroid autoantigens targeted in Hashimoto's disease122.
While anti-TPO antibodies have been shown to bind a subpopulation of astrocytes123, it is worth noting that anti-TG autoantibodies recognize an acetylcholinesterase domain that is essential to both the immunogenicity of TG124–126 and its function127,128. Thus, cross-reactivity between TG and acetylcholinesterase, a target autoantigen in myasthenia gravis, was proposed as a mechanism of ocular muscle dysfunction in Hashimoto's disease124. As mentioned earlier, the cholinergic system, essentially supported by functional interactions between acetylcholine, acetylcholinesterase and AchR, is crucial to the execution of major cognitive tasks, such as learning and memory.

Subclinical cognitive alterations, as well as psychiatric symptoms, are observed in a large array of non-CNS autoimmune diseases. Interestingly, specific antibody signatures have been shown to be associated with such neurological or psychiatric manifestations, which argues against a non-specific inflammatory process that would essentially involve innate immune mechanisms. Below is a list of the main non-CNS autoimmune disorders that may be associated with cognitive and/or psychiatric symptoms:

1) SLE and Sjögren's syndrome: Besides purely neuropsychiatric forms of SLE, subtle to major cognitive alterations have been demonstrated in up to 20% of SLE patients129. Cognitive clinical signs in SLE are accompanied by high titers of anti-N-methyl-D-aspartate receptor (NMDAR; also named NR2 glutamate receptor) and/or anti-ribosomal antibodies130,131. Similarly, cognitive dysfunctions, along with brain structural alterations detectable by magnetic resonance imaging (MRI), were reported in up to 65% of patients suffering from primary Sjögren's syndrome132–134.
As in SLE patients, a correlation was observed between titers of anti-NR2 antibodies (in the serum or cerebrospinal fluid) and clinical scores of cognitive dysfunction130,135.

2) Hashimoto's thyroiditis: Hashimoto's encephalopathy, also known as steroid-responsive encephalopathy associated with autoimmune thyroiditis (SREAT), is a rare condition in which anti-TPO antibodies are involved136. However, apart from SREAT, autoimmune thyroiditis (AIT) patients who are in a euthyroid state suffer from mild to severe cognitive alterations correlating with serum levels of anti-thyroid antibodies, in particular anti-TPO and anti-TG antibodies137,138.

3) Rheumatoid arthritis: While the rate of motor or sensory neurological symptoms is relatively low in rheumatoid arthritis (RA) patients, the incidence of mood disorders is estimated to reach up to 70%139. Moreover, in independent studies, mild cognitive deficits were demonstrated in more than 70% of RA patients and were associated with MRI or biological signs of altered CNS tissue integrity140,141.

4) Psoriasis: Psoriasis is a chronic skin disorder that may also target joints, and for which several candidate autoantigens have been identified142, including, surprisingly, the melanocytic autoantigen ADAMTSL5143. The impact of psoriasis plaques on self-esteem and mood is well described, and the role of psychological stress as a trigger of psoriasis recurrence is also robustly documented144. However, measurable signs of subtle cognitive impairment are also observed in psoriasis patients145,146, even during the early phases of the disease145.

5) Crohn's disease: Inflammatory bowel diseases, including Crohn's disease (CD), are associated with distinct profiles of circulating autoantibodies directed notably against glycans, GP2 and GM-CSF147. While glycans have been shown to be specifically enriched in synapses148,149, evidence for the intra-CNS expression of GP2 and GM-CSF in the developing or mature brain is still lacking.
However, two recent MRI studies performed in CD patients demonstrated marked structural brain alterations150,151 that correlated with cognitive dysfunction150.

Besides multiple sclerosis, during which one or several myelin autoantigens are targeted152, a number of rare CNS autoimmune disorders, notably including paraneoplastic syndromes (PNS), have been characterized in the past decade. Interestingly, not only does a great majority of PNS have a purely neurological expression, but autoantigens in PNS were found to derive essentially from the synaptic compartment153. Moreover, other CNS autoimmune disorders not associated with neoplasms also target synaptic proteins, and the term "autoimmune synaptopathies" has been proposed to designate such pathologies153. The following is a non-exhaustive list of the synaptic autoantigens identified so far: GAD65 (glutamic acid decarboxylase)154; NMDAR (N-methyl-D-aspartate receptor)155; AMPAR (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor)156; Caspr2 (contactin-associated protein-like 2)157; LGI-1 (leucine-rich glioma-inactivated protein 1)158; GABA-B receptor (γ-aminobutyric acid receptor B)159; GABA-A receptor (γ-aminobutyric acid receptor A)160; mGluR5 (metabotropic glutamate receptor 5)161; GlyR (glycine receptor)162; NRXN3 (neurexin-3α)163; AMPH (amphiphysin)164.

On the "co-evolution" of the immune and nervous systems

As shown above, a high number of antigens targeted in CNS or non-CNS autoimmune diseases belong either to the myelin or to the synaptic compartment. Even though such target autoantigens are evidently also expressed in non-CNS locations, the important questions arising from this observation are why and how the human immune repertoires are skewed toward brain superautoantigens. As discussed earlier, both anti-tumoral immunity and maintenance of tissue integrity are essential functions that can be assigned to physiological autoimmunity59,165.
Nevertheless, these functions do not appear to afford an evolutionary advantage to humans over other species endowed with an adaptive immune system. One may consider that the human species is essentially characterized by a particular ability to operate complex cognitive tasks and to perform exquisitely precise motor programs. On this basis, it can be hypothesized that CNS-derived autoantigens are major targets of physiological immunity in humans. Moreover, at the scale of evolution, physiological autoimmunity against CNS autoantigens may reflect not only the development of fine cognitive and motor functions in a given species, but also the extent to which support to these functions is afforded by adaptive immunity in that species.

Interactions between humans and their gut microbiota are an illustrative example of what could be termed an immune-mediated symbiotic relationship. On the one hand, gut microbiota constantly stimulate the adaptive immune system and shape T-cell and antibody repertoires, thus expanding, through cross-reactivity, the diversity of adaptive immune responses. In turn, the adaptive immune system tightly controls the composition of the gut microbiota and favors the development and maintenance of a long-lasting commensal flora, which is beneficial to the host. The immune response against the gut microbiota fluctuates over time, and serum antibody titers against microbiota-derived antigens are subject to variations determined by epitope-specific clonal expansion and dominance166. More generally, the commensal flora residing in the skin, gut, lungs and urogenital tract permanently stimulates, shapes and modulates our whole immune repertoire167–169. In this state of "immunity by equilibrium"169, populations of Tregs, whether thymus-derived or peripherally generated, play a crucial role in the neonatal development of tissue-specific tolerance toward symbiotic flora components170–174.
While symbiosis corresponds to a process of co-development and mutual support between species, co-evolution is defined by mutual selective pressure exerted by two species to the benefit of both. Interestingly, the term phylosymbiosis was recently proposed to depict the demonstrated parallel between the phylogeny of host-associated microbial communities and the phylogeny of the species hosting these distinct communities175. Of note, phylosymbiosis appears to be essentially mediated by the immune system of the host175. In this regard, host/microbiota symbiotic interactions could be considered as resulting from a process of immune-mediated co-evolution of organisms.

By analogy with the notions of phylosymbiosis and co-evolution, it is proposed below that the immune and nervous systems not only co-develop at the scale of an individual, but have also co-developed during evolution. This idea essentially stems from the crucial demonstration that human newborns harbor an IgM antibody repertoire directed against autoantigens34,35,38. Obviously, and as previously mentioned, since the amnion forms a sterile environment, cross-reactivity against microbiota-derived antigens cannot explain such an observation. Interestingly, while the full array of autoantigens targeted by IgM in newborns remains to be identified, many of the currently known targets are components of the myelin or synaptic compartments.
These include: GAD65, MOG and acetylcholinesterase (cf. supra); HSP60, a mitochondrial chaperonin whose genetically-determined alterations lead to a familial form of motor neuron disease176 (cf. supra); myosin, a protein whose brain isoform is abundantly expressed in synapses177,178; galectin-1 and -3, two neuronally expressed molecules that bind the synaptic RNA-binding protein SMN (survival of motor neurons)179,180; and β2-microglobulin, a key immune molecule also required for proper CNS development and plasticity181.

These observations suggest that the developing CNS provides a highly diverse array of autoantigens that may stimulate and somehow educate effector T and B cells during the prenatal period. The main advantage conferred to the host by such an educational process would be a prenatal expansion of memory lymphocytes, which, through cross-reactivity, would provide a larger immune coverage against infectious agents (including pathological microbiota).

Beyond development, it is also suggested that, throughout the lifetime of an individual, brain-derived autoantigens may constantly shape the repertoire of memory lymphocytes and provide tonic signals for the survival and self-renewal of naïve lymphocytes182–184. As shown above, synaptic remodeling and myelin renewal are two major hallmarks of physiological neural functions during the life span of an individual. In particular, the learning-mediated establishment of new neuronal circuits, together with their selection and maintenance or elimination, implies a constant adjustment of our neural repertoires (neural repertoires being defined here as all neuronal populations and synaptic circuits available for cognitive or sensorimotor tasks at a given time). Not only does the brain continue to develop and mature until early adulthood, but cognitive and sensori-motor functions in the mature CNS are inherently linked to the plasticity of synapses and myelin sheaths.
Overall, one may propose that, similarly to the microbiota, the CNS constantly fuels the immune system with antigens that shape and modulate our T- and B-cell repertoires. Conversely, the CNS-instructed diversification of our immune repertoires may ensure that essential synaptic circuits are reinforced by cognition-promoting autoreactive lymphocytes. Thus, at the scale of an individual, the acquisition and maintenance of the immune and neural repertoires may be somehow coupled via a process of mutual development and support.

What about evolution? It is proposed here that such a coupling of immune and neural repertoires may have been a driving force of evolution. In this theoretical model, the nervous and immune systems would have undergone a co-evolution-like process, consisting of a mutual selective pressure exerted by both systems to the benefit of the host. On the one hand, through cognition-promoting immunity, the evolutionarily determined emergence and diversification of adaptive immunity would have provided support to new neural repertoires. At the same time, the evolutionarily determined diversification of neural repertoires would have promoted new immune repertoires (and a subsequently larger ability to tackle infections) via exposure to a larger array of CNS-derived antigens. Accordingly, it may be predicted that, among species endowed with an adaptive immune system: i) the diversity of the germline-encoded and realized (mature) immune repertoires parallels the diversity exhibited by the neural repertoire; ii) cognition-promoting autoimmunity is quantitatively and qualitatively scaled to the level of complexity that each species exhibits with regard to cognitive functions; and iii) genes involved in the diversification of both immune and neural repertoires have had a major evolutionary impact.
Importantly, this model would explain why the realized human TCR repertoire exceeds that of rodents by a factor of 10185.

Co-development and co-evolution processes may also link the immune system to non-CNS organs: The theoretical model discussed above may be considered neurocentric. Indeed, the concept of protective autoimmunity likely also applies to non-CNS organs, and the CNS is not the only source of superautoantigens. Thus, in addition to synaptic and myelin antigens, other families of autoantigens are: i) highly renewed; ii) abundantly exposed to the immune system; iii) involved in crucial organ-specific functions; and iv) expressed in a context of physiological inflammation. Evolutionarily determined adaptive immune responses against such non-CNS superautoantigens may provide a large range of functional benefits that do not relate to the CNS. As shown below, some of these may be species-specific (for instance, providing trophic support to organs that are essential to the survival and adaptation of a given species), while others may be shared between species (for instance, fighting against cancer cells).

Species-restricted vs inter-species superautoantigens: The advantages conferred to the host by an adaptive immune response directed against a superautoantigen may persist across evolution. In particular, one could anticipate that a substantial share of public TCRs are directed against two categories of superautoantigens: species-restricted and inter-species superautoantigens.

1) Species-restricted superautoantigens: The choice of the term "species-restricted" refers to the notion that such antigens are not necessarily expressed in a species-specific manner. By contrast, the adaptive response mounted against these antigens is species-specific and confers a species-specific evolutionary advantage to the host.
Notably, this may be the case for a group of brain-derived antigens that could have emerged as superautoantigens in Homo sapiens.

2) Inter-species superautoantigens: These antigens are not only expressed across distinct (or all) species endowed with an adaptive immune response, but the T-cell response mounted against them confers an evolutionary advantage that is shared between such species.

Natural antibodies directed against CNS antigens may exert only an indirect effect on neural repertoires: Previous studies showed that CNS-directed antibodies can be detected in the blood of a large proportion of the healthy population186,187. However, antibodies do not cross the blood-brain barrier, or do so only poorly, and cognition-promoting autoimmunity was demonstrated to rely on T-cells rather than on autoantibodies54–56,188. Thus, while T-cells and neural repertoires may be mutually supportive, only a one-way functional connection may link the antibody and neural repertoires. On the one hand, exposure of brain antigens to the immune system would benefit the host via an expansion/diversification of both the antibody and TCR repertoires; on the other hand, only "self"-reactive T-cells (but not autoantibodies) would provide support to neural repertoires. Another explanation, not exclusive of the former, would be that natural autoantibodies directed against CNS antigens participate in the afferent phase of T-cell-mediated cognition-promoting immune responses.
The engulfment of CNS-derived antigens, opsonized by secreted antibodies or captured, in the case of B-cells, by membrane-bound immunoglobulins, may indeed result in antigen presentation to T-cells, notably in cervical lymph nodes.

Natural autoantibodies and "self"-reactive T-cells may provide two separate arms of protective autoimmunity: Although this point remains to be experimentally explored, one may anticipate that autoantibodies and "self"-reactive T-cells target distinct (yet partially overlapping) groups of superautoantigens. The T-cell and antibody repertoires would thus provide distinct and complementary facets of protective autoimmunity. One may suggest that for some (or possibly many) natural autoantibodies, an essential function is to scavenge and buffer proteins that are renewed or exposed at a high rate in the blood. Supporting this view, spleen marginal zone B-cells, whose hosting tissue is directly connected to the bloodstream, are considered a major source of natural autoantibodies189,190.

Clinical implications

The assumption that immune and neural repertoires are mutually supportive during developmental and post-developmental periods has potentially important clinical implications. In particular, if brain development, from the prenatal period to early adulthood, has a major impact on the acquisition and maintenance of immune repertoires, an endogenous neural origin of pathological autoimmunity may be envisioned. The window of time and the immune context during which brain superautoantigens are initially exposed could be a major determinant of the diversity of the T-cell repertoire. In other words, a proper exposure to brain superautoantigens would determine the generation, maintenance and expansion of T-cells that not only recognize brain superautoantigens, but also harbor a phenotype and functional profile ideally suited to support neural repertoires (i.e.
ad hoc homing properties and ad hoc profiles of cytokines and neurotrophic factors). Similar principles may apply to B cells and natural autoantibodies, with the limitations discussed previously.

Several elements of the literature support the notion that neural and immune repertoires develop mutually in humans. Cognitive and behavioral alterations are observed in children suffering from several categories of genetically-determined immunodeficiencies. These include severe combined immunodeficiencies191 and DiGeorge syndrome, which combines thymic hypoplasia, cognitive deficits192 and psychiatric manifestations such as autism and schizophrenia193. Conversely, complex immune alterations have been reported in patients suffering from the two most frequent neuropsychiatric and neurodevelopmental human disorders: autism and schizophrenia. Schizophrenia is associated with a higher incidence of autoimmune disorders, including Graves' disease, psoriasis and celiac disease194. Moreover, a number of studies have demonstrated a large range of immune alterations in the blood of schizophrenic patients195,196. Also, the high incidence of autoimmune disorders in autistic patients and their siblings has suggested that autoimmune mechanisms may be involved in the pathophysiology of autism197. However, in accordance with the co-development/co-evolution model, another explanation could be that the neurodevelopmental alterations characterizing autism and schizophrenia are the cause rather than the consequence of profound alterations of the immune repertoires, which may lead to pathological autoimmunity in a subgroup of these patients.

Interestingly, genome-wide association studies also provide important support to the co-development/co-evolution model. Indeed, as discussed earlier, this model predicts that genes involved in the diversification of both immune and neural repertoires have had a major evolutionary impact.
Accordingly, there is now compelling evidence that genetic susceptibility to autism and schizophrenia is conferred, in part, by immune-related genes, including HLA genes198–201.

Besides periods of co-development and co-maturation, it is proposed that neural and immune repertoires mutually fuel each other during the whole life of an individual. If CNS-derived antigens, similarly to microbiota-derived antigens, constantly shape immune repertoires, then mental state, learning tasks, cognitive activities and/or the execution of sensori-motor programs could exert major and specific effects on immune repertoires. Stress-induced alterations of the immune response are now extensively documented and are known to rely on two main pathways: the hypothalamus-pituitary-adrenal axis and the autonomic nervous system (ANS)202,203. Recently, the brain reward system was also shown to deeply impact immune responses via signaling through the ANS204. CNS-derived superautoantigens and their instructing roles on immune repertoires could thus provide another mechanism of brain-induced immunomodulation. More generally, demonstrating that neural and immune repertoires are functionally coupled could pave the way to innovative therapeutic strategies based on the control of adaptive immune responses by cognitive and/or sensorimotor tasks.

Experimental insights

The assessment of immune repertoires by systems biology approaches34,36,43,205 should make it possible to determine whether or not cognitive activities, sensori-motor tasks and mental state directly impact immune repertoires. In particular, immune repertoires should be explored in experimental settings known to induce increased synaptic plasticity in rodents (notably via enrichment of the environment).
Similarly, murine genetic models of schizophrenia or autism should be investigated with regard to alterations of immune repertoires (this should be performed only if immune alterations are not expected to occur as a direct consequence of the genetic manipulation itself). The same strategy could also be applied to murine models of mood disorders. Of note, recently-developed technologies such as optogenetics and chemogenetics are currently being harnessed to unravel new links between the brain and the immune system206. Such innovative approaches should make it possible to precisely determine the impact exerted by the activation of specific synaptic circuits on: i) the peripheral T- and B-cell repertoires; and ii) the exposure of specific CNS antigens, notably via their drainage to cervical lymph nodes. Finally, a global analysis of immune repertoires should be performed in human patients suffering from autism, schizophrenia or mood disorders. Notably, one may anticipate that qualitative or quantitative alterations of public TCRs occur under these clinical conditions.
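To illustrate the kind of quantitative readout on which such repertoire studies rely, the sketch below computes two commonly used summary statistics of a clonotype count table: Shannon entropy and normalized clonality. This is a minimal sketch under stated assumptions: the function name, the toy "TCR" labels and the counts are hypothetical, and real analyses would operate on sequencing-derived clonotype tables rather than hand-written dictionaries.

```python
import math
from collections import Counter

def repertoire_diversity(clonotype_counts):
    """Return (Shannon entropy in bits, normalized clonality) for a
    mapping of clonotype -> read count. Clonality = 1 - H/Hmax, where
    Hmax = log2(number of distinct clonotypes observed)."""
    total = sum(clonotype_counts.values())
    freqs = [c / total for c in clonotype_counts.values() if c > 0]
    h = -sum(p * math.log2(p) for p in freqs)
    n = len(freqs)
    clonality = 1.0 - h / math.log2(n) if n > 1 else 1.0
    return h, clonality

# A perfectly even repertoire of 8 clonotypes: maximal entropy, zero clonality.
even = Counter({f"TCR{i}": 10 for i in range(8)})
h_even, c_even = repertoire_diversity(even)  # h_even = 3.0 bits, c_even = 0.0

# A repertoire dominated by one expanded clone: low entropy, high clonality.
skewed = Counter({f"TCR{i}": 10 for i in range(1, 8)})
skewed["TCR0"] = 930
h_skewed, c_skewed = repertoire_diversity(skewed)
```

A shift from the "even" toward the "skewed" profile after a behavioral manipulation would be one concrete way of detecting the qualitative or quantitative repertoire alterations discussed above.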
Haggard P, Taylor-Clarke M, Kennett S: Tactile perception, cortical representation and the bodily self. Curr Biol. 2003; 13(5): R170–R173.

Yang TT, Gallen CC, Schwartz BJ, et al.: Noninvasive somatosensory homunculus mapping in humans by using a large-array biomagnetometer. Proc Natl Acad Sci U S A. 1993; 90(7): 3098–3102.

Huber D, Gutnisky DA, Peron S, et al.: Multiple dynamic representations in the motor cortex during sensorimotor learning. Nature. 2012; 484(7395): 473–478.

Barrière G, Simmers J, Combes D: Multiple mechanisms for integrating proprioceptive inputs that converge on the same motor pattern-generating network. J Neurosci. 2008; 28(35): 8810–8820.

Qi HX, Jain N, Collins CE, et al.: Functional organization of motor cortex of adult macaque monkeys is altered by sensory loss in infancy. Proc Natl Acad Sci U S A. 2010; 107(7): 3192–3197.

Khazipov R, Sirota A, Leinekugel X, et al.: Early motor activity drives spindle bursts in the developing somatosensory cortex. Nature. 2004; 432(7018): 758–761.

Arce-McShane FI, Ross CF, Takahashi K, et al.: Primary motor and sensory cortical areas communicate via spatiotemporally coordinated networks at multiple frequencies. Proc Natl Acad Sci U S A. 2016; 113(18): 5083–5088.

Ostry DJ, Darainy M, Mattar AA, et al.: Somatosensory plasticity and motor learning. J Neurosci. 2010; 30(15): 5384–5393.

Wong JD, Wilson ET, Gribble PL: Spatially selective enhancement of proprioceptive acuity following motor learning. J Neurophysiol. 2011; 105(5): 2512–2521.

Nelson SB, Valakh V: Excitatory/Inhibitory Balance and Circuit Homeostasis in Autism Spectrum Disorders. Neuron. 2015; 87(4): 684–698.

Sporns O, Betzel RF: Modular Brain Networks. Annu Rev Psychol. 2016; 67: 613–640.

Des Rosiers MH, Sakurada O, Jehle J, et al.: Functional plasticity in the immature striate cortex of the monkey shown by the [14C]deoxyglucose method. Science. 1978; 200(4340): 447–449.

Tognini P, Napoli D, Tola J, et al.: Experience-dependent DNA methylation regulates plasticity in the developing visual cortex. Nat Neurosci. 2015; 18(7): 956–958.

Frégnac Y, Imbert M: Development of neuronal selectivity in primary visual cortex of cat. Physiol Rev. 1984; 64(1): 325–434.

Wang X, Peelen MV, Han Z, et al.: How Visual Is the Visual Cortex? Comparing Connectional and Functional Fingerprints between Congenitally Blind and Sighted Individuals. J Neurosci. 2015; 35(36): 12545–12559.

Miquelajauregui A, Kribakaran S, Mostany R, et al.: Layer 4 pyramidal neurons exhibit robust dendritic spine plasticity in vivo after input deprivation. J Neurosci. 2015; 35(18): 7287–7294.

Butko MT, Savas JN, Friedman B, et al.: In vivo quantitative proteomics of somatosensory cortical synapses shows which protein levels are modulated by sensory deprivation. Proc Natl Acad Sci U S A. 2013; 110(8): E726–E735.

Simons DJ, Land PW: Early experience of tactile stimulation influences organization of somatic sensory cortex. Nature. 1987; 326(6114): 694–697.

Hooks BM, Chen C: Critical periods in the visual system: changing views for a model of experience-dependent plasticity. Neuron. 2007; 56(2): 312–326.

Espinosa JS, Stryker MP: Development and plasticity of the primary visual cortex. Neuron. 2012; 75(2): 230–249.

Holtmaat A, Svoboda K: Experience-dependent structural synaptic plasticity in the mammalian brain. Nat Rev Neurosci. 2009; 10(9): 647–658.

Wandell BA, Smirnakis SM: Plasticity and stability of visual field maps in adult primary visual cortex. Nat Rev Neurosci. 2009; 10(12): 873–884.

Calford MB, Tweedale R: Immediate and chronic changes in responses of somatosensory cortex in adult flying-fox after digit amputation. Nature. 1988; 332(6163): 446–448.

Makin TR, Scholz J, Filippini N, et al.: Phantom pain is associated with preserved structure and function in the former hand area. Nat Commun. 2013; 4: 1570.

Tiriac A, Blumberg MS: Gating of reafference in the external cuneate nucleus during self-generated movements in wake but not sleep. eLife. 2016; 5: pii: e18749.

Blumberg MS, Marques HG, Iida F: Twitching in sensorimotor development from sleeping rats to robots. Curr Biol. 2013; 23(12): R532–R537.

Tiriac A, Del Rio-Bermudez C, Blumberg MS: Self-generated movements with "unexpected" sensory consequences. Curr Biol. 2014; 24(18): 2136–2141.

Desmurget M, Richard N, Harquel S, et al.: Neural representations of ethologically relevant hand/mouth synergies in the human precentral gyrus. Proc Natl Acad Sci U S A. 2014; 111(15): 5718–5722.

Cohen IR, Young DB: Autoimmunity, microbial immunity and the immunological homunculus. Immunol Today. 1991; 12(4): 105–110.

Madi A, Bransburg-Zabary S, Maayan-Metzger A, et al.: Tumor-associated and disease-associated autoantibody repertoires in healthy colostrum and maternal and newborn cord sera. J Immunol. 2015; 194(11): 5272–5281.

Merbl Y, Zucker-Toledano M, Quintana FJ, et al.: Newborn humans manifest autoantibodies to defined self molecules detected by antigen microarray informatics. J Clin Invest. 2007; 117(3): 712–718.

Madi A, Hecht I, Bransburg-Zabary S, et al.: Organization of the autoantibody repertoire in healthy newborns and adults revealed by system level informatics of antigen microarray data. Proc Natl Acad Sci U S A. 2009; 106(34): 14484–14489.

Madi A, Kenett DY, Bransburg-Zabary S, et al.: Network theory analysis of antibody-antigen reactivity data: the immune trees at birth and adulthood. PLoS One. 2011; 6(3): e17445.

Madi A, Kenett DY, Bransburg-Zabary S, et al.: Analyses of antigen dependency networks unveil immune system reorganization between birth and adulthood. Chaos. 2011; 21(1): 016109.

Madi A, Bransburg-Zabary S, Kenett DY, et al.: The natural autoantibody repertoire in newborns and adults: a current overview. Adv Exp Med Biol. 2012; 750: 198–212.

Bluestone JA, Bour-Jordan H, Cheng M, et al.: T cells in the control of organ-specific autoimmunity. J Clin Invest. 2015; 125(6): 2250–2260.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimpson LJ, Ansel KM: MicroRNA regulation of lymphocyte tolerance and autoimmunity. J Clin Invest. 2015; 125(6): 2242–2249. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarson A, Housley WJ, Hafler DA: Genetic basis of autoimmunity. J Clin Invest. 2015; 125(6): 2234–2241. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCovacu R, Philip H, Jaronen M, et al.: System-wide Analysis of the T Cell Response. Cell Rep. 2016; 14(11): 2733–2744. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadi A, Shifrut E, Reich-Zeliger S, et al.: T-cell receptor repertoires share a restricted set of public and abundant CDR3 sequences that are associated with self-related immunity. Genome Res. 2014; 24(10): 1603–1612. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCohen IR: Activation of benign autoimmunity as both tumor and autoimmune disease immunotherapy: a comprehensive review. J Autoimmun. 2014; 54: 112–117. PubMed Abstract | Publisher Full Text\n\nRon-Harel N, Cardon M, Schwartz M: Brain homeostasis is maintained by “danger” signals stimulating a supportive immune response within the brain’s borders. Brain Behav Immun. 2011; 25(5): 1036–1043. PubMed Abstract | Publisher Full Text\n\nSchwartz M, Shechter R: Protective autoimmunity functions by intracranial immunosurveillance to support the mind: The missing link between health and disease. Mol Psychiatry. 2010; 15(4): 342–354. PubMed Abstract | Publisher Full Text\n\nSchwartz M, Ziv Y: Immunity to self and self-maintenance: a unified theory of brain pathologies. Trends Immunol. 2008; 29(5): 211–219. PubMed Abstract | Publisher Full Text\n\nKipnis J: Multifaceted interactions between adaptive immunity and the central nervous system. Science. 2016; 353(6301): 766–771. PubMed Abstract | Publisher Full Text\n\nMeyer S, Woodward M, Hertel C, et al.: AIRE-Deficient Patients Harbor Unique High-Affinity Disease-Ameliorating Autoantibodies. 
Cell. 2016; 166(3): 582–595. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuinn KM, Zaloumis SG, Cukalac T, et al.: Heightened self-reactivity associated with selective survival, but not expansion, of naïve virus-specific CD8+ T cells in aged mice. Proc Natl Acad Sci U S A. 2016; 113(5): 1333–1338. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMandl JN, Monteiro JP, Vrisekoop N, et al.: T cell-positive selection uses self-ligand binding strength to optimize repertoire recognition of foreign antigens. Immunity. 2013; 38(2): 263–274. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFulton RB, Hamilton SE, Xing Y, et al.: The TCR’s sensitivity to self peptide-MHC dictates the ability of naive CD8+ T cells to respond to foreign antigens. Nat Immunol. 2015; 16(1): 107–117. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShechter R, Ziv Y, Schwartz M: New GABAergic interneurons supported by myelin-specific T cells are formed in intact adult spinal cord. Stem Cells. 2007; 25(9): 2277–2282. PubMed Abstract | Publisher Full Text\n\nZiv Y, Ron N, Butovsky O, et al.: Immune cells contribute to the maintenance of neurogenesis and spatial learning abilities in adulthood. Nat Neurosci. 2006; 9(2): 268–275. PubMed Abstract | Publisher Full Text\n\nKipnis J, Gadani S, Derecki NC: Pro-cognitive properties of T cells. Nat Rev Immunol. 2012; 12(9): 663–669. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDerecki NC, Cardani AN, Yang CH, et al.: Regulation of learning and memory by meningeal immunity: a key role for IL-4. J Exp Med. 2010; 207(5): 1067–1080. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaruch K, Schwartz M: CNS-specific T cells shape brain function via the choroid plexus. Brain Behav Immun. 2013; 34: 11–16. 
PubMed Abstract | Publisher Full Text\n\nRadjavi A, Smirnov I, Derecki N, et al.: Dynamics of the meningeal CD4+ T-cell repertoire are defined by the cervical lymph nodes and facilitate cognitive task performance in mice. Mol Psychiatry. 2014; 19(5): 531–533. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCohen IR: The self, the world and autoimmunity. Sci Am. 1988; 258(4): 52–60. PubMed Abstract\n\nHuttenlocher PR: Synaptic density in human frontal cortex - developmental changes and effects of aging. Brain Res. 1979; 163(2): 195–205. PubMed Abstract | Publisher Full Text\n\nPetanjek Z, Judaš M, Šimic G, et al.: Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proc Natl Acad Sci U S A. 2011; 108(32): 13281–13286. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPakkenberg B, Gundersen HJ: Neocortical neuron number in humans: effect of sex and age. J Comp Neurol. 1997; 384(2): 312–320. PubMed Abstract | Publisher Full Text\n\nYeung MS, Zdunek S, Bergmann O, et al.: Dynamics of Oligodendrocyte Generation and Myelination in the Human Brain. Cell. 2014; 159(4): 766–774. PubMed Abstract | Publisher Full Text\n\nFrost JL, Schafer DP: Microglia: Architects of the Developing Nervous System. Trends Cell Biol. 2016; 26(8): 587–597. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatcovitch-Natan O, Winter DR, Giladi A, et al.: Microglia development follows a stepwise program to regulate brain homeostasis. Science. 2016; 353(6301): aad8670. PubMed Abstract | Publisher Full Text\n\nXanthos DN, Sandkühler J: Neurogenic neuroinflammation: inflammatory CNS reactions in response to neuronal activity. Nat Rev Neurosci. 2014; 15(1): 43–53. PubMed Abstract | Publisher Full Text\n\nBeattie EC, Stellwagen D, Morishita W, et al.: Control of Synaptic Strength by Glial TNFalpha. Science. 2002; 295(5563): 2282–2285. 
PubMed Abstract | Publisher Full Text\n\nZhu PJ, Huang W, Kalikulov D, et al.: Suppression of PKR promotes network excitability and enhanced cognition by interferon-γ-mediated disinhibition. Cell. 2011; 147(6): 1384–1396. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFiliano AJ, Xu Y, Tustison NJ, et al.: Unexpected role of interferon-γ in regulating neuronal connectivity and social behaviour. Nature. 2016; 535(7612): 425–429. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHines JH, Ravanelli AM, Schwindt R, et al.: Neuronal activity biases axon selection for myelination in vivo. Nat Neurosci. 2015; 18(5): 683–689. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMensch S, Baraban M, Almeida R, et al.: Synaptic vesicle release regulates myelin sheath number of individual oligodendrocytes in vivo. Nat Neurosci. 2015; 18(5): 628–630. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScholz J, Klein MC, Behrens TE, et al.: Training induces changes in white-matter architecture. Nat Neurosci. 2009; 12(11): 1370–1371. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKlingberg T, Hedehus M, Temple E, et al.: Microstructure of temporo-parietal white matter as a basis for reading ability: evidence from diffusion tensor magnetic resonance imaging. Neuron. 2000; 25(2): 493–500. PubMed Abstract | Publisher Full Text\n\nZhu PJ, Huang W, Kalikulov D, et al.: Suppression of PKR Promotes Network Excitability and Enhanced Cognition by Interferon-γ-Mediated Disinhibition. Cell. 2011; 147(6): 1384–1396. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang X, Surguladze N, Slagle-Webb B, et al.: Cellular iron status influences the functional relationship between microglia and oligodendrocytes. Glia. 2006; 54(8): 795–804. PubMed Abstract | Publisher Full Text\n\nMiron VE, Boyd A, Zhao JW, et al.: M2 microglia and macrophages drive oligodendrocyte differentiation during CNS remyelination. Nat Neurosci. 2013; 16(9): 1211–1218. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhitaker KJ, Vértes PE, Romero-Garcia R, et al.: Adolescence is associated with genomically patterned consolidation of the hubs of the human brain connectome. Proc Natl Acad Sci U S A. 2016; 113(32): 9105–9110. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiller DJ, Duka T, Stimpson CD, et al.: Prolonged myelination in human neocortical evolution. Proc Natl Acad Sci U S A. 2012; 109(41): 16480–16485. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan de Leemput J, Boles NC, Kiehl TR, et al.: CORTECON: A Temporal Transcriptome Analysis of In Vitro Human Cerebral Cortex Development from Human Embryonic Stem Cells. Neuron. 2014; 83(1): 51–68. PubMed Abstract | Publisher Full Text\n\nLouveau A, Smirnov I, Keyes TJ, et al.: Structural and functional features of central nervous system lymphatic vessels. Nature. 2015; 523(7560): 337–341. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRaper D, Louveau A, Kipnis J: How Do Meningeal Lymphatic Vessels Drain the CNS? Trends Neurosci. 2016; 39(9): 581–586. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFenalti G, Buckle AM: Structural biology of the GAD autoantigen. Autoimmun Rev. 2010; 9(3): 148–152. PubMed Abstract | Publisher Full Text\n\nMende M, Fletcher EV, Belluardo JL, et al.: Sensory-Derived Glutamate Regulates Presynaptic Inhibitory Terminals in Mouse Spinal Cord. Neuron. 2016; 90(6): 1189–1202. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZisimopoulou P, Brenner T, Trakas N, et al.: Serological diagnostics in myasthenia gravis based on novel assays and recently identified antigens. Autoimmun Rev. 2013; 12(9): 924–930. PubMed Abstract | Publisher Full Text\n\nSarter M, Parikh V, Howe WM: Phasic acetylcholine release and the volume transmission hypothesis: time to move on. Nat Rev Neurosci. 2009; 10(5): 383–390. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nShoda H, Fujio K, Sakurai K, et al.: Autoantigen BiP-Derived HLA-DR4 Epitopes Differentially Recognized by Effector and Regulatory T Cells in Rheumatoid Arthritis. Arthritis Rheumatol. 2015; 67(5): 1171–1181. PubMed Abstract | Publisher Full Text\n\nYoo SA, You S, Yoon HJ, et al.: A novel pathogenic role of the ER chaperone GRP78/BiP in rheumatoid arthritis. J Exp Med. 2012; 209(4): 871–886. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFukata Y, Tzingounis AV, Trinidad JC, et al.: Molecular constituents of neuronal AMPA receptors. J Cell Biol. 2005; 169(3): 399–404. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCohen IR: Autoantibody repertoires, natural biomarkers, and system controllers. Trends Immunol. 2013; 34(12): 620–625. PubMed Abstract | Publisher Full Text\n\nWillis D, Li KW, Zheng JQ, et al.: Differential transport and local translation of cytoskeletal, injury-response, and neurodegeneration protein mRNAs in axons. J Neurosci. 2005; 25(4): 778–791. PubMed Abstract | Publisher Full Text\n\nHansen JJ, Dürr A, Cournu-Rebeix I, et al.: Hereditary spastic paraplegia SPG13 is associated with a mutation in the gene encoding the mitochondrial chaperonin Hsp60. Am J Hum Genet. 2002; 70(5): 1328–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWolin S: RNPs and autoimmunity: 20 years later. RNA. 2015; 21(4): 548–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBagni C, Greenough WT: From mRNP trafficking to spine dysmorphogenesis: the roots of fragile X syndrome. Nat Rev Neurosci. 2005; 6(5): 376–87. PubMed Abstract | Publisher Full Text\n\nNussbacher JK, Batra R, Lagier-Tourenne C, et al.: RNA-binding proteins in neurodegeneration: Seq and you shall receive. Trends Neurosci. 2015; 38(4): 226–36. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRichter JD, Bassell GJ, Klann E: Dysregulation and restoration of translational homeostasis in fragile X syndrome. Nat Rev Neurosci. 2015; 16(10): 595–605. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu-Yesucevitz L, Bassell GJ, Gitler AD, et al.: Local RNA translation at the synapse and in disease. J Neurosci. 2011; 31(45): 16086–16093. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStepniak B, Kästner A, Poggi G, et al.: Accumulated common variants in the broader fragile X gene family modulate autistic phenotypes. EMBO Mol Med. 2015; 7(12): 1565–1579. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerto S, Usui N, Konopka G, et al.: ELAVL2-regulated transcriptional and splicing networks in human neurons link neurodevelopment and autism. Hum Mol Genet. 2016; 25(12): 2451–2464. PubMed Abstract | Publisher Full Text\n\nKwiatkowski TJ Jr, Bosco DA, Leclerc AL, et al.: Mutations in the FUS/TLS gene on chromosome 16 cause familial amyotrophic lateral sclerosis. Science. 2009; 323(5918): 1205–8. PubMed Abstract | Publisher Full Text\n\nThomas M, Alegre-Abarrategui J, Wade-Martins R: RNA dysfunction and aggrephagy at the centre of an amyotrophic lateral sclerosis/frontotemporal dementia disease continuum. Brain. 2013; 136(Pt 5): 1345–1360. PubMed Abstract | Publisher Full Text\n\nEl Fatimy R, Davidovic L, Tremblay S, et al.: Tracking the Fragile X Mental Retardation Protein in a Highly Ordered Neuronal RiboNucleoParticles Population: A Link between Stalled Polyribosomes and RNA Granules. PLoS Genet. 2016; 12(7): e1006192. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYong J, Pellizzoni L, Dreyfuss G: Sequence-specific interaction of U1 snRNA with the SMN complex. EMBO J. 2002; 21(5): 1188–96. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nScheckel C, Drapeau E, Frias MA, et al.: Regulatory consequences of neuronal ELAV-like protein binding to coding and non-coding RNAs in human brain. eLife. 2016; 5: pii: e10421. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAl Kindi MA, Colella AD, Chataway TK, et al.: Secreted autoantibody repertoires in Sjögren’s syndrome and systemic lupus erythematosus: A proteomic approach. Autoimmun Rev. 2016; 15(4): 405–10. PubMed Abstract | Publisher Full Text\n\nStafford HA, Anderson CJ, Reichlin M: Unmasking of anti-ribosomal P autoantibodies in healthy individuals. J Immunol. 1995; 155(5): 2754–61. PubMed Abstract\n\nBuxbaum AR, Wu B, Singer RH: Single β-actin mRNA detection in neurons reveals a mechanism for regulating its translatability. Science. 2014; 343(6169): 419–422. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGraber TE, Hébert-Seropian S, Khoutorsky A, et al.: Reactivation of stalled polyribosomes in synaptic plasticity. Proc Natl Acad Sci U S A. 2013; 110(40): 16205–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGreco A, Rizzo MI, De Virgilio A, et al.: Goodpasture’s syndrome: a clinical update. Autoimmun Rev. 2015; 14(3): 246–253. PubMed Abstract | Publisher Full Text\n\nFoster MH: Basement membranes and autoimmune diseases. Matrix Biol. 2016; pii: S0945-053X(16)30147-0. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarak T, Kwan KY, Louvi A, et al.: Recessive LAMC3 mutations cause malformations of occipital cortical development. Nat Genet. 2011; 43(6): 590–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu YB, Tewari A, Salameh J, et al.: A dystonia-like movement disorder with brain and spinal neuronal defects is caused by mutation of the mouse laminin β1 subunit, Lamb1. eLife. 2015; 4: pii: e11102. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nQin J, Liang J, Ding M: Perlecan antagonizes collagen IV and ADAMTS9/GON-1 in restricting the growth of presynaptic boutons. J Neurosci. 2014; 34(31): 10311–10324. PubMed Abstract | Publisher Full Text\n\nKurshan PT, Phan AQ, Wang GJ, et al.: Regulation of synaptic extracellular matrix composition is critical for proper synapse morphology. J Neurosci. 2014; 34(38): 12678–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSpritz RA: Six decades of vitiligo genetics: genome-wide studies provide insights into autoimmune pathogenesis. J Invest Dermatol. 2012; 132(2): 268–273. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAsanuma M, Miyazaki I, Diaz-Corrales FJ, et al.: Transplantation of melanocytes obtained from the skin ameliorates apomorphine-induced abnormal behavior in rodent hemi-parkinsonian models. PLoS One. 2013; 8(6): e65983. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHearing VJ: Determination of melanin synthetic pathways. J Invest Dermatol. 2011; 131(E1): E8–E11. PubMed Abstract | Publisher Full Text\n\nRahoma SF, Sandhu HK, McDonagh AJ, et al.: Epitopes, avidity and IgG subclasses of tyrosine hydroxylase autoantibodies in vitiligo and alopecia areata patients. Br J Dermatol. 2012; 167(1): 17–28. PubMed Abstract | Publisher Full Text\n\nKemp EH, Emhemad S, Akhtar S, et al.: Autoantibodies against tyrosine hydroxylase in patients with non-segmental (generalised) vitiligo. Exp Dermatol. 2011; 20(1): 35–40. PubMed Abstract | Publisher Full Text\n\nKemp EH, Waterman EA, Hawes BE, et al.: The melanin-concentrating hormone receptor 1, a novel target of autoantibody responses in vitiligo. J Clin Invest. 2002; 109(7): 923–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPissios P: Animals models of MCH function and what they can tell us about its role in energy balance. Peptides. 2009; 30(11): 2040–2044. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTorterolo P, Scorza C, Lagos P, et al.: Melanin-Concentrating Hormone (MCH): Role in REM Sleep and Depression. Front Neurosci. 2015; 9: 475. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLachlan SM, Rapoport B: Breaking tolerance to thyroid antigens: changing concepts in thyroid autoimmunity. Endocr Rev. 2014; 35(1): 59–105. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlanchin S, Coffin C, Viader F, et al.: Anti-thyroperoxidase antibodies from patients with Hashimoto’s encephalopathy bind to cerebellar astrocytes. J Neuroimmunol. 2007; 192(1–2): 13–20. PubMed Abstract | Publisher Full Text\n\nThrasyvoulides A, Sakarellos-Daitsiotis M, Philippou G, et al.: B-cell autoepitopes on the acetylcholinesterase-homologous region of human thyroglobulin: association with Graves’ disease and thyroid eye disease. Eur J Endocrinol. 2001; 145(2): 119–127. PubMed Abstract | Publisher Full Text\n\nMappouras DG, Philippou G, Haralambous S, et al.: Antibodies to acetylcholinesterase cross-reacting with thyroglobulin in myasthenia gravis and Graves’s disease. Clin Exp Immunol. 1995; 100(2): 336–343. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLudgate M, Dong Q, Dreyfus PA, et al.: Definition, at the molecular level, of a thyroglobulin-acetylcholinesterase shared epitope: study of its pathophysiological significance in patients with Graves’ ophthalmopathy. Autoimmunity. 1989; 3(3): 167–176. PubMed Abstract | Publisher Full Text\n\nPark YN, Arvan P: The acetylcholinesterase homology region is essential for normal conformational maturation and secretion of thyroglobulin. J Biol Chem. 2004; 279(17): 17085–17089. PubMed Abstract | Publisher Full Text\n\nLee J, Wang X, Di Jeso B, et al.: The cholinesterase-like domain, essential in thyroglobulin trafficking for thyroid hormone synthesis, is required for protein dimerization. J Biol Chem. 2009; 284(19): 12752–12761. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMassardo L, Bravo-Zehnder M, Calderón J, et al.: Anti-N-methyl-D-aspartate receptor and anti-ribosomal-P autoantibodies contribute to cognitive dysfunction in systemic lupus erythematosus. Lupus. 2015; 24(6): 558–568. PubMed Abstract | Publisher Full Text\n\nLauvsnes MB, Beyer MK, Kvaløy JT, et al.: Association of hippocampal atrophy with cerebrospinal fluid antibodies against the NR2 subtype of the N-methyl-D-aspartate receptor in patients with systemic lupus erythematosus and patients with primary Sjögren’s syndrome. Arthritis Rheumatol. 2014; 66(12): 3387–3394. PubMed Abstract | Publisher Full Text\n\nBravo-Zehnder M, Toledo EM, Segovia-Miranda F, et al.: Anti-ribosomal P protein autoantibodies from patients with neuropsychiatric lupus impair memory in mice. Arthritis Rheumatol. 2015; 67(1): 204–214. PubMed Abstract | Publisher Full Text\n\nLe Guern V, Belin C, Henegar C, et al.: Cognitive function and 99mTc-ECD brain SPECT are significantly correlated in patients with primary Sjogren syndrome: a case-control study. Ann Rheum Dis. 2010; 69(1): 132–137. PubMed Abstract | Publisher Full Text\n\nBlanc F, Longato N, Jung B, et al.: Cognitive Dysfunction and Dementia in Primary Sjögren’s Syndrome. ISRN Neurol. 2013; 2013: 501327. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSegal BM, Rhodus N, Moser Sivils KL, et al.: Validation of the brief cognitive symptoms index in Sjögren syndrome. J Rheumatol. 2014; 41(10): 2027–33. PubMed Abstract | Publisher Full Text\n\nLauvsnes MB, Maroni SS, Appenzeller S, et al.: Memory dysfunction in primary Sjögren’s syndrome is associated with anti-NR2 antibodies. Arthritis Rheum. 2013; 65(12): 3209–17. PubMed Abstract | Publisher Full Text\n\nLaurent C, Capron J, Quillerou B, et al.: Steroid-responsive encephalopathy associated with autoimmune thyroiditis (SREAT): Characteristics, treatment and outcome in 251 cases from the literature. Autoimmun Rev. 
2016; 15(12): 1129–1133. PubMed Abstract | Publisher Full Text\n\nLeyhe T, Müssig K: Cognitive and affective dysfunctions in autoimmune thyroiditis. Brain Behav Immun. 2014; 41: 261–6. PubMed Abstract | Publisher Full Text\n\nPilhatsch M, Schlagenhauf F, Silverman D, et al.: Antibodies in autoimmune thyroiditis affect glucose metabolism of anterior cingulate. Brain Behav Immun. 2014; 37: 73–7. PubMed Abstract | Publisher Full Text\n\nJoaquim AF, Appenzeller S: Neuropsychiatric manifestations in rheumatoid arthritis. Autoimmun Rev. 2015; 14(12): 1116–22. PubMed Abstract | Publisher Full Text\n\nHamed SA, Selim ZI, Elattar AM, et al.: Assessment of biocorrelates for brain involvement in female patients with rheumatoid arthritis. Clin Rheumatol. 2012; 31(1): 123–32. PubMed Abstract | Publisher Full Text\n\nBartolini M, Candela M, Brugni M, et al.: Are behaviour and motor performances of rheumatoid arthritis patients influenced by subclinical cognitive impairments? A clinical and neuroimaging study. Clin Exp Rheumatol. 2002; 20(4): 491–7. PubMed Abstract\n\nSticherling M: Psoriasis and autoimmunity. Autoimmun Rev. 2016; 15(12): 1167–1170. PubMed Abstract | Publisher Full Text\n\nArakawa A, Siewert K, Stöhr J, et al.: Melanocyte antigen triggers autoimmunity in human psoriasis. J Exp Med. 2015; 212(13): 2203–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwartz J, Evers AW, Bundy C, et al.: Getting under the Skin: Report from the International Psoriasis Council Workshop on the Role of Stress in Psoriasis. Front Psychol. 2016; 7: 87. PubMed Abstract | Publisher Full Text | Free Full Text\n\nColgecen E, Celikbilek A, Keskin DT: Cognitive Impairment in Patients with Psoriasis: A Cross-Sectional Study Using the Montreal Cognitive Assessment. Am J Clin Dermatol. 2016; 17(4): 413–9. PubMed Abstract | Publisher Full Text\n\nGisondi P, Sala F, Alessandrini F, et al.: Mild cognitive impairment in patients with moderate to severe chronic plaque psoriasis. 
Dermatology. 2014; 228(1): 78–85. PubMed Abstract | Publisher Full Text\n\nBonneau J, Dumestre-Perard C, Rinaudo-Gaujous M, et al.: Systematic review: new serological markers (anti-glycan, anti-GP2, anti-GM-CSF Ab) in the prediction of IBD patient outcomes. Autoimmun Rev. 2015; 14(3): 231–45. PubMed Abstract | Publisher Full Text\n\nScott H, Panin VM: The role of protein N-glycosylation in neural transmission. Glycobiology. 2014; 24(5): 407–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFang P, Wang XJ, Xue Y, et al.: In-depth mapping of the mouse brain N-glycoproteome reveals widespread N-glycosylation of diverse brain proteins. Oncotarget. 2016; 7(5): 38796–38809. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNair VA, Beniwal-Patel P, Mbah I, et al.: Structural Imaging Changes and Behavioral Correlates in Patients with Crohn’s Disease in Remission. Front Hum Neurosci. 2016; 10: 460. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomann AK, Thomann PA, Wolf RC, et al.: Altered Markers of Brain Development in Crohn’s Disease with Extraintestinal Manifestations - A Pilot Study. PLoS One. 2016; 11(9): e0163202. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaushansky N, Eisenstein M, Zilkha-Falb R, et al.: The myelin-associated oligodendrocytic basic protein (MOBP) as a relevant primary target autoantigen in multiple sclerosis. Autoimmun Rev. 2010; 9(4): 233–236. PubMed Abstract | Publisher Full Text\n\nCrisp SJ, Kullmann DM, Vincent A: Autoimmune synaptopathies. Nat Rev Neurosci. 2016; 17(2): 103–17. PubMed Abstract | Publisher Full Text\n\nAriño H, Höftberger R, Gresa-Arribas N, et al.: Paraneoplastic Neurological Syndromes and Glutamic Acid Decarboxylase Antibodies. JAMA Neurol. 2015; 72(8): 874–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDalmau J, Tüzün E, Wu H, et al.: Paraneoplastic anti-N-methyl-D-aspartate receptor encephalitis associated with ovarian teratoma. Ann Neurol. 
[ { "id": "20397", "date": "22 Feb 2017", "name": "Irun R. Cohen", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis Opinion Article by Serge Nataf is a brilliant synthesis; it integrates seemingly unrelated observations to build a coherent model that explains unsung relationships between natural “homuncular” autoimmunity and nervous system development. Nataf’s model of the co-evolution of the immune and nervous systems is seminal biological thinking that bridges evolution, physiology, clinical medicine, health and disease. Readers will respond with challenges, experimental tests, and new understanding.", "responses": [] }, { "id": "22001", "date": "03 May 2017", "name": "Abdelhadi Saoudi", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis excellent opinion article by Serge Nataf is based on a critical analysis of unrelated literature and proposes a sound model that bridges natural autoimmunity with brain development and functions. 
This model also raises several interesting biological questions (evolution, physiology and physiopathology) that will need the development of new experimental settings to understand the coevolution of the immune system and CNS.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes", "responses": [] }, { "id": "22275", "date": "09 May 2017", "name": "Carmen Guaza", "expertise": [ "Neuroimmunology/neurosciences" ], "suggestion": "Approved", "report": "Approved\n\nNataf presents a sound and provocative opinion piece. The article gets readers to open their minds to the view that superautoantigens are shaped by physiological events associated with brain development and function, all in an integrative view. The model of co-evolution of the immune and nervous systems is expounded in a brilliant and coherent way, promoting open discussion that will better the field and impact clinical approaches.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? 
Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-171
https://f1000research.com/articles/6-161/v1
17 Feb 17
{ "type": "Opinion Article", "title": "The challenges with the validation of research antibodies", "authors": [ "Jan L.A. Voskuil" ], "abstract": "This article further discusses the reproducibility crisis in biomedical science and how the poor conduct of commercial antibodies contributes to it. In addition, the way quality data are presented on product sheets by antibody vendors is scrutinized. The article proposes that there is a distinction between testing data and validation data, and calls for special attention to consistency between batches and aliquots. Moreover, the article separates the specifics, such as formulation, antigen and price, from the specifics on performance. Finally, a two-tier approach is discussed, enabling scientists to anticipate how an antibody is likely to perform when repeated purchases are required.", "keywords": [ "Antibodies", "validation", "reproducibility", "conformance" ], "content": "Introduction\n\nIn the scientific community, there is growing attention to the quality of commercial research antibodies, particularly since the recent wave of publications on the crisis of reproducibility1–6. Although some papers had already addressed the lack of quality in the antibody market much earlier7–10, it was only once a link was made between the lack of scientific reproducibility and antibody conduct11–13 that greater efforts were made to bring all stakeholders in the research antibody market together to move forward. Such efforts resulted in online discussions (https://www.protocols.io/groups/gbsi-antibody-validation-online-group), publications on validation14–17 and two international meetings18–20. Everyone agreed that poor-quality antibodies may to some extent contribute to the lack of scientific progress and that something had to be done to remove such blame from the industries. The strong message is that antibodies need proper validation before being used in scientific research. 
A few large vendors have commenced exhaustive validation for some of their products, but the investment required to validate each individual product is very high, and such efforts are not commercially attractive enough to apply to all catalogue items when the size of the catalogue is in the hundreds of thousands21. Besides, despite all the good intentions and large investments in the industry, the approach of exhaustive validation is not the complete answer to the problem. When it comes to antibody validation there are some practical difficulties that are not always appreciated, or that are underestimated if not totally ignored. This article aims to bring clarity to the practical issues that directly affect the quality and performance of research antibodies, even when a product has successfully gone through an exhaustive validation process.\n\n\nBasic principles of validation\n\nThere is a fundamental difference between testing an antibody in a certain application and validating it. The former is put into practice by most of us (both vendors/manufacturers and research scientists). Until recently, testing with a positive result was more than adequate to pass a product for the market and to persuade researchers to buy the tested antibody. For example, when an antibody was tested in Immunohistochemistry (IHC) and there was a signal, the vendor would go ahead, adding the data to the product sheet and adding IHC to the tested applications. Any scientist would naturally assume this antibody was fit for IHC and buy the product, especially when the brand is large and deemed reliable. Those times are over. Currently, a signal needs to be in the right place and in a relevant tissue to be credible.\n\nValidation goes way beyond mere testing. Here, we first consider how the antibody is commonly used. For example, a CD4 antibody is most likely to be used in Flow Cytometry (FC). 
Then it follows that this antibody is primarily tested in FC and not in Western Blot (WB) or IHC. However, for proper validation the signal needs to be specific and selective; that is, at the maximal dilution giving a good signal in the right cell type, there should be hardly any signal in the wrong cell types. Hence, validation always involves comparison between expressing and non-expressing cells or tissues at identical antibody dilutions. A CD4 antibody is validated in FC when it lifts out a proportionate subpopulation from all T cells (the proportion of CD4+ T cells). The way to do this is to have all T cells selected from the buffy coat first by a generic T cell marker antibody (previously and fully validated for this purpose) and to quantify the CD4 signal relative to the total T cell signal. Ideally there is another validated CD4 antibody to compare with and to confirm that the observed proportion of CD4 signal relative to the total T cell signal is consistent across the two CD4 antibodies. A commonly used format showing a stain distribution of a single cell line with a peak away from background is not evidence of specificity. For IHC or WB, again, comparison between expressing and non-expressing cells/tissues is required for proper validation. An antibody fit for and validated in WB will not automatically pass in IHC or FC, though. The notion in the literature7 that every antibody needs validation in WB first before moving on to the required assay is flawed and entails the risk of losing out on precious FC antibodies that will never work in WB or IHC.\n\n\nConformity of validated antibodies in batches and aliquots\n\nIn an ideal world, all antibodies on offer would be fully validated for the applications demanded by the market. Although we are far away from this reality, all vendors and manufacturers are currently working very hard to reach this goal. Consequently, increasing numbers of fully validated products are emerging daily. 
However, this is not the end of the tragedy. As discussed thoroughly in multimedia and to a smaller extent in the literature8,14, the antibodies on sale come in batches or lots, and there will be variability from batch to batch or from lot to lot. This is true for monoclonal antibodies (especially when sold in an undefined formulation, such as culture media or ascites), but to a much larger extent it is the case for polyclonal antibodies (especially for undefined formulations, such as serum or plasma, but also for antibodies raised to the entire protein and with an undefined epitope). Therefore, the test/validation results shown on the product sheet will no longer be representative after the batch or lot has been replaced by its successor, unless the data have been reproduced with the new batch/lot.\n\nThere is confusion about the terms batch and lot. They are generally used interchangeably. There is a strong case, though, to distinguish batches from aliquots: it is recommended to have a batch defined by the harvest and purification, while an aliquot is defined by the place and the day a stock vial is split. The term lot is best avoided to keep the separation between batch and aliquot unambiguous. This article proposes that this principle be adopted worldwide. The functionality of this distinction is that any non-conformity can be easily traced back either to inactivation by storage or transit (in which case a different aliquot with a different history will show conformity again), or to a bad purification or bad production (in which case the entire batch will be withdrawn from the market and replaced by a new batch).\n\nIt is recommended to have transparency regarding batches and aliquots. 
The batch code is preferred to be visible on the product sheet, while both the batch code and the aliquot code are required to be specified on the label of every vial.\n\n\nResponsibility of testing and validation\n\nAs soon as a purchased antibody has arrived, it is the responsibility of the scientist to make sure the product arrived in proper condition. It would be good practice to start by reproducing the data as described on the product sheet to make sure the antibody shows conformity. This should be done before splitting the product into aliquots and storing them in a (non-cycling) freezer. This way a non-conforming product can be returned, or the specifics on the label can be forwarded to the vendor together with the complaint. Any self-respecting vendor will either replace or refund when a product is non-conformant. Once the antibody has demonstrated its integrity, it is time to use it in the intended experiments. No matter how high the quality of the data shown on the product sheet, every scientist must validate the antibody in the assay and biological material of interest. It is not at all evident that an antibody tested positively on liver or kidney is going to work on fibroblast or neuronal cell lines. In addition, one should not assume that a positive result on a lysate of a neuroblast cell line in WB means that the scientist is going to get the antibody to work in lysates of different brain regions. So, the scientist is primarily responsible for the validation of the purchased antibody in the very defined conditions of the experiments to be done. A lot of precious time and biological research material is saved by following the above steps before using the purchased antibody for the intended experiments. Most vendors and manufacturers will likely not go much further than confirmation of their products in one or a few assay types in one or a few cell types. 
Vendor and scientist will achieve a shared responsibility when they develop a mutual understanding of, and respect for, each other’s objectives21.\n\n\nDeciding factors on the product of choice\n\nGiven the size and complexity of the research antibody market, the best way to decide which antibody to pick is to consider a two-tier approach. The first tier considers the specifications of the product regardless of its performance (see Table 1). The scientist needs to decide whether a mono-specific antibody is required (which may be essential for certain assays when dependent on repeat purchases), and how the product is formulated. These considerations need to be weighed against the clone/batch specifications, the presence of quality data and the price. The second tier considers the claimed performance, as specified on the product sheet. Here, the scientific integrity of the quality data comes into play (see Table 2). There is an important distinction to be made by the scientist as to whether the antibody is required for native or for non-native conditions. Antibodies confirmed in native assays may not work in non-native assays and vice versa. The extent of quality data, as described on the product sheet, is incrementally listed for each of the most common assay types.\n\nOverview of the variety of performance-independent specifications visible on the vendor’s product sheet. WB: Western Blot.\n\nOverview of the variety of performance specifications visible on the vendor’s product sheet. 
NB: Comparison between wildtype and knock-out is in all cases the best validation and is not incorporated in this schedule.\n\nCell type: a cell line, a cell type from primary culture, or a cell type within a mixture of types/tissue; KD: Knock-Down by induced siRNA expression; RT-PCR: quantitative data demonstrating the levels of mRNA in KD relative to wildtype levels; WB: Western Blot; IP: Immunoprecipitation.\n\nThe two tables highlight a sliding scale of quality specifications currently offered in many catalogues worldwide. We should not dismiss vendors and manufacturers for not having the highest level of quality specifications available for every single product, because of the practical restrictions that come with the size and resources of every company21. It is down to the scientist to find their way, while in the meantime the manufacturers and vendors do their utmost to deserve the scientist’s trust in their quality. Nonetheless, Table 2 demonstrates that many product sheets show inadequate information and do not yet meet current requirements in the market. There will be increasing demand for testing in biologically relevant cell types/tissues or, where gene expression allows, for comparative data to validate the observed signals against negative controls.\n\nIn addition, product sheets of many peptide-generated antibodies show an ELISA titre to the immunizing peptide, yet they usually claim ELISA in the tested application list, which is deceiving because this claim is read as any type of ELISA involving detection of the entire protein. When the antibody was merely tested on peptide-coated micro-wells, it would be better to claim peptide-ELISA as the tested application rather than ELISA. We now more often see the application code IHC better specified as IHC-p (paraffin-embedded) and IHC-fr (frozen sections). 
Similarly, we could use ELISA-p (peptide- or protein-coated wells) and ELISA-s (sandwich).\n\n\nReproducibility and specificity\n\nAny proper validation must include evidence of robustness from batch to batch. External factors, such as exposure to freeze/thaw cycles, radiation or extreme heat, will affect the integrity of the antibody. An inactivated aliquot may show either lack of signal or non-specific signal. Batches are subject to variation from animal to animal and from purification to purification. It is worth mentioning that undefined formulations, as described in Table 1 column 4, will have a profound effect on the reproducibility from batch to batch and need serious consideration, especially by assay/kit developers who depend on long-term supply of product with identical characteristics from order to order. Antibodies with a defined epitope/immunizing peptide are intrinsically more robust compared to antibodies raised to entire proteins because the limited size of the antigen increases the chance of reproducible characteristics8. This principle can only be overruled when large numbers of animals are immunized with the same entire protein and their antibodies are pooled together to reach a gold standard. However, potential cross-reactivity to other related proteins needs to be considered as well. This is not possible for monoclonal antibodies without known epitope mapping, and in such cases validation must include testing of cross-reactivity directly against such related proteins.\n\n\nDiscussion\n\nThe considerations set out above can be used as a starting point to generate scoring systems. Many vendors are already doing this. However, research scientists remain unaware of such scoring systems, as they are used for internal purposes only. 
Although such practice will ultimately lead to much higher quality products on the market, for the moment there is a need for research scientists and assay developers to find their way when looking for that specific antibody fit for their particular set-up. Up to this point, they are reliant on cited literature and the reputation of the vendor. However, because of the exchange of products across catalogues8,20, a situation has been created in which it is no longer evident from the product sheet whether the antibody is offered by the original manufacturer and whether the associated quality data are still representative of the current batch on sale. In addition, each large catalogue has several antibodies to the same protein. This makes the choice for the scientist difficult, especially when the cited literature does not specify the catalogue number, and the manufacturer will not be able to tell which one of their products was used for the experiments shown in that paper. This omission has been recognized, and publishers are no longer expected to accept a paper without the catalogue numbers of the antibodies used. Therefore, any guidance the industries can provide to facilitate biomedical research in finding the right antibody for specific needs would be more than welcome. In the meantime, one is dependent on advice from individual insiders of the industries, as they know all the relevant details that may not be visible to the public. Such advisers will be best equipped to sift out the best candidate antibodies from the different catalogues for initial testing, followed by proper validation.", "appendix": "Competing interests\n\n\n\nNeither the author nor his company, Aeonian Biotech, trades in research antibodies. They are impartial in an advisory role and owe their business to their impartiality. 
Therefore, this article is a mere contribution to the ongoing discussions on reproducibility and reliability of research antibodies without conflicts of interest.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nPrinz F, Schlange T, Asadullah K: Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011; 10(9): 712. PubMed Abstract | Publisher Full Text\n\nBegley CG, Ellis LM: Drug development: Raise standards for preclinical cancer research. Nature. 2012; 483(7391): 531–533. PubMed Abstract | Publisher Full Text\n\nIoannidis JP: Why Most Clinical Research Is Not Useful. PLoS Med. 2016; 13(6): e1002049. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNosek BA, Errington TM: Making sense of replications. eLife. 2017; 6: pii: e23383. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker M, Dolgin E: Cancer reproducibility project releases first results. Nature. 2017; 541(7637): 269–270. PubMed Abstract | Publisher Full Text\n\nKaiser J: BIOMEDICAL RESEARCH. Calling all failed replication experiments. Science. 2016; 351(6273): 548. PubMed Abstract | Publisher Full Text\n\nBordeaux J, Welsh A, Agarwal S, et al.: Antibody validation. Biotechniques. 2010; 48(3): 197–209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVoskuil J: Commercial antibodies and their validation [version 2; referees: 3 approved]. F1000Res. 2014; 3: 232. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBradbury A, Plückthun A: Reproducibility: Standardize antibodies used in research. Nature. 2015; 518(7537): 27–29. PubMed Abstract | Publisher Full Text\n\nRoncador G, Engel P, Maestre L, et al.: The European antibody network's practical guide to finding and validating suitable antibodies for research. MAbs. 2016; 8(1): 27–36. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker M: Reproducibility crisis: Blame it on the antibodies. Nature. 2015; 521(7552): 274–276. PubMed Abstract | Publisher Full Text\n\nBaker M: Antibody anarchy: A call to order. Nature. 2015; 527(7579): 545–551. PubMed Abstract | Publisher Full Text\n\nWeller MG: Quality issues of research antibodies. Anal Chem Insights. 2016; 11: 21–27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPauly D, Hanack K: How to avoid pitfalls in antibody use [version 1; referees: 2 approved]. F1000Res. 2015; 4: 691. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRizner TL, Sasano H, Choi MH, et al.: Recommendations for description and validation of antibodies for research use. J Steroid Biochem Mol Biol. 2016; 156: 40–42. PubMed Abstract | Publisher Full Text\n\nFreedman LP, Gibson MC, Bradbury AR, et al.: [Letter to the Editor] The need for improved education and training in research antibody usage and validation practices. Biotechniques. 2016; 61(1): 16–18. PubMed Abstract | Publisher Full Text\n\nUhlen M, Bandrowski A, Carr S, et al.: A proposal for validation of antibodies. Nat Methods. 2016; 13(10): 823–827. PubMed Abstract | Publisher Full Text\n\nFreedman LP: GBSI Workshop Report: Antibody Validation: Strategies, Policies, and Practices. 2016. Reference Source\n\nVisk DA: Antibody validation stakeholders speak. Gen. 2016; 36(21): 1, 10, 12–13. Publisher Full Text\n\nFreedman LP: The drive for antibody standards-Time to herd the cats. Gen. 2016; 36. Reference Source\n\nLi J: A scientific crisis that starts in the market, not the lab. Sci Am. 2016. Reference Source" }
[ { "id": "20351", "date": "24 Feb 2017", "name": "Alison H. Banham", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis article is a valuable contribution to the ongoing discussion regarding the importance of using properly validated antibodies to undertake robust and reproducible scientific research. The article distinguishes antibody testing, in which reactivity is seen using a specific technique, from validation, where specificity is observed using the appropriate positive and negative controls, and outlines the information provided on commercial product sheets. This article addresses key practical issues that can arise during production, storage, and shipping that can affect the quality of an antibody, even when the reagent has been well validated. Furthermore, the author proposes that the practice of distinguishing and identifying batch and aliquot information be adopted universally.\n\nAn interesting point arising from reading this article is that despite the mention of using two independent antibodies to the target antigen to validate flow cytometry data (in the example using CD4), this information does not appear frequently in the performance specifications on vendors’ product sheets (Table 2), the exception being for immunoprecipitation. The lack of availability of comparative data (same experiment and biological samples) for multiple antibodies makes it difficult to identify the most effective reagent. 
It would be helpful if product sheets could display comparative data from suppliers having multiple antibodies to the same antigen. While researchers might still need to purchase another antibody, if they wished to compare the best from different manufacturers, having supportive data using an independent antibody significantly strengthens scientific conclusions.\n\nMinor comments\nIt might be useful for more inexperienced researchers to highlight in Table 2 what level of information is considered to be inadequate.\n\nThe statement that “Antibodies with a defined epitope/immunizing peptide are intrinsically more robust compared to antibodies raised to entire proteins because the limited size of the antigen increases the chance of reproducible characteristics” would benefit from further clarification. While this is true for polyclonals, a monoclonal antibody raised against an entire protein is as robust and reproducible as one recognising an immunizing peptide, as both will bind a single epitope.", "responses": [] }, { "id": "20353", "date": "01 Mar 2017", "name": "Fridtjof Lund-Johansen", "expertise": [], "suggestion": "Approved", "report": "The author provides a good overview of current issues with antibody validation. His proposal for a tiered approach to validation is well in line with suggestions from a recent workshop organized by the Global Biological Standards Institute (GBSI)1 and also with published guidelines from the International Working Group on Antibody Validation (IWGAV)2. 
My main comment relates to the choice of controls.\nThe author explains that \"validation always involves comparison between expressing and non-expressing cells or tissues at identical antibody dilutions\". In fact, reputable antibody manufacturers rarely show negative controls in their product specification sheets. This is not surprising since there is no comprehensive and definitive source of information about the distribution of proteins in tissues, cells or subcellular compartments. There are now precise maps for the transcriptome, and some researchers argue that mRNA levels are predictive of protein abundance3,4,5,6. However, published data do not provide a definitive answer to this question, so this remains a controversial issue. In my view, the author should discuss the problems associated with finding bona-fide negative controls for application-specific antibody validation.", "responses": [] }, { "id": "20349", "date": "06 Mar 2017", "name": "Michael G. Weller", "expertise": [], "suggestion": "Approved", "report": "This article is a valuable and welcome contribution to the ongoing discussion of antibody quality. The suggestion of a two-tier approach is helpful to distinguish between simple descriptive data and the validation of an antibody for a specified application.\nFor an updated article some minor changes should be considered:\nValidation is a very old concept in analytical chemistry and therefore, some definitions have already emerged and many regulations and guidelines, e.g. 
Thompson et al.1, have been put into effect. Validations of antibody-based methods should make use of these established and proven approaches and should be seen in this context.\n\nIn Table 1 the declaration of \"Specified IgG in µg or mg\" should be discussed briefly, since only in very rare cases has the relevant amount been determined properly. Both non-specific IgG (e.g. antibodies based on ascites) and IgG of a different species (such as bovine IgG) may contaminate the product. In many cases, only the protein content was determined with a semiquantitative spectrophotometric method.\n\n\"Comparison between wildtype and knock-out is in all cases the best validation and is not incorporated in this schedule.\" I want to mention that there are research antibodies against non-proteinaceous targets, which cannot be validated this way. Furthermore, in the case of chemically defined antigens (e.g. peptides), the use of LC-MS/MS is perhaps the most powerful approach to validate an antibody-based method.\n\n\"This omission has been recognized and publishers are no longer expected to accept a paper without the catalogue numbers of the antibodies used.\" I do not think that a catalogue number is sufficient for this purpose. 
A clone number or a real antibody ID would be much better to make an antibody fully traceable2.", "responses": [] }, { "id": "20350", "date": "16 Mar 2017", "name": "Simon Glerup", "expertise": [], "suggestion": "Approved", "report": "This opinion article makes an important contribution to the ongoing and growing discussion of the use of antibodies in research. The main point is that validation goes beyond mere testing and that when selecting an antibody it is of critical importance for scientists to consider the data provided by the manufacturer in the light of the actual experiment it is intended for. This includes considerations of the relevant tissue, cell type and technique as well as the use of proper positive and negative controls.\nMinor comments:\nI strongly agree with Referee 2 regarding clarification of the statement ”Antibodies with a defined epitope / immunizing peptide are intrinsically more robust compared to antibodies raised to the entire proteins because the limited size of the antigen increases the chance of reproducible characteristics”. Such a statement needs to be accompanied by references to actual data showing this. The current reference 8 is a review.\n\nI consider the sentence “and in the meanwhile the manufacturers and vendors do their utmost to deserve the scientist’s trust in their quality” highly subjective. This may be true in certain cases but I do not think that this can be extended to describe the behavior of the entire industry.", "responses": [] } ]
https://f1000research.com/articles/6-156/v1
17 Feb 17
{ "type": "Research Article", "title": "Cell growth inhibition and apoptotic effects of a specific anti-RTF scFv antibody on prostate cancer, but not glioblastoma, cells", "authors": [ "Foroogh Nejatollahi", "Payam Bayat", "Bahareh Moazen" ], "abstract": "Background: Single chain antibody (scFv) has shown interesting results in cancer immunotargeting approaches, due to its advantages over monoclonal antibodies. Regeneration and tolerance factor (RTF) is one of the most important regulators of extracellular and intracellular pH in eukaryotic cells. In this study, the inhibitory effects of a specific anti-RTF scFv were investigated and compared between three types of prostate cancer and two types of glioblastoma cells. Methods: A phage antibody display library of scFv was used to select specific scFvs against RTF using a panning process. The reactivity of a selected scFv was assessed by phage ELISA. The anti-proliferative and apoptotic effects of the antibody on prostate cancer (PC-3, Du-145 and LNCaP) and glioblastoma (U-87 MG and A-172) cell lines were investigated by MTT and Annexin V/PI assays. Results: A specific scFv with a frequency of 35% was selected against the RTF epitope. This significantly inhibited the proliferation of the prostate cancer cells after 24 h. The percentages of cell viability (using 1000 scFv/cell) were 52, 61 and 73% for PC-3, Du-145 and LNCaP cells, respectively, compared to untreated cells. The antibody (1000 scFv/cell) induced apoptosis in 50, 40 and 25% of PC-3, Du-145 and LNCaP cells, respectively. No growth inhibition or apoptotic induction was detected for U-87 MG and A-172 glioblastoma cells. Conclusions: Anti-RTF scFv significantly reduced the proliferation of the prostate cancer cells. The cell growth inhibition and apoptotic induction effects in PC-3 cells were greater than in Du-145 and LNCaP cells. 
This might be due to higher expression of RTF antigen in PC-3 cells and/or better accessibility of RTF to the scFv antibody. The resistance of glioblastoma cells to anti-RTF scFv suggests the existence of mechanism(s) that abrogate the inhibitory effect(s) of the antibody to RTF. The results suggest that the selected anti-RTF scFv antibody could be an effective new alternative for prostate cancer immunotherapy.", "keywords": [ "Prostate cancer", "Anti-RTF scFv", "Growth inhibition", "Apoptosis", "Immunotherapy" ], "content": "Introduction\n\nProstate cancer is the most prevalent malignancy and the second leading cause of cancer-related death among men in the USA and developing countries1. Several new strategies have been employed to manage prostate cancer, including gene therapy, targeted therapy with prodrugs, angiogenesis inhibition and immunotherapy2,3. Immunotherapy has been developed as a beneficial approach that exploits the immune system to retard or even stop tumor cell growth, either by targeting tumor antigens or by disturbing signaling pathways4. In recent years, monoclonal antibody-based immunotherapy has been used to target prostate-associated antigens5,6. Targeting prostate-associated antigens may make conventional therapeutic regimens, including chemotherapy and radiotherapy, more beneficial if applied in combination7. To provide an effective targeted therapy, a number of prostate cancer-related antigens have been used, including prostate-specific antigen (PSA), prostate specific membrane antigen (PSMA), prostatic acid phosphatase, prostate stem cell antigen (PSCA) and kallikrein-4 (KLK4)8–12. Regeneration and tolerance factor (RTF), a novel membrane protein, has also been introduced as an attractive new target for immunotherapy, since its overexpression has been observed in many kinds of malignant and metastatic cancers, and it has been shown to exert immunoregulatory properties13,14. 
RTF is the a2 isoform of the V0 subunit of the vacuolar H+-ATPase (V-ATPase) proton pump, which participates in the control of pH in normal and tumor cells via proton pumping across the membrane to the extracellular space or intracellular organelles; this, in turn, contributes to extracellular acidification and maintenance of a relatively neutral cytosolic pH15. Acidifying the tumor microenvironment plays a key role in tumor cell proliferation, metastasis and resistance to chemotherapy13,14. It has been shown that an anti-RTF monoclonal antibody can block RTF-ATPase activity and induce apoptosis in a Jurkat T cell line expressing RTF16. Bermudez et al.17 have demonstrated that the RTF molecule is expressed in highly metastatic prostate cancer cells and that inhibiting V-ATPase enhances chemosensitivity in metastatic prostate cancer.\n\nRecombinant DNA technology paved the way for the production of recombinant antibody (rAb) fragments, such as single-chain variable fragment (scFv) antibodies, which are composed of variable heavy (VH) and light (VL) chains linked by a flexible peptide linker18–21. Properties of scFv antibodies, including smaller molecular size, human origin and better penetration to the target compared with whole antibodies, make these molecules suitable for therapeutic applications22–24. In the present study, the inhibitory effects of selected anti-RTF scFvs on three prostate cancer cell lines, PC-3, Du-145 and LNCaP, and two glioblastoma cell lines, U-87 MG and A-172, were investigated.\n\n\nMethods\n\nA phage antibody display library of scFv was developed as described previously18,19. Briefly, a panning process was performed to enrich the phage library. The RTF peptide comprising amino acids 488–510 was employed as the target antigen. The peptide was diluted (10 μg/ml) and coated in a polystyrene immunotube (Nunc, Finland). 
After an overnight incubation, washing was performed with PBS, and blocking solution (10% FCS [Sigma, UK] and 2% skimmed milk in PBS) was added to the tube and incubated at 37°C for 2 h. After washing four times with PBS/Tween (PBST) and four times with PBS, phage supernatant diluted with blocking solution (1:1) was added and incubated at room temperature for 1 h. The tube was washed, logarithmic phase TG1 E. coli (Sigma, UK) was added and incubated at 37°C for 1 h. The pellet was obtained by centrifugation at 3000 rpm for 5 min, resuspended in 200 μl of 2TY broth, plated onto a 2TYG agar/ampicillin plate and incubated at 30°C overnight. The panning process was performed for four rounds to obtain specific scFv antibodies against the desired peptide. The VH-Linker-VL inserts of selected scFv clones were PCR amplified (denaturation 1 min, annealing 1 min, elongation 2 min; R1 and R2 vector primers). MvaI fingerprinting (Sigma, UK) was performed on 20 colonies of the panned library to determine the homogeneity and frequency of positive samples of PCR products.\n\nThe RTF peptide was diluted to 100 μg/ml and coated in a 96-well polystyrene plate (Nunc, Denmark). The plate was incubated at 4°C overnight. Wells containing no peptide, unrelated peptide, M13KO7 helper phage (New England Biolabs, UK) and unrelated scFv (scFv against HER221) were also included as controls. All the wells were in triplicate. The wells were washed three times with PBST and three times with PBS. 150 μl of 2% skimmed milk was added to each well as blocking solution, and incubation was performed at 37°C for 2 h. The wells were washed and diluted phage (10^9 PFU/ml) was added to each well. M13KO7 was also added to the wells allocated for helper phage instead of phage antibody. The plate was incubated at room temperature for 2 h. 
Nonbinding phages were removed by washing with PBST and PBS, and diluted anti-Fd rabbit antibody (1/100; catalog no., B7786; Sigma, UK)19 was added to each well and incubated at room temperature for 1.5 h. Following washing, peroxidase-conjugated goat anti-rabbit IgG (1/4000; catalog no., A0545; Sigma)19 was added to each well and incubated at room temperature for 1 h. Nonbinding antibodies were removed by washing and 0.5 mg/ml of ABTS (Sigma, USA) in citrate buffer/H2O2 was added. The optical density of each well was read at 405 nm.\n\nHuman prostate cancer cell lines, PC-3, Du-145 and LNCaP, and human glioblastoma cell lines, U-87 MG and A-172, were purchased from the National Cell Bank of Iran, Pasteur Institute of Iran (Tehran, Iran). The cells were cultured and maintained in RPMI 1640 (Biosera, UK) in a CO2 incubator at 37°C. The medium was supplemented with 10% FBS (Biosera, UK), 100 U/ml penicillin and 100 μg/ml streptomycin.\n\nEach cell line was transferred into a 96-well flat-bottomed plate (10^4 cells per well) and incubated at 37°C overnight. The cells were treated in triplicate with different concentrations of anti-RTF scFv antibodies (100, 200, 500, 1000 scFv/cell); M13KO7 and 2TY broth media were used as negative controls. After a 24 h treatment at 37°C, MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; 0.5 mg/ml; Sigma, Germany] was added to each well and incubated at 37°C for 4 h. The supernatant was removed and the crystal products were dissolved by adding DMSO (Merck, Germany) and incubating at room temperature overnight. Colorimetric evaluation was performed at 490 nm. The percentage of cell growth was calculated from the absorbance values of untreated and treated cells as follows: percentage of cell growth = (OD490 treated / OD490 untreated) × 100.\n\nThe capability of the selected scFv to induce apoptosis in the prostate and glioblastoma cells was investigated by the Annexin V/propidium iodide (PI) assay. 
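The percentage-of-growth formula in the Methods above can be sketched in a few lines of Python. Note that the OD490 readings below are hypothetical values chosen for illustration only; they are not data from this study.

```python
# Percent cell growth relative to the untreated control, as defined in the Methods:
# percentage of cell growth = (OD490 treated / OD490 untreated) * 100

def percent_growth(od_treated, od_untreated):
    """Mean OD490 of treated wells as a percentage of the untreated mean."""
    mean_treated = sum(od_treated) / len(od_treated)
    mean_untreated = sum(od_untreated) / len(od_untreated)
    return (mean_treated / mean_untreated) * 100

# Hypothetical triplicate OD490 readings (illustrative only)
treated = [0.26, 0.25, 0.27]
untreated = [0.50, 0.52, 0.48]
print(round(percent_growth(treated, untreated), 1))  # 52.0
```

With these example readings the treated wells grow to 52% of the untreated control, the same order of reduction the Results report for PC-3 cells at 1000 scFv/cell.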
In total, 8×10^5 cells were seeded per culture plate and incubated overnight at 37°C. The cells were treated with anti-RTF scFv antibody (1000 scFv/cell) for 24 h. Untreated cells were considered as the negative control. The cells were harvested using 0.25% trypsin/EDTA, washed with cold PBS and transferred into flow cytometry tubes, followed by adding Annexin V-FITC and PI to both treated and untreated cells. Preparation was completed by adding incubation buffer (Roche Applied Science, Germany) to each tube. The tubes for the five cell lines were read with a BD FACSCalibur (Becton Dickinson, Franklin Lakes, NJ, USA) and analyzed with WinMDI 2.5 software.\n\nThe data obtained from cell proliferation assays were statistically analyzed by ANOVA using GraphPad Prism 5 software to compare the means of the percentages of cell growth between treated and untreated cells. All data are presented as means ± standard deviation (SD). A p value < 0.05 was considered statistically significant.\n\n\nResults\n\nDNA fingerprinting of the library clones and the selected clones obtained after four rounds of panning are shown in Figure 1. The different patterns of the library clones demonstrated a diverse and heterogeneous library. After panning, a predominant pattern with a frequency of 35% (lanes 2, 3, 4, 6, 8, 10, and 11) was obtained, which was considered the selected scFv against RTF for the following experiments.\n\n(A) Heterogeneous patterns were obtained for the un-panned library. A common pattern with a frequency of 35% (lanes 2, 3, 4, 6, 8, 10 and 11) demonstrated the selection of a specific scFv after panning. (B) Marker – φX174 DNA (72–1353 bp).\n\nTo evaluate the reactivity of the scFv antibody to the RTF peptide, phage ELISA was performed. The anti-RTF scFv antibodies produced a positive ELISA signal, with an average OD of 0.441 at 405 nm (Figure 2). The baseline reading from the wells with no peptide was 0.075. 
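The Methods above state that treated and untreated growth values were compared by ANOVA in GraphPad Prism. The same comparison can be illustrated with a minimal stdlib Python sketch of the one-way ANOVA F statistic; the triplicate OD490 readings are hypothetical (not the study's data), and 7.71 is the tabulated F(1, 4) critical value at α = 0.05.

```python
from statistics import mean

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across the given groups."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    # Between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ssb / df_between) / (ssw / df_within)

# Hypothetical triplicate OD490 readings (illustrative only)
untreated = [0.50, 0.52, 0.48]
treated = [0.26, 0.25, 0.27]
F = one_way_anova_F(untreated, treated)
# F(1, 4) at alpha = 0.05 is 7.71; a larger F implies p < 0.05
print(F > 7.71)  # True
```

In a real analysis one would report the exact p value (e.g. via `scipy.stats.f_oneway` or Prism itself); the comparison against the tabulated critical value is shown only to keep the sketch dependency-free.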
Unrelated peptide, unrelated scFv and M13KO7 wells showed average absorbances of 0.132, 0.142, and 0.136, respectively.\n\nPlates were set in duplicates and wells in tetraplicates.\n\nThe percentages of cell viability after a 24 h treatment with anti-RTF scFv for the prostate cancer cell lines are shown in Figure 3. Three concentrations (200, 500 and 1000 scFv/cell) demonstrated significant cell growth inhibition in the three cell lines (P value < 0.05). The best growth inhibition was at a concentration of 1000 scFv/cell, and the percentages of cell growth for PC-3, DU-145 and LNCaP cells at this concentration were 52, 61 and 73%, respectively. No inhibitory effect was observed when the cells were treated with M13KO7 helper phage and 2TY media (negative controls). No significant growth inhibition was detected for the glioblastoma cell lines, U-87 MG and A-172 (Figure 4).\n\nGrowth percentage of (A) PC-3, (B) DU-145 and (C) LNCaP cell lines after 24 h treatment with 100, 200, 500 and 1000 anti-RTF scFv/cell. Results of six experiments; *P value < 0.05.\n\nGrowth percentage of (A) U-87 and (B) A-172 cell lines after 24 h treatment with 100, 200, 500 and 1000 anti-RTF scFv/cell. Non-significant growth reduction was observed. Results of six experiments; *P value < 0.05.\n\nApoptosis was induced in the prostate cancer cell lines after a 24 h treatment with 1000 scFv/cell. In total, 50, 40 and 25% of PC-3, Du-145 and LNCaP prostate cancer cells, respectively, showed apoptotic cell death (Figure 5), whereas no apoptosis was detected for the U-87 MG and A-172 glioblastoma cell lines, indicating that the treated cells remained viable (Figure 6).\n\nRepresentative histograms of untreated cells (red) and treated cells (outlined by a black line) after a 24 h incubation period for (A) PC-3, (B) DU-145 and (C) LNCaP prostate cells. A shift in fluorescence intensity observed for the treated cells demonstrated apoptotic cells. 
Apoptosis occurred in 50% of PC-3, 40% of DU-145 and 25% of LNCaP cells compared to untreated cells.\n\nRepresentative histograms of untreated cells (red) and treated cells (outlined by a black line) after a 24 h incubation period for (A) U-87 MG and (B) A-172 glioblastoma cells. Untreated and treated glioblastoma cells overlapped, indicating that the treated cells were viable and non-apoptotic.\n\n\nDiscussion\n\nRecombinant DNA technology enables the production of human scFv fragments with desirable properties for tissue penetration, thereby providing immunotherapeutic reagents for targeted therapy of cancers25,26. The potential role of scFvs in targeted therapy of melanoma, lung, breast, colorectal and prostate cancers has been shown previously25,27–30. To isolate a functional scFv, an identified cell target should be selected31. Because of its function in regulating pH in the tumor milieu, RTF has been considered an ideal target for cancer immunotherapy, and an anti-RTF monoclonal antibody has been capable of inducing apoptosis in an ovarian carcinoma cell line13.\n\nIn the present study, we applied scFv antibodies to target the RTF molecule in both prostate and glioblastoma cancer cells. Amino acids 488–510 of RTF, which were used to isolate an anti-RTF monoclonal antibody32, were applied to select a specific human scFv against the peptide. After isolation of the scFv antibody against RTF from a large phage display library (RRID: AB_2636849), MTT and Annexin V assays were performed to evaluate the anti-proliferative and apoptotic effects of the anti-RTF scFv antibody. The obtained results demonstrated significant inhibition of cell proliferation after 24 h treatment with 200–1000 scFv/cell for the three prostate cancer cell lines compared to untreated cells. A comparison of cell growth among the three prostate cancer cell lines revealed that the inhibition of cell growth in the PC-3 cell line was greater than in the two other cell lines (Du-145 and LNCaP). 
This might be due to a higher expression of RTF antigen in PC-3 cells and/or better accessibility of RTF to the anti-RTF scFv antibody in PC-3 in comparison with the Du-145 and LNCaP cell lines. Although Bermudez et al.17 demonstrated that the amount of RTF mRNA in PC-3 is higher than in LNCaP cells, there have been no experiments comparing the levels of RTF mRNA in the Du-145 cell line with those in the PC-3 and LNCaP cell lines. Therefore, the higher growth inhibition in PC-3 after incubation with anti-RTF scFv could be due to higher amounts of RTF molecule in PC-3 than in LNCaP.\n\nNo proliferation inhibition was detected for the glioblastoma cell lines after incubation with different concentrations of the anti-RTF antibody compared with untreated cells, although the expression of RTF on glioblastoma cells has been confirmed33. There could be several possible reasons for the resistance of these cells to the anti-RTF effect. One could be the lack of accessibility of the RTF molecule to the scFv antibody at the cell surface, due to antigen masking. The effect of masking of human epidermal growth factor receptor 2 (ErbB2) via hyaluronan has previously been reported. The findings demonstrated that masking of the trastuzumab-binding epitope by hyaluronan took place in trastuzumab-resistant breast cancer cell lines, such as JIMT-1. This masking contributes to tumor cell escape from receptor-oriented therapy. Antigen masking can happen through overexpression of mucin (MUC) in tumor cells34. In a study performed to understand the causative mechanism(s) of trastuzumab resistance in breast and some other cancers, it was discovered that MUC4 masks the trastuzumab-binding epitope of ErbB2, resulting in reduced binding of trastuzumab35. Mishima et al.36 demonstrated increased expression of podoplanin, a mucin-like transmembrane sialoglycoprotein, in glioblastoma tumor cells. 
Therefore, a similar masking mechanism might also operate in glioblastoma cells, which would preclude RTF binding to the anti-RTF scFv antibody. The existence of other isoforms (a1, a3, and a4) of the a subunit of the proton pump on the cell surface can be considered another possible mechanism that inhibits the anti-proliferative effects of anti-RTF scFv antibodies on the U-87 MG and A-172 cell lines. In addition, the proton pump is not the only mechanism of pH regulation in tumor cells. A number of strategies are involved in the control and regulation of pH in glioblastoma cells, such as sodium-proton exchanger-1 (NHE1). It has been demonstrated that the U-87 MG cell line increases the expression of the NHE1 molecule, in contrast to normal brain cells, to maintain an optimal intracellular pH37. However, the mechanism(s) involved in the non-responsiveness of U-87 MG and A-172 to anti-RTF scFv antibody remain to be elucidated.\n\nIt has been shown that proton pump inhibitors induce apoptosis in human B-cell tumors through a caspase-independent mechanism38. The apoptosis-inducing effects of anti-RTF monoclonal antibody on ovarian carcinoma cells were assessed using the Annexin V-FITC assay, and the J774A1 macrophage cell line incubated with anti-RTF showed a complete inhibition of surface ATPase activity39 (US patent, US 7211257 B2). In addition, the role of anti-RTF in T-cell apoptosis has been shown18. In the present study, the results of the Annexin V-FITC assay were consistent with the MTT assay: apoptosis was induced in the three treated prostate cancer cell lines; however, no evidence of apoptosis was observed in the treated glioblastoma cells. In recent years many efforts have been made to induce apoptosis in tumor cells through antibodies. For example, an anti-Fas monoclonal antibody was produced and exploited for apoptosis induction in several glioblastoma cell lines. 
Although some glioblastoma cell lines, such as LN-18 and LN-215, were sensitive to treatment with the monoclonal antibody against Fas, other cell lines, such as LN-308 and LN-405, showed resistance to anti-Fas antibody-mediated apoptosis. The reason for this sensitivity was higher expression of the Fas molecule in sensitive than in resistant cell lines40. Single chain antibodies to some tumor markers, such as PSCA and the IL-25 receptor, have been capable of triggering apoptosis in tumor cells23,41. The lack of accessibility of RTF to the scFv antibody and, probably, the presence of compensatory mechanisms of pH regulation not only can inhibit an anti-proliferative effect, but also can protect the glioblastoma cells from undergoing apoptosis. By comparison, these characteristics were not observed for the prostate cancer cells, and the novel scFv selected in this study showed significant anti-cancer effects on the prostate cancer cells.\n\nDue to the several advantages of scFvs42, a number of single chain antibodies have been selected against prostate cancer biomarkers, such as PSA, PSMA and PSCA41,43,44. Although anti-PSMA scFv has shown promising effects for prostate cancer immunotherapy and has been introduced as a tool for building theranostic reagents for prostate cancer30, it originated from a murine monoclonal antibody, which induces a human anti-mouse antibody (HAMA) response45,46. In contrast, the scFv selected in this study originated from human immunoglobulin genes and is not expected to elicit a HAMA reaction. In addition, genetic manipulation can improve the antibody's effect by producing fusion proteins with additional effector functions46–49. The inhibitory effect of human scFvs against prostate cancer was also reported by Vaday et al.50. In that study, two scFvs were selected against CXCR4 and their inhibitory effects on CXCL12-mediated prostate cancer cell activation were investigated. 
The high-affinity scFvs bound to the receptor CXCR4 and inhibited its ligand, CXCL12, which resulted in cancer cell inhibition.\n\nThe panning process used in the present study to select scFvs against a target enriches the phage antibody library and leads to the isolation of specific antibodies with high affinity and high specificity. The novel anti-RTF single chain antibody selected in this study, with significant anti-proliferative and apoptotic effects on the three prostate cancer cell lines, offers a candidate for specific prostate cancer immunotherapy. Future efforts should be focused on testing the ability of anti-RTF scFv to inhibit prostate cancer growth in experimental models. Manipulation of the selected anti-RTF scFv and conjugation with a toxin may increase its ability to eliminate tumor cells and contribute to glioblastoma immunotherapy.\n\nDataset 1: Phage ELISA raw data. doi, 10.5256/f1000research.10803.d15180751\n\nDataset 2: Cell proliferation assay (MTT assay) raw data of three prostate cancer and two glioblastoma cell lines. doi, 10.5256/f1000research.10803.d15180852\n\nDataset 3: Apoptosis raw data for three prostate cancer and two glioblastoma cell lines. doi, 10.5256/f1000research.10803.d15180953", "appendix": "Author contributions\n\n\n\nForoogh Nejatollahi participated in the study conception, coordinated and helped to draft the manuscript; Payam Bayat performed data collection, analyzed the data and drafted the manuscript. 
Bahareh Moazen participated in data collection and interpretation and helped draft the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was financially supported by Shiraz University of Medical Sciences (grant number 90–5538).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe present article was extracted from a thesis written by Payam Bayat (unpublished thesis: Selection of human recombinant antibodies against RTF and evaluation of their effects on prostate and glioblastoma cell lines; grant number, 90–5538).\n\n\nReferences\n\nTorre LA, Bray F, Siegel RL, et al.: Global cancer statistics, 2012. CA Cancer J Clin. 2015; 65(2): 87–108. PubMed Abstract | Publisher Full Text\n\nAmer MH: Gene therapy for cancer: present status and future perspective. Mol Cell Ther. 2014; 2: 27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMukherji D, Temraz S, Wehbe D, et al.: Angiogenesis and anti-angiogenic therapy in prostate cancer. Crit Rev Oncol Hematol. 2013; 87(2): 122–31. PubMed Abstract | Publisher Full Text\n\nVieweg J: Immunotherapy for advanced prostate cancer. Rev Urol. 2007; 9(Suppl 1): S29–38. PubMed Abstract | Free Full Text\n\nWestdorp H, Sköld AE, Snijer BA, et al.: Immunotherapy for Prostate Cancer: Lessons from Responses to Tumor-Associated Antigens. Front Immunol. 2014; 5: 191. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeves H, Kwok HF: Recent advances in the field of anti-cancer immunotherapy. BBA Clin. 2015; 3: 280–288. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRotow J, Gameiro SR, Madan RA, et al.: Vaccines as monotherapy and in combination therapy for prostate cancer. Clin Transl Sci. 2010; 3(3): 116–122. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKantoff PW, Schuetz TJ, Blumenstein BA, et al.: Overall survival analysis of a phase II randomized controlled trial of a Poxviral-based PSA-targeted immunotherapy in metastatic castration-resistant prostate cancer. J Clin Oncol. 2010; 28(7): 1099–105. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAkhtar NH, Pail O, Saran A, et al.: Prostate-Specific Membrane Antigen-Based Therapeutics. Adv Urol. 2012; 2012: 973820. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGraddis TJ, McMahan CJ, Tamman J, et al.: Prostatic acid phosphatase expression in human tissues. Int J Clin Exp Pathol. 2011; 4(3): 295–306. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaran D, Dubey S, Van Veldhuizen P, et al.: Dual antigen target-based immunotherapy for prostate cancer eliminates the growth of established tumors in mice. Immunotherapy. 2011; 3(6): 735–46. PubMed Abstract | Publisher Full Text\n\nDay CH, Fanger GR, Retter MW, et al.: Characterization of KLK4 expression and detection of KLK4-specific antibody in prostate cancer patient sera. Oncogene. 2002; 21(46): 7114–7120. PubMed Abstract | Publisher Full Text\n\nSennoune SR, Bakunts K, Martínez GM, et al.: Vacuolar H+-ATPase in human breast cancer cells with distinct metastatic potential: distribution and functional activity. Am J Physiol Cell Physiol. 2004; 286(6): C1443–C52. PubMed Abstract | Publisher Full Text\n\nSennoune SR, Luo D, Martínez-Zaguilán R: Plasmalemmal vacuolar-type H+-ATPase in cancer biology. Cell Biochem Biophys. 2004; 40(2): 185–206. PubMed Abstract | Publisher Full Text\n\nNishi T, Forgac M: The vacuolar (H+)-ATPases--nature's most versatile proton pumps. Nat Rev Mol Cell Biol. 2002; 3(2): 94–103. PubMed Abstract | Publisher Full Text\n\nBoomer JS, Lee GW, Givens TS, et al.: Regeneration and tolerance factor's potential role in T-cell activation and apoptosis. Hum Immunol. 2000; 61(10): 959–71. 
PubMed Abstract | Publisher Full Text\n\nBermudez LE: V-ATPase at the Cell Surface in Highly Metastatic Prostate Cancer Cells. Texas Tech University; 2010. Reference Source\n\nNejatollahi F, Hodgetts SJ, Vallely PJ, et al.: Neutralising human recombinant antibodies to human cytomegalovirus glycoproteins gB and gH. FEMS Immunol Med Microbiol. 2002; 34(3): 237–44. PubMed Abstract | Publisher Full Text\n\nNejatollahi F, Malek-hosseini Z, Mehrabani D: Development of Single Chain Antibodies to P185 Tumor Antigen. Iranian Red Medical Journal. 2008; 10(4): 298–302. Reference Source\n\nCheng Y, Li Z, Xi H, et al.: A VL-linker-VH Orientation Dependent Single Chain Variable Antibody Fragment against Rabies Virus G Protein with Enhanced Neutralizing Potency in vivo. Protein Pept Lett. 2016; 23(1): 24–32. PubMed Abstract | Publisher Full Text\n\nNejatollahi F, Jaberipour M, Asgharpour M: Triple blockade of HER2 by a cocktail of anti-HER2 scFv antibodies induces high antiproliferative effects in breast cancer cells. Tumour Biol. 2014; 35(8): 7887–95. PubMed Abstract | Publisher Full Text\n\nLi K, Zettlitz KA, Lipianskaya J, et al.: A fully human scFv phage display library for rapid antibody fragment reformatting. Protein Eng Des Sel. 2015; 28(10): 307–16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYounesi V, Nejatollahi F: Induction of anti-proliferative and apoptotic effects by anti-IL-25 receptor single chain antibodies in breast cancer cells. Int Immunopharmacol. 2014; 23(2): 624–32. PubMed Abstract | Publisher Full Text\n\nNejatollahi F, Asgharpour M, Jaberipour M: Down-regulation of vascular endothelial growth factor expression by anti-Her2/neu single chain antibodies. Med Oncol. 2012; 29(1): 378–83. PubMed Abstract | Publisher Full Text\n\nChames P, Van Regenmortel M, Weiss E, et al.: Therapeutic antibodies: successes, limitations and hopes for the future. Br J Pharmacol. 2009; 157(2): 220–33. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAhmad ZA, Yeap SK, Ali AM, et al.: scFv antibody: principles and clinical application. Clin Dev Immunol. 2012; 2012: 980250. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCianfriglia M, Fiori V, Dominici S, et al.: CEACAM1 is a Privileged Cell Surface Antigen to Design Novel ScFv Mediated-Immunotherapies of Melanoma, Lung Cancer and Other Types of Tumors. Open Pharmacol J. 2012; 6: 1–11. Publisher Full Text\n\nMohammadi M, Nejatollahi F, Ghasemi Y, et al.: Anti-Metastatic and Anti-Invasion Effects of a Specific Anti-MUC18 scFv Antibody on Breast Cancer Cells. Appl Biochem Biotechnol. 2017; 181(1): 379–390. PubMed Abstract | Publisher Full Text\n\nBremer E: Targeting of the Tumor Necrosis Factor Receptor Superfamily for Cancer Immunotherapy. ISRN Oncol. 2013; 2013: 371854. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrigerio B, Fracasso G, Luison E, et al.: A single-chain fragment against prostate specific membrane antigen as a tool to build theranostic reagents for prostate cancer. Eur J Cancer. 2013; 49(9): 2223–32. PubMed Abstract | Publisher Full Text\n\nRanjbar R, Nejatollahi F, Nedaei Ahmadi AS, et al.: Expression of Vascular Endothelial Growth Factor (VEGF) and Epidermal Growth Factor Receptor (EGFR) in Patients With Serous Ovarian Carcinoma and Their Clinical Significance. Iran J Cancer Prev. 2015; 8(4): e3428. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKulshrestha A, Katara GK, Ibrahim S, et al.: Vacuolar ATPase ‘a2’ isoform exhibits distinct cell surface accumulation and modulates matrix metalloproteinase activity in ovarian cancer. Oncotarget. 2015; 6(6): 3797–3810. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoth P, Aulwurm S, Gekel I, et al.: Regeneration and tolerance factor: a novel mediator of glioblastoma-associated immunosuppression. Cancer Res. 2006; 66(7): 3852–8. 
PubMed Abstract | Publisher Full Text\n\nPályi-Krekk Z, Barok M, Isola J, et al.: Hyaluronan-induced masking of ErbB2 and CD44-enhanced trastuzumab internalisation in trastuzumab resistant breast cancer. Eur J Cancer. 2007; 43(16): 2423–33. PubMed Abstract | Publisher Full Text\n\nSingh AP, Chaturvedi P, Batra SK: Emerging roles of MUC4 in cancer: a novel target for diagnosis and therapy. Cancer Res. 2007; 67(2): 433–6. PubMed Abstract | Publisher Full Text\n\nMishima K, Kato Y, Kaneko MK, et al.: Increased expression of podoplanin in malignant astrocytic tumors as a novel molecular marker of malignant progression. Acta Neuropathol. 2006; 111(5): 483–8. PubMed Abstract | Publisher Full Text\n\nMcLean LA, Roscoe J, Jorgensen NK, et al.: Malignant gliomas display altered pH regulation by NHE1 compared with nontransformed astrocytes. Am J Physiol Cell Physiol. 2000; 278(4): C676–C88. PubMed Abstract\n\nDe Milito A, Iessi E, Logozzi M, et al.: Proton pump inhibitors induce apoptosis of human B-cell tumors through a caspase-independent mechanism involving reactive oxygen species. Cancer Res. 2007; 67(11): 5408–17. PubMed Abstract | Publisher Full Text\n\nBeaman K: Methods for inducing apoptosis in ovarian carcinoma cells using an anti-regeneration and tolerance factor antibody. US patent, US 7211257 B2. Reference Source\n\nWeller M, Frei K, Groscurth P, et al.: Anti-Fas/APO-1 antibody-mediated apoptosis of cultured human glioma cells. Induction and modulation of sensitivity by cytokines. J Clin Invest. 1994; 94(3): 954–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNejatollahi F, Abdi S, Asgharpour M: Antiproliferative and apoptotic effects of a specific antiprostate stem cell single chain antibody on human prostate cancer cells. J Oncol. 2013; 2013: 839831. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMohammadi M, Nejatollahi F, Sakhteman A, et al.: Insilico analysis of three different tag polypeptides with dual roles in scFv antibodies. 
J Theor Biol. 2016; 402: 100–6. PubMed Abstract | Publisher Full Text\n\nParker SA, Diaz IL, Anderson KA, et al.: Design, production, and characterization of a single-chain variable fragment (ScFv) derived from the prostate specific membrane antigen (PSMA) monoclonal antibody J591. Protein Expr Purif. 2013; 89(2): 136–45. PubMed Abstract | Publisher Full Text\n\nWang Y, Dossey AM, Froude JW 2nd, et al.: PSA fluoroimmunoassays using anti-PSA ScFv and quantum-dot conjugates. Nanomedicine (Lond). 2008; 3(4): 475–83. PubMed Abstract | Publisher Full Text\n\nNejatollahi F, Silakhori S, Moazen B: Isolation and Evaluation of Specific Human Recombinant Antibodies from a Phage Display Library against HER3 Cancer Signaling Antigen. Middle East J Cancer. 2014; 5(3): 137–44. Reference Source\n\nMoazen B, Ebrahimi E, Nejatollahi F: Single Chain Antibodies Against gp55 of Human Cytomegalovirus (HCMV) for Prophylaxis and Treatment of HCMV Infections. Jundishapur J Microbiol. 2016; 9(3): e16241. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMohammadi M, Nejatollahi F: 3D structural modeling of neutralizing SCFV against glycoprotein-D of HSV-1 and evaluation of antigen-antibody interactions by bioinformatic methods. International Journal of Pharma and Bio Sciences. 2014; 5(4): 835–847. Reference Source\n\nNejatollahi F, Ranjbar R, Younesi V, et al.: Deregulation of HER2 downstream signaling in breast cancer cells by a cocktail of anti-HER2 scFvs. Oncol Res. 2013; 20(8): 333–40. PubMed Abstract | Publisher Full Text\n\nEhsaei B, Nejatollahi F, Mohammadi M: Specific single chain antibodies against a neuronal growth inhibitor receptor, nogo receptor 1: Promising new antibodies for the immunotherapy of Multiple Sclerosis. ShirazE-Med J. 2017; 18(1):e45358.Publisher Full Text\n\nVaday GG, Hua SB, Peehl DM, et al.: CXCR4 and CXCL12 (SDF-1) in prostate cancer: inhibitory effects of human single chain Fv antibodies. Clin Cancer Res. 2004; 10(16): 5630–9. 
PubMed Abstract | Publisher Full Text\n\nNejatollahi F, Bayat P, Moazen B: Dataset 1 in : Cell growth inhibition and apoptotic effects of a specific anti-RTFscFv antibody on prostate cancer, but not glioblastoma, cells. F1000Research. 2017.Data Source\n\nNejatollahi F, Bayat P, Moazen B: Dataset 2 in: Cell growth inhibition and apoptotic effects of a specific anti-RTFscFv antibody on prostate cancer, but not glioblastoma, cells. F1000Research. 2017.Data Source\n\nNejatollahi F, Bayat P, Moazen B: Dataset 3 in: Cell growth inhibition and apoptotic effects of a specific anti-RTFscFv antibody on prostate cancer, but not glioblastoma, cells. F1000Research. 2017. .Data Source" }
[ { "id": "20321", "date": "24 Feb 2017", "name": "Issam Alshami", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nIn this study, the researchers discussed well the inhibitory effects of a specific anti-RTF scFv, comparing three prostate cancer cell lines and two glioblastoma cell lines. The results are interesting: the selected anti-RTF scFv antibody could be an effective new alternative for prostate cancer immunotherapy, and the present study provides scientific evidence supporting this.\nData and references are up to date, and sufficient information has been provided for replication of the experiment.\nThe anti-RTF scFv selected in this research is a novel antibody. The anti-proliferative and apoptotic effects reported here make this antibody an attractive agent for immunotherapy against prostate cancer and other cancers that express this antigen. As the authors mention, the unique properties of scFv antibodies have made these small antibodies ideal for targeted therapy. The anti-RTF scFv, by blocking RTF, would disturb the regulation of extracellular and intracellular pH in cells and lead to cancer cell death, as shown by the authors. 
The in vivo study using this antibody is recommended.", "responses": [] }, { "id": "21172", "date": "29 Mar 2017", "name": "Judith Niesen", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nNejatollahi et al. presented in their study a novel human scFv fragment against RTF. The manuscript is well written, and the results are demonstrated in a detailed and informative way. The manuscript is only ready for indexing after considering the following major and minor comments:\n\nAbstract:\n\nBeginning: It would be better to say scFvs instead of scFv; use the plural form. Methods: library of scFvs instead of scFv; “The anti-proliferative and apoptotic effects of the antibody” please correct to scFv. Results: Please add: of the prostate CANCER cells after 24 h. Conclusions: please add: were greater than IN Du-145 and LNCaP cells. Please write anti-RTF scFv consistently.\nIntroduction: You describe the properties of scFvs, but human origin is nothing special for scFvs; mAbs can also be human, and scFvs can also be of mouse origin. Of course it is an advantage if the scFv is human. The introduction is short compared to the abstract; normally the abstract is much shorter than the introduction. 
You could go into more detail about the mode of action of the scFvs in the cancer cells (or you can mention this point in the discussion section).\nMethods: Annexin V/FITC assay: please specify which well plate you used (6-well, 12-well, 24-well)? Ideally, a negative control should also be included, such as a non-binding scFv to assess unspecific effects, and/or a negative control cell line without the antigen/receptor, etc. Make sure to write 1 h (1 h, hrs) and 10 mg/ml consistently throughout the manuscript. Please correct: “U-87 MGand A-17”. Please add the concentration unit (100, 200, 500, 1000 scFv/cell), e.g. nM. Cell proliferation assay: Did you use a positive control (100% killing) as a blank value?\nResults: Selection of an anti-RTF scFv antibody: please clarify which clone was used as the tested scFv. Figure 1: Please relocate (B) in the figure legend. Figure 2: Please delete the title X and Data 1. MTT assay: It would be great to determine EC/IC50 values for better comparison to similarly acting scFvs/antibodies. Moreover, the unit of scFv/cell is not usual, in our opinion; please give the concentration in e.g. molarity. Annexin V assay: It would be great if you could also show the dot plots underlying the histograms, which is the usual way to demonstrate apoptosis, because in the dot plots you can distinguish early and late apoptotic/necrotic cells as well as live cells. It would be nice to have a binding analysis of the scFv.\n\nDiscussion:\nDo you have any affinity data for your new scFvs? Could you give information about the mode of action of the scFvs in the cancer cells? For mAbs it could be ADCC, CDC or blocking of signaling pathways; for scFv-based immunotoxins it is receptor-mediated endocytosis, with the toxin acting inside the cancer cells… It would be nice to have a comparison between the toxicity of the full-length mAb and the novel scFv. Are there any data available? 
Binding analysis of the novel scFv to the cell lines used would be advantageous for the discussion.", "responses": [] }, { "id": "20809", "date": "18 Apr 2017", "name": "Masoumeh Rajabibazl", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is an interesting article and topic, and the article is well written.\nResearch to find new ways to treat cancer, especially common cancers such as prostate cancer, is valuable. Research on immunotherapy of various cancers with full-length antibodies or, better still, antibody fragments such as scFvs that can inhibit the growth of cancer cells is important.\nThe study is well designed and good, valid results have been achieved.\nSelection of an anti-RTF scFv antibody with anti-proliferative and apoptotic effects against prostate cancer cells is promising. But since scFvs have disadvantages in comparison with whole mAbs, in vivo work is recommended in future studies.\nThe following points need to be addressed:\nIt is essential that the results of the phage ELISA be added. Figures 1, 2 and 3 need further explanation.", "responses": [] } ]
1
https://f1000research.com/articles/6-156
https://f1000research.com/articles/6-143/v1
14 Feb 17
{ "type": "Opinion Article", "title": "Surgical education and adult learning: Integrating theory into practice", "authors": [ "Prem Rashid" ], "abstract": "Surgical education continues to evolve from the master-apprentice model. Newer methods are needed to manage the dual challenge of educating trainees while providing safe surgical care. This requires integrating adult learning concepts into delivery of practical training and education in busy clinical environments. A narrative review aimed at outlining and integrating adult learning and surgical education theory was undertaken. Additionally, this information was used to relate the theory to the practical delivery of surgical training and education in day-to-day surgical practice. Concepts were sourced from reference material. Additional material was found using a PubMed search of the words: ‘surgical education theory’ and ‘adult learning theory medical’. This yielded 1351 abstracts, of which 43 articles with a focus on key concepts in adult education theory were used. Key papers were used to formulate structure and additional cross-referenced papers were included where appropriate. Current concepts within adult learning have a lot to offer when considering how to better deliver surgical education and training. Better integration of adult learning theory can be fruitful. Individual surgical teaching units need to rethink their paradigms and consider how each individual can contribute to the education experience. Up-skilling courses for trainers can do much to improve the delivery of surgical education. Understanding adult learning concepts and integrating these into day-to-day teaching can be valuable.", "keywords": [ "surgical education", "surgical training", "adult learning", "andragogy", "pedagogy" ], "content": "Introduction\n\nIt is important to understand the current concepts in surgical education in order to explore whether surgical training programs can meet contemporary theory. 
A change to any program needs to have, at least, a theoretical basis, although it is accepted that a theoretical basis does not always translate into practical reality. This article draws on adult education theory and argues that adult learning frameworks can offer ways to improve surgical education processes.\n\n\nMethodology\n\nThis article presents a narrative review, drawing on systematic review methods, aimed at outlining adult education theory and how awareness of such conceptual theory can be related and integrated into the practical delivery of training and education in day-to-day surgical practice. Concepts were sourced from a reference text1 and coursework materials. Additional material was found using a PubMed search of the words: ‘surgical education theory’ and ‘adult learning theory medical’. This yielded 1351 abstracts that offered either pure adult learning theory concepts and/or integration of theory into surgical education models. In total, 43 relevant articles with a focus on key concepts in adult education theory were used as core source material. Key papers were used to formulate structure and additional cross-referenced papers were included where appropriate.\n\n\nWhen did we learn to teach?\n\nAll graduating doctors engage in teaching as part of ongoing professional activity, and some do so without formally recognising it as such because it occurs in day-to-day practice, with colleagues, juniors, patients and ancillary staff. Learning to teach is not commonly part of any general medical curriculum. Like many professional endeavours, teaching by those more experienced becomes a matter of course. 
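The literature search reported in the Methodology above is straightforward to reproduce programmatically. The following sketch (not part of the original article) issues the same two PubMed queries through NCBI's public E-utilities `esearch` endpoint; the helper names are illustrative, and hit counts will have drifted since the 1351-abstract figure reported in 2017.

```python
# Hypothetical sketch of the PubMed search described in the Methodology,
# using NCBI's public E-utilities "esearch" endpoint. Helper names and the
# choice to fetch only the hit count are illustrative, not the author's.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_search_url(term: str) -> str:
    """Build an esearch URL that returns only the match count as JSON."""
    return EUTILS + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": 0, "retmode": "json"}
    )

def count_hits(term: str) -> int:
    """Fetch the total number of PubMed records matching the term."""
    with urlopen(build_search_url(term)) as resp:
        return int(json.load(resp)["esearchresult"]["count"])

if __name__ == "__main__":
    # The two searches reported in the article:
    for term in ("surgical education theory", "adult learning theory medical"):
        print(term, "->", build_search_url(term))
```

Setting `retmax=0` asks the endpoint for the count alone rather than record IDs, which is enough to document how large the candidate pool was before manual screening down to the 43 core articles.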
With the challenges of delivering clinical care and ensuring satisfactory educational experience, teaching in surgical education may need a more guided process.\n\nMany surgeons and trainees alike equate surgical training with the technical aspects of the surgical craft, but it is known that there are a multitude of technical and non-technical skills that may be taught and learnt for true professional development2.\n\nIt is understandable that trainers and trainees focus upon the technical aspects, since it is that aspect of the craft that differentiates it from other branches of medicine. Nevertheless, collaborative non-technical skills do play a significant role in day-to-day practice and, equally, need to be mastered in that context. Technical skills, including the ever-changing technological developments, require structured learning as well, to ensure acquisition of skill and patient safety. Failure to monitor these processes could lead to catastrophic consequences2–4. Therefore, the question is now not if the training process should change, but how it can change within the current system of delivering surgical training2.\n\n\nStructured surgical education\n\nSurgical education and training can be structured5. The two terms have been used interchangeably, but clearly refer to different aspects of the global education process. Training refers to the practical aspects of learning the craft, and the education process encompasses the appreciation of the background complexities and knowledge5. 
Both can include technical and non-technical aspects.\n\nThere are significant challenges in evaluating methods of delivering surgical education, including the technical and non-technical skills, and there continues to be a lack of correlation between the new teaching and assessment processes and their evaluation for efficacy2.\n\nGood teachers in any educational endeavour have a variety of common attributes, which includes a good knowledge base, experience, and an ability to patiently teach and empower students. In true educational reform, it is important to study innovative practices across other platforms and constantly review the methods of teaching, styles of delivery and, in doing so, develop curriculum and educational policy accordingly. Educational reform, however, will never be subject to randomised control trials because of the multitude of variables involved6.\n\nIn addition, the surgical education model needs to be delivered in the context of the day-to-day provision of 24 hour surgical services. This key limitation is what most commonly explains the unusually long period of training required to produce a qualified surgeon. The apprenticeship model, with long immersion times, has been the traditional model, but is not efficient and continues to be improved.\n\nIn curriculum development, learning outcomes are important, and stepwise processes to achieve aspects of the curriculum need to be defined7. Learning outcomes are about what the trainee should be able to do, or understand. However, there remain considerable gaps between the documented curriculum, how it is then taught and experienced, and subsequently assessed6. Innovation and managing logistics are key to finding more effective methods8.\n\n\nStepwise learning\n\nIn an ideal world, stepwise training processes will lead to global proficiency in a professional activity9–12. Close supervision will ensure patient safety. 
It is interesting to reflect that the ‘conservative legacy’ of surgery has also been an impediment in terms of surgical education13.\n\nThere are critical educational issues that need to be reviewed to address surgical education. These are:\n\n1. improved working conditions and reduction of hours in the working day;\n\n2. the apprenticeship model with a prolonged time-based system and ‘learning by osmosis’ needs to be modified;\n\n3. the need to embrace contemporary reflective learning;\n\n4. the need to structure technical and non-technical objectives in surgery;\n\n5. the need for surgical educators, as well as on-site conditions, to provide surgical teaching;\n\n6. the need to address professionalism to meet profession and community expectations.\n\nSurgical teaching units need to rethink their paradigms and consider how each individual, within a unit, can contribute to the education experience. Up-skilling courses can do much to improve the delivery of surgical education.\n\n\nEvolving the teaching model\n\nWe still foster a master-apprentice system, although more structure with defined objectives has become the norm, as this concept fits in with contemporary adult learning principles. Our trainees do ‘learn on the job’ as a matter of course. Much of the original model was unstructured, and that has progressively changed so that objectives have become clearer and assessment processes have become more transparent and fair, including being subject to query and appeal. As the system has evolved, the greatest change has been in assisting busy clinical surgeons, who often teach ‘pro-bono’, to up-skill and adopt progressive methods.\n\n\nWhat has changed?\n\n• Surgical training programs have evolved over time, including objective assessment processes, with constructive feedback being the mechanism for dealing with performance issues. 
Assessment is the process of establishing whether a pre-determined standard has been achieved in a competency.\n\n• The process of assessment has become more practical and workable within the time and resource constraints of a busy clinical unit.\n\n• The process is also more clearly outlined, reasonable, valid, objective, fair and reliable.\n\n• Providing constructive, objective and specific feedback, as well as an opportunity to discuss areas that require attention, has provided scope for improvement. There should be a clear appreciation that failure to correct at an acceptable rate should have consequences that are outlined14.\n\n• All objectives that are set need to be relevant, achievable, specific, timely (for the level of training), regular and stimulate the desired learning15.\n\nAssessment processes have evolved over time and now include a variety of technical and non-technical competency assessments in line with key competencies (e.g., http://www.surgeons.org/becoming-a-surgeon/surgical-education-training/competencies/). Trainees are also often given the opportunity to provide confidential assessment of the training post at the end of each term. This is primarily used to assist in accreditation processes.\n\n\nHow can we improve teaching?\n\nThere has generally been a chasm between educational theory and practice, which is not unique to medical education. Theory may have practical flaws, but often does inform and improve practical day-to-day performance16.\n\n\nAdult learning principles\n\nThere are a number of theories to help us improve the way we understand medical education.\n\n• Adult learning characteristics17 – differences between adult and child learners\n\n• Adult’s life situation18 – where individuals are in the variable stages of life\n\n• Changes in consciousness19 – ability to reflect upon experience and environment20.\n\nNo single theory can be applied universally. 
Knowles’ Andragogy concept was proposed as the ‘art and science of helping adults learn’21. This concept was modified to describe it as a continuum from pedagogical principles. This concept ultimately is about what is unique to adult learning and, as such, different to childhood learning22. The context in which adults learn is different because they most often choose to learn, which is a different motivation to compulsory schooling. The adult learner also has a large social and, often, professional experience, which can contribute to reflection. The pace and meaning for an adult who pursues further education is also different, as are the pressures of life in balancing work and personal matters21,23,24.\n\n\nTheories of adult learning\n\nThere are a number of theories that guide us in appreciating adult learning concepts, and understanding these can help us shape teaching programs. These include:\n\nSocial cognitive theory acknowledges the interactive or social aspect of learning, and the influence of the environment also plays an important part. The collective is made up of the environment, personal factors and behavioural issues25. Feedback becomes a major influence as adults take on new skills. Adults have the ability to apply forethought, bring experience and can self-reflect. They can regulate as they integrate new information. This means that adult learning must include objectives that are desired and relevant, task activities that lead to fulfilment of the desired knowledge and an ability to develop upon what they bring to the learning environment20.\n\nReflective practice is one of the core concepts of adult professional development where new information is interpreted in the light of past knowledge and experience. Adults will rework and reframe from the perspectives they have26. 
Education principles need to encourage thinking before, during and after the event (reflection pre-action, reflection in-action and reflection on-action)27.\n\nTransformative learning is the process by which we internalise and interpret information based on our own experiences to date. We bring an existing ‘way of thinking’ that is used in learning28. The transformation is the evolution of the paradigm based on the learning during the process. Encouraging social discourse, group participation and evaluation of individual differences is all part of this process. Assumptions are questioned and the consciousness is elevated to a new point of contextual self-appreciation. Adult learners embrace transformation if they feel that the goals are worth attaining and they have control over the learning process29,30.\n\nSelf-directed learning has become part of professional best practice in maintenance of standards. The premise of this is self-motivation, self-direction and self-management. Much of the knowledge is context-dependent and subject to a self-assessment of needs. Multiple portals of easy-to-access and flexible educational delivery platforms are important to make this effective. Having frequent and clear assessment methods helps to maintain focus and effectiveness. The student learns more when they are given control of the ‘what?, when?, how?’, and possibly the most important for them, the ‘why?’31–33\n\nExperiential learning, developed by Kolb34, needs to be adopted more formally with structured stages of the learning cycle, which include:\n\n▪ performing a task;\n\n▪ reflecting upon that task with input from others;\n\n▪ identifying areas where improvement could be made; and\n\n▪ readjusting in some way before a new cycle.\n\nFeedback, reflection and debriefing could occur during or after the event, or both35. This could then be linked into the learning outcomes outlined in the curriculum. 
Trainees need to know:\n\n▪ what they were expected to learn;\n\n▪ how that could occur;\n\n▪ how it would be assessed; and\n\n▪ how it would become useful in their professional lives.\n\nThe teacher assumes the role of taskmaster and expert who provides guidelines and is charged with monitoring the process36.\n\nSituated learning is where learners participate in community activities and there is a socio-cultural basis to the process. There is a purpose and connection within a historical context and learning occurs via social interactions and developing relationships that are contextual within the culture. Learners develop an understanding of ‘the way it is done’. This is part of the ‘hidden curriculum’ in medical practice, the values and judgements that help one ‘belong’ to a community, without which one can feel isolated. Role-modelling helps demonstrate some of the social interactions37.\n\nCommunities of practice involve a ‘trajectory’ beyond situated learning. This is where learners move from peripheral participation into full participation, embracing all the values and experiences within the community. Here learners already ‘belong’ and develop further understanding. They begin to appreciate how the community interacts with the ‘outside’ world. In medical education, we begin with a theoretical knowledge base, (ideally) progress to using simulated experiences, then start to observe real-world activity before beginning the immersive experience of becoming part of that world and growing professionally within that environment38.\n\n\nApplication of theory\n\nAdults need to feel supported within their education framework and they must be allowed to bring in their own know-how and develop with graduated skill levels. It is important that they be given clear direction in terms of what the standards and objectives are, so, assuming they are motivated, they can be allowed to self-drive and manage their own learning. 
Learner autonomy is important in this context.\n\n\nAn example: Operative surgery\n\nThis can be brought together in the example of a supervisor sitting in on an operative case with a surgical trainee, where the three recognised phases of an operation will be broadly covered. One of the key issues is for the supervisor to be available, which means being in the environment, if for no other reason than to give advice when required and ensure that things go smoothly. Adult learners need to know that they have the supervisor’s confidence and support, and that they will be allowed to stretch their skillset to try to solve any problems they may encounter before the supervisor steps in.\n\nThe supervisor and trainee:\n\n• discuss the case, history, examination and investigations to date (being clear on how the patient came to be in the procedural environment and assessing the needs of the patient, learner and teacher). Good questions for the trainer to ask are, “X, you have done Y number of these, what aspects do you feel you need to improve upon?” or “Can you outline what issues are worth covering with the patient when discussing this operation in this setting?” or “What do you want to focus on today?”\n\n• discuss the steps of the operation and who will undertake which part – the supervisor should not set unexpectedly high standards, only what is reasonable to achieve for the level of the trainee.\n\n• consider the ‘what ifs?’ – issues that may come up and how to manage them.\n\n• consider what equipment and contingencies need to be in place.\n\n• outline what is planned and discuss informed consent issues – these enquiries about objectives and aims can give insight into the trainee’s self-assessment ability.\n\n• undertake the ‘time out’ process correctly.\n\nDuring the operation:\n\n• The trainee tries to follow the pre-operative plan and, if there is a deviation, ensures that all are aware of the reason why.\n\n• The supervisor guides through the focal technical steps.\n\n• The supervisor gives 
immediate feedback on what is being done well and what could be done differently.\n\n• The supervisor considers allowing the trainee to do more if they are progressing well and advises them accordingly.\n\n• The supervisor watches for over-confidence and guides the trainee back if required.\n\n• The supervisor must try not to ‘take over’ unless necessary. It is better to talk the situation through and let the trainee do what they can for themselves (taking over can be mistaken for condemnation or lack of patience – both of which may exist).\n\nDebriefing can start to occur as the trainee is bringing the case to a close, but some trainees may find this distracting, which may hamper their performance.\n\nThe supervisor should:\n\n• plan to debrief in a quiet environment (not always possible in a busy surgical service).\n\n• ask the trainee how they think they went; stimulate reflection and teach general rules; help them review their performance; explore ways they can set some goals to help address deficiencies, e.g., “Should we look at ways to help you with this issue?” or “What do you think about addressing smaller chunks of the main task and then we can set some methods of getting those nailed in X (period of time)?”\n\n• provide specific examples of how the trainee could have performed better, e.g., “When you got to point X and did Y, I think doing Z may have helped you progress better.” This gives the trainee an idea of what the supervisor would have done.\n\n• congratulate the trainee on what they have achieved, e.g., “I think you did that better this time because the way you did X worked much better and I could see that you were getting that aspect.” A supervisor may think the trainee did well, but if they don’t communicate this, the trainee may feel that some aspect was not done well. Supportive communication is important.\n\nVollmer et al. 
offer some relevant issues to consider in this context:39\n\n• Trainee preparedness (they need to know whether their level of preparedness met the supervisor’s expectations). If it did not, it is important to ask why; the crucial aspect is to enquire genuinely rather than in an accusatory tone. Trainees need help to establish goals via the SMART process – keeping the process manageable. (SMART goals are specific, measurable, action-oriented, realistic and time-bound.) Trainees need to appreciate the requirement for the smaller, intrinsic goals within the greater concept of what they are ultimately trying to achieve.\n\n• Quality and style of communication (supervisors need to be supportive, clear and compassionate).\n\n• Time constraints (relevant in most clinical units). This means that debriefing may be difficult to organise and needs forethought. There must be a plan to deal with in-service time constraints and, if necessary, find another moment more suitable to the task.\n\n• Environment (it may be difficult to achieve the ‘quiet room’ ideal).\n\n• Teacher engagement (most clinical teachers were never taught to teach!). It needs to be understood that each teacher must upskill along the way and develop in their role.\n\n• Patience and tolerance (a skill that can take time and usually comes with clinical experience and progressive self-confidence).\n\n• Autonomy (whether the trainee feels they were capable and allowed to do what they could – lack of insight notwithstanding).\n\n• Feedback (preferably immediate and constructive, as reprimands rarely achieve the perceived desired objectives). For example, “What did you do well? What could you have done better? What areas do you need help with? How can I help you in the areas you are yet to master?”\n\n• Ethical or legal issues that may have come up during the case. These can be complex and need exploring, as this may be the only opportunity for experience. 
They could be raised at a later time, e.g., “With regard to Mrs Smith the other day, what did you think of what happened?” This could apply to an adverse event that they may have caused or witnessed. These types of issues are critical for discussion, as they do not appear in textbooks.\n\nFeedback for adult learners differs in a number of ways:40\n\n• It can begin with self-assessment. Most adult learners have an idea of how they are performing. Some may be too hard on themselves, but that can be tempered with balanced discussion. The key is to listen and become aware of how they perceive they are performing, and to help them reframe their perspective if it seems too harsh.\n\n• It is easy to ramble sometimes, but it is most important to focus on the main objectives and be specific with examples.\n\n• It should be immediate (or soon after the event) – bringing up past events can be confusing and unfair.\n\n• It should be conceptually relevant – about what just happened.\n\n• It should be constructive – the general principle is to instil confidence.\n\n• It should be compassionate and empathetic – putting oneself in the trainee’s shoes.\n\n• Plan to observe rather than judge, using comments like “I noticed …” or “I observed …”\n\n• Debriefing consolidates learning.\n\n• Consider the trainee’s emotional state – feedback may need to be delayed or, when serious, employee assistance programs considered.\n\nTheoretical frameworks help construct ways to adapt learning programs; in summary, they include:20\n\n• The learner is a central, active contributor.\n\n• The total experience is more important than individual aspects.\n\n• Learning is related to solutions and understanding real-life issues.\n\n• Past and inter-current social experiences play a key role in framing new knowledge.\n\n• Self-awareness, attitudes and beliefs are important for contextual development.\n\n• Individuals can usually self-regulate and manage their learning, and are commonly motivated to do so.\n\n• 
Self-reflection is central to the learning experience. The picture becomes more complete when there is a blend of theory, coupled with new information and past personal and professional experiences.\n\nIn addition, there are characteristics that help define teachers who are seen to be more engaged. They include being:\n\n• available and approachable;\n\n• understanding, and using fair performance indicators;\n\n• non-judgemental;\n\n• capable (able to take over and teach the next steps); and\n\n• engaging of the whole team.\n\nFurther points worth noting include:\n\n• The quality of the relationships formed is the most important factor in the effectiveness of clinical supervision, more than the method of supervision used41.\n\n• Much of the early theory has come from the nursing, psychology, teaching and social work literature; current supervision and teaching practice has only recently developed a theoretical basis41.\n\n• Supervision and teaching are complex, involving changing clinical situations and ensuring that patient safety and care are preserved.\n\n• Trainees and trainers can have widely disparate views of the clinical teaching encounter39. Good communication remains the key.\n\n\nChallenges ahead\n\nIt can be easy to offer theoretical discourse on surgical education, but it must equally be acknowledged that the real challenge lies in translation into practice, and in proof that the translation has met the desired objectives. Simulation in surgery, for example, is still not where it is in the aviation industry, where certification is possible with an assessment of performance in multiple scenarios, yet this remains a desired objective42–46. The objective assessment of surgical technical and non-technical skills continues to challenge educators, who need to be able to teach in a step-wise fashion and then ‘sign off’ a skill base. These tools do exist, as outlined, but practical implementation by busy clinicians remains the key47. 
Ultimately, there must be a desire by surgical supervisors to adopt and implement educational reform and to measure its effectiveness systematically along the way. Each environment will present unique situational challenges that must be adapted to in order to ensure efficiency of clinical services, patient safety and achievement of educational objectives.\n\n\nConclusions\n\nThe key messages for teaching, supervising and assessing are objectivity, transparency, fairness, empathy, support and compassion. A focus on upskilling surgical supervisors will help deliver a better education framework. While some of the theoretical basis may be challenging to prove, it is clear that, despite the uncertainty in complex teaching situations, the basics will often help bring it together. That very process will continue to help in the next encounter, as constant reflection and assessment lead to integration and growth.", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) confirms that no grants were involved in supporting this work.\n\n\nReferences\n\nFry H, Kneebone R: Surgical Education: Theorising an Emerging Domain. London, UK: Springer; 2011. Publisher Full Text\n\nKneebone R, Fry H: The Environment of Surgical Training and Education (Chpt 1). In: Fry H, Kneebone R, eds. Surgical Education: Theorising an Emerging Domain. London: Springer; 2011; 2: 3–17. Publisher Full Text\n\nSmith R: All changed, changed utterly. British medicine will be transformed by the Bristol case. BMJ. 1998; 316(7149): 1917–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVincent C, Neale G, Woloshynowych M: Adverse events in British hospitals: preliminary retrospective record review. BMJ. 2001; 322(7285): 517–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCalman KC, Downie RS: Education and training in medicine. Med Educ. 1988; 22(6): 488–91. 
Publisher Full Text\n\nFry H: Educational Ideas and Surgical Education (Chpt 2). In: Fry H, Kneebone R, eds. Surgical Education: Theorising an Emerging Domain. London: Springer; 2011; 2: 19–36. Publisher Full Text\n\nStenhouse L: An introduction to curriculum research and development. London: Heinemann; 1975. Reference Source\n\nColeman JJ, Esposito TJ, Rozycki GS, et al.: Early subspecialization and perceived competence in surgical training: are residents ready? J Am Coll Surg. 2013; 216(4): 764–71; discussion 771–3. PubMed Abstract | Publisher Full Text\n\nten Cate O: Trust, competence, and the supervisor's role in postgraduate training. BMJ. 2006; 333(7571): 748–51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKneebone R, Nestel D, Wetzel C, et al.: The human face of simulation: patient-focused simulation training. Acad Med. 2006; 81(10): 919–24. PubMed Abstract | Publisher Full Text\n\nLeBlanc VR, Tabak D, Kneebone R, et al.: Psychometric properties of an integrated assessment of technical and communication skills. Am J Surg. 2009; 197(1): 96–101. PubMed Abstract | Publisher Full Text\n\nNestel D, Kneebone R, Barnet A, et al.: Evaluation of a clinical communication programme for perioperative and surgical care practitioners. Qual Saf Health Care. 2010; 19(5): e1. PubMed Abstract | Publisher Full Text\n\nBleakley A: Learning and Identity Construction in the Professional World of Surgery (Chpt 11). In: Fry H, Kneebone R, eds. Surgical Education: Theorising an Emerging Domain. London: Springer; 2011; 2: 183–197. Publisher Full Text\n\nCarr S: The Foundation Programme assessment tools: an opportunity to enhance feedback to trainees? Postgrad Med J. 2006; 82(971): 576–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNewble DI, Jaeger K: The effect of assessments and examinations on the learning of medical students. Med Educ. 1983; 17(3): 165–71. PubMed Abstract | Publisher Full Text\n\nTripp D: Critical Incidents in Teaching. 
Developing Professional Judgment. London: Routledge; 1993. Reference Source\n\nMerriam SB: Adult learning and theory building: a review. Adult Education Quarterly. 1987; 37(4): 187–98. Publisher Full Text\n\nKnox AB: Proficiency theory of adult learning. Contemp Educ Psychol. 1980; 5(4): 378–404. Publisher Full Text\n\nMezirow J: A critical theory of adult learning and education. Adult Education Quarterly. 1981; 32: 3–27. Publisher Full Text\n\nKaufman DM, Mann KV: Teaching and learning in medical education: how theory can inform practice. In: Swanwick T, ed. Understanding Medical Education: Evidence, Theory & Practice. 1st ed. Chichester, West Sussex: Wiley-Blackwell; 2010; 16–36. Publisher Full Text\n\nKnowles MS: The Modern Practice of Adult Education: from pedagogy to andragogy. 2nd ed. New York: Cambridge; 1980. Reference Source\n\nMerriam SB: Updating our knowledge of adult learning. J Contin Educ Health Prof. 1996; 16(3): 136–43. Reference Source\n\nResnick LB: Learning in school and out. Educ Res. 1987; 16(9): 13–20. Publisher Full Text\n\nMerriam SB, Caffarella RS: Learning in Adulthood: a comprehensive guide. San Francisco, CA: Jossey-Bass; 1991. Reference Source\n\nBandura A: Social Foundations of Thought and Action: A social cognitive theory. Englewood Cliffs, NJ.: Prentice-Hall; 1986. Reference Source\n\nSchon DA: Educating the reflective practitioner: toward a new design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass; 1987. Reference Source\n\nSlotnick HB: How doctors learn: the role of clinical problems across the medical school-to-practice continuum. Acad Med. 1996; 71(1): 28–34. PubMed Abstract\n\nMezirow J: Transformative dimensions of adult learning. 1st ed. San Francisco: Jossey-Bass; 1991. Reference Source\n\nCranton P: Understanding and promoting transformative learning: a guide for the education of adults. San Francisco, CA: Jossey-Bass; 1994. Reference Source\n\nMezirow J: Learning as transformation. 
San Francisco, CA: Jossey-Bass; 2000. Reference Source\n\nCandy PC: Self-direction in lifelong learning. San Francisco, CA: Jossey-Bass; 1991.\n\nBrydges R, Dubrowski A, Regehr G: A new concept of unsupervised learning: directed self-guided learning in the health professions. Acad Med. 2010; 85(10 Suppl): S49–55. PubMed Abstract | Publisher Full Text\n\nMurad MH, Coto-Yglesias F, Varkey P, et al.: The effectiveness of self-directed learning in health professions education: a systematic review. Med Educ. 2010; 44(11): 1057–68. PubMed Abstract | Publisher Full Text\n\nKolb DA: Experiential learning: experience as the source of learning and development. USA: Prentice Hall; 1984. Reference Source\n\nSchon DA: Educating the Reflective Practitioner. Toward a new design for teaching and learning in the professions. San Francisco: Jossey-Bass; 2009. Reference Source\n\nRogoff B, Matusov E, White C: Models of teaching and learning: participation in a community of learners. In: Olsen DR, Torrance N, eds. The handbook of education and human development: New methods of learning, teaching and schooling. Oxford, UK: Blackwell; 1996; 388–415. Reference Source\n\nLave J, Wenger E: Situated learning: legitimate peripheral participation. New York: Cambridge University Press; 1991. Reference Source\n\nBarab SA, Barnett M, Squire K: Developing an empirical account of a community of practice: characterising the essential tensions. Journal of the Learning Sciences. 2002; 11: 489–542. Reference Source\n\nVollmer CM Jr, Newman LR, Huang G, et al.: Perspectives on intraoperative teaching: divergence and convergence between learner and teacher. J Surg Educ. 2011; 68(6): 485–94. PubMed Abstract | Publisher Full Text\n\nMcAllister L, Schafer J: Giving and receiving feedback. In: Higgs J, ed. Communicating in the health sciences. 3rd ed. South Melbourne, VIC.: Oxford University Press; 2012; 141–9. 
Reference Source\n\nKilminster SM, Jolly BC: Effective supervision in clinical practice settings: a literature review. Med Educ. 2000; 34(10): 827–40. PubMed Abstract | Publisher Full Text\n\nStefanidis D, Sevdalis N, Paige J, et al.: Simulation in Surgery: What's Needed Next? Ann Surg. 2015; 261(5): 846–53. PubMed Abstract | Publisher Full Text\n\nShamim KM, Ahmed K, Gavazzi A, et al.: Development and implementation of centralized simulation training: evaluation of feasibility, acceptability and construct validity. BJU Int. 2013; 111(3): 518–23. PubMed Abstract | Publisher Full Text\n\nSatava RM: Emerging trends that herald the future of surgical simulation. Surg Clin North Am. 2010; 90(3): 623–33. PubMed Abstract | Publisher Full Text\n\nRashid P, Gianduzzo TR: Urology technical and non-technical skills development: the emerging role of simulation. BJU Int. 2016; 117(Suppl 4): 9–16. PubMed Abstract | Publisher Full Text\n\nHamdorf JM, Blackham R: Surgical simulators in training: are we there yet? ANZ J Surg. 2010; 80(9): 579. PubMed Abstract | Publisher Full Text\n\nGrantcharov TP, Schulze S, Kristiansen VB: The impact of objective assessment and constructive feedback on improvement of laparoscopic performance in the operating room. Surg Endosc. 2007; 21(12): 2240–3. PubMed Abstract | Publisher Full Text" }
[ { "id": "20600", "date": "28 Feb 2017", "name": "Kathryn Rzetelski-West", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis opinion article is a very interesting piece in the context of the current surgical training climate in Australia and around the world. With reduced working hours, emphasis on operating room efficiency, increased complexity of surgical cases, and greater emphasis on medico-legal implications, surgical education needs to be improved. The author attempts to give a summary of the educational theory behind adult learning and integrate theory into practical advice for clinicians. This is important, as most surgical teachers have no academic education training or background, despite this important job.\nThe article does contain all the components one would expect from a journal article, although one wonders whether a PubMed search of ‘surgical education theory’ and ‘adult learning theory medical’, reminiscent of a meta-analysis, was necessary in an opinion article.\nThe title and abstract are appropriate. The article is well organised. The flow of the article is logical with the theoretical basis presented first, followed by the practical implementation into surgical practice. The article is an excellent review of adult learning theory and its varying components. The author does a good job of synthesizing the literature. The differing components comprising adult learning theories are brief but informative, and do not overwhelm the reader in complexity. 
The references used are appropriate for any readers wanting more in-depth reading. The article is well written and easy to understand. The example of operative surgery is, in particular, very easy to read and will appeal to clinicians. The dot point format aids its readability, as does the practical nature of the advice. The structure of pre-operative assessment, intra-op teaching, feedback and post-operative debriefing is particularly effective.\n\nPossible areas for consideration:\nStructured surgical education – The first sentence in this paragraph feels disjointed and fragmented. It does not seem to relate to the following sentences. It may improve sentence flow if it specified which aspects of structured surgical education and training it was referring to, e.g. its objectives, assessment etc.\n\nUnder the paragraph on intra-operative teaching, the author discusses allowing the trainee to do more (dot point 4) and the supervisor trying not to take over (dot point 6). This is reminiscent of the theories of ‘Vygotsky and the Zone of Proximal Development’1 and ‘scaffolding’2. Vygotsky describes what a learner can do with the help of a more capable other1, and scaffolding2 refers to the idea that it is as important for teachers to know when to step back from supporting learners as it is to provide that support. Although not strictly based on adult learning theory, they are relevant to the ways in which adults learn. A major criticism of adult learning principles is that adults actually learn in similar ways to children3,4.\n\nIt may be prudent to note that although adult learning theory is very popular, it is a controversial theory with opponents as well as supporters. There are educationalists who believe that the learning style of adults is essentially the same as that of children. There is only weak research supporting adult learning theory, and much of the research has been shown to be inconclusive and to have contradictory outcomes. 
Indeed Knowles later stated: “I am at the point now of seeing that andragogy is simply another model of assumptions about learners to be used alongside the pedagogical model of assumptions, thereby providing two alternative models for testing out the assumption as to their 'fit' with particular situations. Furthermore, the models are probably most useful when seen not as dichotomous but rather as two ends of a spectrum, with a realistic assumption in a given situation falling in between the two ends”5.\n\nUp-skilling of surgical supervisors is one of the major solutions offered in this opinion article; however, little practical advice is given on what exactly this should consist of. It may be beneficial to offer more detailed advice on what skills should be improved and how they should be obtained.\n\nUp-skilling is spelt three different ways in the article: up skilling, up-skilling and upskilling.", "responses": [] }, { "id": "20746", "date": "07 Mar 2017", "name": "Mohan Arianayagam", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is a well-written and well-researched piece on surgical education. The article reviews current educational theory, which is certainly very useful for a surgical audience, and also highlights the challenges of teaching in a busy surgical service and postgraduate environment. The model of surgical teaching has been unchanged for decades, with the teacher/apprentice model being the mainstay of teaching. 
While most of us are keen to teach, we are not trained as teachers. This article helps provide a framework for teaching in this environment, including techniques for feedback to “close the loop”.\n\nOverall a very useful article.", "responses": [] }, { "id": "20806", "date": "08 Mar 2017", "name": "Matthew Winter", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nA very useful contribution to the literature to engage both supervisors and trainees alike. This paper addresses the importance of critically analysing our approach to surgical education and training while focusing on the theories of adult learning to guide such an approach.\nThe title is very appropriate and the abstract summarises the article sufficiently.\nThe design of the article is appropriate and in keeping with its attempt to summarise the existing theories of adult education, and it provides an excellent narrative extending these theories into the context of operative surgery. The opinions presented are well constructed, balanced and based on foundation literature. Very well done.\nI make the following suggestions to improve the article.\n• Adult learning principles (page 3) are referred to as theories under its heading. Two paragraphs later the heading Theories of adult learning surfaces. This created a small degree of confusion. Could this be addressed to reduce reader dissonance?\n• The theory of “Experiential learning” seems similar to “Reflective learning”. 
Does the author feel that separating these into two theories enhances the paper?\n• The use of dot points throughout the paper is inconsistent, especially the word following the dot point, which is sometimes capitalised and sometimes not.\n• The term ‘Up-skilling’ is spelt in differing ways throughout the article.", "responses": [] } ]
1
https://f1000research.com/articles/6-143
https://f1000research.com/articles/6-137/v1
14 Feb 17
{ "type": "Research Article", "title": "Growth performance and feed utilization of keureling fish Tor tambra (Cyprinidae) fed formulated diet supplemented with enhanced probiotic.", "authors": [ "Zainal A. Muchlisin", "Tanzil Murda", "Cut Yulvizar", "Irma Dewiyanti", "Nur Fadli", "Fardin Afrido", "Mohd Nor Siti-Azizah", "Abdullah A. Muhammadar" ], "abstract": "Background The objective of the present study was to determine the optimum dosage of probiotic in the diet of keureling fish (Tor tambra) fry. Methods Lactobacillus casei from Yakult® was used as a starter, and enhanced with Curcuma xanthorrhiza, Kaempferia galanga and molasses. The mixture was fermented for 7 days prior to use as probiotic in a formulated diet containing 30% crude protein. Four levels of probiotic dosage: 0 ml kg-1 (control), 5 ml kg-1, 10 ml kg-1 and 15 ml kg-1 were tested in this study. The fish were fed twice a day at 08.00 AM and 06.00 PM at a ration of 5% body weight for 80 days. Results The results showed that growth performance and feed efficiency increased with increasing probiotic dosage in the diet from control (no probiotic) to 10 ml kg-1 and then decreased when the dosage was increased to 15 ml kg-1. Conclusions The best values for all measured parameters were recorded at the dosage of 10 ml kg-1. Therefore, it was concluded that the optimum dosage of enhanced probiotic for T. tambra fry was 10 ml kg-1 of feed.", "keywords": [ "Mahseer", "Probiotic", "Curcuma xanthorrhiza", "Kaempferia galanga", "Lactobacillus casei" ], "content": "Introduction\n\nAquaculture is a promising business and has grown rapidly in recent years. 
In Indonesia, the most common species of freshwater fish used for aquaculture are several introduced species, for example, tilapia (Oreochromis niloticus), common carp (Cyprinus carpio), and African catfish Clarias gariepinus1,2. However, Indonesia has a great diversity of freshwater fish species3, several of which have potential for aquaculture. Muchlisin4 evaluated 114 species of freshwater fish from the waters of Aceh Province. In total, 40 species are being utilized for consumption, and 14 species have a high economic value and great potential to be utilized as such. One of these species is Tor tambra, locally known as keureling fish.\n\nPresently, aquaculture of keureling fish has already been initiated in Aceh Province, Indonesia. Several studies on this species have been documented; for example, Muchlisin5 has reported domestication techniques for broodstock, and the prevalence of ectoparasites and endoparasites in keureling fish has also been described6,7. Moreover, Muchlisin et al.8 reported that a diet of 30% protein gave the best growth performance for T. tambra fry, compared to 20% and 25% protein. Muchlisin et al.9,10 have also studied the effect of papain enzyme and additional vitamins in the diet. However, growth pattern analysis has shown that cultured fish display slower growth compared to wild populations11. This is probably due to low protein digestibility resulting in low feed efficiency. It has been suggested that the growth performance of cultured populations could be enhanced through the addition of probiotic to the diet to increase feed efficiency12. Probiotic in the diet functions as an agent that triggers the metabolism of nutrients from complex compounds into simpler compounds which are readily absorbed by the intestine13,14. 
Several studies have reported that addition of probiotic to the diet has a significant effect on growth performance in some species of freshwater fish, for example, the catfishes Pangasius sutchi and Pangasius hypophthalmus15,16, Nile tilapia Oreochromis niloticus17, Catla catla18, gourami Osphronemus gourami19 and three-spot gourami Trichopodus trichopterus20.\n\nIt is important to overcome the low growth problem of T. tambra in captivity, and presently there is no study available on the effects of adding probiotic to the T. tambra diet. Therefore, we evaluated the effect of the probiotic Lactobacillus casei from Yakult®, enhanced with temulawak (Curcuma xanthorrhiza) and kencur (Kaempferia galanga), on the growth performance and feed utilization of T. tambra.\n\n\nMethods\n\nThe study was conducted at local aquaculture ponds at Desa Meunasah Krueng, in the Beutong Subdistrict of the Nagan Raya District, from August 2014 to December 2014. A completely randomized design was utilized in this study. Four levels of probiotic dosage were tested, namely: 0 ml probiotic kg-1 of feed (control), 5 ml probiotic kg-1 of feed, 10 ml probiotic kg-1 of feed, and 15 ml probiotic kg-1 of feed. Every treatment was replicated three times. The experimental diet containing 30% protein was prepared using raw materials purchased from the local market. Each material and the formulated feed were tested for crude protein content (Table 1).\n\nA total of 180 T. tambra fry with an average length of 3.5 cm and average weight of 0.36 g were used in this study. The fry were purchased from a local farmer in the Nagan Raya District and distributed randomly into twelve 1 m × 1 m × 1 m hapas (cage settle nets) in a 25 m × 20 m × 1.3 m ground pond at a stocking density of 15 fish per hapa. The fish were weaned for 7 days prior to experimental procedures; they were fed the experimental diet minus probiotic during the weaning process. 
After weaning, the fish were fed the experimental diet at a feeding level of 5% body weight twice a day (08.00 AM and 06.00 PM) for 80 days. The pond was equipped with a water flow system at a water discharge of 120 L min-1. The weight gain was calculated at 10-day intervals. The feces of the fish were collected from the respective hapas to examine their protein content.\n\nThe crude protein in the raw material, feces and experimental diet was measured using the Kjeldahl method21. About one gram of dry sample (raw material, experimental diet or feces) was weighed and placed into Kjeldahl beakers, then 10 g of catalyst (SeOCl2, selenium oxydichloride) and 25 ml sulfuric acid were added into the same beaker. The sample was heated to 250 °C for 20 minutes, shaken carefully, then heated to 350 °C for 2 hours. The samples were left to cool for 10 minutes, then 300 ml distilled water was added. Diluted samples were distilled, and this was followed by titration using 0.1 N HCl. Crude lipid was measured by chloroform-methanol extraction21. Samples of the raw material or diet were homogenized with a high-speed homogenizer for 5 min and lipid was determined gravimetrically after solvent separation and vacuum drying.\n\nThe probiotic was prepared using a mixture of temulawak (C. xanthorrhiza), kencur (K. galanga), molasses and Yakult® (Lactobacillus casei) as a starter. The Yakult® was purchased from a local market in Banda Aceh, Indonesia. For 1 liter of probiotic mixture the following are needed: 50 g of temulawak, 100 g of kencur, 100 ml of molasses and 1 bottle of Yakult®. All materials were mashed and mixed, then placed in sealed containers and fermented for 7 days. Every 2 days the container was opened to release the fermentation gas.
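The Kjeldahl procedure above ends in a titration with 0.1 N HCl; crude protein is then back-calculated from the nitrogen captured. A minimal sketch of that calculation — the blank volume, titre and the 6.25 nitrogen-to-protein factor are illustrative assumptions (the paper does not state them), not values from the study:

```python
def kjeldahl_crude_protein(titre_ml, blank_ml, normality, sample_g, factor=6.25):
    """Back-calculate crude protein (%) from a Kjeldahl titration.

    titre_ml / blank_ml : HCl used for the sample and for a reagent blank (ml)
    normality           : normality of the titrant (the paper uses 0.1 N HCl)
    factor              : nitrogen-to-protein conversion; 6.25 is the usual
                          convention for feedstuffs (an assumption here)
    """
    n_mg = (titre_ml - blank_ml) * normality * 14.007  # mg of nitrogen captured
    nitrogen_pct = n_mg / (sample_g * 1000.0) * 100.0
    return nitrogen_pct * factor

# Illustrative numbers only (not from the study): a 1 g diet sample whose
# titre corresponds to roughly the 30% protein target of the experimental diet.
print(round(kjeldahl_crude_protein(34.5, 0.2, 0.1, 1.0), 1))  # -> 30.0
```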
For each experiment, the corresponding amount of probiotic solution was mixed with egg yolk, whisked and sprayed evenly on the diet, then dried at room temperature for 30 minutes prior to feeding to the experimental fish.\n\nWeight gain (Wg), specific growth rate (SGR), feed conversion ratio (FCR) and feed efficiency (FE) were calculated as follows:\n\na.  Wg = Wt – Wo, where Wg = weight gain (g), Wt = weight at the end of the experiment (g), Wo = weight at the start of the experiment (g)\n\nb.  The specific growth rate is the percentage of weight gain per day. SGR (% day-1) = (Ln Wt – Ln Wo) / t × 100, where Ln = natural logarithm, t = experiment duration (days)22. Daily growth and survival rates were also calculated, based on Muchlisin et al.10.\n\nc.  The feed conversion ratio is the amount of feed (g) required to produce 1 gram of fish23, FCR = F/(Wt – Wo), where F = the amount of given feed (g)\n\nd.  The feed efficiency is the total weight gain produced per total weight of feed consumed23, FE (%) = (1/FCR) × 100.\n\nAll data were subjected to analysis of variance (ANOVA), followed by the comparison of means using Duncan’s multiple range test24.\n\nAll procedures involving animals were conducted in compliance with The Syiah Kuala University Research and Ethics Guidelines, Section of Animal Care and Use in Research (Ethics Code No: 958/2015). Please refer to Supplementary File 1 for the completed ARRIVE guidelines checklist.\n\n\nResults\n\nEstimated weight gain of the keureling fish fry ranged from 0.73 g to 1.48 g, the specific growth rate from 1.40% to 2.04%, and the survival rate from 66.67% to 95.56% (Table 2). Feed efficiency ranged from 28.40% to 42.21% and the feed conversion ratio from 2.37 to 3.52 (Table 2).
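The four formulas (a–d) in the Methods above translate directly into code. A minimal sketch; the values in the demo call are illustrative, not data from Table 2:

```python
import math

def growth_metrics(w0_g, wt_g, feed_g, days):
    """Wg, SGR, FCR and FE exactly as defined in formulas a-d above."""
    wg = wt_g - w0_g                                      # a. Wg = Wt - Wo
    sgr = (math.log(wt_g) - math.log(w0_g)) / days * 100  # b. SGR (% day-1)
    fcr = feed_g / wg                                     # c. feed (g) per g of gain
    fe = (1 / fcr) * 100                                  # d. FE (%) = (1/FCR) x 100
    return {"Wg": wg, "SGR": sgr, "FCR": fcr, "FE": fe}

# Illustrative call only: a fry growing from 0.36 g to 1.50 g over 80 days
# while consuming 3.2 g of feed.
m = growth_metrics(0.36, 1.50, 3.2, 80)
```

With these hypothetical inputs the metrics fall inside the ranges reported in the Results (SGR about 1.78% day-1, FCR about 2.8, FE about 36%), which is a quick sanity check of the formulas.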
ANOVA revealed that probiotic had a significant effect on the growth performance, survival rate, feed efficiency, feed conversion and crude protein content in the feces of keureling fish fry (P<0.05), where the best results were observed at a dosage of 10 ml probiotic kg-1 of feed. The growth trend of the keureling fish fry during the experiment significantly increased from day 10 to day 40 and from day 50 to day 80 (Figure 1).\n\nAll data were subjected to analysis of variance (ANOVA), followed by the comparison of means using Duncan’s multiple range test. The means at different probiotic dosages were compared for each parameter. When means have the same superscript (a, b, c, d), they are not significantly different.\n\n\nDiscussion\n\nThe study revealed that a probiotic dosage of 10 ml kg-1 gave the best results compared to the other dosages. This dosage may provide an optimum condition for digestive bacteria such as Lactobacillus sp. to grow well and facilitate feed digestibility. This is based on the low protein content in the feces, an indication that the protein was digested better at this dosage. Arief et al.25 stated that Lactobacillus sp. has the ability to balance and enhance microbial condition in the digestive tract by converting carbohydrates into lactic acid, which reduces the pH and so improves the digestibility functions of the tilapia fish, Oreochromis niloticus. This would then stimulate the production of endogenous enzymes to improve absorption of nutrients, and inhibit the growth and activity of pathogenic organisms in the digestive tract. Irianto26 also stated that the addition of probiotics to the diet increases the amount and activity of bacteria in the digestive tract of tilapia fish, and stimulates bacteria to secrete digestive enzymes such as protease and amylase, which play an important role in protein and carbohydrate digestion, respectively.
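The statistical design above (one-way ANOVA over four dosages with three replicates, tested at P<0.05) can be reproduced with a hand-rolled F-test. The group values below are placeholder illustrations, not the study's raw data, and Duncan's multiple range test has no standard-library implementation, so only the omnibus F is sketched:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way (completely randomized) ANOVA."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Illustrative weight gains (g), 3 replicates per dosage (0, 5, 10, 15 ml kg-1).
groups = [[0.73, 0.78, 0.75], [1.02, 1.05, 0.99],
          [1.45, 1.48, 1.41], [1.10, 1.15, 1.08]]
f, dfb, dfw = one_way_anova_f(groups)  # compare f against F_crit(3, 8) at alpha = 0.05
```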
Marzouk et al.27 stated that the activities of natural digestive bacteria of tilapia would change significantly when supplemented with external digestive microbes. The activity of probiotic bacteria greatly affects the balance of microflora in the digestive tract, which will suppress other pathogenic bacteria, resulting in increased digestive efficiency28.\n\nTemulawak and kencur contain bioactive compounds such as curcumins and essential (atsiri) oils, respectively, with associated health benefits. These compounds can function as antibiotics, neutralize toxins and increase the secretion of bile29. This improves the digestive system and increases appetite in fish, thus accelerating their growth performance. Similarly, Hassan et al.30 reported that the combination of K. galanga and yeast probiotic had a significant effect on the growth performance and product quality of Labeo rohita fingerlings. In addition, curcumin also helps promote the immune system31.\n\nHowever, excessive probiotics could hamper growth, as recorded in this study. As observed, growth performance increased from the control (without probiotic) up to 10 ml kg-1, then decreased when the probiotic dosage was increased to 15 ml kg-1. According to Atlas and Bartha32, higher doses of probiotics favour production of secondary metabolites due to the increased bacterial load, leading to competition for nutrient and substrate utilization and inhibition of digestion and nutrient absorption. Pelczar and Chan33 stated that excessive secondary metabolites will kill some bacteria groups, reducing digestibility. Therefore, the number of digestive bacteria should be kept at an optimum level, though this level differs among species.\n\n\nConclusions\n\nAddition of probiotics to the diet of the keureling fish (T. tambra) could enhance growth performance, feed efficiency, feed conversion, protein retention and protein digestibility of the fry.
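The rise-then-fall dose response described above (growth improving up to 10 ml kg-1, then declining at 15 ml kg-1) implies the true optimum need not sit exactly on a tested dosage. A common back-of-envelope refinement is quadratic interpolation through the three highest equally spaced dosages; the response values here are hypothetical, purely to illustrate the calculation:

```python
def quadratic_peak(x_mid, step, y_lo, y_mid, y_hi):
    """Vertex of the parabola through three equally spaced points
    (x_mid - step, y_lo), (x_mid, y_mid), (x_mid + step, y_hi)."""
    denom = y_lo - 2 * y_mid + y_hi
    if denom == 0:
        return x_mid  # points are collinear; no interior peak
    return x_mid + step * (y_lo - y_hi) / (2 * denom)

# Hypothetical mean weight gains (g) at 5, 10 and 15 ml probiotic kg-1:
optimum = quadratic_peak(10, 5, 1.02, 1.45, 1.11)  # a little above 10 ml kg-1
```

Because the middle response dominates its neighbours, the interpolated optimum lands close to the tested 10 ml kg-1 dosage, consistent with the study's conclusion.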
We found that 10 ml probiotic kg-1 of feed was an optimum dosage for this species.\n\n\nData availability\n\nDataset 1: Raw data and processed data collected for the study. This includes crude protein in the feces, growth performance, daily growth rate, specific growth rate, survival rate, weight, and feed conversion ratio.\n\nDOI: 10.5256/f1000research.10693.d15153634\n\nDataset 2: Raw data of Tor tambra weight gain.\n\nDOI: 10.5256/f1000research.10693.d15153735", "appendix": "Author contributions\n\n\n\nZAM developed the research proposal and study design and approved the final draft of the paper. TM, TA and ID were responsible for conducting experiments and data collection. CY, AAM and NF carried out data analysis and were involved in report drafting. MNS was responsible for proximate analysis and proofreading of the final draft.\n\n\nCompeting interests\n\n\n\nWe declare no competing interests with PT Yakult Indonesia Persada, the producer of Yakult® in Indonesia.\n\n\nGrant information\n\nThis study was supported by Syiah Kuala University and the Ministry of Research, Technology and Higher Education of the Republic of Indonesia through the Penelitian Unggulan Perguruan Tinggi scheme.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Syiah Kuala University and the Ministry of Research, Technology and Higher Education of the Republic of Indonesia for supporting this study. The authors thank Mr. Khaidir for his assistance during the field experiment in Nagan Raya District.\n\n\nSupplementary material\n\nSupplementary Figure 1: Completed ARRIVE guidelines checklist.\n\n\nReferences\n\nMuchlisin ZA: Analisis kebijakan introduksi spesies ikan asing di perairan umum daratan Provinsi Aceh [Policy analysis of the introduction of alien fish species into the inland waters of Aceh Province]. Jurnal Kebijakan Sosial Ekonomi Kelautan dan Perikanan. 2011; 1(1): 79–89.
Reference Source\n\nMuchlisin ZA: First report on introduced freshwater fishes in the waters of Aceh, Indonesia. Arch Pol Fish. 2012; 20(2): 129–135. Publisher Full Text\n\nMuchlisin ZA, Siti-Azizah MN: Diversity and distribution of freshwater fishes in Aceh waters, northern Sumatra Indonesia. International Journal of Zoological Research. 2009; 5(2): 62–79. Publisher Full Text\n\nMuchlisin ZA: Study on potency of freshwater fishes in Aceh waters as a basis for aquaculture and conservation development programs. Jurnal Iktiologi Indonesia. 2013a; 13(1): 91–96. Reference Source\n\nMuchlisin ZA: A preliminary study on the domestication of wild keureling broodfish (Tor tambra). Proceedings of the 3rd Annual International Conference Syiah Kuala University (AIC Unsyiah) 2013 in conjunction with The 2nd International Conference on Multidisciplinary Research (ICMR) 2013 October 2–4, 2013 Banda Aceh, Indonesia. 2013; 380–382. Reference Source\n\nMuchlisin ZA, Munazir AM, Fuady Z, et al.: Prevalence of ectoparasites on mahseer fish (Tor tambra. Valenciennes, 1842) from aquaculture ponds and wild population of Nagan Raya District, Indonesia. Human and Veterinary Medicine. 2014; 6: 148–152. Reference Source\n\nMuchlisin ZA, Fuadi Z, Munazir AM, et al.: First report on Asian fish tapeworm (Bothriocephalus acheilognathi) infection of indigenous mahseer (Tor tambra) from Nagan Raya District, Aceh Province, Indonesia. Bulgarian Journal of Veterinary Medicine. 2015a; 18: 361–366. Publisher Full Text\n\nMuchlisin ZA, Nazir M, Fadli N, et al.: Growth performance, protein and lipid retentions on the carcass of Acehnese mahseer, Tor tambra (Pisces: Cyprinidae) fed commercial diet at different levels of protein. Iranian Journal of Fisheries Science. 2017; Accepted paper.\n\nMuchlisin ZA, Afrido F, Murda T, et al.: The effectiveness of experimental diet with varying levels of papain on the growth performance, survival rate and feed utilization of keureling fish (Tor tambra). 
Biosaintifika. 2016a; 8(2): 172–177. Publisher Full Text\n\nMuchlisin ZA, Arisa A, Muhammadar AA, et al.: Growth performance and feed utilization of keureling (Tor tambra) fingerlings fed a formulated diet with different doses of vitamin E (alpha-tocopherol). Arch Pol Fish. 2016b; 23: 47–52. Publisher Full Text\n\nMuchlisin ZA, Batubara AS, Siti-Azizah MN, et al.: Feeding habit and length-weight relationship of keureling fish, Tor tambra Valenciennes, 1842 (Cyprinidae) from the western region of Aceh Province, Indonesia. Biodiversitas. 2015a; 16(1): 89–94. Publisher Full Text\n\nGomez-Gil B, Roque A, Turnbull JF: The use and selection of probiotic bacteria for use in the culture of larval aquatic organisms. Aquaculture. 2000; 191(1–3): 259–270. Publisher Full Text\n\nBalcázar JL, Blas ID, Ruiz Z, et al.: The role of probiotics in aquaculture. Veterinary Microbiology. 2006; 114(3–4): 173–186. Publisher Full Text\n\nAl-Baadani HH, Abudabos AM, Al-Mufarrej SI, et al.: Effects of dietary inclusion of probiotics, prebiotics and synbiotics on intestinal histological changes in challenged broiler chickens. S Afr J Anim Sci. 2016; 46(2): 157–165. Publisher Full Text\n\nAndriyanto S, Listyanto N, Rosmawati R: Effect of different doses of probiotic on the survival and growth rates of Pangasius djambal. Prosiding Forum Inovasi Teknologi Akuakultur. 2010; 117–122.\n\nSetiawati JE, Tarsim Adiputra AY, Hudaidah S: Effect of different doses of probiotic in the diet on the growth performance, survival rate, feed efficiency and protein retention of Pangasius hypophthalmus. Jurnal Rekayasa dan Teknologi Budidaya Perairan. 2013; 1: 151–162.\n\nJatobá A, Vieira Fdo N, Buglione-Neto CC, et al.: Diet supplemented with probiotic for Nile tilapia in polyculture system with marine shrimp. Fish Physiol Biochem. 2011; 37(4): 725–732.
PubMed Abstract | Publisher Full Text\n\nBandyopadhyay P, Das Mohapatra PK: Effect of a probiotic bacterium Bacillus circulans PB7 in the formulated diets: on growth, nutritional quality and immunity of Catla catla (Ham.). Fish Physiol Biochem. 2009; 35(3): 467–478. PubMed Abstract | Publisher Full Text\n\nAbdullah IA: Effect of EM-4 probiotic addition in the diet on the growth performance and survival rate of Osphronemus gouramy. Thesis, Universitas Muhammadiyah Malang, Malang. 2007; 68.\n\nJafaryan H, Sahandi J, Dorbadam JB: Growth and length-weight relationships of Trichopodus trichopterus (Pallas, 1770) fed a supplemented diet with different concentrations of probiotic. Croatian Journal of Fisheries. 2014; 72(3): 118–122. Reference Source\n\nHelrich K, Association of Official Analytical Chemists (AOAC): Official methods of analysis of the Association of Official Analytical Chemists. 15th edn (ed. by W. Horwitz). Association of Official Analytical Chemists, Arlington, VA, USA. 1298. Reference Source\n\nDe Silva SS, Anderson TA: Fish nutrition in aquaculture. Chapman and Hall, London. 1995. Reference Source\n\nCard LE, Nesheim MC: Poultry production. 11th edn. Lea and Febiger, Philadelphia. 1976. Reference Source\n\nSofyan H, Werwatz A: Analyzing XploRe download profiles with intelligent miner. Comput Stat. 2001; 16(3): 465–479. Publisher Full Text\n\nArief M, Kusumaningsih E, Rahardja BS: Crude protein and crude fiber in the formulated diet fermented with probiotic. Jurnal Ilmiah Perikanan dan Kelautan. 2008; 3(2): 53–58.\n\nIrianto A: Probiotic in aquaculture. Universitas Gadjah Mada Press, Yogyakarta. 2003; 125.\n\nMarzouk MS, Moustafa MS, Mohamed N: The influence of some probiotics on the growth performance and intestinal microbial flora of O. niloticus. Proceedings of the 8th International Symposium on Tilapia in Aquaculture. 2008; 1059–1071.
Reference Source\n\nVerschuere L, Rombaut G, Sorgeloos P, et al.: Probiotic bacteria as biological control agents in aquaculture. Microbiol Mol Biol Rev. 2000; 64(4): 655–671. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAraújo CC, Leon LL: Biological activities of Curcuma longa L. Mem Inst Oswaldo Cruz. 2001; 96(5): 723–728. PubMed Abstract | Publisher Full Text\n\nHassan MA, Aftabuddin M, Meena DK, et al.: Effects of black thorn, Kaempferia galanga single or in combination with yeast probiotic on feed palatability, growth performance and product quality of Labeo rohita fingerling (Hamilton). Turk J Fish Aquat Sci. 2014; 14(4): 915–920.\n\nJagetia GC, Aggarwal BB: \"Spicing up\" of the immune system by curcumin. J Clin Immunol. 2007; 27(1): 19–35. PubMed Abstract | Publisher Full Text\n\nAtlas RM, Bartha R: Microbial Ecology: Fundamentals and Applications. Fourth Edition. Addison Wesley Longman, Menlo Park, California. 1998; 576. Reference Source\n\nPelczar MJ Jr, Chan ECS: Microbiology. McGraw-Hill Book Company, New York. 1997; 952.\n\nMuchlisin ZA, Murda T, Yulvizar C, et al.: Dataset 1 in: Growth performance and feed efficiency of keureling fish Tor tambra (Pisces: Cyprinidae) fed formulated diet with enhanced probiotic. F1000Research. 2017. Data Source\n\nMuchlisin ZA, Murda T, Yulvizar C, et al.: Dataset 2 in: Growth performance and feed efficiency of keureling fish Tor tambra (Pisces: Cyprinidae) fed formulated diet with enhanced probiotic. F1000Research. 2017. Data Source
[ { "id": "20452", "date": "22 Feb 2017", "name": "Usman M. Tang", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn my opinion, the state of the art of the study has been well presented, and the problem statement was also clear. We assume that the authors are already familiar with this species, as indicated by the references used. The experimental design was suitable and the data were presented very well. The results have been discussed in a good manner and compared to other previous studies.\nAs is known, Tor tambra is one of the native species in Indonesia and this species has been cultured. As already mentioned by the authors, one of the main problems in its aquaculture is low growth rate and low feed efficiency. Therefore, application of probiotics can solve this problem, as proven by the presented data: the feed with probiotic gave higher growth rate and feed efficiency compared to the control. The new innovation of this study is using temulawak (Curcuma xanthorrhiza) and kencur (Kaempferia galanga) to enhance the probiotic.
Therefore, the finding is very useful for practical use and makes a significant contribution to the scientific community.", "responses": [] }, { "id": "20658", "date": "13 Mar 2017", "name": "Ahmad Jibril Nayaya", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe research was developed on the concept of probiotics being microbes that enhance good health and thereby promote optimal utilization of feed. The authors have shown that use of probiotics can improve feed uptake by the fish, thereby reducing the cost of production, especially where the cost of feed is high.\n\nThe title is most appropriate for the work and the Abstract gave a concise summary of the research work. Furthermore, the design and methods fit the work done. The analysis correctly explained what has been obtained from the designed work and appropriately reflects the topic studied.\nThe Conclusion is balanced and has justified the basis of the results. Useful data with adequate information has been provided to warrant replication of the experiment elsewhere.\nOverall, the research has made a remarkable contribution by providing the first well-documented information on the positive advantage of using probiotics in the culture of T. tambra.", "responses": [] } ]
1
https://f1000research.com/articles/6-137
https://f1000research.com/articles/5-2891/v1
21 Dec 16
{ "type": "Case Report", "title": "Case Report: Use of reinforced buccal mucosa graft over gracilis muscle flap in management of post high intensity focused ultrasound (HIFU) rectourethral fistula", "authors": [ "Shrikant Jai", "Arvind Ganpule", "Abhishek Singh", "Mohankumar Vijaykumar", "Vinod Bopaiah", "Ravindra Sabnis", "Mahesh Desai", "Shrikant Jai", "Abhishek Singh", "Mohankumar Vijaykumar", "Vinod Bopaiah", "Ravindra Sabnis", "Mahesh Desai" ], "abstract": "High intensity focused ultrasound (HIFU) has come forward as an alternative treatment for carcinoma of the prostate. Though minimally invasive, HIFU has potential side effects. Urethrorectal fistula is one such rare side effect. To our knowledge, this is the first case in which a rectourethral fistula secondary to HIFU was repaired with buccal mucosa graft (BMG) over a harvest bed of gracilis flap. This case report describes points of technique that will help successful management of resilient rectourethral fistula. Urinary and faecal diversion in the form of suprapubic catheter and colostomy is vital. Adequate time between stoma formation, fistula closure and then finally stoma closure is needed. Lithotomy position and perineal approach give the best exposure to the fistula. The rectum should be dissected 2 cm above the fistula; this aids in tension-free closure of the rectal defect. Similarly, a buccal mucosal graft was used on the urethra to achieve tension-free closure. A good vascular pedicle gracilis muscle flap is used to interpose between the two repairs. This not only provides a physical barrier but also provides a vascular bed for BMG uptake. Perfect haemostasis is essential, as any collection may become a site of infection thus compromising results.
We strongly recommend that rectourethral fistula be directly repaired with gracilis muscle flap with reinforced buccal mucosa graft without attempting any less invasive repairs, because the “first chance is the best chance”.", "keywords": [ "HIFU", "Urethro–rectal fistula", "Fistula", "Buccal Mucosa graft", "Gracilis Muscle flap", "Complicated Fistula", "Carcinoma Prostate." ], "content": "Introduction\n\nHigh intensity focused ultrasound (HIFU) is a treatment option in the management of prostate cancer1. When combined with transurethral resection of prostate (TURP), risk of post-procedure retention of urine and other side effects are significantly reduced. Urethrorectal fistula is a serious complication of HIFU. The literature reports a rate of urethrorectal fistula following HIFU2 of approximately 0.7%. This case report describes management of recurrent urethrorectal fistula after HIFU with buccal mucosa graft (BMG) over a bed of gracilis flap.\n\n\nCase report\n\nA 52-year-old man was evaluated for lower urinary tract symptoms (LUTS) and found to have a raised PSA level of 18.70 ng/ml. Transrectal ultrasound (TRUS) guided biopsy showed adenocarcinoma of the prostate with a Gleason’s score of 3+4 with evidence of extracapsular spread on the left side. Bone scan showed osteoblastic activity at the distal end of the right femur. Ultrasound (USG) showed a 30 g prostate. He underwent an initial TURP to debulk the gland. Following the intervention, he underwent HIFU in the same sitting. Histopathology showed 50% of the cores were positive for adenocarcinoma with a Gleason’s score of 4+4. The Foley catheter (PUC) was removed on the 5th post-operative day (POD). On the 15th POD, the patient had urine leak via the rectum. Diagnostic cystoscopy showed a single fistulous opening above the level of the external sphincter. As conservative management in the form of a suprapubic catheter (SPC) failed, he underwent robotic-assisted laparoscopic excision of the fistula.
The bladder and rectum were closed separately with interposition of an acellular matrix sheet in between.\n\nOn the 6th POD following robotic repair, the patient developed fecaluria, which was managed with loop sigmoid colostomy and SPC. Repeat cystoscopy after 3 months showed persistent fistula (Figure 1). He was planned for repeat surgery via perineal approach in view of his previous failed abdominal surgery and faecal contamination of the abdomen.\n\nThe patient was placed in lithotomy position with the perineum nearly horizontal. An inverted smiling incision was made in the perineum above the anus (Figure 2). Dissection showed dense adhesions between the rectum and surrounding tissue. Digital rectal examination done intraoperatively ensured rectal wall integrity. The fistula was at the 1 o’clock position between the prostatic urethra and rectum (Figure 3). All scar tissue and fistula was excised to create healthy margins. The rectal defect was repaired in transverse fashion in a single layer with monocryl 2-0 sutures (Figure 4). BMG was harvested and positioned to bridge the urethral defect; it was anchored with interrupted 3-0 monofilament sutures (Figure 5). A separate incision was made on the left thigh from the adductor tubercle to 2 cm above the medial condyle. The gracilis muscle flap was harvested, rotated towards the perineum (Figure 6) and interposed between the rectal and urethral repair (Figure 7). Prior to closure of the wound, adequate haemostasis was ensured.\n\nOn the 14th POD the PUC was removed and SPC blocked. The patient was voiding well with a satisfactory uroflow without any leak of urine from the rectum. Colostomy closure was done after 3 months. On follow-up visits at 3 and 6 months, the patient was asymptomatic.\n\n\nDiscussion\n\nThe gracilis muscle flap was first described by Ryan et al.3 for closure of rectourethral fistula.
The gracilis muscle flap fulfills all the criteria of an ideal flap for interposition in such situations due to its rich vascular supply and ease of rotation.\n\nRabau et al.4 described a series of 10 patients who underwent gracilis flap repair for rectourethral or rectovaginal fistula. Of these, 3 patients had fistula post radical prostatectomy and a prior failed attempt of fistula repair. On mean follow-up at 26 months they reported a 100% success rate. The results of our report closely resemble those of Rabau et al.\n\nIn a series of 35 patients by Ulrich et al.5, 4 patients had fistula post radical prostatectomy and all were treated successfully, with a mean follow-up time of 28 months. The patients included those with iatrogenic rectal injury during retropubic prostatectomy. Our case represents an injury due to high intensity focused ultrasound given for prostate cancer.\n\nIn a series of 11 cases by Zmora et al.6, 9 patients healed without complication and 2 others required further surgical management. Thus a success rate of 81% was achieved. This series included two patients with post radical prostatectomy fistula. The authors advocated this approach after failed previous repairs, as in our case.\n\nThe technique of harvesting BMG was first described by Morey et al.7 in 1996. Andrich DE8 reported better results for dorsal as opposed to ventral onlay due to the more vascular and better bed of the corporal bodies for graft uptake. Further, it was found that strictures in settings of ischemia are better repaired with flaps due to poor surrounding blood supply. In our case the urethral defect was 2.5 cm, in the prostatic urethra with local ischemia; thus a BMG without the gracilis muscle flap bed would result in a poorer outcome. Zinman9 described 68 patients with rectourethral fistula who underwent gracilis muscle flap repair, of which 27 were performed in combination with BMG.
This article confirms the feasibility of combined BMG and gracilis muscle flap repair and thus provides a proof of concept for our case report.\n\n\nConclusion\n\nRectourethral fistula secondary to HIFU should be categorised as a complicated fistula owing to the hostile environment caused by the local heat generated by primary treatment. This report suggests rectourethral fistula post HIFU should be repaired with gracilis muscle flap with reinforced buccal mucosa graft, as the “first chance is the best chance” in such situations.\n\n\nKey messages from our case report\n\n1. Good exposure and adequate dissection are vital; this was achieved by the perineal approach.\n\n2. Tension-free repair of the rectum was achieved by dissection of the rectum 2 cm cranial to the fistula; on the urethral side, a buccal mucosa graft was used for tension-free repair.\n\n3. As both rectum and urethra are high pressure zones, there is a high probability of failure if the two repairs are not separated by live tissue10. The ideal tissue for this interposition is one with its own blood supply, in this case a pedicled gracilis muscle flap. The advantage of this flap is that it acts as a physical barrier as well as a vascular bed for BMG.\n\n4. Adequate haemostasis and good closure are equally important, as any collection is likely to get infected, leading to recurrent fistula. Closure over a suction drain helps in reducing the chances of collection, and also keeps the buccal mucosal graft adherent to surrounding vascular tissue, thus helping in graft uptake.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s clinical details and clinical images was obtained from the patient.", "appendix": "Author contributions\n\n\n\nAG, SJ, AS and MV wrote and revised the manuscript.
SJ,AG, AS,MV,VB,RS, and MD are responsible for the concept and content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nPoissonnier L, Chapelon JY, Rouvière O, et al.: Control of prostate cancer by transrectal HIFU in 227 patients. Eur Urol. 2007; 51(2): 381–7. PubMed Abstract | Publisher Full Text\n\nBlana A, Walter B, Rogenhofer S, et al.: High-intensity focused ultrasound for the treatment of localized prostate cancer: 5-year experience. Urology. 2004; 63(2): 297–300. PubMed Abstract | Publisher Full Text\n\nRyan JA Jr, Beebe HG, Gibbons RP: Gracilis muscle flap for closure of rectourethral fistula. J Urol. 1979; 122(1): 124–5. PubMed Abstract\n\nRabau M, Zmora O, Tulchinsky H, et al.: Recto-vaginal/urethral fistula: repair with gracilis muscle transposition. Acta Chir Iugosl. 2006; 53(2): 81–4. PubMed Abstract | Publisher Full Text\n\nUlrich D, Roos J, Jakse G, et al.: Gracilis muscle interposition for the treatment of recto-urethral and rectovaginal fistulas: a retrospective analysis of 35 cases. J Plast Reconstr Aesthet Surg. 2009; 62(3): 352–6. PubMed Abstract | Publisher Full Text\n\nZmora O, Potenti FM, Wexner SD, et al.: Gracilis muscle transposition for iatrogenic rectourethral fistula. Ann Surg. 2003; 237(4): 483–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorey AF, McAninch JW: Technique of harvesting buccal mucosa for urethral reconstruction. J Urol. 1996; 155(5): 1696–7. PubMed Abstract | Publisher Full Text\n\nAndrich DE, Leach CJ, Mundy AR: The Barbagli procedure gives the best results for patch urethroplasty of the bulbar urethra. BJU Int. 2001; 88(4): 385–9. PubMed Abstract | Publisher Full Text\n\nZinman L: The management of the complex recto-urethral fistula. BJU Int. 2004; 94(9): 1212–3. 
PubMed Abstract | Publisher Full Text\n\nNyam DC, Pemberton JH: Management of iatrogenic rectourethral fistula. Dis Colon Rectum. 1999; 42(8): 994–7; discussion 997–9. PubMed Abstract | Publisher Full Text" }
[ { "id": "18843", "date": "29 Dec 2016", "name": "Sanjay B. Kulkarni", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors have highlighted an important and complex subject, post-HIFU rectourethral fistula.\nAs stated, this should be classified as a complex fistula.\nManagement requires multiple options and an experienced team of reconstructive urologists.\nUsing a vascularised flap as interposition tissue is important.\nThe authors have simultaneously augmented the urethra with a buccal graft.\n\nWe agree with all the key messages as highlighted by the authors.\n\nWe suggest this article be indexed and be made available as early as possible", "responses": [] }, { "id": "18709", "date": "09 Jan 2017", "name": "Alex J. Vanni", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis article demonstrates that a HIFU-induced RUF can be successfully closed with a buccal graft and gracilis muscle flap.
I agree with the authors that this technique is the preferred way to treat these RUF.\nThe article contends this is the first case demonstrating this in the literature. In fact, we published this first in 2010, in which 2 patients of our cohort had HIFU RUF that were successfully repaired with this technique. This report should be changed to appropriately acknowledge that we (Vanni et al) were the first to demonstrate closure of HIFU RUF with this technique, with the appropriate reference cited.\nHere is this reference:\n\nVanni AJ, Buckley JC, Zinman LN. Management of surgical and radiation induced rectourethral fistulas with an interposition muscle flap and selective buccal mucosal onlay graft. J Urol. 2010 Dec;184(6):2400-4.\n- Another point: In the discussion the authors state:\n\"Zinman9 described 68 patients with rectourethral fistula who underwent gracilis muscle flap repair out of which 27 were performed in combination with BMG\".\nThis reference is antiquated and we have published 2 more extensive papers on the topic more recently that should be cited instead of the one used by the authors. Vanni et al is a better reference for this sentence and the one I previously mentioned above. In this paper, 74 patients had RUF repair with a gracilis muscle flap. 39 of these patients had a RUF from an ablative source (radiation and 2 HIFU). Of these 39 patients, 34 had a buccal mucosa graft (including the 2 HIFU cases) used to close the urethral defect. 37 of these patients had at least 1 gracilis muscle flap, while the other 2 patients had an inferior gluteus maximus flap and a Singapore fasciocutaneous flap.", "responses": [] } ]
1
https://f1000research.com/articles/5-2891
https://f1000research.com/articles/6-43/v1
13 Jan 17
{ "type": "Research Article", "title": "Molecular signature of anastasis for reversal of apoptosis", "authors": [ "Ho Man Tang", "C. Conover Talbot Jr", "Ming Chiu Fung", "Ho Lam Tang", "C. Conover Talbot Jr" ], "abstract": "Apoptosis is a type of programmed cell death that is essential for normal organismal development and homeostasis of multicellular organisms by eliminating unwanted, injured, or dangerous cells. This cell suicide process is generally assumed to be irreversible. However, accumulating studies suggest that dying cells can recover from the brink of cell death. We recently discovered an unexpected reversibility of the execution-stage of apoptosis in vitro and in vivo, and proposed the term anastasis (Greek for “rising to life”) to describe this cell recovery phenomenon. Promoting anastasis could in principle preserve injured cells that are difficult to replace, such as cardiomyocytes and neurons. Conversely, arresting anastasis in dying cancer cells after cancer therapies could improve treatment efficacy. To develop new therapies that promote or inhibit anastasis, it is essential to identify the key regulators and mediators of anastasis – the therapeutic targets. Therefore, we performed time-course microarray analysis to explore the molecular mechanisms of anastasis during reversal of ethanol-induced apoptosis in mouse primary liver cells. We found striking changes in transcription of genes involved in multiple pathways, including early activation of pro-survival genes, cell cycle arrest, stress-inducible responses, and at delayed times, cell migration and angiogenesis. Here, we present the time-course whole-genome gene expression dataset revealing gene expression profiles during the reversal of apoptosis. 
This dataset provides important insights into the physiological, pathological, and therapeutic implications of anastasis.", "keywords": [ "Anastasis", "Cell Death", "Cell Survival", "Cell Suicide", "Gene Expression", "Recovery", "Repair", "Reversal of Apoptosis" ], "content": "Introduction\n\nApoptosis (Greek for “falling to death”) was generally assumed to be an irreversible cell suicide process because it involves rapid and massive cell destruction1–7. During apoptosis, intrinsic and extrinsic pro-apoptotic signals can converge at mitochondria, leading to mitochondrial outer membrane permeabilization (MOMP), which releases cell execution factors, such as cytochrome c to trigger activation of apoptotic proteases including caspase-3 and -78,9, small mitochondria-derived activator of caspases (Smac)/direct IAP binding protein with low pI (DIABLO) to eliminate inhibitor of apoptosis protein (IAP) inhibition of caspase activation10,11, and apoptosis-inducing factor (AIF) and endonuclease G to destroy DNA12–15. Activated caspases commit cells to destruction by cleaving hundreds of functional and structural cellular substrates2,16. Crosstalk between signalling pathways amplifies the caspase cascade to mediate cell demolition via nucleases (DNA fragmentation factor [DFF]/caspase-activated DNase [CAD]) to further destroy the genome17–19, and alters lipid-modifying enzymes to cause membrane blebbing and apoptotic body formation20,21. Therefore, cell death is considered to occur after caspase activation within a few minutes22,23.\n\nHowever, we and other groups have demonstrated reversal of early stage apoptosis, such as externalization of phosphatidylserine (PS) in cultured primary cells and cancer cell lines24–27. We have further demonstrated that dying cells can reverse apoptosis even after reaching the generally assumed “point of no return”, such as MOMP-mediated cytochrome c release, caspase activation, DNA damage, nuclear fragmentation, and apoptotic body formation26–28. 
Our observation of apoptosis reversal at late stages is further supported by an independent study, which shows recovery of cells after MOMP29. To detect reversal of apoptosis in live animals, we have further developed a new in vivo caspase biosensor, designated “CaspaseTracker”30, and successfully identified and tracked somatic, germ and stem cells in Drosophila melanogaster that survived transiently induced cell death, and potentially normal development and homeostasis, after caspase activation30,31, the hallmark of apoptosis2,32. We refer to this recovery phenomenon as “anastasis”27, which means “rising to life” in Greek, for the reversal of apoptosis. Anastasis appears to be an intrinsic cell survival phenomenon, as removal of cell death stimuli is sufficient to allow dying cells to recover26–28,30.\n\nThe physiological, pathological and therapeutic importance of anastasis is not yet known. We proposed that anastasis could be an unexpected tactic that cancer cells use to escape cancer therapy26–28. Many tumours undergo dramatic initial responses to cell death-inducing radiation or chemotherapy33–36; however, these cancers often relapse, and metastasis occurs in most types of cancer33–35. Therefore, the ability of cells to recover from transient induction of cell death may allow tumour cells to escape treatment, and survive and proliferate, resulting in relapse26–28. Furthermore, cells may acquire new oncogenic mutations and transformation phenotypes during anastasis27,28, such as through DNA damage caused by apoptotic nucleases. Therefore, anastasis could be one mechanism underlying the observation that repeated tissue injury increases the risk of cancer in a variety of tissues37, such as liver damage due to alcoholism38, chronic thermal injury in the oesophagus induced by the consumption of very hot beverages39–41, evolution of drug resistance in recurrent cancers26–28, and development of a second cancer during subsequent therapy42–45. 
Anastasis can also occur in primary cardiac cells and neuronal cell lines27,28, and potentially in cardiomyocytes in vivo following transient ischemia46. These findings suggest that anastasis is an unexpected cellular protective mechanism. Therefore, uncovering the mechanisms of anastasis may provide new insights into the regulation of cell death and survival, and harnessing this mechanism via suppression or promotion of anastasis would aid treatment of intractable diseases including cancer, heart failure and neurodegeneration.\n\nOur previous study demonstrated reversibility of ethanol-induced apoptosis at late stages in mouse primary liver cells, and revealed that new transcription is important to reverse apoptosis27,28. During recovery, there was up-regulation of genes involved in pro-survival pathways and DNA damage responses (Bag3, Mcl1, Dnajb1, Dnajb9, Hsp90aa1, Hspa1b, Hspb1, and Mdm2)27. Interestingly, inhibiting some of these genes with specific chemical inhibitors significantly suppresses anastasis27. However, the molecular mechanism of anastasis remains to be elucidated. To study the cellular processes of anastasis, we performed time-course RNA microarray analysis to determine the gene expression profiles of cultured mouse primary liver cells undergoing anastasis following exposure to ethanol, and identified unique gene expression patterns during reversal of apoptosis. Here, we present our time-course microarray data, which reveals the molecular signature of anastasis.\n\n\nMethods\n\nMouse primary liver cells were isolated from BALB/c mice using collagenase B and cultured as described27,47. 
The cells were treated with 4.5% ethanol in DMEM/F-12 (DMEM:nutrient mixture F-12) supplemented with 10% fetal bovine serum, 100 U/ml penicillin, and 100 μg/ml streptomycin (Life Technologies, Carlsbad, CA, USA) at 37°C under an atmosphere of 5% CO2/95% air for 5 hours (R0), and then washed and further incubated in fresh culture medium for 3 hours (R3), 6 hours (R6), 24 hours (R24), and 48 hours (R48). Three biological replicates were performed at each time point. The untreated cells served as control (Ctrl). Total RNA from the corresponding cell conditions was harvested using TRIzol Reagent, and RNA was purified using the RNeasy Mini Kit (Qiagen, Cologne, Germany). Reverse transcription was performed using the SABiosciences C-03 RT2 First Strand Kit to construct cDNA (SABiosciences-Qiagen, Frederick, MD, USA). The cDNA samples were analysed using the Illumina MouseWG-6 v2.0 Expression BeadChip (Illumina, San Diego, CA, USA).\n\nThe Partek Genomics Suite 6.6 (Partek, St. Louis, MO, USA) was used for principal component analysis48. The Spotfire DecisionSite 9.1.2 (TIBCO, Palo Alto, CA, USA) platform was used to evaluate the fold change of gene expression levels between time points when compared with a common starting point49. Signal values were converted into log2 space, and quality control tests were performed to ensure data integrity by comparing the signals of the three biological replicates at each time point. The fold change was based on averaged values of the three replicates at each time point; a two-sample Student's t-test was used to determine statistical significance, with p-values of less than 0.05 considered significant, using Partek Genomics Suite v6.5 (Partek Inc., St. Louis, MO, USA).\n\nFor the time-course gene expression analysis using Spotfire, all time points were compared with time point Ctrl, which represents untreated cells. 
Spotfire was used to show the genes that displayed specific changes in gene expression after removal of apoptotic inducers for 3 hours and 6 hours, as well as the genes that were up-regulated from apoptosis (R0) to 6 hours (R6) after removal of the inducer. Genes with a specific and significant change (log2 fold change > 1 or < −1) in expression at the corresponding timepoint are highlighted. Interaction network analysis of the up-regulated genes during anastasis was performed using the GeneMANIA database (http://genemania.org/)50,51.\n\n\nResults and discussion\n\nWe have demonstrated that mouse primary liver cells can reverse the apoptotic process at the execution stage, despite experiencing important checkpoints commonly believed to be the “point of no return”, including caspase-3 activation, DNA damage, and cell shrinkage27,28. To pursue the mechanisms of anastasis, we performed time-course high-throughput microarray analysis to evaluate gene expression profiles during reversal of ethanol-induced apoptosis in mouse primary liver cells. RNA samples were collected from the untreated primary liver cells (Ctrl), the cells treated with 4.5% ethanol for 5 hours when cells exhibited hallmarks of apoptosis (R0), and the treated cells that were then washed and cultured in fresh medium for 3 (R3), 6 (R6), 24 (R24) and 48 (R48) hours. Apoptosis was confirmed previously in the ethanol-treated cells (R0), which displayed hallmarks of apoptosis, including plasma membrane blebbing, cell shrinkage, cleavage of caspase-3 and its substrates, such as PARP and ICAP (Figure 1A and B, images reprinted with permission27). The features of apoptosis vanished after removal of the cell death inducer (R24), indicating recovery of the cells (Figure 1A and B). Three biological replicates were performed at each time point. 
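The fold-change and significance screen described in the Methods can be sketched as follows. This is a minimal illustration with hypothetical replicate values for a single probe, not the authors' actual Spotfire/Partek pipeline: fold change is the difference of mean log2 signals, and a two-sample Student's t-test supplies the p < 0.05 cut-off.

```python
import numpy as np
from scipy import stats

# Hypothetical log2-transformed signals for one probe: three biological
# replicates per condition, matching the study's 3-replicate design.
ctrl = np.array([8.1, 8.0, 8.2])   # untreated control (Ctrl)
r3 = np.array([9.4, 9.6, 9.5])     # 3 h after inducer washout (R3)

# In log2 space, the difference of replicate means is the log2 fold change.
log2_fc = r3.mean() - ctrl.mean()

# Two-sample Student's t-test (equal variances, as in a standard
# two-sample design); significance threshold p < 0.05.
t_stat, p_value = stats.ttest_ind(r3, ctrl)

# A gene is flagged as specifically changed when |log2 FC| > 1
# and the difference is statistically significant.
is_hit = abs(log2_fc) > 1 and p_value < 0.05
```

Applied probe-by-probe across the array, this reproduces the filter used to highlight genes in Figure 3 (absolute log2 fold change > 1, p < 0.05).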
Principal component analysis indicated that all three biological replicates of each time point exhibited a very high correlation, as indicated by clustering, for the dataset of all 18 samples (Figure 2A; see Data availability52). Unsupervised hierarchical clustering confirms the similarity between all the replicates at each time point (Figure 2B; see Data availability52; Supplementary Figure 1).\n\nMouse primary liver cells were treated with 4.5% ethanol for 5 hours (R0) and then washed and cultured in fresh medium for 3 (R3), 6 (R6), 24 (R24), and 48 (R48) hours. The untreated cells served as control (Ctrl). (A) Light microscopy and (B) western blot analysis validated that apoptosis occurred at R0, and anastasis at R24. Cells were collected at the indicated timepoints of (A) for RNA extraction. Gene expression profiling was performed by microarray, and analysed by Spotfire. The images from Figure 1A and B are adapted from Mol Biol Cell 23, 2240–52 (2012)27. Reprinted with permission.\n\nThe three biological replicate samples of microarray data were shown to cluster together using (A) principal component analysis (PCA) and (B) unsupervised hierarchical clustering of the RNA microarray data of the eighteen samples.\n\nGenes that display significant changes in expression during anastasis at the earliest time point of 3 hours, following the removal of the apoptotic inducer, may represent critical first responders of anastasis (Figure 3A, Table 1), including transcription factors of the activator protein-1 (AP-1) family (Atf3, Fos, Fosb, Jun, Junb), the transforming growth factor-β (TGF-β) signalling pathway and its related regulators (Inhba, Snai1, Tgif1, Sox4, Klf4, Klf6, Klf9), a pro-survival Bcl-2 family member (Bag3), an inhibitor of p53 (Mdm2), and anti-proliferative (Btg1), DNA damage (Ddit3, Ddit4) and stress-inducible (Dnajb1, Dnajb9, Herpud1, Hspb1, Hspa1b) responses. 
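The replicate-clustering check behind Figure 2A can be sketched with PCA computed by singular value decomposition. This is a numpy-only illustration on simulated data standing in for the real 18-sample BeadChip matrix; the sample counts, probe count, and noise levels are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log2 expression matrix: 6 samples (3 Ctrl replicates,
# 3 R0 replicates) x 50 probes. Replicates share a condition profile
# plus small technical noise, mimicking the 3-replicate design.
ctrl_profile = rng.normal(8.0, 1.0, size=50)
r0_profile = ctrl_profile + rng.normal(0.0, 2.0, size=50)  # treatment effect
samples = np.vstack(
    [ctrl_profile + rng.normal(0.0, 0.1, size=50) for _ in range(3)]
    + [r0_profile + rng.normal(0.0, 0.1, size=50) for _ in range(3)]
)

# PCA via SVD of the mean-centred matrix; each row of `scores` gives a
# sample's coordinates along the principal components.
centred = samples - samples.mean(axis=0)
u, s, _ = np.linalg.svd(centred, full_matrices=False)
scores = u * s  # PC1 is column 0

# Replicates of the same condition should sit close together on PC1,
# while the two conditions separate clearly - the clustering pattern
# reported for the real dataset in Figure 2A.
ctrl_pc1, r0_pc1 = scores[:3, 0], scores[3:, 0]
within = max(ctrl_pc1.max() - ctrl_pc1.min(), r0_pc1.max() - r0_pc1.min())
between = abs(ctrl_pc1.mean() - r0_pc1.mean())
```

Because the treatment effect dominates the replicate noise, the between-condition separation on PC1 is far larger than the within-condition spread, which is the quality-control signal the authors read off their PCA plot.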
Starting at 6 hours of anastasis, other groups of genes display their peak of transcription, such as those involved in cell cycle arrest (Cdkn1a, Trp53inp1), autophagy (Atg12, Vps37b), and cell migration (Mmp10 and Mmp13) (Figure 3B, Table 2 and Table 3). Expression of potent angiogenic factors, such as Vegfa and Angptl4, peaks at 3 and 6 hours of anastasis, respectively. Changes in expression of most of these genes peak at the 3–6-hour time points after removal of the apoptotic stimulus and then return to baseline (Figure 3A and B; Supplementary Figure 1). Interestingly, certain genes involved in splicing of pre-mRNA (Rnu6) and in growth arrest and DNA repair (Gadd45g) stay up-regulated during both apoptosis and anastasis (Figure 3C; Supplementary Figure 1).\n\nLog2 fold change of gene expression comparison between untreated cells (Ctrl), ethanol-induced apoptotic cells (R0), and induced cells that were then washed and further cultured in fresh medium for 3 (R3), 6 (R6), 24 (R24), and 48 (R48) hours. Genes that displayed (A) specific up-regulation at R3, (B) up- or down-regulation, and (C) up-regulation during R0 to R6, with absolute log2 fold change > 1, are highlighted. The log2 signal values from three biological replicates were averaged (geometric mean) for each time point.\n\nThe change in transcriptional profiles during anastasis provides mechanistic insights into how dying cells could reverse apoptosis (Figure 4). In early anastasis (R3), our microarray data reveals that the regulators of the TGF-β signalling pathway, which control various fundamental cellular processes, including proliferation, cell survival, apoptosis and transformation53–55, are upregulated. The activation of the TGF-β pathway is further supported by the upregulation of AP-1 (Jun-Fos) during early anastasis. The up-regulation of the TGF-β pathway also promotes the expression of murine double minute 2 (Mdm2)56,57, an inhibitor of p53 that is also up-regulated during early anastasis27. 
As p53 plays a critical role in regulating apoptosis and DNA repair58,59, the expression of Mdm2 could not only promote cell survival by inhibiting p53-mediated cell death, but also cause mutations, as we have observed in cells after anastasis27. Expression of Mdm2 can also activate XIAP60, which inhibits caspases 3, 7 and 961–66, and therefore, could promote anastasis by suppressing the caspase-mediated cell destruction process. Up-regulation of the anti-apoptotic BCL2 protein (Bag3) and heat shock proteins (Hsps) during anastasis can also neutralize pro-apoptotic proteins to promote cell recovery67–69. Notably, Bbc3 is a pro-apoptotic BH3-only gene encoding PUMA (p53 upregulated modulator of apoptosis)70,71. Its expression peaks during anastasis (R3–R6), suggesting a balance between anastasis and apoptosis signals in the recovering cells during the early stage of the recovery process.\n\nThe 33 up-regulated genes during anastasis were selected for analysis using GeneMANIA.\n\nTo reverse apoptosis, the recovering cells need to remove or recycle the destroyed cellular components, such as the toxic or damaged proteins that are cleaved by caspases, and dysfunctional organelles like the permeabilized mitochondria. Autophagy could contribute to anastasis, as the recovering cells display up-regulation of Atg12 (Figure 3B, Table 2), which is important for the formation of the autophagosome, which engulfs materials that are then transported to lysosomes or vacuoles for degradation72–75. Recent studies reveal that autophagy can be activated by the DNA damage response, and plays a role in maintaining nuclear and mitochondrial genomic integrity through DNA repair and removal of micronuclei and damaged nuclear parts76,77. This could suppress the mutagenesis and oncogenic transformation observed in cells that reverse apoptosis after DNA damage27,28. 
Autophagy is also implicated in the exosome secretory pathway78–80, which could allow rapid clearance of damaged or toxic materials during anastasis through exosomes. Interestingly, our microarray data shows that the recovering cells display up-regulation of potent angiogenic factors such as Vegfa and Angptl4 (Figure 3A and B, Table 1 and Table 2), which promote vascular permeability and angiogenesis81–84. This could facilitate anastasis by supplying nutrients and clearing waste products. However, this could also enhance tumour progression and metastasis when anastasis occurs in cancer cells. In fact, our data also reveals the up-regulation of genes involved in cell migration during anastasis27, such as Mmp10 and Mmp13, which encode matrix metalloproteinases85–88. This could be a stress-inducible response that promotes cell migration, as was observed in HeLa cells after anastasis28, which might contribute to wound healing, or metastasis during cancer recurrence89,90.\n\nArresting the cell cycle during anastasis is important, as it can allow damaged cells to be repaired before they resume proliferation. This hypothesis is supported by the microarray data, which reveals up-regulation of genes that suppress the cell cycle (Figure 3A–C). For example, B-cell translocation gene 1 (Btg1) is an anti-proliferative gene91,92, which is up-regulated during early anastasis (R3). At a later stage of anastasis (R6), other cell cycle inhibitors are expressed, including Cdkn1a, which encodes p21, an inducer of cell cycle arrest and senescence93–95, and Trp53inp1, which encodes tumor protein p53-inducible nuclear protein 1 and can arrest the cell cycle independently of p53 expression96. These findings suggest that the cell cycle is suppressed by multiple pathways during anastasis.\n\nWe also identified genes that are up-regulated both during apoptosis and anastasis, such as Gadd45g and Rnu6 (Figure 3C, Table 4). 
Gadd45g functions in growth arrest and DNA repair97,98, and therefore could be a cytoprotective mechanism that preserves dying cells during cell death induction (R0) and promotes repair of injured cells when the environment improves (R3 and R6). Rnu6 encodes U6 small nuclear RNA, which is important for splicing of mammalian pre-mRNA99–102. Upregulation of Rnu6 from R0 to R6 suggests that post-transcriptional regulation could be involved in apoptosis and anastasis. In fact, translational regulation also contributes to anastasis. For example, caspase-3, PARP and ICAP are cleaved in dying cells during apoptosis, and the non-cleaved forms of the corresponding proteins are restored after anastasis (Figure 1B). Interestingly, the mRNA levels of caspase-3 and PARP did not show a significant increase (see Data availability52). This suggests that translational regulation occurs during anastasis.\n\nOur study provides new insights into the mechanisms and consequences of anastasis (Figure 5). Researchers can analyse our microarray data to further identify the hallmarks of anastasis, understand its role, elucidate molecular mechanisms that reverse apoptosis, and develop therapeutic strategies to control anastasis. To identify the genes that display specific changes at the transcriptional level, software such as Spotfire can be used to view the gene expression pattern at different time points during the reversal of apoptosis49. To study the molecular mechanism of anastasis, Ingenuity Pathway Analysis can be used to create mechanistic hypotheses according to the transcriptional profile103. To identify drugs that modulate anastasis, Connectivity Map can be used to identify small molecules that promote or suppress anastasis based on its gene expression signature104,105. 
Anastasis could be a cell survival phenomenon mediated by multiple pathways26–28,30, so by comparing the gene expression profiles, researchers can study its potential connection to other cellular processes, such as anti-apoptotic pathways, autophagy, and stress-inducible responses75,106–110. By searching the molecular signature of anastasis, researchers can study its potential contribution to pathophysiological conditions, such as metastasis during cancer recurrence, recovery from heart failure and wound healing89,90,111. Further data analysis will stimulate generation of hypotheses for future studies involving anastasis. As our understanding of the mechanism of anastasis expands, it will uncover potential impacts on physiology and pathology, and offer exciting new therapeutic opportunities for intractable diseases by modulating cell death and survival (Figure 6).\n\n\nData availability\n\nRaw data for Tang et al., 2016 “Molecular signature of anastasis for reversal of apoptosis” available at doi: 10.6084/m9.figshare.450273252\n\n(http://dx.doi.org/10.6084/m9.figshare.4502732)", "appendix": "Author contributions\n\n\n\nH.L.T., H.M.T. and M.C.F. conceived the idea and designed the research; H.L.T. and H.M.T. wrote the article and conducted the analyses together with C.T.J. and M.C.F. All authors agreed to the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Life Science Research Foundation fellowship (H.L.T.).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank J. Marie Hardwick for valuable advice on this work, and the Johns Hopkins Deep Sequencing and Microarray Core Facility for data analysis. 
Ho Lam Tang is a Shurl and Kay Curci Foundation Fellow of the Life Sciences Research Foundation.\n\n\nSupplementary material\n\nSupplementary Figure 1: Genes display unique expression patterns at each timepoint. Output of all genes analysed by Spotfire (K cluster, also see Data availability52).\n\nClick here to access the data.\n\n\nReferences\n\nKerr JF, Wyllie AH, Currie AR: Apoptosis: a basic biological phenomenon with wide-ranging implications in tissue kinetics. Br J Cancer. 1972; 26(4): 239–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRiedl SJ, Shi Y: Molecular mechanisms of caspase regulation during apoptosis. Nat Rev Mol Cell Biol. 2004; 5(11): 897–907. PubMed Abstract | Publisher Full Text\n\nGreen DR, Kroemer G: The pathophysiology of mitochondrial cell death. Science. 2004; 305(5684): 626–9. PubMed Abstract | Publisher Full Text\n\nChipuk JE, Bouchier-Hayes L, Green DR: Mitochondrial outer membrane permeabilization during apoptosis: the innocent bystander scenario. Cell Death Differ. 2006; 13(8): 1396–402. PubMed Abstract | Publisher Full Text\n\nKroemer G, Galluzzi L, Vandenabeele P, et al.: Classification of cell death: recommendations of the Nomenclature Committee on Cell Death 2009. Cell Death Differ. 2009; 16(1): 3–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGalluzzi L, Vitale I, Abrams JM, et al.: Molecular definitions of cell death subroutines: recommendations of the Nomenclature Committee on Cell Death 2012. Cell Death Differ. 2012; 19(1): 107–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolland AJ, Cleveland DW: Chromoanagenesis and cancer: mechanisms and consequences of localized, complex chromosomal rearrangements. Nat Med. 2012; 18(11): 1630–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang X: The expanding role of mitochondria in apoptosis. Genes Dev. 2001; 15(22): 2922–33. 
PubMed Abstract\n\nGalluzzi L, Kepp O, Kroemer G: Mitochondria: master regulators of danger signalling. Nat Rev Mol Cell Biol. 2012; 13(12): 780–8. PubMed Abstract | Publisher Full Text\n\nDu C, Fang M, Li Y, et al.: Smac, a mitochondrial protein that promotes cytochrome c-dependent caspase activation by eliminating IAP inhibition. Cell. 2000; 102(1): 33–42. PubMed Abstract | Publisher Full Text\n\nVerhagen AM, Ekert PG, Pakusch M, et al.: Identification of DIABLO, a mammalian protein that promotes apoptosis by binding to and antagonizing IAP proteins. Cell. 2000; 102(1): 43–53. PubMed Abstract | Publisher Full Text\n\nSusin SA, Lorenzo HK, Zamzami N, et al.: Molecular characterization of mitochondrial apoptosis-inducing factor. Nature. 1999; 397(6718): 441–6. PubMed Abstract | Publisher Full Text\n\nMiramar MD, Costantini P, Ravagnan L, et al.: NADH oxidase activity of mitochondrial apoptosis-inducing factor. J Biol Chem. 2001; 276(19): 16391–8. PubMed Abstract | Publisher Full Text\n\nJoza N, Susin SA, Daugas E, et al.: Essential role of the mitochondrial apoptosis-inducing factor in programmed cell death. Nature. 2001; 410(6828): 549–54. PubMed Abstract | Publisher Full Text\n\nLi LY, Luo X, Wang X: Endonuclease G is an apoptotic DNase when released from mitochondria. Nature. 2001; 412(6842): 95–9. PubMed Abstract | Publisher Full Text\n\nLüthi AU, Martin SJ: The CASBAH: a searchable database of caspase substrates. Cell Death Differ. 2007; 14(4): 641–50. PubMed Abstract | Publisher Full Text\n\nLiu X, Zou H, Slaughter C, et al.: DFF, a heterodimeric protein that functions downstream of caspase-3 to trigger DNA fragmentation during apoptosis. Cell. 1997; 89(2): 175–84. PubMed Abstract | Publisher Full Text\n\nEnari M, Sakahira H, Yokoyama H, et al.: A caspase-activated DNase that degrades DNA during apoptosis, and its inhibitor ICAD. Nature. 1998; 391(6662): 43–50. 
PubMed Abstract | Publisher Full Text\n\nMukae N, Enari M, Sakahira H, et al.: Molecular cloning and characterization of human caspase-activated DNase. Proc Natl Acad Sci U S A. 1998; 95(16): 9123–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nColeman ML, Sahai EA, Yeo M, et al.: Membrane blebbing during apoptosis results from caspase-mediated activation of ROCK I. Nat Cell Biol. 2001; 3(4): 339–45. PubMed Abstract | Publisher Full Text\n\nOrlando KA, Stone NL, Pittman RN: Rho kinase regulates fragmentation and phagocytosis of apoptotic cells. Exp Cell Res. 2006; 312(1): 5–15. PubMed Abstract | Publisher Full Text\n\nTyas L, Brophy VA, Pope A, et al.: Rapid caspase-3 activation during apoptosis revealed using fluorescence-resonance energy transfer. EMBO Rep. 2000; 1(3): 266–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTakemoto K, Nagai T, Miyawaki A, et al.: Spatio-temporal activation of caspase revealed by indicator that is insensitive to environmental effects. J Cell Biol. 2003; 160(2): 235–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHammill AK, Uhr JW, Scheuermann RH: Annexin V staining due to loss of membrane asymmetry can be reversible and precede commitment to apoptotic death. Exp Cell Res. 1999; 251(1): 16–21. PubMed Abstract | Publisher Full Text\n\nGeske FJ, Lieberman R, Strange R, et al.: Early stages of p53-induced apoptosis are reversible. Cell Death Differ. 2001; 8(2): 182–91. PubMed Abstract | Publisher Full Text\n\nTang HL, Yuen KL, Tang HM, et al.: Reversibility of apoptosis in cancer cells. Br J Cancer. 2009; 100(1): 118–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTang HL, Tang HM, Mak KH, et al.: Cell survival, DNA damage, and oncogenic transformation after a transient and reversible apoptotic response. Mol Biol Cell. 2012; 23(12): 2240–52. 
[ { "id": "19354", "date": "20 Jan 2017", "name": "Takafumi Miyamoto", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study unravels the gene regulatory network that seems to be involved in the process of anastasis. It is interesting that the authors found various genes that appear to participate in ethanol-induced anastasis, suggesting that the dynamic reconstitution of gene regulatory networks might be a prerequisite for rescuing cells from the brink of cell death. Overall, this work is worth being indexed. However, I would like to see the following points in the research addressed, before approval:\nAnastasis is a developing concept rather than an established one. It would be better to show the expression dynamics of caspase-3, PARP, and ICAD at all analyzed time points (Cont, R0, R3, R6, R24, and R48). In addition, why don't the authors show apoptotic DNA fragmentation to make sure that all the analyzed cells in the anastasis stage definitely underwent apoptosis?\n\nI may have missed noting this, but there is no statistical analysis of the gene expression changes observed in the microarray data. In Fig. 2B, the expression levels of several genes seem different in the same time point replicates. 
It would be better to show the genes that were induced or suppressed during anastasis, along with the statistical significance of the differences.\n\nGiven the importance of understanding the mechanism of anastasis, it would be better to verify the data obtained from microarray analysis by using quantitative PCR or Western blotting.", "responses": [ { "c_id": "2467", "date": "09 Feb 2017", "name": "Ho Lam Tang", "role": "Author Response", "response": "We thank the reviewer for their enthusiasm and valuable input, and have made the following changes: We have included the Western blot data (Figure 1B), which shows that caspase-3, PARP and ICAD were cleaved during apoptosis, but then recovered to their original levels at 24 hours after removal of the cell death stimulus. Interestingly, our microarray data show that their mRNA levels showed no significant change at any of the time points (3, 6, 24 and 48 hours) after removal of the cell death stimulus, compared with the untreated (control) cells (data available at figshare, please see Data availability in the manuscript), suggesting that the recovery of the corresponding proteins is driven by translational regulation during and after anastasis. The related data and discussion are included in our revised manuscript. Our earlier studies using time-lapse live cell microscopy and the comet assay demonstrated that the current apoptotic induction (4.5% ethanol, 5 hours) can trigger DNA damage. After removal of the stimulus, the majority of the dying cells can recover. Interestingly, some cells that reversed apoptosis display chromosomal abnormalities and oncogenic transformation, indicating reversibility of apoptosis after DNA damage. In our current study, we further found a significant reduction in the mRNA levels of multiple histone genes during anastasis. Notably, cellular histone levels decrease in response to DNA damage, so as to enhance DNA repair. 
Therefore, reduced expression of histones during anastasis could be a sign of cells recovering from DNA damage after apoptosis.\n\nWe have included supplementary data with the corresponding p-values for the statistical significance of the fold changes across all three biological replicates of each gene (see Data availability). The software for the microarray data analysis is described in the “Materials and methods” section.\n\nWe have verified our data by RT-PCR in the human liver cancer HepG2 cell line, and included the data in the new Figure 4." } ] }, { "id": "19503", "date": "30 Jan 2017", "name": "Sanzhen Liu", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe manuscript by Tang et al. was focused on the elucidation of the molecular mechanisms of an important phenomenon, anastasis, through time-course expression profiling. Anastasis was recently discovered and has not been fully studied yet. Its molecular basis remains to be uncovered. The study provided useful information to better understand this underexplored process. Overall, the experiment was well designed. The time course experiment included six time points, untreated samples as the control, toxin-induced apoptosis, and four time points after removal of the toxin. Three biological replicates were performed at each time point. Figure 1 illustrated the experimental design very well. The biological interpretation of the microarray results is reasonable. The reviewer has no major concerns. 
However, several minor changes are needed, especially to the presentation of the figures, which could be improved.\n\nFirst, no multiple test correction was mentioned in the microarray analysis section. It was described that a p-value less than 0.05 was used to declare statistical significance. The reviewer would suggest the authors confirm that. A false discovery rate (FDR) method is needed for multiple test correction.\n\nSecond, the PCA result in Figure 2A showed that the three biological replicates were closely clustered, indicating good repeatability. However, the goal of PCA is not just to check the repeatability of the three replicates of each group (time point); PCA can also be used to examine the relationships among groups. My recommendation is that the authors provide more description of the PCA result. In addition, in Figure 2A, the percentage of total variation explained by PC2 was masked, but based on the value of PC3, it should be greater than 5.89%. Given the high value of PC1, I would suggest plotting a two-dimensional PCA plot to display the result, or re-plotting this three-dimensional plot.\n\nThird, it would be useful to list the number of significantly differentially expressed genes for each comparison. And I guess the clustering result in Figure 3 presented all significant genes.\n\nFigure 4 showed some interesting results about gene interactions. I did not see enough description of this figure in the main text.\n\nEditorial comments: In the Abstract, “whole genome” can be replaced by “genome-wide”.", "responses": [] } ]
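The reviewer's request for a false discovery rate correction refers to a standard procedure; a minimal sketch of the Benjamini-Hochberg method is shown below. The p-values are illustrative placeholders, not values from the study, and the study's own analysis software may implement this differently.

```python
# Minimal Benjamini-Hochberg FDR correction, as the reviewer suggests for
# the per-gene microarray comparisons. The p-values below are illustrative
# placeholders, not data from the study.

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (q-values) in the original input order."""
    m = len(pvals)
    # Indices of p-values sorted ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvals[i] * m / rank)
        adjusted[i] = q
        prev = q
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
qvals = benjamini_hochberg(pvals)
# Genes passing a 5% FDR threshold after correction.
significant = [p for p, q in zip(pvals, qvals) if q < 0.05]
```

Note that only the two smallest raw p-values survive the 5% FDR threshold here, even though five of the eight are below 0.05 uncorrected, which is exactly the distinction the reviewer is asking the authors to confirm.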
1
https://f1000research.com/articles/6-43
https://f1000research.com/articles/6-118/v1
08 Feb 17
{ "type": "Research Note", "title": "Health care and social media: What patients really understand", "authors": [ "Kyle Hoedebecke", "Lindsey Beaman", "Joy Mugambi", "Sanam Shah", "Marwa Mohasseb", "Cheyanne Vetter", "Kim Yu", "Irini Gergianaki", "Emily Couvillon", "Lindsey Beaman", "Joy Mugambi", "Sanam Shah", "Marwa Mohasseb", "Cheyanne Vetter", "Kim Yu", "Irini Gergianaki", "Emily Couvillon" ], "abstract": "Background: Low health literacy is associated with decreased patient compliance and worse outcomes - with clinicians increasingly relying on printed materials to lower such risks. Yet, many of these documents exceed recommended comprehension levels. Furthermore, patients look increasingly to social media (SoMe) to answer healthcare questions. The character limits built into Twitter encourage users to publish small quantities of text, which are more accessible to patients with low health literacy. The present authors hypothesize that SoMe posts are written at lower grade levels than traditional medical sources, improving patient health literacy. Methods: The data sample consisted of the first 100 original tweets from three trending medical hashtags, leading to a total of 300 tweets. The Flesch-Kincaid Readability Formula (FKRF) was used to derive grade level of the tweets. Data was analyzed via descriptive and inferential statistics. Results: The readability scores for the data sample had a mean grade level of 9.45. A notable 47.6% of tweets were above ninth grade reading level. An independent-sample t-test comparing FKRF mean scores of different hashtags found differences between the means of the following: #hearthealth versus #diabetes (t = 3.15, p = 0.002); #hearthealth versus #migraine (t = 0.09, p = 0.9); and #diabetes versus #migraine (t = 3.4, p = 0.001). Conclusions: Tweets from this data sample were written at a mean grade level of 9.45, signifying a level between the ninth and tenth grades. 
This is higher than desired, yet still better than traditional sources, which have been previously analyzed. Ultimately, those responsible for health care SoMe posts must continue to improve efforts to reach the recommended reading level (between the sixth and eighth grade), so as to ensure optimal comprehension by patients.", "keywords": [ "Social Media", "Twitter", "Web 2.0", "health literacy", "patient comprehension" ], "content": "Introduction\n\nHealth literacy - defined as the degree to which an individual has the capacity to obtain, communicate, process, and understand basic health information and services to make appropriate health decisions - is considered to be the single best predictor of an individual’s health status (http://www.cdc.gov/healthliteracy/learn/)1. Low health literacy correlates with decreased patient compliance and poorer outcomes, leading to an increase in clinician reliance on printed materials to mitigate such risks2. Yet, a recent study identified that many of these materials exceed the recommended sixth to eighth grade reading level of the American Medical Association (AMA), National Institutes of Health (NIH) and Centers for Disease Control and Prevention (CDC) (http://www.nlm.nih.gov/medlineplus/etr.html; http://www.cdc.gov/DHDSP/cdcynergy_training/Content/activeinformation/resources/simpput.pdf)3,4. As medical vocabulary becomes more integrated into social media (SoMe), the healthcare community must remember to employ comprehensible language when engaging audiences through platforms such as Facebook, Twitter, and LinkedIn.\n\nPatients are increasingly relying on SoMe as a primary avenue for answering healthcare questions5,6. This may be due in part to the character limits built into Twitter, which encourage users to publish small chunks of text that are more accessible to patients with low health literacy7. 
As health literacy directly impacts patient outcomes, it remains imperative for healthcare providers to intentionally tailor the writing level of their SoMe posts to enhance patient-centred communication and comprehension.\n\nThe present authors hypothesized that SoMe posts on the Twitter platform are written at a lower grade level than traditional medical sources, allowing for better patient health literacy.\n\n\nMethods\n\nThe data sample consisted of the first 100 original tweets in 2016, obtained via the pay-to-access Symplur Signal analytics tools (http://www.symplur.com/signals/), from each of the March 2016 top trending hashtags: #hearthealth, #diabetes and #migraines, leading to a total of 300 tweets being analyzed. Trending hashtags related to primary care were selected, as these tweets would have the greatest impact and overall reach worldwide. Exclusion criteria included non-English or non-medical tweets, as well as those containing links to non-medical webpages or product advertisements.\n\nThe Flesch-Kincaid Readability Formula (FKRF) is a validated tool for assessing the grade level of written material and is calculated with the following formula: 0.39 (total words/total sentences) + 11.8 (total syllables/total words) - 15.59. (The related formula 206.835 - 1.015 (total words/total sentences) - 84.6 (total syllables/total words) yields the Flesch Reading Ease score, a 0-100 scale rather than a grade level.) The FKRF Grade Level Scores can be interpreted as shown in Table 1, ranging from the fifth grade to graduate level8. Each tweet was evaluated via the FKRF to derive its grade level. SPSS (version 21.0 for Mac; http://www.ibm.com/analytics/us/en/technology/spss/) was used for data analysis, and data were analyzed using descriptive and inferential statistics. Descriptive statistics included the mean with 95% confidence interval, median, range and standard deviation of FKRF scores. All p values were derived from two-sided t-tests. 
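The grade-level computation described in the Methods can be sketched as follows, using the Flesch-Kincaid Grade Level formula (0.39 x words/sentences + 11.8 x syllables/words - 15.59). This is only a rough illustration: the vowel-group syllable counter is a naive heuristic, the example tweet is hypothetical, and the study presumably used a validated readability implementation.

```python
# Rough sketch of a Flesch-Kincaid Grade Level calculation for one tweet.
# The syllable counter is a naive vowel-group heuristic; validated
# readability tools use dictionaries and exception rules and are more
# accurate. The example tweet is hypothetical, not from the study's data.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (crude but illustrative)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade_level(text):
    # Count sentence-ending punctuation runs; assume at least one sentence.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid Grade Level formula.
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

tweet = "Eating more vegetables can lower your blood pressure."
grade = fk_grade_level(tweet)
```

Very short, simple sentences can produce grade levels below zero, which is expected behavior for this formula; the study's reported mean of 9.45 sits near the top of the tweet-length range such a calculation typically yields.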
The project was approved by Stanford’s IRB and Medical Ethics Team, as part of the 2016 Stanford MedX/Symplur Social Media Competition.\n\n\nResults\n\nThe readability scores for the 300 tweets evaluated are presented in Table 2. The mean FKRF grade level was 9.45, signifying a level between the ninth and tenth grades. A notable 47.6% of tweets were above the ninth grade reading level (Table 2). There was a wide range of FKRF scores, as shown in Table 3, varying from elementary to postgraduate levels.\n\nIndependent-sample t-tests comparing the FKRF mean scores of the different hashtags gave the following results: #hearthealth versus #diabetes (t = 3.15, p = 0.002); #hearthealth versus #migraine (t = 0.09, p = 0.9); and #diabetes versus #migraine (t = 3.4, p = 0.001). There was therefore a significant difference between the means of two pairs of groups: #hearthealth versus #diabetes, and #diabetes versus #migraine. Although it is unclear why these differences exist, this indicates that grade-level readability varies significantly among tweets addressing different health issues. One explanation could be the differing characteristics of the tweet authors and their health care experience. Additionally, the differing incidences of migraines and heart disease may affect the availability of reading materials, as well as the grade level at which each is written.\n\n\nDiscussion\n\nSoMe - especially Twitter - is a cost-effective, interactive communication tool with increasing applicability within the medical sector9. Although the limited health literacy of the audience poses a real threat to disseminating health messages, few studies have examined the readability of Twitter healthcare posts for the general public. In the present study, the authors found that a Twitter sample (n=300) was written at a mean FKRF grade level of 9.45, signifying a level between the ninth and tenth grades (Table 1). 
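The independent-samples comparisons reported in the Results can be sketched with a pooled-variance t statistic, the default for an independent-samples t-test in packages such as SPSS. The score lists below are hypothetical stand-ins, not the study's FKRF data.

```python
# Sketch of the pooled-variance independent-samples t statistic used to
# compare mean FKRF grade levels between two hashtags. The score lists are
# hypothetical placeholders, not the study's data.
from statistics import mean, variance

def pooled_t(a, b):
    """t statistic for the difference in means, assuming equal variances."""
    na, nb = len(a), len(b)
    # Pooled (weighted) sample variance across both groups.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    # Standard error of the difference in means.
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (mean(a) - mean(b)) / se

heart = [8.2, 9.1, 10.4, 7.9, 9.8]       # hypothetical FKRF grade levels
diabetes = [10.6, 11.2, 9.9, 12.1, 10.8]  # hypothetical FKRF grade levels
t = pooled_t(heart, diabetes)
```

A negative t here simply reflects the first group's lower mean; the p-value would then be read from a t distribution with na + nb - 2 degrees of freedom, which is where a library routine (rather than this sketch) would normally be used.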
This outcome is much closer to the NIH readability goal than the findings of previous studies, which found patient medical consent forms to be written at the eleventh to thirteenth grade levels (three to five grades higher than the current NIH recommendation), and found that major associations’ websites and educational materials were written above the recommended reading level (http://www.nlm.nih.gov/medlineplus/etr.html).\n\nOne potential reason for this outcome lies in Twitter’s character limit itself, which permits only 140 characters. This may prove a double-edged sword: the limitation creates a more manageable length, but also forces the composer to employ more concise terminology carrying a more complex readability factor. Given the increasing number of Twitter users, readability should be further evaluated to support meaningful health messaging, diminish disparities in comprehension, and ease patients’ difficulties in understanding and following instructions and recommendations.\n\nThis study has some limitations, including the relatively small sample, and the use of a single readability scale and a single SoMe platform. On the other hand, there are major strengths, as our study provides an updated look at the readability of Web 2.0 communication tools. The findings highlight the possibility that Twitter can be a way of meeting the readability guidelines, as compared to written educational materials or online materials on websites. Twitter was used as a model, but more SoMe platforms should be evaluated, so that guidelines can be shaped to address the unmet needs of health communication in the modern era. 
Ultimately, those responsible for health care posts on SoMe and other relevant platforms must continue to improve efforts to reach the recommended reading level, so as to ensure optimal comprehension and enhance the capacity of patients and doctors to interact with each other.\n\n\nConclusions\n\nThe sample studied suggests that health care SoMe posts allow for better patient health literacy than traditional medical sources. Health care advocates must remain vigilant, so that posts improve upon current readability levels. Lastly, respectable medical sources should consider additional use of SoMe avenues to dispense more comprehensible health care information to a wider patient audience.\n\n\nData availability\n\nDataset 1: The 300 tweets analysed by the present study, divided by #migraine, #hearthealth and #diabetes. doi, 10.5256/f1000research.10637.d15043710\n\nDataset 2: Raw data for SPSS. doi, 10.5256/f1000research.10637.d15043811", "appendix": "Author contributions\n\nKH - lead author, team leader, wrote background; LB - data collection and grade level interpretation, wrote the results section; SS - statistical analysis, wrote the methods/results sections; KM - data collection, wrote the results section; EC - literature search, background writer; MM - statistical analysis, wrote the methods/results sections; JM - data collection, wrote the results section; CV - data collection, wrote the results section; IG - statistical analysis, wrote the results/conclusion section. All authors edited and approved the final content of the manuscript.\n\n\nCompeting interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nA presentation of the results of this study was a global semi-finalist in the 2016 Stanford-MedX/Symplur Healthcare Social Media Competition. 
Accepted for presentation at the March 2017 Uniformed Service Academy of Family Physicians Conference (Seattle, USA).\n\n\nReferences\n\nBaker DW, Parker RM, Williams MV, et al.: The relationship of patient reading ability to self-reported health and use of health services. Am J Public Health. 1997; 87(6): 1027–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerkman ND, Sheridan SL, Donahue KE, et al.: Health literacy interventions and outcomes: An updated systematic review. Evid Rep Technol Assess (Full Rep). 2011; (199): 1–941. PubMed Abstract | Free Full Text\n\nBerry-Caban CS, Portee CL, Beaman LA, et al.: Readability of research consent forms in a military treatment facility. Ethics & Medicine: An International Journal of Bioethics. 2014; 30(2): 109–117. Reference Source\n\nWeiss B: Health Literacy: A Manual for Clinicians. Chicago, IL: American Medical Association, American Medical Foundation, 2003. Reference Source\n\nFox S: The social life of health information, 2011. Pew Research Center: Internet, Science & Tech, 2011. Reference Source\n\nMcInnes N, Haglund BJ: Readability of online health information: Implications for health literacy. Inform Health Soc Care. 2011; 36(4): 173–189. PubMed Abstract | Publisher Full Text\n\nBell J III, Mertz J: Hashtags and health literacy: How social media transforms engagement. HIMSS Patient Literacy and Health IT Work Group. 2015. Reference Source\n\nCalderón JL, Morales LS, Liu H, et al.: Variation in the readability of items within surveys. Am J Med Qual. 2006; 21(1): 49–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee JY, Sundar SS: To tweet or to retweet? That is the question for health professionals on twitter. Health Commun. 2013; 28(5): 509–524. PubMed Abstract | Publisher Full Text\n\nHoedebecke K, Beaman L, Mugambi J, et al.: Dataset 1 In: Health care and social media: What patients really understand. F1000Research. 2017. 
Data Source\n\nHoedebecke K, Beaman L, Mugambi J, et al.: Dataset 2 In: Health care and social media: What patients really understand. F1000Research. 2017. Data Source" }
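The record above reports tweet readability by school grade level, and the reviews refer to the Flesch-Kincaid readability formula (FKRF). The study's own scoring code is not included in the record; as an illustrative sketch only, the standard Flesch-Kincaid grade-level formula can be applied with a rough heuristic syllable counter (both helper names are ours, not the paper's; real readability tools use dictionary-based syllable counts):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels and drop a
    # silent trailing 'e'. Accurate tools use pronunciation dictionaries.
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

A score near 6 corresponds to the sixth-grade reading level often recommended for patient materials; short, common-word tweets tend to score lower than dense clinical prose.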
[ { "id": "20061", "date": "07 Mar 2017", "name": "Shailendra Prasad", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI compliment the authors on this study. It is fascinating to see the group from various parts of the world collaborate on this project. - The project/manuscript is a much-needed one with the rapid growth of social media in society. There are indications that access to social media platforms has increased dramatically in the last decade and further democratizes information sharing. - The authors use a novel approach to look at the content of various social media posts on Twitter regarding health and determine that the readability of the posts is much better than that of conventional health education methods. - The article is well written and easily readable.\nI have a couple of minor corrections to recommend. 1) Highlight that this analysis is particular to Twitter. The authors have done this, but I would recommend reinforcing this as the title indicates Social Media (which has different platforms). 2) The issue then is possibly of brevity rather than the platform. I would recommend that the authors highlight this in the discussion too.
3) I recommend that the authors highlight the other issues in understanding health-related information, including the assimilation of the information and the behavior change that needs to follow, and that further studies should look at this.\nAgain, my compliments to the authors.", "responses": [] }, { "id": "20791", "date": "30 Mar 2017", "name": "Shabir A. H. Moosa", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nCongrats to this global set of young authors from the Wonca circle! Really good application of research method to an important issue.\n\nThere are some suggestions that would help the authors.\n\n1. The authors say “present authors hypothesize that SoMe posts are written at lower grade levels than traditional medical sources, improving patient health literacy”. The research method, results, and discussion do not reflect that, with no clarity as to what ‘traditional medical sources’ are compared with. The previous studies under discussion are referenced to the MedlinePlus webpage on “How to write…”, with no clear evidence of the readability scores of ‘traditional medical sources’ under the FKRF. Printed material and websites would have been a useful comparator vs. consent forms. The comparison of different hashtags and statistical tests seems irrelevant/tangential to the hypothesis or enquiry.\n\n2. The choice of Twitter and the assumptions about it (i.e. small chunks of text making it more accessible) are contradicted later, when Twitter's limitations regarding a complex readability factor are attributed to those same small chunks. It would have been useful to relate this to global use of SoMe platforms, by population and possibly patients.\n\n3. The method of using the FKRF is not clearly articulated and seems difficult to relate to the Table 2 readability score.\n\n4. The conclusion “The sample studied identifies that health care SoMe posts allow for better patient health literacy than traditional medical sources.” seems largely unsupported.\n\nKeep it up.", "responses": [] } ]
1
https://f1000research.com/articles/6-118
https://f1000research.com/articles/6-117/v1
08 Feb 17
{ "type": "Research Note", "title": "A novel educational module to teach neural circuits for college and high school students: NGSS-neurons, genetics, and selective stimulations", "authors": [ "Zana Majeed", "Felicitas Koch", "Joshua Morgan", "Heidi Anderson", "Jennifer Wilson", "Robin L. Cooper" ], "abstract": "This report introduces various approaches to target defined neural pathways for stimulation and to address the effect of particular neural circuits on behavior in a model animal, the fruit fly (Drosophila melanogaster). The novel educational module described can be used to explain and address principal concepts in neurobiology for high school and college level students. A goal of neurobiology is to show how neural circuit activity controls corresponding behavior in animals. The fruit fly model system provides powerful genetic tools, such as the UAS-Gal4 system, to manipulate expression of non-native proteins in various populations of defined neurons: glutamatergic, serotonergic, GABAergic, and cholinergic. The behaviors exhibited in the examples we provide allow teachers and students to address questions ranging from behaviors to details at a cellular level. We provide example sets of data, obtained in a research lab, as well as ideas on ways to present data for participants and instructors. The optogenetic tool channelrhodopsin-2 (ChR2) is employed to increase the activity of each population of neurons in a spatiotemporally controlled manner in behaving larvae and adult flies. Various behavioral assays are used to observe the effect of activating a specific neuron population on crawling behavior in larvae and climbing behavior in adult flies. Participants using this module become acquainted with the actions of different neurotransmitters in the nervous system.
A pre- and post-assessment survey on the content is provided for teachers, as a template, to address learning of content and concepts.", "keywords": [ "Optogenetics", "locomotor activity", "Serotonin (5-HT)", "Glutamate", "GABA", "acetylcholine" ], "content": "Introduction\n\nControlling the activity in neural circuits, while monitoring the effects on acute and chronic behaviors, is a means of addressing the function of defined neural pathways. This concept is related to the more common pharmacological approaches, which can accomplish some of the same tasks but in some cases with less precision. To selectively manipulate a subset of neurons within an organism that is easy to rear and maintain, Drosophila combined with optogenetics is ideal. The animal model and exercises provided allow for a ‘hands on’ inquiry-based learning module for high school and college courses, which emphasize life science topics. The current article presents a teaching module that is designed to integrate modern genetics, engineering, physics, life sciences, modeling and experimental design for use with high school and college students. Researching the primary scientific literature and utilizing the related findings, as well as postulating the outcome for newly designed experiments based on the results one collects, allows students to formulate hypotheses and test their own predictions. This approach provides autonomous learning among student groups. Obtaining measurable, quantitative outcomes for analysis and interpretation is a valuable learning experience. Based on one’s findings in the initial experiments, one can readily redesign experimental paradigms to test the formulated hypotheses utilizing one’s own prior data. The integration with Arduino hardware and software opens the doors for students to a world of writing code with an experimental purpose, and independence in experimental design.\n\nThe underlying science in the module proposed by the current article focuses on neurobiology.
The seminal discoveries by Hubel & Wiesel (1970) demonstrated that activity in sensory input and within the central nervous system (CNS) is indispensable in the development and maintenance of neural circuits. This concept is also essential for the development and maintenance of synaptic establishment at the neuromuscular junction (NMJ) of skeletal muscles (Balice-Gordon et al., 1990; Lømo, 2003). In some cases, the activity profile must occur prior to developmental time points, before the neural circuits become more hardwired. After such a critical period in synaptic formation, the circuit is not as dependent on activity for competition with other neurons for the establishment of connections. This fundamental phenomenon occurs in organisms from fruit flies to humans. It is known in mice that even after established connections are made in adults, the terminals at NMJs are not fixed to one spot on a muscle fiber: the motor nerve terminals grow out and pull back over time while continuing to communicate with the muscle fibers (Lichtman & Sanes, 2003).\n\nIf motor neurons that normally innervate a muscle are removed, then other motor neurons will take control of the target and innervate it. Thus, motor nerves are searching out targets not already committed by other synaptic inputs (Chang & Keshishian, 1996). This was examined in embryonic and larval Drosophila by laser ablating various body wall muscle fibers during development. Even pharmacologically activating or silencing neural circuits during development can have long term consequences in neural connections and overall physiological functions (Smith et al., 2015). For example, exposing rodents to nicotine during development changes the dendritic morphology within the CNS, which lasts into adulthood (McDonald et al., 2005). Even short exposures to nicotine in the juvenile stages have long lasting effects in these mice as adults (Ehlinger et al., 2014).
It is also established that collective synchronized synaptic activity is important for development of the neural structure (Winnubst et al., 2015). Thus, long term consequences in the established neural circuitry within the CNS and at the NMJ can occur based on neural activity when the initial circuits are being wired.\n\nA guided self-inquiry based approach to learning science has been demonstrated to be a very effective means for student learning in the long term (Bradforth et al., 2015; Waldrop, 2015). The engineering design with Arduino systems is a very engaging educational experience, which is sought after in many schools within the USA and abroad (see educational web pages: https://www.adafruit.com/educators; https://www.arduino.cc/en/Main/Education; Bender, 2012; Escudero et al., 2013; Junior et al., 2013; Maxwell & Meeden, 2000; Zalewski et al., 2014). The surge in the use of the Arduino system in high school and college teaching is partly due to the low cost and ease of writing code for operating the system. Students can design experiments with various computer codes to control the duration of the light on-off time period and the frequency of stimulation, to observe how activating or inhibiting a specific set of neurons alters development and behavior of Drosophila larvae or adults. The Arduino and associated required LED hardware are relatively inexpensive, <$20 USD for an individual unit; building a series of units that share one power supply makes additional units cheaper still. Class sets can be used in subsequent years, so an initial investment has a long term use. There are dozens of demonstration videos on YouTube for a wide variety of inventions and coding using Arduino.\n\nIn the educational module presented in this article, we demonstrate an approach with optogenetics to selectively activate the neurons synthesizing the neurotransmitters GABA, glutamate, serotonin, and acetylcholine.
The approach used to stimulate these selected neurons is to activate light-sensitive channels expressed in them. Different Drosophila lines are used for each type of neurotransmitter. The ability to control the stimulation with light is managed by an Arduino system or a simpler system with a 9 Volt battery and an LED source. Students can readily add single units or build parallel outputs with discrete parameters for controlling the LEDs. Thus, various parameters can be tested simultaneously in the same laboratory setting. Since many of the experimental paradigms presented by the module are novel, many questions remain unanswered in neurobiology, and students may uncover unique findings worthy of publication in scientific journals.\n\nThis educational module is also designed to embrace the Next Generation Science Standards (NGSS Lead States, 2013), through approaches scientists employ in the development of scientific knowledge. NGSS recommends that models be used in Developing, Evaluating, Using, and Revising explanations and predictions of science phenomena. The students will be able to construct models of neural circuits to explain the observed behavioral phenomena and make sense of what they observe. The direct real life examples concerning how neural circuits develop in oneself, as well as in other animals, are of general interest, but also have applied implications for medicine and health. The ability to manipulate various neurotransmitter systems and stimulation paradigms promotes experimental design and redesign based on the observed findings from each experiment. This is an integral aspect of the NGSS. The approach presented herein promotes explanations of the findings in order to set a new or altered stimulation paradigm, as the students continue to study a phenomenon in different contexts.
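The module has students write Arduino code to set the LED on-off durations and the frequency of stimulation. The timing arithmetic behind such a square-wave light stimulus can be sketched as follows (a Python illustration with hypothetical function names; the article itself does not provide code, and the returned millisecond values would map onto the delay values a student writes in an Arduino loop):

```python
def stimulation_schedule(frequency_hz: float, duty_cycle: float, total_s: float):
    """Return (on_ms, off_ms, n_pulses) for a square-wave LED stimulus.

    frequency_hz: pulses per second; duty_cycle: fraction of each period
    the LED is on; total_s: total stimulation time in seconds.
    """
    period_ms = 1000.0 / frequency_hz      # one on+off cycle, in ms
    on_ms = period_ms * duty_cycle         # LED-on portion of the cycle
    off_ms = period_ms - on_ms             # LED-off portion of the cycle
    n_pulses = int(total_s * frequency_hz) # pulses delivered in total_s
    return on_ms, off_ms, n_pulses
```

For example, a 5 Hz stimulus with a 50% duty cycle gives 100 ms on and 100 ms off per cycle; students can vary these parameters to test how stimulation frequency alters larval or adult behavior.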
In addition, this article discusses some of the techniques used in a trial of this module for sophomore high school students in Louisville (KY, USA), senior high school students in Somerset (KY, USA) and college level juniors and seniors at the University of Kentucky. (The outcomes of the trial are detailed further in the section Instructor feedback).\n\n\nModule overview\n\nSome of the experimental procedures require being able to make selective genetic crosses of two different Drosophila lines. To perform the crosses, it may be necessary to identify male and female adults and to be able to obtain virgin females (Figure 1). The instructors of the module can assess their resources (dissecting microscopes and time management of students) and decide whether to perform the crosses themselves, give the students time to make the crosses, or simply provide an explanation. As a learning experience, the teacher could allow the students to try these procedures, but have a cross already prepared for class use. A number of online resources are available to see the differences between male and female adult flies; the presence of a black tuft of hairs on the forelegs indicates a male fly (Figure 1). It is good to compare the flies side by side to tell the differences.\n\n(A) Morphological characteristics and sexual dimorphism of adult Drosophila melanogaster (lateral view). The adult female fly (top) has a light colored abdomen region, whereas the adult male fly (bottom) has a dark posterior abdomen region. (B) Morphological differences between male and female flies (ventral view). (C) Magnified view of the male fly foreleg shows the male-specific sex comb structure.\n\nThere are some procedures where the fly lines obtained can be directly examined without having to make filial 1 (F1) generations with selective crosses.
The line that expresses the light-activated channelrhodopsin-2 in motor neurons is OK371-Gal4;UAS-ChR2H134R-mcherry (homozygous line; there are two copies of each construct). This line is made by crossing w1118;P{GawB}VGlutOK371 (Bloomington Drosophila Stock Center at Indiana University (BDSC); catalog no., 26160) with w*; P{UAS-H134R-ChR2}2 (BDSC; catalog no., 28995; Pulver et al., 2011). When trialing this module, we used another recently created ChR2 line, which is very sensitive to light, called y1 w1118; PBac{UAS-ChR2.XXL}VK00018 (BDSC; catalog no., 58374; Dawydow et al., 2014). Virgin females from w*; P{UAS-H134R-ChR2}2 were crossed with males of D42-Gal4 (BDSC; catalog no., 8816), which also drives expression in motor neurons. Trh-Gal4 (BDSC; catalog no., 38389), Gad1-Gal4 (BDSC; catalog no., 51630), or ppk-Gal4 (BDSC; catalog no., 32078) were used to express the ChR2-XXL variant in serotonergic neurons, GABAergic neurons or type IV sensory neurons, respectively. In the trial module, we also used the UAS-H134R-ChR2;Trh-Gal4 (III) homozygous line, which was kindly provided by Dr. Andreas Schoofs (University of Bonn Life & Medical Sciences Institute (LIMES), Bonn, Germany; Schoofs et al., 2014), to compare behavioral effects with the more light sensitive ChR2 line. Table 1 outlines which crosses of flies can be used from these lines mentioned above for targeting the desired neuronal subtypes.\n\nw*; P{UAS-H134R-ChR2}2; Trh-Gal4 (homozygous line) ChR2 expressed in serotonergic neurons.\n\nThere is no need to make crosses as this line is homozygous.
The larvae or adults should be raised on food supplemented with all-trans-retinal (ATR), a cofactor essential for ChR2 function (unlike mammals, flies cannot synthesize a sufficient amount of ATR for ChR2 function), alongside a control group without ATR (use ethanol (EtOH) as a vehicle, since the ATR is dissolved in absolute ethanol).\n\nAll-trans-retinal is a cofactor for channelrhodopsin which increases the sensitivity to light and increases single channel conductance (Dawydow et al., 2014). ATR (500mg; available from Sigma-Aldrich, St. Louis, MO, USA) is dissolved in 17.6 ml absolute ethanol to make a 100mM stock solution. Then, 100µl aliquots of the 100mM stock solution are transferred to small tubes, wrapped with aluminum foil and kept in a -20°C freezer. The ATR should be kept away from light, since it is light sensitive; it will degrade and become ineffective if exposed to light for a long time.\n\nPreparation of fly food supplemented with ATR. In order to prepare fly food supplemented with 1mM ATR, 10ml of fly food is melted in the microwave. The food is left to cool, then 100µl of 100mM ATR is mixed well with the fly food, or 100µl of absolute ethanol is mixed with the food as a control. The food vial should be wrapped in aluminum foil and the food left until well solidified (flies may stick to wet food). The larvae or adult flies for whichever experimental lines are to be tested are then transferred from their vial to an ATR-food-containing vial and are kept in a dark place (to prevent ATR degradation) at room temperature (22-23°C).\n\nLocomotion behavior of larvae is assessed by placing a single larva on an apple-juice agar plate. The larva is left for one minute to acclimate to the new environment. Having the room lights off or very dim while the students work might be difficult to achieve in some classrooms. The body wall contractions (BWCs) are counted for one minute (BWCs/min) while the larva is exposed to regular white light.
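The stock and working concentrations described above can be double-checked with simple molarity arithmetic. A minimal sketch, assuming a molar mass of ~284.44 g/mol for all-trans-retinal (C20H28O; this value is not stated in the text) and a final food volume of ~10.1 ml after the 100µl of stock is added:

```python
ATR_MOLAR_MASS = 284.44  # g/mol for all-trans-retinal (C20H28O); assumed value

def molarity_mM(mass_g: float, volume_ml: float) -> float:
    # Concentration in millimolar: (moles of solute) / (litres of solvent)
    moles = mass_g / ATR_MOLAR_MASS
    litres = volume_ml / 1000.0
    return moles / litres * 1000.0

def dilution_mM(stock_mM: float, stock_ul: float, final_ml: float) -> float:
    # C1*V1 = C2*V2 for adding ATR stock to melted fly food
    return stock_mM * (stock_ul / 1000.0) / final_ml

stock = molarity_mM(0.5, 17.6)           # 500 mg in 17.6 ml ethanol -> ~100 mM
working = dilution_mM(stock, 100, 10.1)  # 100 µl stock into ~10 ml food -> ~1 mM
```

This confirms the protocol's round numbers: the 500 mg/17.6 ml stock comes out at ~100 mM, and the 100µl-into-10ml step yields ~1 mM in the food.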
Then the body wall contractions are counted for one minute while the larva is exposed to blue light (470nm wavelength; a dispersed-soda-can device can be used, see Figure 2). Also, body wall contractions are counted while the larva is exposed to a focal blue light (a light focused through a microscope eyepiece with a mounted LED can be used, see Figure 2). This assay can be performed for first, second and third instar larvae. In the module trial, the typical behaviors of third instar larvae are shown in Figure 3 for flies fed and not fed ATR, as well as for dim white light, diffuse blue light delivered by a soda can set up and a focused blue light with a microscope eyepiece objective. Notice in Figure 3B3 the contracted larvae. The microscope eyepiece can be bought on Amazon.com as a 10X eyepiece; a wide base type is most useful, so the LED can fit inside. Table 2 provides a template in which students can record the type of behaviors observed with this experimental paradigm.\n\n(A) A blue light emitting diode (LED; wavelength = 470nm) is glued on a cooler plate with a temperature resistant glue. The LED light is connected to a 9V battery. Various intensities of LED light can be used by attaching the LED to (B) a microscope ocular lens (x10), which gives off high intensity light or (C) a soda can with the bottom removed and the LED placed through the top, which gives a low intensity diffuse light.\n\nOK371-Gal4 (Gal4 driver specific for motor neurons) is crossed with the UAS-ChR2H134R-mcherry Drosophila line (this line is homozygous for both Gal4 and UAS constructs). The progeny expresses ChR2 in motor neurons. (A) The larvae were raised in fly food which was not supplemented with all-trans-retinal (ATR), a cofactor important for ChR2 membrane integration and function. (A1) The body wall contractions (BWCs) are counted on an apple juice agar plate for 1min when the larva is exposed to regular light.
(A2) The larva is exposed to low intensity blue LED light (470nm) for 1min while the BWCs are counted. (A3) The crawling behavior of the larva is observed when it is exposed to intense blue light for 1min. (B) The larva was fed ATR (1mM), which was mixed with fly food. The body wall contractions are counted when the larva is being exposed to regular light (B1), low intensity blue light (B2), or high intensity blue light (B3). The larva does not respond to the low intensity light, although when it is being exposed to high intensity blue light, the body wall muscles contract, which can be observed by the shortened body length (B3).\n\nRolling behavior in larvae. Assessing rolling behavior is performed by placing a single larva on the surface of an apple-juice agar plate. The occurrence of rolling behavior can be counted for the 1st and 2nd minute. The percentage of larvae that show rolling behavior can be presented in graphical form, as shown in Figure 4 (module trial results), for ChR2 being expressed in type IV sensory neurons in third instar larvae and stimulated with blue light. The fly lines crossed for this experiment are the UAS-ChR2-XXL and ppk-Gal4, and the food was without ATR or supplemented with ATR (1mM). A sample size of 20 larvae was tested for this data set.\n\n(A) Shows the occurrence of rolling behavior in the 1st and 2nd minute of light exposure (normal scope light). (B) Most of the larvae showed rolling behavior when they were exposed to light.\n\nThe participants can fill in a data table such as the one presented.\n\nCoded behavior can be used. Type of behavior coding used: continue crawling forward (CC), crawling backward (CB), stop (S), head wagging (HW), rolling (R), keeps turning left or right while crawling (T).\n\nFor adult behaviors, leftover larvae from conducting the larval behaviors can be used; the 1st crosses should be saved and grown to adults. Thus, the differences between the larval and the adult lines can be compared with the same crosses.
Also, if ATR-tainted food from the larval assays is saved, this can be used to feed the adults. The adults should be a few days old before conducting the following behavioral experiments, to ensure they have built back up the levels of ATR in the body. There are a number of behavioral assays that are commonly used for adult Drosophila (Badre & Cooper, 2008; Nichols et al., 2012; http://www.sdbonline.org/sites/fly/aimain/6behavior.htm). For some of the assays, the separation of males and females should be considered, as there are differences in the size and weight of the adult flies. Also, as the adults age there may be differences in their behaviors.\n\nThe two commonly used behavioral assays that are relatively easy to implement, but informative for the biological concepts, are the negative geotaxic and phototaxic assays, which are described below. These assays can be expanded on for deeper investigation into the neurobiology of the flies. Also, these behavioral assays allow for data gathering, redesign and vivid discussion for inquiry based labs.\n\nNegative geotaxic assay. Adult flies aged 2–8 days can be anesthetized with CO2. Males and females are sorted and transferred into separate vials, containing food, in cohorts of 10–14 flies. The flies should be left to recover for 24h before running the experiments. A plastic vial (Drosophila culture cylindrical vial 1-1/4\" diameter x 4\" tall; http://www.enasco.com/product/SB11136M) can be marked at 8cm length, and the 10–14 fly cohort transferred to an empty marked vial. Another plastic vial can be placed on top of the marked one (Figure 5). The flies should be left for one minute. The vials can be tapped on a table to knock down the flies to the bottom of the tube. Then the number of flies that climb across the 8cm mark within 10sec can be recorded, as shown in Figure 5. This procedure can be repeated a few times, with tapping to knock the flies down to the bottom of the vial each time.
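The scoring for the negative geotaxic assay described above reduces to the fraction of a cohort crossing the 8cm mark within 10sec, averaged over the repeated taps. A minimal sketch with hypothetical counts (the helper names are illustrative, not from the article):

```python
def percent_crossed(crossed: int, cohort_size: int) -> float:
    # Percentage of the 10-14 fly cohort passing the 8 cm mark within 10 s
    return 100.0 * crossed / cohort_size

def mean_pass_rate(trials, cohort_size: int) -> float:
    # Average the pass rate across repeated knock-down trials of one cohort
    rates = [percent_crossed(c, cohort_size) for c in trials]
    return sum(rates) / len(rates)
```

For example, a cohort of 10 flies with 3, 4 and 5 flies crossing on three successive taps averages a 40% pass rate, which students can then compare between light conditions or fly lines.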
Table 3 provides a template in which students can record the data for this assay of adult behaviors.\n\n(A) Two plastic tubes are put together for this assay. 10–14 adult male or female flies are transferred to an empty plastic tube which is marked at 8cm length. A second plastic tube is put on the top of the first marked plastic tube and sealed with tape. (B1–B3) The tube is tapped until all the flies fall into the bottom of the first tube (B1). The flies start climbing up the walls of the plastic tubes. After 10 seconds, the number of flies that cross the 8cm red line is counted, which gives the percentage of flies that crossed the line in 10sec. (B1) 0%, (B2) 30%, (B3) 80% of the flies crossed the red line.\n\nDuring the module trial, we used flies that were expressing ChR2 in motor neurons and fed ATR 1mM (UAS-CHR2H134R-mcherry;OK371-Gal4). This particular line (ChR2H134R) is a strain where the protein (the channelrhodopsin) has been altered with different amino acids and is not as sensitive as the ChR2-XXL line. This line did not show a large difference from the 1st min of recovery time after the blue light was turned off to the 3rd minute of recovery time for the percentage of flies passing the 8 cm mark. The bars that are labeled ‘crawl’ represent the flies that are crawling around the bottom of the vial (Figure 6). Using the very sensitive strain of flies (ChR2-XXL), where the channel demonstrates high sensitivity to blue light, recovery took much longer than for the UAS-CHR2H134R-mcherry;OK371-Gal4 cross (Figure 7). Also, the UAS-ChR2-XXL/+;D42-Gal4/+ cross targeted motor neurons, which express D42. Paralyzed flies of this strain had not fully recovered even 14 minutes after the blue light exposure.\n\nThese flies are expressing ChR2 in motor neurons and also are fed ATR 1mM (UAS-CHR2H134R-mcherry;OK371-Gal4).
The blue light does not exert influence on the negative geotaxic assay, since the blue light cannot penetrate the thick, dark adult cuticle well.\n\n(A) The crawling and negative geotaxic behavior of adult female flies is decreased after a 25 second blue light (low intensity) exposure. After 14 minutes, the flies had regained their normal climbing ability. (B) The ability to crawl and climb was markedly compromised in adult male flies exposed to blue light for 25 seconds. Crawling ability was restored after 6min of paralysis, although climbing behavior went back to normal only after 12 minutes of paralysis. These flies were raised on food supplemented with ATR 1mM.\n\nExtra details on this behavioral assay are found in Ali et al. (2011). When using a similar assay, one can also measure the percentage of flies which start to crawl as an index. This can be tried with motor neuron drivers or other types of neuronal drivers. In the module trial, we used a subset of sensory neurons, referred to as type IV sensory or pickpocket neurons, which had ChR2-XXL expressed (Figure 8). It is obvious in the first experiment (by observing how many flies crawled or moved up the tube) that it was difficult for the flies to walk up the walls of the tube, but in subsequent experiments more were able to walk up the tube.\n\nAfter 5 sec blue light exposure, some of the flies were paralyzed for 1–2 seconds and then recovered well. In the first trial (T1) after blue light exposure, the flies do well in the climbing assay; in the second trial (T2) after blue light exposure, the flies climb to the middle of the bottom tube and then stop climbing further. They recover quickly in the following trials.\n\nPhototaxic assay. To conduct this assay, a device with a 25cm long plastic tube and a light source at one end in a dimly lit room is used to assess the phototaxic behavior of the adult flies. The tube is narrow enough not to allow the adults to fly, but only walk up the tube.
Also, a standard small LED maglight fits snugly in one end (Figure 9). The male or female flies can be anesthetized by a quick exposure to CO2 or by placing the vial in ice for 25–30sec. Individual flies are placed in each apparatus. The flies need to recover for at least 10min. Each apparatus with an individual fly should be positioned vertically and tapped until the fly falls to the bottom of the tube, which is closed by a rubber stopper. The time the fly crosses a 10cm line and a 20cm line can be recorded as a measure. This apparatus could be positioned horizontally or vertically, but vertical placement examines both geotaxis as well as light sensitivity. A sample table to enter student data is presented as Table 4.\n\nA single male or female fly is transferred into the tube. The tube is tapped until the fly falls to the bottom of the tube. The time that the fly takes to reach 10cm and 20cm is recorded.\n\n\nData collection and interpretation\n\nThe results from the various experiments can be tabulated or graphed in various ways, depending on the variables the instructor and students wish to investigate. Data that can be plotted over time, such as the time for the adults to cross the 10 and 20 cm lines, can be graphed using the free web-based graphing software Joinpoint (http://joinpoint.software.informer.com/), which allows students to work at home or at school. Also, graphing the values for the different experimental lines of flies allows for discussion of the data in relation to biological significance. For high school teachers focusing on the Next Generation Science Standards (NGSS Lead States, 2013), or college instructors wanting more of an inquiry based experience in real life topics for students, the exercises provided can be varied or expanded.\n\nFor example, different instar stages can be compared for a particular strain.
In the module trial, we used the ChR2 channels in motor neurons with a less sensitive strain (UAS-ChR2H134R-mcherry;OK371-Gal4), which showed different responses between the various instars, and measured body wall contractions for one minute (BWCs/min). These larvae were fed ATR 1mM (Figure 10). Also, central neurons that utilize different neurotransmitters can be examined for changes in larval, as well as adult, behaviors. We used a line that results in activation of the GABAergic neurons and measured locomotor activity in third instar larvae fed ATR (UAS-ChR2-XXL;Gad1-Gal4). The blue light stimulation resulted in a substantial decrease in body wall movements (Figure 11). This same line can be used for measuring adult behaviors and measuring climbing. A sample of such responses is shown in Figure 12. The adults were fed ATR (1mM) and they showed a reduced ability to climb.\n\nThe body wall contractions for one minute (BWCs/min) are counted while the larva is exposed to regular light, low intensity blue light or high intensity blue light. The data show that the first instar larvae do not respond well even to high intensity blue light.\n\n(A) The locomotor activity in larvae fed ethanol (vehicle) does not change with either normal light or blue light exposure. (B) However, when the larvae that were raised on food supplemented with ATR 1mM were exposed to white light, the locomotor activity significantly decreased. During the exposure to light, the larvae first started to contract, which was followed by body muscle relaxation.\n\nThe climbing ability was measured in three different trials before the blue light exposure. After 5sec blue light exposure, the climbing assay was performed in six different trials by knocking the flies down to the bottom of the vial without blue light exposure and measuring how many could crawl. ChR2-XXL activation significantly reduced climbing ability. Flies were fed food supplemented with ATR 1mM.
The climbing assay was carried out in a dimly lit room, since bright light might also activate ChR2 channels, which would make it difficult to perform the assay in a well-lit classroom.\n\nInstructors can also use published literature and standard textbooks to explain to students that locomotor behaviors are driven by motor neurons, which activate body wall muscles (Marieb & Hoehn, 2013; Sherwood, 2001). In addition, an instructor can use the illustrations in Figure 13 to examine how the electrical responses are monitored in the muscles and the effect of stimulating various types of neurons. There are also published studies and figures readily found on the web that highlight the various neuronal types within the CNS and ventral cord of larval, as well as adult, Drosophila, which contain different types of transmitters, such as serotonin (as shown in Figure 14). In the module trial, we have shown the effect of activating serotonin-producing neurons on body wall movements of larvae with a line less sensitive to blue light (Figure 14B; UAS-mCherry-ChR2 H134R; Trh-GAL4, homozygous for both constructs), as well as a line very sensitive to blue light (Figure 15; UAS-ChR2-XXL;Trh-Gal4). The two lines with different blue light sensitivity can also be examined as adults, as we demonstrate in Figure 16.\n\n(A) The nervous system of a Drosophila melanogaster third instar larva expressing green fluorescent protein (GFP) throughout the nervous system. (B) A dissected third instar larva shows the m6 muscle fibers and an intracellular microelectrode used to record excitatory postsynaptic potentials (EPSPs) while the ChR2 in motor neurons is activated by blue light exposure. (C1) Intracellular recording in an OK371-ChR2 minus ATR (CNS intact) third instar larva. Blue light (low intensity) exposure does not produce any postsynaptic responses in muscle fiber m6, since the larva was not fed ATR, which is a required supplement for the action of ChR2. 
(C2) Intracellular recording in an OK371-ChR2 plus 1 mM ATR (CNS intact) third instar larva. Blue light (low intensity) exposure produces responses in the m6 muscle fiber, presented as EPSPs. (D1) Intracellular recording from an m6 muscle fiber in an OK371-ChR2 minus ATR (CNS intact) third instar larva. Blue light (high intensity) exposure does not activate motor neurons. No EPSPs are seen in this trace, although miniature EPSPs are still present. (D2) The evoked response recorded in an OK371-ChR2 plus 1 mM ATR (CNS intact) third instar larva while it is exposed to blue light (high intensity).\n\n(A1) Central nervous system of a third instar larva. (A2) Serotonergic neurons expressing mCherry fluorescent protein (UAS-mCherry-ChR2 H134R; Trh-GAL4, homozygous for both constructs). (B) Activation of 5-HTergic neurons did not produce a significant effect on locomotor activity in third instar larvae.\n\nTo change serotonergic neuron activity, ChR2 was expressed in the serotonergic neuron population (UAS-ChR2-XXL;Trh-Gal4). Body wall contractions were counted in third instar larvae fed food supplemented with 1 mM ATR or ethanol (vehicle). When the larvae fed ATR (1 mM) were exposed to blue light (high intensity), locomotor activity was significantly compromised. However, when larvae fed food without ATR supplementation were exposed to blue light (high intensity), locomotor activity was not affected (as shown in Majeed et al., 2016).\n\nThe electrical activity in serotonergic neurons is increased by expressing ChR2. When adult flies were exposed to blue light (low intensity), climbing ability was significantly reduced. Both groups of flies (UAS-ChR2-XXL;Trh-Gal4), whether fed food supplemented with ATR (1 mM) or with ethanol (vehicle), were affected by the blue light exposure. 
However, blue light did not have an effect on the control lines (UAS-ChR2-XXL/+) (as shown in Majeed et al., 2016).\n\nIt may be confusing for students that stimulating sensory neurons can produce a behavior similar to that of stimulating motor neurons. Instructors should help students understand neural circuits and that activating sensory neurons can lead to motor neuron activation. Activating type IV sensory neurons with blue light while recording the motor output in muscles can help in the explanation. A representative intracellular recording in muscles during sensory neuron stimulation is shown and can be used for instructive purposes. This line, UAS-ChR2-XXL;ppk-Gal4, when supplemented with ATR (1 mM), produces robust responses in the muscle fibers (Figure 17).\n\nActivation of ChR2 in type IV sensory neurons causes motor neurons to fire action potentials, which in turn depolarize muscle fibers. The motor output (EPSP traces) is recorded while the third instar larva is exposed to various intensities of blue light (see shading of blue light as intensity).\n\nThere are many additional types of behavioral assays that students may develop and try out. Another fun larval behavior is one in which the larvae are lined up on an agar dish and exposed to blue or white light, and the dish is then placed in a dark spot or left exposed to dim room light. Larvae with ChR expressed in sensory or motor neurons demonstrate a paralyzed stance, which takes time to recover from, as we illustrated in the module trial (Figure 18). The recovery time can be observed from the movement away from the original line over time. Snapshots with a cell phone camera are an easy way for students to document movements over time.\n\nWhen the larvae were exposed to regular light, they were all contracted and did not move (n=30, 10 larvae per agar plate per condition). 10 third instar larvae were placed on an apple juice agar plate. The larvae were exposed to regular light for 2 hours. 
The larvae stayed in their location without any movement. Three different conditions were used to show how much time it would take for the larvae to start moving again after the 2 hour light exposure. The data show that it takes about 15 min for the larvae to restore their locomotor activity.\n\n\nElectronics and code overview for using the Arduino control\n\nThe use of the Arduino hardware and sample code for flashing the LED light on and off is provided in Supplementary File 1. If instructors or students were inclined to design experiments using automated light controls for longer-term studies of the effects of pulsing the lights on and off, this system is ideal, due to its low cost and the ease of programming various codes. We are now using the system to teach subsets of freshman biology majors concepts of integrating engineering design with biological application at the University of Kentucky in the Department of Biology.\n\n\nDiscussion\n\nThe exercises presented here promote investigation of practical neurobiological phenomena in relation to human disorders (Parkinson’s disease, stiff man syndrome, epilepsy, and autism), as well as promote discussion of potential medical interventions by pharmacological agents acting on these various neurotransmitter systems. An instructor might even have participants conduct a literature search and make predictions of the behavioral outcomes of stimulating particular subsets of neurotransmitter systems in larval and adult Drosophila before conducting the experiments on the flies. Establishing a conceptual model of the neurotransmitter and the neural circuits related to mammalian behavior, and then testing whether the model holds for Drosophila, is an important concept of the NGSS in the use of models and their redesign in light of observations (Krajcik & Merritt, 2012; NGSS Lead States, 2013). Titlow et al. (2015) and Pulver et al. 
(2011) used optogenetics and neurophysiological recordings with Drosophila for a college-level educational activity in a similar context, which focused on neural circuits and synaptic function. Furthermore, body wall movements and adult behaviors can be recorded with a webcam (for example, WEBCAM HD4110, Hewlett-Packard Company, Palo Alto, CA, USA) connected to a computer, at a rate of 25 frames per second, for analysis outside of class time. See Titlow et al. (2014) for details of recording and analyzing the captured data files.\n\n\nAnalysis of the learning and understanding of the module content\n\nModule instructors might wish to conduct pre- and post-assessment surveys, in which students provide their views of the exercises. The results of this brief survey are helpful for instructors to gauge what the students are likely to know before the module starts and what might be gained from these exercises, since the module is intended to be an educational activity with in-depth content. A pre-assessment given a day before the laboratory experiments and a post-assessment given a few days after the exercises would be informative for instructors.\n\nThe pre- and post-assessment questions in Supplementary File 2 could be useful to future instructors.\n\nIn both high school and college settings, a very similar PowerPoint introduction to the lab exercises can be shown to students. This introduction should be given after the pre-assessment survey, so that the presentation is part of the educational module. Instructors can decide for themselves whether to use the PowerPoint content provided or to modify it for the level of the participants. 
The PowerPoint presentation we used for the trial module in a high school and a college class is provided as Supplementary File 3.\n\n\nInstructor feedback after implementing the module\n\nA high school teacher, who has >10 years of experience teaching high school Introductory Biology, Anatomy and Physiology, and Advanced Biology and holds level one Biology certification, taught this module to about 30 sophomore high school students in Louisville (KY, USA). In addition, the teacher had received an MS in entomology prior to beginning a teaching career. These classes were 50 minutes in length, and each class was divided into groups to work on different subsets of the exercises. After data collection was completed, the various groups shared their results with their classmates. Groups consisted of pairs of students, and each group was given a different line of flies. Some were provided with the sensory lines, while others received the motor drivers or serotonin lines. Each group conducted a larval body wall movement assay and an adult climbing assay. A second cohort of 15 senior high school students in Somerset (KY, USA) was introduced to this educational module. The high school instructors were pleased to expose students to the molecular biology of how the fly lines were produced with genetic manipulation, to address neural circuits, and to have the students perform various behavioral assays while collecting data that was later graphed and discussed as a class. 
The teachers integrated this content along with teaching the nervous system, which was part of the normal curriculum.\n\nThe high school teacher’s comments were:\n\n“Sometimes it is hard to focus the light on the larvae crawling on the dish.”\n\n“Having the room lights off or very dim while the students work might be difficult to achieve in some classrooms.”\n\n“Students might not understand the physiological concepts of how nerves and muscles work until they [have] covered this concept in a biology class.”\n\nThese modules were also taught at the beginning of the school year in a college-level biology majors class of juniors and seniors with about 120 students. Implementing this module for a senior college-level biology class, which has a laboratory component, provided a different perspective than for high school students. The teaching instructors had a three-hour period in one setting to explain and conduct the experiments. An aggregate of the college instructors’ comments was:\n\n“A three hour lab is just about right for this series of experiments if various groups work on different fly lines.”\n\n\nConclusions\n\nIn summary, the presented exercises have been beta tested with students at different educational levels, and these students appear to be learning novel content and to have an interest in learning. However, we have not quantified student learning assessments beyond casual discussion with students. The instructors have provided informative feedback after implementing the activities, so modifications can be made for future classes, such as addressing whether the room can be made dark enough to conduct the assays and whether dissecting scopes are available for students to examine the 1st and 2nd instar larvae. 
The topics presented are rich in physiological history and show how the current state of biotechnology, engineering and science has merged into the ability to control the development of defined neural circuits that regulate animal behavior. The future applications for human disease states are only now being probed with the technology of optogenetics, so these exercises should be exciting to students and teachers if, by embracing the content presented, they are made aware of the beauty of integrating computer coding and biotechnology and of the implications for neurobiology.\n\n\nData availability\n\nDataset 1: Raw data for all module trial results. SigmaPlot version 13 files for Figure 4, Figure 6–Figure 8, Figure 10–Figure 12, Figure 14–Figure 16. doi: 10.5256/f1000research.10632.d150960 (Majeed, 2017).", "appendix": "Author contributions\n\n\n\nZM, FK, RLC conceived the study. ZM, FK, JM, HA, JW, RLC designed the experiments. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nZRM was supported by a Higher Committee for Education Development (HCED) scholarship, Iraq. FK was supported by the Deutscher Akademischer Austausch Dienst (DAAD; German Academic Exchange Service) RISE Program (Research Internships in Science and Engineering). 
Part funded by the Kentucky Science and Engineering Foundation (KSEF-3712-RDE-019) at the Kentucky Science and Technology Corporation (RLC) and personal funds supplied by RLC.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: Use of the Arduino hardware and sample codes for flashing LED lights.\n\nSupplementary File 2: Pre- and post-test educational sample assessment.\n\nSupplementary File 3: PowerPoint presentation used to introduce the exercises for the participating classes. The content can readily be modified by the instructor according to the level of instruction required.\n\n\nReferences\n\nAli YO, Escala W, Ruan K, et al.: Assaying locomotor, learning, and memory deficits in Drosophila models of neurodegeneration. J Vis Exp. 2011; (49): pii: 2504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBadre NH, Cooper RL: Reduced calcium channel function in Drosophila disrupts associative learning in larva, and behavior in adults. International Journal of Zoological Research. 2008; 4(3): 152–164. Publisher Full Text\n\nBalice-Gordon RJ, Breedlove SM, Bernstein S, et al.: Neuromuscular junctions shrink and expand as muscle fiber size is manipulated: in vivo observations in the androgen-sensitive bulbocavernosus muscle of mice. J Neurosci. 1990; 10(8): 2660–2671. PubMed Abstract\n\nBender KK: Arduino Based Projects in the Computer Science Capstone Course. Journal of Computer Sciences in Colleges. 2012; 27(5): 152–157. Reference Source\n\nBradforth SE, Miller ER, Dichtel WR, et al.: University learning: Improve undergraduate science education. Nature. 2015; 523(7560): 283–284. 
PubMed Abstract | Publisher Full Text\n\nChang TN, Keshishian H: Laser ablation of Drosophila embryonic motoneurons causes ectopic innervation of target muscle fibers. J Neurosci. 1996; 16(18): 5715–5726. PubMed Abstract\n\nDawydow A, Gueta R, Ljaschenko D, et al.: Channelrhodopsin-2-XXL, a powerful optogenetic tool for low-light applications. Proc Natl Acad Sci U S A. 2014; 111(38): 13972–13977. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEhlinger DG, Bergstrom HC, Burke JC, et al.: Rapidly emerging adolescent-nicotine induced dendritic remodeling is D1-dopamine receptor dependent. Brain Structure and Function. 2014. Reference Source\n\nEscudero MA, Hierro CM, Pérez de Madrid y Pablo A, et al.: Using Arduino to enhance computer programming courses in science and engineering. Proceedings of the EDULEARN13 Conference. 2013; 5127–5133. Reference Source\n\nHubel DH, Wiesel TN: The period of susceptibility to the physiological effects of unilateral eye closure in kittens. J Physiol. 1970; 206(2): 419–436. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJunior LA, Neto OT, Hernandez MF, et al.: A low-cost and simple arduino-based educational robotics kit. Journal of Selected Areas in Robotics and Control (JSRC). 2013; 3(12): 1–7. Reference Source\n\nKrajcik J, Merritt J: Engaging students in scientific practices: What does constructing and revising models look like in the science classroom? Science and Children. 2012; 49: 10–13. Reference Source\n\nLichtman JW, Sanes JR: Watching the neuromuscular junction. J Neurocytol. 2003; 32(5–8): 767–775. PubMed Abstract | Publisher Full Text\n\nLømo T: What controls the position, number, size, and distribution of neuromuscular junctions on rat muscle fibers? J Neurocytol. 2003; 32(5–8): 835–848. PubMed Abstract | Publisher Full Text\n\nMajeed ZR, Abdeljaber E, Soveland R, et al.: Modulatory Action by the Serotonergic System: Behavior and Neurophysiology in Drosophila melanogaster. Neural Plast. 
2016; 2016: 7291438. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMajeed Z, Koch F, Morgan J, et al.: Dataset 1 In: A novel educational module to teach neural circuits for college and high school students: NGSS-neurons, genetics, and selective stimulations. F1000Research. 2017. Data Source\n\nMarieb EN, Hoehn KN: Human Anatomy & Physiology. 9th Edition, USA: Pearson, ISBN13: 9780321743268. 2013. Reference Source\n\nMaxwell BA, Meeden LA: Integrating robotics research with undergraduate education. Intell Syst App IEEE. 2000; 15(6): 22–27. Publisher Full Text\n\nMcDonald CG, Dailey VK, Bergstrom HC, et al.: Periadolescent nicotine administration produces enduring changes in dendritic morphology of medium spiny neurons from nucleus accumbens. Neurosci Lett. 2005; 385(2): 163–167. PubMed Abstract | Publisher Full Text\n\nNational Research Council: Inquiry and the National Science Education Standards: A Guide for Teaching and Learning. Washington, DC: National Academy Press, 2000. Publisher Full Text\n\nNGSS Lead States: Next generation science standards: For states, by states. Washington, DC: The National Academies Press, 2013. Publisher Full Text\n\nNichols CD, Becnel J, Pandey UB: Methods to assay Drosophila behavior. J Vis Exp. 2012; (61): pii: 3795. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPulver SR, Hornstein NJ, Land BL, et al.: Optogenetics in the teaching laboratory: using channelrhodopsin-2 to study the neural basis of behavior and synaptic physiology in Drosophila. Adv Physiol Educ. 2011; 35(1): 82–91.\n\nSchoofs A, Hückesfeld S, Surendran S, et al.: Serotonergic pathways in the Drosophila larval enteric nervous system. J Insect Physiol. 2014; 69: 118–125. PubMed Abstract | Publisher Full Text\n\nSherwood L: Human Physiology: From Cells to Systems. (4th ed.), Pacific Grove, CA: Brooks-Cole. ISBN 0-534-37254-6. 2001. Reference Source\n\nSmith RF, McDonald CG, Bergstrom HC, et al.: Adolescent nicotine induces persisting changes in development of neural connectivity. Neurosci Biobehav Rev. 2015; 55: 432–443. 
PubMed Abstract | Publisher Full Text\n\nTitlow JS, Anderson H, Cooper RL: Lights and Larvae: Using optogenetics to teach recombinant DNA and neurobiology. The Science Teacher National Science Teacher Association. 2014; 81: 3–9. Reference Source\n\nTitlow JS, Johnson BR, Pulver SR: Light Activated Escape Circuits: A Behavior and Neurophysiology Lab Module using Drosophila Optogenetics. J Undergrad Neurosci Educ. 2015; 13(3): A166–173. PubMed Abstract | Free Full Text\n\nWaldrop MM: Why we are teaching science wrong, and how to make it right. Nature. 2015; 523(7560): 272–274. PubMed Abstract | Publisher Full Text\n\nWinnubst J, Cheyne JE, Niculescu D, et al.: Spontaneous Activity Drives Local Synaptic Plasticity In Vivo. Neuron. 2015; 87(2): 399–410. PubMed Abstract | Publisher Full Text\n\nZalewski J, Gonzalez F, Kenny R: Small is beautiful: embedded systems projects in an undergraduate software engineering program. Annals of Computer Science and Information Systems. 2014; 4: 35–41. Publisher Full Text" }
[ { "id": "20045", "date": "13 Feb 2017", "name": "William Grisham", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article describes an educational module that incorporates the utilization of sophisticated optogenetic techniques but employs fairly low-cost, easily obtainable subjects and apparatus. This article makes the exercises described very accessible to institutions of almost every size and budget.\n\nAbsolutely stunning details are provided from fly feeding, care, and sexing to constructing the apparatus and Arduino codes. These details not only should allow reconstruction of the experience at various high schools, colleges, and universities but also should allow students and instructors to go further and design their own questions and experiments.\n\nSupplementary materials, including PowerPoint slides and a pre/post evaluation instrument, are provided for instructors to use. No data from this pre/post instrument are presented in the article, however, which is the article’s biggest weakness. Rather, the pedagogical data provided consist of a series of qualitative statements, mostly from high school instructors.\n\nThe introduction needs to be re-written. It should more explicitly relate issues raised about the development of sensory and motor systems to the Drosophila literature rather than to the mammalian literature. 
Stylistic alterations, such as avoiding passive voice and multiple clauses, would render it more readable and intelligible. Similarly, the first sentence in the discussion seems inappropriate, considering the actual scope of the model.\n\nSome figures need to be re-worked to be more informative. The rolling behavior data displayed (Figure 4) would make a more powerful point if there was some comparison with different lighting conditions or comparing different lines of flies with different constructs. Similarly, Figure 7 would make its point more forcefully if two conditions were displayed on a single graph, say with and without light exposure, such as in Figures 8 and 12.", "responses": [] }, { "id": "20049", "date": "02 Mar 2017", "name": "James E. Jepson", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis Research Note by Majeed et al. is a comprehensive educational module for high school and college students, focusing on optogenetic modulation of Drosophila locomotor behaviour (either larval or adult) using a combination of channel-rhodopsin effectors and neuron-specific driver lines.\nThe methods and results are clearly presented, detailed and economically viable to initiate. Teaching students using this approach will expose them to the scientific method and directly observe a link between neuroscience theory and its application. 
As such, I support this module, but have several suggestions below for how it might be improved:\nThe introduction requires re-writing. My main critique is that the topics discussed in the introduction do not match the experiments in the main text. All experiments focus on acute modulation of behaviour by activating subsets of the fly's neuronal repertoire. However, the introduction focuses more on the link between activity and neuronal development. However, neurodevelopment in Drosophila (e.g studying synaptic growth at the larval NMJ) is not a component of this module. It would be more appropriate to discuss how different circuits in the mammalian and fly brain regulate specific behavioural patterns, and how activity within these circuits can be manipulated by researchers using opsins etc. The sentence '...GABA (glutamate, serotonin, acetylcholine)...' in paragraph 5 might lead students to think that GABA is an acronym for 'glutamate, serotonin, acetylcholine' - this should be re-worded. The grammar in the first paragraph could also be improved.\n\nThe methods related to fly strains and crosses could be expanded. For example, details of the fly life cycle would be helpful, and to stress the temperature-dependence of the generation time. If an instructor or a student is setting up crosses, they need to know when the progeny will be ready to experiment on: this in turn will depend on how the flies are housed. For sexing, while testing for the presence of sex-combs is indeed reliable, memorising the differences in male/female abdominal anatomy is a much more rapid method. Will the students/examiners know the difference between a virgin and non-virgin female? Methods to discern this could be described, or at least referenced. The fly food recipe could be referenced too. 
Finally, two of the transgenic fly strains (58374 and 51630) have the CyO balancer floating, so it may be useful to mention that these need to be used as non-curly homozygotes for larval experiments when setting up crosses.\n\nThe authors give some nice examples of electrophysiological recordings from muscles during optogenetic stimulation of motor- or ppk-neurons. Are these recordings from their lab or from the literature (if so, a reference would be appropriate)? More importantly, as mentioned by the high school teacher, these results might be difficult for high school students to understand if they have not been taught the basics of neurotransmission. Could the authors more clearly delineate which data are more appropriate for high school vs. college students?\n\nIn the legend of Figure 8, the authors state that type VI sensory neurons were studied, but I think they mean type IV.\n\nIn Figure 13B, the schematic shows the stimulation pipette injecting current into the CNS, whereas it should be the other way round: the segmental nerve should be innervating the muscle and have the electrode attached at the proximal end relative to the CNS.", "responses": [] }, { "id": "20684", "date": "03 Mar 2017", "name": "Michele L. 
Lemons", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis manuscript describes several transgenic fly strains and numerous behavioural assays that can be used to challenge students to understand connections between cellular activity and behaviour. The strengths of this approach are numerous and include the ability to observe rapid changes in fly behaviour following exposure to blue light. Thus, students are able to observe changes in behaviour within a single lab session or class. Another advantage of this approach includes the relatively low cost associated with conducting the proposed experiments. The authors describe how to make low-cost behavioural instrumentation, which is useful. In addition to these advantages, I have several questions and concerns, which are listed below.\n\nTitle: In the title of the manuscript, the phrase “NGSS-neurons” is unclear. It could give the impression that NGSS is a type of a neuron. The authors state in the manuscript that NGSS stands for Next Generation Science Standards.\n\nArticle content: This manuscript mentions several fly strains and numerous behavioural techniques that could be performed under a variety of conditions (e.g. focused light, diffused light, pulsing light patterns) on various ages of flies. In addition, the strains listed have various subsets of neurons that are specifically stimulated by blue light, adding another possible variable. 
The combination of possible experiments seems nearly endless, which is exciting. While the number of possible experiments is impressive, it can also be confusing, particularly from the perspective of someone who is not familiar with Drosophila. Considering this is a pedagogical-themed paper, it would be very helpful if: 1) the manuscript could be re-organized with this comment in mind and 2) at least one or two specific lesson plans would be provided. The lesson plans could give the reader an example of a specific subset of techniques that could be used during one, three-hour undergraduate lab, for example. The authors state that experiments have been executed in an undergraduate teaching lab and in a high school setting. It would be extremely beneficial to share hand outs used during these lessons and/or instructors’ notes. Similar to other pedagogical papers, it would also be helpful if authors include specific learning objectives in the body of the manuscript. In this regard, some of the points addressed in the PowerPoint slides could be helpful if more detail was provided.\n\nIn the introduction, it would be very helpful to include: 1) a description of the subsets of neurons (e.g. type IV sensory neurons/pickpocket neurons, GABAergic neurons, etc..) that are activated in these proposed experiments, 2) diagram/explanation of the circuits that are tested in these experiments and how activation of these circuits induces behavioural changes and 3) explanation of optogenetics with a focus on the ChR2 model used in this manuscript. Data in Figure 17 could be used to demonstrate the ability of blue light to cause EPSP in specific subsets of neurons. The reader would benefit from learning about the neurons that are excited by blue light, where these neurons project and how ChR2 works in these fly strains. Some information that is currently in the introduction could be removed.\n\nPlease provide additional explanation regarding tables, such as Table 2. 
There are two main points that could be clarified. First, why is “soda can” written three times in the table? What is the difference between “soda can” and the term “motor neuron” towards the top of Table 2 and the term “soda can” and “motor neurons” written approximately in the middle of Table 2? Similar questions arise for “focused light.” I understand that a soda can is used with an LED light to generate “unfocused light” while an eye piece of a microscope is used to generate “focused light.” However, I don’t understand why these terms appear multiple times in the tables, with similar subheadings, such as motor neurons. Secondly, it is not clear how students would complete such a table and how this data would be graphed and interpreted. For example, does a student write “CC” on the Table each time “continued crawling forward” behaviour is seen? Or is it a binary reporting system where one reports CC as occurring or not? What if multiple behaviours are seen? Are the numbers of behaviours recorded? It is not clear how data in Table 2 would be graphed, displayed or analyzed. The authors state the data could be graphed similar to that shown in Figure 4, but it is not clear where “CC” and other codes are represented in Figure 4. Is “CC” a kind of rolling behaviour? How was rolling behaviour occurrence calculated in Figure 4A? Please provide additional details. Similar questions apply to Tables 3 and 4.\n\nData in Figure 4 reveal behaviours of transgenic flies subjected to blue light. How do flies without blue light or without ATR behave? Controls would clarify the impact of activating type IV sensory neurons on rolling behaviour. Similar questions regarding the need for controls arise for data in other Figures (such as 6, 7, 12 and 18). It would be helpful to use and explain the importance of controls and the variety of controls available (e.g. 
absence of ATR, vehicle (ethanol) only, absence of blue light without ATR, absence of blue light with ATR, etc.) in these experiments.\n\nThe authors mention that the Arduino system can be used to control the stimulation of light. A supplemental file with sample codes is provided under supplemental material. Please provide at least one example of how the Arduino system could be used in any of the proposed techniques. What are examples of light stimulation patterns that produce distinct behaviours compared to a single light exposure? Which of the sample codes would generate distinctive rolling behaviour, for example?\n\nThis manuscript could be significantly enhanced by including pre- and post-assessments of students. An assessment file is provided as a supplemental document, but was not used. In the discussion the authors state that students appear to be learning novel content, but this is not documented. An assessment piece would confirm the ability of this module to enhance student learning. The feedback from instructors was very limited; additional feedback would prove helpful.\n\nOverall, the manuscript could be improved by writing in an active voice and fixing several typos and grammatical errors.\n\nMinor points: One example of a typo appears on page 3: “neurotransmitter GABA (glutamate, serotonin and acetylcholine).” Why are three neurotransmitters listed in brackets after the term “GABA”? Do the authors mean to write neurotransmitters such as GABA, glutamate, serotonin and acetylcholine?\n\nIn Table 2, the terms “focuses light” and “focused light” are both used. Perhaps the “s” in “focuses light” should be a “d”? The same is true for other tables. In Figure 3, the term “foe” is written instead of the word “for”. There are additional typos in the manuscript; please address those.\n\nIn the module overview, the description of: 1) all-trans-retinal preparation and 2) the preparation of the fly food supplemented with ATR, is very clear.
However, the level of detail regarding the suggested genetic crosses should be elevated to match the level of detail of the other techniques/procedures. References should be cited to document methods.\n\nUnder “larval locomotion behaviour”, the authors mention that asking students to conduct tests in dimly lit or dark rooms could be challenging. This seems to stem from a limited set of responses from teachers. Please explain how this challenge could be addressed in the classroom, to help teachers who wish to use this module.\n\nPlease clarify how the % of movement was calculated in Figures 6 and 7. Is this the % of larvae that were moving?\n\nPlease clarify at what time points the images in Figure 18 were taken. The figure caption does not specify time points across the three images.\n\nShould all BWC data be displayed in sequential figures? For example, BWC data are shown in Figures 3, 10, 11, 14 and 15. It could be advantageous to present BWC data in sequential figures rather than in five figures spread throughout the paper. Please comment.\n\nFigure 17 demonstrates that blue light induces EPSPs in type IV sensory neurons expressing ChR2. This is convincing data that demonstrates the power of optogenetics and, in particular, the efficacy of ChR2 in these sensory neurons. It could be powerful to show this at the beginning of the manuscript, to document that the blue light is causing EPSPs within the neurons, as predicted. This is the basis of why blue light induces a behavioural change in these flies.", "responses": [] } ]
1
https://f1000research.com/articles/6-117
https://f1000research.com/articles/6-116/v1
08 Feb 17
{ "type": "Research Article", "title": "Examination on Brain Training Method: Effects of n-back task and dual-task", "authors": [ "Kazue Sawami", "Yukari Katahata", "Chizuko Suishu", "Tomiko Kamiyoshikawa", "Emi Fujita", "Mika Uraoka", "Hiroko Nishikawa" ], "abstract": "Background: Alzheimer's dementia (AD) is the most common dementia, accounting for more than 60% of all dementia cases. For adults aged >65 years, the incidence rate doubles for every 5 years of increased age; therefore, preserving cognitive function is a pressing issue. Thus, our team screens for AD in older adults with mild cognitive impairment, at 11 public halls in Kashihara City, Japan, and offers follow-up to those with cognitive difficulties. The purpose of this research was to measure the effects of two interventions, a dual-task (requiring the participant to perform two tasks at the same time) and an n-back task (a test of memory retention, requiring the participant to identify the item occupying the nth-back position in a sequence of items). A comparison group performed a single learning task in place of the dual-task.  Moreover, the majority of non-drug therapies for the maintenance of cognitive function help promote a positive mood, activating reward systems in the brain and motivating the individual to continue the task. Therefore, the correlation between cognitive function and positive and negative mood was investigated.  Methods: Dual- and n-back-task (n = 304) and single-task (n = 78) groups were compared in a 6-month intervention. Salivary α-amylase concentration, which reflects positive and negative mood, was measured, and correlations with cognitive function were analyzed.  Results: Cognitive function improved in both the dual-task and the single-task groups, and more cognitive domains improved in the dual-task group. 
A correlation between salivary α-amylase and cognitive function was found, indicating that a greater positive mood was associated with greater cognitive function. Conclusion: The results of this research show that functional decline can be improved by a cognitive intervention. Positive mood and cognitive function were correlated, suggesting that encouraging comfort in the participant can increase the effectiveness of the intervention.", "keywords": [ "MCI", "MoCA test", "dual-task", "n-back task", "positive feeling" ], "content": "Introduction\n\nThe Ministry of Health, Labour and Welfare estimates that 15% of the Japanese population aged >65 years have dementia, i.e., a total of approximately 4.62 million individuals (estimated in 2012)1. Alzheimer's disease (AD) is the most common type of dementia and accounts for 60% or more of dementia cases; vascular dementia is the second most prevalent type, accounting for 15–20% of cases2. However, many individuals have a mixed dementia type, with lesions characteristic of both AD and vascular dementia3. The prevalence of AD has increased year by year4, and among adults >65 years of age, the incidence rate doubles with every 5 years of increased age5. This makes the construction of countermeasures an urgent issue. Notably, although the amyloid vaccine was developed in 2000, it could not suppress the decline in cognitive function, even after amyloid β was removed from the brain6.\n\nSince taking preventive measures before the onset of dementia is important, a worldwide project, the Alzheimer's Disease Neuroimaging Initiative (ADNI; http://www.adni-info.org/), has been implemented to establish a test that can indicate AD before onset, and therefore offer an opportunity for intervention to stop the progression of disease7. 
J-ADNI was also launched in Japan in 2007 (http://www.alz.org/research/funding/partnerships/WW-ADNI_japan.asp), though regional efforts in Japan are still at an early stage.\n\nWith this in mind, we started a brain training preventive intervention for AD in Kashihara City (Nara, Japan), at all 11 of the city’s public halls. This initiative is a joint project between Nara Medical University and the dementia prevention project of the Kashihara City Council of Social Welfare, and screens individuals for Mild Cognitive Impairment (MCI), with follow-up for those meeting criteria for an MCI diagnosis. Screening is conducted twice a year, using the Montreal Cognitive Assessment (MoCA), a screening scale for MCI (http://www.mocatest.org/). Public health nurses conduct follow-up visits to participants with low scores. In addition, a brain training class is held once a month, and prevention interventions and cognitive function evaluations are continuously conducted.\n\nThe monthly brain training class aims to inform participants how to take precautions against risk factors for AD, and to transform their lifestyles. The targeted risk factors for AD comprise the following: high blood pressure, obesity, smoking, dyslipidemia, lifestyle-related diseases such as diabetes, and complications of these8. It is reported that the probability of developing AD in the future is 2 times as high in people with hypertension, 2.1 times in those who are obese (with a BMI of 30 or more), 1.8 times in those who smoke, 2.9 times in those with dyslipidemia (total cholesterol 250 mg/dl or more), and 4.6 times in people with diabetes (HbA1c, 7% or higher)9. Furthermore, in the brains of people with AD, there is an increased quantity of oxidatively modified products10. Therefore, improving diet is an important part of disease prevention.\n\nAerobic exercise is also essential in preventing AD. 
For instance, it has been reported that participation in continuous aerobic exercise is linked to increases in brain-derived neurotrophic factor (BDNF) and increases in the size of the hippocampus11. In addition, brain training has shown some positive effects on the brain: the n-back task (a test of memory retention, requiring the participant to identify the item occupying the nth-back position in a sequence of items) has been validated as an effective brain training task, and a meta-analysis indicates that this task is associated with activation of the frontal and parietal cortices12. Furthermore, it has been reported that, compared to a single task (such as exercise or learning only), a dual-task (requiring the participant to perform two activities at the same time) generates more activation in the brain, in particular in the prefrontal cortex13,14.\n\nDrawing on the above-mentioned previous studies, in this study we considered that an intervention combining healthy-eating habit guidance, aerobic exercise, an n-back task and a dual-task might be effective. In addition, the majority of non-drug therapies for the maintenance of cognitive function, such as music therapy, promote feelings of comfort. These feelings of comfort activate reward systems in the brain, motivating the individual to continue the task in which they are engaged. For instance, in a study comparing the effects of negative and positive feeling, positive feeling increased an individual’s repertoire of thinking, behavior and attention, whereas negative feeling reduced an individual’s repertoire of thinking and behavior15. In addition, it is reported that improvement in self-efficacy is related to memory improvement16. We therefore thought it important that the intervention assessed herein improve positive mood and feelings of comfort, and so we decided to include physical recreation in this intervention. 
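As a rough illustration of the n-back concept described above, the scoring of a single n-back run might be sketched as follows (a minimal Python sketch; the function name and scoring scheme are illustrative and not taken from the study):

```python
def score_n_back(sequence, responses, n):
    """Score an n-back run.

    A position i (i >= n) is a target when sequence[i] == sequence[i - n].
    `responses` is the set of positions at which the subject signalled a match;
    responses at target positions count as hits, all others as false alarms.
    """
    targets = {i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]}
    hits = sum(1 for i in responses if i in targets)
    false_alarms = sum(1 for i in responses if i not in targets)
    return {"targets": len(targets), "hits": hits, "false_alarms": false_alarms}

# Example: a 2-back run over a letter stream; positions 2, 3 and 5 are targets.
stream = ["A", "B", "A", "B", "C", "B"]
print(score_n_back(stream, responses=[2, 3, 5], n=2))
# → {'targets': 3, 'hits': 3, 'false_alarms': 0}
```

Increasing `n` raises the working-memory load, which is the sense in which the task difficulty was increased during the intervention.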
The full intervention consisted of diet guidance, recreation, exercise, an n-back task, and a dual-task. The purpose of this study was to measure the effect of this intervention on improving cognitive function, and to clarify whether there is a correlation between cognitive function and positive mood.\n\n\nMethods\n\nA total of 382 adults aged 65 years or older, who volunteered for the intervention, participated in the present study between June 2015 and June 2016. We distributed public information to all houses in Kashihara City to invite participants. The participants were split into two groups: 304 people in a dual-task group, and 78 people in a single-task group.\n\nThe dual-task group focused on dietary guidance and recreation, combined with exercise, the n-back task and the dual-task. The single-task group performed learning tasks only.\n\nWe explained the contents of the intervention program and divided participants into those who chose the composite tasks combining the dual-task and the n-back task, and those who chose the single learning task alone. Because participants chose the program themselves, the groups were unbalanced in size.\n\nThe only exclusion criterion for the intervention was inability to move independently; all participants who volunteered met this requirement.\n\nThe intervention took place once a month; cognitive function was evaluated before the intervention and every six months thereafter. Stress checks were carried out using salivary amylase before the intervention.\n\nIntervention method. The cognitive prevention class was held once a month and focused on diet guidance and recreation, combined with exercise, the n-back task, and the dual-task. Subjects received 12 interventions a year. Diet guidance covered the following content:\n\n1. Correlation between vascular age and cognitive function: a diet that prevents arteriosclerosis.\n\n2. Reduced salt and reduced trans fatty acids.\n\n3. 
Increased omega-3 fatty acid.\n\n4. Diet to prevent saccharification.\n\n5. Food containing antioxidants.\n\nThis guidance continued for about 20 minutes per session.\n\nFor the exercise component, aerobic exercise was conducted under the guidance of an occupational therapist. The aerobic exercise undertaken was rhythmic gymnastics set to music, and was continued for about 20 minutes at a time.\n\nThe n-back task is a delayed recall task, in which subjects recall items presented a certain number of steps earlier. Subjects started with one set of back tasks and kept adding an extra set each round for 40 minutes, so that the difficulty of the task gradually increased.\n\nFor one set of back tasks, subjects first memorized ten random words by reading them aloud; they were not allowed to write them down. After that, subjects performed a dual-task game and then wrote down what they remembered of those ten words. For the second set of back tasks, subjects memorized a new set of ten words and then played two dual-task games; after playing the two games, they tried to write down as many of the new ten words as possible. For the third set of back tasks, they again memorized a new set of ten words, played an additional dual-task game, and did the same. They continued this process for 40 minutes, increasing the difficulty of the n-back task.\n\nBefore conducting the dual-task (http://www.ncgg.go.jp/cgss/department/cre/cognicise.html), we registered with the Japan Cognisize Spread Secretariat (http://www.ncgg.go.jp/topics/20150512.html), which promotes dual-task exercise methods. The dual-task requires the participant to perform two activities at the same time, such as arithmetic calculations while stepping.\n\nAs a comparison to the dual-task, a single learning task was conducted: a lecture-style learning task lasting 90 minutes, with a 10-minute break in the middle.\n\nMCI screening. 
MCI screening was used for comparison before the intervention and after 6 months, and was conducted using the Montreal Cognitive Assessment (MoCA: http://www.mocatest.org/pdf_files/instructions/MoCA-Instructions-English_2010.pdf). This is a 30-point scale; a higher score indicates higher cognitive function. The cut-off value for MCI is 26 points. We obtained a license to use the scale from Dr. Ziad Nasreddine, the developer of the original version.\n\nMeasurement of positive and negative mood. Positive and negative mood was measured by collecting sublingual saliva and measuring salivary α-amylase concentration (NIPRO; catalog number, 34549000). This was measured before the intervention. Salivary α-amylase reflects sympathetic nervous activity: it rises following a negative stimulus, and falls following a positive one17. The reference values of salivary α-amylase given by NIPRO, the manufacturer of the measurement device, are as follows:\n\n-    0–30 KU/L: no negative stress.\n\n-    31–45 KU/L: slight negative stress.\n\n-    46–60 KU/L: negative stress.\n\n-    61 KU/L or more: a high amount of negative stress.\n\nTo compare MoCA scores before and after the intervention, paired t-tests were conducted. Correlations of MoCA scores with age and salivary α-amylase were computed using Pearson product-moment correlation coefficients. SPSS 21.0 for Windows was used for analysis.\n\nThis study was approved by the Ethics Committee of Nara Medical University (approval number: 741). For the benefit of the participants, we explained, orally and in writing: (i) the purpose and method of the study; (ii) each individual's freedom to decline or withdraw participation; (iii) the measures taken to protect privacy; (iv) the approach to data management; and (v) our intentions regarding publication of the results. 
Written informed consent was required for participation.\n\nThis study has been retrospectively registered in the clinical trial registration database of the University Hospital Medical Information Network (UMIN); registration date: December 31, 2016; registration number: R000028956 (https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr_view.cgi?recptno=R000028956).\n\n\nResults\n\nThe intervention had a 100% completion rate. The average age of the subjects was 72.3 (±6.6) years. The group consisted of 129 men and 253 women. The MoCA scores for each age category (50s, 60s, 70s, and 80s), measured before the intervention, are shown in Figure 1. The correlation coefficients between age and each cognitive function at the beginning of the intervention are also shown in Figure 1, classified by MoCA assessment item.\n\nAs shown in Figure 1, the cognitive function with the strongest negative correlation with age at the start of the intervention was the short-term memory recall task; the MoCA score decreased rapidly in participants in their 70s and 80s compared to participants in their 50s and 60s (r = -0.56). Other tests whose results were negatively correlated with age were the trail making task and the clock task (together indicating visuospatial cognitive ability; r = -0.38 and -0.49, respectively), verbal fluency (memory retrieval ability, r = -0.46), attention, concentration and working memory (ability to concentrate, attention and memory; r = -0.53), the repetition task (memory, r = -0.49), abstract thinking (r = -0.31), and orientation (r = -0.34). Cognitive functions that were maintained with increasing age were visuoconstructional skills (cube: graphic replication) and naming (animal name recall).\n\nTable 1 compares evaluations made before and six months after the start of the monthly intervention, for all ages combined. 
The average of the total scores before the intervention was >26 points in both the dual-task group and the single-task group; this was above the cut-off value for MCI (26 points). After the intervention, there was significant improvement in both the dual-task group and the single-task group, and the average value on the MoCA remained above the cut-off value for MCI (p < 0.01).\n\nCorresponding paired t-test, n = 382 (dual task, n = 304; single task, n = 78). These are the average scores of all participants in all age groups (mean).\n\nComparing the results of the dual-task group and the single-task group, only the dual-task group showed a significant improvement in the trail making and cube drawing tests (visual-spatial cognitive abilities), and in abstract thinking, speaking in order, speaking in reverse, sustained attention, and calculation (ability to concentrate, attention and memory) (p < 0.01). There were significant improvements in both the dual-task and single-task groups in verbal fluency, the repetition task (memory), the delayed recall task (memory playback capability), and in the overall score on the MoCA (the presence or absence of MCI) (p < 0.05).\n\nSalivary amylase was collected at a briefing session before the intervention. At that time, the subjects, after taking the MoCA test, remained in the venue to provide saliva. Because it takes time to collect individual saliva samples and measure α-amylase, only subjects with time available remained in the venue. In addition, α-amylase could not be measured in some subjects (measurement error), owing in part to the influence of antihypertensive medication. For these reasons, salivary α-amylase was collected from 280 subjects. In the measurement of salivary α-amylase, the minimum value was 2 KU/L and the maximum value was 216 KU/L (mean, 49.7 ± 47.0). 
The correlation of salivary α-amylase with MoCA total score was negative (r = -0.31). Therefore, there was a trend for MoCA scores to be lower in individuals experiencing greater stress (Figure 2).\n\n\nDiscussion\n\nProphylactic interventions for AD, carried out by individual municipalities, are still at the stage of trial and error (http://www.mhlw.go.jp/file/06-Seisakujouhou-12300000-Roukenkyoku/0000136616.pdf). As this study shows, dementia preventive measures in Japan vary from municipality to municipality, and a standard preventive program for dementia has not yet been established. It can be noted that general cognitive function, which decreased along with age, can be dissociated from specific cognitive functions, some of which were relatively maintained with increasing age. The specific cognitive function that had the strongest negative correlation with age was the delayed recall task, which requires the individual to memorize five nouns and to recall them after about five minutes. The brain’s ability to memorize new things, retain them, and reproduce them declined rapidly with age. Performance on the trail making and clock-drawing tasks (both measuring visuospatial cognitive ability) also decreased rapidly with age. Visuospatial cognition refers to the brain’s ability to process visual information; when this ability declines, people tend to get lost18. There were also correlations between age and word recall (thinking ability), speaking in order, speaking in reverse, sustained attention and calculation (ability to concentrate, attention and memory), and the repetition task (memory). Decreased performance in the word recall task indicates that an individual may have trouble recalling words during a conversation. Decreasing ability to concentrate affects safety and the continuity of actions; when attention is impaired, people struggle to maintain attention on stimuli, so their actions become distracted19. 
Memory was assessed in this study through tasks requiring the participants to speak in order, to speak in reverse, and to listen to and reproduce sentences. A decline in memory due to aging (age-associated memory impairment) is caused by a decline in cranial nerve function20 and failures in the brain’s networks21. Disruption to the cognitive functions considered above adversely affects daily life and lowers safety. Therefore, reducing the risks associated with cognitive decline is an important issue. By contrast, some cognitive functions were maintained even with increasing age, as indicated by performance on the tasks requiring shape replication and animal name recall. These tasks require the reproduction of familiar forms, learned from a young age, and are not tasks in which the participant memorizes new things. Abstract thinking, likewise, is a common task in situations that require common knowledge. In tasks based on familiar stimuli, cognitive function is maintained with increasing age, because familiar experiences are easier to retrieve22.\n\nComparing scores before and after the intervention, the cognitive tasks that improved significantly over the course of the intervention were the trail making and clock drawing tests (visual-spatial cognitive ability), abstract thinking, speaking in order, speaking in reverse, sustained attention and calculation (ability to concentrate, attention and memory), the repetition task (memory), and the delayed recall task (memory playback capability). All these cognitive functions decrease with age, but here showed an improvement throughout the intervention; importantly, these functions are associated with the ability of an older person to keep safe and secure in everyday life.\n\nOn comparing the results of the dual-task group and the single-task group, a greater number of cognitive functions showed significant improvement in the dual-task group. 
After the intervention, visuospatial cognition, abstract thinking, concentration, attention, and memory showed significant improvement only in the dual-task group. To run two tasks at the same time, the frontal lobe, and in particular its most anterior region, the prefrontal cortex, is essential23. On this understanding, the dual-task is considered to train the frontal lobe, a view supported by neuroscience studies showing frontal activation during the dual-task24,25. The prefrontal cortex ages earlier than other brain regions26, meaning that cognitive training for the elderly should prioritize maintaining the function of the prefrontal cortex. Interestingly, it has been found that the number of neural stem cells, which are needed for the regeneration of nerve cells, is maintained even with increasing age27,28. Thus, it is reasonable to expect that cognitive training can maintain cognitive function by drawing on these neural stem cells.\n\nThe n-back task, first introduced by Wayne Kirchner in 195829, measures the capacity of an individual’s working memory. It has been demonstrated that the n-back task not only measures working memory, but can also improve it when structured into a brain training intervention: improvements in fluid intelligence30 and changes in cortical dopamine D1 receptor density31 have been demonstrated. A synergistic effect in combination with the dual-task can be expected.\n\nIn addition, the present intervention improved positive mood, presumably activating the brain’s reward system. An increase in positive feelings is reported to have effects on body and mind, such as increasing satisfaction and success32, improving immune function33, increasing confidence towards others and fostering closer relationships34, positively affecting physical and mental health35, and speeding up recovery from disease36. Thus, by targeting an improved positive mood, other positive effects might be expected. 
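The statistical analyses described in the Methods (paired t-tests on pre/post MoCA scores, and Pearson product-moment correlations with age and salivary α-amylase) can be sketched as follows. This is a minimal illustration with synthetic numbers, not the study's data — the study itself used SPSS 21.0:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def paired_t(before, after):
    """Paired t statistic for pre/post scores (df = n - 1)."""
    diffs = [b - a for a, b in zip(before, after)]  # after minus before
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / (sd / math.sqrt(n))

# Synthetic MoCA scores before/after an intervention (illustrative only):
before = [24, 25, 23, 26, 22, 25]
after = [26, 26, 25, 27, 24, 26]
print(round(paired_t(before, after), 2))  # → 6.71

# Synthetic α-amylase vs MoCA: a perfectly linear negative relationship.
amylase = [20, 35, 50, 65, 80, 95]
moca = [28, 27, 26, 25, 24, 23]
print(round(pearson_r(amylase, moca), 2))  # → -1.0
```

A negative r, as in the second example, corresponds to the study's finding that higher stress (higher α-amylase) went with lower MoCA scores; the real data showed a much weaker correlation (r = -0.31).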
In the present study, a correlation was found between salivary α-amylase, which reflects mood, and cognitive function; when negative stress was high, MoCA scores were low. Our research group, in a study conducted the previous year, demonstrated a correlation between the ability to cope with stress (Sense of Coherence; SOC) and cognitive function37, and the current study supports this finding.\n\n\nConclusion\n\nAn intervention combining exercise, an n-back task, and a dual-task improved performance in a greater number of areas of cognitive function, compared with a similar intervention in which a single learning task was substituted for the dual-task. The cognitive functions that decreased most with increasing age were delayed recall, visuospatial cognition, thinking, concentration, attention, and memory. The cognitive tasks that had no correlation with age, and were maintained even with age, were graphic replication, animal name recall, abstract thinking, and orientation – all of which require the reproduction of familiar forms or names. The results of this study show that it is possible to improve cognition through a structured intervention. In addition, a correlation between cognitive function and positive mood was demonstrated by the present study. It would be interesting to investigate whether improvement in positive feeling directly improves the effectiveness of brain training.\n\n\nData availability\n\nDataset 1. MoCA scores and age. Sheet 1 contains age and MoCA test data from before and after the intervention. Sheet 2 contains the data before and after the intervention, divided into the dual-task group and the single-task group. doi: 10.5256/f1000research.10584.d15082538\n\nDataset 2. MoCA and salivary amylase. These data show the MoCA test scores before the intervention and the measured values of salivary α-amylase. doi: 10.5256/f1000research.10584.d15082639", "appendix": "Author contributions\n\n\n\nKS, YK, and CS conceived the study. 
All authors implemented this intervention, carried out the data collection and reported the results of the functional evaluation to the participants.\n\nKS directed the project and drafted the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nThis study was a collaborative project led by the Kashihara City Council of Social Welfare. No competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors appreciate the help of everyone who took part in the study.\n\n\nSupplementary material\n\nSupplementary File 1: CONSORT checklist.
[ { "id": "20910", "date": "18 Apr 2017", "name": "Sarah Anne Fraser", "expertise": [ "Dual-task", "Aging", "Cognitive Function", "Physical Function", "Neuroimaging" ], "suggestion": "Not Approved", "report": "Not Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nExamination on brain training method: Effects of n-back task and dual task\nSummary of overview: Although the authors have a large sample and an innovative approach providing their participants with several interventions in a monthly session, the groups are poorly defined and do not adhere to the literature on dual-task (see meta-analysis or reviews on the topic, for example: Verhaeghen et al., 20031, Fraser & Bherer, 20132). Dual task is a combination of two tasks and when these two tasks are examined alone these are the single tasks. In the study described, I believe that the authors are comparing a combined intervention group (including dual-task, n-back, education, etc.) and an active control group (that listened to lectures), but the manuscript does not present it this way. The rationale for examining the individual components of the MoCA between the groups and correlating salivary samples is also not clear to the reader from the overview provided. There is a possible self-selection bias and confound that the \"dual task group\" has lower MoCA scores at baseline.
There is no indication of whether or not the individual components of the combined intervention group (dual task group) improved... Did those who completed dual-task training or n-back training get better on those tasks? The paper needs to be completely revised, properly labelling the two groups and identifying the MoCA and the salivary measures as the primary outcome measures of interest in this study.\nTitle: Examination on brain training method: Effects of n-back task and dual task. This title does not reflect the contents of the paper.\n\nAbstract: Background: Rationale for mood measures not clearly stated. Perhaps: The majority of non-drug interventions (this way there is a link with the 2 interventions (dual task + n-back studied)) Methods: 6 month intervention? Was this conducted in groups? How many times per week? Results: Many cognitive domains improved – What cognitive domains were measured? In methods, should state that participants had neuropsychological testing (if this is the measure of cognitive domains). Correlations with cognitive function and salivary α-amylase: Which cognitive functions? N-back performance, dual task performance or single task? The statement of these results is too vague. Conclusions: First sentence…perhaps functional decline can be reduced with cognitive intervention? Second sentence: “Positive mood and cognitive function were correlated, suggesting that encouraging comfort in the participant can increase the effectiveness of the intervention.” I do not understand how this conclusion stems from the correlation between salivary α-amylase and cognitive function.\n\nIntroduction: “In addition, a brain training class is held once a month, and prevention interventions and cognitive function evaluations are continuously conducted.” This sentence from the introduction sounds like methodology…perhaps stated differently: In these public halls brain training classes were offered once a month and prevention interventions….
Also, how are brain training and prevention interventions different here? And what is meant by continuously conducted? Is this every week, every day? Brain training – is this a risk prevention class? The description that follows brain training suggests that it is an educational class, not a training class.\n\nTo my knowledge, the n-back task has not been validated as an effective brain training task – Can you provide a reference for this statement? There has been a specific demonstration of n-back training effects on fluid intelligence (Jaeggi and colleagues)…but broadly stating that it is valid and effective based on these results is not accurate.\n\nRationale for all the different “interventions” is not clear. In the one session of brain training, all participants got aerobic training, n-back, dual task, lifestyle education for brain health, recreation?\n\nMethods: The distinction between what has been labeled dual task group and single task group is not clear. I believe these groups have not been appropriately labeled. Especially since the dual task group is doing more than a dual task. This is a combined intervention group that is exposed to many different interventions including dual task but is not a dual-task specific intervention group. And single task, typically this means that the person would only complete one component of the dual task - so if the task is walking and talking, then participants would only walk alone and talk alone but never perform the two tasks together.\n\nThe participants chose the groups themselves? This creates a selection bias – how do you control for this? Description of the n-back task is unclear. Did they have to remember one item back, two items back? See Jaeggi et al., 20033 for visual description of one type of n-back task… Was this an auditory n-back task?\n\nFor the dual-task, the example provided is stepping with arithmetic – were there other combinations of tasks? This is not clear from the description.
Typically dual task performance is compared to single task performance on the component tasks (i.e., if stepping and arithmetic as dual task then this would be compared to performance on arithmetic alone (no stepping) or stepping alone (no arithmetic).\n\nThe use of a completely different task: lecture style learning task is not an appropriate comparison for the dual task performance. Also what is a lecture style learning task? Are participants presented a lecture on some topic?\n\nWas anything measured in these trained tasks? Reaction times? Accuracy? Steps taken?\n\nResults: All participants are pooled for the correlational results? Initially you mentioned having 2 groups (dual task and single task).\nThe MoCA is typically a global cognitive function score on 30…I have never seen this broken down by each item…it is not a neuropsychological battery – it is a global score. Can you provide rationale for this type of analysis? Was there a significant correlation between age and the global score on 30?\n\nIn Table 1. Based on the global score the combined intervention group (or dual task group) had lower overall MoCA scores than the Single task group at baseline. But the change from pre to post in the global score on 30, seems larger in the dual task group (up by 4.21 points) versus single (up by only 1.49 points).\n\nDiscussion: Not reviewed because other sections need major clarification.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No", "responses": [] } ]
1
https://f1000research.com/articles/6-116
https://f1000research.com/articles/5-2762/v1
25 Nov 16
{ "type": "Research Note", "title": "Optimal threshold estimation for binary classifiers using game theory", "authors": [ "Ignacio Enrique Sanchez" ], "abstract": "Many bioinformatics algorithms can be understood as binary classifiers. They are usually trained by maximizing the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of “specificity equals sensitivity” maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.", "keywords": [ "Binary classifier", "ROC curve", "accuracy", "optimal threshold", "optimal cutoff", "class imbalance", "game theory", "minimax principle." ], "content": "Introduction\n\nMany bioinformatics algorithms can be understood as binary classifiers, as they are used to investigate whether a query entity belongs to a certain class1. Score-based binary classifiers assign a number to the query. If this score surpasses a threshold, the query is assigned to the class under consideration. A minority of users are able to choose a threshold using their understanding of the algorithm, while the majority uses the default threshold.\n\nBinary classifiers are often trained and compared under a unified framework, the receiver operating characteristic (ROC) curve2. 
Briefly, classifier output is first compared to a training set at all possible classification thresholds, yielding the confusion matrix with the number of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN) (Table 1). The ROC curve plots the true positive rate (TPR = TP/(TP + FN), also called sensitivity) against the false positive rate (FPR = FP/(FP + TN), which equals 1 − specificity) (Figure 1, continuous line). Classifier training often aims at maximizing the area under the ROC curve, which amounts to maximizing the probability that a randomly chosen positive is ranked before a randomly chosen negative2. This summary statistic measures performance without committing to a threshold.\n\nTP: Number of true positives. FP: Number of false positives. FN: Number of false negatives. TN: Number of true negatives.\n\nThe descending diagonal TPR = 1 – FPR (dashed line) minimizes classifier performance with respect to qP. The intersection between the receiver operating characteristic (ROC) curve (continuous line) and this diagonal maximizes this minimal, worst-case utility and determines the optimal operating point according to the minimax principle (empty circle).\n\nPractical application of a classifier requires using a threshold-dependent performance measure to choose the operating point1,3. This is in practice a complex task because the application domain may be skewed in two ways4. First, for many relevant bioinformatics problems the prevalence of positives in nature qP = (TP + FN)/(TP + TN + FP + FN) does not necessarily match the training set qP and is hard to estimate2,5. Second, the yields (or costs) for correct and incorrect classification of positives and negatives in the machine learning paradigm (YTP, YTN, YFP, YFN) may be different from each other and highly context-dependent1,3.
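The confusion-matrix and ROC quantities defined above are simple to compute from scores and labels. The following is an illustrative pure-Python sketch (the function names confusion and roc_points are ours, not from the paper):

```python
def confusion(scores, labels, thr):
    """Confusion-matrix counts at one threshold.
    labels: 1 = positive, 0 = negative; predict positive when score >= thr."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 0)
    return tp, fp, fn, tn

def roc_points(scores, labels):
    """(FPR, TPR) pairs at every distinct score threshold, tracing the ROC curve."""
    pts = []
    for thr in sorted(set(scores), reverse=True):
        tp, fp, fn, tn = confusion(scores, labels, thr)
        pts.append((fp / (fp + tn), tp / (tp + fn)))
    return pts
```

For example, scores [0.9, 0.8, 0.4, 0.2] with labels [1, 1, 0, 0] give a perfect ranking, and the traced curve passes through (FPR, TPR) = (0, 1).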
Points in the ROC plane with equal performance are connected by iso-yield lines with a slope, the skew ratio, which is the product of the class skew and the yield skew4:\n\nskew ratio = (qN/qP) × (YTN − YFP)/(YTP − YFN)\n\nThe skew ratio expresses the relative importance of negatives and positives, regardless of the source of the skew4. Multiple threshold-dependent performance measures have been proposed and discussed in terms of skew sensitivity3,4, but often not justified from first principles.\n\n\nTheory\n\nGame theory allows us to consider a binary classifier as a zero-sum game between nature and the classifier6. In this game, nature is a player that uses a mixed strategy, with probabilities qP and qN = 1 − qP for positives and negatives, respectively. The algorithm is the second player, and each threshold value corresponds to a mixed strategy with probabilities pP and pN for positives and negatives. Two of the four outcomes of the game, TP and TN, favor the classifier, while the remaining two, FP and FN, favor nature. The game payoff matrix (Table 2) displays the four possible outcomes and the corresponding classifier utilities a, b, c and d. The Utility of the classifier within the game is:\n\nUtility = qP [a·TPR + c·(1 − TPR)] + qN [b·FPR + d·(1 − FPR)]\n\na: Player I utility for a true positive. b: Player I utility for a false positive. c: Player I utility for a false negative. d: Player I utility for a true negative.\n\nThe payoff matrix for this zero-sum game corresponds directly to the confusion matrix for the classifier, and the game utilities a, b, c, d correspond to the machine learning yields YTP, YFP, YFN, YTN, respectively (Table 1). Without loss of generality4, we can study the case a=d=1 and b=c=0. Classifier Utility within the game then reduces to the Accuracy or fraction of correct predictions2–4.
In sum, maximizing the Utility of a binary classifier in a zero-sum game against nature is equivalent to maximizing its Accuracy, a common threshold-dependent performance measure.\n\nWe can now use the minimax principle from game theory6 to choose the operating point for the classifier. This principle maximizes utility for a player within a game using a pessimistic approach. For each possible action a player can take, we calculate a worst-case utility by assuming that the other player will take the action that gives them the highest utility (and the player of interest the lowest). The player of interest should take the action that maximizes this minimal, worst-case utility. Thus, the minimax utility of a player is the largest value that the player can be sure to get regardless of the actions of the other player.\n\nIn our classifier versus nature game, Utility/Accuracy of the classifier is skew-sensitive, depending on qP for a given threshold3,4:\n\nUtility = qP·TPR + (1 − qP)·(1 − FPR)\n\nThe derivative of the Utility with respect to qP, TPR + FPR − 1, is zero along the TPR = 1 − FPR line in ROC space (Figure 1, dashed line). The derivative is negative below this line and positive above it, indicating that points along this line are minima of the Utility function with respect to the strategy qP of the nature player. According to the minimax principle, the classifier player should operate at the point along the TPR = 1 − FPR line that maximizes Utility. In ROC space, this condition corresponds to the intersection between the ROC curve and the descending diagonal (Figure 1, empty circle) and yields a minimax value of 1 − FPR for the Utility. It is worth noting that this analysis regarding class skew is also valid for yield/cost skew4.\n\n\nDiscussion\n\nWe showed that binary classifiers may be analyzed in terms of game theory.
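The prescription above, operating where the empirical ROC curve meets the descending diagonal (i.e. where sensitivity most nearly equals specificity), is easy to apply in practice. The sketch below uses the paper's a=d=1, b=c=0 convention; the helper names are illustrative:

```python
def utility(qp, tpr, fpr):
    """Skew-sensitive Utility/Accuracy for prevalence qp (case a=d=1, b=c=0)."""
    return qp * tpr + (1 - qp) * (1 - fpr)

def minimax_threshold(points):
    """points: (threshold, FPR, TPR) triples along an empirical ROC curve.
    Picks the point closest to the descending diagonal TPR = 1 - FPR and
    returns that threshold with its minimax Utility/Accuracy, 1 - FPR."""
    thr, fpr, tpr = min(points, key=lambda p: abs(p[2] + p[1] - 1))
    return thr, 1 - fpr
```

On the diagonal itself the Utility is independent of the prevalence qp, which is exactly the robustness argument: utility(0.1, 0.8, 0.2) and utility(0.9, 0.8, 0.2) both equal 0.8.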
From the minimax principle, we propose a criterion to choose an operating point for the classifier that maximizes robustness against uncertainties in the skew ratio, i.e., in the prevalence of positives in nature and in the yields/costs for true positives, true negatives, false positives and false negatives. This can be of practical value, since these uncertainties are widespread in bioinformatics and clinical applications.\n\nIn machine learning theory, TPR = 1 − FPR is the line of skew-indifference for Accuracy as a performance metric4. This is in agreement with the skew-indifference condition imposed by the minimax principle from game theory. However, to our knowledge, skew-indifference has not been exploited for optimal threshold estimation. Furthermore, the operating point of a classifier is often chosen by balancing sensitivity and specificity, without reference to the rationale behind this practice7. Our game theory analysis shows that this empirical practice can be understood as a maximization of classifier robustness.", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nANPCyT [PICT 2012-2550]. IES is a CONICET career investigator.\n\n\nAcknowledgements\n\nI would like to thank Juan Pablo Pinasco and Francisco Melo for discussion.\n\n\nReferences\n\nSwets JA, Dawes RM, Monahan J: Better decisions through science. Sci Am. 2000; 283(4): 82–7. PubMed Abstract | Publisher Full Text\n\nFawcett T: An introduction to ROC analysis. Pattern Recognit Lett. 2006; 27(8): 861–874. Publisher Full Text\n\nOkeh UM, Okoro CN: Evaluating Measures of Indicators of Diagnostic Test Performance: Fundamental Meanings and Formulars. J Biomet Biostat. 2012; 3(1): 132. Publisher Full Text\n\nFlach PA: The geometry of ROC space: understanding machine learning metrics through ROC isometrics. Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003). 2003; 194–201.
Reference Source\n\nTompa M, Li N, Bailey TL, et al.: Assessing computational tools for the discovery of transcription factor binding sites. Nat Biotechnol. 2005; 23(1): 137–44. PubMed Abstract | Publisher Full Text\n\nVon Neumann J, Morgenstern O: Theory of games and economic behavior. 6th ed., USA: Princeton university press. 1955. Reference Source\n\nCarmona SJ, Nielsen M, Schafer-Nielsen C, et al.: Towards High-throughput Immunomics for Infectious Diseases: Use of Next-generation Peptide Microarrays for Rapid Discovery and Mapping of Antigenic Determinants. Mol Cell Proteomics. 2015; 14(7): 1871–84. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "17996", "date": "06 Dec 2016", "name": "Pieter Meysman", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe article by Ignacio Enrique Sanchez concerns a common problem in machine learning, namely the selection of the optimal classification threshold, and provides a mathematical solution based on the principles of game theory. The main concern of the article deals with the unknown distribution of positive and negative samples in the ‘real world’ or ‘nature’, thus beyond the provided training data set. The provided derivation is very elegant, and luckily for those researchers in the field the solution turns out to be to select a threshold where sensitivity and specificity are equal in the training data set.\nThe biggest concern from the perspective of game theory is that ‘nature’ is not a conscious agent, and thus will not mischievously choose a positive/negative fraction where the classifier will perform the worst. However, as stated in the article, this is to simulate the worst-case scenario. This also means that the threshold calculation may only be optimal in this worst-case scenario, but suboptimal in all other cases.
It is therefore still not the final word in threshold optimisation, and still leaves machine learning researchers the flexibility to choose other thresholds.\nHowever I do have a minor comment on the derivation, that I expect can be addressed with small clarifications to the text:\nThe Accuracy equals the Utility as defined by the payoff matrix in the specific case a=d=1 and b=c=0, which is stated to hold without loss of generality. However, in my understanding, this step makes the assumption that the cost for a false negative and the cost for a false positive are equal, which may not be the case for all classifiers. Thus it is unclear if this specific case can be transposed to all classifiers in general.", "responses": [] }, { "id": "17994", "date": "07 Dec 2016", "name": "Luis Diambra", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe author presents a criterion to choose the operating point for a binary classifier. This criterion is analyzed in terms of game theory. By using the minimax principle, the author proposes to use as the classifier threshold the intersection between the ROC curve and the descending diagonal in ROC space. This operating point for the classifier could maximize robustness against some bias in the training set. I found some novelty in considering such bias for optimal threshold estimation.
The paper is well written and organized, but I think also that it could be improved by incorporating some general considerations that help readers to a better understanding of the problem and the present proposition [1,2].\nIn the binary classification problem one is trying to deduce the answers to new questions, rather than just recall the answers to old ones. In order to do that we need to train the classifier from question-answer pairs (the training set). This is called supervised learning, because it requires a teacher, knowing the rule, which gives the correct answer to the example questions. In the case here, the author considers score-based binary classifiers, which do not need such a learning stage. Could the author put the problem in the context of supervised vs. unsupervised learning?\nIn the supervised learning context the classifier threshold is a parameter that is found during the learning stage. Training the classifier by maximizing the area under the ROC curve is a strategy for the classifier to learn the training set. Consequently, the proposed strategy could be considered as a \"learning rule\". However, the performance over new examples is not guaranteed. Another point that could improve the manuscript would be to consider the generalization ability of the proposed strategy. Could the author add a discussion in this sense?\nI believe that this manuscript is of an acceptable scientific standard, and that it will be of interest to a wide audience; however, the manuscript could be revised, as outlined above.", "responses": [] } ]
1
https://f1000research.com/articles/5-2762
https://f1000research.com/articles/6-107/v1
06 Feb 17
{ "type": "Data Note", "title": "In silico discovery of terpenoid metabolism in Cannabis sativa", "authors": [ "Luca Massimino" ], "abstract": "Due to their efficacy, cannabis based therapies are currently being prescribed for the treatment of many different medical conditions. Interestingly, treatments based on the use of cannabis flowers or their derivatives have been shown to be very effective, while therapies based on drugs containing THC alone lack therapeutic value and lead to increased side effects, likely resulting from the absence of other pivotal entourage compounds found in the Phyto-complex. Among these compounds are terpenoids, which are not produced exclusively by cannabis plants, so other plant species must share many of the enzymes involved in their metabolism. In the present work, 23,630 transcripts from the canSat3 reference transcriptome were scanned for evolutionarily conserved protein domains and annotated in accordance with their predicted molecular functions. A total of 215 evolutionarily conserved genes encoding enzymes presumably involved in terpenoid metabolism are described, together with their expression profiles in different cannabis plant tissues at different developmental stages. The resource presented here will aid future investigations on terpenoid metabolism in Cannabis sativa.", "keywords": [ "terpenoid metabolism", "cannabis sativa", "in silico gene expression", "evolutionarily conserved genes" ], "content": "Introduction\n\nDue to its astonishing efficacy1, nowadays cannabis is prescribed by physicians for the treatment of neurological, psychiatric, immunological, cardiovascular, gastrointestinal, and oncological conditions2–7. Although therapies based on the use of cannabis flowers or their derivatives are recognized to be very effective, treatments centered on drugs containing Δ9-tetrahydrocannabinol (THC) alone lack efficacy and lead to increased side effects8,9. 
This discrepancy seems to result from the absence of the synergistic effects of additional pivotal compounds found in the Phyto-complex, the so-called entourage effect10. Among these molecules are other cannabinoids and terpenoids, which are thought to play major roles in the modulation of THC11.\n\nTerpenes are small hydrocarbon (isoprenoid) molecules classified either as monoterpenes, sesquiterpenes, diterpenes or carotenoids, depending on the number of isoprene units (C5) used to synthesize them. Terpenoids are small lipids derived from terpenes, often accompanied by a strong odor useful for the plants to protect themselves against possible predators12. Terpenoids not only have important functions when working in concert with cannabinoids; they have been widely investigated in many different plant species and are being exploited as anti-fungal, anti-bacterial, anti-oxidant, anti-inflammatory, anti-stress, anti-cancer and analgesic agents13–18. However, whilst the gene networks controlling the biosynthesis of cannabinoids and their precursors have been extensively studied19–22, the biosynthetic pathway of terpenoid molecules in Cannabis sativa has only recently begun to be elucidated. Only two genes have been characterized, one encoding (-)-limonene synthase, the other (+)-α-pinene synthase23, two enzymes responsible for the conversion of geranyl pyrophosphate into limonene and pinene, respectively24,25. Remarkably, while cannabinoids are only found in cannabis plants, terpenoids are also produced by a variety of other plant species, so these species must share many of the enzymes involved in their metabolism.\n\nIn the present work, evolutionarily conserved genes encoding enzymes predicted to be involved in terpenoid metabolism have been identified within the transcripts of the canSat3 reference transcriptome of Cannabis sativa21.
Moreover, by taking advantage of available gene expression data26, gene expression profiling of these enzymes was performed in cannabis plant tissue at different developmental stages. The data note presented here will provide researchers with a catalogue of candidate genes that will considerably accelerate future investigations on terpenoid metabolism in Cannabis sativa.\n\n\nMaterials and methods\n\nCannabis sativa transcript sequences (n=23,630) taken from the canSat3 genome assembly (http://genome.ccbr.utoronto.ca/)21 were annotated with Blast2GO 4.0.727 using NCBI blastx and InterProScan databases. Terpene metabolism-related genes were selected if found to be present in datasets downstream of the “terpene metabolic process” gene ontology category from the AmiGO 2 repository (GO:0042214), including the carotene metabolic process (GO:0016119), ent-kaurene metabolic process (GO:0033331), ent-pimara-8(14),15-diene metabolic process (GO:1901539), isoprene metabolic process (GO:0043611), miltiradiene metabolic process (GO:1901944), monoterpene metabolic process (GO:0043692), sesquarterpene metabolic process (GO:1903192), sesquiterpene metabolic process (GO:0051761), terpene biosynthetic process (GO:0046246), and terpene catabolic process (GO:0046247)28.\n\nGene expression profiles from cannabis plant tissue at different developmental stages were downloaded from the NCBI GEO repository (https://www.ncbi.nlm.nih.gov/geo/). Gene expression heatmaps and unsupervised hierarchical clustering were performed with GENE-E 3.0.21329.\n\n\nResults\n\nAlthough the Cannabis sativa reference genome and transcriptome have been publicly released21, only a few genes have been characterized and surveyed for their molecular functions.
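The selection step described in the Methods, keeping transcripts whose annotations fall under "terpene metabolic process" or one of its child GO terms, amounts to a set-membership filter. A minimal illustration follows; the GO identifiers are those listed in the Methods, while the transcript ids and the helper name terpene_related are invented for the example:

```python
# Child terms of "terpene metabolic process" used for filtering (from the Methods).
TERPENE_GO = {
    "GO:0042214", "GO:0016119", "GO:0033331", "GO:1901539", "GO:0043611",
    "GO:1901944", "GO:0043692", "GO:1903192", "GO:0051761", "GO:0046246",
    "GO:0046247",
}

def terpene_related(annotations):
    """annotations: dict mapping transcript id -> set of assigned GO ids.
    Returns the transcript ids carrying at least one terpene-related term."""
    return sorted(t for t, gos in annotations.items() if gos & TERPENE_GO)
```

In the real pipeline the annotation sets would come from the Blast2GO/InterProScan output rather than being typed in by hand.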
To define the possible roles of these genes, 40,197 canSat3 transcript sequences were downloaded from the cannabis genome browser (http://genome.ccbr.utoronto.ca/), translated in silico, and scanned for evolutionarily conserved protein domains for functional annotation (Figure 1). To identify the genes presumably playing a role in terpenoid metabolism, annotated transcripts were filtered for gene ontology (GO) categories involved in terpene biosynthesis and catabolism using the AmiGO 2 reference database. A total of 288 transcripts representing 215 different genes were predicted to be involved in the metabolism of bisabolene, cadinene, carotene, copaene, ent-kaurene, farnesol, geraniol, germacrene, lycopene, limonene, myrcene, phytoene, pinene, squalene, and others (Supplementary table 1). Functional characterization of this subset confirmed an enrichment for GO categories involved in different terpene biosynthetic and catabolic processes (Figure 2).\n\nSchematic of the bioinformatics pipeline utilized in this work. Cannabis sativa transcript sequences were taken from the canSat3 reference genome, functionally annotated with Blast2GO 4.0.7, filtered for terpenoid metabolism categories (AmiGO 2), and integrated with gene expression data downloaded from NCBI.\n\nFunctional enrichment analysis of putative terpenoid metabolism-related transcripts taken from the canSat3 reference genome. Enrichment for terpene biosynthesis and catabolism is shown for Biological Process (A) and Molecular Function (B) Gene Ontology categories.\n\nTerpenoids are produced by several plant species and in several types of plant tissue as a defense against predators30. As with other biological compounds, their abundance is expected to correlate, at least in part, with the expression levels of the enzymes involved in their metabolism. Accordingly, expression analysis of genes likely to be involved in terpenoid metabolism was performed using previously published datasets (Figure 3; Supplementary table 2)26. 
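The GO-based selection step described in the Methods (keeping transcripts annotated to “terpene metabolic process” or one of its listed descendant categories) can be sketched in a few lines. The input format and function name below are illustrative assumptions, not the author's actual scripts.

```python
# Hypothetical sketch: filter Blast2GO-annotated transcripts by the
# terpene-related GO categories listed in the Methods. Input rows are
# (transcript_id, set_of_go_ids) pairs, an assumed parse of an annotation export.

# "terpene metabolic process" (GO:0042214) and its descendant categories
TERPENE_GO = {
    "GO:0042214",  # terpene metabolic process
    "GO:0016119",  # carotene metabolic process
    "GO:0033331",  # ent-kaurene metabolic process
    "GO:1901539",  # ent-pimara-8(14),15-diene metabolic process
    "GO:0043611",  # isoprene metabolic process
    "GO:1901944",  # miltiradiene metabolic process
    "GO:0043692",  # monoterpene metabolic process
    "GO:1903192",  # sesquarterpene metabolic process
    "GO:0051761",  # sesquiterpene metabolic process
    "GO:0046246",  # terpene biosynthetic process
    "GO:0046247",  # terpene catabolic process
}

def filter_terpene_transcripts(rows):
    """Keep transcript IDs whose GO annotations intersect the terpene set."""
    return [tid for tid, gos in rows if gos & TERPENE_GO]
```

For instance, a transcript annotated with GO:0043692 (monoterpene metabolic process) would be retained, while one carrying only generic annotations would be discarded.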
Notably, unsupervised hierarchical clustering identified four gene clusters. Cluster 1 genes display high expression in roots and stems; cluster 3 genes in hemp flowers; cluster 4 genes in leaves and flowers. Cluster 2 genes were constitutively expressed in all tissues. These results highlight which enzymes are expressed by specific tissues and provide a strong rationale for further investigations on the molecular basis of terpenoid metabolism.\n\nHeatmap showing relative expression values (log2 RPKM) of putative terpenoid metabolism-related genes from cannabis plant tissue taken at different developmental stages (shoot, root, stem, young and mature leaf, early-, mid- and mature-stage flower). Five gene clusters were defined according to unsupervised hierarchical clustering.\n\n\nDiscussion\n\nThe active principles inside plants have been exploited by humans for centuries, with Cannabis sativa being one of the oldest ever used for medicinal purposes31. Surprisingly, in contrast to whole-plant extracts, medicinal products containing exclusively THC have been found to lack efficacy and to cause intolerable side effects8,9. These results likely reflect the fact that such products lack other important co-factors typically found in the phytocomplex, such as terpenoids and other cannabinoids10, which contribute to the synergistic effects seen with whole-plant extracts.\n\nWhile genes involved in cannabinoid biosynthesis have been widely investigated19–22, the gene network controlling terpenoid metabolism has only recently begun to be elucidated, with the genes encoding (-)-limonene synthase and (+)-α-pinene synthase being the only two characterized23. To this end, Cannabis sativa transcripts21 were scanned for evolutionarily conserved protein domains and annotated according to their presumptive molecular function. As a result, 215 evolutionarily conserved genes were predicted to be involved in terpenoid metabolism. 
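The clustering step summarised above (log2 RPKM values, unsupervised hierarchical clustering) was performed with GENE-E in the original work; a minimal SciPy-based sketch of the same idea, assuming a genes-by-tissues RPKM matrix, might look like this. It is an illustrative stand-in, not the author's pipeline.

```python
# Minimal sketch of hierarchical clustering of expression profiles.
# `rpkm` is assumed to be a (genes x tissues) matrix of RPKM values.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_genes(rpkm, n_clusters=4):
    """Log-transform expression values and cut a hierarchical tree into clusters."""
    log_expr = np.log2(rpkm + 1.0)  # log2(RPKM + 1) avoids taking log of zero
    tree = linkage(log_expr, method="average", metric="euclidean")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```

On a toy matrix with two obviously distinct expression patterns, the function assigns matching cluster labels to genes with similar tissue profiles.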
Furthermore, in silico gene expression profiling26 of these enzymes in cannabis plant tissue at different developmental stages highlighted distinct gene clusters with characteristic expression patterns. For instance, cluster 3 genes (Figure 3) displayed high expression specifically in hemp flowers, which could be of great interest as different cannabis strains may exhibit different entourage effects.\n\nSince the current cannabis reference transcriptome is still at a preliminary stage21, it is very likely that important transcripts are still missing (false negatives). For example, the two genes encoding (-)-limonene synthase and (+)-α-pinene synthase23 align on the same transcript predicted to encode myrcene synthase (PK25781.1 in Supplementary table 1), and therefore cannot be discriminated. Unfortunately, overcoming this issue at the whole-genome level will require the complete reference transcriptome to become available. Until then, researchers must validate single transcripts with classic low-throughput techniques, such as molecular cloning followed by Sanger sequencing.\n\nNevertheless, the data presented here will ease future investigations on terpenoid metabolism in Cannabis sativa by providing researchers with a collection of candidate genes. For instance, one of these genes was predicted to encode β-bisabolene synthase (PK05069.1 in Supplementary table 1). Bisabolene is used as an antimicrobial agent32, as well as a biofuel33. However, prior to this report nothing was known about the gene network controlling its metabolism in Cannabis sativa. 
Once future studies integrate gene expression data with chemical analyses, a more complete molecular picture of terpenoid metabolism will emerge.\n\n\nData availability\n\nProcessed gene expression data can be found in the NCBI GEO repository (https://www.ncbi.nlm.nih.gov/geo/) with accession number GSE93201.", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary table 1. Evolutionarily conserved terpenoid metabolism transcripts\n\nList of putative terpenoid metabolism genes obtained with Blast2GO.\n\nClick here to access the data.\n\nSupplementary table 2. Terpenoid metabolism gene profiling in different tissues and developmental stages\n\nGene expression matrix of predicted terpenoid metabolism genes. Expression values are given in RPKM.\n\nClick here to access the data.\n\n\nReferences\n\nCommittee on the Health Effects of Marijuana: The Health Effects of Cannabis and Cannabinoids. (National Academies Press). 2017. Publisher Full Text\n\nHosking R, Zajicek J: Pharmacology: Cannabis in neurology--a potted review. Nat Rev Neurol. 2014; 10(8): 429–30. PubMed Abstract | Publisher Full Text\n\nCurran HV, Freeman TP, Mokrysz C, et al.: Keep off the grass? Cannabis, cognition and addiction. Nat Rev Neurosci. 2016; 17(5): 293–306. PubMed Abstract | Publisher Full Text\n\nKlein TW: Cannabinoid-based drugs as anti-inflammatory therapeutics. Nat Rev Immunol. 2005; 5(5): 400–11. PubMed Abstract | Publisher Full Text\n\nDi Marzo V, Després JP: CB1 antagonists for obesity--what lessons have we learned from rimonabant? Nat Rev Endocrinol. 2009; 5(11): 633–8. PubMed Abstract | Publisher Full Text\n\nGerich ME, Isfort RW, Brimhall B, et al.: Medical marijuana for digestive disorders: high time to prescribe? Am J Gastroenterol. 2015; 110(2): 208–14. 
PubMed Abstract | Publisher Full Text\n\nSwami M: Cannabis and cancer link. Nat Rev Cancer. 2009; 9: 148. Publisher Full Text\n\nBen Amar M: Cannabinoids in medicine: A review of their therapeutic potential. J Ethnopharmacol. 2006; 105(1–2): 1–25. PubMed Abstract | Publisher Full Text\n\nKowal MA, Hazekamp A, Grotenhermen F: Review on clinical studies with cannabis and cannabinoids 2010–2014. Cannabinoids. 2016; 11(special issue): 1–18. Reference Source\n\nRusso EB: Taming THC: potential cannabis synergy and phytocannabinoid-terpenoid entourage effects. Br J Pharmacol. 2011; 163(7): 1344–1364. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAndre CM, Hausman JF, Guerriero G: Cannabis sativa: The Plant of the Thousand and One Molecules. Front Plant Sci. 2016; 7: 19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPichersky E, Noel JP, Dudareva N: Biosynthesis of Plant Volatiles: Nature’s Diversity and Ingenuity. Science. 2006; 311(5762): 808–811. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhosla C, Keasling JD: Metabolic engineering for drug discovery and development. Nat Rev Drug Discov. 2003; 2(12): 1019–1025. PubMed Abstract | Publisher Full Text\n\nHakkim FL, Al-Buloshi M, Al-Sabahi J: Frankincense derived heavy terpene cocktail boosting breast cancer cell (MDA-MB-231) death in vitro. Asian Pac J Trop Biomed. 2015; 5(10): 824–828. Publisher Full Text\n\nSmanski MJ, Zhou H, Claesen J, et al.: Synthetic biology to access and expand nature’s chemical diversity. Nat Rev Microbiol. 2016; 14(3): 135–149. PubMed Abstract | Publisher Full Text | Free Full Text\n\nd’Alessio PA, Bisson JF, Béné MC: Anti-stress effects of d-limonene and its metabolite perillyl alcohol. Rejuvenation Res. 2014; 17(2): 145–149. PubMed Abstract | Publisher Full Text\n\nKlauke AL, Racz I, Pradier B, et al.: The cannabinoid CB2 receptor-selective phytocannabinoid beta-caryophyllene exerts analgesic effects in mouse models of inflammatory and neuropathic pain. 
Eur Neuropsychopharmacol. 2014; 24(4): 608–620. PubMed Abstract | Publisher Full Text\n\nBonamin F, Moraes TM, Dos Santos RC, et al.: The effect of a minor constituent of essential oil from Citrus aurantium: the role of β-myrcene in preventing peptic ulcer disease. Chem Biol Interact. 2014; 212: 11–19. PubMed Abstract | Publisher Full Text\n\nSirikantaramas S, Morimoto S, Shoyama Y, et al.: The gene controlling marijuana psychoactivity: molecular cloning and heterologous expression of Delta1-tetrahydrocannabinolic acid synthase from Cannabis sativa L. J Biol Chem. 2004; 279(38): 39767–74. PubMed Abstract | Publisher Full Text\n\nTaura F, Tanaka S, Taguchi C, et al.: Characterization of olivetol synthase, a polyketide synthase putatively involved in cannabinoid biosynthetic pathway. FEBS Lett. 2009; 583(12): 2061–2066. PubMed Abstract | Publisher Full Text\n\nvan Bakel H, Stout JM, Cote AG, et al.: The draft genome and transcriptome of Cannabis sativa. Genome Biol. 2011; 12(10): R102. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGagne SJ, Stout JM, Liu E, et al.: Identification of olivetolic acid cyclase from Cannabis sativa reveals a unique catalytic route to plant polyketides. Proc Natl Acad Sci U S A. 2012; 109(31): 12811–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGünnewich N, Page JE, Köllner TG, et al.: Functional expression and characterization of trichome- specific (-)-limonene synthase and (+)-α-pinene synthase from Cannabis sativa. Nat Prod Commun. 2007; 2: 223–232.\n\nColby SM, Alonso WR, Katahira EJ, et al.: 4S-limonene synthase from the oil glands of spearmint (Mentha spicata). cDNA isolation, characterization, and bacterial expression of the catalytically active monoterpene cyclase. J Biol Chem. 1993; 268(31): 23016–24. PubMed Abstract\n\nBohlmann J, Meyer-Gauen G, Croteau R: Plant terpenoid synthases: molecular biology and phylogenetic analysis. Proc Natl Acad Sci U S A. 1998; 95(8): 4126–33. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMassimino L: In silico gene expression profiling in Cannabis sativa. F1000Research. 2017; 6: 69. Publisher Full Text\n\nConesa A, Götz S, García-Gómez JM, et al.: Blast2GO: A universal tool for annotation, visualization and analysis in functional genomics research. Bioinformatics. 2005; 21(18): 3674–3676. PubMed Abstract | Publisher Full Text\n\nCarbon S, Ireland A, Mungall CJ, et al.: AmiGO: Online access to ontology and annotation data. Bioinformatics. 2009; 25(2): 288–289. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGENE-E. Cambridge (MA): The Broad Institute of MIT and Harvard. Reference Source\n\nMartin DM, Gershenzon J, Bohlmann J: Induction of volatile terpene biosynthesis and diurnal emission by methyl jasmonate in foliage of Norway spruce. Plant Physiol. 2003; 132(3): 1586–1599. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPain S: A potted history. Nature. 2015; 525(7570): S10–1. PubMed Abstract | Publisher Full Text\n\nSatyal P, Setzer WN: Chemotyping and Determination of Antimicrobial, Insecticidal, and Cytotoxic Properties of Wild-Grown Cannabis sativa from Nepal. 2014; 3(1–4): 9–16. Reference Source\n\nPeralta-Yahya PP, Ouellet M, Chan R, et al.: Identification and microbial production of a terpene-based advanced biofuel. Nat Commun. 2011; 2: 483. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "19961", "date": "10 Mar 2017", "name": "Akan Das", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the present work, 23,630 transcripts were scanned for evolutionarily conserved protein domains and annotated using Gene Ontology analysis. A total of 215 evolutionarily conserved genes encoding enzymes presumably involved in terpenoid metabolism are described on the basis of gene expression profiles from the NCBI GEO repository. The identification of candidate genes for terpenoid metabolism in Cannabis sativa will ease future investigations of terpenoid metabolic pathways.\nIt is a small piece of work on bioinformatics analysis of transcripts which presents useful information to plant researchers. Hence, the article can be recommended for publication as a Data Note. 
I suggest the author make a minor change in the title to be \"In silico discovery of terpenoid metabolism associated transcripts in Cannabis sativa\" and that the discussion, rather than describing generalities, should be modified to discuss the findings of the work.", "responses": [] }, { "id": "23373", "date": "09 Jun 2017", "name": "Jonathan E Page", "expertise": [ "Plant metabolism" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe present study is appropriate as a data note, and may provide a useful shortcut for future studies on terpenoid biosynthesis in cannabis. The author identifies transcripts putatively involved in terpenoid biosynthesis, including processes of both primary and specialized metabolism. The rationale is well explained, and the methods are technically sound. Major changes:\n\nMore information is required in the Materials and Methods for the study to be replicable. Two versions of the canSat3 transcriptome are available, and the authors should specify whether they used the full or representative transcript set. The authors must also provide the NCBI accession numbers for the GEO dataset they used. 
There is an unsupported statement in the results section, which reads: “Similar to other biological compounds, their abundance directly correlates with the expression levels of the enzymes involved in their metabolism.” In a complex system, this is not obvious and requires textual support.\n\nMinor changes:\n\nThe Introduction generally provides a good rationale for the study. However, the definitions provided for ‘terpene’ and ‘terpenoid’ in the second sentence of the second paragraph do not reflect the definitions used in reference 12, nor are they the traditional definitions for those terms. References 24 and 25 address enzymes characterized in other organisms, and so do not support the statement regarding their biosynthetic activities in cannabis. That information is provided in reference 23. The final statement of the second introductory paragraph is unsupported and requires citation. The Figure 2 caption states that the transcripts were taken from the reference genome, but the Materials and Methods indicate that a transcriptome was used. The Figure 3 caption insufficiently explains the x-axis labels. The caption should define ‘PK’ and ‘Finola’. Otherwise, this figure shows an interesting and valuable result.\nReaders should note that the specific products of terpene synthases and other enzymes in specialized metabolism are currently difficult to predict using in silico methods. A recent paper (Booth et al., 20171) includes biochemical and phylogenetic analysis of some of the candidate genes highlighted here, and may be of interest to readers.\n\nIs the rationale for creating the dataset(s) clearly described? Partly\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Partly\n\nAre the datasets clearly presented in a useable and accessible format? 
Yes", "responses": [] }, { "id": "23344", "date": "19 Jun 2017", "name": "Meirong Jia", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe bioinformatics study here has reported 215 genes that are putatively involved in terpenoid biosynthesis in Cannabis sativa using available genome and transcriptome resources. This knowledge would predictably accelerate terpenoid investigations in this specific species, or even more broadly. The research approaches are clear and sound. The writing is generally logical and organized. So I would recommend the publication of this work as a data note.\n\nI would suggest the author to make the following minor changes.\n\n1.  The author might wish to clarify throughout the text as to the original “transcriptome” (or “genome”) resources they have used for the initial study. For instance, adding the specific link of that file name. 2.  In the caption of figure 3, the author claimed “five gene clusters were defined” while in the text, it was “four gene clusters”; the author needs to correct this. Also, it is confusing here that some genes clustered are expressed in “Finola” because from the previous text in “introduction” “para 3”, it was said the study was conducted “within the transcripts of the canSat3 reference transcriptome of Cannabis sativa21”. Please clarify this. 3. The 215 gene candidates for terpenoids have been further predicted to be expressed distinctively across tissues and developmental stages. 
However, it is not easy to specifically check the expression pattern of each gene from the currently provided data; it would be great if the author could add this piece of information, which would be valuable for future characterization of these genes. An example study showing the expression pattern of terpene genes can be found in the paper by Wang et al., 20161.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Partly", "responses": [] } ]
1
https://f1000research.com/articles/6-107
https://f1000research.com/articles/5-2739/v1
22 Nov 16
{ "type": "Research Article", "title": "Inhibition of CD34+ cell migration by matrix metalloproteinase-2 during acute myocardial ischemia, counteracted by ischemic preconditioning", "authors": [ "Dominika Lukovic", "Katrin Zlabinger", "Alfred Gugerell", "Andreas Spannbauer", "Noemi Pavo", "Ljubica Mandic", "Denise T. Weidenauer", "Stefan Kastl", "Christoph Kaun", "Aniko Posa", "Inna Sabdyusheva Litschauer", "Johannes Winkler", "Mariann Gyöngyösi" ], "abstract": "Background. Mobilization of bone marrow-origin CD34+ cells was investigated 3 days (3d) after acute myocardial infarction (AMI) with/without ischemic preconditioning (IP) in relation to the stromal-derived factor-1 (SDF-1α)/chemokine receptor type 4 (CXCR4) axis, to search for possible mechanisms behind insufficient cardiac repair in the first days post-AMI. Methods. Closed-chest reperfused AMI was performed in pigs by percutaneous balloon occlusion of the mid-left anterior descending (LAD) coronary artery for 90min, followed by reperfusion. Animals were randomized to receive either IP initiated by 3x5min cycles of re-occlusion/re-flow prior to AMI (n=6) or control AMI (n=12). Blood samples were collected at baseline, 3d post-AMI, and at 1-month follow-up to analyse chemokines and mobilized CD34+ cells. To investigate the effects of acute hypoxia, SDF-1α and matrix metalloproteinase (MMP)-2 in vitro, a migration assay of CD34+ cells toward cardiomyocytes was performed. Results. Reperfused AMI induced significant mobilisation of CD34+ cells (baseline: 260±75 vs. 3d: 668±180 cells/μl; P<0.001) and secretion of MMP-2 (baseline: 291.83±53.40 vs. 3d: 369.64±72.89 pg/ml; P=0.011) into plasma, without affecting the SDF-1α concentration. 
IP led to the inhibition of MMP-2 (IP: 165.67±47.99 vs. AMI: 369.64±72.89 pg/ml; P=0.004) 3d post-AMI, accompanied by increased release of SDF-1α (baseline: 23.80±12.36 vs. 3d: 45.29±11.31 pg/ml; P=0.05) and CXCR4 (baseline: 0.59±0.16 vs. 3d: 2.06±1.42 ng/ml; P=0.034), with a parallel higher level of mobilisation of CD34+ cells (IP: 881±126 vs. AMI: 668±180 cells/μl; P=0.026), compared to non-conditioned AMI. In vitro, CD34+ cell migration toward cardiomyocytes was enhanced by SDF-1α, an effect that was completely abolished by 90min hypoxia and by co-incubation with MMP-2. Conclusions. Non-conditioned AMI induces MMP-2 release, hampering the ischemia-induced increase in SDF-1α and CXCR4 by cleaving the SDF-1α/CXCR4 axis, with diminished mobilization of the angiogenic CD34+ cells. IP enhances CD34+ cell mobilization via inhibition of MMP-2.", "keywords": [ "acute myocardial infarction", "stem cell mobilization", "preconditioning", "SDF-1/CXCR4 axis", "MMP-2" ], "content": "Introduction\n\nHeart regeneration after ischemic insult is still a matter of debate in spite of extensive research conducted in this field. One of the endogenous cardiac repair mechanisms is the mobilization of regenerative cells derived from bone marrow (BM), followed by migration and homing of the cells in the ischemic myocardial tissue1. Several factors have been identified that play a role in the mobilization of BM-origin stem and progenitor cells, and assist in migration and homing, such as chemotactic factors, complement fractions, cytokines, microRNAs or microvesicles. Among these substances, the axis of stromal-derived factor-1 alpha [SDF-1α; chemokine (C-X-C motif) ligand 12 (CXCL12)] and chemokine receptor type 4 (CXCR4) exerts the strongest chemoattractant stimulus for migration and homing of cells in the BM and tumors, but also in ischemic tissues, such as in myocardial ischemia or ischemic stroke2. 
The local upregulation of SDF-1α attracts cells bearing CXCR4 receptors towards the SDF-1α gradient, facilitating the migration of the cells into the target organ tissues.\n\nExploiting the beneficial effect of the SDF-1α/CXCR4 axis in cardiac repair has been attempted by repeated injections of granulocyte-colony stimulating factor (G-CSF) in patients, which aims to release stem and progenitor cells from the BM and activate the cellular CXCR4 expression of the reparative cells by interrupting the BM SDF-1α/CXCR4 axis3. However, despite the enhanced cell release and migration, and stimulation of the endogenous cardiac progenitor cells, the efficacy of clinical cardiac cell-based therapy in patients with recent acute myocardial infarction (AMI) has shown ambiguous results4, especially if the regenerative cell therapy was performed very early after ischemic injury5.\n\nAmong several mechanisms explaining the cardiac regenerative processes, secretion of distinct chemokines, cytokines, and growth factors may play an important role; these factors are released after myocardial infarction2. Tang et al. demonstrated upregulated SDF-1α expression in infarcted mouse myocardial tissue after implantation of mesenchymal stem cells induced with vascular endothelial growth factor (VEGF). This led to increased mobilisation of BM-derived stem cells6. Also, the upregulation of pro-inflammatory cytokines, such as tumor necrosis factor (TNF)α7 and interleukin (IL)-88, might initiate processes triggering increased cell trafficking, since myocardial infarction is associated with an inflammatory response8. In contrast, matrix metalloproteinase (MMP)-2 is known to cleave SDF-1α, causing its inactivation9. Another mechanistic process is cardioprotection, induced by either ischemic pre-, post- or remote conditioning, or by numerous cardioprotective substances, although their clinical importance remains uncertain. 
Ischemic preconditioning (IP) has been shown to exhibit cardioprotective mechanisms, and stimulates the recruitment and homing of progenitor cells toward ischemic myocardium in early phases of cardioprotection in several animal models7.\n\nIn our present experiment, we investigated the mobilization of BM-origin CD34+ cells 3 days after reperfused acute myocardial infarction (AMI) in relation to the SDF-1α/CXCR4 axis. We measured the release of several cytokines, such as MMP-2, VEGF, fibroblast growth factor (FGF)-2, IL-8 and TNFα, to investigate and explain the possible mechanism behind insufficient cardiac repair in the first days post-AMI. In addition, the current study explored a possible counteracting effect of IP on cytokines, CD34+ cell release and MMP-2 expression.\n\n\nMethods\n\nRandomly selected female domestic pigs (n=18; weight, 30–35kg) underwent percutaneous coronary intervention (PCI) under general anaesthesia in order to perform either ischemic preconditioning (group IP; n=6) or non-conditioned AMI (group control; n=12) (randomisation 1:2). Animals in both groups underwent 90min percutaneous balloon occlusion of the left anterior descending (LAD) coronary artery at the origin of the first diagonal branch, followed by reperfusion (balloon deflation). IP was initiated prior to the 90min LAD occlusion by 3×5min repetitive cycles of artery re-occlusion and reperfusion. All animals survived for 1 month after the experimental procedure (Figure 1).\n\n(A) Time scale: 30min anaesthesia, followed by 90min occlusion of the mid left anterior descending coronary artery (LAD), followed by reperfusion. Follow-up (FUP) time points: 3 days and 1 month. (B) Control group (n=12). (C) Ischemic preconditioning group (n=6): IP was induced by 3×5min cycles of ischemia/reperfusion (balloon inflation/deflation) prior to 90min balloon occlusion of the LAD. 
(D) Angiographic pictures of the balloon occlusion of the LAD (left, left anterior oblique acquisition at 45°) and control angiography after restoration of flow (right, anteroposterior view).\n\nAll procedures were performed with the approval of the local Experimental Animal Care Committee (EK SOI/31/26-11/2014) of the University of Kaposvar, Hungary, conforming to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85–23, revised 1996). All animal experiments were conducted at the Institute of Diagnostic Imaging and Radiation Oncology, University of Kaposvar.\n\nFemale domestic pigs received 12mg/kg ketamine, 1mg/kg xylazine and 0.04mg/kg atropine as anaesthesia. The anaesthesia was deepened via mask, maintaining 1.5–2.5 vol % isoflurane, 1.6–1.8 vol % O2 and 0.5 vol % N2O. In total, 200IU/kg of heparin was administered via the right femoral artery, and selective angiography of the LAD artery was performed prior to induction of myocardial ischemia (MI). MI was induced by 90min balloon occlusion (3.0mm ø, 15mm length, 5atm; Maverick, Boston Scientific, MA, USA) at the mid-part of the LAD artery, followed by balloon deflation. O2 saturation, blood pressure and electrocardiogram were continuously monitored during the intervention.\n\nBlood samples were collected from the femoral vein for the detection of biological markers. Samples were centrifuged at 2000×g for 10min, and the plasma and serum samples were stored at -20°C until the analysis was performed. For fluorescent activated cell sorting (FACS) analysis, whole blood was collected into EDTA-treated tubes (BD Vacutainer®; Becton, Dickinson and Company, New Jersey, USA) at baseline, 3 days post MI and at 1 month follow-up (FUP). 
All blood samples were processed within 6h.\n\nPlasma levels of stromal cell-derived factor-1 (porcine SDF-1α ELISA Kit; Neoscientific, Germany), chemokine (C-X-C motif) receptor 4 (pig CXCR4 ELISA Kit; Abbexa, UK), the 72kDa isoform of matrix metalloproteinase-2 protein (porcine MMP-2 ELISA Kit; MyBioSource, CA, USA), fibroblast growth factor-2 (porcine FGF-2 ELISA Kit; Neoscientific, Germany), and vascular endothelial growth factor (porcine VEGF ELISA Kit; Neoscientific, Germany) were measured using commercial ELISA kits, according to the manufacturer's instructions. Tumor necrosis factor alpha (Porcine TNFα Quantikine ELISA Kit; R&D Systems, MN, USA) and interleukin-8 (pig IL-8 ELISA Kit; Abcam, UK) were measured in serum, according to the manufacturer’s instructions.\n\nAbsorbance readings at a wavelength of 450nm were performed on the automated plate reader VIKTOR3 (Perkin Elmer, MA, USA), and the resulting values were determined by interpolation from a standard curve. Measurements were performed in duplicate. Plasma or serum levels of markers were measured at baseline, 3d post MI and at 1 month FUP.\n\nFACS analysis of whole blood samples was performed at baseline, 3d post MI and at 1 month FUP in order to address the kinetics of mobilized CD34+ cells in vivo. EDTA-treated venous blood samples (100μl) were labelled with PE-DY647-conjugated CD34 antibody (monoclonal antibody; host/isotype: mouse/IgG1; cat# MA1-19770; Thermo Fisher Scientific, Waltham, MA, USA) or the corresponding isotype control (PE-conjugated mouse IgG1; cat# MA1-10415; Thermo Fisher Scientific, Waltham, MA) for 20min at room temperature. An anti-human CD34 antibody was utilized due to the lack of a commercially available porcine-specific CD34 antibody (dilution: 5μl antibody/100μl whole porcine blood). 
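The ELISA read-out described above (sample concentrations determined by interpolation from a standard curve of absorbance readings) amounts to a simple calculation. The standard-curve values below are made-up illustrations, not data from this study.

```python
# Illustrative sketch: read sample concentrations off an ELISA standard
# curve by linear interpolation between known standards.
import numpy as np

def interpolate_concentration(absorbance, std_abs, std_conc):
    """Map a 450nm absorbance reading to a concentration via the standard curve.

    `std_abs` must be sorted in increasing order, matching `std_conc`.
    """
    return np.interp(absorbance, std_abs, std_conc)

# Hypothetical standard curve: absorbance vs. known concentration (pg/ml)
std_abs = np.array([0.1, 0.3, 0.6, 1.2])
std_conc = np.array([0.0, 100.0, 300.0, 700.0])
```

A sample absorbance of 0.45, halfway between the 0.3 and 0.6 standards, would read out as roughly 200 pg/ml on this hypothetical curve.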
Subsequently, erythrocyte cell lysis was performed, according to the manufacturer’s protocol, using Dako-Uti LyseTM (Dako, Agilent Technologies, Santa Clara, CA, USA), followed by fixation with PBS containing 1% paraformaldehyde. FACS analysis was performed on a CyFlow® ML/space flow cytometer (Sysmex Partec, Görlitz, Germany), acquiring 100,000 events within the gated region of mononuclear cells on forward versus side scatter. Absolute counts of CD34+ cells were obtained by multiplying the fraction of CD34+ cells determined by flow cytometry by the absolute leucocyte count per 1μl of blood.\n\nHuman adult cardiac myocytes (HACMs) were isolated from left ventricular tissue obtained from the hearts of patients undergoing heart transplantation. Mechanical dissociation of the tissue and separation of the cardiomyocytes from fibroblasts attached to the Petri-dish surface was performed, as described previously10. All tissue donors gave their informed written consent to the study. The study was approved by the local ethical committee (Medical University of Vienna, Austria; EK 151/2008) and complies with the Declaration of Helsinki.\n\nHuman cord blood CD34 positive cells (CD34+ cells) were purchased from StemCell Technologies (Grenoble, France).\n\nMigration of CD34+ cells was monitored with the commercially available Roche xCELLigence System (Acea Bioscience, CA, USA), according to the manufacturer's instructions. Briefly, a 160μl suspension of HACM cells (10,000 cells/well) was resuspended in M199 cardiac cell culture media (Sigma-Aldrich, Vienna, Austria) containing 20% FBS and 1% Pen/Strep solution (Gibco™, Thermo Fischer Scientific, MA, USA). The cell suspension was transferred to the lower chamber of the CIM-Plate with integrated gold microelectrode sensors (Figure 2).\n\nHuman adult cardiac myocytes (HACMs) were incubated in the lower chamber. 
The HACMs were then incubated for 30min with one of three treatments: stromal-derived factor-1-alpha (SDF-1α) in increasing concentrations; the highest concentration of SDF-1α together with matrix metalloproteinase-2 (MMP-2); or 90min of hypoxia followed by the addition of SDF-1α. The lower and upper chambers were then combined, and after adding serum-free medium to the upper chamber, baseline impedance measurements were performed, followed by the addition of CD34+ cells. Impedance measurements were performed to quantify CD34+ cell migration towards the HACMs.\n\nHACMs were incubated under normoxic conditions with increasing doses (0.005, 0.5 and 5.0ng/mL) of SDF-1α (Sigma-Aldrich, Vienna, Austria) to establish the maximal chemoattractant effect of SDF-1α on CD34+ cells towards HACMs.\n\nIn order to analyse the effect of MMP-2 and hypoxia on the mobilisation of CD34+ cells toward HACMs, HACMs were incubated with SDF-1α (at the established maximally effective dose of 5.0ng/mL) either with co-incubation with MMP-2 (5.0ng/mL; Sigma-Aldrich, Vienna, Austria) under normoxia, or under 90min hypoxic conditions.\n\nHypoxia (90min, 37°C, 1% O2) was induced in the HACM cell culture on the CIM-Plate by sealing the cell culture plate in an airtight plastic bag (Microbiology Anaerocult® IS Bag; Merck Millipore, Vienna, Austria) containing a dry anaerobic indicator strip.\n\nIn total, 25μl of serum-free medium (M199 containing 0.1% FBS) was added to the upper chamber 30min after treatment with the various substances, and the chambers were combined for background measurements. Subsequently, CD34+ cells (100,000 cells/well) were transferred to the upper chamber of the CIM-Plate, fitted with a polyethylene terephthalate (PET) membrane of 8μm pore diameter, and measurements were repeated. Migrating cells translocated through the PET membrane and changed the impedance signal captured by sensors in the lower chamber. 
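The impedance read-out just described lends itself to a short numerical sketch: subtract the pre-CD34+ background signal, then express migration as fold change over an untreated control. All values below are invented for illustration; the real analysis is performed by the xCELLigence RTCA software.

```python
import numpy as np

# Background-subtracted fold change of cell-index readings (illustrative).
def fold_change(treated_ci, control_ci, background_ci):
    treated = np.asarray(treated_ci) - background_ci
    control = np.asarray(control_ci) - background_ci
    return treated / control

# Triplicate 6h cell-index readings for a hypothetical SDF-1α dose vs. an
# untreated control well, with a background reading of 0.01:
fc = fold_change([0.52, 0.55, 0.50], [0.33, 0.33, 0.33], 0.01)
print(np.round(fc.mean(), 2))  # mean fold change across the three runs: 1.6
```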
The background was subtracted from all results and each experiment was repeated three times (Figure 2).\n\nContinuous parameters are expressed as means ± standard deviation. Between-group and within-group effects (baseline vs. 3d post-AMI) were analyzed by two-way repeated-measures analysis of variance (ANOVA) with Bonferroni correction. Mean differences between the groups were assessed by independent Student's t-test. Differences were considered statistically significant at P<0.05. Statistical analyses were performed with SPSS software (version 17.0 for Macintosh; IBM SPSS).\n\n\nResults\n\nIn the control group, reperfused AMI induced a significant increase in the circulating level of CXCR4 (baseline: 0.47±0.22 vs. 3 days post-AMI: 1.15±0.95ng/ml; P=0.034) with a parallel significant mobilization of CD34+ cells (baseline: 260±75 vs. 3 days post-AMI: 668±180cells/μl; P<0.001), but without an increase in SDF-1α at 3 days post-AMI (baseline: 32.02±24.35 vs. 3 days post-AMI: 26.97±15.43pg/ml; P=0.41). The circulating levels of the angiogenic cytokines (FGF-2, VEGF, IL-8 and TNFα) were not changed at 3 days post-AMI. However, the level of MMP-2 was significantly increased (baseline: 291.83±53.40 vs. 3 days post-AMI: 369.64±72.88pg/ml; P=0.011), which might explain the disruption of the SDF-1α/CXCR4 axis through cleavage of SDF-1α (Figure 3).\n\nPlasma concentrations of circulating (A) SDF-1α; (B) CXCR4; (C) MMP-2 (72kDa) protein isoform measured by porcine-specific ELISAs. (D) Absolute count of circulating CD34+ cells determined by FACS analysis. Concentrations are expressed as mean ± standard deviation. *P<0.05 between the IP and control group; +P<0.05 between baseline and 3 day values within the IP group; #P<0.05 between baseline and 3 day values within the control (non-conditioned AMI) group. 
SDF-1α, stromal-derived factor-1 alpha; CXCR4, C-X-C motif chemokine receptor 4; MMP-2 (72kDa), matrix metalloproteinase-2, 72kDa isoform; IPC, ischemic preconditioning; AMI, acute myocardial infarction; 1 Mo FUP, 1 month follow-up.\n\nIP led to significantly greater release of the SDF-1α chemokine, together with its putative receptor CXCR4, into the circulation, accompanied by downregulation of MMP-2. The number of CD34+ cells increased significantly as compared to the animals in the control group (non-conditioned AMI).\n\nThe plasma level of SDF-1α significantly increased 3 days post infarction in the IP group as compared to the control AMI group (IP: 45.29±11.31 vs. control: 27.00±15.43pg/ml; P=0.037), with normalization at the 1-month FUP (IP: 28.87±3.81 vs. control: 26.91±20.24pg/ml; P=0.85) (Figure 3A). Enhanced SDF-1α secretion was accompanied by a significant increase in its soluble CXCR4 receptor at 3 days post-AMI (baseline: 0.59±0.16 vs. 3 days post-AMI: 2.06±1.42ng/ml; P=0.034); however, the difference between the groups did not reach statistical significance (IP: 2.06±1.42 vs. control: 1.15±0.95ng/ml; P=0.79) (Figure 3B).\n\nIP significantly downregulated the secretion of MMP-2 into plasma at the 3-day FUP as compared to the control AMI group (IP: 165.67±47.99 vs. control: 369.64±72.89pg/ml; P=0.004), which returned to the baseline level at the 1-month FUP (IP: 334.00±93.10 vs. control: 347.58±80.47pg/ml; P=0.074) (Figure 3C).\n\nFACS analysis was performed to assess the impact of chemoattractant release on cell mobilization. We observed a significant parallel increase of mobilized CD34+ cells in both the non-conditioned AMI and IP groups 3 days post infarction (IP: 881±126 vs. control: 668±180cells/μl; P=0.026), returning to the baseline level after 1 month FUP (IP: 255±50 vs. control: 275±118cells/μl; P=0.85) (Figure 3D).\n\nA trend towards an increase in IL-8 was observed in the IP group at 3 days and 1 month post infarction (IP: 100.18±60.42 vs. control: 49.52±16.68pg/ml; P=0.055 at day 3; IP: 59.32±32.88 vs. control: 25.19±5.76pg/ml; P=0.059 at 1 month) (Figure 4A).\n\nPlasma concentrations of circulating (A) IL-8, (B) VEGF, (C) TNFα, (D) FGF-2. Concentrations are expressed as mean ± standard deviation. *P<0.05 between the IP and control group. FGF-2, fibroblast growth factor-2; VEGF, vascular endothelial growth factor; IL-8, interleukin-8; TNFα, tumor necrosis factor alpha; IPC, ischemic preconditioning; AMI, acute myocardial infarction; 1 Mo FUP, 1 month follow-up.\n\nThe concentration of VEGF in plasma increased significantly in the IP group at day 3 post infarction as compared with controls (IP: 41.35±5.12 vs. control: 29.01±10.18pg/ml; P=0.021) (Figure 4B).\n\nIP did not alter serum concentrations of TNFα as compared to the control group (Figure 4C).\n\nThe plasma level of FGF-2 was not significantly changed at day 3 by IP as compared to the control group (IP: 1.90±0.41 vs. control: 2.22±0.88ng/ml; P=0.45) (Figure 4D).\n\nIn order to test the opposing effect of MMP-2 on CD34+ cell mobilisation, we added MMP-2 to cultured HACMs stimulated with SDF-1α, and quantified CD34+ cell migration towards the HACMs (Figure 5).\n\nMigration was quantified as fold change in impedance compared to baseline conditions. Addition of SDF-1α at different concentrations induced chemotaxis of the CD34+ cells in a dose-dependent manner. A total of 90min of hypoxia followed by a change of the medium eliminated the chemotactic effect of SDF-1α and blocked CD34+ cell migration. This migratory effect of SDF-1α was similarly eliminated if MMP-2 was added to the normoxic cell culture. Results shown are impedance values measured at 6h post-treatment. Background results were subtracted from each impedance measurement. Parameters are expressed as mean ± standard deviation. Each experiment was repeated three times. *P<0.001 compared to the baseline normalized value. 
SDF-1α, stromal-derived factor-1α; MMP-2, matrix metalloproteinase-2.\n\nSDF-1α treatment stimulated the migration of CD34+ cells toward HACMs under normoxic conditions in a dose-dependent manner. The maximal chemotactic effect (1.6±0.11 fold change; P<0.001) was achieved by adding SDF-1α at a concentration of 5ng/ml, while 0.5ng/mL and 0.05ng/mL SDF-1α resulted in migration rates of 1.35±0.18 fold (P<0.001) and 1.08±0.02 fold change (P=0.43), respectively, compared to the control culture of HACMs and CD34+ cells without SDF-1α.\n\nCo-incubation of HACMs with MMP-2 under normoxic conditions completely eliminated the SDF-1α chemotactic effect on CD34+ cell migration towards the cardiomyocytes (0.98±0.1 fold change; P=0.71).\n\nInterestingly, incubation of the cardiomyocytes under 90min hypoxic conditions inhibited migration of CD34+ cells, even when the highest effective dose of SDF-1α (5ng/ml) was added to the cell culture as a chemoattractant (1.00±0.04 fold change; P=0.75).\n\n\nDiscussion\n\nHere, we demonstrate that 1) myocardial ischemia triggers the release of circulating MMP-2, which inhibits SDF-1α and CXCR4 release; 2) SDF-1-induced migration of CD34+ cells towards cardiomyocytes was inhibited by MMP-2 in vitro; 3) IP inhibited MMP-2 release, thereby increasing both SDF-1α and CXCR4 levels, resulting in a higher level of CD34+ cell mobilization 3 days post ischemic injury in vivo; 4) IP induced VEGF secretion in the second window of cardioprotection.\n\nReperfused AMI led to an increase in CXCR4, but not SDF-1α, at 3 days post-infarction, with moderate enhancement of circulating CD34+ counts. Similarly, AMI caused a significant elevation of MMP-2, which is produced by macrophages in the case of acute tissue injury. MMP-2 disrupts the SDF-1α/CXCR4 axis by cleaving SDF-1α to N-terminally truncated SDF-111. This form of SDF-1 is unable to trigger CXCR4 signalling, thereby preventing the chemoattractant function of SDF-1α/CXCR4 in human progenitor cells12. 
Since the upregulation of MMP-2 post-AMI may inhibit retention of hematopoietic stem cells at the site of ischemic injury, targeted modulation of MMP-2 expression has the potential to improve the outcome of regenerative therapies9.\n\nOur previous study demonstrated that IP in the early phase post-infarction (early window of protection, 2h after the start of reperfusion) induced mobilization of BM-derived haematopoietic (HSCs) and mesenchymal stem cells (MSCs), involving the release of distinct cytokines7. In our present work, we analyzed the effect of IP on the mobilization of CD34+ regenerative cells and measured cytokine release (MMP-2, VEGF, FGF-2, IL-8 and TNFα) in the late (second) window of protection.\n\nWe observed a significantly elevated SDF-1α plasma level in the IP group at 3 days post infarction, as compared to the non-conditioned AMI group. This confirmed our earlier assumption that SDF-1α is released in a later time window after IP7. Previous in vitro and in vivo experiments have shown increased cell migration in response to treatment with SDF-1α13, and increased mobilisation of BM-derived cells toward injured tissue after SDF-1α overexpression6,14. The putative receptor for the SDF-1α chemokine is CXCR4, which is also expressed in mouse cardiomyocytes14 and mobilises mesenchymal stem cells in patients with ST-segment elevation myocardial infarction2. The elevated level of SDF-1α was paralleled by an increased number of circulating CD34+ cells. This suggests that IP stimulates CD34+ cell migration by SDF-1α/CXCR4 upregulation within the first days after AMI.\n\nThe increased concentration of MMP-2 (72kDa) at 3 days post-infarction was completely abolished by IP, which might be an additional beneficial effect of IP in a translational large animal model, and is consistent with rodent experiments15. IP has shown cardioprotective effects against ischemia/reperfusion injury in accepted experimental models. 
Induction of IP in a mouse model led to improvement of cardiac function and increased cell survival, accompanied by the release of BM-derived cells16. Accordingly, our previous4 and present studies suggest that IP stimulates endogenous mechanisms promoting the recruitment of CD34+ cells in both the early and late windows of cardioprotection.\n\nIn order to test the direct effect of MMP-2 on the SDF-1α/CXCR4 axis, we performed in vitro experiments and observed that MMP-2 completely inhibited SDF-1α-induced CD34+ cell mobilization.\n\nInterestingly, our experiments also revealed that 90min of hypoxia abolishes the SDF-1α chemotactic effect in vitro. By contrast, it has been reported that hypoxia-inducible factor 2α, which is released in hypoxia, binds to the promoter sequence of CXCR4, the putative SDF-1α receptor, and activates the migratory activity of endothelial progenitor cells17. We cannot completely explain our findings, but we assume that the release of hypoxia-triggered factors, such as MMP-2, may locally inhibit the migratory capacity of the regenerative cells. This is also in concordance with findings in humans: early administration of regenerative cells has debatable effects on myocardial regeneration18.\n\nNon-conditioned AMI did not influence the release of the circulating cytokines FGF-2, VEGF, IL-8 and TNFα. In contrast, IP induced a marked release of circulating VEGF and a trend towards an increase in IL-8 at 3 days post-AMI, indicating the stimulation of additional pro-migratory cytokines by IP for enhanced cardioprotection.\n\nIn our previous experiments, IP increased plasma VEGF levels immediately after myocardial infarction (first window of protection)7. In the present experiment, VEGF was still increased 3 days post AMI in the IP group (second window of protection) as compared to the control AMI group. Similarly to our study, Kamota et al. 
showed amplified secretion of VEGF and SDF-1α up to 6 hours post infarction in a mouse model of IP16. Tang et al. also reported induced mobilisation of stem cells by VEGF/SDF-1α trafficking in a rat model6.\n\nFGF-2 is an important chemotactic factor, as well as a prominent cardioprotective and angiogenic agent19. Since FGF-2 was not significantly induced by IP in our experiment, we assume that this protein does not participate in the mechanisms of the IP-elicited late window of cardioprotection.\n\nThe acute phase of AMI after IP is characterized by an increased level of TNFα, triggering the release of additional cytokines, such as IL-6 and IL-8, and cell adhesion molecules. Our previous data demonstrated that IP resulted in elevated serum levels of TNFα with a concomitant IL-8 increase immediately after the induction of reperfusion7. The later time window after AMI revealed heterogeneous results. TNFα remained moderately increased 3 days post infarction, with a continued moderate increase after 1 month FUP in both groups, most probably due to the developing chronic phase of myocardial infarction. Interestingly, IP induced a trend towards enhanced IL-8 release; IL-8 is a potent enhancer of progenitor cell mobilisation in response to ischemia, although it is also associated with pro-inflammatory processes8,9.\n\nIn conclusion, the present study revealed that AMI induces MMP-2 release, which hampered the ischemia-induced increase in SDF-1α and CXCR4 by cleaving the SDF-1α/CXCR4 axis. This led to diminished mobilization of the angiogenic CD34+ cells. IP induced CD34+ cell mobilization in the late phase (second window) of cardioprotection via inhibition of MMP-2 release, thereby also increasing circulating SDF-1α and CXCR4, in parallel with enhanced VEGF secretion. The in vitro migration assay confirmed the anti-migratory effect of MMP-2 and its direct negative effect on SDF-1α-induced cell migration. 
Accordingly, our experiment might explain the inhibited homing of mobilized or transplanted cells in the ischemic myocardium resulting in decreased efficacy of cell-based therapies early after AMI.\n\nEven though we demonstrate IP-induced mobilisation of CD34+ cells in a large animal model of reperfused AMI, the clinical relevance of IP remains uncertain. We have concentrated on mechanisms involved in cell mobilisation in terms of chemokine and cytokine secretion.\n\nAn important limitation is the utilisation of human CD34+ FACS antibody due to lack of commercially available porcine products. However, the number of mobilized CD34+ cells correspond with the available mobilized cell numbers published several times2,16,20; bearing in mind, that the normal count of white blood cells of pigs is 12–20 thousand cells/μl blood.\n\n\nData availability\n\nDataset 1. Raw data for XCelligence measurements of cell migration assay (DOI: 10.5256/f1000research.9957.d14207921).\n\nDataset 2. Raw data obtained from ELISA and FACS analyses (DOI: 10.5256/f1000research.9957.d14208022).", "appendix": "Author contributions\n\n\n\nDL and MG conceived the study. MG designed and carried out large animal experiments. AP, DL, KZ and IS performed laboratory experiments and analysis. DL, AG, NP and AS contributed to the design and preparation of large animal experiments. LM, DW and JW were involved in design of FACS analysis. IS, CK and SK designed and carried out the cell migration assay. DL, JW and MG prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nIi M, Horii M, Yokoyama A, et al.: Synergistic effect of adipose-derived stem cell therapy and bone marrow progenitor recruitment in ischemic heart. 
Lab Invest. 2011; 91(4): 539–552. PubMed Abstract | Publisher Full Text\n\nWang Y, Johnsen HE, Mortensen S, et al.: Changes in circulating mesenchymal stem cells, stem cell homing factor, and vascular growth factors in patients with acute ST elevation myocardial infarction treated with primary percutaneous coronary intervention. Heart. 2006; 92(6): 768–774. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaba F, Soleimani M, Kaviani S, et al.: G-CSF induces up-regulation of CXCR4 expression in human hematopoietic stem cells by beta-adrenergic agonist. Hematology. 2014; 20(8): 462–468. PubMed Abstract | Publisher Full Text\n\nGyöngyösi M, Wojakowski W, Lemarchand P, et al.: Meta-Analysis of Cell-based CaRdiac stUdiEs (ACCRUE) in patients with acute myocardial infarction based on individual patient data. Circ Res. 2015; 116(8): 1346–1360. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJanssens S, Dubois C, Bogaert J, et al.: Autologous bone marrow-derived stem-cell transfer in patients with ST-segment elevation myocardial infarction: double-blind, randomised controlled trial. Lancet. 2006; 367(9505): 113–121. PubMed Abstract | Publisher Full Text\n\nTang JM, Wang JN, Zhang L, et al.: VEGF/SDF-1 promotes cardiac stem cell mobilization and myocardial repair in the infarcted heart. Cardiovasc Res. 2011; 91(3): 402–411. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGyöngyösi M, Posa A, Pavo N, et al.: Differential effect of ischaemic preconditioning on mobilisation and recruitment of haematopoietic and mesenchymal stem cells in porcine myocardial ischaemia-reperfusion. Thromb Haemost. 2010; 104(2): 376–384. PubMed Abstract | Publisher Full Text\n\nSchömig K, Busch G, Steppich B, et al.: Interleukin-8 is associated with circulating CD133+ progenitor cells in acute myocardial infarction. Eur Heart J. 2006; 27(9): 1032–1037. 
PubMed Abstract | Publisher Full Text\n\nShirvaikar N, Marquez-Curtis LA, Janowska-Wieczorek A: Hematopoietic Stem Cell Mobilization and Homing after Transplantation: The Role of MMP-2, MMP-9, and MT1-MMP. Biochem Res Int. 2012; 2012: 685267. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMacfelda K, Weiss TW, Kaun C, et al.: Plasminogen activator inhibitor 1 expression is regulated by the inflammatory mediators interleukin-1alpha, tumor necrosis factor-alpha, transforming growth factor-beta and oncostatin M in human cardiac myocytes. J Mol Cell Cardiol. 2002; 34(12): 1681–1691. PubMed Abstract | Publisher Full Text\n\nMcQuibban GA, Butler GS, Gong JH, et al.: Matrix metalloproteinase activity inactivates the CXC chemokine stromal cell-derived factor-1. J Biol Chem. 2001; 276(47): 43503–43508. PubMed Abstract | Publisher Full Text\n\nPeng H, Wu Y, Duan Z, et al.: Proteolytic processing of SDF-1α by matrix metalloproteinase-2 impairs CXCR4 signaling and reduces neural progenitor cell migration. Protein Cell. 2012; 3(11): 875–882. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHou CJ, Qi YM, Zhang DZ, et al.: The proliferative and migratory effects of physical injury and stromal cell-derived factor-1α on rat cardiomyocytes and fibroblasts. Eur Rev Med Pharmacol Sci. 2015; 19(7): 1252–1257. PubMed Abstract\n\nDong F, Harvey J, Finan A, et al.: Myocardial CXCR4 expression is required for mesenchymal stem cell mediated repair following acute myocardial infarction. Circulation. 2012; 126(3): 314–324. PubMed Abstract | Publisher Full Text\n\nBencsik P, Kupai K, Giricz Z, et al.: Role of iNOS and peroxynitrite-matrix metalloproteinase-2 signaling in myocardial late preconditioning in rats. Am J Physiol Heart Circ Physiol. 2010; 299(2): H512–518. 
PubMed Abstract | Publisher Full Text\n\nKamota T, Li TS, Morikage N, et al.: Ischemic pre-conditioning enhances the mobilization and recruitment of bone marrow stem cells to protect against ischemia/reperfusion injury in the late phase. J Am Coll Cardiol. 2009; 53(19): 1814–1822. PubMed Abstract | Publisher Full Text\n\nTu TC, Nagano M, Yamashita T, et al.: A Chemokine Receptor, CXCR4, Which Is Regulated by Hypoxia-Inducible Factor 2α, Is Crucial for Functional Endothelial Progenitor Cells Migration to Ischemic Tissue and Wound Repair. Stem Cells Dev. 2016; 25(3): 266–276. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSurder D, Manka R, Lo Cicero V, et al.: Intracoronary injection of bone marrow-derived mononuclear cells early or late after acute myocardial infarction: effects on global left ventricular function. Circulation. 2013; 127(19): 1968–1979. PubMed Abstract | Publisher Full Text\n\nHouse SL, Bolte C, Zhou M, et al.: Cardiac-specific overexpression of fibroblast growth factor-2 protects against myocardial dysfunction and infarction in a murine model of low-flow ischemia. Circulation. 2003; 108(25): 3140–3148. PubMed Abstract | Publisher Full Text\n\nWojakowski W, Tendera M, Michalowska A, et al.: Mobilization of CD34/CXCR4+, CD34/CD117+, c-met+ stem cells, and mononuclear cells expressing early cardiac, muscle, and endothelial markers into peripheral blood in patients with acute myocardial infarction. Circulation. 2004; 110(20): 3213–3220. PubMed Abstract | Publisher Full Text\n\nLukovic D, Zlabinger K, Gugerell A, et al.: Dataset 1 in: Inhibition of CD34+ cell migration by matrix metalloproteinase-2 during acute myocardial ischemia, counteracted by ischemic preconditioning. F1000Research. 2016a. Data Source\n\nLukovic D, Zlabinger K, Gugerell A, et al.: Dataset 2 in: Inhibition of CD34+ cell migration by matrix metalloproteinase-2 during acute myocardial ischemia, counteracted by ischemic preconditioning. F1000Research. 2016b. Data Source" }
[ { "id": "17966", "date": "24 Nov 2016", "name": "Zhengyuan Xia", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nGeneral comments: This is an interesting study with novel findings showing that myocardial ischemia triggers the release of circulating MMP-2, which inhibits SDF-1α and CXCR4 release, and that ischemic preconditioning (IP) inhibits MMP-2 release, thereby increasing both SDF-1α and CXCR4 levels, resulting in a higher level of CD34+ cell mobilization 3 days post ischemic injury in in vivo condition in large animals. In addition to the limitations mentioned by the authors, we have the following comments that the authors may need to consider to improve the manuscript.\n\nSpecific comments:\nIs there any particular consideration why female but not male pigs were chosen as the study subjects? This needs to be clarified. Similarly, the reason why n=6 in the IP group but n=12 in the non-conditioned AMI group is not clear.\n\nIP has been shown to reduce post-ischemic IL-8 release in most, if not all, models of myocardial ischemia and reperfusion or in some clinical trial studies. While in your model, IP almost significantly increased, and could actually significantly (see below), post-ischemic IL-8. 
This should be discussed in comparison of other studies in more detail.\n\nOne may expect that IP may also could have significantly increased post-ischemic CXCR4, if the authors have 1) observed more time points; or 2) the sample size of the two groups were 9 per group rather than 6 vs. 12; or 3) the area under curve (AUC) data are compared. This applies to the above IL-8 story.\n\nHave the authors considered to perform repeated measures ANOVA with post hoc comparison to assess time effects or time-treatment interaction?\n\nThe conclusion that IP enforces CD34+ cell mobilization via inhibition of MMP-2 does not seem to have well supported by data, in particular you did not have intervention of  MMP-2 in the presence or absence of IP in the in vivo model, and you did not measure cardiac MMP-2 proteins. Therefore, it needs to be toned down.", "responses": [ { "c_id": "2374", "date": "20 Dec 2016", "name": "Dominika Lukovic", "role": "Author Response", "response": "Dear Dr. Zhengyuan Xia,   We greatly appreciate the efforts to carefully review our paper and the valuable comments. We have modified the manuscript accordingly, as follows:   General comments: This is an interesting study with novel findings showing that myocardial ischemia triggers the release of circulating MMP-2, which inhibits SDF-1α and CXCR4 release, and that ischemic preconditioning (IP) inhibits MMP-2 release, thereby increasing both SDF-1α and CXCR4 levels, resulting in a higher level of CD34+ cell mobilization 3 days post ischemic injury in in vivo condition in large animals. In addition to the limitations mentioned by the authors, we have the following comments that the authors may need to consider to improve the manuscript.   Specific comments: A. Is there any particular consideration why female but not male pigs were chosen as the study subjects? This needs to be clarified.  Answer: Thank you for the comment. 
Indeed, we have chosen female pigs, because a clear gender differences have been observed in female and male rodents, rabbits, dogs and pigs (Murphy CVRes); the incidence of cardiogenic shock and life-threatening arrhythmias were more frequent in male than female pigs by using the closed-chest reperfused AMI model. We have added this comment into the Limitation in our revised paper (page…)   B. Similarly, the reason why n=6 in the IP group but n=12 in the non-conditioned AMI group is not clear.  Answer: Due to expected higher mortality in the control group, the block randomisation of 1:2 was chosen. We have added this comment, and corrected the Method with the following text:   Randomly selected female domestic pigs (n=21; weight, 30–35kg) underwent percutaneous coronary intervention (PCI) under general anaesthesia in order to perform either ischemic preconditioning (group IP) or non-conditioned AMI (group control), with a 1:2 block randomization, due to expected higher mortality in the control group. Animals in both groups underwent 90min percutaneous balloon occlusion of the left anterior descending (LAD) coronary artery at the origin of the first diagonal branch following reperfusion (balloon deflation). IP was initiated prior to 90min LAD occlusion by 3×5min repetitive cycles of artery re-occlusion and reperfusion. One pig in the IP and 2 animals in the control group died during the AMI intervention, all remaining animals (n=6 in IP and n=12 in control groups) survived for 1 month after the experimental procedure.   (page 2)     2. And 3. IP has been shown to reduce post-ischemic IL-8 release in most, if not all,  models of myocardial ischemia and reperfusion or in some clinical trial studies. While in your model, IP almost significantly increased, and could actually significantly (see below), post-ischemic IL-8. 
This should be discussed in comparison of other studies in more detail.One may expect that IP may also could have significantly increased post-ischemic CXCR4, if the authors have 1) observed more time points; or 2) the sample size of the two groups were 9 per group rather than 6 vs. 12; or 3) the area under curve (AUC) data are compared. This applies to the above IL-8 story.   Answer: Thank you for the remark. We agree that results of plasma cytokine levels would be more informative if it measured more often. However, an additional blood sampling would require a full anaesthesia in the pigs with recent AMI with expected higher mortality. Accordingly, we have added the following text in the revised manuscript:   IL-8 is a pro-inflammatory C-X-X chemokine that is also involved in activation of pro-angiogenic processes and re-introduction of progenitor cells into the circulation. The study of Schomig et al. demonstrated significantly increased IL-8 level in AMI patients as compared to patients diagnosed with stable angina8. In our study, we observed a trend toward increased release of IL-8 in the clinically relevant porcine reperfused “STEMI” model. The levels of CXCR4 increased both in controls (with AMI) and IP groups, with a trend towards higher increase in the IP group 3-day post AMI. The differences between our and other studies might be explained by the pre- and peri-AMI medication of patients with standard care that may contribute to changes in plasma levels of cytokines in AMI patients8. Results of plasma cytokine levels would be more informative if it measured more often, in an extended time window. The area under the curve (AUC) calculation of the cytokine release data might have delivered additional results. However, for a simple blood sampling, the animals must have been fully anaesthesized, which procedure signifies an additional stress for the animals with recent AMI with predicted higher mortality.   
(page 9)     4 Have the authors considered to perform repeated measures ANOVA with post hoc comparison to assess time effects or time-treatment interaction?   Answer: Thank you for the comment. Indeed, we have performed two-way ANOVA with repeated measurements, supplemented with Bonferroni correction, as described in the originally submitted paper (Statistical analysis section). According to the Reviewer`s suggestion, we have also tested the intra-group differences according to the time-factor (statistical comparison between baseline and 3-day and 1 month), and also the “between-groups” differences by using independent t-test. The results are added into the Figure legends as follows: +P<0.05  between  baseline  and  3  day  values  within  the  IP group;  #P<0.05 between baseline and 3 day values within the control (non-conditioned  AMI) group.  *P<0.05 between  the IP and  control  group   The non-significant changes were not labeled.   We are aware, that serial blood sampling would have given more information, eg. the evident changes in mobilisation of bone marrow-derived cells following myocardial infarction occur at day 3, 7 and 14 post-ischemia23 . However, we have focused on the second window of protection, which ends at day 3.   We have added this comment in the Limitation (page 10).     5. The conclusion that IP enforces CD34+ cell mobilization via inhibition of MMP-2 does not seem to have well supported by data, in particular you did not have intervention of MMP-2 in the presence or absence of IP in the in vivo model, and you did not measure cardiac MMP-2 proteins. Therefore, it needs to be toned down.   Answer: We agree with the reviewer in this aspect. Accordingly, we have corrected the Conclusion in the Abstract and Discussion and supplemented the Limitation section as follows:   Abstract: Conclusions. 
Non-conditioned AMI induces MMP-2 release, hampering the ischemia-induced increase in SDF-1α and CXCR4 by cleaving the SDF-1α/CXCR4 axis, with diminished mobilization of angiogenic CD34+ cells. IP might influence CD34+ cell mobilization via inhibition of MMP-2. (page 1) Discussion: IP induced CD34+ cell mobilization in the late phase (second window) of cardioprotection, thereby also increasing circulating SDF-1α and CXCR4, in parallel with enhanced VEGF secretion. One mechanism of this beneficial effect of IP might be the inhibition of AMI-induced MMP-2 release. (page 10) Limitation: We revealed one additional possible beneficial mechanism of IP, namely the inhibition of MMP-2 release with consequently higher mobilization of CD34+ cells, which was confirmed in our in vitro experiment. However, a direct association along the IP-MMP-2-CD34+ axis would have to be confirmed in vivo, by blocking MMP-2 in animals subjected to AMI and IP. We did not measure myocardial MMP-2 levels, an analysis that would have required sacrificing the animals at most 72 h post IP-AMI (second window of protection), whereas our animals survived a 1-month follow-up. (page 10)" } ] } ]
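The statistical response above describes repeated-measures ANOVA supplemented with Bonferroni correction. The correction step alone can be sketched as follows; the three raw p-values are hypothetical stand-ins for the baseline vs. 3-day, baseline vs. 1-month and IP vs. control comparisons:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for multiple comparisons.

    Each raw p-value is multiplied by the number of comparisons m (capped
    at 1.0); a comparison is significant if its adjusted p stays below alpha.
    Returns (adjusted p-values, significance flags).
    """
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, [p_adj < alpha for p_adj in adjusted]

# Three hypothetical pairwise tests (illustrative p-values, not study data)
adj, sig = bonferroni([0.010, 0.030, 0.200])
print(sig)  # [True, False, False]
```

Equivalently, one can keep the raw p-values and compare against a lowered threshold alpha/m; the two formulations give identical decisions.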
1
https://f1000research.com/articles/5-2739
https://f1000research.com/articles/6-89/v1
30 Jan 17
{ "type": "Opinion Article", "title": "Seven perspectives on GPCR H/D-exchange proteomics methods", "authors": [ "Xi Zhang" ], "abstract": "Recent research shows surging interest to visualize human G protein-coupled receptor (GPCR) dynamic structures using the bottom-up H/D-exchange (HDX) proteomics technology. This opinion article clarifies critical technical nuances and logical thinking behind the GPCR HDX proteomics method, to help scientists overcome cross-discipline pitfalls, and understand and reproduce the protocol at high quality. The 2010 89% HDX structural coverage of GPCR was achieved with both structural and analytical rigor. This article emphasizes systematically considering membrane protein structure stability and compatibility with chromatography and mass spectrometry (MS) throughout the pipeline, including the effects of metal ions, zero-detergent shock, and freeze-thaws on HDX result rigor. This article proposes to view bottom-up HDX as two steps to guide choices of detergent buffers and chromatography settings: (I) protein HDX labeling in native buffers, and (II) peptide-centric analysis of HDX labels, which applies (a) bottom-up MS/MS to construct peptide matrix and (b) HDX MS to locate and quantify H/D labels. The detergent-low-TCEP digestion method demystified the challenge of HDX-grade GPCR digestion. 
GPCR HDX proteomics is a structural approach; thus its choice of experimental conditions should let structure lead and digestion follow, not the opposite.", "keywords": [ "GPCR", "H/D-exchange", "lipids", "membrane proteins", "detergents", "structural proteomics" ], "content": "Abbreviations\n\nGPCR, G protein-coupled receptor; HDX, H/D-exchange; TM, transmembrane; DDM, n-dodecyl-β-D-maltopyranoside; TCEP, Tris-2-carboxyethylphosphine; DLT, DDM-low-TCEP; CHS, cholesteryl hemisuccinate; β2AR, β2 adrenergic receptor; CcO, cytochrome c oxidase; TSPO, translocator protein; UPLC, ultra-performance LC; EM, electron microscopy; LCP, lipidic cubic phase; PC, phosphatidylcholine; DPC, dodecylphosphocholine (12:0); DMPC, dimyristoyl phosphatidylcholine, 1,2-dimyristoyl-sn-glycero-3-phosphocholine (14:0/14:0); PO, 16:0/18:1, 1-palmitoyl-2-oleoyl; DO, 18:1/18:1, 1,2-dioleoyl; PE, phosphatidylethanolamine; PG, phosphatidylglycerol; PS, phosphatidylserine; PA, phosphatidic acid; PI, phosphatidylinositol; PIPn, PI phosphate.\n\n\nIntroduction\n\nThis opinion article is a response to the recent call to “strive for reproducible science”1. January 2010 saw the publication of the first fully automated membrane protein bottom-up H/D-exchange (HDX) proteomics method, which can map human G protein-coupled receptor (GPCR) dynamic conformations in solution at repeated HDX coverage of 89%, out of ~90% MS/MS coverage2. This method broke the years-long sub-25% coverage impasse, provided the first useful HDX proteomics protocol to obtain meaningful structural information of seven-transmembrane (TM) GPCR for drug discovery, and established HDX proteomics as a powerful mainstage tool for GPCR structure-function investigation. These peptides were robustly reproduced in over two hundred independent HDX runs, using several ligand-states of prototypic human GPCR β2 adrenergic receptor (β2AR) from numerous batches of purifications (2 and unpublished study by Xi Zhang and Patrick R. 
Griffin, et al.). Enabled by a DDM-low-TCEP (DLT) digestion method (DDM, n-dodecyl-β-D-maltopyranoside; TCEP, Tris-2-carboxyethylphosphine), this protocol integrates autosampler control programs to coordinate continuous full sets of HDX incubation, online digestion and data acquisition of high-performance liquid chromatography mass spectrometry (HPLC MS), and is flexible for users to choose 0-to-3600-second or longer-hour incubation modules, and regular or longer HPLC for MS/MS sequencing. Subsequently, this protocol has been applied in large-scale GPCR efforts and attracted broad interest from the GPCR community3–9.\n\nHowever, misrepresentations have also emerged3,6, reflecting misunderstanding of the GPCR HDX proteomics approach at multiple levels. Outstanding problems include: confusing the HPLC MS/MS and HPLC MS steps; confusing the various roles of detergents; incorrectly claiming the 2010 study analyzed HDX-labeled peptides with a 120-min HPLC MS experiment; and calling the 89% HDX coverage invalid3. Questionable procedures include: neglecting the HPLC MS/MS part of HDX and the critical optimization of pepsin column digestion3; destabilizing membrane proteins by dilution with zero-detergent buffers and introducing Na/K interference to the GPCR protein system, such as using a quench/digestion buffer composed of 20 mM TCEP and 0.1 M KH2PO4 pH 2.01 to dilute GPCR DDM/NaCl solution3; disturbing proteins or labels with extra freeze-thaws3; neglecting the effects of the bicelle detergents, lipids and adducts on MS, data-dependent MS/MS acquisition and peptide identification3; and switching to manual mode for membrane protein HDX. Backed by these problematic procedures, the JASMS May 2015 paper repeatedly claimed CHAPSO/DMPC (dimyristoyl phosphatidylcholine, 14:0/14:0) bicelle specifically as a “better solubilization method than DDM for HDX-MS analysis of GPCRs”3, but minimized discussing the structural concerns of CHAPSO/DMPC on proteins. 
The peptide MS spectra of CHAPSO/DMPC appeared unusually noisy compared with DDM3, raising questions about potential effects on spectrum quality and HPLC column health over long-term practice. Meanwhile, another study submitted in May 2015 reported that human β2AR purified from Sf9 did not predominantly sequester PC or 14:0/14:0 chains from membrane, but enriched for cholesterol by 17.7 fold, and 18:1/18:1 chains by 80 fold10, raising structural concerns about CHAPSO/DMPC.\n\nThis opinion article reasons that these problems reflect a common lack of systematic thinking and confusion about fundamentals in the GPCR bottom-up HDX proteomics approach, such as the MS versus MS/MS step, the protein structure HDX labeling versus label analysis step, and the roles of detergent/lipid additive for structure versus for digestion. Given the recent surging interest to study GPCRs using bottom-up HDX3–9,11,12, this viewpoint clarifies critical technical nuances and logical thinking, and emphasizes systematically considering membrane protein structure stability and HPLC MS compatibility, throughout the pipeline. This article explains bottom-up HDX as two flexibly coupled modules to guide choices of detergent buffers and HPLC settings, and highlights the effects of metal ions, zero-detergent shock and freeze-thaws on membrane protein structure, stability and HDX result rigor. Rather than a comprehensive overview of HDX or membrane protein methods8,9,12,13, or a refutation of particular publications, this article aims to provide a systematic practical guide to help scientists overcome cross-discipline pitfalls, and understand and reproduce the GPCR HDX methods at high quality. Ignorance of these nuances, rather than lack of care or diligence, likely caused the previous impasse and emerging problems and endangers future success. 
Strengthened by important non-HDX biophysics studies published after 2010, these seven first-hand insights are critical to clear emerging misconceptions, but are not discussed in the original 2010 report.\n\n\n1. Deep-sequencing-based bottom-up HDX MS: a two-stage analysis\n\nAlthough HDX descriptions usually list multiple steps and elaborate on the well-established logistics of bottom-up proteomics3,6, what has critically enabled the membrane protein HDX breakthroughs2,14 is to think in terms of two distinct yet flexibly-coupled modules beyond the routines (Figure 1). The overall method workflow of bottom-up HDX structural proteomics of membrane protein GPCR can be viewed as two steps: (I) label via H/D-exchange, and (II) analyze—identify and quantify—H/D labels using bottom-up proteomics. H/D label analysis is peptide-centric and also has two stages: (1) identify and construct peptide matrix (a set of reproducibly identified peptides) using HPLC MS/MS deep sequencing, and (2) quantify H/D-labels in MS for each identified peptide, using MS peak area summed from the peptide’s isotopic envelope (Figure 1). These two stages share the same protease digestion method and, as far as possible, the same temperature and HPLC-MS instrument, but can differ in some other HPLC and MS/MS or MS conditions to best fulfill distinct purposes.\n\nId, identification; Qt, quantitation.\n\nThe MS/MS deep sequencing stage aims to identify as many robust peptides as possible for target proteins. TM sequences often fall short of ionization efficiency (overdigestion is explained below); thus a longer HPLC gradient is desired to simplify the eluting population and allow these peptides a better chance to get picked for MS/MS scan in the ion-abundance-ranked data-dependent acquisition. Because these peptides do not carry H/D-labels, longer HPLC MS/MS analysis time causes no harm here. 
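The quantitation stage above sums MS peak area across a peptide's isotopic envelope. One minimal reduction of that idea, assuming simple centroiding rather than the protocol's actual software, is the intensity-weighted centroid mass, whose shift against the undeuterated control gives the deuterium uptake:

```python
def centroid_mass(mz_values, intensities):
    """Intensity-weighted centroid m/z of a peptide isotopic envelope."""
    total = sum(intensities)
    if total <= 0:
        raise ValueError("empty envelope")
    return sum(m * i for m, i in zip(mz_values, intensities)) / total

def deuterium_uptake(centroid_labeled, centroid_undeuterated, charge=1):
    """Mass shift (Da) of a labeled peptide relative to its undeuterated control."""
    return (centroid_labeled - centroid_undeuterated) * charge

# Hypothetical singly charged envelope before and after D2O incubation
m0 = centroid_mass([500.0, 501.0, 502.0], [100.0, 60.0, 20.0])
mt = centroid_mass([500.0, 501.0, 502.0, 503.0], [40.0, 80.0, 50.0, 10.0])
print(round(deuterium_uptake(mt, m0), 3))  # → 0.611
```

Real pipelines additionally correct for back-exchange and report %D against fully deuterated controls; this sketch only shows the envelope-to-number reduction.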
To obtain a robust MS/MS peptide matrix, the 2010 protocol then matched and iteratively filtered these MS/MS spectra using a multilayer method: (1) each peptide should score above 20 in MASCOT search against the target sequence, while its spectra stay unmatched in decoy search against the reversed sequence (most decoy matches scored way below 10); (2) peptide sequences should comply with the pepsin preference sites reported by Hamuro et al. in 200815; (3) fragment ions in MS/MS spectra appear reasonable in manual inspection; and (4) precursor ions should be repeatedly confirmed in high-resolution MS using the HDX MS experiment’s HPLC gradient. This MS/MS stage provides an initial peptide matrix, which is further refined in subsequent HDX MS.\n\nHowever, restricted by the minutes’ time window of the HDX pipeline to minimize H/D label back-exchange, the HDX MS stage uses a shorter HPLC gradient and just MS scans. Peptide identification in HDX MS data is based on: (1) accurate peptide mass matching to those in the pre-constructed MS/MS peptide matrix; (2) retention time reproducibility over all HDX runs and correlation with the longer gradient; and (3) iterative confirmations via checking consistency across redundant peptide ladders, multiple charge states, and overall HDX profile trend throughout the H/D-incubation time points. Targeted MS/MS may further confirm ambiguous peptide ions. Ideally peptide MS/MS identification should be performed at the same HPLC gradient as used in HDX MS quantitation, but it challenges the capacity and scan speed of current popular HPLC and mass spectrometer instruments, and often proved unnecessary for simpler purified protein samples on high-resolution orbitrap analyzers (2,16 and unpublished study by Xi Zhang and Patrick R. Griffin et al.). 
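The first two layers of the multilayer filter above (target score above 20, no decoy match) can be sketched as a simple filter; the record fields are hypothetical, not the actual MASCOT output format:

```python
def filter_peptide_matrix(candidates, score_cutoff=20.0):
    """Keep peptides scoring above the cutoff against the target sequence
    while having no decoy (reversed-sequence) match.

    candidates -- dicts with hypothetical fields: 'sequence', 'target_score',
                  'decoy_match' (True if the spectrum matched the reversed sequence)
    """
    return [c for c in candidates
            if c["target_score"] > score_cutoff and not c["decoy_match"]]

matrix = filter_peptide_matrix([
    {"sequence": "DEVCL", "target_score": 35.2, "decoy_match": False},  # kept
    {"sequence": "AGHTW", "target_score": 12.0, "decoy_match": False},  # low score
    {"sequence": "LLKPF", "target_score": 28.4, "decoy_match": True},   # decoy hit
])
print([c["sequence"] for c in matrix])  # ['DEVCL']
```

The remaining layers (pepsin-site compliance, manual spectrum inspection, precursor confirmation at the HDX gradient) require domain data and are not automated here.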
Nonetheless, the rapidly growing data-independent acquisition MS/MS, which uses wide precursor isolation windows for simultaneous fragmentation, may reconcile this gap17–20.\n\nTherefore, constructing the MS/MS peptide matrix favors a longer HPLC gradient for exhaustive identification (no H/D labels), but the MS-based H/D label quantitation can apply short HPLC to minimize H/D label loss, as this step is based on MS peptide mass matching. The 2010 protocol achieved the 89% HDX coverage by devising a total 9.5-min HPLC method for HDX MS2, not the 120 min claimed by Duc et al.3. This short HPLC for HDX MS was repeated in subsequent large-scale GPCR HDX studies. As a part of the 2010 strategy, changing from the regular 60-min to the 120-min HPLC method for MS/MS sequencing successfully recovered multiple TM peptides, and they were robustly identified throughout HDX MS mapping.\n\n\n2. DDM as a tool for making structural-grade protein versus a tool for digestion\n\nDDM/cholesteryl hemisuccinate (DDM/CHS) bicelle-like micelle served as a tool to prepare upstream structural-grade membrane protein solution samples, and to mark these conformations with matching D2O buffer. As a tool for downstream digestion, DDM-low-TCEP alone suffices to support protease activity and to solubilize and stabilize substrates against aggregation throughout digestion. Importantly, the combination of these two modules—protein preparation-labeling and digestion—is flexible (Figure 1).\n\nUpstream protein states can vary vastly with sample preparation methods, which should thus be screened with rigorous function assays (multi-facet, including activity, ligand binding and stability) and matched by the H/D-labeling buffer. However, this does not void the broad utility of the DLT method for downstream digestion. 
Not only can DLT digestion be applied to various upstream protein preparations, including myriad detergent/lipid bicelles, lipid bilayer nanodiscs, DDM/CHS bicelle-like micelle, membrane pellets and intact organelles21, but also DLT HDX-proteomics provides a tool to visualize their different effects on protein in-solution conformations. Remarkably, the DLT digestion method proved highly compatible with soluble protein projects to share the same regular reversed-phase (RP)-HPLC ESI MS and MS/MS instrument platform. Across large-scale applications, no deterioration was observed in peptide MS spectra (smooth not noisy), column health or sample carryover, similar to non-DLT soluble proteins (2 and unpublished data by Xi Zhang and Patrick R. Griffin, et al.) (Supplementary Figure 1 and Supplementary Figure 2B). Therefore, the HDX-grade digestion of GPCRs is technically solved and is no longer hampered by solubilization during the digestion.\n\nBy contrast, CHAPSO/DMPC is less suitable as a tool for digestion in broader proteomic applications. Both CHAPSO and DMPC carry net fixed positive charges at acidic pH; combined with high concentrations (the CHAPSO critical micelle concentration, cmc, is 8 mM, so 5x cmc is 40 mM), they have long been observed to dominate ionization, likely interfere with peptide RP-HPLC data-dependent MS/MS, and may harm long-term RP-HPLC column health, despite possible chromatographic improvement in ultra-performance LC (UPLC). Even anionic cholate entailed UPLC22, and anionic deoxycholate (cmc 6 mM, no fixed positive charges) proved to require removal by ethyl acetate extraction before HPLC injection23. 
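Concentrations quoted above in cmc multiples (e.g. 5x the 8 mM CHAPSO cmc, i.e. 40 mM) convert to % w/v via the molar mass. A small sketch; the ~630.9 g/mol CHAPSO molecular weight is an assumption for illustration:

```python
def mm_to_percent_wv(conc_mm, mw_g_per_mol):
    """Convert a detergent concentration from mM to % w/v (grams per 100 mL).

    conc_mm / 1000 gives mol/L; times the molar mass gives g/L; divide by 10
    to express as g per 100 mL.
    """
    return conc_mm / 1000.0 * mw_g_per_mol / 10.0

# 5x the 8 mM CHAPSO cmc; MW ~630.9 g/mol is an assumed illustrative value
print(round(mm_to_percent_wv(40.0, 630.9), 2))  # → 2.52
```

The same conversion applies to any detergent once its molar mass is known, which makes cross-comparing "n x cmc" recipes between papers straightforward.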
Samples that contain CHAPS, similar to CHAPSO (same charged groups, same 8 mM cmc, one less hydroxyl), are routinely rejected at proteomics facilities: “Non acceptable buffers include NP40, CHAPS, Triton X, and PEG” (https://mass-spec.stanford.edu/sample-preparation; Sample Preparation, Stanford University Vincent Coates Foundation Mass Spectrometry Laboratory; Sept 28, 2015 access).\n\n\n3. CHAPSO/DMPC bicelle versus DDM/CHS bicelle-like micelle: not unique to the HDX approach\n\nHow the presentation methods influence membrane proteins’ native structures is the premier concern common to all solution-based biophysical approaches that aim to measure their functional/native states14,24. For solution-based structural technologies, the freedom from high-resolution crystallogenesis—itself a quality control of how comfortable (though not always native) membrane proteins are in these conditions—presents both advantages and pitfalls, and calls for extra rigor and caution in protein handling, data interpretation, and cross-examination with other function and structure measurements. To avoid masking the effects of intended perturbations, such as ligand stimulation, protein buffers often aim to approach native-like and function-neutral: stabilize the protein and minimize distortion (deactivation or over-activation).\n\nThe 2010 study prepared human GPCR protein in DDM/CHS solution2, because mammalian GPCR natural habitats include 20–25% cholesterol, and membrane proteins are increasingly resolved to contain conserved binding sites for cholesterol, CHS and other derivatives25–29. The natural 20–25% cholesterol habitat proved possible to re-establish by using DDM/CHS, which forms wide bicelle-like micelles around membrane proteins and greatly enhances GPCR activity and stability compared with DDM alone25,30. 
Although open to improvement, DDM/CHS emerges as a viable method to unify solution-phase means—crystallography, electron microscopy, structural proteomics and nuclear magnetic resonance (NMR)—to spearhead charting the solution-phase structures of GPCRs and complexes. Such actionable atomic clarity is in urgent need and provides the pivotal foundation to further understand interactions with molecules, such as certain lipids. Although CHAPSO/DMPC bicelle has produced membrane protein crystals and NMR results31,32, the 2010 protocol is cautious and chooses not to present proteins in CHAPSO/DMPC for multiple reasons, as specified below and in Supplementary File 1, and increasing evidence since 2010 supports these cautions. NMR appears to favor zwitterionic CHAPSO/DMPC for technical convenience33–35, but mass spectrometry-based structural proteomics approaches are free from such technical constraints.\n\nThe chemical structures of lipids matter. Indeed CHAPSO/DMPC presents a lipid-rich environment, but by no means resembles human GPCRs’ native lipid bilayer habitats. In-depth consideration of lipids is essential (Figure 2) and is detailed in Supplementary File 1 and briefly summarized here. First, as a tool to present human GPCRs and complexes in near-native states (Step I, Figure 1), bilayer reconstitution is not restrained to 14:0/14:0 DMPC. CHAPSO/DMPC differs vastly from GPCR native lipid bilayer habitats in chemistry, and shall not dictate the choice of lipids to recreate bilayers. Neither does CHAPSO/DMPC bring much technical advantage to Step I for HDX-proteomics and most other solution-phase biophysical methods. To the contrary, the broad adaptability of downstream proteomics readout allows Step I to maximally prioritize protein structures, such as using various micelle, bicelle, bilayer, pellet, nanodisc or cell organelles (Figure 1). 
Second, as a tool to solubilize GPCR for HDX-grade digestion and peptide-centric label analysis (Step II, Figure 1), the compatibility of high-dose CHAPSO/DMPC with large-scale direct HPLC MS and MS/MS runs appears controversial. By contrast, DDM alone with optional low TCEP proves effective and well suited for RP-HPLC MS instruments when applied rationally (further cross-discipline pitfalls discussed below); thus HDX-grade GPCR digestion (Step II) is no longer limited by solubilization. As a structural approach, HDX-proteomics choice of experimental conditions should let structure lead (Step I) and digestion follow (Step II). Touting CHAPSO/DMPC specifically for the HDX-proteomics approach—by arguing CHAPSO/DMPC is a better solubilization method than DDM for GPCR digestion based on questionable practices, yet minimizing structural considerations on proteins—is misleading.\n\nBlue or cyan, structures from two independent crystallizations in monoolein/cholesterol LCP (4UC1 or 4RYO); orange, structure from DPC-micelle NMR (2MGY). Distortions in all three domains of 5TM TSPO were seen in the DPC-produced NMR structure (2MGY), contrasting the well-aligned independently acquired crystal structures from LCP (4UC1 and 4RYO) or DDM micelle and EM structure (not shown)62,63. TSPO structures were directly aligned in PyMOL.\n\nTherefore, lipid choice in bilayer reconstitution is not restricted to just CHAPSO/DMPC, NMR-favored zwitterionic head groups, or 14:0/14:0 chains, but should and could prioritize protein conformation, activity and stability. Indeed proteins may differ, and extensive method optimization is necessary. Membrane proteins’ responses to bilayer environment can be highly dynamic, diverse and sensitive; thus, multifaceted structure-activity measurements are essential to data interpretation. 
Recent rigorous bilayer reconstitutions for activity measurement typically examined various phospholipid head groups, chain lengths and cholesterol additive36–38, and increasingly chose POPS38, POPE27, POPE/POPG39–41, POPC/POPE/POPG27 or DOPE/POPC/POPS40 mixtures, with 16:0/18:1 or 18:1/18:1 fatty acid chains37,40, rather than 100% 14:0/14:0 DMPC.\n\n\n4. Optimization of pepsin column reaction is a key for coverage\n\nIn the chosen digestion buffer, HDX proteolysis is completed within seconds of column residence time: the highly reactive pepsin column is obviously the most sensitive component of the platform to affect coverage and reproducibility. Pepsin column length, diameter, manufacturing of beads and column, temperature and flow rate may all change digestion products’ peptide length and reproducibility. Particularly, the pepsin-beads coupling reaction conditions affect pepsin surface density, activity and extent of autolysis—thus the effective enzyme surface concentration of final columns—and may vary between operators and manufacturers.\n\nBecause the 2010 HDX method is a completely automated protocol that integrates all experimental conditions, such as HDX incubation time, pepsin column flow rate, HPLC gradients and MS methods, manual operation only involves placing samples in designated sample trays, and selecting whether to use or not use the additional long-hour incubation module. However, the typical shelf life of each batch of pepsin beads and columns for peptide reproducibility is only about 10 months at 4°C. Therefore, rigorous practice means at least checking the optimal digestion flow rate and temperature based on the batch of beads and columns in use. 
These parameter updates are allowed in the 2010 protocol by simply typing the numbers, without changing the programming for sample handling and data acquisition.\n\nInstead of under-digestion, low TM coverage is often caused by pepsin over-digestion, and may be rescued by optimizing pepsin column flow rate and temperature, and by applying longer HPLC for MS/MS2. Alternatively, longer TM peptides may be generated by reversible partial deactivation of the pepsin column, and by blocking TM substrate access with bulkier, tighter or more facial amphiphiles and lipids.\n\nDuring initial digestion method development, the 30-min incubation with one column volume of pepsin bead slurry is commonly used to predict whether the pepsin column can reach digestion completion at seconds scale. But the bead slurry format falls behind in peptide reproducibility, so all actual HDX data acquisitions in this protocol use pepsin-column digestions at precisely controlled flow rates.\n\nBesides these four major points, this article further emphasizes systematic considerations of the subtle yet critical effects on membrane protein stability as follows.\n\n\n5. Na+ or K+ matters\n\nTo the structure-function of membrane proteins, especially GPCRs, Na+ and K+ are not always interchangeable; thus the measurement itself shall not introduce Na/K interference. To proteomics, mixing Na+ and K+ may cause adduct ion formation of both Na+ and K+ with peptides and other components, complicating peptide MS spectra. In 2009, development of the DLT GPCR HDX protocol started by asking whether to use Na+ or K+ buffers, and chose Na+ for all buffers (protein dilution, H/D-incubation and quench/digestion buffers) for multiple reasons. First, Na+ and K+ may differ in structural and functional effects on TM proteins. 
Previous projects used K+-based buffers to purify physiological-state high-activity CcO because both mitochondria and Rs bacteria have high internal K+, whereas Na+ was empirically screened as a tool to aid crystallogenesis29,42–44. Na+ and K+ affect CcO Ca2+ binding differently45,46, though the exact actions remain obscure. For GPCRs, unique among all common cations, Na+ was long observed to act as a fast allosteric mediator itself and control agonist/antagonist-distinct GPCR activities, and the structural bases started to be resolved by crystallography26,36,47–56. Physiological levels of Na+ (and Li+ to a lesser extent) favored opiate receptor binding with antagonists against agonists48–50; thus the predominant use of NaCl buffers in GPCR purification may partly account for the larger difficulty of obtaining a stable GPCR-agonist complex upfront, compared with a GPCR-antagonist complex. Consistently, crystallography revealed that Na+ also modulates the ion flux activity of the TM G protein-gated K+ channel (GIRK2) by binding at specific sites in its intracellular domain immediately adjacent to the TM domain interface57,58. Consistently using NaH2PO4 rather than KH2PO4 for the digestion buffer avoided fast side effects that may occur even within the short pre-digestion time window.\n\nSecond, avoiding Na+/K+ mixing also prevented the formation of both Na+- and K+-adduct ions, which may exponentially complicate MS spectra and increase the risk of interfering with useful peptide peaks for both MS/MS isolation and HDX MS. The recent open data search method revealed extensive formation of peptide-Na+ adduct ions59. Third, continuing with Na+ buffers provides a common ground for structure-function and dynamic-static structure correlations, as most other function and structure characterizations of GPCRs were performed in Na+ buffers. Lastly, abrupt changes of chemicals may destabilize membrane proteins; thus the method design sought to achieve effects with minimal disturbance of upstream conditions. 
Likewise, when membrane protein solution uses KCl60, the H/D labeling and digestion buffers should ideally switch to K+ versions accordingly.\n\n\n6. 0.1% + 0 does not equal 0.05% + 0.05%: the buffering effect on membrane protein stability\n\nAbrupt changes in buffer concentration, particularly dilution with zero (or sub-cmc)-detergent or zero-electrolyte solutions, tend to immediately impact membrane proteins and cause destabilization and aggregation. To provide such buffering protection, ~3× cmc DDM was included in quench solution and proved to increase digestion coverage better than zero-detergent quench2. Similarly, destabilization and aggregation were seen in soluble proteins, such as human peptidyl arginine deiminase, upon zero-electrolyte dilution (unusually high occurrence of bimodal peptide HDX isotope envelopes despite pre-HDX removal of visible aggregates), and were solved also by buffering (unpublished results by Xi Zhang and Patrick R. Griffin). Often GPCR-CHAPSO/DMPC bicelles were prepared by adding CHAPSO/DMPC to, not replacing, the original DDM protein solution; thus the GPCR-CHAPSO/DMPC bicelle sample contained double doses of micelle/bicelle, and may present an unequal ground for shielding/buffering effects. High occurrence of bimodal peptide HDX profiles could also be artifacts from non-optimized HPLC or MS settings61.\n\n7. Full automation facilitates both structural and analytical rigor of membrane protein HDX\n\nThe DLT digestion method enabled membrane proteins to be analyzed on a fully automated HDX platform that orchestrates continuous sample handling and analysis. Rather than a dispensable convenience, the DLT-enabled automated protocol presents special advantages to maximize the structural/analytical rigor and sensitivity for membrane proteins. First, it eliminated detrimental post-labeling freeze-thaws. 
Post-H/D-labeling freeze-thaws of H/D-bearing membrane proteins or peptides may not only distort their H/D-label profiles, but also destabilize/aggregate membrane proteins, which worsens digestion peptide reproducibility and structural coverage. Second, it precisely controlled the time and temperature during and after GPCR H/D labeling and digestion, presenting a robust level ground that is vital for large-scale sensitive rigorous comparison to precisely locate stimuli-caused conformation changes. Membrane proteins’ non-TM domains are often highly sensitive to ligand and protein interactions; their amide HDX dynamics can vary on a split-second scale, though HDX data recording often starts at seconds. Third, its random acquisition order and insertion of one or more blank buffer runs between every two protein samples minimized carryover, facilitating analytical rigor. Indeed the technical error bars of %D from quadruplicates were tiny throughout the GPCR HDX examination; peptides of the ~89% HDX coverage were well reproduced, proving analytical robustness, and presented structural validity (2 and unpublished study by Xi Zhang and Patrick R. Griffin, et al.). Such rigor risks being compromised when operators have to frequently see and manually freeze, thaw and transfer protein and peptide samples. This fully automated enclosed membrane protein protocol also offers facile rigor to measure the effects of various light and electromagnetic stimuli. The longer-hour HDX incubation could be useful to directly profile membrane protein complex stability.\n\nPrevious reviewer(s) since Nov 2015 have repeatedly claimed the necessity of including high-pressure digestion and ion-mobility peptide separation for membrane protein HDX, and insisted that GPCR solubilization in HDX is not solved and that automation belongs to the future. 
However, most GPCR samples that entered the HDX pipeline have indeed been solubilized, many already studied by crystallography, which requires not only GPCR solubilization but also mono-dispersion. Automation was deemed critical for membrane protein HDX rigor and was included in the GPCR HDX protocol since its invention in late 2009; thus automation has been a reality since then. This protocol solved the high-coverage GPCR HDX challenge without needing high pressure for digestion or ion mobility MS for peptide separation. These perspectives shall help operators to achieve high-quality applications of this protocol.\n\n\nConclusions\n\nThese perspectives, from the original method development, clarify critical technical nuances and logical thinking for the GPCR bottom-up HDX proteomics approach. The DDM-low-TCEP method resolved the technical barrier of HDX-grade GPCR digestion, and showed that 7TM GPCR structures can be robustly approached with bottom-up HDX proteomics; thus GPCR HDX is no longer hampered by solubilization during the digestion step. For effective application, it helps to view the GPCR HDX experiments as two modules that allow different flexibility in choosing detergent tools and HPLC MS settings (Figure 1). Systematically considering membrane protein conformation and stability throughout the pipeline is vital, because Na+/K+ mixing, zero-detergent shock, freeze-thaws and imprecise sample handling may all affect the structural and/or analytical rigor of GPCR HDX results. The 2010 89% HDX coverage was obtained with both structural and analytical rigor. HDX proteomics is a structural technology; its choice of experimental conditions should—and now could—let structure lead and digestion follow, not vice versa.", "appendix": "Author contributions\n\n\n\nX.Z. 
conceived and wrote the article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nI thank Dr. Shelagh M. Ferguson-Miller for discussions of lipids, membrane proteins and this manuscript. I thank Dr. Royd Carlson for help with editing the manuscript. I thank Dr. Keith W. Miller for providing the data summarized in Supplementary Figure 2A.\n\n\nSupplementary material\n\nSupplementary Figure 1: Contrast of hGPCR β2AR coverage using the 2010 DLT method versus the urea digestion method, showing that appropriate application of the DDM method did not cause a solubilization problem.\n\nSupplementary Figure 2: Contrast of 285 kDa hGABAAR coverage using the 2015 FDD DDM-based digestion method versus common brute force, further confirming that DDM is effective in protein solubilization during digestion.\n\nSupplementary File 1: The choice of CHAPSO/DMPC bicelle versus DDM/CHS bicelle-like micelle is not unique to the HDX approach; lipid chemical structures matter.\n\n\nReferences\n\nSweedler JV: Striving for Reproducible Science. Anal Chem. 2015; 87(23): 11603–11604. PubMed Abstract | Publisher Full Text\n\nZhang X, Chien EY, Chalmers MJ, et al.: Dynamics of the beta2-adrenergic G-protein coupled receptor revealed by hydrogen-deuterium exchange. Anal Chem. 2010; 82(3): 1100–1108. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuc NM, Du Y, Thorsen TS, et al.: Effective application of bicelles for conformational analysis of G protein-coupled receptors by hydrogen/deuterium exchange mass spectrometry. J Am Soc Mass Spectrom. 2015; 26(5): 808–817. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChung KY, Rasmussen SG, Liu T, et al.: Conformational changes in the G protein Gs induced by the β2 adrenergic receptor. Nature. 2011; 477(7366): 611–615.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nShukla AK, Westfield GH, Xiao K, et al.: Visualization of arrestin recruitment by a G-protein-coupled receptor. Nature. 2014; 512(7513): 218–222. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi S, Lee SY, Chung KY: Conformational analysis of g protein-coupled receptor signaling by hydrogen/deuterium exchange mass spectrometry. Methods Enzymol. 2015; 557: 261–278. PubMed Abstract | Publisher Full Text\n\nXiao K, Chung J, Wall A: The power of mass spectrometry in structural characterization of GPCR signaling. J Recept Signal Transduct Res. 2015; 35(3): 213–219. PubMed Abstract | Publisher Full Text\n\nKonermann L, Pan J, Liu YH: Hydrogen exchange mass spectrometry for studying protein structure and dynamics. Chem Soc Rev. 2011; 40(3): 1224–1234. PubMed Abstract | Publisher Full Text\n\nForest E, Rey M: Hydrogen exchange mass spectrometry of proteins. Fundamentals, methods and applications. Weis, D. D. (Ed.); John Wiley & Sons, Ltd UK. 2016; 279–294. Publisher Full Text\n\nDawaliby R, Trubbia C, Delporte C, et al.: Allosteric regulation of G protein-coupled receptor activity by phospholipids. Nat Chem Biol. 2016; 12(1): 35–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSavas JN, Stein BD, Wu CC, et al.: Mass spectrometry accelerates membrane protein analysis. Trends Biochem Sci. 2011; 36(7): 388–396. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhitelegge JP: Integral membrane proteins and bilayer proteomics. Anal Chem. 2013; 85(5): 2558–2568. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPirrone GF, Iacob RE, Engen JR: Applications of hydrogen/deuterium exchange MS from 2012 to 2014. Anal Chem. 2015; 87(1): 99–118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang X: Less is More: Membrane Protein Digestion Beyond Urea-Trypsin Solution for Next-level Proteomics. Mol Cell Proteomics. 2015; 14(9): 2441–2453. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHamuro Y, Coales SJ, Molnar KS, et al.: Specificity of immobilized porcine pepsin in H/D exchange compatible conditions. Rapid Commun Mass Spectrom. 2008; 22(7): 1041–1046. PubMed Abstract | Publisher Full Text\n\nWang Y, Kumar N, Solt LA, et al.: Modulation of retinoic acid receptor-related orphan receptor alpha and gamma activity by 7-oxygenated sterol ligands. J Biol Chem. 2010; 285(7): 5013–5025. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVenable JD, Dong MQ, Wohlschlegel J, et al.: Automated approach for quantitative analysis of complex peptide mixtures from tandem mass spectra. Nat Methods. 2004; 1(1): 39–45. PubMed Abstract | Publisher Full Text\n\nGillet LC, Navarro P, Tate S, et al.: Targeted data extraction of the MS/MS spectra generated by data-independent acquisition: a new concept for consistent and accurate proteome analysis. Mol Cell Proteomics. 2012; 11(6): O111.016717. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlack WA, Stocks BB, Mellors JS, et al.: Utilizing Microchip Capillary Electrophoresis Electrospray Ionization for Hydrogen Exchange Mass Spectrometry. Anal Chem. 2015; 87(12): 6280–6287. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDoerr A: DIA mass spectrometry. Nat Meth. 2015; 12: 35. Publisher Full Text\n\nRey M, Forest E, Pelosi L: Exploring the conformational dynamics of the bovine ADP/ATP carrier in mitochondria. Biochemistry. 2012; 51(48): 9727–9735. PubMed Abstract | Publisher Full Text\n\nHebling CM, Morgan CR, Stafford DW, et al.: Conformational analysis of membrane proteins in phospholipid bilayer nanodiscs by hydrogen exchange mass spectrometry. Anal Chem. 2010; 82(13): 5415–5419. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKulak NA, Pichler G, Paron I, et al.: Minimal, encapsulated proteomic-sample processing applied to copy-number estimation in eukaryotic cells. Nat Methods. 2014; 11(3): 319–324. 
PubMed Abstract | Publisher Full Text\n\nGaravito RM, Ferguson-Miller S: Detergents as tools in membrane biochemistry. J Biol Chem. 2001; 276(35): 32403–32406. PubMed Abstract | Publisher Full Text\n\nCherezov V, Rosenbaum DM, Hanson MA, et al.: High-resolution crystal structure of an engineered human beta2-adrenergic G protein-coupled receptor. Science. 2007; 318(5854): 1258–1265. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu W, Chun E, Thompson AA, et al.: Structural basis for allosteric regulation of GPCRs by sodium ions. Science. 2012; 337(6091): 232–236. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPenmatsa A, Wang KH, Gouaux E: X-ray structure of dopamine transporter elucidates antidepressant mechanism. Nature. 2013; 503(7474): 85–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang KH, Penmatsa A, Gouaux E: Neurotransmitter and psychostimulant recognition by the dopamine transporter. Nature. 2015; 521(7552): 322–327. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQin L, Hiser C, Mulichak A, et al.: Identification of conserved lipid/detergent-binding sites in a high-resolution structure of the membrane protein cytochrome c oxidase. Proc Natl Acad Sci U S A. 2006; 103(44): 16117–16122. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThompson AA, Liu JJ, Chun E, et al.: GPCR stabilization using the bicelle-like architecture of mixed sterol-detergent micelles. Methods. 2011; 55(4): 310–317. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRasmussen SG, Choi HJ, Rosenbaum DM, et al.: Crystal structure of the human beta2 adrenergic G-protein-coupled receptor. Nature. 2007; 450(7168): 383–387. PubMed Abstract | Publisher Full Text\n\nUjwal R, Bowie JU: Crystallizing membrane proteins using lipidic bicelles. Methods. 2011; 55(4): 337–341. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStroud RM: New tools in membrane protein determination. F1000 Biol Rep. 2011; 3: 8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurr UH, Gildenberg M, Ramamoorthy A: The magic of bicelles lights up membrane protein structure. Chem Rev. 2012; 112(11): 6054–6074. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurr UH, Soong R, Ramamoorthy A: When detergent meets bilayer: birth and coming of age of lipid bicelles. Prog Nucl Magn Reson Spectrosc. 2013; 69: 1–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlexandrov AI, Mileni M, Chien EY, et al.: Microscale fluorescent thermal stability assay for membrane proteins. Structure. 2008; 16(3): 351–359. PubMed Abstract | Publisher Full Text\n\nHattori M, Hibbs RE, Gouaux E: A fluorescence-detection size-exclusion chromatography-based thermostability assay for membrane protein precrystallization screening. Structure. 2012; 20(8): 1293–1299. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlthoff T, Hibbs RE, Banerjee S, et al.: X-ray structures of GluCl in apo states reveal a gating mechanism of Cys-loop receptors. Nature. 2014; 512(7514): 333–337. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrohawn SG, del Mármol J, MacKinnon R: Crystal structure of the human K2P TRAAK, a lipid- and mechano-sensitive K+ ion channel. Science. 2012; 335(6067): 436–441. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang W, Whorton MR, MacKinnon R: Quantitative analysis of mammalian GIRK2 channel regulation by G proteins, the signaling lipid PIP2 and Na+ in a reconstituted system. eLife. 2014; 3: e03671. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHite RK, Yuan P, Li Z, et al.: Cryo-electron microscopy structure of the Slo2.2 Na+-activated K+ channel. Nature. 2015; 527(7577): 198–203. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang X Thesis (Ph D): Investigating the functional roles of lipids in membrane protein cytochrome c oxidase from Rhodobacter sphaeroides using mass spectrometry and lipid profile modification, Michigan State University, Chemistry and Biochemistry & Molecular Biology. 2009.\n\nZhang X, Hiser C, Tamot B, et al.: Combined genetic and metabolic manipulation of lipids in Rhodobacter sphaeroides reveals non-phospholipid substitutions in fully active cytochrome c oxidase. Biochemistry. 2011; 50(19): 3891–3902. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang X, Tamot B, Hiser C, et al.: Cardiolipin deficiency in Rhodobacter sphaeroides alters the lipid profile of membranes and of crystallized cytochrome oxidase, but structure and function are maintained. Biochemistry. 2011; 50(19): 3879–3890. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee A, Kirichenko A, Vygodina T, et al.: Ca2+-binding site in Rhodobacter sphaeroides cytochrome C oxidase. Biochemistry. 2002; 41(28): 8886–8898. PubMed Abstract | Publisher Full Text\n\nVygodina TV, Kirichenko A, Konstantinov AA: Cation binding site of cytochrome c oxidase: progress report. Biochim Biophys Acta. 2014; 1837(7): 1188–1195. PubMed Abstract | Publisher Full Text\n\nBennett JP Jr, Logan WJ, Snyder SH: Amino acid neurotransmitter candidates: sodium-dependent high-affinity uptake by unique synaptosomal fractions. Science. 1972; 178(4064): 997–999. PubMed Abstract | Publisher Full Text\n\nPert CB, Pasternak G, Snyder SH: Opiate agonists and antagonists discriminated by receptor binding in brain. Science. 1973; 182(4119): 1359–1361. PubMed Abstract | Publisher Full Text\n\nPert CB, Snyder SH: Opiate Receptor Binding of Agonists and Antagonists Affected Differentially by Sodium. Mol Pharmacol. 1974; 10: 868–879. Reference Source\n\nPasternak GW, Snyder SH: Identification of novel high affinity opiate receptor binding in rat brain. Nature. 
1975; 253(5492): 563–565. PubMed Abstract | Publisher Full Text\n\nKuhar MJ, Pert CB, Snyder SH: Regional distribution of opiate receptor binding in monkey and human brain. Nature. 1973; 245(5426): 447–450. PubMed Abstract | Publisher Full Text\n\nWilson HA, Pasternak GW, Snyder SH: Differentiation of opiate agonist and antagonist receptor binding by protein modifying reagents. Nature. 1975; 253(5491): 448–450. PubMed Abstract | Publisher Full Text\n\nSnyder SH, Pasternak GW: Historical review: Opioid receptors. Trends Pharmacol Sci. 2003; 24(4): 198–205. PubMed Abstract | Publisher Full Text\n\nHorstman DA, Brandon S, Wilson AL, et al.: An aspartate conserved among G-protein receptors confers allosteric regulation of alpha 2-adrenergic receptors by sodium. J Biol Chem. 1990; 265(35): 21590–21595. PubMed Abstract\n\nFenalti G, Giguere PM, Katritch V, et al.: Molecular control of δ-opioid receptor signalling. Nature. 2014; 506(7487): 191–196. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatritch V, Fenalti G, Abola EE, et al.: Allosteric sodium in class A GPCR signaling. Trends Biochem Sci. 2014; 39(5): 233–244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhorton MR, MacKinnon R: Crystal structure of the mammalian GIRK2 K+ channel and gating regulation by G proteins, PIP2, and sodium. Cell. 2011; 147(1): 199–208. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhorton MR, MacKinnon R: X-ray structure of the mammalian GIRK2-βγ G-protein complex. Nature. 2013; 498(7453): 190–197. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChick JM, Kolippakkam D, Nusinow DP, et al.: A mass-tolerant database search identifies a large proportion of unassigned spectra in shotgun proteomics as modified peptides. Nat Biotechnol. 2015; 33(7): 743–749. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nO'Connor C, White KL, Doncescu N, et al.: NMR structure and dynamics of the agonist dynorphin peptide bound to the human kappa opioid receptor. Proc Natl Acad Sci U S A. 2015; 112(38): 11852–11857. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuttman M, Wales TE, Whittington D, et al.: Tuning a High Transmission Ion Guide to Prevent Gas-Phase Proton Exchange During H/D Exchange MS Analysis. J Am Soc Mass Spectrom. 2016; 27(4): 662–668. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi F, Liu J, Zheng Y, Garavito RM, et al.: Protein structure. Crystal structures of translocator protein (TSPO) and mutant mimic of a human polymorphism. Science. 2015; 347(6221): 555–558. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuo Y, Kalathur RC, Liu Q, et al.: Protein structure. Structure and activity of tryptophan-rich TSPO proteins. Science. 2015; 347(6221): 551–555. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "20226", "date": "21 Feb 2017", "name": "John J Bergeron", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe author has provided a description of the methodological issues that were resolved by the author for a 2010 paper using hydrogen deuterium exchange and mass spectrometry to characterize the beta adrenergic GPCR. The article may be a response to another article published by another group in 2015, also on the use of hydrogen deuterium exchange and mass spectrometry to characterize the beta adrenergic GPCR. A recently published review1 attempts to summarize how hydrogen deuterium exchange and mass spectrometry has transformed not only GPCR characterizations (the author of the submitted opinion article here, Dr. Zhang, is credited for his work in the review1) but also the study of peripheral membrane proteins. In the Opinion article submitted by the author, Dr. Zhang, the two-step approach for the characterization of the beta-adrenergic receptor is indicated in section 1 and Fig. 1. Detergent considerations are indicated in sections 2 and 3 as well as Fig. 2. Proteinase digestion is considered in section 4, monovalent cation considerations in section 5, buffer concentrations in section 6 and automation in section 7.
Three supplementary figures are used to compare sequence coverage using different protocols, and an extensive 7-page supplementary section addresses detergent (solubilisation) choices.\n\nIt is difficult for this reviewer to see the conceptual advance of this submitted Opinion piece over the recent 2015 review by the same author (Dr. Zhang) in Molecular and Cellular Proteomics (indeed, some of the figures for GPCR coverage are similar and are credited as such in this submitted Opinion piece).\n\nOne innovation that may be an extension of hydrogen deuterium exchange mass spectrometry is the application of “Native” mass spectrometry2 to integral membrane proteins such as the beta-adrenergic receptor. Perhaps the author could consider the important biological discoveries made through hydrogen deuterium exchange mass spectrometry for integral membrane proteins using, as an example, the beta adrenergic receptor, and the hope that this can be extended through “Native” mass spectrometry, especially for resolving the dynamics of the interactions with the intracellular subunits of the heterotrimeric signaling complex.", "responses": [] }, { "id": "21854", "date": "20 Apr 2017", "name": "Dapeng Chen", "expertise": [ "Reviewer Expertise Protein mass spectrometry" ], "suggestion": "Approved", "report": "Approved\n\nThe coupling of amide hydrogen/deuterium exchange (HDX) with mass spectrometry has been successfully used for the determination of protein dynamics.
Recent studies1-2 have shown that this method can reach high protein sequence coverage (>80%) in the study of GPCRs. However, frustrations exist, such as confusion over the HPLC method and inconsistency in the use of detergents. In the opinion article, Zhang discussed three critical technical aspects that should be carefully examined, including 1) bottom-up MS analysis, 2) the use of detergents, and 3) the proteolysis strategy. In particular, the author expanded the discussion on the problematic use of the CHAPSO/DMPC solution for membrane proteins. The author claimed that this solution could form positively charged ions and interfere with MS analysis. In addition, the author raised concerns about the presence of Na+ and K+ in the buffer, which could encourage the production of ion-adducted peptides and hinder MS data interpretation.\nIt is an undeniable fact that the DIA MS/MS strategy has been highly successful in bottom-up proteomics analysis. However, the application of this method could be very challenging due to the limited development of bioinformatics tools. Therefore, the author’s statement that DIA could improve MS/MS analysis independently is open to argument. Nevertheless, the combination of DDA and DIA is preferable, which would further improve sequence coverage in bottom-up proteomics analysis. Recently, there have been advancements in top-down proteomics analysis. GPCRs are low-mass proteins (~40 kDa), and it is feasible to obtain high-resolution MS/MS data using the ETD fragmentation method. When coupled with bioinformatics tools, top-down proteomics analysis can be used to determine protein topology3. The author may consider the potential use of top-down proteomics analysis on GPCRs.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations?
Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes", "responses": [] }, { "id": "22463", "date": "08 May 2017", "name": "Jun Qin", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis manuscript presented seven perspectives for the investigation of GPCR structure using H/D exchange with mass spectrometry. The manuscript was well-written and clarified many misconceptions about using H/D exchange to obtain rough GPCR structural information. It is nice that the author put emphasis on the description of how to maintain the solution structure of GPCR with its relevant lipid environments during the H/D exchange reaction. It can be accepted after further addressing the following questions.\n\n1). HDX can be used to illustrate the effect of post-translational modifications1 on protein dynamics, ligand binding, and substrate specificity. It is an important part of the application of this intriguing technology and should be included in this manuscript.\n\n2). Crosslinking (XL) mass spectrometry of GPCR in its native state (with proper lipids and solubilizing detergents) may be a rival to HDX in the future. XL may be used with endogenous GPCRs. This should be discussed.\n\n3). The manuscript needs further proofreading.
There are many typos and grammar errors.\n\nOne example: (Page 4, Line 25) “How the presentation methods” should be corrected to “how the present methods”.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-89
https://f1000research.com/articles/5-223/v1
26 Feb 16
{ "type": "Opinion Article", "title": "Statin (3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitor)-based therapy for hepatitis C virus (HCV) infection-related diseases in the era of direct-acting antiviral agents", "authors": [ "Sara Sobhy Kishta", "Reem El-Shenawy", "Sobhy Ahmed Kishta", "Reem El-Shenawy", "Sobhy Ahmed Kishta" ], "abstract": "Recent improvements have been made in the treatment of hepatitis C virus (HCV) infection with the introduction of direct-acting antiviral agents (DAAs). However, despite successful viral clearance, many patients continue to have HCV-related disease progression. Therefore, new treatments must be developed to achieve viral clearance and prevent the risk of HCV-related diseases. In particular, the use of pitavastatin together with DAAs may improve the antiviral efficacy as well as decrease the progression of liver fibrosis and the incidence of HCV-related hepatocellular carcinoma. To investigate the management methods for HCV-related diseases using pitavastatin and DAAs, clinical trials should be undertaken. However, concerns have been raised about potential drug interactions between statins and DAAs. Therefore, pre-clinical trials using a replicon system and human hepatocyte-like cells from human-induced pluripotent stem cells should be conducted. Based on these pre-clinical trials, an optimal direct-acting antiviral agent could be selected for combination with pitavastatin and DAAs. 
Following the pre-clinical trial, the combination of pitavastatin and the optimal direct-acting antiviral agent should be compared to other combinations of DAAs (e.g., sofosbuvir and velpatasvir) according to the antiviral effect on HCV infection, HCV-related diseases and cost-effectiveness.", "keywords": [ "Hepatitis C virus (HCV) infection", "HCV infection-related diseases", "statins", "replicon system", "human hepatocyte-like cells from human induced pluripotent stem cells", "direct-acting antiviral agents (DAAs)" ], "content": "Introduction\n\nHepatitis C virus (HCV) infection is an important public health problem worldwide, as many HCV-infected patients develop liver cirrhosis and/or hepatocellular carcinoma (HCC)1.\n\nRecent improvements in the treatment of HCV infection have focused on the use of direct-acting antiviral agents (DAAs)1. However, despite successful viral clearance, many patients with advanced fibrosis continue to have HCV-related disease progression2,3. Therefore, new treatments must be developed to achieve viral clearance and prevent the risk of HCV-related diseases.\n\nStatins (inhibitors of 3-hydroxy-3-methylglutaryl-coenzyme A reductase) have been proposed as a new candidate for the treatment of HCV infection. According to previously conducted clinical studies using statins in HCV-infected patients4–8, the antiviral effect of statins on HCV infection depends on the type of statin used. Among various statins, pitavastatin showed the highest antiviral efficacy against HCV genotype 1b infection in vitro9. 
Therefore, pitavastatin represents a novel candidate for the treatment of HCV infection.\n\n\nClinical trials for pitavastatin against HCV infection\n\nAfter performing a search of the PubMed database, we identified three clinical trials using pitavastatin for the treatment of HCV infection7,8,10.\n\nPrompted by two reports published in 2010 investigating the antiviral efficacy of pitavastatin against HCV infection in vitro9,11, Shimada et al.7 conducted a randomized controlled trial. The proof-of-concept studies demonstrated the antiviral efficacy and safety of pitavastatin against HCV infection using a replicon system and human hepatocyte-like cells from human induced pluripotent stem cells (hiPSCs)9,11, and these effects of pitavastatin were confirmed in the randomized controlled trial7,12. Indeed, this series of studies7,9,11 appears to be the first to report clinical applications of hiPSCs12. After the clinical trial performed by Shimada et al.7, the antiviral efficacy and safety of pitavastatin against HCV infection were further confirmed in two clinical studies8,10. Thus, investigating the antiviral efficacy and safety of pitavastatin against HCV infection using a replicon system and human hepatocyte-like cells from hiPSCs9,11 constitutes a rational approach for discovering new drugs and/or new therapeutic methods.\n\n\nClinical trials to investigate the management methods of HCV-related diseases in the era of DAAs\n\nButt et al.13 showed that statin use was associated with improved antiviral efficacy as well as decreased progression of liver fibrosis and a reduced incidence of HCC among a large cohort of HCV-positive veterans. Furthermore, the use of statins among patients with HCV and compensated cirrhosis (n=40,512) was associated with a more than 40% lower risk of cirrhosis decompensation and death14. Moreover, statin users showed a significant reduction in the incidence of HCC15.
In addition, pitavastatin showed anti-cancer effects against human hepatoma cell lines16–18.\n\nDAAs in combination with statins have been shown to generate increased antiviral efficacy against HCV infection19. However, concerns have been raised about the drug interactions between various statins and DAAs20. For instance, simvastatin and lovastatin should be avoided in patients with HCV infection who are using boceprevir or telaprevir as a DAA21. Atorvastatin should be avoided in patients with HCV infection who are using telaprevir21, and pravastatin plus boceprevir may also pose risks21. Although rosuvastatin could be considered for use in combination with telaprevir and boceprevir21, the drug interactions between pitavastatin and DAAs remain unknown.\n\nFurthermore, according to our search of the PubMed database and UMIN Clinical Trials Registry System (http://www.umin.ac.jp/icdr/index.html), no clinical trial has been conducted for the combination of statins and DAAs. Therefore, although there is likely only a minimal additive benefit for viral clearance using statins, as new combinations of DAA (sofosbuvir and velpatasvir) therapy have shown sustained virologic response (SVR) rates above 95%22, conducting a clinical trial for the combination of pitavastatin and DAAs may be meaningful for investigating management methods to prevent fibrosis and cirrhosis or the development of HCC and other HCV-related diseases in the era of DAAs. However, in pre-clinical trials, the antiviral effects of the combination of pitavastatin and DAAs should be evaluated using a replicon system9. Furthermore, hepatotoxicities should also be evaluated for the combination of pitavastatin and DAAs using human hepatocyte-like cells from hiPSCs11. Using a replicon system and human hepatocyte-like cells from hiPSCs in a pre-clinical trial9,11, an optimal direct-acting antiviral agent could be selected for use in the combination of pitavastatin and DAAs.
After the pre-clinical trial, the combination of pitavastatin and the optimal direct-acting antiviral agent should be compared with other DAA combinations (e.g., sofosbuvir and velpatasvir) according to their antiviral efficacy against HCV infection and prevention of HCV-related diseases. Furthermore, because the cost of DAA combination treatment is very high ($83,000 to $153,000 per course of treatment)22, these new and effective hepatitis C treatments seem beyond the reach of low- and middle-income countries23. However, the cost ($0.79 to $2.59 per day) of pitavastatin (2 mg)7 is low (http://www.pharmacychecker.com/generic/price-comparison/pitavastatin/2+mg/). Therefore, the above-mentioned comparison should also be investigated in light of cost-effectiveness.\n\nIn conclusion, a pre-clinical trial investigating the combination of pitavastatin and DAAs against HCV infection using a replicon system and human hepatocyte-like cells from hiPSCs9,11 represents a rational approach for discovering a new therapeutic method.", "appendix": "Author contributions\n\n\n\nAll authors (Reem Mohamed Fathy El-Shenawy, Sara Kishta and Sobhy Kishta) equally contributed to the writing of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nCarpentier A, Tesfaye A, Chu V, et al.: Engrafted human stem cell-derived hepatocytes establish an infectious HCV murine model. J Clin Invest. 2014; 124(11): 4953–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReddy KR, Zeuzem S, Zoulim F, et al.: Simeprevir versus telaprevir with peginterferon and ribavirin in previous null or partial responders with chronic hepatitis C virus genotype 1 infection (ATTAIN): a randomised, double-blind, non-inferiority phase 3 trial. Lancet Infect Dis. 2015; 15(1): 27–35.
PubMed Abstract | Publisher Full Text\n\nSimon TG, King LY, Zheng H, et al.: Statin use is associated with a reduced risk of fibrosis progression in chronic hepatitis C. J Hepatol. 2015; 62(1): 18–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPatel K, Lim SG, Cheng CW, et al.: Open-label phase 1b pilot study to assess the antiviral efficacy of simvastatin combined with sertraline in chronic hepatitis C patients. Antivir Ther. 2011; 16(8): 1341–6. PubMed Abstract | Publisher Full Text\n\nPatel K, Jhaveri R, George J, et al.: Open-label, ascending dose, prospective cohort study evaluating the antiviral efficacy of rosuvastatin therapy in serum and lipid fractions in patients with chronic hepatitis C. J Viral Hepat. 2011; 18(5): 331–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBader T, Hughes LD, Fazili J, et al.: A randomized controlled trial adding fluvastatin to peginterferon and ribavirin for naïve genotype 1 hepatitis C patients. J Viral Hepat. 2013; 20(9): 622–7. PubMed Abstract | Publisher Full Text\n\nShimada M, Yoshida S, Masuzaki R, et al.: Pitavastatin enhances antiviral efficacy of standard pegylated interferon plus ribavirin in patients with chronic hepatitis C: a prospective randomized pilot study. J Hepatol. 2012; 56(1): 299–300. PubMed Abstract | Publisher Full Text\n\nKohjima M, Enjoji M, Yoshimoto T, et al.: Add-on therapy of pitavastatin and eicosapentaenoic acid improves outcome of peginterferon plus ribavirin treatment for chronic hepatitis C. J Med Virol. 2013; 85(2): 250–60. PubMed Abstract | Publisher Full Text\n\nMoriguchi H, Chung RT, Sato C: New translational research on novel drugs for hepatitis C virus 1b infection by using a replicon system and human induced pluripotent stem cells. Hepatology. 2010; 51(1): 344–5. PubMed Abstract | Publisher Full Text\n\nYokoyama S, Kawakami Y, Chayama K: Letter: Pitavastatin supplementation of PEG-IFN/ribavirin improves sustained virological response against HCV. 
Aliment Pharmacol Ther. 2014; 39(4): 443–4. PubMed Abstract | Publisher Full Text\n\nMoriguchi H, Chung RT, Sato C: An identification of the novel combination therapy for hepatitis C virus 1b infection by using a replicon system and human induced pluripotent stem cells. Hepatology. 2010; 51(1): 351–2. PubMed Abstract | Publisher Full Text\n\nMoriguchi H: The development of statin-based therapy for patients with hepatitis C virus (HCV) infection using human induced pluripotent stem (iPS) cell technology. Clin Res Hepatol Gastroenterol. 2015; 39(5): 541–3. PubMed Abstract | Publisher Full Text\n\nButt AA, Yan P, Bonilla H, et al.: Effect of addition of statins to antiviral therapy in hepatitis C virus-infected persons: Results from ERCHIVES. Hepatology. 2015; 62(2): 365–74. PubMed Abstract | Publisher Full Text\n\nMohanty A, Tate JP, Garcia-Tsao G: Statins Are Associated With a Decreased Risk of Decompensation and Death in Veterans With Hepatitis C-Related Compensated Cirrhosis. Gastroenterology. 2016; 150(2): 430–440.e1. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTsan YT, Lee CH, Ho WC, et al.: Statins and the risk of hepatocellular carcinoma in patients with hepatitis C virus infection. J Clin Oncol. 2013; 31(12): 1514–21. PubMed Abstract | Publisher Full Text\n\nArii K, Suehiro T, Ota K, et al.: Pitavastatin induces PON1 expression through p44/42 mitogen-activated protein kinase signaling cascade in Huh7 cells. Atherosclerosis. 2009; 202(2): 439–45. PubMed Abstract | Publisher Full Text\n\nWang J, Xu Z, Zhang M: Downregulation of survivin expression and elevation of caspase-3 activity involved in pitavastatin-induced HepG 2 cell apoptosis. Oncol Rep. 2007; 18(2): 383–7. PubMed Abstract | Publisher Full Text\n\nWang J, Tokoro T, Higa S, et al.: Anti-inflammatory effect of pitavastatin on NF-kappaB activated by TNF-alpha in hepatocellular carcinoma cells. Biol Pharm Bull. 2006; 29(4): 634–9. 
PubMed Abstract | Publisher Full Text\n\nDelang L, Paeshuyse J, Vliegen I, et al.: Statins potentiate the in vitro anti-hepatitis C virus activity of selective hepatitis C virus inhibitors and delay or prevent resistance development. Hepatology. 2009; 50(1): 6–16. PubMed Abstract | Publisher Full Text\n\nPoordad F, McCone J Jr, Bacon BR, et al.: Boceprevir for untreated chronic HCV genotype 1 infection. N Engl J Med. 2011; 364(13): 1195–206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKiser JJ, Burton JR, Anderson PL, et al.: Review and management of drug interactions with boceprevir and telaprevir. Hepatology. 2012; 55(5): 1620–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWard JW, Mermin JH: Simple, Effective, but Out of Reach? Public Health Implications of HCV Drugs. N Engl J Med. 2015; 373(27): 2678–2680. PubMed Abstract | Publisher Full Text\n\nJayasekera CR, Barry M, Roberts LR, et al.: Treating hepatitis C in lower-income countries. N Engl J Med. 2014; 370(20): 1869–71. PubMed Abstract | Publisher Full Text" }
[ { "id": "13680", "date": "11 May 2016", "name": "Susana N. Asin", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nRecent improvements in treatment strategies for Hepatitis C virus infected patients have been successful at reaching sustained virological responses yet did not prevent liver fibrosis and progression to hepatocellular carcinoma. The idea proposed in this paper that new treatments must be developed to achieve viral clearance while preventing disease progression although meritorious is not original and should be complemented with further understanding of mechanisms underlying hepatitis C disease progression in the presence of sustained virological response (i.e. absence/ or low level of viral replication). The authors stated that pitasvatin represents a novel candidate to treat HCV infection yet the rationale for selecting this statin, as the likely candidate is not clearly justified. The authors based their selection on data from a retrospective analysis demonstrating no significant difference in sustained virological response rate per protocol analysis between patients treated with PEG-IFN/ribavirin in the presence or absence of pitavastatin. Findings from a second prospective trial demonstrates the safety of pitavastatin in combination with Peg IFN plus ribavirin although the decrease in HCV RNA was only significant at 2 (4 and 12 weeks) of 6 evaluated time points of treatment. 
The third trial used pitavastatin in combination with eicosapentaenoic acid and identified this combination therapy as predictive of sustained virological response in multifactorial analysis only after genetic variation in IL28B was excluded from these factors. Thus, the authors could have presented a better rationale for the selection of this statin. An additional point of discussion that should have been included in this paper would have been the evidence that statin use decreased progression to liver fibrosis and hepatocellular carcinoma, yet this effect was independent of having attained a sustained virological response. These findings raise the possibility that the statin-mediated delay in liver fibrosis is related to their immunomodulatory rather than their antiviral effects. This assumption is supported by findings demonstrating little or no effect of statin use on HCV replication.", "responses": [] }, { "id": "13945", "date": "07 Jun 2016", "name": "Joel D. Baines", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis article would be improved with statements quantifying how commonly progression to disease occurs in patients treated with anti-HCV compounds in the absence of other drugs. The main point of the article, that there should be pre-clinical trials to test for efficacy and toxicity of dual therapy with pitavastatin, is valid.  
However, a replicon system in hepatocellular cells is not likely to duplicate the full gamut of potential side effects seen in humans receiving pitavastatin and anti-HCV drugs.  Although it is an important place to start, the risk that complications will not be revealed in a preclinical trial using this system should be acknowledged.", "responses": [ { "c_id": "2045", "date": "07 Jul 2016", "name": "Sara Kishta", "role": "Author Response", "response": "First of all, thank you very much for your suggestions. We think that your comments were very helpful. We agree with your suggestions. If you agree with our revisions, we are very happy.   1)  According to your suggestions, we added the following sentences to our text (please see, last paragraph and conclusion).   On the other hand, a replicon system in hepatocellular cells is not likely to duplicate the full gamut of potential side effects seen in patients with HCV infection receiving pitavastatin and DAAs. The risks (potential side effects) that will not be revealed in a pre-clinical trial using this system should be acknowledged. Therefore, in order to identify the risk (potential side effects) that will not be revealed in a pre-clinical trial using this system, the evaluations using human neurons and human cardiomyocytes from hiPSCs24, 25 should also be done in pre-clinical trials. In conclusion, a pre-clinical trial investigating the combination of pitavastatin and DAAs against HCV infection using a replicon system, human hepatocyte-like cells, human neurons and human cardiomyocytes from hiPSCs9,11,24,25 represents a rational approach for discovering a new therapeutic method.   2)  We revised our abstract.  pre-clinical trials using a replicon system, human hepatocyte-like cells, human neurons and human cardiomyocytes from human-induced pluripotent stem cells should be conducted.   3) We added the new references (No. 24 and 25) in the reference section." } ] } ]
1
https://f1000research.com/articles/5-223
https://f1000research.com/articles/4-173/v1
29 Jun 15
{ "type": "Opinion Article", "title": "The necessity of studying higher brain functions from a first-person frame of reference", "authors": [ "Kunjumon I. Vadakkan" ], "abstract": "Almost all higher brain functions are first-person properties and anyone seeking to study them faces significant difficulties. Since a third-person experimenter cannot access first-person properties, current investigations are limited to examining the latter by using third-person observations that are carried out at various levels. This limits the current studies to correlational experiments using third-person observed findings. In order to initiate a study of explanations for the first-person properties, experimental approaches should be undertaken from the first-person frame of reference. But, there is a huge barrier. I discuss my opinion for crossing this barrier using a three-stage approach – theoretical, computational and experimental – in that order. These stages will naturally lead to the gold standard of understanding the mechanism by replicating it in engineered systems. The hurdles and incentives of undertaking this approach are discussed.", "keywords": [ "first-person sensations", "first-person frame of reference", "higher brain functions", "semblance hypothesis", "third-person observations", "artificial intelligence" ], "content": "Introduction\n\nAttempts to interconnect third-person findings obtained at different levels such as biochemical, cellular, electrophysiological, systems, imaging and behavioral studies by different fields of neuroscience remains a challenge1,2. In contrast to other systems in the body, nervous system functions are unique in that all the higher brain functions are first-person properties of the mind. These functions include the state of being conscious, the ability to perceive sensations, the ability to internally sense retrieved memories and the ability to generate thoughts (Figure 1). 
Only the owner of the nervous system has access to these functions, making them purely first-person internal sensations3. First-person reports of these sensations through motor activity such as behaviour and speech provide surrogate markers to the third-person observers. Currently, clinical evaluation of neurological and psychiatric diseases is based on assessing the first-person reporting and third-person observed findings. This severely limits our understanding of a) internal sensations in non-responsive patients, b) defects in the mechanism of formation of the internal sensation of memory, and c) the compelling sense of reality of hallucinations in psychiatric disorders. The surrogate markers for assessing the first-person properties may not represent the true nature of the internal sensations. This is because a) behaviors may be altered due to changes in the circuitry, or b) some species of animals may voluntarily hide the truth and exhibit behaviors misrepresenting them. Similarly, almost all the current approaches use third-person observed findings at various levels4 in correlational studies with surrogate markers of biochemical changes, neuronal activations, oscillating potentials, signal changes in imaging studies, and behavioural responses to connect with the first-person properties. Another important area of study is to understand consciousness5. An entire branch of medicine that deals with blocking this first-person property of consciousness – anesthesiology – requires knowledge of its mechanism due to several reports of neurodegenerative diseases associated with anesthetics6.\n\nThird-person observed features include neuronal firing, electrophysiological changes and surrogate markers of internal sensations such as behavioral motor activities and language. It has not been possible to interconnect the different third-person findings. 
Hypothesizing the mechanism of first-person internal sensations of different higher brain functions capable of interconnecting various third-person observed findings is an optimistic step towards verifying the first-person view.\n\nCurrent third-person studies at various levels assume that first-person internal sensations are emergent properties of the system. Emergence can be adopted as a framework to study properties that cannot be explained using the third-person-observed features of the system. However, reductionism can be used to carefully examine the factors upon which the emergent properties are dependent and hypothesize the smallest possible structure-function unit from which internal sensations can be induced. Therefore, views of emergent properties and reductionism can be seen as mutually inclusive. By using them in conjunction, first-person internal sensations can be approached.\n\n\nConverting first-person sensations to third-person features\n\nThe first-person features of the higher brain functions cannot be studied in biological systems due to access issues. Recent research work is attempting to overcome this barrier by approaching higher brain functions from a first-person frame of reference, examining the locations and mechanisms that can lead to the formation of the basic units of internal sensations7,8. This drastically different approach is based on the view that the gold-standard test of understanding the formation of first-person internal sensations is to replicate the mechanism in engineered systems. This approach is being carried out in three stages. The first step is the theoretical derivation of the basic functional units of the system at the correct level that is also connected to the motor system, which can explain all the higher brain functions along with behavioral motor activity. 
It is found that a) locations from which memories can be retrieved gradually shift from the hippocampus to the cortices over several years, and b) patients recover completely after suffering from small strokes at certain locations of the brain. These suggest that the basic structure-function units are spatially definable and transferable, and that emergent functions can be integrated from multiple locations. Since a large number of functions and loss-of-function states for the system are being studied by different faculties of brain sciences, the solution capable of explaining both the first- and third-person properties is likely a unique one. In other words, there is only one solution. Therefore, theoretical work to hypothesize structure-function units is the first major step. The second step is to carry out computational studies to examine the nature of the algorithms for different modules of functions that can result in expected qualities for the generated internal sensations. The third step is to build engineered systems that can provide readouts of the formed internal sensations based on the rules by which they are built (Figure 2). It is likely to require combining the second and third steps.\n\nDiagram shows a path for the first-person scientific approaches for replicating theoretically feasible hypothesized mechanisms in engineered systems. The readouts obtained from these systems can be used by third-person experimenters to fine-tune the internal sensations to match with both the expected internal sensations and behavioral motor activity. An inevitable end-product of this approach is the development of artificially intelligent systems.\n\n\nFocal points of emergence\n\nOf all the third-person sensed findings, neuronal firing (also known as somatic spikes) can be easily observed, induced and measured. New tools to make the firing of neurons visible provide an advantage in examining the nervous system from the third-person frame of reference. 
Somatic spikes are one of the different kinds of spikes observed along the neuronal processes. Others are dendritic spikes and axonal spikes. Most importantly, the potentials originating from dendritic spikes at distant locations degrade as they arrive at the neuronal soma. When examined from the third-person frame of reference, it can be seen that a very large number of excitatory postsynaptic potentials (EPSPs) are not being used efficiently to justify their evolutionary preservation. For example, EPSPs during sub- and supra-threshold activations of a neuron are not contributing to any function. Contribution of potentials from synaptic events that occur remote from the neuronal soma towards neuronal firing is minimal9. Is there a different view possible for their functional attributes? The evolutionarily well-preserved occurrence of all the synaptic potentials, in excess of what is required for the observed neuronal firing, prompts some important questions. What functional significance can they impart when examined from a first-person frame of reference? For such investigations, the most important question is “At what focal points in the nervous system do the units of internal sensations emerge?” These approaches are expected to ultimately guide the discovery of units for the generation of internal sensations.\n\n\nThe gold standard\n\nThe gold standard for understanding the operational units requires transferring the theoretically-derived operational mechanism to engineered systems. Since internal sensations are virtual in nature, the conversion of the formed internal sensations into third-person observable outputs is indispensable to understanding the operation. In this context, the main limiting step is the theoretical derivation of the basic operational unit, at the correct level, that can explain all the nervous system functions from both first- and third-person frames of reference. 
This is followed by replication of the mechanism in engineered systems similar to those proposed10. Studies of first-person systems will deal with optimizing the properties of the components of the engineered system by seeking specific experimental results from biological systems. These studies will require regular feedback from computational studies to solve optimization problems. Finally, the studies are expected to arrive at the algorithms that can provide the desired outputs. At the advanced stages, the systems science will examine the systems properties from a holistic view, including their interaction with the surrounding environment and dynamic behavior through complex paths that are reinforced during certain operations. In addition, the systems science will be able to examine instabilities when the system crosses the “boundary conditions” that can mimic the disease process. Systems design, systems development, systems stability, systems analysis, systems dynamics, and systems viability will become necessary elements of this process.\n\n\nMajor hurdles\n\nThere are two major obstacles in exploring the first-person properties of the nervous system. In my opinion they cannot be separated from the very challenge of discovering the first-person properties. The first one is reaching a consensus among researchers of one faculty of science for funding projects that involve significant changes in the research approach. If a logically-fitting experimental approach is available, then practical difficulties in conducting it should not deter us from undertaking it. Since the mechanism of the nervous system functions has not been discovered yet, it is understandable that a novel approach is required. Changing the frame of reference from which to examine the higher brain functions is consistent with such an expectation. In my opinion, the first-person approach should be brought into the mainstream of investigational methods in neuroscience. 
In fact, first-person studies should distinguish neuroscience from the studies of other organs in the body.\n\nThe second challenge is to maintain a certain level of confidence that we can discover the mechanism of formation of first-person properties. The necessity of discovering the mechanisms for the disorders of the mind should take precedence over the fear of discovering the operations of the mind. There is a growing concern about ‘the Singularity’, a threshold point above which engineered systems will become more intelligent than humans. The fear comes from the thinking that artificially intelligent machines may take over the human race. Building regulatory bodies and having strategies in place to prevent the development of these foreseeable effects should be simultaneously carried out along with the development of methods for exploring the first-person properties.\n\n\nIn conclusion\n\nA reasonable early expectation from first-person studies is the development of experiments that can provide third-person-sensible outputs at key stages, so that data collection and exploration of further work can become possible. Comparative physiology of the mechanisms of formation of first-person properties using different model nervous system circuitries will be part of this approach. The first-person studies will also aim to identify the focal points at which the mechanism is disrupted in neurological and psychiatric disorders. By taking advantage of the information arriving from first-person studies, we will be able to design methods to prevent and treat several neurological and psychiatric diseases.\n\nThe nervous systems of even very low-level species produce intentionality to carry out survival and reproductive instincts, indicating that an evolutionarily highly conserved mechanism is shared among all species of animals. 
The presence of nearly ten million existing and predicted animal species on earth11 provides a great deal of confidence to successfully simulate the mechanism in engineered systems that mimic one of them. Such intelligent systems are of paramount importance to help aging populations with the care they need, design strategies to feed the hungry, cure diseases, alleviate human suffering and provide methods to prevent climate change, to name a few. My opinion is that the steps towards finding solution to the virtual nature of the first-person properties will have similarities to the development of complex numbers in mathematics. Therefore, first-person studies will fall into the realm of a completely independent new branch of basic science. The conclusion of this opinion article is that a first-person approach to understand the brain and the natural course of events that will lead to the development of artificial intelligence are two sides of the same coin. A discussion on this topic among neuroscientists, computational scientists, and engineers can spark many bright ideas.", "appendix": "Competing interests\n\n\n\nAuthor has applied for a U.S. patent (application no: 14/068,835) of an electronic circuit model of the inter-postsynaptic functional LINK.\n\n\nGrant information\n\nKV is supported by funding from the Neurosearch Center, Toronto (Grant number: 3:24/2014). KIV is a financial contributor to the Neurosearch Center, Toronto.\n\n\nAcknowledgements\n\nI thank Selena Beckman-Harned for reading the manuscript.\n\n\nReferences\n\nGallistel CR, Balsam PD: Time to rethink the neural mechanisms of learning and memory. Neurobiol Learn Mem. 2014; 108: 136–144. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdelman S: Six challenges to theoretical and philosophical psychology. Front Psychol. 2012; 3: 219. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVarela JF, Shear J: The view from within: First-person approaches to the study of consciousness. 
Imprint Academic, UK. 1999. Reference Source\n\nLisman J: The Challenge of Understanding the Brain: Where We Stand in 2015. Neuron. 2015; 86(4): 864–882. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChalmers DJ: How can we construct a science of consciousness? Ann N Y Acad Sci. 2013; 1303: 25–35. PubMed Abstract | Publisher Full Text\n\nBaranov D, Bickler PE, Crosby GJ, et al.: Consensus statement: First International Workshop on Anesthetics and Alzheimer's disease. Anesth Analg. 2009; 108(5): 1627–1630. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMinsky M: K-Lines: A theory of memory. Cognitive Science. 1980; 4(2): 117–133. Publisher Full Text\n\nVadakkan KI: A supplementary circuit rule-set for the neuronal wiring. Front Hum Neurosci. 2013; 7: 170. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSpruston N: Pyramidal neurons: dendritic structure and synaptic integration. Nat Rev Neurosci. 2008; 9(3): 206–221. PubMed Abstract | Publisher Full Text\n\nMcDonnell MD, Boahen K, Ijspeert A, et al.: Engineering Intelligent Electronic Systems Based on Computational Neuroscience. Proc IEEE Inst Electr Electron Eng. 2014; 102(5): 646–651. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMora C, Tittensor DP, Adl S, et al.: How many species are there on earth and in the ocean? PLoS Biol. 2011; 9(8): e1001127. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "12536", "date": "18 Feb 2016", "name": "Xiao Shifu", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article impressed me. I personally respect the thinking of the author on researches. Referring to mathematics, neuroscience, the systems science, biological systems, evolutionary biology, comparative physiology and artificial intelligence, the author has an open mind and provides us a possible direction and approach to study higher brain functions. Meanwhile I have some different ideas, and I hope the author could consider these: In the introduction section, you have introduced a lot for us, but could you tell us more clearly what’s ‘the necessity of studying higher brain functions from a first-person frame of reference’? About the clinical approach we use now, the limitation you described in this article does not only exist in this approach, it exists on every approach because researchers haven’t found out the mechanism of formation of internal sensations. I can’t understand these sentences you said in this article: ‘It is found that a) locations from which memories can be retrieved gradually shift from the hippocampus to the cortices over several years, and b) patients recover completely after suffering from small strokes at certain locations of the brain. 
These suggest that the basic structure-function units are spatially definable and transferable, and that emergent functions can be integrated from multiple locations’, please tell us how you draw your conclusion from this. Could you tell us further information about the relationship between ‘structure-function units’ and ‘views of emergent properties and reductionism’, so we can get a clearer impression, even if it’s just an assumption.", "responses": [ { "c_id": "2427", "date": "25 Jan 2017", "name": "Kunjumon Vadakkan", "role": "Author Response", "response": "I thank Dr. Xiao Shifu for his comments and helpful suggestions. I have removed sentences that were causing confusion. I have re-written the article to explain clearly why it is necessary to study the higher brain functions from a first-person frame of reference. I sincerely hope that this revised manuscript provides the necessary explanations." } ] }, { "id": "18244", "date": "29 Dec 2016", "name": "Zoltan Nadasdy", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nIll-posed answers to ill-posed questions\nThe manuscript raises a seemingly intriguing and provocative question about perspective-taking in experimental neuroscience. It argues that many daunting problems would benefit from a first-person frame of reference, both as a method and as a subject at the same time. 
More specifically, the question has two main aspects: (A) one concerns the method of studying neuronal functions from a first-person versus a neutral (third-person) point of view, and (B) the other aspect concerns the first-person ontology of consciousness. Unfortunately, the confusion of these two aspects creates an ill-posed contradiction between third-person methods and a first-person subject, leading the author to question the adequacy of the standard scientific method. Since the manuscript fails to disentangle the two aspects above, it necessarily fails to provide a tangible solution.\nMain concerns:\nA) Starting with the first-person methods:\nRegarding the adequacy of the third-person scientific method (aspect A): The third-person point of view is typically not a matter of choice. (Actually, the term \"third-person\" is imprecise, as it should be a depersonalized point of view.) The concept of \"impartiality\" is central in the philosophy of science. It has a huge literature and a long history from Plato to Karl Popper. There are a number of good reasons why scientific investigations consistently attempted to dissolve the role of the observer in scientific paradigms until physicists, including Einstein, Heisenberg, Schrödinger and Planck, to name a few, showed that observations necessarily interfere with the observed system. Einstein in general relativity, inspired by Ernst Mach's conjecture (Von Baeyer 2001), approaches the limitation of \"objectivity\" as the dependency of measurements on the position and speed of the observer relative to the speed of light and the time of observation. Acknowledging that there IS NO observation without interference, objectivity is asymptotic. Nevertheless, the independence of data from the observation should be maximized as a general principle. The \"third-person\" perspective in science, as the most \"impartial\" perspective, became standard because it minimizes bias while also allowing for replication of results. 
I don't see any controversy about that. Nonetheless, none of these philosophical historical contexts were mentioned in the manuscript.\n\n\"In contrast to other systems in the body, nervous system functions are unique in that all the higher brain functions are first-person properties of the mind.\" Not true. There is a whole group of allocentric systems in the brain, the hippocampus and a number of temporal lobe areas.\n\nExtract from Introduction: \"First-person reports of these sensations through motor activity such as behaviour and speech provide surrogate markers to the third-person observers. Currently, clinical evaluation of neurological and psychiatric diseases is based on assessing the first-person reporting and third-person observed findings. This severely limits our understanding of a) internal sensations in non-responsive patients, b) defects in the mechanism of formation of the internal sensation of memory, and c) the compelling sense of reality of hallucinations in psychiatric disorders.\" These are not factors that inherently limit our understanding. We all know what hallucinations mean. Mentally healthy people are able to communicate with a certain degree of clarity their internal states and the listeners are able to relate to them. The locked-in syndrome is a different issue, which requires special methods to establish communication. We are also able to communicate our introspection by descriptions. For example, pain scales, memory tests, and tests of awareness are widely used in clinical practice. In contrast there is a much bigger communication barrier between animal species and humans preventing from obtaining first-person accounts that IS an inherent limitation of interpreting data from animal models, such as \"fear\" and \"anxiety\" for example. 
Because the motivation of the whole paper hinges on the quoted arguments and they are weak, they are unable to support the rest of the paper.\n\nNotably, first-person reports are not at all alien to science. Sigmund Freud's legacy, and specifically the psychoanalytic method, is fundamentally based on first-person reporting and reference for interpretation: a method often criticized (for instance by Karl Popper) as lacking scientific rigor, the more so because psychoanalytic reasoning is theoretically unfalsifiable. Other methods of self-report and introspection have always been part of scientific resources, from the ancient Greek thinkers to contemporary psychophysics, and are still important components of clinical case studies. Nevertheless, the consolidation of first-person ontology with third-person objectivity has been initiated by a number of scholars, for example Francisco Varela (Varela and Shear 1999), Josephson (Josephson 1996), and Daniel Dennett in his heterophenomenology (Dennett 2001). Dennett noted: \"heterophenomenology is nothing new; it is nothing other than the method that has been used by psychophysicists, cognitive psychologists, clinical neuropsychologists, and just about everybody who has ever purported to study human consciousness in a serious, scientific way.\" (Dennett 1991).
Moreover, Dennett writes in Consciousness Explained, \"I described a method, heterophenomenology, which was explicitly designed to be the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science.\" (CE, p72.)\n\nStill in the Introduction: \"..., almost all the current approaches use third-person observed findings at various levels4 in correlational studies with surrogate markers of biochemical changes, neuronal activations, oscillating potentials, signal changes in imaging studies, and behavioural responses to connect with the first-person properties.\" Not at all. A number of those changes (biochemical changes, neuronal activation, oscillating potentials) never reach first-person quality and remain unconscious to the agent. Nobody feels the activity of a place cell firing in his/her hippocampus; nevertheless, we know where we are in an environment. The electrophysiological and neurochemical processes underlying the creation of representations, and the readouts of those representations that may generate a first-person ontology, are different entities, which are confused here.\n\n\"Emergence can be adopted as a framework to study properties that cannot be explained using the third-person-observed features of the system.\" Again, \"emergence\" is the subject of a number of scientific investigations, for instance in nonlinear dynamics, which have provided numerous insights and rigorous formalism that comply well with third-person (\"agent-agnostic\") observations.\n\nIn \"Converting first-person sensations to third-person features\", the quote \"Recent research work is attempting to overcome this barrier by approaching higher brain functions from a first-person frame of reference\" is a misrepresentation.
A number of disciplines with a long established history are devoted to the study of first-person frames of reference. Classical psychophysics, research on episodic memory, consciousness, neuroeconomics, and studies of decision-making and consumer behavior all rely on first-person perspectives and first-person reporting.\nHaving argued that there is nothing new about first-person methods, I turn to the second aspect, the ontology of first-person reference.\nB) First-person ontology\nThe separation and integration of first-person and third-person experience in a single brain derives from the duality of two systems: an allocentric system (hippocampus) and an egocentric system (parietal/occipital lobe/basal ganglia). The integration of information deriving from these two systems inside the brain is an intriguing question and a subject of active research. Nevertheless, according to a common working hypothesis, the third-person (allocentric) reference derives from the first-person (egocentric) experience. Hence the challenge is not how to explain first-person experience, since sensory input is genuinely addressed in first-person coordinates, but rather how the brain arrives at third-person (allocentric) representations from first-person data. To bring an even more banal example, autonomous cars today use a navigation system which converts first-person referenced information from cameras and the car's radar system into third-person information (a map, distances of objects, movement of other objects on the map) and then converts it back into first-person instructions to change the speed and direction of the vehicle. No magic, no emergence, no need to change strategy in science, just straightforward yet brilliant engineering.\n\n\"Focal points of emergence\": this chapter alludes to the idea that dendritic spikes are utilized to form sources of first-person ontology.
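The car-navigation analogy can be made concrete. Below is a toy sketch of the two coordinate transforms involved (purely illustrative code; the function names and parameters are my own, not taken from any real navigation stack):

```python
import math

def ego_to_allo(pose, r, bearing):
    """Convert an egocentric sensor reading (range r, bearing relative to the
    heading) into allocentric map coordinates, given the vehicle pose
    (x, y, heading in radians)."""
    x, y, th = pose
    return (x + r * math.cos(th + bearing),
            y + r * math.sin(th + bearing))

def allo_to_ego(pose, px, py):
    """Inverse transform: express a map point in the vehicle's egocentric
    frame as (range, bearing)."""
    x, y, th = pose
    dx, dy = px - x, py - y
    return math.hypot(dx, dy), math.atan2(dy, dx) - th

pose = (2.0, 1.0, math.pi / 2)          # vehicle at (2, 1), facing "north"
obstacle = ego_to_allo(pose, 5.0, 0.0)  # detected dead ahead at 5 m
r, b = allo_to_ego(pose, *obstacle)     # round trip recovers (5.0, 0.0)
```

Composing the two transforms takes a sensor reading onto the shared map and back into vehicle-relative terms, which is exactly the point: the egocentric-allocentric conversion is engineering, not emergence.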
The assumption that subthreshold postsynaptic activity does not contribute to neuronal signal processing is incorrect. Although these subthreshold events may fail to elicit action potentials (by definition), their contribution to the membrane potential fluctuations that affect the integration of subsequent EPSPs in the same neuron is undeniable. Therefore, these subthreshold events are not merely a waste of energy waiting for a function to support. Even if their role is still elusive, nothing implies that they play a specific role in conveying a \"first-person\" modality of information. Hence the question \"At what focal points in the nervous system do the units of internal sensations emerge?\" does not make sense, because \"internal sensations\" are unlikely to be caused by mechanisms different from those of external sensations; instead, they are pathway dependent.\n\nMoreover, the same question implies yet another confusion: \"internal sensations\" and first-person ontology are not the same. \"Internal sensation\" is not a standard term; it was introduced in the author's earlier paper (\"The nature of “internal sensations” of higher brain functions may be derived from the design rules for artificial machines that can produce them\" (Vadakkan 2012)). The closest standard term is \"interoception\", which denotes sensations deriving from inside the body. However, based on the context, the author uses \"internal sensation\" for a sensory stimulus with a first-person quality. Most sensory modalities, except interoception, report third-person qualities. The color of an object in my visual field is rarely interpreted as a first-person experience; it is a feature attributed to the object, regardless of whether I am looking at the object or not.
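The summation point above can be illustrated with a minimal leaky-integrator sketch (all constants are illustrative, not fitted to any real neuron): a single subthreshold EPSP decays away without a spike, yet a second EPSP arriving before the first has decayed pushes the membrane past threshold.

```python
def simulate(epsp_times, amp=9.0, tau=20.0, v_rest=-70.0, v_thresh=-55.0,
             dt=0.1, t_end=100.0):
    """Toy leaky integrator: each EPSP adds `amp` mV instantly, the membrane
    decays toward rest with time constant `tau` ms, and a spike fires (with
    reset) when the threshold is reached. Returns spike times in ms."""
    v, t, spikes = v_rest, 0.0, []
    events = sorted(epsp_times)
    while t < t_end:
        v += (v_rest - v) * dt / tau       # passive leak toward rest
        while events and events[0] <= t:
            v += amp                       # subthreshold EPSP arrives
            events.pop(0)
        if v >= v_thresh:                  # threshold crossing -> spike
            spikes.append(t)
            v = v_rest                     # reset
        t += dt
    return spikes

assert simulate([10.0]) == []              # one EPSP alone stays subthreshold
assert simulate([10.0, 12.0]) != []        # paired EPSPs summate and fire
```

The second EPSP rides on the residue of the first, which is precisely why subthreshold events shape subsequent integration rather than being wasted energy.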
When the first-person quality of sensory input is concerned, arguably, conscious perception emerges at the level between the primary and secondary sensory cortical areas in the mammalian brain. The exact location and mechanism of the \"conscious quality\" are still debated (Koch 2004).\n\nEven though our consciousness has a first-person ontology, not all higher brain functions are first-person experiences. To give a few examples, declarative memory lacks first-person ontology. Also, hippocampus-dependent spatial memory transforms egocentric sensory information into allocentric or object-centered coordinate systems. Following the dorsal visual stream, we see sensory information progressively being transformed into an allocentric coordinate system, as the information becomes increasingly independent of the sensory data source and invariant to the position of sensory data acquisition. Hence, higher brain functions do not require a first-person perspective.\n\nBecause first-person experiences are phenomenologically inaccessible without making them third-person experiences, they can only be simulated by another person. Simulation, however, does not equal understanding. Conveying first-person experiences is not necessarily the function of science; it is more suitable to art. The systematic study of the origin of first-person experience is the subject of consciousness research and is referred to as the \"hard problem\" of consciousness (Chalmers 1995), or \"qualia\". The history of psychology and neuroscience provides great examples, such as the Weber-Fechner law (and the related Stevens' power law), of how the physical magnitude of a stimulus translates into sensation. The law will never reproduce the sensation but describes the phenomenon.
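For concreteness, the two psychophysical laws mentioned can be written down in a few lines (the constants k, I0 and the exponent are illustrative placeholders, not fitted values):

```python
import math

def weber_fechner(I, I0=1.0, k=1.0):
    """Sensation grows with the logarithm of stimulus intensity I
    relative to the detection threshold I0."""
    return k * math.log(I / I0)

def stevens(I, k=1.0, a=0.33):
    """Stevens' power law; a ~ 0.33 is the classic brightness exponent."""
    return k * I ** a

# Under Weber-Fechner, each tenfold increase in the stimulus adds the same
# fixed increment of sensation; under Stevens' law it multiplies the
# sensation by the same fixed factor.
assert math.isclose(weber_fechner(100) - weber_fechner(10),
                    weber_fechner(10) - weber_fechner(1))
assert math.isclose(stevens(100) / stevens(10), stevens(10) / stevens(1))
```

Both are third-person descriptions of a first-person quantity: they predict reported sensation magnitudes without ever reproducing the sensation itself.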
Next is \"The first step is the theoretical derivation of the basic functional units of the system at the correct level that is also connected to the motor system, which can explain all the higher brain functions along with behavioral motor activity.\" This is also called modeling. Indeed, since Turing, our proof of understanding relies on the conceptual, quantitative or physical realization of models. Again, nothing new here. In summary, the description of first-person experience in neuroscience is only relevant when addressing consciousness, and the methodology is the same as for any other subject: modeling and reverse engineering (see next).\n\n\"The Gold Standard\": I don't see any innovative approach here. We neuroscientists, consciously or by intuition, follow Alan Turing's legacy and method of understanding brain functions in terms of modeling them by machines that are able to reproduce those functions and are essentially capable of learning from experience. Such an autonomous system may at some point generate internal representations that have a first-person quality, as it achieves the capacity of detaching itself from the observed world (having a concept of 'self') and is also able to correctly reference (localize) those representations as its own (the Cartesian criterion of consciousness). \"Major Hurdles\": The statement \"Since the mechanism of the nervous system functions has not discovered yet, it is understandable that a novel approach is required\" is overly general and meaningless. It does not imply the next sentence: \"Changing the frame of reference from which to examine the higher brain functions suits such an anticipation.\"
Why would changing the frame of reference suddenly explain the function of the nervous system, if that was not obvious from an objective (third-person) point of reference?\n\nThe \"fear of discovering the operations of the mind\" and the \"growing concern about ‘the Singularity’\" are popular concepts that are fun to entertain, but lacking substance. If I can afford a \"first-person\" comment: in light of recent political events, we seem to vastly overestimate the collective human intelligence, which is unable to cope with burning issues such as social injustice and terrorism, religious fanaticism, global environmental catastrophe, etc., relative to which the fear of AI turning against humanity is irrelevant. Humanity needs to defend itself from itself.\n\nIn the \"Conclusions\" section the author lists a number of disconnected ideas, such as \"low-level species produce intentionality to carry out survival and reproductive instincts\", \"[first-person intelligent systems will] provide methods to prevent climate change\", and \"first-person properties will have similarities to the development of complex numbers in mathematics\". These predictions are overly ambitious and unsubstantiated, while also lacking any references.\n\nThe first-person view is as much an interpreted view as third-person referenced information. Every measurement is affected by the imprecision of the measuring device. Take for example sensory transmission: the brain has to compensate for the delay of information transfer, such as conduction delays relative to the onset of events. Libet showed that this is indeed the case and introduced the term \"subjective referral\" (Libet et al. 1979). Hence, not even the first-person point of view is reliable. We needed a third-person point of view (the observer reading the clock in Libet's experiment) to show its imprecision. This raises a question: isn't the first-person point of view just a construct?
If it is, then how is it different from a third-person reference?\n\nI return to the question again: How would it help to have a first-person point of view? What problem would it solve that could not be solved from a third-person point of view?\n\nLastly, the author fails to give us an example of what information a first-person reference may provide that the third-person reference cannot. I do not see any symmetry breaking between first-person and third-person reference frames. Since we share the same universe, instead of each of us encompassing his/her own private universe, using an outside reference is more parsimonious than taking only a first-person perspective.\n\nWhile the question of how the brain acquires the first-person view remains a challenging one (not the subject of this manuscript), the author has failed to convince the reader that the methods of studying the underlying mechanisms need a fundamental revision. If I have seriously misunderstood the author's position, I am open to following up in a discussion and revising my opinion.\nIn summary, the first-person frame of reference is not novel, and the article fails to provide examples of new insights deriving from this approach. A last remark: if the author claims that both points of view are useful for grasping the complex and multifaceted human mind, I must agree; however, it is not clear whether the author suggests switching completely to the first-person reference frame.", "responses": [ { "c_id": "2426", "date": "25 Jan 2017", "name": "Kunjumon Vadakkan", "role": "Author Response", "response": "I thank Dr. Zoltan Nadasdy for his comments.   I understand that I didn’t provide the logical arguments well in the initial manuscript; explaining the need for a frame change made it difficult. I have rewritten the article to explain the necessity of examining the system from a first-person frame of reference. I have provided additional references to the first-person methods of previous investigators.   
I find that there was confusion between a) the work by previous investigators who used first-person reporting of inner sensations, and b) the subject of the present work, which explains the need for a first-person approach to understand the mechanism of generation of first-person inner sensations. The first-person methods used in the past relied on behavioral expression of the contents of inner sensations in the form of language. The results of those approaches have not helped us derive a cellular-level mechanism that generates the first-person inner sensations for solving the system. I have explained why the observer has to undertake an examination from the first-person frame of reference at the cellular level to derive the qualia of the sensory content of the internal sensations.   The first-person inner sensations of different higher brain functions, such as perception and memory, are not accessible by third-person approaches, making it difficult to understand the operations of the nervous system. The seriousness of this becomes very clear when we keep the gold standard of replication of the mechanism in an engineered system as the criterion for understanding the system. It can be initiated by asking the question “What properties should be present in a synaptically-connected nervous system to generate internal sensations when replicated in an engineered system?” This immediately provokes one to examine the system's operations for the locations, system conditions and mechanism for the generation of internal sensations.   It is expected that learning induces signature changes from which internal sensations are generated. By keeping all the constraints, a search for the locations and conditions where the cue stimulus can induce internal sensations can be carried out. Under appropriate conditions, the mechanism at an optimal location is expected to make an approach “from within” the system to sense the qualia of the retrieved memories.
This is expected to involve a retrograde search from the optimal locations towards the sensory receptors, for sensing the sensory stimuli that can activate those receptors. In other words, the cue stimulus reaches the locations where learning has made changes and reactivates the system to make a first-person approach towards the sensory receptor level, to sense the sensory qualities of the stimuli required to activate those receptors. The observer who would like to trace the above path has to follow the same path, which constitutes the examination from a first-person frame of reference. No previous first-person studies have undertaken this novel approach.   I have kept strict criteria for the verification of the above mechanism, adhering to acceptable scientific standards. The derived mechanism should be able to make predictions that can be verified. Different nervous systems can be examined for comparable circuitries. By keeping the gold standard of replicating the mechanism in engineered systems, the investigations can focus directly on the problem.   As the reviewer pointed out, current measurements in physics are carried out without taking into account the observing subject. Both Erwin Schrödinger and Niels Bohr knew this very well and mentioned it in their papers (1, 2). Recent attempts to incorporate the subject in the measurements (3) are a clear example of the need for understanding perception. In this context, knowing how the nervous system makes subjective assessments through the formation of first-person inner sensations is of great importance. The division of human brain functions into allocentric and egocentric was made at a time when only the option of third-person observation was available. The current work introduces the option of making a first-person examination of the system for its operations. In this context, we need to take a fresh look at the system.
Once the basic principle is discovered, the reasons for differences in function at different locations can be determined. For example, the hippocampus has new granule neuron formation, which continuously alters the circuitry at the neuronal orders above the level of the granule neurons. The effect of this on the first-person qualia generated at specific locations in the brain circuitry can be examined in detail. The reviewer has pointed out that my opinion article concludes that the current methods of studying the underlying mechanisms need a fundamental revision. Even though I want to remain modest, I would like to face reality. The severe difficulties in discovering how first-person properties are generated within the system indicate that some major revision will be required at some level to find the solution. A large amount of data has already been collected by examining different nervous systems at different levels. This will allow for rigorous testing of any newly derived operational mechanism.   I have to admit that both third-person observations and examination from the first-person frame of reference are required for grasping the human mind. For instance, this opinion article has used third-person observations made by investigators from a large number of laboratories to derive a feasible mechanism for the generation of first-person inner sensations. However, examination of the system from a first-person frame of reference will be a necessary step at some point during the investigation.   References   E. Schrödinger. Eine Entdeckung von ganz ausserordentlicher Tragweite, ed. von Meyenn K 490 Springer (2011). N. Bohr. Atomic Theory and the description of human knowledge. Cambridge University Press, Cambridge. pp. 17-18 (1934). Physics: QBism puts the scientist back into science. Nature. 507:421-423 (2014)." } ] } ]
1
https://f1000research.com/articles/4-173
https://f1000research.com/articles/6-74/v1
24 Jan 17
{ "type": "Research Note", "title": "Molecular dynamic simulations of glycine amino acid association with potassium and sodium ions in explicit solvent", "authors": [ "Ivan Terterov", "Sergei Koniakhin", "Sergey Vyazmin", "Vitali Boitsov", "Michael Dubina", "Sergei Koniakhin", "Sergey Vyazmin", "Vitali Boitsov", "Michael Dubina" ], "abstract": "Salt solutions are the natural environment in which biological molecules act, and dissolved ions are actively involved in biochemical processes. Metal ions maintain membrane potentials. Ions are crucial for the activity of many enzymes, and their ability to coordinate with chemical groups modulates protein-protein interactions. Here we present a comparative study of sodium and potassium coordination with zwitterionic glycine, by means of explicit solvent molecular dynamics. We demonstrated that the contact ion pair of these cations with the carboxylate group splits into two distinct coordination states. Sodium binding is significantly stronger than potassium binding. These results can shed light on the different roles of sodium and potassium ions in abiogenic peptide synthesis.", "keywords": [ "Potassium ion", "sodium ion", "ion pairing", "molecular dynamics", "ion coordination" ], "content": "Introduction\n\nSalt solutions are the natural environment in which biological molecules act. Moreover, the dissolved ions themselves actively participate in many biological processes at the molecular level. Metal ions are essential cofactors of many enzymes and may coordinate with charged groups, thus modulating protein-protein interactions and their activity1. Many of these manifestations are due to specific ion coordination with charged groups on protein surfaces and with other counterions in solution, rather than to alteration of the aqueous solution structure in the bulk2.
Such ion-counterion pairing has been validated experimentally, and described theoretically using molecular simulations3.\n\nDespite their apparent similarity, the biological roles of sodium and potassium are very different. For example, the potassium-to-sodium ion concentration ratio is high inside the cell and low outside, which gives rise to membrane potentials. These two vital ions also demonstrate different catalytic capacities in the model reaction of prebiotic peptide synthesis, where potassium shows higher activity4. In addition, their roles in abiogenesis are of high interest5.\n\nIt has been suggested that sodium binds to charged groups on protein surfaces more strongly than potassium does, which probably correlates with the \"salting-out\" effect of sodium on proteins and the \"salting-in\" effect known for potassium6. Using an X-ray absorption study of solutions containing dissolved ions and acetate or glycine molecules, it was demonstrated that sodium has superior affinity for carboxylate, one of the major anionic groups in proteins7. In a number of works, this difference was explained using a combination of molecular dynamics and ab initio calculations6–9. With this method, the difference between sodium and potassium ion association free energies with carboxylate groups has been calculated9.\n\nUsing molecular dynamics, it was demonstrated that not only the direct ion-carboxylate pair but also solvent-shared ion-carboxylate configurations are of great importance10–12. In particular, these solvent-mediated states appear more populated than the direct contact ion pair, and can determine the thermodynamics of acetate salt solutions10.
A number of ab initio calculations of ion coordination with amino acids in the gas phase have been conducted previously13–15; however, solvent effects are significant and should be taken into account10,16,17.\n\nTo better understand the molecular details of ion pairing in protein-protein interactions, the spatial distribution of ion positions is of interest. Here we present a molecular dynamics study of the spatial distribution of sodium and potassium coordinated with zwitterionic glycine in a concentrated aqueous ionic solution.\n\n\nSimulation details\n\nMolecular dynamics (MD) simulations were conducted in the GROMACS package (version 4.6.7)18. Simulation systems contained one zwitterionic glycine molecule (as at pH 7 it is the most probable glycine form in solution), 33 cations, 33 chloride anions and about 800 water molecules in a cubic periodic box with 3 nm sides, corresponding to a 2 M salt solution. Equilibration of 10 ns preceded 500 ns of production MD for each system under constant number of particles (N), constant pressure (P) and constant temperature (T) conditions, i.e. in the NPT ensemble. Temperature at 300 K and pressure at 1 bar were maintained with the Nose-Hoover thermostat19,20 and the Parrinello-Rahman barostat21. The PME method was used for electrostatics22, with a grid spacing of 0.12 nm and a 1.0 nm cutoff, the same as for van der Waals interactions. For zwitterionic glycine, parameters were taken from the OPLS-AA force field23, and all bonds were constrained with the LINCS algorithm24 (for more details on parameters see the run input files available in Dataset 1). Parameters for the cations were obtained from ref. 25, for chloride from ref. 26, and the TIP3P water model was used27. Radial distribution functions were calculated with a bin width of 0.004 nm using the g_rdf utility of the GROMACS package.
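Conceptually, the g_rdf computation amounts to the following single-frame sketch (toy code with the same 0.004 nm bin width; this is not the GROMACS implementation, and in practice one averages over all trajectory frames and over equivalent atoms):

```python
import math

def rdf_single_frame(center, others, box, r_max=1.0, nbins=250):
    """g(r) of `others` around one `center` atom in a cubic periodic box of
    side `box` (all distances in nm; bin width r_max/nbins = 0.004 nm)."""
    dr = r_max / nbins
    counts = [0] * nbins
    for p in others:
        d2 = 0.0
        for a, b in zip(center, p):
            d = b - a
            d -= box * round(d / box)            # minimum-image convention
            d2 += d * d
        r = math.sqrt(d2)
        if r < r_max:
            counts[int(r / dr)] += 1
    rho = len(others) / box ** 3                 # bulk number density
    # normalize each spherical shell by its ideal-gas expectation
    return [n / (rho * 4.0 * math.pi * ((i + 1) ** 3 - i ** 3) * dr ** 3 / 3.0)
            for i, n in enumerate(counts)]

# One ion placed 0.51 nm from a reference atom in a 3 nm box
g = rdf_single_frame((0.0, 0.0, 0.0), [(0.51, 0.0, 0.0)], box=3.0)
```

Accumulated over a 500 ns trajectory, peaks in such histograms are what appear as the coordination-shell maxima reported below.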
Spatial distributions were calculated with the g_spatial GROMACS utility after a least-squares fit of the heavy atoms of the glycine molecule from each frame to the position of the starting MD structure.\n\n\nResults and discussion\n\nTwo systems were investigated, each consisting of one glycine molecule dissolved in explicit water with sodium and chloride ions, or with potassium and chloride ions. Radial distribution functions (RDF) of Na+ or K+ with respect to the oxygen atoms or the carbon atom of the glycine carboxylate group were calculated and are plotted in Figure 1. The O-Me+ RDF for both studied ions shows several coordination shells with a pronounced first maximum that is considerably higher for sodium, in agreement with previous studies indicating a superior Na+ affinity6,7,10,12,17. Analysis of the C-Me+ RDF, however, is not so common in the literature. C-Me+ RDFs are plotted in Figure 1B, and show two sharp peaks for sodium ions at 0.28 and 0.34 nm, as well as two weaker but distinct peaks for potassium at 0.32 and 0.36 nm. This figure indicates that there are two favorable coordination states of cations with carboxylate groups, which both contribute to the single first peak of the O-Me+ RDFs.\n\nFigure 2 shows the iso-density surfaces of sodium or potassium ions calculated around glycine and explicitly reveals these coordination states. One sees the medial (m) coordination state, equidistant from the oxygen atoms of the carboxylate group, and the lateral (l) state, consisting of two regions each closer to one of the two oxygen atoms. The asymmetry seen in the shape of the (l) state regions, including a bridge connecting the (m) and (l) states in the case of K+, occurs due to the positively charged NH3 group and the overall conformational flexibility of glycine.
Density levels in the spatial distribution for sodium considerably exceed those for potassium, and during the simulations we occasionally observed, for sodium only, glycine coordinated with two ions in the (m) and (l) states simultaneously (see Figure 3). The distances depicted in Figure 3 clearly illustrate that the (m) state is closer to the C of the carboxylate group and corresponds to the first peak of the C-Me+ RDF (0.28 nm for Na+). The (l) state corresponds to the next peak (0.34 nm, after the minimum at 0.3 nm for Na+), while both states belong to the same peak of the O-Me+ RDF (0.23 nm for Na+). In the sodium simulation, glycine exists with Na+ in the (m) coordination state for 21% of the observation time and with Na+ solely in the (l) coordination state for 30%. For the potassium simulation, we obtained 8% and 18% of the time for the (m) and (l) coordination states, respectively.\n\n(A) O-Me+ RDF, (B) C-Me+ RDF. Green, sodium; violet, potassium.\n\nIso-density surfaces around glycine are shown for Na+ (A and B) and K+ (C and D). Note that the density value (cutoff) for Na+ is much higher than for K+.\n\nDistances are given in nanometers. Green, sodium atoms; red, oxygen; gray, carbon; water molecules not shown.\n\n\nConclusions\n\nWe demonstrated that the contact ion pair of the carboxylate group with Na+ or K+ splits into distinct, well-occupied (m) and (l) coordination states. This effect may be of interest in studies devoted to ab initio calculations and in the interpretation of X-ray absorption data, as the latter account for the (m) coordination state only6–8. Coordination with ions is thought to be crucial in the first stage of the abiogenic peptide polymerization process28 and therefore, the observed differences in sodium and potassium behavior are important for research into primary abiogenic peptide synthesis conditions.\n\n\nData availability\n\nDataset 1: Run input parameters for production MD in GROMACS version 4.6.7.
doi, 10.5256/f1000research.10644.d14976429\n\nDataset 2: Input topologies and equilibrated structures for production MD (in zipped file). doi, 10.5256/f1000research.10644.d14976530\n\nDataset 3: MD trajectories (.xtc) and input files (.tpr) for NaCl and KCl systems (in zipped file). Positions of water molecules are not included. doi, 10.5256/f1000research.10644.d14976631", "appendix": "Author contributions\n\n\n\nDesigned the setup of the simulation: IT SK SV VB MD. Conducted simulations: IT. Analyzed the data: IT SK SV VB MD. Wrote the manuscript: IT SK VB MD.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe study was funded by RFBR (14-04-01889) and the Program of Fundamental Research of the Presidium of RAS “Nanostructures: Physics, chemistry, biology and foundations of technology”.\n\n\nReferences\n\nCollins KD, Neilson GW, Enderby JE: Ions in water: characterizing the forces that control chemical processes and biological structure. Biophys Chem. 2007; 128(2–3): 95–104. PubMed Abstract | Publisher Full Text\n\nLo Nostro P, Ninham BW: Hofmeister phenomena: an update on ion specificity in biology. Chem Rev. 2012; 112(4): 2286–2322. PubMed Abstract | Publisher Full Text\n\nvan der Vegt NF, Haldrup K, Roke S, et al.: Water-Mediated Ion Pairing: Occurrence and Relevance. Chem Rev. 2016; 116(13): 7626–41. PubMed Abstract | Publisher Full Text\n\nDubina MV, Vyazmin SY, Boitsov VM, et al.: Potassium ions are more effective than sodium ions in salt induced peptide formation. Orig Life Evol Biosph. 2013; 43(2): 109–117. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRode BM: Peptides and the origin of life. Peptides. 1999; 20(6): 773–786. PubMed Abstract | Publisher Full Text\n\nVrbka L, Vondrásek J, Jagoda-Cwiklik B, et al.: Quantification and rationalization of the higher affinity of sodium over potassium to protein surfaces. Proc Natl Acad Sci U S A. 2006; 103(42): 15440–15444.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAziz EF, Ottosson N, Eisebitt S, et al.: Cation-specific interactions with carboxylate in amino acid and acetate aqueous solutions: X-ray absorption and ab initio calculations. J Phys Chem B. 2008; 112(40): 12567–12570. PubMed Abstract | Publisher Full Text\n\nJagoda-Cwiklik B, Vacha R, Lund M, et al.: Ion pairing as a possible clue for discriminating between sodium and potassium in biological and other complex environments. J Phys Chem B. 2007; 111(51): 14077–14079. PubMed Abstract | Publisher Full Text\n\nVlachy N, Jagoda-Cwiklik B, Vácha R, et al.: Hofmeister series and specific interactions of charged headgroups with aqueous ions. Adv Colloid Interface Sci. 2009; 146(1–2): 42–47. PubMed Abstract | Publisher Full Text\n\nHess B, van der Vegt NF: Cation specific binding with protein surface charges. Proc Natl Acad Sci U S A. 2009; 106(32): 13296–13300. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGanguly P, Schravendijk P, Hess B, et al.: Ion pairing in aqueous electrolyte solutions with biologically relevant anions. J Phys Chem B. 2011; 115(13): 3734–3739. PubMed Abstract | Publisher Full Text\n\nHajari T, Ganguly P, van der Vegt NF: Enthalpy-entropy of cation association with the acetate anion in water. J Chem Theory Comput. 2012; 8(10): 3804–3809. PubMed Abstract | Publisher Full Text\n\nJockusch RA, Lemoff AS, Williams ER: Effect of metal ion and water coordination on the structure of a gas-phase amino acid. J Am Chem Soc. 2001; 123(49): 12255–12265. PubMed Abstract | Publisher Full Text\n\nRemko M, Rode BM: Effect of metal ions (li+, na+, k+, mg2+, ca2+, ni2+, cu2+, and zn2+) and water coordination on the structure of glycine and zwitterionic glycine. J Phys Chem A. 2006; 110:(5): 1960–1967. 
PubMed Abstract | Publisher Full Text\n\nBush MF, Oomens J, Saykally RJ, et al.: Effects of alkaline earth metal ion complexation on amino acid zwitterion stability: results from infrared action spectroscopy. J Am Chem Soc. 2008; 130(20): 6463–6471. PubMed Abstract | Publisher Full Text\n\nTomé LI, Jorge M, Gomes JR, et al.: Toward an understanding of the aqueous solubility of amino acids in the presence of salts: a molecular dynamics simulation study. J Phys Chem B. 2010; 114(49): 16450–16459. PubMed Abstract | Publisher Full Text\n\nAnnapureddy HV, Dang LX: Molecular mechanism of specific ion interactions between alkali cations and acetate anion in aqueous solution: a molecular dynamics study. J Phys Chem B. 2012; 116(25): 7492–7498. PubMed Abstract | Publisher Full Text\n\nHess B, Kutzner C, van der Spoel D, et al.: GROMACS 4: algorithms for highly efficient, load-balanced, and scalable molecular simulation. J Chem Theory Comput. 2008; 4(3): 435–447. PubMed Abstract | Publisher Full Text\n\nNosé S: A unified formulation of the constant temperature molecular dynamics methods. J Chem Phys. 1984; 81(1): 511–519. Publisher Full Text\n\nHoover WG: Canonical dynamics: equilibrium phase-space distributions. Phys Rev A Gen Phys. 1985; 31(3): 1695–1697. PubMed Abstract | Publisher Full Text\n\nParrinello M, Rahman A: Polymorphic transitions in single crystals: A new molecular dynamics method. J Appl Phys. 1981; 52(12): 7182–7190. Publisher Full Text\n\nEssmann U, Perera L, Berkowitz ML, et al.: A smooth particle mesh Ewald method. J Chem Phys. 1995; 103(19): 8577–8593. Publisher Full Text\n\nKaminski GA, Friesner RA, Tirado-Rives J, et al.: Evaluation and reparametrization of the OPLS-AA force field for proteins via comparison with accurate quantum chemical calculations on peptides. J Phys Chem B. 2001; 105(28): 6474–6487. Publisher Full Text\n\nHess B, Bekker H, Berendsen HJ, et al.: LINCS: a linear constraint solver for molecular simulations. J Comput Chem.
1997; 18(12): 1463–1472. Publisher Full Text\n\nAqvist J: Ion-water interaction potentials derived from free energy perturbation simulations. J Phys Chem. 1990; 94(21): 8021–8024. Publisher Full Text\n\nChandrasekhar J, Spellmeyer DC, Jorgensen WL: Energy component analysis for dilute aqueous solutions of lithium (1+), sodium (1+), fluoride (1-), and chloride (1-) ions. J Am Chem Soc. 1984; 106(4): 903–910. Publisher Full Text\n\nJorgensen WL, Chandrasekhar J, Madura JD, et al.: Comparison of simple potential functions for simulating liquid water. J Chem Phys. 1983; 79(2): 926–935. Publisher Full Text\n\nCoveney PV, Swadling JB, Wattis JA, et al.: Theory, modelling and simulation in origins of life studies. Chem Soc Rev. 2012; 41(16): 5430–5446. PubMed Abstract | Publisher Full Text\n\nTerterov I, Koniakhin S, Vyazmin S, et al.: Dataset 1 in: Molecular dynamic simulations of glycine amino acid association with potassium and sodium ions in explicit solvent. F1000Research. 2017. Data Source\n\nTerterov I, Koniakhin S, Vyazmin S, et al.: Dataset 2 in: Molecular dynamic simulations of glycine amino acid association with potassium and sodium ions in explicit solvent. F1000Research. 2017. Data Source\n\nTerterov I, Koniakhin S, Vyazmin S, et al.: Dataset 3 in: Molecular dynamic simulations of glycine amino acid association with potassium and sodium ions in explicit solvent. F1000Research. 2017. Data Source" }
[ { "id": "19642", "date": "15 Feb 2017", "name": "Niharendu Choudhury", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the present molecular dynamics simulation study, the authors intend to study ion-pair formation of sodium and potassium ions with the amino acid glycine. The manuscript is well written. However, major revision of the manuscript along the lines mentioned below is required before its publication.\nSpecific points are as follows:\nThe authors have used Na+ force field parameters from Ref. 25, in which (most probably) the interaction parameters for ions have been calculated from free energy calculations in SPC water. However, in the present case the authors have used TIP3P water. Isn’t it right to use the same water model as the one used for deriving the parameters?\n\nThe motivation of the present study is to compare ion-pair formation of an amino acid carboxylate with Na+ and K+ ions under physiological conditions. What is the concentration of sodium or potassium ions under physiological conditions? Are the simulations performed at the same concentration?\n\nRather than considering the two ions in two separate simulations, it is important to consider a mixture of the two ions in a single simulation and compare the relative affinity of these two ions towards the amino acid.\n\nWhy is glycine considered?
It has already been shown (Ref. 6) that the carboxylic acid groups of aspartate and glutamate play the most important role in determining the interaction of a protein with these ions. Therefore, the choice of glycine instead of the above-mentioned amino acids needs to be justified. I feel that, in order to establish a trend, the authors should take more than one amino acid in separate simulations and observe whether the trend is general.\n\nI did not understand the conditions for which Figs. 2(A) and (B) (or (C) and (D)) are shown. Please specify them in the figure caption as well as in the main text.\n\nIt is essential to find out the residence time of each of the ions and compare them.\n\nWhy does the first peak split into two in the case of the carbon-ion g(r), but not in the case of the oxygen-ion g(r)?\n\nA trajectory snapshot as presented in Fig. 3 has no meaning in a finite-temperature MD simulation run. Instead, please provide either average values of these distances or their time-dependent behavior.\n\nThe authors have mentioned that X-ray absorption data account for the (m) coordination state only. However, the present simulation shows both m and l states. Please rationalize why the present result differs from the experimental result.\n\nThe authors have written “Parameters for cations were obtained from 25, for chloride from 26”. Please write Ref. 25 in place of 25 and Ref. 26 in place of 26.", "responses": [] } ]
1
https://f1000research.com/articles/6-74
https://f1000research.com/articles/5-2741/v1
22 Nov 16
{ "type": "Software Tool Article", "title": "Disambiguate: An open-source application for disambiguating two species in next generation sequencing data from grafted samples", "authors": [ "Miika J. Ahdesmäki", "Simon R. Gray", "Justin H. Johnson", "Zhongwu Lai", "Simon R. Gray", "Justin H. Johnson", "Zhongwu Lai" ], "abstract": "Grafting of cell lines and primary tumours is a crucial step in the drug development process between cell line studies and clinical trials. Disambiguate is a program for computationally separating the sequencing reads of two species derived from grafted samples. Disambiguate operates on alignments to the two species and separates the components at very high sensitivity and specificity as illustrated in artificially mixed human-mouse samples. This allows for maximum recovery of data from target tumours for more accurate variant calling and gene expression quantification. Given that no general use open source algorithm accessible to the bioinformatics community exists for the purposes of separating the two species data, the proposed Disambiguate tool presents a novel approach and improvement to performing sequence analysis of grafted samples. Both Python and C++ implementations are available and they are integrated into several open and closed source pipelines. Disambiguate is open source and is freely available at https://github.com/AstraZeneca-NGS/disambiguate.", "keywords": [ "NGS", "patient derived xenograft", "explant", "disambiguation", "sequencing" ], "content": "Introduction\n\nXenografts, both cell line and primary tumour, are routinely profiled in preclinical and translational research. Xenografts are used to study everything from new target identification to responses to targeted therapeutics and mechanisms of resistance1 in an environment that is more realistic than just 2D cell lines. 
However, due to mouse stromal contamination of the human tumour, not all the data resulting from studying the extracted samples are guaranteed to be of human origin.\n\nDirect high throughput sequencing of grafted samples with a mixture of two species is routine practice. However, with the high volume of data and the computational challenges of alignment and k-mer identification, new computational strategies are required to computationally separate the two species’ components for more accurate downstream analysis1, especially for the reduction of variant calling artefacts. However, the two-species alignment approach proposed in Bradford et al.1 excludes reads that align to both organisms, clearly dismissing a large portion of the data as evidenced in Table 1 and Table 2 when observing cross-species alignment rates.\n\nThe ’Ambiguous’ column includes reads that aligned to neither or had equal quality scores for the alignments and could not be disambiguated.\n\n†Down from 25638785 read pairs with alignment to hg19\n\n††Down from 39686392 read pairs with alignment to mm10\n\nThe ’Ambiguous’ column includes reads that aligned to neither or had equal quality scores for the alignments and could not be disambiguated.\n\n†Down from 3005372 read pairs with alignment to hg19\n\n††Down from 6001230 read pairs with alignment to mm10\n\nAlgorithms designed for disambiguating the host and tumour sequences include, e.g., the Xenome tool2, which is based on machine learning applied to k-mers from both species. However, the implementation is not readily available and is not free for non-academic users. In Ref. 3 the authors also aligned the reads to both species, but no attempt was made to disambiguate the data and no implementation is readily available.\n\nHere, an alternative approach using read alignment quality is proposed to further disambiguate reads that can be mapped to both species.
Alignment is first performed to both species independently and the reads are disambiguated as a post-processing step. There is no requirement to maintain pseudo reference indices based on combinations of reference sequences. This approach shows a very high sensitivity and specificity on artificially generated samples obtained by mixing reads from the individual species. The Disambiguate tool is community supported and widely used in several open and closed source pipelines.\n\n\nMethods\n\nThe Disambiguate algorithm works by operating on natural name sorted BAM files from alignments to two species. Name sorting is a critical part in not having to read all the data from both species’ alignments into memory simultaneously; the same read aligned to both species is disambiguated on the fly by going through both alignment files synchronously. For reads that have alignments to both species and therefore require disambiguation, the specific details of the disambiguation process are slightly different for the different aligners. Thus far the algorithm has been tested for BWA-MEM4 and Bowtie25 for DNA-seq, and TopHat26, STAR7 and Hisat28 for RNA-seq. Illumina’s paired end sequencing is preferred as the mate can often break a tie. Figure 1 illustrates the disambiguation process.\n\nAlignment is first performed against both species. The disambiguation application then operates on the raw, natural name sorted BAM files to assign the read pairs into one of the two species or as ambiguous for unresolved cases.\n\nDisambiguate assigns the reads on a per-pair basis, based on the highest quality alignment of the read pair. For BWA and STAR the alignment score (AS, higher better) is used as the primary disambiguation metric followed by edit distance (NM, lower better) to the reference. 
For Tophat2 and Hisat2 based alignments the sum (lower better) of edit distance, number of reported alignments (NH) and the number of gap opens (XO) is used.\n\nThe algorithm is implemented in Python (with dependency on the Pysam package) and C++ (with dependency on BamTools), with the C++ version being approximately four times faster than the Python code. 64 bit unix/linux systems are supported.\n\nGiven name sorted alignment (BAM) files aligned to the two species of interest (e.g. human and mouse), the algorithm infers for each read the most likely origin. The output contains BAM files for both species, BAM files for ambiguous reads and a text file describing how many read pairs were assigned to each BAM file. The simplest way to perform all of the alignment and disambiguation is by running bcbio, in which Disambiguate is integrated, on the raw sequencing data.\n\n\nResults\n\nTo illustrate the utility of Disambiguate, raw publicly available human and mouse exome sequencing reads (100bp paired end Illumina data) were downloaded from the European Nucleotide Archive (ENA) with Run Accessions SRR1176814 and SRR1528269.\n\nThe reads were concatenated, aligned against hg19 and mm10 using BWA MEM, and processed using Disambiguate. Pre-disambiguation, for the human sample (SRR1528269), there were 39686392 read pairs (out of total 77268164), for which at least one read aligned to mouse. Similarly, for the mouse sample (SRR1176814), there were 25638785 read pairs (out of total 47312349) for which at least one read aligned to human. Table 1 summarises the post disambiguation results. As can be seen, the disambiguation algorithm correctly pulls apart virtually all of the read pairs. 
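The per-pair scoring schemes described in the Methods can be sketched in a few lines of Python. This is a simplified illustration only: the tag dictionaries, helper names and the in-memory walk are hypothetical, and the actual tool streams name-sorted BAM files via Pysam/BamTools rather than holding lookups in memory.

```python
# Sketch of Disambiguate's core logic (illustrative, not the real implementation).

def bwa_star_key(tags):
    """BWA-MEM/STAR scheme: higher alignment score (AS) wins; ties are
    broken by lower edit distance (NM). NM is negated so a single tuple
    compares with 'higher is better'."""
    return (tags["AS"], -tags["NM"])

def tophat_hisat_key(tags):
    """TopHat2/HISAT2 scheme: lower sum of edit distance (NM), number of
    reported alignments (NH) and gap opens (XO) wins; negate the sum so
    'higher is better', as above."""
    return -(tags["NM"] + tags["NH"] + tags["XO"])

def assign(key_a, key_b):
    """Assign a read pair to species A, species B, or 'ambiguous' on a tie."""
    if key_a > key_b:
        return "A"
    if key_a < key_b:
        return "B"
    return "ambiguous"

def disambiguate(alns_a, alns_b, key=bwa_star_key):
    """Walk two name-sorted alignment sets (here: dicts of read name -> tags).
    Reads seen in only one species need no disambiguation; shared reads are
    scored with the aligner-specific key."""
    out = {"A": [], "B": [], "ambiguous": []}
    for name in sorted(set(alns_a) | set(alns_b)):
        if name not in alns_b:
            out["A"].append(name)
        elif name not in alns_a:
            out["B"].append(name)
        else:
            out[assign(key(alns_a[name]), key(alns_b[name]))].append(name)
    return out
```

For example, `disambiguate({"r1": {"AS": 60, "NM": 0}}, {"r1": {"AS": 42, "NM": 5}})` assigns `r1` to species A, since its alignment against genome A scores higher; identical keys for both genomes land the read in the ambiguous bin.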
In other internal studies, Disambiguate has time and again highlighted samples with low human assigned component, correlating with poor extraction or lack of growth of the tumour cells in the host.\n\nSTAR aligned human (SRR387400) and mouse (SRR1930152) RNA-seq data was also analysed with very similar results, see Table 2.\n\n\nConclusions\n\nIn summary, Disambiguate provides an important tool for computationally separating sequence reads originating from two species. In human-mouse studies it also allows the study of the mouse stromal component for gene expression and DNA variation.\n\nIn addition to RNA-seq and whole genome sequencing, it is worth highlighting that for targeted hybridisation capture sequencing of xenograft samples, where baits from a single species are used, disambiguation is still highly recommended. This is best seen in Table 1 where a large number of human exome reads aligned to mouse and would potentially affect downstream interpretation without disambiguation.\n\nDisambiguate has been well adopted in the open source community; it is integrated in the open source bcbio pipeline, and has been successfully used in both RNA and DNA sequencing of xenografts both at AstraZeneca and other research institutes. This is evidenced by the number of support tickets from a variety of organisations on the bcbio-nextgen Github page.\n\n\nData availability\n\nThe data used here is available from the European Nucleotide Archive with Run Accession numbers SRR1176814 and SRR1528269.\n\n\nSoftware availability\n\nSoftware integrating Disambiguate available from: https://github.com/chapmanb/bcbio-nextgen\n\nLatest source code: https://github.com/AstraZeneca-NGS/disambiguate\n\nArchived source code as at time of publication: DOI: 10.5281/zenodo.1660179\n\nLicense: MIT.", "appendix": "Author contributions\n\n\n\nMA authored the bwa and rna-star disambiguation algorithms, co-authored the manuscript and implemented the algorithms in Python. 
SG wrote the C++ implementation of the algorithms. JJ co-authored the manuscript. ZL designed and implemented the original Tophat (and Hisat2) disambiguation algorithm and co-authored the manuscript.\n\n\nCompeting interests\n\n\n\nAll authors are employees of AstraZeneca.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors wish to thank Brad Chapman, Rory Kirchner and Eric Schelhorn for feedback and fixes on Disambiguate.\n\n\nReferences\n\nBradford JR, Farren M, Powell SJ, et al.: RNA-Seq Differentiates Tumour and Host mRNA Expression Changes Induced by Treatment of Human Tumour Xenografts with the VEGFR Tyrosine Kinase Inhibitor Cediranib. PLoS One. 2013; 8(6): e66003. PubMed Abstract | Publisher Full Text | Free Full Text\n\nConway T, Wazny J, Bromage A, et al.: Xenome--a tool for classifying reads from xenograft samples. Bioinformatics. 2012; 28(12): i172–i178. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRossello FJ, Tothill RW, Britt K, et al.: Next-generation sequence analysis of cancer xenograft models. PLoS One. 2013; 8(9): e74432. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H: Aligning sequence reads, clone sequences and assembly contigs with bwa-mem. bioRxiv, arXiv:1303.3997 q–bio.GN. 2013. Reference Source\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim D, Pertea G, Trapnell C, et al.: TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biol. 2013; 14(4): R36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDobin A, Davis CA, Schlesinger F, et al.: STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013; 29(1): 15–21. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim D, Langmead B, Salzberg SL: HISAT: a fast spliced aligner with low memory requirements. Nat Methods. 2015; 12(4): 357–360. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAhdesmäki MJ: AstraZeneca-NGS/disambiguate: Release for publication [Data set]. Zenodo. 2016. Data Source" }
[ { "id": "17877", "date": "23 Nov 2016", "name": "Daniel Nicorici", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper introduces a tool, named Disambiguate, for computationally separating the DNA/RNA sequencing reads of two species, as for example in the case of xenograft samples. The tool takes as input BAM files from a wide range of NGS aligners.\nI have made the following minor observations:\n\nThe tool Disambiguate works on RNA-seq and DNA-seq data, and this is mentioned for the first time in the Methods section. It would probably help to have this mentioned much earlier, for example in the abstract too.\n\nIn order to improve clarity, percentages could also be added to Tables 1 and 2 where relevant; for example, \"26157\" would become \"26157 (0.0553%)\" and so on.", "responses": [ { "c_id": "2425", "date": "24 Jan 2017", "name": "Miika Ahdesmäki", "role": "Author Response", "response": "Dear Daniel, thank you for the review; your comments are much appreciated. We have addressed your points in v2 of the manuscript.\n\nWe have explicitly mentioned in the abstract and the introduction that the tool can be used for both DNA and RNA-seq data.\n\nWe have added percentages to the tables as you suggested.\n\nThank you for the review and for helping us improve the manuscript." } ] }, { "id": "17879", "date": "25 Nov 2016", "name": "Matthew D. 
Eldridge", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper describes a computational tool for separating sequencing reads from a sample that contains DNA or RNA from two species. This is a necessary pre-processing step for genomic or transcriptomic analysis of patient-derived xenograft cancer models.\nThe approach is based on alignments of sequence reads to the reference genome sequences for the two species in question. The authors have tested their approach on DNA-seq data from publicly available human and mouse exome datasets concatenated to simulate a xenograft sample. The results presented in Table 1 show very good separation of reads from the two species datasets with only a small percentage of reads being assigned to the wrong species (0.06% and 0.01%) and a higher but still very low percentage of reads flagged as ambiguous, i.e. align equally well to both genomes. Similar results were presented for RNA-seq data, although here the percentages of incorrectly assigned and ambiguous reads are unsurprisingly higher than for DNA-seq.\nUse of the alignment scores, and in the event of a tie the edit distance, is a reasonable approach to disambiguate reads and is the method used for BWA and STAR alignments. For TopHat2 and HISAT2 a different scoring function is required, although the reasons for this are not given.
Further, the choice of function (sum of edit distance, number of reported alignments and number of gap opens) is not completely obvious and raises the question of whether the authors have attempted to tune the function, e.g. by adjusting the weighting of each component.", "responses": [ { "c_id": "2424", "date": "24 Jan 2017", "name": "Miika Ahdesmäki", "role": "Author Response", "response": "Dear Matthew, many thanks for reviewing our manuscript and for the comments. We have modified v2 of the manuscript to address the points you raise, namely:\n\nThe aligner tags are very similar between BWA and STAR, and between TopHat2 and HISAT2, but fairly different between BWA/STAR and TopHat2/HISAT2, and therefore we couldn't use the same scheme originally developed for TopHat2 with BWA/STAR. With the appearance of HISAT2, especially for hg38, we decided to utilise the TopHat2 scheme for HISAT2 given that their outputs are almost interchangeable. We have mentioned this in the updated text.\n\nThe sum of edit distance, number of reported alignments and number of gap opens has always worked well for us out of the box (as illustrated in the tables), and while tuning their weights may yield some minor benefits, it would risk overfitting to existing data. Any benefits of the weight tuning would have to be measured over a very long time, running multiple versions of the weighted and unweighted algorithms side by side. We have given this reasoning (complexity) in the text as our reason for not tuning the weights further.\n\nThank you again for the comments and for helping us improve the manuscript." } ] }, { "id": "17881", "date": "05 Dec 2016", "name": "Gavin R. 
Oliver", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nWe believe that overall the software tool article by Ahdesmäki et al. seems sound and provides a solution to a problem that appears to be inadequately addressed in the field currently.\n\nNonetheless, we believe the manuscript would benefit from some minor amendments in order to increase its utility and accessibility to readers.\nIn brief:\nIntro/Background\nNeeds to be expanded slightly to better set the scene and describe the general approach of read disambiguation.\n\nMethodology\nThe methodology should be expanded slightly and made more explicit.\n\nTables 1&2:\nCombine 1 & 2 into a single table and label the samples by data type, i.e. DNA and RNA\n\nShow percentages as well as numbers\n\nClearly label the species in the tables\n\nClearly label correctly mapped/incorrectly mapped reads in the table\n\nClearly label the human and mouse genomes as such\n\nTables should clearly show all numbers pre- and post-disambiguation, rather than having superscripted references in the table legend\n\nEssentially, a novice should be able to read the paper and extract relevant info more easily.\n\nFigure 1\n\nShould be more granular, informative and descriptive of the process. Include read alignment etc.\n\nDescribe the Disambiguate process\n\nUse the same font size for all text in the Figure\n\nComparison with a competitor product\nThis is something that is clearly missing.
If it is literally impossible to compare to a competitor because the software is not accessible, this should be stated clearly as a reason for the lack of comparison in the paper.\nTumor samples\nIt would be interesting to know how performance is affected by use of highly mutated tumor xenografts. This is arguably beyond the scope of the paper, but warrants at least some mention.", "responses": [ { "c_id": "2423", "date": "24 Jan 2017", "name": "Miika Ahdesmäki", "role": "Author Response", "response": "Dear Gavin and Asha, many thanks for the very detailed review and comments. We have addressed your points in v2 of the manuscript.\n\nIntro/background: We have added the text in braces: \"Direct high throughput sequencing of grafted samples with a mixture of two species is routine practice. {However, the origin species of each read or read pair is unknown and needs to be determined informatically.}\" to better set the scene. Further, the description of Xenome's operation is now updated and Xenome is now included in a comparison study. We have more explicitly stated that \"Alignment is first performed to both species independently and the reads are disambiguated as a post-processing step, {assigning reads to the species with higher quality alignments}\".\n\nMethodology: We have clarified the methodology section by spelling out the disambiguation algorithm and giving the reasoning why two schemes are used.\n\nTables 1&2: We have combined Tables 1&2 and revised the contents to address these points.\n\nFigure 1: We have redrawn the figure to be more descriptive.\n\nComparison to competitor product: We have now compared our approach to Xenome, which was recently open sourced, and included the results of the comparison in the updated table with discussion.\n\nTumor samples: We agree that evaluating the performance of the disambiguation algorithm in a messy cancer genome like the highly rearranged MCF7 would be extremely interesting.
If we get our hands on appropriate data we will consider publishing the results on the program Github page." } ] } ]
1
https://f1000research.com/articles/5-2741
https://f1000research.com/articles/6-72/v1
24 Jan 17
{ "type": "Research Article", "title": "Factors associated with preterm delivery and low birth weight: a study from rural Maharashtra, India", "authors": [ "Anand Ahankari", "Sharda Bapat", "Puja Myles", "Andrew Fogarty", "Laila Tata", "Sharda Bapat", "Puja Myles", "Andrew Fogarty", "Laila Tata" ], "abstract": "Background: Although preterm delivery and low birth weight (LBW) have been studied in India, findings may not be generalisable to rural areas such as the Marathwada region of Maharashtra state. There is limited information available on maternal and child health indicators from this region. We aimed to present some local estimates of preterm delivery and LBW in the Osmanabad district of Marathwada and assess available maternal risk factors.\n\nMethods: The study used routinely collected data on all in-hospital births in the maternity department of Halo Medical Foundation’s hospital from 1st January 2008 to 31st December 2014. Multivariable logistic regression analysis provided odds ratios (OR) with 95% confidence intervals (CI) for preterm delivery and LBW according to each maternal risk factor.\n\nResults: We analysed 655 live births, of which 6.1% were preterm deliveries. Of the full term births (N=615), 13.8% were LBW (<2.5 kilograms at birth). The odds of preterm delivery were three times higher (OR=3.23, 95% CI 1.36 to 7.65) and the odds of LBW were double (OR=2.03, 95% CI 1.14 to 3.60) among women <22 years of age compared with older women. The odds of both preterm delivery and LBW were reduced in multigravida compared with primigravida women regardless of age. Anaemia (Hb<11g/dl), which was prevalent in 91% of women tested, was not significantly related to these birth outcomes.\n\nConclusions: The odds of preterm delivery and LBW were much higher in mothers under 22 years of age in this rural Indian population. 
Future studies should explore other related risk factors and the reasons for poor birth outcomes in younger mothers in this population, to inform the design of appropriate public health policies that address this issue.", "keywords": [ "maternal age", "gravidity", "birth weight", "Maharashtra", "India" ], "content": "Introduction\n\nBirth weight is an important public health indicator as it is a strong predictor of neonatal as well as lifelong health outcomes1. Low birth weight (LBW) is defined as weight at birth of less than 2500 grams (<2.5 Kilograms)2, which is usually associated with preterm delivery (typically less than 37 weeks of gestation) or restricted intrauterine development3. Maternal factors such as nutrition, body mass index (BMI) and exposure to conditions such as malaria, tuberculosis and HIV may affect birth weight4. Globally more than 20 million LBW infants (15.5% of total births) are born every year, of which about 95% are from developing countries2,3. LBW babies have a 20 times higher risk of death than babies with normal birth weight, and have a higher probability of lifetime morbidity, irrespective of ethnic differences across populations internationally5.\n\nIn India it is estimated that 30% of babies are LBW, with nearly half being born full term3. Whilst LBW prevalence and associated risk factors have been studied using national survey data, the generalizability of previous findings is limited due to the considerable heterogeneity between communities, particularly in rural areas. There is a sizeable population for which these data are not documented, leaving a major gap in existing literature. The Marathwada region in the state of Maharashtra has limited data on birth outcomes for its population of approximately 18 million. A recently published study using Latur District Hospital records from the Marathwada region found a LBW prevalence of 26.7%6. 
However, no data are available for the more deprived districts of Marathwada, such as Osmanabad, which has a population of approximately 1.5 million and where the overall literacy rate is 67% (57% among females), 20% lower than the state average7. Approximately 18% of the district’s population belongs to scheduled castes and tribes, recognised as being particularly deprived by the Indian government, and only 16% of the total population resides in urban areas7. Healthcare access is not uniform across the region, creating further challenges in implementing routine data collection, particularly in rural and difficult to reach areas8. We conducted a study to provide local estimates of preterm delivery and LBW and investigate some key maternal risk factors using hospital data from a rural Marathwada region in Maharashtra state, India.\n\n\nMethods\n\nHalo Medical Foundation (HMF) is a non-governmental organisation (NGO) with a hospital in the Osmanabad district of Marathwada region that provides medical services to a population of nearly 100,000, spread across 60 villages8. All services are provided at less than 50% of the price charged by neighbouring urban hospitals, and the hospital is attended by patients from all socioeconomic groups8. We conducted a retrospective study using routinely collected data on all in-hospital births in the maternity department of HMF’s hospital from 1st January 2008 to 31st December 2014.\n\nBirth weight was recorded for all live births immediately after birth under the direct supervision of an obstetrician. Low birth weight was defined as a weight of less than 2500 grams (<2.5 Kilograms) recorded immediately after birth3. Determination of gestational age was based on menstrual history, clinical examination and ultrasonography investigation conducted and recorded by an obstetrician. Deliveries occurring before 37 weeks were defined as preterm2. 
Maternal haemoglobin was measured prior to delivery by a qualified technician using Sahli’s haemometer method (finger prick technique). This method provides instant results and is therefore commonly used in the HMF hospital. Maternal anaemia was defined as haemoglobin levels of less than 11.0 g/dl10.\n\nThe study used HMF hospital data retrospectively, with no communication with doctors, patients, or any other third party for the project. The data were freely available at HMF. Thus, external approval was not deemed necessary. The HMF governance board approved this project and gave permission to use anonymised data (Dataset 126). The study is reported in accordance with the STROBE guidelines (Supplementary Table 1)9.\n\nWe restricted analyses to singleton live births, and following an initial descriptive summary of the deliveries, logistic regression analysis was conducted to investigate the association of maternal factors (age [older or younger than the mean], gravidity [primigravida or multigravida] and anaemia) with preterm delivery and, among full-term deliveries only, having a LBW baby. Results are reported as unadjusted and adjusted odds ratios (OR) with 95% confidence intervals (CI). Statistical significance was ascertained based on a p value <0.05. All analyses used the licensed statistical software package IBM SPSS (version 20).\n\n\nResults\n\nThroughout the study period, 685 deliveries were carried out at the hospital. After excluding missing data (n=4), twin pregnancies (n=8) and stillbirths (n=18), we analysed 655 cases of singleton live births. For these 655 cases, mean maternal age at delivery was 22 years, with 93% normal vaginal deliveries and 7% caesarean sections. The sex ratio at birth was 1.07 (males n=340, females n=315), and none of the study participants had any systemic diseases such as hypertension or diabetes, or habits which may have influenced birth weight or delivery term, such as smoking. 
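As a minimal sketch of the odds-ratio arithmetic underlying the analysis: an unadjusted OR and a Woolf-type (log-scale) 95% CI can be computed directly from a 2×2 table. The cell counts below are invented for illustration and are not the study's data; the adjusted ORs in Tables 2 and 3 additionally require a multivariable logistic regression model, which the paper fitted in SPSS.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is the square root of the sum of reciprocal counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se_log_or)
    high = math.exp(math.log(or_) + z * se_log_or)
    return or_, low, high

# Hypothetical counts: preterm vs. full-term deliveries by maternal age group.
or_, low, high = odds_ratio_ci(a=30, b=270, c=10, d=345)
print(f"OR = {or_:.2f}, 95% CI: {low:.2f} to {high:.2f}")
```

A regression-based adjusted OR is obtained by exponentiating the fitted coefficient for the factor of interest rather than from raw cell counts, which is why the paper's adjusted and unadjusted estimates can differ.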
Table 1 summarises the descriptive details of the analysed live births, 6.1% of which were preterm deliveries. All preterm deliveries were natural and none were induced by the healthcare provider. Of the full term deliveries, 13.8% were LBW babies.\n\nN=655 unless specified otherwise. SD: standard deviation.\n\nLogistic regression analysis showed higher odds of preterm delivery in women younger than 22 years of age than in older women at the time of delivery (adjusted OR 3.23, 95% CI: 1.36 to 7.65, p=0.008) (Table 2). Gravidity was not associated with the odds of preterm delivery. Maternal anaemia, occurring in 91% (356) of the 391 women tested, was not associated with preterm delivery. Among full term deliveries, the odds of delivering a LBW baby were twice as high in mothers who were <22 years of age at the time of delivery (adjusted OR 2.03, 95% CI: 1.14 to 3.60, p=0.02) (Table 3). Primigravidas were nearly three times as likely to deliver LBW babies as multigravidas (adjusted OR 2.87, 95% CI: 1.54 to 5.36, p=0.001). Maternal anaemia was not associated with having a LBW baby.\n\nN=655 singleton live births, unless specified otherwise. Reference category for each variable is indicated as 1.\n\n^ : Odds ratios compare preterm with full term delivery\n\n* : Adjusted for gravidity\n\n+: Adjusted for maternal age (used as a continuous variable following linearity assessment).\n\nN=615 full term singleton live births, unless specified otherwise. Reference category for each variable is indicated as 1.\n\n^ : Odds ratios compare low birth weight with normal birth weight\n\n* : Adjusted for gravidity\n\n+: Adjusted for maternal age (used as a continuous variable following linearity assessment).\n\n\nDiscussion\n\nIn summary, our results show a higher likelihood of preterm delivery and having a LBW baby in women of the Marathwada region younger than 22 years of age at the time of delivery. 
Gravidity and anaemia were not associated with these birth outcomes.\n\nThis is the first study that uses data from a rural area of the Marathwada region to investigate maternal factors associated with both preterm delivery and LBW. The same obstetrician recorded all maternal health parameters and birth outcomes from in-hospital births throughout the study period. Preterm and full term deliveries were distinguished by the obstetrician through clinical examination, menstrual history and ultrasonography investigation at the time of admission. None of the study participants were diagnosed with hypertension, diabetes or other systemic conditions prior to or during pregnancy, thereby limiting the influence of these confounders on our two main outcomes, LBW and preterm delivery.\n\nThe study hospital serves women across all social classes, and thus these estimates are likely to be representative of the local population in the Marathwada region. However, our use of retrospective hospital records means that a detailed investigation of other maternal factors and probable confounders associated with birth outcomes is not feasible. Important factors, including detailed medical history, birth spacing, maternal body mass index, education, socioeconomic status, healthcare access, knowledge and pregnancy complications, which may have had important roles in our study population, were not available.\n\nA community-based prospective study involving 45 villages in the Pune district of Maharashtra in the early 1990s reported that 29% of babies in the study were LBW11. In the Pune study, LBW was significantly more prevalent in primiparae who were less than 20 years of age at the time of delivery than in mothers who were 21 to 25 years of age. A recent hospital-based retrospective study from the south-western district of Maharashtra state investigated outcomes of teenage pregnancies (maternal age ≤19 years)12. 
The study showed that teenage mothers were three times more likely to deliver preterm (OR 2.97, 95% CI: 2.40 to 3.70), and twice as likely to deliver a LBW baby (OR 1.80, 95% CI: 1.50 to 2.20) compared to older mothers. Findings from both studies outlined above are in agreement with our results.\n\nHowever, a case-control study by Mumbare et al. from the Marathwada region reported no association between maternal age and birth weight (OR 0.53, 95% CI: 0.24 to 1.19)6. The study found that a higher risk of LBW in full term delivery cases was associated with maternal weight (≤ 55 kilograms), maternal height (≤ 155 cm), weight gain during pregnancy (≤ 6 kilograms), and subsequent pregnancy spacing (<36 months). This case-control study6 obtained data from two centres: the Medical College Hospital of Latur city, based in the Marathwada region, and the Medical College Hospital of Nasik city, based in western Maharashtra, which has a higher socioeconomic profile than our study population (data from July 2009 to December 2009). In this study, the mean maternal age at delivery was 23.19 years (SD: 3.37), similar to the mean age of participants in our study (22.15 years, SD: 3.17). The authors of the case-control study stated that the high prevalence of LBW (26.8%) could be because both study hospitals were tertiary care centres located in the main city of their respective districts, where high-risk pregnancy cases are referred to from surrounding villages and blocks6,13. Unlike Mumbare et al., our data came from a rural hospital with comparatively low-risk pregnancies (no systemic diseases or tobacco consumption were observed in our participants)6.\n\nFindings from other parts of the country also showed a higher risk of LBW and preterm delivery in younger mothers (typically defined as less than 20 years)14,15. Mean birth weight in our study was 2.83 kilograms, 16 grams higher than findings from the Karnataka study13. 
The Karnataka study had a larger sample size (n=1138) and reported a LBW prevalence of 23%, higher than in our study. The LBW prevalence of 8% to 30% reported in other Indian studies varied mainly with study location, sample size, hospital type (primary health centres based in villages or district hospitals based in cities), and maternal characteristics such as diet, BMI and antenatal services16–21. The recent Indian National Family Health Survey (NFHS-3) reported that 34% of babies were LBW at the national level, with higher prevalence in rural areas compared to urban regions22. Lastly, a very high prevalence of maternal anaemia (91%) among those tested was noted in our study, which is consistent with findings from other regions; however, no significant association was seen with preterm delivery or birth weight in full term deliveries23. It should be taken into account that half of the participants were tested in the week preceding delivery and the rest were tested on the day of delivery.\n\n\nConclusion\n\nThe practice of early marriage followed by pregnancy is commonly observed in our study area. This is influenced by various factors such as parental education, financial resources, and willingness to support higher education for girls24. Though the current legal age for marriage is 18 years for girls in India, child marriage remains prevalent at both state and national levels25. Following our observations, it may be advisable to plan the first pregnancy after 21 years of age. However, this needs to be supported by enforcement of marriage-age legislation by government authorities. 
Future studies should explore the reasons for poor birth outcomes in younger mothers in this population to inform the design of appropriate public health policies to address this issue.\n\n\nData availability\n\nDataset 1: HMF Hospital Delivery Data 2008–2014.\n\nThe attached dataset includes information on maternal age, gravidity, haemoglobin levels, delivery term, and birth weight of the 655 study participants.\n\ndoi: 10.5256/f1000research.10659.d149854 26", "appendix": "Author contributions\n\n\n\nAA, LT, PM and AF conceptualized the study. AA obtained and validated the data and was responsible for project management, while SB conducted the data analysis. All authors contributed to the interpretation of study findings, manuscript write-up, and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nData collection activities using HMF hospital records were supported by Halo Medical Foundation India. Additional support for the publication was obtained from the Division of Epidemiology and Public Health, The University of Nottingham, UK.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank HMF for providing institutional support for the study. We also acknowledge Ms Sandhya Rankhamb (employed by HMF) for providing support for data entry and verification.\n\n\nSupplementary material\n\nSupplementary material 1: STROBE Guidelines for cross-sectional studies.\n\nThe study is reported in accordance with the following checklist of STROBE guidelines.\n\n\nReferences\n\nWilcox AJ: On the importance--and the unimportance--of birthweight. Int J Epidemiol. 2001; 30(6): 1233–1241. PubMed Abstract | Publisher Full Text\n\nLow Birthweight: Country, regional and global estimates. Accessed June 6, 2016. 
Reference Source\n\nWHO: The world health report 1995 - bridging the gaps. Accessed June 8, 2016. Reference Source\n\nMuthayya S: Maternal nutrition & low birth weight - what is really important? Indian J Med Res. 2009; 130(5): 600–608. PubMed Abstract\n\nMacDorman MF, Atkinson JO: Infant mortality statistics from the 1997 period linked birth/infant death data set. Natl Vital Stat Rep. 1999; 47(23): 1–23. PubMed Abstract\n\nMumbare SS, Maindarkar G, Darade R, et al.: Maternal risk factors associated with term low birth weight neonates: a matched-pair case control study. Indian Pediatr. 2012; 49(1): 25–28. PubMed Abstract | Publisher Full Text\n\nHIV/AIDS Situation and Response in Osmanabad District: Epidemiological Appraisal Using Data Triangulation. Accessed June 12, 2016.\n\nHalo Medical Foundation. Accessed July 14, 2016. Reference Source\n\nvon Elm E, Altman DG, Egger M, et al.: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007; 4(10): e296. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuidelines. Government of India. Accessed July 25, 2016. Reference Source\n\nHirve SS, Ganatra BR: Determinants of low birth weight: a community based prospective cohort study. Indian Pediatr. 1994; 31(10): 1221–1225. PubMed Abstract\n\nMahavarkar SH, Madhu CK, Mule VD: A comparative study of teenage pregnancy. J Obstet Gynaecol. 2008; 28(6): 604–607. PubMed Abstract | Publisher Full Text\n\nMetgud CS, Naik VA, Mallapur MD: Factors affecting birth weight of a newborn--a community based study in rural Karnataka, India. PLoS One. 2012; 7(3): e40040. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGanesh Kumar S, Harsha Kumar HN, Jayaram S, et al.: Determinants of low birth weight: a case control study in a district hospital in Karnataka. Indian J Pediatr. 2010; 77(1): 87–89. 
PubMed Abstract | Publisher Full Text\n\nMavalankar DV, Gray RH, Trivedi CR: Risk factors for preterm and term low birthweight in Ahmedabad, India. Int J Epidemiol. 1992; 21(1): 263–272. PubMed Abstract\n\nNegi K, Kandpal S, Kukreti M: Epidemiological factors affecting low birth weight. JK Sci. 2006; 8(1): 31–34. Reference Source\n\nRadhakrishnan T, Thankappan KR, Vasan RS, et al.: Socioeconomic and demographic factors associated with birth weight: a community based study in Kerala. Indian Pediatr. 2000; 37(8): 872–876. PubMed Abstract\n\nRao B, Aggarwal A, Kumar R: Dietary intake in third trimester of pregnancy and prevalence of LBW: A community-based study in a rural area of Haryana. Indian J Community Med. 2007; 32(4): 272–76. Publisher Full Text\n\nSachar R, Kaur N, Soni R: Energy Consumption during Pregnancy & its relationship to Birth Weight–A Population based Study from Rural Punjab. Indian J Community Med. 2000; 25(4): 166–69. Reference Source\n\nKapoor SK, Kumar G, Pandav CS, et al.: Incidence of low birth weight in rural Ballabgarh, Haryana. Indian Pediatr. 2001; 38(3): 271–275. PubMed Abstract\n\nBiswas R, Dasgupta A, Sinha RN, et al.: An epidemiological study of low birth weight newborns in the district of Puruliya, West Bengal. Indian J Public Health. 2008; 52(2): 65–71. PubMed Abstract\n\nNational Family Health Survey. Accessed August 2, 2016. Reference Source\n\nHaralkar SJ, Khandekar SV, Pore PD, et al.: Socio-Demographic Correlates of Anaemia among Married Women in Rural Area of Maharashtra. Indian J Public Heal Res Dev. 2013; 4(3): 107–110. Publisher Full Text\n\nRaj A, Saggurti N, Balaiah D, et al.: Prevalence of child marriage and its effect on fertility and fertility-control outcomes of young women in India: a cross-sectional, observational study. Lancet. 2009; 373(9678): 1883–1889. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDirectorate of Economics and Statistics, Maharashtra. Accessed August 10, 2016. 
Reference Source\n\nAhankari A, Bapat S, Myles P, et al.: Dataset 1 in: Factors associated with preterm delivery and low birth weight: a study from rural Maharashtra, India. F1000Research. 2017. Data Source" }
[ { "id": "19633", "date": "06 Feb 2017", "name": "Rahul Ramesh Bogam", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIntroduction\nThe present study is a retrospective analysis of hospital-based data to identify local estimates of preterm delivery and low birth weight (LBW) in the Osmanabad district of Marathwada and to assess available maternal risk factors. As per the authors' information, this was the first study in the Marathwada region of Maharashtra State to explore information about maternal and child health indicators from this region. It is a well-written manuscript with appropriate presentation of results.\nA few suggestions/recommendations:\nObjectives / Goals\nThere is a need to mention clear/specific objectives/goals. In the introduction section, the authors tried to mention objectives but they need to be specified. For example: 'To investigate some key maternal risk factors' can be replaced by 'To determine/find out the association of maternal risk factors with.....'. In short, the objectives/goals can be re-framed.\nMethods\nAuthors have not justified the inclusion of this specific period - i.e. 1st January 2008 to 31st December 2014. Authors are encouraged to provide justification for the same. Detailed inclusion and exclusion criteria need to be mentioned in the METHODS section.\nDiscussion\nStrengths and limitations should be at the end of the discussion section rather than at the beginning. 
The heading 'Comparison with other studies' may be removed from the discussion section as the DISCUSSION itself reflects comparison with other studies. Please make sure that all TABLES are part of the RESULTS section, not of the DISCUSSION section.\nConclusion\nThis section can be supplemented with the heading \"RECOMMENDATIONS\", or there can be a separate section of recommendations as the authors have given recommendations based on study findings.\nKey words\nAuthors are encouraged to provide key words for their study.", "responses": [ { "c_id": "2585", "date": "27 Mar 2017", "name": "Anand Ahankari", "role": "Author Response", "response": "Dear Dr Bogam,  Thank you for your valuable time to review our research paper. I have provided a brief response to your comments below.  Regarding study objectives: In the abstract, we followed a recommended guideline of the journal, thus a separate title on the study objective was not included. The last part of the introduction is the study objective (\"We conducted a study to provide local estimates of preterm delivery and LBW and investigate some key maternal risk factors using hospital data from a rural Marathwada region in Maharashtra state, India\"). We are happy to re-frame this, if advised by the journal editors.  Regarding methods: The reason for the specific duration is mainly due to the project timeline. There is no other reason to use the given timeline.  Regarding discussion, conclusion and keywords: We have provided the manuscript, tables and datasets separately to the journal. The article type setting and sequence is solely managed by the journal. We submitted all files in accordance with the journal requirements. We also submitted keywords, and believe that those will appear in the final approved version.  Thank you once again for your valuable time. We hope that F1000Research readers will find this comment section useful.  
Dr Anand Ahankari" } ] }, { "id": "22043", "date": "05 May 2017", "name": "Jayashree Sachin Gothankar", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe term 'Low risk pregnancies' is to be used carefully, as it is not clear from the study whether the information about the absence of systemic disease is based on interpretation of tests conducted during the study or a history of absence of disease. If based on history, then the quality of data collected will be poor.\nSahli's method for hemoglobin estimation is a less reliable method for assessment of anemia.\nTo classify the birth as preterm, how were challenges to assess LMP addressed?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [ { "c_id": "2696", "date": "08 May 2017", "name": "Anand Ahankari", "role": "Author Response", "response": "Dear Prof Gothankar,  Thank you very much for reviewing our paper. I have provided an explanation below regarding 'Low Risk Pregnancies', which will be useful for readers.  
Data in our study: Systemic diseases, including hypertension and diabetes mellitus (DM), were evaluated through investigations in the hospital by a gynaecologist. Serum glucose level was assessed during routine antenatal care, and blood pressure was measured at the same time. The absence of systemic disease was confirmed prior to the delivery at the hospital.  Data from the Mumbare et al. paper (ref 6): As explained in our paper, the research findings of Mumbare et al. (6) used data from a district hospital (a tertiary/advanced healthcare facility), where high-risk pregnancies were predominantly referred. However, our data come from a rural hospital where advanced health services were not available; thus only low-risk pregnancies (with no systemic complications) were managed at HMF's hospital.  I hope that readers will find this additional explanation useful.  Thank you once again for your valuable time.  Dr Anand Ahankari" } ] } ]
1
https://f1000research.com/articles/6-72
https://f1000research.com/articles/6-71/v1
24 Jan 17
{ "type": "Correspondence", "title": "Comment on the “TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions”", "authors": [ "Corneel Coens", "Jan Bogaerts", "Laurence Collette" ], "abstract": "The purpose of this correspondence is to discuss the TrialsTracker, presented by Powell-Smith and Goldacre in their article ‘TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions’ (2016) as a tool to discover publication bias in clinical trial results. The findings from one specific organization (European Organization for Research and Treatment of Cancer; EORTC) are compared with the actual publication history of the trials in question. We also present shortcomings of the method being used and suggestions for improvement to the proposed algorithm.", "keywords": [ "TrialsTracker", "publication bias", "clinical trials" ], "content": "\n\nWe read with great interest the article by Drs Powell-Smith and Goldacre1 on the incomplete reporting of clinical trial results by pharmaceutical companies and research institutions. The necessity to publish results of all clinical trials, regardless of the trial outcome, cannot be denied2. Failure to do so is unethical not only towards patients who have participated in these trials but also towards the medical community at large, which relies on unbiased reporting to make informed decisions both in clinical practice and research3.\n\nThe European Organization for Research and Treatment of Cancer (EORTC), as a non-profit pan-European clinical research organization, very much supports this view. EORTC is driven by the mission to improve the survival and quality of life of cancer patients, and adheres to a strict policy to publish all of its completed trials in full4. 
We were therefore surprised by the results from TrialsTracker5 stating that 52.6% of EORTC trials are missing results (20 trials out of 38). We downloaded the full trial dataset used by the tracker via GitHub (https://github.com/ebmdatalab/trialstracker). After selection according to the set criteria (i.e. completed since 01/01/2006, interventional phase II or III, led by EORTC), a total of 29 relevant trials were found. The tracker classified these as: 14 with successful result reporting and 15 (51.7%) without results. We identified the latter 15 trials through the NCT ID number and cross-referenced this with the EORTC internal bibliography list. This (manual) investigation yielded the following results (see Table 1):\n\nCaption:\n\nNCT ID number: Identification number according to the ClinicalTrials.gov registry.\n\nEORTC ID number: Identification number according to EORTC.\n\nStudy title: Title of the study protocol.\n\nReference: the reference to the publication of the main results if available.\n\nPublication status: Status of the publication of the main results according to the EORTC bibliography listing.\n\nTrialstracker overdue status: Status of the publication of the main results according to the TrialsTracker algorithm. TRUE = overdue (i.e. not published) / FALSE = not overdue (i.e. published).\n\nReason trialstracker in/exclusion: Main reason for discrepancy or accordance between EORTC and TrialsTracker publication status.\n\n- A total of 9 trials had been successfully published, but the NCT ID number was not listed in PubMed’s Secondary Source ID field. For all of these trials the NCT ID was stated in the article itself and a link to the correct reference was provided in the publications section of ClinicalTrials.gov\n\n- Three further trials had been recently successfully published without mention of the NCT ID number. 
The reference was not yet present in ClinicalTrials.gov but was scheduled to be updated soon.\n\n- The last three trials were still undergoing analysis, and the planned publications were in various stages of development.\n\nThis would put the EORTC under-reporting “score” at 3/29 or about 10% of its trials being overdue for publication.\n\nOur investigation revealed several shortcomings of the automated tracker algorithm:\n\n- The decision to only accept results posted directly in ClinicalTrials.gov or with a listed NCT ID in PubMed’s Secondary Source ID field is very restrictive. EORTC does not post results in ClinicalTrials.gov directly as this presents a substantial administrative burden, and does not allow results to be put into context. Other organizations may be in the same situation.\n\n- The authors state that since 2005 “all major medical journals (through the International Committee of Medical Journal Editors) have required trials to be registered, and all trials should include their registry ID in the text.” The majority of our trials, as identified by the tracker, fulfill these criteria, yet several were incorrectly classified due to absence of a specific PubMed field provided by the medical journals. Despite this omission, these trial results could be correctly found through recognized databases such as ClinicalTrials.gov and PubMed or even a standard search engine like Google.\n\n- For at least two studies, the NCT ID PubMed link was available for a publication that did not contain the actual study results. The EORTC 55971 study on neoadjuvant treatment in ovarian cancer was published in NEJM in 20106. 
Yet this publication was not identified by the tracker, but two subsequent publications on exploratory subgroup analyses were considered as evidence of trial results publication7,8.\n\nWe also want to introduce two caveats to take into account when refining a tracker such as the one proposed:\n\n- The algorithm can be easily manipulated to inflate the success rate for any trial sponsor by either not listing trials as completed or by listing them as terminated in the registry.\n\n- Also, once the trial is completed, any publication with an adequate NCT ID PubMed link is sufficient for the TrialsTracker algorithm, which means articles on quality assurance, subgroups, translational research, prognostic models, or other data not containing actual trial results will inflate the statistics.\n\nAs a general observation, we feel that tracking publications linked to trials without checking these publications for accuracy and adequacy represents a simplistic measure of publication reporting. A substantial source of bias lies in the incorrect publication of trial results, often done with the intention to present larger treatment effects9. We feel such a tracking system, by increasing pressure to publish all trials on short notice, may contribute to the problem by leading to compromises on the quality of the publication.\n\nA straightforward approach to resolve this could be to add to clinical trial registries an indicator on the publication status of final trial results. The sponsor would be responsible for updating this indicator and for providing the actual reference. Registry administrators could then check the appropriateness of the reference based on criteria already required to check online posting of results, therefore providing independent confirmation that the trial results are adequately published. 
Such an indicator would allow for more accurate reporting and could be used to set up an automatic alert system.\n\nThe EORTC welcomes initiatives to improve clinical trial reporting. The EORTC has an explicit data sharing policy (http://www.eortc.org/investigators/data-sharing/) that allows anyone to request direct access to clinical trial data from completed studies. In addition to ClinicalTrials.gov, EORTC also registers all its clinical trials that fall under the EU clinical trial directive (Directive 2001/20/EC) by default into EudraCT (https://eudract.ema.europa.eu). Since January 2016, summary clinical trial results must be made publicly available through the EU Clinical Trials Register for all EudraCT registered trials. The authors may consider this as an additional source of trial results sharing.\n\nOur conclusion is that the proposed TrialsTracker is a much needed and welcome initiative. However, in this first implementation it is too simplistic to be of real informative use and its conclusions are misleading. We hope that improvements to the algorithm will converge in a useful tool that can address the very real and serious concern of unreported clinical trial results.", "appendix": "Author contributions\n\n\n\nCC prepared the first draft. JB and LC were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nAll authors are employees of the EORTC.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nWe thank Caroline De Bie for proofreading and editing of this text.\n\n\nReferences\n\nPowell-Smith A, Goldacre B: The TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions [version 1; referees: 2 approved]. F1000Res. 2016; 5: 2629. Publisher Full Text\n\nChalmers I: Underreporting research is scientific misconduct. JAMA. 
1990; 263(10): 1405–8. PubMed Abstract | Publisher Full Text\n\nAntes G, Chalmers I: Under-reporting of clinical trials is unethical. Lancet. 2003; 361(9362): 978–9. PubMed Abstract | Publisher Full Text\n\nEORTC publication policy POL-009. Accessed 13/12/2016. Reference Source\n\nhttps://trialstracker.ebmdatalab.net/#european-organisation-for-research-and-treatment-of-cancer-eortc; Accessed 17/11/2016.\n\nVergote I, Tropé CG, Amant F, et al.: European Organization for Research and Treatment of Cancer-Gynaecological Cancer Group.; NCIC Clinical Trials Group. Neoadjuvant chemotherapy or primary surgery in stage IIIC or IV ovarian cancer. N Engl J Med. 2010; 363(10): 943–53. PubMed Abstract | Publisher Full Text\n\nVizzielli G, Fanfani F, Chiantera V, et al.: Does the diagnosis center influence the prognosis of ovarian cancer patients submitted to neoadjuvant chemotherapy? Anticancer Res. 2015; 35(5): 3027–32. PubMed Abstract\n\nvan Meurs HS, Tajik P, Hof MH, et al.: Which patients benefit most from primary surgery or neoadjuvant chemotherapy in stage IIIC or IV ovarian cancer? An exploratory analysis of the European Organisation for Research and Treatment of Cancer 55971 randomised trial. Eur J Cancer. 2013; 49(15): 3191–201. PubMed Abstract | Publisher Full Text\n\nEmerson GB, Warme WJ, Wolf FM, et al.: Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med. 2010; 170(21): 1934–9. PubMed Abstract | Publisher Full Text" }
[ { "id": "19624", "date": "08 Feb 2017", "name": "Tamas Ferenci", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper from Coens et al discusses an important aspect of the recently published TrialsTracker database/analysis of the EBM Data Lab (University of Oxford). The paper introducing it from Drs. Powell-Smith and Goldacre 1 initiated exciting comments 1, press statements 2 and even tweet exchanges with pharmaceutical companies 3, raising similar concerns; the present article is the first however to formalize such criticisms through the detailed analysis of a particular institute's trials.\n\nI welcome this investigation as a systematic substantiation of the concerns raised in the aforementioned sources. Albeit pertaining to a single institute, the results are at least illustrative – even if not representative – for the entire TrialsTracker project. (A notable – and important – exception that is not discussed in the present manuscript is the question of results posted to company websites.)\n\nThe presentation of the findings from Coens et al is almost flawless in my opinion, with the following minor remarks:\nI don't see how 38 changed to 29 (number of relevant EORTC studies). The said criteria – completed, has completion_date and it is later than 1 Jan 2006, interventional, phase 2 or 3 – results indeed in 38 records. Yet, Coens et al reports only 29, saying that these are \"relevant\", but how they define relevance (ie. 
what 9 studies were excluded and why) is not discussed at all.\n\nTable 1 should be improved by clearly marking which trial belongs to which category: has results or not, and in the latter case the reason from the three listed (NCT ID not in SI field, no NCT ID given in the publication, really not published). In the current form it is difficult to match the concrete trials to the authors' statements (e.g. what are the trials that have a publication but no NCT ID given?).\n\nMy major comments are therefore rather about missing details and potential further improvements:\nTrialsTracker looks up results from clinicaltrials.gov (results section) and Pubmed (only through NCT ID as SI). While non-publication in Pubmed might have several – not necessarily malicious or negligent – reasons, such as the publication being rejected or a long review process, clinicaltrials.gov has no such limitation, so non-disclosure there seems to be much more inexcusable at first glance. Coens et al touch this issue, but only extremely briefly, stating that \"[uploading results to clinicaltrials.gov] presents a substantial administrative burden, and does not allow results to be put into context\". I'd really welcome a more detailed discussion of the first part: how large is this burden, is it in fact prohibitive...? (Especially for organizations with tens of thousands of employees and clinical trials with a budget in excess of ten million US dollars.) As far as the second part is concerned, I disagree: the aim of the deposition of results in repositories is not their presentation \"in context\", but simply making them available. Not that the presentation of the context is not important, but it is a separate issue. (Actually, availability of raw results might even be beneficial, avoiding potential biases introduced by a biased context 4.)
Thus, this sentence of the authors should be elaborated in more detail.\n\nCoens et al very instructively point out that the decision on where to look for the NCT ID in a publication is a specificity/sensitivity trade-off. Ironically, the restriction of the search to SI, which was originally meant to exclude studies not reporting main results, does sometimes include false results (as exemplified by EORTC 55971), and more importantly, the reverse can also be true. Extending the search to the whole abstract, however, might allow even more false results to enter. Interestingly, Powell-Smith is somewhat vague about this issue, stating that \"in our experience approximately 1.5% of PubMed records include a valid nct_id list in the abstract, but not the Secondary Source ID field\" without further details. I am positive that additional research into this topic would be beneficial.\n\nCoens et al are quoting Powell-Smith et al to justify that EORTC was acting correctly when NCT IDs were published in the text, regardless of where they appear (\"The authors state that since 2005 >>all major medical journals (through the International Committee of Medical Journal Editors) have required trials to be registered, and all trials should include their registry ID in the text.<< The majority of our trials, as identified by the tracker, fulfill these criteria\".) This quotation somewhat misleadingly suggests that the ID can appear anywhere in the text (and not necessarily in SI – as is the case for many EORTC publications), but that is not entirely correct: MEDLINE's guideline explicitly requires the NCT ID to be recorded in that particular field, i.e. the secondary source ID field 5 (as also cited by Powell-Smith et al).
Thus, TrialsTracker's requirement is not arbitrary, as one might believe based on the description of Coens et al: \"several were incorrectly classified due to absence of a specific PubMed field provided by the medical journals\" – SI is not just \"a specific PubMed field\".\n\nMore importantly, in some cases, the NCT ID is given not only outside the SI field, but outside the abstract. (E.g. NCT00003941. An even more problematic example is NCT00021450, where the NCT ID's location is behind the paywall.) We can argue whether the script should look in the abstract or only the SI field, but obviously there is no realistic way to scan the full text of the articles, so these cases are clearly invisible to any automated algorithm, no matter how elaborate.\n\nFinally, let me note that the authors – quite rightly – summarize the drawbacks of the automation, but to be balanced, its strengths should have been mentioned: first, that the automated nature allows investigation at a scale that cannot be achieved – or only through extreme measures – with manual checking, and second, perhaps even more importantly, that the automated algorithm is totally transparent and surely free of any subjective decisions. And third, even if the algorithm is imperfect, at least it is uniformly imperfect, thus the results are likely comparable in spite of this. As long as the distribution of these false results is similar among sponsors, their TrialsTracker scores can still be compared.
(That's the reason why one cannot simply correct EORTC's results, for example based on this paper, because that would mean that even this comparability is lost.)", "responses": [] }, { "id": "19623", "date": "13 Feb 2017", "name": "Adam Jacobs", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis article is necessarily limited in scope as it presents results for just one trial sponsor, and we do not know if that sponsor is representative. However, as the authors are from just one institution, it is of course perfectly reasonable that they have focused on their own institution, so this is not intended as a criticism, merely as an observation.\nCoens et al have done a good job of presenting a more detailed analysis of the studies from their institution, and have shown convincingly that the estimate of their publication rate from the automated Trial Tracker was substantially inaccurate for their institution, by means of the gold standard of a manual search. They provide a sensible and balanced discussion of the limitations of the automated search algorithm more generally, pointing to some possible unintended consequences. While those unintended consequences are at this stage purely speculative, it does no harm to bear in mind what the risks are of an automated process such as the Trials Tracker.\nI have 2 suggestions for improving the paper.
First, Coens et al state that only 29 of the 38 trials identified by the Trials Tracker were \"eligible studies\", which they define as \"completed since 01/01/2006, interventional phase II or III, led by EORTC\". When I applied those criteria myself to the EORTC trials identified by the Trials Tracker, I found 30 eligible studies. As far as I could tell from the Trials Tracker data, all 38 studies were completed since 01/01/2006 and were led by EORTC, and 8 studies were not drug interventions. It would be helpful if Coens et al could be more explicit about why they excluded 9 trials from their analysis.\nSecond, I think the finding that some trials were not identified as published by the Trials Tracker despite a publication that was clearly linked in the clinicaltrials.gov record deserves more emphasis. Although this is mentioned in the paper, a casual reader might miss it, and this is perhaps the most important finding in terms of a way in which the Trials Tracker could easily be improved.", "responses": [] } ]
1
https://f1000research.com/articles/6-71
https://f1000research.com/articles/6-70/v1
23 Jan 17
{ "type": "Method Article", "title": "ELIXIR pilot action: Marine metagenomics – towards a domain specific set of sustainable services", "authors": [ "Espen Mikal Robertsen", "Hubert Denise", "Alex Mitchell", "Robert D. Finn", "Lars Ailo Bongo", "Nils Peder Willassen", "Hubert Denise", "Alex Mitchell", "Robert D. Finn", "Lars Ailo Bongo", "Nils Peder Willassen" ], "abstract": "Metagenomics, the study of genetic material recovered directly from environmental samples, has the potential to provide insight into the structure and function of heterogeneous microbial communities.  There has been an increased use of metagenomics to discover and understand the diverse biosynthetic capacities of marine microbes, thereby allowing them to be exploited for industrial, food, and health care products. This ELIXIR pilot action was motivated by the need to establish dedicated data resources and harmonized metagenomics pipelines for the marine domain, in order to enhance the exploration and exploitation of marine genetic resources. In this paper, we summarize some of the results from the ELIXIR pilot action “Marine metagenomics – towards user centric services”.", "keywords": [ "Marine", "metagenomics", "pipelines", "gap analysis" ], "content": "Introduction\n\nMarine microbial genomics and metagenomics are arguably still in their infancies, but each discipline is rapidly expanding in terms of research activity and are converging against each other. At present, the lack of specialized databases for marine metagenomics1, as well as dedicated data management e-infrastructures and harmonized pipelines, makes implementation of large-scale studies challenging, and replication of analysis close to impossible. 
In addition, data production from metagenomics projects is growing exponentially due to reducing sequencing costs2, which demands optimized and flexible solutions for analysis of metagenomic data.\n\nTo address these challenges, UiT The Arctic University of Norway (part of ELIXIR Norway), together with the European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), initiated an ELIXIR pilot action with the following aims: (i) Identify the overlap between two existing metagenomics pipelines, EBI Metagenomics Portal (EMG)3 and META-pipe4, thereby opening the potential for interoperability; (ii) implement new or improve existing components in each pipeline to enrich the output; and (iii) perform a gap analysis to identify deficient areas of marine metagenomics analysis. The overall outcome of this pilot action was aimed at shaping the foundations from which the marine (meta)genomics community could establish long-term, sustainable service platforms. A description of this pilot action and a webinar video can be found at https://www.elixir-europe.org/about/implementation-studies/marine-metagenomics and https://www.elixir-europe.org/documents/update-elixir-pilot-actions-launched-2014-marine-metagenomics-towards-user-centric, respectively.\n\nThe EMBL-EBI has developed EMG, a generic platform, which aims to provide insights into the phylogenetic diversity and functional potential of all environmental samples, while UiT has specifically developed META-pipe towards the marine domain, with a focus on bioprospecting. In this article, we describe a comparison of the two pipelines, using the outputs of equivalent input sequence data to illustrate the similarities and differences.\n\nFigure 1 shows a schematic of the pipeline workflows from META-pipe and EMG. 
A simple visual comparison of these workflows reveals that while there are some commonalities between the pipelines, there are a series of key differences in the tools and approaches to the analysis.\n\nBriefly, the main differences between the pipelines are in the preprocessing and taxonomic classification steps. More specifically, while both preprocessing steps perform filtering of low-quality reads and length filtering, they diverge thereafter. META-pipe performs assembly of reads from which small-subunit (SSU) ribosomal RNA (rRNA) sequences have been filtered out, whereas EMG merges overlapping paired-end reads into longer single reads (where appropriate) and performs taxonomic classification and functional analysis on unassembled sequences. Regardless of the presence or absence of sequence assembly, both pipelines use rRNASelector5 for the identification of 5S, 16S and 23S rRNA sequences, before passing extracted rRNAs to taxonomic classification tools. EMG uses QIIME6 with Greengenes7 and a closed-reference OTU picking strategy for taxonomic classification and annotation of 16S rRNA, while META-pipe uses LCAClassifier8 coupled with a manually curated custom database coined SilvaMod, derived from SILVA9 and especially created for LCAClassifier. EMG masks all rRNA regions before passing them to the functional analysis section of the pipeline, whereas META-pipe removes all rRNA prior to assembly. There are also some minor differences between the pipelines in functional annotation. EMG uses FragGeneScan10 for gene prediction and a subset of the InterPro database together with InterProScan511 for functional assignment of predicted coding sequences (CDSs).
META-pipe uses MetaGeneAnnotator (MGA)12 for gene prediction and InterProScan5 with the full InterPro database, and BLAST against PRIAM13 and UniProt14 for additional functional assignment.\n\nTo understand the impact of the outlined differences in functional and taxonomic identifications, and as a prelude to future harmonisation, we undertook a comparison of the results from four different environmental datasets using EMG v2.0 and META-pipe. In addition, as a part of the ELIXIR pilot action project, we performed a gap analysis and arrived at recommendations for developing sustainable ELIXIR services for the marine metagenomics domain.\n\n\nMethods\n\nFor comparison of the two pipelines, four previously unpublished environmental datasets were selected and run against both pipelines. These include two environmental samples from sediments at two different locations in the Barents Sea. Two samples, \"Muddy\" from the southeast of Edgøya (N77 08 40, E26 31 16) and \"Sandy\" from the intertidal zone at Nordenskioeldøya located in the Hinlopen Strait (N79 12 49, E19 18 58), were collected during two research cruises in 2010 and 2012, respectively. Sequencing libraries were constructed using the Nextera XT DNA Library Preparation Kit and the Nextera XT Index Kit (Illumina Inc). The samples from the Barents Sea were sequenced with the Illumina MiSeq platform using the MiSeq Reagent Kit v2 (500 cycle) with a 2 × 250 bp paired-end read length configuration. The other two samples were from moose and sea urchin; frozen moose faeces found in Grunnfjorden (N69 59 42, E19 36 16) and faeces from a seawater tank containing sea urchin at Nofima (Tromsø), respectively. These two latter samples were sequenced using the MiSeq Reagent Kit v3 (600 cycle) in a 2 × 300 bp paired-end read length configuration (Supplementary Table 1).
The metagenomic sequence reads have been deposited at the European Nucleotide Archive (http://www.ebi.ac.uk/ena) under the sample accession numbers ERS624612 (muddy), ERS624613 (sandy), ERS624611 (moose) and ERS738393 (sea urchin).\n\nFiltering of homologous sequences may reduce the assembly complexity and the number of misassembled contigs. Since META-pipe, contrary to EMG, uses assembled reads (contigs) for functional analysis, we wanted to investigate the effect of removing small-subunit (SSU) ribosomal RNA (rRNA) sequences before assembly.\n\nTo filter prokaryotic rRNA reads, including 5S, 16S and 23S rRNA, hidden Markov models (HMMs) from rRNASelector were implemented as a part of the functional annotation pipeline. HMMs identify metagenomic fragments coding for rRNA genes if they meet the following two conditions: (i) a sequence read shows an overlap (>60 bp) with an rRNA HMM profile and (ii) the E-value is below 10^-5. Fragments satisfying these conditions were selected. The unselected fragments are stored for subsequent assembly and functional analyses.\n\nAll datasets, with or without rRNA filtering, were assembled using MIRA (version 4.0.2) in de novo mode, with kmer 31 and forced non-IUPAC bases15.\n\nBefore filtering and assembly, the datasets were quality checked with FastQC (version 0.11.3; available at http://www.bioinformatics.babraham.ac.uk/projects/fastqc/) and filtered with PRINSEQ (version 0.20.4)16 (parameters: -trim_left 10 -trim_right 10 -min_len 50 -ns_max_p 10). Additionally, datasets with particularly low quality at the 3’ end (under Q20) were trimmed using the parameter -trim_qual_right 20.\n\nFor evaluation of the rRNA filtering step, the assemblies with and without filtering were compared using MetaQUAST v3.217. For the two sediment samples, MetaQUAST was run in the reference-based evaluation mode with an in-house generated marine reference database, MarRef, as reference genomes.
MarRef consists of 337 manually curated complete prokaryotic genomes (unpublished, curated by Terje Klemetsen), with a total length of 1135 Mb (Supplementary Table 2). For the two faecal metagenomes, moose and sea urchin, MetaQUAST was run in de novo evaluation mode. In this case, instead of using a reference database, MetaQUAST downloads reference sequences automatically based on rRNA sequence alignments. To do so, MetaQUAST searches the SILVA rRNA database using BLASTN with contigs as queries, thereby identifying species present in the dataset. The genomes of these species are then downloaded from NCBI and used as a reference database for assembly evaluation. For these latter samples, MetaQUAST identified 64 reference genome sequences with a total length of 262.8 Mb (Supplementary Table 3).\n\nThe four datasets selected for comparison were run on both pipelines with default parameters. The “Muddy” and the “Moose” dataset were analysed in depth, as we wanted to examine any particular differences using the two pipelines with respect to both marine and gut biomes. EMG and META-pipe both use rRNASelector for selecting rRNA sequences from metagenomics shotgun reads. META-pipe uses LCAClassifier with default parameters (LCA relative range: 2%; minimum bit score: 155) for rRNA annotation, which uses the manually curated SilvaMod database – a database based on the taxonomical annotation used in SILVA9 SSURef NR release 106. The SilvaMod also includes annotations to the NCBI taxonomy database to increase resolution of eukaryotic classifications based on mitochondrial and plastid 16S rRNA sequences. It offers resolution down to genus rank and has been shown to perform especially well on environmental datasets8. EMG uses QIIME for taxonomic classification, with GreenGenes7 version 13.8 database as a reference for the classification (default closed-reference OTU picking protocol with reverse strand matching enabled). 
Unique taxa identified were counted for each analysis and the results were visualized using Krona charts18.\n\nMETA-pipe uses MetaGeneAnnotator (MGA) for prediction of protein-coding (CDS) regions in contigs longer than 500 bp after assembly with MIRA. The MGA uses a self-training model from input sequences for predictions, in addition to statistical models of bacterial, archaeal and prophage genes. The MGA not only sensitively detects typical genes, but also detects atypical genes, such as horizontally transferred genes and prophage genes in prokaryotic sequences. EMG uses FragGeneScan, which combines sequencing error models and codon usages in a hidden Markov model for the prediction of protein-coding regions, regardless of species.\n\nFor functional assignment of predicted CDSs, EMG uses a subset of InterPro release 50.0 (Pfam19,20, TIGRFAM21, PRINTS22,23, PROSITE patterns24, CATH-Gene3d25), while META-pipe uses the full InterPro release 5.10-50.0, in addition to BLAST against PRIAM version 2.0 and UniProtKB release 2014_09 databases. Gene ontology (GO) terms for all predicted CDSs in the “Muddy” dataset obtained from InterProScan5 were converted to GO-slim terms using OBO-files maintained by the Gene Ontology Consortium26,27, and used for functional comparison between META-pipe and EMG. This dataset was selected for in-depth functional comparison to emphasize the marine topic of this pilot project.\n\n\nResults and discussion\n\nAssembly of metagenomic reads is a complex and challenging task, due to both the computational overheads and biological complexity. Near-identical sequences, such as mobile genetic elements, homologous genes and conserved regions, combined with high diversity, low coverage and short reads, often result in errors and chimeric assemblies.
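The read-level rRNA screening rule applied before assembly (from Methods: a fragment is flagged as rRNA when it overlaps an rRNA HMM profile by more than 60 bp and the hit E-value is below 10^-5) can be sketched as follows. The hit tuples are hypothetical illustrations, not rRNASelector's actual output format:

```python
# Partition reads into rRNA fragments (excluded before assembly) and the
# remainder (kept for assembly and functional analysis), applying the two
# conditions from the Methods section. The (read_id, overlap_bp, evalue)
# tuples below are illustrative stand-ins for real HMM search hits.
MIN_OVERLAP_BP = 60
MAX_EVALUE = 1e-5

def is_rrna(overlap_bp, evalue):
    """A fragment is flagged as rRNA only if both conditions hold."""
    return overlap_bp > MIN_OVERLAP_BP and evalue < MAX_EVALUE

hits = [
    ("read_001", 120, 1e-12),  # strong 16S hit -> filtered out
    ("read_002", 45, 1e-20),   # overlap too short -> kept for assembly
    ("read_003", 200, 1e-3),   # E-value too weak -> kept for assembly
]

rrna = [r for r, o, e in hits if is_rrna(o, e)]
kept = [r for r, o, e in hits if not is_rrna(o, e)]
print(rrna)  # ['read_001']
print(kept)  # ['read_002', 'read_003']
```

Note that both conditions must hold: a long but weak alignment, or a strong but short one, leaves the read in the assembly pool.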
To analyse the effect of filtering rRNA in the assembly process in META-pipe, we assembled the four datasets with and without rRNA reads.\n\nUsing MetaQUAST to evaluate the effect of rRNA filtering (Table 1), we observed a marginal reduction in the total number of contigs and total length for the rRNA filtered datasets compared to unfiltered. Both marine sediment rRNA unfiltered datasets contain more possible misassembled contigs compared to the corresponding rRNA filtered dataset, 4 and 8 for the Muddy and Sandy datasets, respectively. While the Muddy sample contains no misassemblies, the Sandy dataset contains four misassemblies, where flanking sequences may align to different reference genomes, overlap, or align over 1 kb away from each other. Similarly, the unfiltered rRNA datasets have more mismatches and indels compared to the two filtered rRNA assemblies. For the Muddy sample, a reduction of mismatches and indels by a factor of 4 was observed for the filtered dataset, while the Sandy sample gave a reduction by a factor of 3. We believe these mismatches and indels stem from the inherent conservation of rRNA sequences, which causes spurious contigs in assembly.\n\n1MarRef database length: 1 135 Mb, 2MetaQUAST downloaded reference database length: 262 Mb.\n\nA very low percentage of the assembled contigs from the marine sediment samples mapped to the reference genomes (0.001% – 0.009%). However, as MarRef is still relatively small compared to the huge diversity estimated in marine sediments, the low percentage of mapped contigs is not surprising. Consequently, it is difficult to achieve a thorough estimate of misassemblies, mismatches and indels simply because of poor reference coverage. We believe that the number of misassemblies will increase as the marine reference database grows. The marine sediment datasets were also tested using MetaQUAST in de novo evaluation mode, where references are identified and downloaded automatically.
MetaQUAST generated a reference database of 40 genomes, but assembled contigs only mapped to one of these identified references (in comparison to 159 out of 337 using the in-house marine reference database).\n\nFor the faecal datasets, contigs are longer, and more contigs mapped to the reference database (0.083% – 0.286%), which probably is a consequence of higher coverage. However, the number of misassemblies, possible misassemblies, mismatches and indels increased significantly, although the MetaQUAST generated reference database for these samples was considerably smaller than MarRef.\n\nRemoval of rRNA before assembly clearly reduces misassemblies, possible misassembled contigs, mismatches and indels, but the lack of specific marine databases hampers the comparison and benchmarking of the different approaches using MetaQUAST.\n\nIn general, META-pipe with the LCAClassifier/SilvaMod configuration identifies more unique taxa for the marine sediment datasets, while EMG, using QIIME/Greengenes, identifies more taxa for the faecal datasets, as shown in Table 2. As LCAClassifier generally offers resolution up to genus rank, we also observe that META-pipe is more reluctant to classify at species level, compared to EMG.\n\nNumbers in parentheses include eukaryotic hits classified by META-pipe.\n\nOur results are in agreement with Lanzen et al.8, who showed that classification using SilvaMod performed better than with Greengenes, particularly when applied to environmental sequences. META-pipe also offers eukaryotic classifications based on mitochondrial and plastid 16S rRNA sequences. However, in general, SSU rRNA gives limited resolution as a taxonomic marker for eukaryotic sequences compared to internal transcribed spacers (ITS) or large subunit (LSU) rRNA28.\n\nTo obtain a more detailed overview of the differences between the pipelines, we explored the marine \"Muddy\" dataset and the gut/intestine “Moose” dataset in more depth.
While META-pipe was able to predict 6584 16S rRNA sequences, EMG predicted 4339 in the “Muddy” dataset (Figure 2). For the “Moose” dataset, META-pipe predicted 43949 and EMG predicted 25018 (Figure 3). As this step is in practice identical for both pipelines, the dissimilarities in rRNA prediction stem from the preprocessing step in EMG, where overlapping reads are merged and the total read count reduced from 18 to 12 million. Reduction of input sequence reads by one third also reduces predicted rRNA sequences by the same fraction. Although there were dissimilarities in the number of predicted 16S rRNA, the most apparent difference observed between the pipelines was the fraction of unassigned sequences.\n\nKrona chart representation of taxonomic classification of the “Muddy” dataset from META-pipe (A) and EBI Metagenomics Portal (B) pipelines.\n\nKrona chart representation of taxonomic classification of the “Moose” dataset from META-pipe (A) and EBI Metagenomics Portal (B) pipelines.\n\nIn the “Muddy” dataset, EMG classified 2500 sequences (58%), while META-pipe was able to classify 6119 (93%). However, if we ignore unassigned and eukaryotic sequences from the META-pipe data, most high-level nodes in the taxonomy hierarchy have comparable relative fractions, e.g. Planctomycetes, Bacteroidetes, Acidobacteria, Chloroflexi, Nitrospirae and Actinobacteria. The largest inconsistencies were in the Archaea and Proteobacteria, where EMG assigned 3.5% (89) and 57.9% (1448), respectively, to these nodes, while META-pipe assigned 5.3% (320) and 49.3% (2913). For the “Moose” dataset, EMG classified 15630 sequences (62%), while META-pipe classified 41130 (94%). As in the “Muddy” dataset, trends are similar when ignoring unassigned and eukaryotic fractions, with most identified taxa showing only marginal differences between the two pipelines.
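The assigned percentages quoted above follow directly from the reported counts; a minimal arithmetic check, with all values taken verbatim from the text:

```python
# Recompute the classified fractions for the "Muddy" and "Moose" datasets
# from the raw counts reported in the text; the paper rounds to whole percent.
counts = {
    # dataset/pipeline: (classified sequences, predicted 16S rRNA sequences)
    "Muddy/EMG":       (2500, 4339),
    "Muddy/META-pipe": (6119, 6584),
    "Moose/EMG":       (15630, 25018),
    "Moose/META-pipe": (41130, 43949),
}

fractions = {k: classified / total for k, (classified, total) in counts.items()}
for name, frac in fractions.items():
    print(f"{name}: {frac:.0%}")
# Muddy/EMG: 58%, Muddy/META-pipe: 93%, Moose/EMG: 62%, Moose/META-pipe: 94%
```

The recomputed fractions match the rounded percentages reported in the text for both datasets and both pipelines.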
The discrepancies observed between EMG and META-pipe in rRNA sequence prediction and taxonomic classification depend heavily on the methods, parameters and settings, and the underlying databases used, meaning a more thorough benchmarking of the different methods and databases is needed to determine the sensitivity, specificity and accuracy.\n\nAlthough we observe comparable results from the taxonomic classification, there is a need for benchmarking of tools for rRNA prediction and classification in addition to dedicated rRNA databases for the marine domain.\n\nTo gain more insight into the effect of assembling compared to merging paired-end sequencing reads before CDS prediction and functional assignment, we compared the output results from the functional analysis of the “Muddy” sample (ERS624612) from META-pipe and EMG.\n\nIn short, we expect the difference will manifest in three different ways when comparing outputs from the two pipelines. Firstly, longer or full-length predicted CDSs would give rise to better functional assignment than shorter CDSs. Secondly, since an assembly will reduce the relative coverage to a consensus sequence (contig), the results will not be quantifiable in the same way as an analysis performed on single or merged reads. Thirdly, assembly will reduce the number of candidate CDSs to a subset containing CDSs from the most abundant organisms in the dataset, depending on the complexity of the dataset, sequencing technology and assembly quality.\n\nEMG predicts 11 572 617 CDSs (from 12 103 194 merged reads), while META-pipe predicts a total of 47 434 CDSs (from 25 581 assembled contigs > 500 bp), which amounts to 0.4% of the EMG total. We explored the distribution of predicted gene lengths from both pipelines (Figure 4).
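The 0.4% figure, and the per-read and per-contig CDS yields discussed below, follow directly from the reported counts:

```python
# Sanity-check the CDS counts reported for the "Muddy" sample:
# the META-pipe/EMG ratio, and CDSs per input unit for each pipeline.
emg_cds, emg_reads = 11_572_617, 12_103_194      # EMG: CDSs from merged reads
metapipe_cds, metapipe_contigs = 47_434, 25_581  # META-pipe: CDSs from contigs > 500 bp

ratio = metapipe_cds / emg_cds
print(f"META-pipe CDSs as a fraction of EMG: {ratio:.1%}")                  # ~0.4%
print(f"EMG CDSs per merged read: {emg_cds / emg_reads:.1f}")               # ~1.0
print(f"META-pipe CDSs per contig: {metapipe_cds / metapipe_contigs:.1f}")  # ~1.9
```

The assembly step thus collapses millions of candidate reads into tens of thousands of contigs, each of which yields roughly two predicted genes on average.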
On average, META-pipe predicts genes of 155 amino acids in length and the longest gene is 1996 amino acids, while EMG predicts genes of 73 amino acids in length and the longest gene is 162 amino acids.\n\nEMG, EBI Metagenomics Portal.\n\nEMG predicts approximately 1.0 CDS per merged read, while META-pipe predicts 1.9 CDSs on average per contig. Not surprisingly, the longer the contigs the more CDSs predicted, but what effect does this have on the functional assignment of each CDS and the microbial community as a whole? To answer this question, we compared the accumulated number of GO-slim annotations for each analysis using the “Muddy” dataset. In general, the more GO-slim annotations each CDS has, the better the description of the molecular function, biological process, and cellular component of its gene product.\n\nEMG provided a total of 28 942 422 accumulated GO-slim annotations for the predicted CDSs, while META-pipe only provided 565 125 accumulated annotations, which amounts to roughly 2% of the EMG total. However, META-pipe provided on average 11.9 GO-slim annotations per CDS, while the number for EMG is 2.5 GO-slim annotations per CDS, indicating that the longer predicted CDSs obtain a better functional description. Additionally, META-pipe utilizes all available databases shipped with InterProScan5, in contrast to the reduced set utilized by EMG, which naturally provides more potential GO annotations per annotated gene. How does this affect the functional assignment of the community as a whole?\n\nAs shown in Figure 5, the effects are relatively small on top-level terms when GO-slim annotations are sorted. Most of the top-level terms (e.g. molecular function, biological process and cellular component) in the GO hierarchy rank similarly due to accumulative counting (accumulated from counts in lower connected nodes in the directed acyclic GO-slim graph). Less common terms are ranked somewhat differently, e.g.
protein binding, cell communication and carbohydrate metabolism. These differences arise from the observed differences in GO-slim annotation for each predicted CDS in the two pipelines. As META-pipe performs assembly, DNA sequences from low-abundance organisms will effectively be excluded from the functional analysis due to insufficient coverage, which in turn changes the GO profile compared to the EMG analysis.\n\nThickness of bars corresponds to fraction size of accumulated GO-slim annotations for each pipeline. GO, Gene Ontology; EMG, EBI Metagenomics Portal.\n\nThe functional assignment of the “Muddy” sample is comparable between EMG and META-pipe. However, a more thorough analysis has to be performed to understand the differences observed in low-level GO terms.\n\nThroughout the project, several changes and improvements were implemented to harmonize the two pipelines, shorten their processing time and enrich their output. These included masking of homologous sequences before assembly to reduce misassemblies, adding new databases to enhance functional annotation, and optimizing and modifying databases to reduce wall-time. We identified several key steps and file formats within the respective workflows of each pipeline where intermediate data could be interchanged, allowing for potential interoperability between pipelines. Both pipelines have seen improvements since the start of this project. The EMG pipeline is now in version 3.029, while a new version/redesign of META-pipe is currently in development to improve computational constraints and functionality.\n\nIn order to develop sustainable ELIXIR services for marine metagenomics, we performed a gap analysis and identified four areas where action is urgently needed. These include the need to: i) standardise metagenomics data generation; ii) establish marine metagenomics resources; iii) develop gold standard pipelines for metagenomics analysis; and iv) explore HPC and storage technologies.
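One concrete way to support both the interoperability and the standardisation goals above is to record, next to every result file, which tool, version, parameters and reference databases produced it. A minimal sketch of such a provenance record; the schema and field names are purely illustrative, not a proposed standard, and the version string is a placeholder:

```python
import json

# Illustrative provenance record for a single processing step of one sample.
# All field names (and the "x.y" version string) are assumptions for this sketch.
step_record = {
    "sample_accession": "ERS624612",
    "step": "CDS_prediction",
    "tool": {"name": "MetaGeneAnnotator", "version": "x.y"},
    "parameters": {"min_contig_length_bp": 500},
    "reference_databases": [],
}

print(json.dumps(step_record, indent=2, sort_keys=True))
```

A chain of such records, one per step from sampling to archiving, would let differences between pipeline outputs be traced back to specific tools, parameters and databases.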
A short description of the four recommendations follows.\n\nMetagenomics data standards. The context in which marine metagenomics projects are conducted often gets lost, since these data are rarely submitted along with the sequence data. If these contextual data are missing, key opportunities for comparison and analysis across studies and environments are hampered, or even lost entirely. A metagenomics study should report on each processing step, from contextual data of sampling, through experimental variables of sequencing and metadata of sequence analysis, to parameters associated with archiving of the analysed data. Over the past five years, standards have been developed for describing how a sample was captured and sequenced (e.g. sampling and environment packages); these standards need to be extended to include the whole metagenomics experimental workflow from sample gathering to computational results. As illustrated above, analysis pipelines produce different results on the same input, and understanding whether the differences are real, i.e. coming from the biology of the system under investigation, or whether they are artefacts of the analysis methods, is non-trivial.\n\nMarine metagenomics data resources. Marine metagenomics research and innovation is limited by the lack of dedicated reference data resources. As indicated by the use of Greengenes in EMG, existing reference databases are generalized or biased, and the contextual data for the records is often incomplete or lacking. Due to the lack of coverage of marine organisms in existing databases, only about one quarter of sequences from typical marine samples can be annotated. To improve the characterization of marine environmental samples, the establishment of dedicated data resources for the marine microbial domain is urgently needed.\n\nGold standard pipelines.
As with most emerging bioinformatics fields, a myriad of tools that perform different types of metagenomics analysis are constantly being published or updated. Pipelines that aggregate such tools are therefore in constant flux. Which tool is the most appropriate for a specific task can be difficult to assess, particularly for new researchers entering the field. There is a need to evaluate several types of analysis tools (e.g. preprocessing of reads, prediction of CDSs and taxonomic assignment), and to define gold standard tools and databases.\n\nHigh-performance computing and storage technologies. Marine genomic datasets vary in size, ranging from tens of gigabytes for typical datasets, to terabytes for projects such as Tara Ocean30, OSD31 and Malaspina (information available at http://scientific.expedicionmalaspina.es/). Although some pipelines, such as EMG and META-pipe, have been designed for parallel execution on high-performance computing (HPC) clusters, there is a need for exploring more elastic storage and computation resource allocation, e.g. on academic or commercial clouds.\n\n\nConclusion\n\nWhile there are differences in the respective approaches, EMG and META-pipe provide comparable results. They have their own strengths and weaknesses, and it is clear that the optimal solution for the community would be harmonization and interoperability between the analysis platforms. There is still a need for improvements, e.g.
harmonization of the preprocessing step, and improvement of eukaryote taxonomic classification by implementing reference databases for internal transcribed spacers (ITS) and/or large subunit (LSU) rRNAs.\n\nThe outcome of the gap analysis has been disseminated to the ELIXIR-EXCELERATE Marine metagenomic infrastructure use case (https://www.elixir-europe.org/excelerate/marine), which will help to define the requirements and specifications for the establishment of a sustainable ELIXIR marine metagenomics infrastructure.\n\n\nData and software availability\n\nEBI Metagenomics Portal (EMG): https://www.ebi.ac.uk/metagenomics/\n\nMETA-pipe: https://galaxy-uit.bioinfo.no (Needs academic user affiliation (FEIDE) or NeLS user login)\n\nThe metagenomic sequence reads are available from the European Nucleotide Archive (http://www.ebi.ac.uk/ena) under the sample accession numbers ERS624612 (muddy), ERS624613 (sandy), ERS624611 (moose) and ERS738393 (sea urchin).", "appendix": "Author contributions\n\n\n\nEMR, RDF and NPW drafted the manuscript. EMR, HD and AM conducted all the experiments. 
All authors read, revised and approved the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding was provided from ELIXIR, EMBL-EBI and UiT The Arctic University of Norway.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Concetta de Santi for isolating DNA from the environmental samples and Seila Pandur for running the MiSeq sequencer.\n\n\nSupplementary materials\n\nSupplementary Table 1: Datasets used in the present analysis of the two pipelines.\n\nSupplementary Table 2: Genomes and Genbank accession numbers included in the in-house marine reference database MarRef.\n\nSupplementary Table 3: References identified using rRNA in MetaQUAST.\n\n\nReferences\n\nMineta K, Gojobori T: Databases of the marine metagenomics. Gene. 2016; 576(2 Pt 1): 724–728.\n\nBaker M: Next-generation sequencing: adjusting to data overload. Nat Methods. 2010; 7(7): 495–499.\n\nHunter S, Corbett M, Denise H, et al.: EBI metagenomics--a new resource for the analysis and archiving of metagenomic data. Nucleic Acids Res. 2013; 42(Database issue): D600–6.\n\nRobertsen EM, Kahlke T, Raknes IA, et al.: META-pipe - Pipeline Annotation, Analysis and Visualization of Marine Metagenomic Sequence Data. ArXiv 160404103 Cs. 2016.\n\nLee JH, Yi H, Chun J: rRNASelector: a computer program for selecting ribosomal RNA encoding sequences from metagenomic and metatranscriptomic shotgun libraries. J Microbiol. 2011; 49(4): 689–691.\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data.
Nat Methods. 2010; 7(5): 335–336.\n\nDeSantis TZ, Hugenholtz P, Larsen N, et al.: Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB. Appl Environ Microbiol. 2006; 72(7): 5069–5072.\n\nLanzén A, Jørgensen SL, Huson DH, et al.: CREST--Classification Resources for Environmental Sequence Tags. PLoS One. 2012; 7(11): e49334.\n\nQuast C, Pruesse E, Yilmaz P, et al.: The SILVA ribosomal RNA gene database project: improved data processing and web-based tools. Nucleic Acids Res. 2013; 41(Database issue): D590–D596.\n\nRho M, Tang H, Ye Y: FragGeneScan: predicting genes in short and error-prone reads. Nucleic Acids Res. 2010; 38(20): e191.\n\nJones P, Binns D, Chang HY, et al.: InterProScan 5: genome-scale protein function classification. Bioinformatics. 2014; 30(9): 1236–1240.\n\nNoguchi H, Taniguchi T, Itoh T: MetaGeneAnnotator: Detecting Species-Specific Patterns of Ribosomal Binding Site for Precise Gene Prediction in Anonymous Prokaryotic and Phage Genomes. DNA Res. 2008; 15(6): 387–396.\n\nClaudel-Renard C, Chevalet C, Faraut T, et al.: Enzyme-specific profiles for genome annotation: PRIAM. Nucleic Acids Res. 2003; 31(22): 6633–6639.\n\nUniProt Consortium: UniProt: a hub for protein information. Nucleic Acids Res. 2015; 43(Database issue): D204–D212.\n\nChevreux B, Pfisterer T, Drescher B, et al.: Using the miraEST assembler for reliable and automated mRNA transcript assembly and SNP detection in sequenced ESTs. Genome Res. 2004; 14(6): 1147–1159.
Schmieder R, Edwards R: Quality control and preprocessing of metagenomic datasets. Bioinformatics. 2011; 27(6): 863–4.\n\nMikheenko A, Saveliev V, Gurevich A: MetaQUAST: evaluation of metagenome assemblies. Bioinformatics. 2016; 32(7): 1088–90.\n\nOndov BD, Bergman NH, Phillippy AM: Interactive metagenomic visualization in a Web browser. BMC Bioinformatics. 2011; 12: 385.\n\nBateman A, Birney E, Durbin R, et al.: The Pfam Protein Families Database. Nucleic Acids Res. 2000; 28(1): 263–266.\n\nFinn RD, Coggill P, Eberhardt RY, et al.: The Pfam protein families database: towards a more sustainable future. Nucleic Acids Res. 2016; 44(D1): D279–D285.\n\nHaft DH, Selengut JD, White O: The TIGRFAMs database of protein families. Nucleic Acids Res. 2003; 31(1): 371–373.\n\nAttwood TK: The PRINTS database: A resource for identification of protein families. Brief Bioinform. 2002; 3(3): 252–263.\n\nAttwood TK, Coletta A, Muirhead G, et al.: The PRINTS database: a fine-grained protein sequence annotation and analysis resource—its status in 2012. Database (Oxford). 2012; 2012: bas019.\n\nSigrist CJ, de Castro E, Cerutti L, et al.: New and continuing developments at PROSITE. Nucleic Acids Res. 2013; 41(Database issue): D344–347.\n\nBuchan DW, Shepherd AJ, Lee D, et al.: Gene3D: Structural Assignment for Whole Genes and Genomes Using the CATH Domain Structure Database. Genome Res. 2002; 12(3): 503–514.
Ashburner M, Ball CA, Blake JA, et al.: Gene Ontology: tool for the unification of biology. Nat Genet. 2000; 25(1): 25–29.\n\nGene Ontology Consortium: Gene Ontology Consortium: going forward. Nucleic Acids Res. 2015; 43(Database issue): D1049–D1056.\n\nSantamaria M, Fosso B, Consiglio A, et al.: Reference databases for taxonomic assignment in metagenomics. Brief Bioinform. 2012; 13(6): 682–695.\n\nMitchell A, Bucchini F, Cochrane G, et al.: EBI metagenomics in 2016 -- an expanding and evolving resource for the analysis and archiving of metagenomic data. Nucleic Acids Res. 2015; 44(D1): D595–603.\n\nKarsenti E, Acinas SG, Bork P, et al.: A Holistic Approach to Marine Eco-Systems Biology. PLoS Biol. 2011; 9(10): e1001177.\n\nKopf A, Bicak M, Kottmann R, et al.: The ocean sampling day consortium. GigaScience. 2015; 4: 27.
[ { "id": "19670", "date": "13 Feb 2017", "name": "Marla I. Trindade", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors, using 4 different metagenome datasets, compare the assembly and annotation results of 2 pipelines (META-pipe and EMG). They additionally compare the assemblies after also filtering out rRNA reads.\nAs presented by the authors, the discrepancy observed between the 2 pipelines, particularly in predicting the taxonomic classification, was significant, and alarmingly different. While such a limitation is acknowledged by many published studies, and accepted in practice, I 100% agree with the authors that it is crucial for rigorous benchmarking to be conducted, and they rightly conclude that action is urgently needed - it goes against all research principles to present conclusions and hypotheses when results are generated using tools and methods which are accepted to completely bias the outcome. What actual scientific value do such studies offer given the discrepancies? Thus, for this reason, the study by Robertsen et al should be indexed so as to raise further awareness of the danger of not paying attention to the proper curation of next gen sequence data generation and analysis.\nI do however have some major and minor questions / corrections which need to be addressed before this study can be approved:\nMajor:\nThe authors provide absolutely no information on how the samples were collected and how the DNA was prepared.
Given that the authors, in their gap analysis, themselves recognise that this kind of detail should be reported in every metagenomic study, it is ironic that they do not do so even if the purpose of the manuscript was not to describe the content of these metagenomes. Furthermore, the Gap analysis should also address the biases that the mDNA extraction and sequencing technology introduce, which are also well documented.\n\nPlease confirm whether the figures presented for the \"aligned to reference\" in Table 1 represent the combined results when using the MarRef and the de novo generated database? It is confusing because in the methods section it is stated that the sediment samples were analysed using the MarRef whereas the others were done using the MetaQUAST-generated database; however, in the results section the marine samples are reported to have been analysed using both databases. Also, what were the cut-off values used for the reference alignments, and were parameters modified to try and increase the %assemblies, or do these represent the best possible outcome?\n\nIrrespective of the answer in the above, the % that could be referenced was very low (the highest was 0.262%), representing a minute proportion of the sequence data generated. If I understood correctly, the #misassemblies refers to only the % of sequences that could be referenced (i.e. a minute proportion of the sequence data), and thus I do not think this is a very informative factor with which to compare the different assembly procedures. i.e. if misassembly is only judged on between 0.001% and 0.284% of the dataset this might not be an accurate reflection of assembly issues, as suggested by the authors.\n\nThe choice of assembler and assembly parameters could have a huge impact on the outcome on the META-pipe pipeline. Have the authors, and pipeline administrators, satisfied themselves that MIRA is the best assembler (SPADES, IDBA-UD, collaboration with CLC Genomics?).
The fact that when you had longer contigs (faecal dataset), the number of misassemblies increased significantly, points to assembly issues. See a very recent study by Hesse et al 2017 (with relevant references within) which specifically addresses these issues.\n\nThe authors use a non-validated dataset to determine the best pipeline (or at least differences). It would’ve been preferable to use a curated dataset of known composition, to know exactly what the ideal outcome should’ve been. A good example of this is the taxonomic assignments of the two pipelines. The authors finish paragraph 7 on page 6 with “more thorough benchmarking of the different methods and databases are needed to determine the sensitivity, specificity and accuracy”. Had they used a curated dataset, they would know which pipeline gives a more accurate picture of the taxonomic composition of the uploaded dataset. The study suffers the same issue when looking at functional classification.\n\nI agree that dedicated effort and resources need to be established to ensure increased population of the sequence databases with marine derived genome data. Please can the authors clarify whether they are proposing there to be dedicated marine databases - if this is the case I do not support this notion. From an ecological perspective it would be interesting to make connections with terrestrial systems, if they exist. Only a comprehensive database could help you establish these links. If purely for bioprospecting, perhaps this is less of an issue. Or did the authors simply mean that increased research effort / focus is needed?\n\nIn their gap analysis the authors propose the need for evaluating analysis tools and defining gold standard tools and databases. Standardization of metagenomic data generation is a great idea, similar to the MIQE guidelines for real-time PCR, but very difficult to implement in practice. The pipelines should always allow some flexibility to accommodate datasets outside the “ideal”.
Can they propose who should take on this responsibility, or how to coordinate such a task?\n\nNo comparison was made to MG-RAST (278,783 metagenomes), one of the most widely used metagenomic analysis pipelines. If a “gold standard” pipeline is to be created, surely it should also be done in consultation with all large groups involved with such analyses.\n\nIf I understand Figure 5 correctly, the more parallel lines we have the more similar the predictions of the two pipelines? Can the authors provide some quantitative value to summarize the information given in the Figure? Otherwise the display of the results of this analysis seems a bit arbitrary as my only other two references are a figure with all parallel lines (1), and one where there are no parallel lines (0). I acknowledge this may not be easily doable.\n\nMinor:\nPg4, line 13: “OUT” change to “OTU”\n\nPg5, end of the first paragraph: \"and results was visualized\" should be changed to \"and results were visualized\".\n\nPg5, last paragraph:  Figure 4 does not really add value. What it displays is fully expected and will only change for the META-pipe pipeline depending on the environment being sequenced as well as “depending on the complexity of the dataset, sequencing technology and assembly quality”.\n\nPg8, 3rd paragraph: \"the longest gene is 1996 amino acids\". Although it is understood what the authors mean, it is incorrect terminology and this needs to be corrected in this paragraph and also in the Figure 5 legend. 
A gene is denoted in base pairs, and a protein in amino acids.\n\nPg9, paragraph 2: \"analysis has to be performed understand\" needs to be changed to \"analysis has to be performed to understand\".\n\nPg10, 3rd paragraph: \"difficult to assess, particular for new researchers\" should be changed to \"difficult to assess, particularly for new researchers\".", "responses": [] }, { "id": "21907", "date": "18 Apr 2017", "name": "Takashi Gojobori", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nTitle and Abstract The title can be more precise in reflecting the contents. For example, “ELIXIR pilot action: Comparisons of two representative pipelines of metagenomics between EMG and META-pipe”.\nIn Abstract, the outcome of the two pipeline comparisons may be mentioned.\nArticle Content The design, methods and analysis of the results from the study have been explained well and they are appropriate for the topic being studied. However, I have the following comments:\nIn the studies of metagenomics, there are essentially two ways of picking up genomic fragments from the DNA samples: (a) Amplicon-oriented approach and (b) Random shotgun approach. For approach (a), rRNAs can be targets for sequencing whereas for approach (b) any genomic fragments can be targets.
Naturally, approach (a) can be used for phylogenetic identification only, while approach (b) can be used for not only phylogenetic identification but also functional analysis.\nIf the authors explain these two approaches clearly in the text, the whole context will be easier to understand for readers who do not have expertise in metagenomics.\n\nMarine metagenomics is one branch of metagenomic studies. The two pipelines explained in the present paper are basically for metagenomics in general, but not specialized for “marine” metagenomics. The authors may therefore be requested to clarify which points in the pipelines or in the methodologies are key differences between metagenomics in general and “marine” metagenomics.\n\nThe main part of the present paper is on differences between the two pipelines. In particular, whether the genomic fragment assembly is conducted or not appears to lead to huge differences in the outcome between the two pipelines. This is a very important notion. According to our experiences of marine metagenomics, we have already recognized that fragment assembly produces more identifiable OTUs and functions in the annotation process. Therefore, we usually conduct the fragment assembly.\nHowever, the authors did not mention anything about it in Conclusion (Also see below).\nConclusion In Conclusion, the authors have stated that while there are differences in the respective approaches, EMG and META-pipe provide comparable results. I do not think so.
As mentioned above, the authors should state the main points of difference between the two pipelines, because they are not really “comparable”.\nData I think that enough information has been provided to be able to replicate the experiment. I also think that the data are in a usable format/structure and all the data have been provided.", "responses": [] }, { "id": "19596", "date": "31 May 2017", "name": "Anders Blomberg", "expertise": [ "Functional genomics", "genomics", "databases", "yeast genetics", "yeast phenomics", "marine biology", "osmoregulation" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nMetagenomics has a great potential to influence our understanding of the complex ecology of biotopes, including marine waters. Despite the impressive speed of generating sequence data, the analysis pipelines are not as well developed and standardized. This article describes a comparison between two analysis pipelines and how they perform on different types of sequence data. The main methodological difference between the two pipelines tested is whether the analysis is done on the read-level (EMG) or at the contig-level (META-pipe). This will of course have a major influence on the results obtained, which is in essence what this study aims to outline. The manuscript also has a link to a nice webinar that explains parts of the background, technical details, challenges and some of the results.\nMajor critique/comments:\nWhy a specific marine metagenomics pipeline?
Why could not this service be generic - independent of where the organisms live (marine, soil, stomach, flowers, etc....).\nThis issue is addressed in the webinar, but not in the paper, e.g. marine samples/sequences are taxonomically complex and with really high genetic/sequence diversity. There might be more reasons. These reasons for a specific marine pipeline should be outlined in 1-2 sentences in the paper.\n\nWhy picking unpublished data for the test? Anything specifically general with this data? Or could there be very specific biases and technical problems with this data? This should be outlined and described. They should also consider using some already published data for their comparison.\n\nIn addition, the data analysed is based on comparably long-read Illumina reads - 250nt and 300nt. Plenty of metagenomics data has been, and will be, collected using more standard length reads (≈ 125nt). Please discuss, e.g. in the conclusion part, to what extent this selection of example data could have had an impact on the obtained results.\n\nIn the Conclusion section they state: \" While there are differences in the respective approaches, EMG and META-pipe provide comparable results. \"\nBut do they really show similar results?
There appear to exist huge differences between the two programs that are also highlighted earlier in the text:\np.6, rc, \" While META-pipe was able to predict 6584 16S rRNA sequences, EMG predicted 4339 in the “Muddy” dataset (Figure 2).\"\np.6, rc, \" In the “Muddy” dataset, EMG classified 2500 sequences (58%), while META-pipe was able to classify 6119 (93%).\"\np.8, lc, \" EMG predict 11 572 617 CDSs (from 12 103 194 merged reads), while META-pipe predicts a total of 47 434 CDSs (from 25 581 assembled contigs > 500 bp), which accounts for 0.4% compared to EMG.\np.8, rc, \" EMG provided a total of 28 942 422 accumulated GO-slim annotations for the predicted CDSs, while META-pipe only provided 565 125 accumulated annotations, which accounts for 0.2% compared to EMG.\"\nI think their statement about \"comparable results\" should be modified and differences also highlighted in the Conclusion.\n\nFinally, should one recommend that both pipelines are used in analyses before publication, and that results are being reported? And if so, what about other pipelines? How do they see that this challenge (which is a great problem in the comparison of results between studies using different analysis pipelines) should be handled in the future?\n\nMinor comments:\nAbstract In the last sentence of the abstract it says: \"In this paper, we summarize some of the results from the ELIXIR pilot action “Marine metagenomics – towards user centric services”. Shouldn't this be the same as in the title?\nPage 3, left column (lc), line 7 I am not sure I see why replication would be hard given the information in publications - databases are not per se a guarantee for higher transparency in information handling. Even if the access to the data might be easier. The statement should be modified. Or do they mean \"results\" and not \"analyses\" are hard to replicate?\np.3, lc, l.15 Please provide a short overview of the types of metagenomics pipeline that are available at this stage.
Please explain to the reader why EMG and META-pipe were selected for comparison? Anything that make this comparison particularly valid?\np.4, lc, l.3 preform - > perform\np.4, rc, l.18 Give arguments for why Kmer = 31 was selected. Are there reasons to believe the results would have been different if another Kmer had been used?\np.4, right column (rc), l.4 from bottom Please explain \"biomes\".\np.6, lc, l.33 They state that META-pipe is reluctant to classify on species level. Can one explain to the reader why that is?\np.6, lc, l.35 Be more specific - how was \"better\" defined?\nTable 2 Just to be sure - do they mean prokaryotic + eukaryotic?\nTable 2 Why can't EMG do eukaryotes?\nFigure 4 What do we really learn from this figure?\nFigure 5 How have the annotations been sorted?\np.9, lc, l.3 to - > the\np.10, lc, l.10 from bottom They talk about gold standard tools. But these might differ dependent on the data - technical problems; community complexity; lengths of reads; .... Can they be a bit more specific for how they see that this \"golden standard\" can be reached?\n\nIs the rationale for developing the new method (or application) clearly explained? Partly\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? No", "responses": [] } ]
1
https://f1000research.com/articles/6-70
https://f1000research.com/articles/6-69/v1
23 Jan 17
{ "type": "Data Note", "title": "In silico gene expression profiling in Cannabis sativa", "authors": [ "Luca Massimino" ], "abstract": "The cannabis plant and its active ingredients (i.e., cannabinoids and terpenoids) have been socially stigmatized for half a century. Luckily, with more than 430,000 published scientific papers and about 600 ongoing and completed clinical trials, nowadays cannabis is employed for the treatment of many different medical conditions. Nevertheless, even if a large amount of high-throughput functional genomic data exists, most researchers feature a strong background in molecular biology but lack advanced bioinformatics skills. In this work, publicly available gene expression datasets have been analyzed giving rise to a total of 40,224 gene expression profiles taken from cannabis plant tissue at different developmental stages. The resource presented here will provide researchers with a starting point for future investigations with Cannabis sativa.", "keywords": [ "Cannabis sativa", "gene expression", "cannabinoid pathway" ], "content": "Introduction\n\nThe cannabis plant has been used for medical purposes for centuries, before being socially stigmatized for the last half century1. Nevertheless, more than 430,000 published scientific papers exist, with about 25,600 works published in 2016 (https://scholar.google.com/). In addition, there are about 600 ongoing and completed clinical trials involving cannabis (https://www.clinicaltrials.gov/).\n\nThe endocannabinoid system is involved in virtually every biological function2, so it is not surprising that cannabis is being used to treat neurological3, psychiatric4, immunological5, cardiovascular6, gastrointestinal7, and oncological8 conditions.\n\nToday, a large amount of high-throughput functional genomic data exists. 
Nonetheless, even in the era of ‘omics, the great majority of researchers feature a strong background in molecular biology but lack advanced bioinformatics skills9.\n\nIn the present work, publicly available gene expression data taken from cannabis plant tissue at different developmental stages (shoot, root, stem, young and mature leaf, early-, mid- and mature-stage flower) have been analyzed, giving rise to 40,224 gene expression profiles. Moreover, the expression patterns of 23 cannabinoid pathway-related genes are described. The data note provided here will aid future studies by providing researchers with a powerful resource.\n\n\nMaterial and methods\n\nGene expression datasets were downloaded from the NCBI SRA directory10 (https://www.ncbi.nlm.nih.gov/sra/) with accession numbers SRP006678 and SRP008673. Raw sequences were mapped to the canSat3 reference genome11 with TopHat2 v2.1.012. Gene counts and relative transcript levels were obtained with Cufflinks v2.2.1.013, and submitted to NCBI GEO (https://www.ncbi.nlm.nih.gov/geo/) with accession number GSE93201. Cannabinoid-related genes were found within the canSat3 transcripts with the Cannabis genome browser BLAT web tool11 (http://genome.ccbr.utoronto.ca/cgi-bin/hgBlat?command=start). Gene expression heatmaps and unsupervised hierarchical clustering were carried out with GENE-E14.\n\n\nResults\n\nThe Cannabis sativa reference genome and transcriptome have been published, although data analysis is still at the preliminary stages11. In other words, we know what the presumptive genes are, but we do not know the chromosomes they are located in, nor their molecular functions. Given that this high-throughput gene expression data is publicly available, expression analysis of these as-yet-unidentified genes can be performed. To this end, public repositories have been surveyed for transcriptional profiling datasets derived from Cannabis sativa. 
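The relative transcript levels computed by Cufflinks in the pipeline above, and displayed on a log2 scale in the heatmaps, are RPKM values (reads per kilobase of transcript per million mapped reads). A minimal sketch of that normalisation; the gene names, lengths, and counts below are invented for illustration:

```python
import math

def rpkm(counts, gene_lengths_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    per_million = total_mapped_reads / 1e6
    return {gene: c / ((gene_lengths_bp[gene] / 1000.0) * per_million)
            for gene, c in counts.items()}

# Hypothetical toy data: 10 million mapped reads in total.
counts = {"geneA": 500, "geneB": 100}        # raw read counts
lengths = {"geneA": 2000, "geneB": 1000}     # transcript lengths (bp)
vals = rpkm(counts, lengths, 10_000_000)
print(vals["geneA"], math.log2(vals["geneA"]))  # 500 / (2 kb * 10 M) = 25.0, and its log2
```

Real pipelines also account for effective transcript length and multi-mapping reads; this sketch only shows the core formula behind the reported values.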
In total, 31 RNA-seq datasets derived from one hemp and two different psychoactive strains (NCBI SRA accession numbers: SRP006678 and SRP008673) of Cannabis sativa shoot, root, stem, young and mature leaf, early-, mid- and mature-stage flower have been analyzed. Unsupervised hierarchical clustering of gene expression values revealed six clusters of genes with specific tissue/stage expression (Figure 1). Cluster 1 genes display high expression levels in shoots, mature leaves, and flowers; cluster 2 genes in leaves and flowers; cluster 3 genes in roots and stems; cluster 4 genes in roots, stems, and flowers; cluster 5 genes in hemp flowers and cluster 6 genes in shoots, roots, stems, and flowers.\n\nHeatmap showing relative expression values (log2 RPKM) of the highest expressed genes. Six gene clusters were defined in accordance with the unsupervised hierarchical clustering.\n\nGenes involved in the biosynthesis of cannabinoids and their precursors have been shown to be overexpressed in flowers15. To validate gene expression profiling, cannabinoid, hexanoate, 2-C-methyl-D-erythritol 4-phosphate (MEP) and geranyl diphosphate (GPP) pathway genes11,16, together with the olivetol synthase (OLS) gene17,18, the (-)-limonene terpene synthase (TPS) gene19 and the polyketide synthase (PKS) gene20, have been analyzed. As expected, most of these genes were overexpressed in flowers, although many of the genes also displayed high expression in other tissues (Figure 2; Supplementary table 1). Interestingly, virtually all of them were highly expressed in the shoot.\n\nHeatmap showing relative expression values (log2 RPKM) of genes belonging to cannabinoid and precursor (hexanoate, GPP, MEP, olivetolic acid) pathways, together with terpene synthase (TPS) and polyketide synthase (PKS).\n\n\nDiscussion\n\nToday, cannabis and its derivatives are successfully employed for treatment of a large number of different pathological conditions3,5–8. 
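The unsupervised hierarchical clustering used above to define the six gene clusters can be illustrated with a toy single-linkage agglomerative procedure; the actual analysis was run in GENE-E, and the two-condition expression profiles below are invented:

```python
# Naive single-linkage agglomerative clustering -- a toy stand-in for the
# unsupervised hierarchical clustering used to define the gene clusters.
# Gene names and (root, flower) profiles below are invented for illustration.

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster(profiles, k):
    """Repeatedly merge the two closest clusters (single linkage) until k remain."""
    clusters = [[name] for name in profiles]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclid(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

# Two root-biased and two flower-biased genes separate cleanly into two clusters.
profiles = {"g1": (10, 0), "g2": (9, 1), "g3": (0, 10), "g4": (1, 9)}
print(cluster(profiles, 2))  # [['g1', 'g2'], ['g3', 'g4']]
```

Cutting the merge tree at k = 6 instead of k = 2 would correspond to the six tissue/stage clusters shown in Figure 1.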
Each year, more articles related to cannabis are published, with about 25,600 studies appearing in 2016 alone (https://scholar.google.com/). Remarkably, only 3% of these papers (13,300 out of 432,000) also take genomics into consideration, with very few of them directly relating to the genomics of cannabis. This could be due to the fact that, for obvious reasons, most researchers still lack advanced bioinformatics skills and are therefore limited in their research9.\n\nTo this end, a total of 40,224 gene expression profiles taken from cannabis plant tissue at different developmental stages were obtained by exploiting common bioinformatics pipelines13. Moreover, expression profiles of the genes belonging to the cannabinoid pathway11,16–20 are provided.\n\nEven though these data are preliminary, some observations can already be made. For instance, virtually all genes found to be highly expressed in flowers (Figure 1, cluster 1 and Figure 2) also displayed high expression in the shoot. Because only one sample was available at this specific developmental stage, these results could stem from technical issues rather than from genuine differences in gene expression. However, not all transcripts (57%) were found to be overexpressed in the shoot, thus pointing toward the possible specificity of these changes. If this is confirmed, it may allow researchers to study the molecular function of flower-specific genes directly in sprouting plants, without having to wait for the plant to fully bloom.\n\nCannabis sativa is a versatile plant - it is being used for medical as well as for industrial purposes21,22. For this reason, cutting-edge genomics technology is currently being applied either to ameliorate specific phenotypes, or for breeding purposes22–27. Cluster 5 genes (Figure 1) seem of great interest in this regard, as they are visibly overexpressed specifically in non-psychoactive cannabis flowers. 
These genes could be downregulated in hemp in order to create new strains high in cannabidiol (CBD), but with the proper entourage effect commonly found in their psychoactive counterparts28. On the other hand, hemp-specific genes could be upregulated in marijuana to produce crops with a high fiber/oil content harboring therapeutically valuable active principles within their flowers. One potential candidate is the Csfad2a gene, which was recently found to be highly expressed only in some hemp strains. Here, high Csfad2a expression was correlated with both higher oil content and lower oxidation tendency, eventually leading to the production of a significantly better commercial product26.\n\nPerhaps the major pitfall of this kind of analysis comes from the fact that although the current cannabis reference genome and transcriptome have been published, data analysis is still at the preliminary stages11. As in other plants, the cannabis genome is highly redundant and difficult to resolve29. It is very likely that, owing to false negatives, important transcripts are still missing. Nevertheless, these 40,224 gene expression profiles will provide researchers with a valuable resource and important genomic insights for future investigations with Cannabis sativa.\n\n\nData availability\n\nRaw expression data can be found in the NCBI SRA directory (https://www.ncbi.nlm.nih.gov/sra/) with accession numbers SRP006678 and SRP008673.\n\nProcessed data can be found in the NCBI GEO repository (https://www.ncbi.nlm.nih.gov/geo/) with accession number GSE93201.", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary table 1. Cannabinoid metabolism related gene profiling in different tissues and developmental stages. Gene expression matrix of cannabinoid pathway genes. 
Expression values are expressed in RPKM.\n\nClick here to access the data.\n\n\nReferences\n\nPain S: A potted history. Nature. 2015; 525(7570): S10–S11. PubMed Abstract | Publisher Full Text\n\nDi Marzo V, Bifulco M, De Petrocellis L: The endocannabinoid system and its therapeutic exploitation. Nat Rev Drug Discov. 2004; 3(9): 771–84. PubMed Abstract | Publisher Full Text\n\nHosking R, Zajicek J: Pharmacology: Cannabis in neurology--a potted review. Nat Rev Neurol. 2014; 10(8): 429–30. PubMed Abstract | Publisher Full Text\n\nCurran HV, Freeman TP, Mokrysz C, et al.: Keep off the grass? Cannabis, cognition and addiction. Nat Rev Neurosci. 2016; 17(5): 293–306. PubMed Abstract | Publisher Full Text\n\nKlein TW: Cannabinoid-based drugs as anti-inflammatory therapeutics. Nat Rev Immunol. 2005; 5(5): 400–11. PubMed Abstract | Publisher Full Text\n\nDi Marzo V, Després JP: CB1 antagonists for obesity--what lessons have we learned from rimonabant? Nat Rev Endocrinol. 2009; 5(11): 633–8. PubMed Abstract | Publisher Full Text\n\nGerich ME, Isfort RW, Brimhall B, et al.: Medical marijuana for digestive disorders: high time to prescribe? Am J Gastroenterol. 2015; 110(2): 208–14. PubMed Abstract | Publisher Full Text\n\nSwami M: Cannabis and cancer link. Nature Reviews Cancer. 2009; 9:148. Publisher Full Text\n\nChang J: Core services: Reward bioinformaticians. Nature. 2015; 520(7546): 151–152. PubMed Abstract | Publisher Full Text\n\nBarrett T, Clark K, Gevorgyan R, et al.: BioProject and BioSample databases at NCBI: Facilitating capture and organization of metadata. Nucleic Acids Res. 2012; 40(Database issue): D57–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Bakel H, Stout JM, Cote AG, et al.: The draft genome and transcriptome of Cannabis sativa. Genome Biol. 2011; 12(10): R102. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim D, Pertea G, Trapnell C, et al.: TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biol. 2013; 14(4): R36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrapnell C, Roberts A, Goff L, et al.: Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nat Protoc. 2012; 7(3): 562–78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGENE-E. Cambridge (MA): The Broad Institute of MIT and Harvard. Reference Source\n\nSirikantaramas S, Taura F, Tanaka Y, et al.: Tetrahydrocannabinolic acid synthase, the enzyme controlling marijuana psychoactivity, is secreted into the storage cavity of the glandular trichomes. Plant Cell Physiol. 2005; 46(9): 1578–82. PubMed Abstract | Publisher Full Text\n\nStout JM, Boubakir Z, Ambrose SJ, et al.: The hexanoyl-CoA precursor for cannabinoid biosynthesis is formed by an acyl-activating enzyme in Cannabis sativa trichomes. Plant J. 2012; 71(3): 353–365. PubMed Abstract | Publisher Full Text\n\nTaura F, Tanaka S, Taguchi C, et al.: Characterization of olivetol synthase, a polyketide synthase putatively involved in cannabinoid biosynthetic pathway. FEBS Lett. 2009; 583(12): 2061–2066. PubMed Abstract | Publisher Full Text\n\nGagne SJ, Stout JM, Liu E, et al.: Identification of olivetolic acid cyclase from Cannabis sativa reveals a unique catalytic route to plant polyketides. Proc Natl Acad Sci U S A. 2012; 109(31): 12811–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGünnewich N, Page JE, Köllner TG, et al.: Functional expression and characterization of trichome- specific (-)-limonene synthase and (+)-α-pinene synthase from Cannabis sativa. Nat Prod Commun. 2007; 2(3): 223–232. Reference Source\n\nFlores-Sanchez IJ, Linthorst HJ, Verpoorte R: In silicio expression analysis of PKS genes isolated from Cannabis sativa L. Genet Mol Biol. 
2010; 33(4): 703–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Meijer EP, Hammond KM, Sutton A: The inheritance of chemical phenotype in Cannabis sativa L. (IV): cannabinoid-free plants. Euphytica. 2009; 168(1): 95–112. Publisher Full Text\n\nSalentijn EM, Zhang Q, Amaducci S, et al.: New developments in fiber hemp (Cannabis sativa L.) breeding. Ind Crops Prod. 2015; 68: 32–41. Publisher Full Text\n\nMandolino G, Carboni A: Potential of marker-assisted selection in hemp genetic improvement. Euphytica. 2004; 140(1): 107–120. Publisher Full Text\n\nvan den Broeck HC, Maliepaard C, Ebskamp MJM, et al.: Differential expression of genes involved in C1 metabolism and lignin biosynthesis in wooden core and bast tissues of fibre hemp (Cannabis sativa L.). Plant Sci. 2008; 174(2): 205–220. Publisher Full Text\n\nGuerriero G, Sergeant K, Hausman JF: Integrated -omics: A powerful approach to understanding the heterogeneous lignification of fibre crops. Int J Mol Sci. 2013; 14(6): 10958–10978. PubMed Abstract | Publisher Full Text\n\nBielecka M, Kaminski F, Adams I, et al.: Targeted mutation of Δ12 and Δ15 desaturase genes in hemp produce major alterations in seed fatty acid composition including a high oleic hemp oil. Plant Biotechnol J. 2014; 12(5): 613–23. PubMed Abstract | Publisher Full Text\n\nMassimino L: Cannabis growing meets genomics. F1000Research. 2017; 6: 15.\n\nRusso EB: Taming THC: potential cannabis synergy and phytocannabinoid-terpenoid entourage effects. Br J Pharmacol. 2011; 163(7): 1344–1364. PubMed Abstract | Publisher Full Text\n\nTürktaş M, Kurtoğlu KY, Dorado G, et al.: Sequencing of plant genomes - A review. Turkish J Agric For. 2015; 39: 361–376. Publisher Full Text" }
[ { "id": "20299", "date": "20 Feb 2017", "name": "Gea Guerriero", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe present study suits the Data Note format and can be a useful resource for future studies centered on Cannabis sativa. I find the bioinformatics approach sound. I have one suggestion for the author. How about enriching Figure 1 with a representation of GO/pathway enrichment analysis for each cluster (for example with ClueGO in Cytoscape; Bindea et al., 20091)?\nThe interest around this multi-purpose crop is increasing and recently transcriptomics data have been published for a fiber variety too (see for example Behr et al., 20162). 
The approach described in this note can also be applied to other varieties in the future and/or to other tissue types.", "responses": [] }, { "id": "20301", "date": "02 May 2017", "name": "Sergio Esposito", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe bioinformatic approach presented in this study is potentially highly interesting, providing scientists with a useful data set to investigate the different pathways activated in Cannabis sativa at different developmental stages, and in different organs. I would only ask the Author whether some of the gene sets identified could be further divided into sub-clusters: e.g. there is a “green” area in Fig.1 – cluster 4 in leaves and flower buds (roughly in the second quarter from the top). In contrast, this area is \"red\" in cluster 5 – first quarter. Could these results be interpreted in the light of the considerations presented in the discussion?\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-69
https://f1000research.com/articles/6-58/v1
20 Jan 17
{ "type": "Software Tool Article", "title": "The PathLinker app: Connect the dots in protein interaction networks", "authors": [ "Daniel P. Gil", "Jeffrey N. Law", "T. M. Murali" ], "abstract": "PathLinker is a graph-theoretic algorithm for reconstructing the interactions in a signaling pathway of interest. It efficiently computes multiple short paths within a background protein interaction network from the receptors to transcription factors (TFs) in a pathway. We originally developed PathLinker to complement manual curation of signaling pathways, which is slow and painstaking. The method can be used in general to connect any set of sources to any set of targets in an interaction network. The app presented here makes the PathLinker functionality available to Cytoscape users. We present an example where we used PathLinker to compute and analyze the network of interactions connecting proteins that are perturbed by the drug lovastatin.", "keywords": [ "signaling pathways", "pathway reconstruction", "protein interaction networks", "PathLinker", "Cytoscape", "k-shortest paths" ], "content": "Introduction\n\nSignaling pathways are a cornerstone of systems biology. While several databases store high-quality representations of these pathways, they require time-consuming manual curation. PathLinker is an algorithm that automates the reconstruction of any human signaling pathway by connecting the receptors and transcription factors (TFs) in that pathway through a physical and regulatory interaction network1. In previous work, we have demonstrated that PathLinker achieved much higher recall (while maintaining reasonable precision) than several other methods1. Furthermore, it was the only method that could control the size of the reconstruction while ensuring that receptors were connected to TFs in the result. 
We have also experimentally validated PathLinker’s novel finding that CFTR, a transmembrane protein, facilitates the signaling from receptor tyrosine kinase Ryk to the phosphoprotein Dab2, which controls signaling to β-catenin in the Wnt pathway1. These encouraging results suggest that PathLinker may serve as a powerful approach for discovering the structure of poorly studied processes and prioritizing both proteins and interactions for experimental study.\n\nMore generally, PathLinker can be useful for connecting sources to targets in protein networks, a problem that has been the focus of many studies in the past2–8. Applications have included explaining high-throughput measurements of the effects of gene knockouts9,10, discovering genomic mutations that are responsible for changes in downstream gene expression11,12, studying crosstalk between different cellular processes13,14, and linking environmental stresses through receptors to transcriptional changes8.\n\nIn this paper, we describe a Cytoscape app that implements the PathLinker algorithm. We describe in detail a use case where we employ PathLinker to analyze the Environmental Protection Agency’s ToxCast data. Specifically, we compute and analyze the network of interactions connecting proteins that are perturbed in this dataset by lovastatin, a drug used to lower cholesterol. We conclude by comparing PathLinker to other path-based Cytoscape apps.\n\n\nMethods\n\nPathLinker requires three inputs (Figure 1): a (directed) network G, a set S of sources, and a set T of targets. Each element of S and T must be a node in G. Each edge in G may have a real-valued weight. The primary algorithmic component of PathLinker is the computation of the k best-scoring loopless paths in the network from any source in S to any target in T (Figure 1). By loopless, we mean that a path contains any node at most once. 
The definition of the score of a path depends on the interpretation of the edge weights, as described in “Operation.” PathLinker computes the k-highest scoring paths by integrating Yen’s algorithm15 with the A* heuristic, which allows very efficient computation for very large k values, e.g., 20,000, on networks with hundreds of thousands of edges1; see Table 2 below for statistics on the running time. PathLinker outputs the sub-network composed of the k best paths.\n\nIn this figure, PathLinker computes five paths from receptors (blue diamonds) to TFs (yellow squares) and ranks each node and edge by the index of the first path that contains it.\n\nThe column titled “# of Genes” displays the number of genes in the PathLinker network that are annotated to that GO term/pathway. The column titled “% Associated Genes” shows the percentage of genes annotated to that term/pathway that are in the PathLinker network.\n\nOne of the first steps in Yen’s algorithm is to compute the shortest path from T to S. Initially, we implemented this step by running Dijkstra’s algorithm after reversing G. Reversing the network using the Cytoscape API proved to be time costly. Therefore, we modified our implementation of Dijkstra’s algorithm to traverse edges from target to source. Yen’s algorithm periodically requires the temporary removal of edges from the network. However, it transpires that using the Cytoscape API to delete and add edges is inefficient. Therefore, we maintain a set of \"hidden edges,\" which our implementation of Yen’s algorithm ignores. When PathLinker completes, the app renders the computed network using the built-in hierarchical layout, if k ≤ 200. Since this layout renders the network upside down, i.e., with source nodes at the bottom and target nodes at the top, we reflected node coordinates around the x-axis before displaying the layout.\n\nWe have implemented PathLinker in Java 7. We have tested it with Cytoscape v3.2, 3.3, and 3.4. 
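The k best-scoring loopless paths that PathLinker reports can be made concrete with a brute-force enumeration; the app itself obtains the same ranking far more efficiently via Yen's algorithm with the A* heuristic. The toy graph, node names, and weights below are invented:

```python
# Brute-force illustration of the k best loopless (simple) source-to-target
# paths. This is only a definitional sketch: PathLinker uses Yen's algorithm
# with an A* heuristic to compute the same ranking on large networks.

def k_shortest_loopless(graph, sources, targets, k):
    """graph: {u: {v: weight}}; returns the k lowest-cost simple paths
    from any source to any target as (cost, path) pairs."""
    found = []

    def dfs(node, path, cost):
        if node in targets:
            found.append((cost, path))
        for nxt, w in graph.get(node, {}).items():
            if nxt not in path:          # loopless: each node at most once
                dfs(nxt, path + [nxt], cost + w)

    for s in sources:
        dfs(s, [s], 0.0)
    found.sort(key=lambda t: t[0])
    return found[:k]

# Toy network: receptor R, intermediates A and B, transcription factor T.
graph = {"R": {"A": 1.0, "B": 2.0}, "A": {"T": 1.0, "B": 0.5}, "B": {"T": 1.0}}
for cost, path in k_shortest_loopless(graph, {"R"}, {"T"}, 3):
    print(cost, "->".join(path))
```

The enumeration is exponential in the worst case, which is exactly why an efficient k-shortest-paths algorithm matters for networks with hundreds of thousands of edges.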
PathLinker requires a network to be already loaded in Cytoscape. To run PathLinker on the currently selected network, the user needs to fill in the inputs and press the “Submit” button. The input panel has three sections (Figure 2(a)):\n\n(a) The input panel for the app. (b) PathLinker lovastatin results (described in “Use Case”).\n\nSources/Targets: The names of the sources and the targets, separated by spaces. If there are sources or targets that are not nodes in the network, PathLinker will warn the user, identify the errant nodes, and ask the user for permission to continue with the remaining nodes. If none of the sources or none of the targets are in the network, PathLinker will exit. There are two options here:\n\nAllow sources and targets in paths: Normally, PathLinker removes incoming edges to sources and outgoing edges from targets before computing paths. If the user selects this option, PathLinker will not remove these edges. Therefore, source and target nodes can appear as intermediate nodes in paths computed by PathLinker.\n\nTargets are identical to sources: If the user selects this option, PathLinker will copy the sources to the targets field. This option allows the user to compute a subnetwork that connects a single set of nodes. In this case, PathLinker will allow sources and targets to appear in paths, i.e., it will behave as if the previous option is also selected. Note that since PathLinker computes loopless paths, if the user inputs only a single node and selects this option, PathLinker will not compute any paths at all.\n\nAlgorithm: There are two parameters here.\n\nk: the number of paths the user seeks. The default is k = 200. If the user inputs an invalid value (e.g., a negative number or a non-integer), PathLinker will use the default value.\n\nEdge penalty: This value is relevant only when the network has edge weights. 
In the case of additive edge weights, PathLinker will penalize each path by a factor equal to the product of the number of the edges in the path and the value of this parameter. In other words, each edge in the path will increase the cost of the path by the value of this parameter. When edge weights are multiplicative, PathLinker performs the same penalization but only after transforming the weights and the edge penalty to their logarithms. The default value is one for multiplicative weights and zero for the other two cases.\n\nEdge weights: There are three options for the edge weights to be used in the algorithm:\n\nNo weights: The score of a path is the number of edges in it. PathLinker computes the k paths of lowest score.\n\nEdge weights are additive: The score of a path is the sum of the weights of the edges in it. PathLinker computes the k paths of lowest score in this case as well.\n\nEdge weights are probabilities: This situation arises often with protein interactions networks, since such a weight indicates the experimental reliability of an edge. PathLinker treats the edge weights as multiplicative and computes the k highest cost paths, where the cost of a path is the product of the edge weights. Internally, PathLinker transforms each weight to the absolute value of its logarithm to map the problem to the additive case.\n\nOutput: The user can select a checkbox to generate a sub-network containing the nodes and edges in the top k paths. If k ≤ 200, PathLinker will display this sub-network using the built-in hierarchical layout (Figure 3). If k > 200, PathLinker will use the default layout algorithm.\n\nWe have mapped UniProt ids to gene names in this network.\n\nWhen it completes, PathLinker opens a table containing the k paths. Each line in the table displays the rank of each path, its score, and the nodes in the path itself. The user may analyze the network computed by PathLinker using other Cytoscape apps. 
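The probability-weight option described above rests on the fact that maximising a product of edge probabilities is equivalent to minimising a sum of |log w| terms, which maps the problem back to an additive shortest-path problem. A small sketch; the exact way the app combines the per-edge penalty with the log weights is our reading of the description, not its actual code:

```python
import math

def multiplicative_score(weights):
    """Score of a path when edge weights are probabilities: their product."""
    score = 1.0
    for w in weights:
        score *= w
    return score

def additive_cost(weights, edge_penalty=1.0):
    """Equivalent additive cost: |log w| per edge (plus a log-penalty per edge,
    following the description in the text; this formula is an assumption)."""
    return sum(abs(math.log(w)) + math.log(edge_penalty) for w in weights)

path_a = [0.9, 0.9]   # two reliable edges
path_b = [0.5]        # one unreliable edge
# Higher product  <=>  lower additive cost (with the default penalty of 1):
print(multiplicative_score(path_a) > multiplicative_score(path_b))  # True
print(additive_cost(path_a) < additive_cost(path_b))                # True
```

Because the log transform is monotonic, the ranking of paths is identical under both views, so a standard shortest-path routine can be reused unchanged.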
The next section describes a use case that further elaborates on these possibilities.\n\n\nUse Case: analysis of ToxCast data for lovastatin\n\nThe Environmental Protection Agency’s (EPA) Toxicity Forecaster (ToxCast) initiative and its extension Tox21, have screened over 9,000 chemicals (such as pesticides and pharmaceuticals) using high-throughput assays designed to test the response of many receptors, TFs, and enzymes in the presence of each chemical16,17. Here we show a use case on how to integrate PathLinker with the ToxCast data to examine possible signaling pathways by which the chemical lovastatin could affect a cell.\n\nInput datasets and pre-processing. We downloaded the “ToxCast & Tox21 Summary Files” data from the ToxCast website18. In these data, lovastatin perturbed three receptors (EGFR, KDR and TEK) and five TFs (MTF1, NFE2L2, POU2F1, SMAD1 and SREBF1). We used these proteins as the sources and targets, respectively, for PathLinker (Figure 2(a)). Rather than use the default Cytoscape human network, we used the interactome used in the original PathLinker paper1, which contained 12,046 nodes and 152,094 directed edges (http://bioinformatics.cs.vt.edu/~murali/supplements/2016-sys-bio-applications-pathlinker). We preferred this network as we had used a popular Bayesian approach12 to estimate edge weights so as to favor signaling interactions.\n\nRunning PathLinker. We used k = 50, no edge penalty (i.e., a penalty of 1), and the option for edge weights that indicated that they are like probabilities (Figure 2(a)). The results appear in Figure 2(b) and Figure 3. Each row in Figure 2(b) describes a path: its index (from 1 to k = 50), the score of the path, and the nodes in the path, ordered from receptor to TF. Note that the score of the path is the product of the weights of the edges in it, due to the edge weight option we selected. Since PathLinker prefers high-scoring paths in this case, the paths appear in decreasing order of score. 
Figure 3 displays a hierarchical layout of the sub-network composed of the paths computed by PathLinker.\n\nFurther analysis. We mapped the node UniProt accession numbers to gene names using UniProt’s ID mapping tool (http://www.uniprot.org/uploadlists), imported the mapping results to the PathLinker network, and then changed the node labels using the Style tab. Finally, we applied a hierarchical layout to the (lovastatin) sub-network and spread apart overlapping nodes to make the paths easier to visualize (Figure 3). We noted that the target MTF1 did not appear in any of the top 50 paths.\n\nFunctional Enrichment. Since the result from PathLinker is a network in the current session of Cytoscape, it is amenable to analysis by other Cytoscape apps. As an example, we demonstrate how we applied the ClueGo app for functional enrichment19 to see if the lovastatin sub-network was enriched for any Gene Ontology (GO) terms or KEGG pathways. Table 1 displays the top 15 enriched terms/pathways. Most of the paths in the PathLinker result come from the EGFR source node, so it is not surprising that the ErbB signaling pathway is highly significant. We found considerable support in the literature for this pathway and other significant GO terms/pathways. Lovastatin has been shown to inhibit epidermal growth factor (EGF) and insulin-like growth factor 1 (IGF-1)20,21. Moreover, the PathLinker sub-network for lovastatin includes an interaction from EGFR to AKT1, which agrees with a study showing that lovastatin inhibits EGFR dimerization and results in the activation of AKT22. Lovastatin has also been shown to inhibit the T cell receptor pathway23, the Ras signaling pathway23, and the Fc receptor–mediated phagocytosis by macrophages24. Thus, the network computed by PathLinker for lovastatin promises to capture several possible mechanisms by which the chemical inhibits cellular pathways.\n\nRunning time. As we mentioned earlier in \"Implementation,\" PathLinker is very efficient. 
In Table 2, we show the running time for the PathLinker app for lovastatin and for a representative set of signaling pathways. Even for k = 10,000, the app completed in less than 2.5 minutes for all inputs. We executed PathLinker on the same network on which we performed the lovastatin analysis.\n\n\nComparison to related Cytoscape apps\n\nIn this section, we compare PathLinker to other Cytoscape apps that compute paths in networks. A difficulty we faced in understanding the functionality of some of these apps was that they did not precisely define their output in the documentation. Therefore, we had to resort to studying the source code for some of these apps in order to understand precisely the properties of the computed paths. We focus the comparison mainly on these properties and not on other features of the apps.\n\nPathExplorer. (http://apps.cytoscape.org/apps/pathexplorer) This app uses breadth-first search (BFS) to compute the shortest path from a single node (that the user can select) to every other node in the network. The app can also compute the shortest path from every node in the network to a single node. Since the app uses BFS, the shortest path property is guaranteed only for unweighted networks. If there are multiple shortest paths to a node, it appears that the app will select one.\n\nStrongestPath. (http://apps.cytoscape.org/apps/strongestpath) This app computes the “strongest” paths from a group of source nodes to a group of target nodes. The authors do not provide a definition of “strongest” paths. We describe our understanding of their algorithm now. Suppose the input network is G. Their software takes a real-valued threshold τ > 0 as input; the user can manipulate a slider to select this value. The app appears to operate as follows:\n\n1. Connect a super source s to each source in G. Connect each target to a super target t in G.\n\n2. Use Dijkstra’s algorithm to compute the shortest path in G from s to every node in G.\n\n3. 
Create a new network G′ with the same node set as G. For every edge (u, v) in G, add the reverse of that edge (v, u) to G′.\n\n4. Use Dijkstra’s algorithm to compute the shortest path in G′ from t to every node in G.\n\n5. For every node v in G, record d(v), the sum of the length of the shortest s-v path in G and the length of the shortest t-v path in G′. Compute the corresponding s-t path πv that goes through v.\n\n6. Sort all the nodes in G in increasing order of d(v).\n\n7. Let a be the smallest value of d(v).\n\n8. For every node v such that d(v) ≤ a + τ, output the path πv.\n\nIn other words, for every node v, the app computes the shortest path that starts at some source node, goes through v, and ends at some target node. The number of such paths returned depends on the value of the threshold τ selected by the user. This app can operate on weighted and directed networks. We believe that the algorithm will compute the shortest path from any source to any target correctly. However, when τ > 0, it is not possible to guarantee that the algorithm will compute all paths from a source to a target of length ≤ a + τ, since the method computes at most n distinct paths, where n is the number of nodes in the network.\n\nPesCa [25]. (http://apps.cytoscape.org/apps/pesca30) For a single node, this app computes the shortest path from that node to every other node in the network. If the user selects multiple nodes, PesCa computes the shortest path(s) between each pair of selected nodes. A useful feature is that if there are multiple shortest paths between a pair of nodes, the app computes all of them. This app focuses on shortest paths.\n\nPathLinker. Our algorithm is strikingly different in that it allows the user to compute as many (k) shortest paths from sources to targets as desired. For example, if k = 1, PathLinker will compute the shortest path from some source to some target using Dijkstra’s algorithm on a graph with a new super source and a super target. 
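The k = 1 case described above (Dijkstra's algorithm on a graph augmented with a super source and a super target) can be sketched as follows; the node names and weights are invented, and the super-node labels are our own:

```python
import heapq

# Sketch of the k = 1 case: wire a super source to every source and every
# target to a super target, then run Dijkstra once on the augmented graph.

def shortest_source_target_path(graph, sources, targets):
    """graph: {u: {v: weight}}; returns (cost, path) of the best s->t path."""
    g = {u: dict(vs) for u, vs in graph.items()}
    g["_S_"] = {s: 0.0 for s in sources}          # super source
    for t in targets:                             # super target
        g.setdefault(t, {})["_T_"] = 0.0
    dist, prev = {"_S_": 0.0}, {}
    heap = [(0.0, "_S_")]
    while heap:
        d, u = heapq.heappop(heap)
        if u == "_T_":
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale heap entry
        for v, w in g.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], "_T_"
    while node != "_S_":                          # walk back from super target
        path.append(node)
        node = prev[node]
    return dist["_T_"], [n for n in reversed(path) if n != "_T_"]

# Toy network with two receptors (s1, s2) and two TFs (t1, t2).
graph = {"s1": {"a": 1.0}, "a": {"t1": 1.0}, "s2": {"t2": 5.0}}
print(shortest_source_target_path(graph, ["s1", "s2"], ["t1", "t2"]))
```

Because the super edges have zero weight, the result is the single best path from any source to any target, which is exactly the first path Yen's algorithm produces.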
For larger values of k, Yen’s algorithm (used by PathLinker) uses a dynamic program to mathematically guarantee the following property: if πk−1 is the (k−1)st path and πk is the kth path, then there is no source-to-target path in the graph whose length is strictly between the lengths of πk−1 and πk. The other Cytoscape apps discussed here either cannot guarantee this property (e.g., StrongestPath) or do not compute less-than-optimal paths (e.g., PathExplorer and PesCa).\n\n\nSummary\n\nWe have described a new Cytoscape app that implements a mathematically rigorous, computationally efficient, and experimentally validated network connection algorithm called PathLinker. While we had originally developed PathLinker for reconstructing signaling pathways, the method is general enough to connect any set of sources to any set of targets in a weighted and directed network. As a specific example, we used PathLinker to compute the network of interactions connecting proteins perturbed by the drug lovastatin in the ToxCast dataset and showed how the literature supported PathLinker’s findings. The app may also be used to compute a sub-network connecting a single set of nodes. 
This app promises to be a useful addition to the suite of Cytoscape apps for analyzing networks.\n\n\nData and software availability\n\nSoftware available from: http://apps.cytoscape.org/apps/pathlinker\n\nLatest source code: https://github.com/Murali-group/PathLinker-Cytoscape\n\nArchived source code as at time of publication: 10.5281/zenodo.16516226\n\nLicense: GNU General Public License version 3\n\nThe original Python implementation is available at https://github.com/Murali-group/PathLinker for users who seek to integrate PathLinker directly into their own computational pipelines or want to apply PathLinker for large values of k.\n\nDatasets: We obtained the lovastatin data from the following three files in the INVITRODB_V2_SUMMARY.zip file that we downloaded18:\n\n• hitc_Matrix_151020.csv\n\n• Chemical_Summary_151020.csv\n\n• Assay_Summary_151020.csv", "appendix": "Author contributions\n\n\n\nDPG implemented and tested the Cytoscape app. JNL performed the lovastatin analysis. TMM proposed the development of the PathLinker app and the lovastatin analysis, and supervised DPG and JNL. All three authors wrote the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe National Institute of General Medical Sciences of the National Institutes of Health grant R01-GM095955 (TMM) and National Science Foundation (NSF) grant DBI-1062380 (TMM) supported this work.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nRitz A, Poirel CL, Tegge AN, et al.: Pathways on demand: Automated reconstruction of human signaling networks. NPJ Syst Biol Appl. 2016; 2: 16002. Publisher Full Text\n\nSteffen M, Petti A, Aach J, et al.: Automated modelling of signal transduction networks. BMC Bioinformatics. 2002; 3(1): 34. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nScott J, Ideker T, Karp RM, et al.: Efficient algorithms for detecting signaling pathways in protein interaction networks. J Comput Biol. 2006; 13(2): 133–144. PubMed Abstract | Publisher Full Text\n\nHuang SS, Fraenkel E: Integrating proteomic, transcriptional, and interactome data reveals hidden components of signaling and regulatory networks. Sci Signal. 2009; 2(81): ra40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBailly-Bechet M, Borgs C, Braunstein A, et al.: Finding undetected protein associations in cell signaling by belief propagation. Proc Natl Acad Sci U S A. 2011; 108(2): 882–887. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGitter A, Klein-Seetharaman J, Gupta A, et al.: Discovering pathways by orienting edges in protein interaction networks. Nucleic Acids Res. 2011; 39(4): e22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTuncbag N, Braunstein A, Pagnani A, et al.: Simultaneous reconstruction of multiple signaling pathways via the prize-collecting Steiner forest problem. J Comput Biol. 2013; 20(2): 124–136. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGitter A, Carmi M, Barkai N, et al.: Linking the signaling cascades and dynamic regulatory networks controlling stress responses. Genome Res. 2013; 23(2): 365–376. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOurfali O, Shlomi T, Ideker T, et al.: SPINE: a framework for signaling-regulatory pathway inference from cause-effect experiments. Bioinformatics. 2007; 23(13): i359–66. PubMed Abstract | Publisher Full Text\n\nShih YK, Parthasarathy S: A single source k-shortest paths algorithm to infer regulatory pathways in a gene network. Bioinformatics. 2012; 28(12): i49–i58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuthram S, Beyer A, Karp RM, et al.: eQED: an efficient method for interpreting eQTL associations using protein networks. Mol Syst Biol. 2008; 4: 162. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYeger-Lotem E, Riva L, Su LJ, et al.: Bridging high-throughput genetic and transcriptional data reveals cellular responses to alpha-synuclein toxicity. Nat Genet. 2009; 41(3): 316–323. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYosef N, Ungar L, Zalckvar E, et al.: Toward accurate reconstruction of functional protein networks. Mol Syst Biol. 2009; 5: 248. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYosef N, Zalckvar E, Rubinstein AD, et al.: ANAT: a tool for constructing and analyzing functional protein networks. Sci Signal. 2011; 4(196): pl1. PubMed Abstract | Publisher Full Text\n\nYen JY: Finding the k shortest loopless paths in a network. Manage Sci. 1971; 17(11): 712–716. Publisher Full Text\n\nJudson RS, Houck KA, Kavlock RJ, et al.: In vitro screening of environmental chemicals for targeted testing prioritization: the ToxCast project. Environ Health Perspect. 2010; 118(4): 485–492. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTice RR, Austin CP, Kavlock RJ, et al.: Improving the human hazard characterization of chemicals: a Tox21 update. Environ Health Perspect. 2013; 121(7): 756–765. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUSEPA: ToxCast & Tox21 Summary Files from invitrodb_v2.2015; Data released October 2015. Reference Source.\n\nBindea G, Mlecnik B, Hackl H, et al.: ClueGO: a Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009; 25(8): 1091–1093. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVincent TS, Wülfert E, Merler E: Inhibition of growth factor signaling pathways by lovastatin. Biochem Biophys Res Commun. 1991; 180(3): 1284–1289. PubMed Abstract | Publisher Full Text\n\nMcGuire TF, Xu XQ, Corey SJ, et al.: Lovastatin disrupts early events in insulin signaling: a potential mechanism of lovastatin’s anti-mitogenic activity. Biochem Biophys Res Commun. 
1994; 204(1): 399–406. PubMed Abstract | Publisher Full Text\n\nZhao TT, Le Francois BG, Goss G, et al.: Lovastatin inhibits EGFR dimerization and AKT activation in squamous cell carcinoma cells: potential regulation by targeting rho proteins. Oncogene. 2010; 29(33): 4682–4692. PubMed Abstract | Publisher Full Text\n\nGoldman F, Hohl RJ, Crabtree J, et al.: Lovastatin inhibits T-cell antigen receptor signaling independent of its effects on ras. Blood. 1996; 88(12): 4611–4619. PubMed Abstract\n\nLoike JD, Shabtai DY, Neuhut R, et al.: Statin inhibition of Fc receptor-mediated phagocytosis by macrophages is modulated by cell activation and cholesterol. Arterioscler Thromb Vasc Biol. 2004; 24(11): 2051–2056. PubMed Abstract | Publisher Full Text\n\nScardoni G, Tosadori G, Pratap S, et al.: Finding the shortest path with PesCa: a tool for network reconstruction [version 2; referees: 2 approved, 2 approved with reservations]. F1000Res. 2015; 4: 484. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGil D, Bezawada S, Murali TM, et al.: The PathLinker App for Cytoscape [Data set]. Zenodo. 2016. Data Source" }
[ { "id": "19605", "date": "01 Feb 2017", "name": "Barry Demchak", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes the PathLinker Cytoscape app, including the mathematical algorithms and a comparison to similarly focused Cytoscape apps. It is well written and addresses the important problem of deducing relationships that can advance biology.\nIt is very economical in its explanation of the app/algorithm, its uses and its relationship to other apps, and in several places needs more explanation. Explanations tend to weigh in favor of expert Cytoscape users, though this app would be of interest to less expert users, too, particularly those trying to relate PathLinker to biological investigation. The paper would benefit from better enabling the reader to follow a use case in Cytoscape using actual data and actual app settings.\nIn Methods | Operation, please explain how to acquire and run PathLinker.\n\nIn \"Allow sources and targets in paths\" and \"Targets are identical to sources\", please explain the biological implications of these settings ... it's difficult to jump from the graph implications to the biological implications.\n\nIn \"Algorithm\", why is the default chosen, and what are the biological ramifications of choosing a higher or lower k?\n\nThe output in Figure 2B seems to be a standalone window. How can the user capture the results? 
It's unclear how the user should be using this report in investigating relationships.\n\nIn \"Edge penalty\", please explain when an edge penalty would be used in a network and what its biological implication would be.\n\nIn \"Input datasets and pre-processing\", I attempted to download the ToxCast data and could not. The site requires a credential and does not give instructions regarding how to get the credential. Without this data, the user is hard-pressed to reproduce these results and then evolve his/her own questions. The web site apparently identifies this data as freely available. Can it be included as supplementary material (as a Cytoscape session file?) to assist the user in following this paper?\n\nIn \"Input datasets and pre-processing\", I tracked down the referenced original PathLinker paper. It took a while to determine which network was being used. I downloaded it and imported it into Cytoscape. During the import, there were a number of options available, and it was unclear which options should be chosen. Can this network be included as supplementary material (as a Cytoscape session file?) to assist the user in following this paper?\n\nIn \"Running PathLinker\", can you explain the biological ramifications behind the k=50 and edge penalty settings?\n\nIn \"Further Analysis\", can you explain which Cytoscape tool or feature you used to spread the nodes apart? I'm thinking of the biological user that's trying to follow the paper.\n\nIn \"Functional Enrichment\", can you specify which ClueGO settings you used? This is a very valuable step, and it's hard for the user to follow without giving settings.\n\nIn \"Running Time\", how many CPU cores and how much RAM were on the test machine?\n\nIn the \"Comparison to related Cytoscape Apps\", the discussion focuses on differences in graph analysis approaches, and assumes the reader can appreciate the reasons why PathLinker gives better results. 
The discussion could use a little more justification, and also some grounding in the biological consequences of these differences.\n\nIn the Introduction, the claim \"any human signaling pathway\" is overbroad. I suggest claiming \"human signaling pathways\".", "responses": [] }, { "id": "20283", "date": "13 Mar 2017", "name": "Tamás Korcsmáros", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe paper of Gil et al. describes a new Cytoscape App, PathLinker, which is the Cytoscape implementation of the previously published approach by the Murali group with the same name. It is always useful for the community to implement network-analysis algorithms in Cytoscape.\n\nThe paper and the abstract are well written and clear. The figures were well selected.\nIn order to facilitate the application of the PathLinker App, it would be useful to provide more tutorial-type comments and guidelines for new users. Given the important task PathLinker is meant to solve, many users would find it useful. Currently the Methods section contains the key steps but it does not read as a protocol or suggest alternatives for troubleshooting.\nThe current version of the paper does not contain the limitations of PathLinker. When should this App not be used, for which data types is it unsuitable, and in which cases should the user watch for bias or other problems?\nThe comparison with existing Apps focuses on the differences in the algorithms. 
As this is an App paper, it would be useful to include a comparison of the functional differences (features) between the Apps.\nIf possible, maybe for a new version, it would be nice if the App allowed the source and target node names to be entered via a node-selection function, instead of being typed (or pasted) into the requested fields.\nFinally, a small bug in the App: When the user selects the checkbox to generate a sub-network as an output, it does not generate a subnetwork within Cytoscape but a new network. The problem is that the attributes of the original network are lost. This should be easy to fix.\nI believe PathLinker will be a popular and often-used App for the biomedical and systems biology communities. I think the next step to increase its impact is to make its application as clear and as didactic as possible.", "responses": [] }, { "id": "20285", "date": "22 Mar 2017", "name": "Stefan Wuchty", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe manuscript 'The PathLinker app: Connect the dots in protein interaction networks' by Gil, Law and Murali introduces a Cytoscape app that allows the user to apply their PathLinker algorithm to find potential signaling pathways from a user-defined set of sources, targets and molecular interaction data. The underlying PathLinker algorithm was introduced in Ritz et al., NPJ Syst. Biol. Appl. 
2016, 2:16002, indicating that the current manuscript is an extension in that it provides a Cytoscape application. The manuscript provides a crash-course in using the PathLinker algorithm, allowing the reader to quickly get started determining signalling paths based on the user's data. As it stands, it seems likely to be a popular app that will be used frequently.\nWhile the manuscript gives enough information to get the user going, I would add a bit more information about the specifics of the underlying algorithm. It is based on Yen's algorithm but uses the A* algorithm instead of a shortest path algorithm. While many readers are probably familiar with the latter, the A* algorithm may need an introduction so that users do not operate a 'black box'. In particular, the A* algorithm makes at each step an assessment of the distance to a target to find an optimal path. In this regard, it would be beneficial to add more detail on how this assessment works and the ways in which A* was embedded in the framework of Yen's algorithm. Yen's algorithm also deserves more detail, as it is an algorithm that users rather rarely encounter; this would make the user fully aware of what she is doing. In particular, such considerations are important as the authors describe in the paper different weights on interactions that may be used in different ways to assess and find optimal paths.\nWith that said, a bit more technical information about the 'ingredients' of the algorithms used in the comparison (with regard to how weighting information is used) would be helpful too. Such details would allow the reader to see where the differences from (and the advantages of) the PathLinker algorithm and app lie.", "responses": [] } ]
1
https://f1000research.com/articles/6-58
https://f1000research.com/articles/5-2832/v1
08 Dec 16
{ "type": "Systematic Review", "title": "Candida antifungal drug resistance in sub-Saharan African populations: A systematic review", "authors": [ "Charlene Wilma Joyce Africa", "Pedro Miguel dos Santos Abrantes", "Pedro Miguel dos Santos Abrantes" ], "abstract": "Background: Candida infections are responsible for increased morbidity and mortality rates in at-risk patients, especially in developing countries where there is limited access to antifungal drugs and a high burden of HIV co-infection. Objectives: This study aimed to identify antifungal drug resistance patterns within the subcontinent of Africa. Methods: A literature search was conducted on published studies that employed antifungal susceptibility testing on clinical Candida isolates from sub-Saharan African countries using Pubmed and Google Scholar. Results: A total of 21 studies from 8 countries constituted this review. Only studies conducted in sub-Saharan Africa and employing antifungal drug susceptibility testing were included. Regional differences in Candida species prevalence and resistance patterns were identified. Discussion: The outcomes of this review highlight the need for a revision of antifungal therapy guidelines in regions most affected by Candida drug resistance.  Better controls in antimicrobial drug distribution and the implementation of regional antimicrobial susceptibility surveillance programmes are required in order to reduce the high Candida drug resistance levels seen to be emerging in sub-Saharan Africa.", "keywords": [ "Candida", "antifungal drug resistance", "Africa" ], "content": "Introduction\n\nCandida species are known to shift from commensal to opportunistic infectious agents when triggered by factors such as immunosuppression, continuous usage of antibiotics and poor nutrition, leading to increased patient morbidity and mortality1–3. In severely immunocompromised patients, Candida species can spread through the bloodstream and gastrointestinal tract. 
This can lead to systemic candidiasis, with reported mortality rates in developed countries of 38%4 and 44%5. Candida is currently the 4th most commonly isolated microorganism in nosocomial bloodstream infections6 and has been implicated in >78% of cancerous and precancerous oral lesions7.\n\nVarious antifungal drugs with different modes of action have been developed over the years. These include the polyene antifungals (e.g. nystatin and amphotericin B), which bind ergosterol in the fungal cell membrane, thereby causing cell membrane leakage; the imidazoles (e.g. miconazole, clotrimazole, econazole and ketoconazole), which interfere with the synthesis of ergosterol and other cell membrane sterols; the echinocandins (e.g. anidulafungin, micafungin and caspofungin), which inhibit β-1,3-glucan synthesis, affecting the fungal cell wall; and 5-flucytosine, which interferes with fungal RNA and DNA synthesis8. The triazoles (including fluconazole, posaconazole, voriconazole and itraconazole) interfere with the synthesis of ergosterol and have been shown to have fewer side effects than some of the other antifungal drug classes9.\n\nResistance to available antifungal therapies is widespread10,11, probably due to the widespread and repeated use of these drugs12. Different Candida species have varying resistance patterns, which appear to be geographically determined13,14. Early recognition of resistance therefore facilitates the selection of an appropriate antifungal drug, with the use of oral antifungals in oropharyngeal candidiasis reserved for cases where there is no response to topical antifungal treatment15. Surveillance of resistance patterns is imperative to avoid an even higher number of improperly treated, and therefore resistant, fungal infections16. This is a cause for concern in the case of immunocompromised patients, who are at a much higher risk of developing opportunistic complications. 
Importantly, sub-Saharan Africa is the region most affected by HIV, with approximately 25.8 million infected people in 2014 and accounting for almost 70% of the global number of new HIV infections (http://www.who.int/mediacentre/factsheets/fs360/en/).\n\nProgrammes on species prevalence and antifungal surveillance have been successfully developed and introduced in Europe, Asia-Pacific, Latin America and North America17–19. The gap in antifungal drug resistance surveillance in Africa has been documented20. Surveillance programmes are crucial tools in the transition away from empirical antifungal treatment, which often fails due to the diverse resistance levels seen in different regions and the presence of species that are intrinsically resistant to certain antifungal drugs. The absence of routine diagnostic laboratories in most African countries has meant that many African patients are treated without knowledge of which species they harbour and without any updated guideline data that could be used as a reference in prescribing antifungals. Possible causes for the lack of Candida surveillance programmes in Africa include lack of funding, the limited number of research collaborations and the existence of conflict areas within the continent. This prompted the need for a review of the current situation in Africa regarding the drug susceptibility profiles of Candida species available from different regions.\n\n\nMethods\n\nA literature search identified 21 published studies that employed antifungal susceptibility testing on clinical Candida isolates from 8 sub-Saharan African countries, with the aim of identifying antifungal drug resistance patterns within different regions of the subcontinent; together, these studies included resistance data for 14 antifungal drugs. 
Searches were performed on PubMed and Google Scholar between August and November 2016 using the keywords ‘Candida’, ‘Susceptibility Testing’, ‘Drug Resistance’ and ‘Antifungal’.\n\nData extracted from the individual studies included the regions within the different countries, patient health information, the methods used for antifungal susceptibility testing, the frequency of Candida species and their susceptibilities to antifungal drugs (Dataset 121).\n\nOnly studies conducted in sub-Saharan Africa between the years 1998 and 2016 employing antifungal drug susceptibility testing were included.\n\nStudies conducted in Africa which reported on the prevalence of Candida but did not describe antimicrobial susceptibility were excluded from this review.\n\n\nResults\n\nThe study populations included healthy22,23, HIV-positive22–35 and cancer patients23, as well as patients with genitourinary tract infections36–39, respiratory tract infections32,39, meningitis39 and candidemia40,41. Most studies relied on broth microdilution or disk diffusion for antimicrobial susceptibility testing, while one of the publications was a retrospective clinical study based on the patients’ response to antifungal therapy.\n\nThis review included seven studies from two regions in South Africa22–25,36,40,41, three studies from different regions in Ethiopia33–35, three studies from two regions in Cameroon25–27, three studies from different regions in Nigeria28,29,37, two studies from the same region in Ivory Coast30,38, one study from Tanzania31, one study from Kenya32 and one study from Ghana39. Due to the paucity of studies and differences in isolation and antifungal susceptibility testing, a meta-analysis could not be conducted.\n\nNon-albicans species, such as C. glabrata and C. krusei are reported to have innate resistance to antifungal drugs. C. krusei resistance has been reported from South Africa22,25, Cameroon25, Nigeria28,37, Ghana39, Tanzania31 and Ethiopia34. 
Although C. glabrata was initially thought to have innate resistance to azoles, resistance has been reported in Cameroon25, Ethiopia34 and Tanzania31, while susceptibility has been reported in South Africa22,25 and Nigeria28. This discrepancy may be explained by the phenotypic similarity between C. glabrata, C. nivariensis and C. bracarensis, which could possibly be confused in the absence of molecular typing methods and show different antifungal profiles42. Resistant C. glabrata has increased in patients presenting with candidiasis in recent years43, with increased mortality rates44, and echinocandins have been recommended for the treatment of invasive C. glabrata infections showing resistance to azoles. However, co-resistance to both echinocandins and azoles in clinical isolates of C. glabrata has been reported45, with two cases of echinocandin-resistant C. glabrata infections recently reported from South Africa36.\n\nA new multi-drug resistant species, C. auris, is rapidly spreading worldwide. First discovered in Japan46, this species has been found in nine other countries on four continents. The Centers for Disease Control and Prevention (CDC) has issued a warning urging increased awareness of C. auris in healthcare settings. This nosocomial pathogen is frequently misdiagnosed, shows resistance to different classes of routinely administered antifungals, and is associated with high mortality rates47. The isolation of this species in South Africa40 appears to be the only report in Africa at the time of writing this paper.\n\nRegional differences in Candida susceptibility profiles have been observed. In South Africa, earlier reports of baseline data demonstrated a high susceptibility (100%) of C. albicans to fluconazole, along with intrinsically resistant non-albicans species22,24, with more recent studies in South Africa showing an emerging resistance to azoles23,25,48. 
The reasons for this change in susceptibility patterns are not clear, but it is worth noting that the earlier studies were done before the 2002 introduction of fluconazole as prophylaxis to patients attending HIV-AIDS clinics in South Africa49.\n\nStudies from abroad have reported cross-resistance to fluconazole in patients receiving itraconazole prophylaxis50 and other previously administered azole therapies, such as ketoconazole and miconazole51,52. Similar cross-resistance was recently reported in South Africa where 37% of C. parapsilosis isolates were susceptible to fluconazole and voriconazole, and 44% of fluconazole-resistant isolates were voriconazole cross-resistant41.\n\nStudies from Bamenda25 and Douala26 in Cameroon showed high resistance of C. albicans isolates to azoles (>50% and 70% respectively), with low resistance reported from Mutengene27 and Bamenda25 for amphotericin B (4.9% and 4.3%) and 5-flucytosine (10.7% and 6.5%), respectively. The Douala study, on the other hand, reported increased C. albicans resistance to amphotericin B (52.6%) and 5-flucytosine (70%). A comparison of the Mutengene and Bamenda studies further revealed that C. dubliniensis and C. tropicalis susceptibilities differed between the two groups, with C. dubliniensis showing susceptibility to fluconazole and 100% resistance to amphotericin B in the Bamenda group, and increased resistance (66%) to fluconazole and no resistance to amphotericin B in the Mutengene group. C. tropicalis showed resistance to amphotericin B in the Bamenda group (50%) and only 4.3% resistance in the Mutengene group.\n\nC. albicans resistance to amphotericin B has also been reported in Kenya (25.6%)32 and Ghana (23.4%)39, while no resistance was seen in studies from Ivory Coast30,38 or Nigeria29. Intermediate resistance values observed for clotrimazole and amphotericin B in studies from South West Cameroon may indicate the need for administering higher doses to effectively treat these patients. 
This raises some concern, since both drugs are toxic at high concentrations and might have various side effects, such as the reduction of blood pressure caused by clotrimazole therapy53.\n\nThe application of topical antifungals, such as econazole and nystatin, is recommended for the localized treatment of Candida infections. Candida isolates from the Ivory Coast showed good susceptibility to nystatin, with increasing resistance noted in Ethiopia (1.3–4.7%), Kenya (36%), Gauteng, South Africa (67%) and Mutengene, Cameroon (68%). Resistance to econazole was reported in South West Cameroon26,27. Overall, Candida isolates from Eastern African countries demonstrated the lowest resistance levels, with the exception of Kenya where resistance values for clotrimazole (74%) and nystatin (35.6%) were high32. Systemic antifungals are usually reserved for patients who are unresponsive to topical treatment in cases such as these.\n\n\nDiscussion\n\nFluconazole is widely used in public health settings in the African continent and is used empirically in the treatment of systemic or localized Candida infections54, as it is less toxic and regarded as more effective than imidazole antifungals such as ketoconazole, or than amphotericin B, even though it is a teratogenic drug55,56. Although still somewhat effective in other regions, the use of azoles as first-line drugs for systemic infection should be revisited in certain areas of South and West Africa, due to their increasing inefficacy. 
Regular monitoring of Candida at a regional level could therefore be an important tool to aid in the prescription of antifungals based on the prevalent species and their susceptibilities to antifungal drugs in areas where routine microbiological laboratory testing is not available.\n\nThe sale of antimicrobial medications is largely unregulated in Africa, a problem exacerbated by the influx of fake and adulterated drugs with little or no active ingredient, often available both in pharmacies and on the streets. This problem is aggravated by practitioners who prescribe antimicrobial medications empirically based on clinical presentation, without prior knowledge of which microbial agent(s) are causing infections in their patients. These issues pose a serious public health threat, as they render an increasing number of antimicrobial drugs ineffective in treating life-threatening infections. This is especially true in the case of the antifungal armamentarium, which is already very limited57, especially in resource-poor settings. Limitations of this study include the paucity of available data from African regions and differences in sample sourcing, isolation techniques and susceptibility testing across regions, all of which complicate a comparison of the outcomes of the cited studies.\n\nThe regional differences in antifungal drug susceptibility of Candida species, often seen within the same country, are an important finding that justifies the implementation of Candida species prevalence and susceptibility testing programmes in the African continent, notably in at-risk population groups, such as HIV patients. With the emergence of inherently drug-resistant non-albicans species, more studies on Candida prevalence and drug susceptibility are needed throughout sub-Saharan Africa. 
This is most critical in resource-poor areas where there is little or no information available, such as southern (with the exception of South Africa) and central African countries and countries bordering the Sahara.\n\nWe would like to conclude by adding that Candida identification to species level is rarely made in clinical settings in Africa, and patients are treated empirically based on their clinical symptoms. The introduction of routine antimicrobial susceptibility testing before initiation of therapy can be relatively expensive, but is certainly a cost-effective long-term solution in preventing the progression of drug resistance. Changes in drug susceptibility over time serve as a reminder of the need to test clinical Candida isolates for sensitivity to antifungal drugs in the effort to improve patient care and reduce patient morbidity and mortality.\n\n\nData availability\n\nDataset 1: Antifungal drug resistance of Candida species per region. DOI: 10.5256/f1000research.10327.d14531921", "appendix": "Author contributions\n\n\n\nCA conceived the study and contributed to the writing of the manuscript. PA wrote the first draft of the manuscript. Both authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by the Research Office of the University of the Western Cape (project registration no. ScRIRC2012/10/72).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAkpan A, Morgan R: Oral candidiasis. Postgrad Med J. 2002; 78(922): 455–9.\n\nJabra-Rizk MA, Falkler WA Jr, Enwonwu CO, et al.: Prevalence of yeast among children in Nigeria and the United States. Oral Microbiol Immunol. 2001; 16(6): 383–5.
\n\nOwotade FJ, Patel M, Ralephenya TR, et al.: Oral Candida colonization in HIV-positive women: associated factors and changes following antiretroviral therapy. J Med Microbiol. 2013; 62(Pt 1): 126–32.\n\nGudlaugsson O, Gillespie S, Lee K, et al.: Attributable mortality of nosocomial candidemia, revisited. Clin Infect Dis. 2003; 37(9): 1172–7.\n\nAlmirante B, Rodríguez D, Park BJ, et al.: Epidemiology and predictors of mortality in cases of Candida bloodstream infection: results from population-based surveillance, Barcelona, Spain, from 2002 to 2003. J Clin Microbiol. 2005; 43(4): 1829–35.\n\nBudhavari S: What’s new in diagnostics? Fungitell®: 1,3 beta-D Glucan assay. South Afr J Epidemiol Infect. 2009; 24(1): 37–8.\n\nMohd Bakri M, Mohd Hussaini H, Rachel Holmes A, et al.: Revisiting the association between candidal infection and carcinoma, particularly oral squamous cell carcinoma. J Oral Microbiol. 2010; 2: 5780.\n\nOdds FC, Brown AJ, Gow NA: Antifungal agents: mechanisms of action. Trends Microbiol. 2003; 11(6): 272–9.\n\nKhan ZK, Jain P: Antifungal agents and immunomodulators in systemic mycoses. Indian J Chest Dis Allied Sci. 2000; 42(4): 345–55.\n\nLuque AG, Biasoli MS, Tosello ME, et al.: Oral yeast carriage in HIV-infected and non-infected populations in Rosario, Argentina. Mycoses. 2009; 52(1): 53–9.\n\nManzano-Gayosso P, Méndez-Tovar LJ, Hernández-Hernández F, et al.: [Antifungal resistance: an emerging problem in Mexico]. Gac Med Mex. 2008; 144(1): 23–6.\n\nJia XM, Ma ZP, Jia Y, et al.: RTA2, a novel gene involved in azole resistance in Candida albicans. Biochem Biophys Res Commun. 2008; 373(4): 631–6.
\n\nPfaller MA, Jones RN, Doern GV, et al.: International surveillance of bloodstream infections due to Candida species: frequency of occurrence and antifungal susceptibilities of isolates collected in 1997 in the United States, Canada, and South America for the SENTRY Program. The SENTRY Participant Group. J Clin Microbiol. 1998; 36(7): 1886–9.\n\nFalagas ME, Roussos N, Vardakas KZ: Relative frequency of albicans and the various non-albicans Candida spp among candidemia isolates from inpatients in various parts of the world: a systematic review. Int J Infect Dis. 2010; 14(11): e954–66.\n\nPowderly WG, Mayer KH, Perfect JR: Diagnosis and treatment of oropharyngeal candidiasis in patients infected with HIV: a critical reassessment. AIDS Res Hum Retroviruses. 1999; 15(16): 1405–12.\n\nGodoy P, Tiraboschi IN, Severo LC, et al.: Species distribution and antifungal susceptibility profile of Candida spp. bloodstream isolates from Latin American hospitals. Mem Inst Oswaldo Cruz. 2003; 98(3): 401–5.\n\nAdriaenssens N, Coenen S, Muller A, et al.: European Surveillance of Antimicrobial Consumption (ESAC): outpatient systemic antimycotic and antifungal use in Europe. J Antimicrob Chemother. 2010; 65(4): 769–74.\n\nCuenca-Estrella M, Rodríguez-Tudela JL, Córdoba S, et al.: [Regional laboratory network for surveillance of invasive fungal infections and antifungal susceptibility in Latin America]. Rev Panam Salud Publica. 2008; 23(2): 129–34.\n\nPfaller MA, Moet GJ, Messer SA, et al.: Geographic variations in species distribution and echinocandin and azole antifungal resistance rates among Candida bloodstream infection isolates: report from the SENTRY Antimicrobial Surveillance Program (2008 to 2009). J Clin Microbiol. 2011; 49(1): 396–9.
\n\nThe World Health Organization: Antimicrobial resistance global report on surveillance. WHO Press, Geneva, Switzerland. ISBN 978 92 4 156474 8. 2014.\n\nWilma Joyce Africa C, Dos Santos Abrantes PM: Dataset 1 in: Candida antifungal drug resistance in sub-Saharan African populations: A systematic review. F1000Research. 2016.\n\nBlignaut E, Messer S, Hollis RJ, et al.: Antifungal susceptibility of South African oral yeast isolates from HIV/AIDS patients and healthy individuals. Diagn Microbiol Infect Dis. 2002; 44(2): 169–74.\n\nOwotade FJ, Gulube Z, Ramla S, et al.: Antifungal susceptibility of Candida albicans isolated from the oral cavities of patients with HIV infection and cancer. SADJ. 2016; 71(1): 8–11.\n\nBlignaut E, Botes ME, Nieman HL: The treatment of oral candidiasis in a cohort of South African HIV/AIDS patients. SADJ. 1999; 54(12): 605–8.\n\nDos Santos Abrantes PM, McArthur CP, Africa CW: Multi-drug resistant oral Candida species isolated from HIV-positive patients in South Africa and Cameroon. Diagn Microbiol Infect Dis. 2014; 79(2): 222–7.\n\nNjunda AL, Nsagha DS, Assob JCN, et al.: In vitro antifungal susceptibility patterns of Candida albicans from HIV and AIDS patients attending the Nylon Health District Hospital in Douala, Cameroon. J Pub Health Afr. 2012; 3(1): 4–7.\n\nNjunda LA, Assob JCN, Nsagha SD, et al.: Oral and urinary colonization of Candida species in HIV/AIDS patients in Cameroon. Basic Sci Med. 2013; 2(1): 1–8.\n\nEnwuru CA, Ogunledun A, Idika N, et al.: Fluconazole resistant opportunistic oro-pharyngeal Candida and non-Candida yeast-like isolates from HIV-infected patients attending ARV clinics in Lagos, Nigeria. Afr Health Sci. 2008; 8(3): 142–8.
\n\nNweze EI, Ogbonnaya UL: Oral Candida isolates among HIV-infected subjects in Nigeria. J Microbiol Immunol Infect. 2011; 44(3): 172–7.\n\nNébavi F, Arnavielhe S, Le Guennec R, et al.: Oropharyngeal candidiasis in AIDS patients from Abidjan (Ivory Coast): antifungal susceptibilities and multilocus enzyme electrophoresis analysis of Candida albicans isolates. Pathol Biol (Paris). 1998; 46(5): 307–14.\n\nHamza OJ, Matee MI, Moshi MJ, et al.: Species distribution and in vitro antifungal susceptibility of oral yeast isolates from Tanzanian HIV-infected patients with primary and recurrent oropharyngeal candidiasis. BMC Microbiol. 2008; 8: 135.\n\nBii CC, Ouko TT, Amukoye E, et al.: Antifungal drug susceptibility of Candida albicans. East Afr Med J. 2002; 79(3): 143–5.\n\nWabe NT, Hussein J, Suleman S, et al.: In vitro antifungal susceptibility of Candida albicans isolates from oral cavities of patients infected with human immunodeficiency virus in Ethiopia. J Exp Integr Med. 2011; 1(4): 265–71.\n\nMulu A, Kassu A, Anagaw B, et al.: Frequent detection of 'azole' resistant Candida species among late presenting AIDS patients in northwest Ethiopia. BMC Infect Dis. 2013; 13: 82.\n\nMoges B, Bitew A, Shewaamare A: Spectrum and the in vitro antifungal susceptibility pattern of yeast isolates in Ethiopian HIV patients with oropharyngeal candidiasis. Int J Microbiol. 2016; 2016: 3037817.\n\nNaicker SD, Magobo RE, Zulu TG, et al.: Two echinocandin-resistant Candida glabrata FKS mutants from South Africa. Med Mycol Case Rep. 2016; 11: 24–6.
\n\nAkortha EE, Nwaugo VO, Chikwe NO: Antifungal resistance among Candida species from patients with genitourinary tract infection isolated in Benin City, Edo state, Nigeria. Afr J Microbiol Res. 2009; 3(11): 694–9.\n\nDjohan V, Angora KE, Vanga-Bosson AH, et al.: [In vitro susceptibility of vaginal Candida albicans to antifungal drugs in Abidjan (Ivory Coast)]. J Mycol Med. 2012; 22(2): 129–33.\n\nFeglo PK, Narkwa P: Prevalence and antifungal susceptibility patterns of yeast isolates at the Komfo Anokye Teaching Hospital (KATH), Kumasi, Ghana. British Microbiol Res J. 2012; 2(1): 10–22.\n\nMagobo RE, Corcoran C, Seetharam S, et al.: Candida auris: An emerging, azole-resistant pathogen causing candidemia in South Africa. IJID. 2014; 21(S1): 215.\n\nGovender NP, Patel J, Magobo RE, et al.: Emergence of azole-resistant Candida parapsilosis causing bloodstream infection: results from laboratory-based sentinel surveillance in South Africa. J Antimicrob Chemother. 2016; 71(7): 1994–2004.\n\nLockhart SR, Messer SA, Gherna M, et al.: Identification of Candida nivariensis and Candida bracarensis in a large global collection of Candida glabrata isolates: comparison to the literature. J Clin Microbiol. 2009; 47(4): 1216–7.\n\nVermitsky JP, Edlind TD: Azole resistance in Candida glabrata: coordinate upregulation of multidrug transporters and evidence for a Pdr1-like transcription factor. Antimicrob Agents Chemother. 2004; 48(10): 3773–81.\n\nYoo JI, Choi CW, Lee KM, et al.: Gene Expression and Identification Related to Fluconazole Resistance of Candida glabrata Strains. Osong Public Health Res Perspect. 2010; 1(1): 36–41.
\n\nAlexander BD, Johnson MD, Pfeiffer CD, et al.: Increasing echinocandin resistance in Candida glabrata: clinical failure correlates with presence of FKS mutations and elevated minimum inhibitory concentrations. Clin Infect Dis. 2013; 56(12): 1724–32.\n\nSatoh K, Makimura K, Hasumi Y, et al.: Candida auris sp. nov., a novel ascomycetous yeast isolated from the external ear canal of an inpatient in a Japanese hospital. Microbiol Immunol. 2009; 53(1): 41–4.\n\nChowdhary A, Voss A, Meis JF: Multidrug-resistant Candida auris: ‘new kid on the block’ in hospital-associated infections? J Hosp Infect. 2016; 94(3): 209–12.\n\nMolepo JSE, Blignaut E: Antifungal susceptibility of oral Candida isolates from HIV/AIDS patients. J Dent Res. 2006; 85(Spec Issue B): 837.\n\nWertheimer AI, Santella TM, Lauver HJ: Successful public/private donation programs: a review of the diflucan partnership program in South Africa. J Int Assoc Physicians AIDS Care. 2004; 3(3): 74–9, 84–5.\n\nGoldman M, Cloud GA, Smedema M, et al.: Does long-term itraconazole prophylaxis result in in vitro azole resistance in mucosal Candida albicans isolates from persons with advanced human immunodeficiency virus infection? The National Institute of Allergy and Infectious Diseases Mycoses study group. Antimicrob Agents Chemother. 2000; 44(6): 1585–7.\n\nPelletier R, Peter J, Antin C, et al.: Emergence of resistance of Candida albicans to clotrimazole in human immunodeficiency virus-infected children: in vitro and clinical correlations. J Clin Microbiol. 2000; 38(4): 1563–8.
\n\nRautemaa R, Richardson M, Pfaller M, et al.: Reduction of fluconazole susceptibility of Candida albicans in APECED patients due to long-term use of ketoconazole and miconazole. Scand J Infect Dis. 2008; 40(11–12): 904–7.\n\nMakita K, Takahashi K, Karara A, et al.: Experimental and/or genetically controlled alterations of the renal microsomal cytochrome P450 epoxygenase induce hypertension in rats fed a high salt diet. J Clin Invest. 1994; 94(6): 2414–20.\n\nKaplan JE, Benson C, Holmes KK, et al.: Guidelines for prevention and treatment of opportunistic infections in HIV-infected adults and adolescents: recommendations from CDC, the National Institutes of Health, and the HIV Medicine Association of the Infectious Diseases Society of America. MMWR Recomm Rep. 2009; 58(RR-4): 1–207; quiz CE1-4.\n\nPursley TJ, Blomquist IK, Abraham J, et al.: Fluconazole-induced congenital anomalies in three infants. Clin Infect Dis. 1996; 22(2): 336–40.\n\nLopez-Rangel E, Van Allen MI: Prenatal exposure to fluconazole: an identifiable dysmorphic phenotype. Birth Defects Res A Clin Mol Teratol. 2005; 73(11): 919–23.\n\nRoemer T, Krysan DJ: Antifungal drug development: challenges, unmet clinical needs, and new approaches. Cold Spring Harb Perspect Med. 2014; 4(5): pii: a019703.
[ { "id": "18772", "date": "06 Jan 2017", "name": "António Paulo Gouveia de Almeida", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations A number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very good and much needed review. The subject in question, Candida antifungal drug resistance in sub-Saharan Africa, is most pertinent and up to date.\nI would, however, suggest that, similarly to what the authors do in the Introduction, they mention in the Discussion how the situation of Candida antifungal drug resistance in sub-Saharan Africa differs from that in the rest of the world, both in general outlook and as a result of this literature review.", "responses": [] }, { "id": "19187", "date": "11 Jan 2017", "name": "Roland N. Ndip", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe title is appropriate for the content of the work; and the abstract captures a very good summary of the respective sections of the work. To me it is well written.\n\nThe content is excellently presented with a very good and balanced literature review.
All the articles cited are not only current, but very relevant to the scope of the work. The gaps in knowledge in this area were well highlighted and a good attempt made to bridge them.\n\nThe conclusions are in line with the results obtained, and carry profound recommendations which will improve antifungal chemotherapy in sub-Saharan Africa.\n\nHowever, I have some minor questions which, if addressed, I think would improve the presentation. In the methods section under inclusion criteria: I was wondering why only studies conducted between 1998 and 2016 were used? The authors should state reasons to justify the stated period.\n\nUnless journal instructions dictate otherwise, I am of the opinion that merging Results and Discussion into a single Results/Discussion section would be better, since so much discussion already appears under the Results section.", "responses": [] } ]
1
https://f1000research.com/articles/5-2832
https://f1000research.com/articles/6-56/v1
19 Jan 17
{ "type": "Research Article", "title": "Annotated mitochondrial genome with Nanopore R9 signal for Nippostrongylus brasiliensis", "authors": [ "Jodie Chandler", "Mali Camberis", "Tiffany Bouchery", "Mark Blaxter", "Graham Le Gros", "David A Eccles" ], "abstract": "Nippostrongylus brasiliensis, a nematode parasite of rodents, has a parasitic life cycle that is an extremely useful model for the study of human hookworm infection, particularly with regard to the induced immune response. The current reference genome for this parasite is highly fragmented with minimal annotation, but new advances in long-read sequencing suggest that a more complete and annotated assembly should be an achievable goal. We de-novo assembled a single contig mitochondrial genome from N. brasiliensis using MinION R9 nanopore data. The assembly was error-corrected using existing Illumina HiSeq reads, and annotated in full (i.e. gene boundary definitions without substantial gaps) by comparing with annotated genomes from similar parasite relatives. The mitochondrial genome has also been annotated with a preliminary electrical consensus sequence, using raw signal data generated from a Nanopore R9 flow cell.", "keywords": [ "nanopore", "MinION", "parasite", "mitochondria", "de novo", "phylogenetic", "bioinformatics" ], "content": "Introduction\n\nNippostrongylus brasiliensis is a parasitic nematode that naturally infects rodents. Its life cycle and morphology are comparable to those of Necator americanus and Ancylostoma duodenale, and it is thus an excellent murine model of human hookworm infection, a disease that affects approximately 700 million people worldwide1. Like its human counterparts, N.
brasiliensis L3 larvae infect the host through the skin and migrate to the lungs where they feed on red blood cells (unpublished study; Haem metabolism is a check-point in blood-feeding nematode development and resulting host anaemia; Bouchery T, Filbey K, Shepherd A, Chandler J, Patel D, Schmidt A, Camberis A, Peignier A, Smith AAT, Johnston K, Painter G, Pearson M, Giacomin P, Loukas A, Bottazzi M-E, Hotez P, Le Gros G), causing extensive haemorrhage and anaemia – both hallmarks of hookworm infections. The larvae are coughed up and swallowed to enter the gastrointestinal tract. The nematode matures into a sexually active adult in the small intestine where it secretes eggs that enter the environment via the host’s faeces. Larvae hatch, undergo two molts to become infective L3 larvae, which propagates the lifecycle2. The immunology of N. brasiliensis infection has been studied extensively, and the parasite has been utilised as an inducer of potent Th2 responses in the lung and intestine, yielding important insights into cellular and molecular immune responses3–6. The N. brasiliensis model allows delineation of hookworm-induced immune profiles that could be targeted in drug or vaccine design, and provides a simple and well-characterised murine model in which to test these interventions for efficacy. To underpin these studies, a high-quality reference genome is needed.\n\nThe most recent NCBI reference genome sequence for N. brasiliensis is a draft generated from Illumina HiSeq reads as part of the Wellcome Trust Sanger Institute (WTSI) 50 Helminth Genomes initiative7–9. It is 294.4 Mbp in total length, and highly fragmented (29,375 scaffolds with an N50 length of 33.5kb, and a longest scaffold of under 400kb). The N.
brasiliensis reference genome would benefit from improvement, a goal that may be readily achieved with the advent of affordable long-read sequencing technologies.\n\nThe Oxford Nanopore Technologies (ONT) MinION platform is improving at a rapid pace, with improvements in flow cell chemistry and base calling software announced frequently. In 2015, the median accuracy of double-stranded MinION reads, using R7.3 sequencing pores, was about 89%, sequencing at 60 bases per second with a yield of about 200 Mb10. The quality and length of sequences generated from R7.3 pores were sufficient to create a single-contig assembly of the Escherichia coli K-12 MG1655 chromosome using nanopore reads alone, with consensus accuracy of 99.5%11. An equimolar sample of Mus musculus, E. coli and Enterobacteriophage lambda DNA was sequenced in September 2016 on the International Space Station using R7.3 flow cells, producing approximately equal read counts for the different samples with a median accuracy of 83–92% for 2D reads across four runs12,13.\n\nThe recent introduction of R9 sequencing pores in June 2016, together with improved software for base-calling the generated signal trace at 250 bases per second14, has improved the median accuracy of high-quality double-stranded reads to 95%, and yield to 800 Mb (personal communication, September 2016; MinION Analysis and Reference Consortium). Consensus accuracy for an E. coli K12 assembly consequently also increased to 99.96%15. A rapid single-stranded sequencing kit was introduced in August 2016, reducing post-extraction sample preparation time to less than 15 minutes (see 16).\n\nThe R9.4 flow cell was commercially released by ONT a few months later in October 2016. This release brought together software and chemistry improvements that increased per-run flow cell yield into the gigabase range, and increased sequencing speed to 450 bases per second.
Additional use cases for the MinION are evident with this increased yield: the R9.4 flow cells have already been used for sequencing human genomes using multiple flow cells, with observed yields of about 1–4Gb for each individual sequencing run17,18.\n\nThe mitochondrial genome is useful for epidemiology and population genetic analysis in nematodes, as it is rapidly evolving19,20. An average cell has 100–1000 mtDNA molecules, compared to two nuclear DNA molecules21, and this stoichiometric excess facilitates analyses, especially where starting materials are limited. The strict maternal inheritance of the mitochondrial chromosome, coupled with a general lack of recombination in this haploid replicon, permits inference of maternal lineages21–23. The ONT MinION can be deployed in infectious disease outbreak scenarios, and a \"read until\" methodology promises to make rapid, specific identification of known infectious agents possible. The technology has obvious utility in other areas of epidemiology and infection surveillance, and to enhance these applications it will be useful to develop the \"read until\" methodology to be able to detect a wider range of infectious agents from metagenomic sequencing. To do this, electronic signatures representing the MinION nanopore event signals could be used as a reference library to pre-screen raw signals from the pores before base calling. Here we present a complete mitochondrial genome for N. brasiliensis, assess its quality by gene prediction and phylogenetic analyses, and provide a validated electronic signal trace for the sequence.\n\nThis annotation clears the first hurdle in generating a complete genomic sequence for this model organism and provides crucial information for evolutionary and immunological studies.
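The "read until" pre-screening idea described here — matching raw pore signal against a library of electronic signatures before base calling — can be illustrated with a toy dynamic time warping (DTW) comparison. This is purely a sketch under assumed data shapes (plain lists of event mean currents in pA); it is not the ONT read-until API, and the function names and threshold are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): score a raw
# nanopore event stream against a reference "electronic signature" using
# DTW on z-normalised event means. Threshold is a hypothetical choice.

def znorm(xs):
    """Z-normalise a list of event currents (pA)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5 or 1.0
    return [(x - mean) / sd for x in xs]

def dtw_cost(query, ref):
    """Classic O(len(query) * len(ref)) DTW alignment cost,
    normalised by the combined sequence length."""
    INF = float("inf")
    q, r = znorm(query), znorm(ref)
    prev = [0.0] + [INF] * len(r)
    for qi in q:
        cur = [INF]
        for j, rj in enumerate(r):
            d = abs(qi - rj)
            cur.append(d + min(prev[j], prev[j + 1], cur[j]))
        prev = cur
    return prev[-1] / (len(q) + len(r))

def matches_signature(query, ref, threshold=0.25):
    """Decide whether a read's early events resemble a reference signature."""
    return dtw_cost(query, ref) < threshold
```

In a real read-until loop, a decision like `matches_signature` would be made on the first few hundred events of a read, with non-matching reads ejected from the pore.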
The rapid advancement of molecular technologies, such as qPCR, RNAseq, NanoString and high-throughput sequencing, has given researchers the capacity to acquire an expansive array of new knowledge and insight into how genetic pathways function and interact at a molecular level. However, the lack of a complete annotated reference genome for N. brasiliensis has thus far restricted full exploration of this important helminth.\n\n\nMethods and results\n\nGenomic DNA was extracted from adult N. brasiliensis and sequenced on a MinION R9 flow cell. Reads from this sequencing run were then assembled, and the highest-coverage contig (mitochondrial DNA) was error-corrected and circularised for further analysis.\n\nN. brasiliensis was originally sourced from Lindsey Dent of the University of Adelaide, South Australia and has been maintained for 22 years by serial passage at the Malaghan Institute. Female Lewis rats were bred and used for the maintenance of the N. brasiliensis life cycle at 4 months of age (weight over 150g; housed in IVC caging and given ad libitum access to food and water). For the purposes of this study, one rat was infected with 4000 infective larvae. After 7 days, to allow the worms to mature to the adult stage in the small intestine, the rat was euthanized, and the small intestine dissected and flushed with PBS to harvest worms, as outlined in Camberis et al.2. Maintenance of the N. brasiliensis life cycle is overseen and approved by the Victoria University of Wellington Animal Ethics Committee.\n\nThe harvested N. brasiliensis were washed in PBS by centrifugation to remove cellular debris. The nematodes were frozen at -80°C, bead-beaten, and DNA extracted using the Qiagen DNeasy Blood and Tissue DNA extraction kit, yielding approximately 4µg of high molecular weight double-stranded DNA (determined by the Quantus QuantiFluor dsDNA System). This DNA was treated with RNase.
Two sequencing libraries were made using the Oxford Nanopore 2D genomic DNA sequencing kit, yielding in total about 70ng of adapter-ligated sequencing library. No effort was made to specifically isolate mitochondrial DNA. The first preparation was loaded onto an R9 MinION flow cell and sequenced for 6 hours, and the second preparation was loaded onto the same flow cell and sequenced for an additional 36 hours. Pore occupancy at 30 minutes into the first run was about 25%, while pore occupancy at 30 minutes into the second run was about 80%.\n\nAll FASTQ sequences (i.e. both 1D and 2D reads) were extracted from the base-called FAST5 files. These sequences were fed into Canu v1.324 to generate assembled contigs. The contig with the highest coverage was a 19907 bp sequence with similarity to other nematode mitochondrial genomes (see Supplementary File 1). This sequence had 98% identity to an unannotated N. brasiliensis contig in the Wellcome Trust Sanger Institute (WTSI) N. brasiliensis assembly7.\n\nReads generated by WTSI (SRA ID: ERR063640) were mapped as pairs to the MinION mitochondrial contig using Bowtie225 in local mode. At each location, one read was randomly sampled from those that mapped to that location, representing a reference-based digital normalisation to approximately 100X coverage (see Supplementary File 2). The differences between these normalised reads and the MinION contig were evaluated using a custom script, producing a corrected sequence based on the consensus read alignments. 
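The sampling-and-consensus step just described — downsampling the pileup to a target depth and replacing each draft base with the majority base among aligned reads — can be sketched as follows. This is an illustrative stand-in, not the authors' custom script: `column_reads` is a hypothetical per-column pileup of bases, and ties fall back to the draft base.

```python
import random
from collections import Counter

def downsample_pileup(column_reads, target_depth=100, seed=42):
    """Digital normalisation: keep at most `target_depth` randomly chosen
    bases per reference column (a simplified stand-in for sampling one
    read per mapped location)."""
    rng = random.Random(seed)
    return [rng.sample(col, min(len(col), target_depth)) for col in column_reads]

def consensus_correct(draft, columns):
    """Replace each draft base with the majority base among aligned reads;
    keep the draft base when there is no coverage or a tie."""
    out = []
    for base, col in zip(draft, columns):
        if not col:
            out.append(base)          # no coverage: trust the draft
            continue
        (top, n), *rest = Counter(col).most_common()
        if rest and rest[0][1] == n:  # tie: trust the draft
            out.append(base)
        else:
            out.append(top)
    return "".join(out)
```

A real implementation would work from a SAM/BAM pileup and also handle insertions and deletions, which this per-column sketch ignores.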
The mapping and correction process was repeated with BWA-MEM26 on the corrected sequence (see Supplementary File 3) to identify additional variants that were missed by Bowtie2, due to multiple matches to duplicated regions.\n\nRepeated sections of the linear contig (representing duplicated regions of the circular sequence) were merged to generate a circular consensus sequence, and the resultant sequence adjusted (by shifting sequence from the end to the start of the circular genome) so that the first base in the genome was set to the beginning of the COX1 gene (following the convention of OGRe27, see http://drake.physics.mcmaster.ca/ogre). A final round of error correction was carried out on the circularised genome using Bowtie2-aligned reads from ERR063640 (see Supplementary File 4), producing a final mitochondrial genome length of 13,355 bp. The original 19 kbp contig thus contained about 6 kbp of duplicated sequence. MinION reads were mapped to the assembled genome to identify variants not present in the WTSI reads.\n\nAfter remapping the original R9 MinION reads back to the assembled and corrected genome with GraphMap28, four locations were found with variant calls that contributed to more than 50% of the read coverage. Three of these variants involved transition mutations: T → C at 5742, G → A at 6102, and T → C at 11460. One additional complementary mutation was found: T → A at 2860 (see Figure 1).\n\nGene regions are displayed in this circular mitochondrial DNA diagram in yellow, with tRNA regions in blue. The AT-rich region between the ND5 and ND6 genes is shaded grey. A combined coverage/variant plot is also displayed, showing MinION read coverage (in black), and base-called transition, transversion, and complementary variants (in chartreuse, magenta and cyan, respectively). 
Variant differences between Wellcome Trust Sanger Institute and Malaghan Institute of Medical Research strains of Nippostrongylus brasiliensis are indicated on the perimeter of the diagram.\n\nApproximate gene boundaries were determined by a local NCBI BLASTx search, mapping the contig to mitochondrial protein sequences from Necator americanus (see Table 1; Supplementary File 5 and Supplementary File 6). Regions between genes were then scanned using Infernal cmscan29 to identify exact tRNA gene boundaries and codon sequences (see Table 2). The amino acid associated with each tRNA was identified using BWA-MEM to map annotated tRNA sequences from Oesophagostomum columbianum, N. americanus, Strongylus vulgaris, and A. duodenale. One tRNA region found by cmscan (between the ND4 and COX1 genes) could not be matched to any existing tRNA sequences. When this sequence was fed into RNAstructure30, the predicted secondary structure had no T-loop or D-loop, and an anticodon loop of 8 bases (Figure 2). The anticodon for this structure pairs with one of the two most common gene start codons (i.e. ATT), and could potentially pair with the other most common start codon through a wobble A-A pairing on the third base (see 31).\n\nRNA structure for the truncated tRNA between ND4 and COX1, predicted by RNAstructure.\n\nPredicted gene features from the Nippostrongylus brasiliensis mitochondrial genome. Stop codons that end in hyphens (-) are completed by the addition of polyA sequence.\n\nPredicted tRNA sites in the Nippostrongylus brasiliensis mitochondrial genome. One truncated tRNA site between the ND4 and COX1 genes (detected by cmscan) could not be fully annotated.\n\nPrecise gene start boundaries were determined by mapping open reading frames (ORFs) between the tRNA genes (codon translation table 5: Invertebrate Mitochondrial) with NCBI SmartBLAST (https://blast.ncbi.nlm.nih.gov/smartblast/smartBlast.cgi?CMD=Web). 
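As a rough illustration of the ORF mapping described above, the following sketch scans the three forward frames using NCBI translation table 5 conventions (in which TGA encodes tryptophan rather than stop, and alternative start codons such as ATT and ATA are common). The start-codon set is a simplification; real annotation would also scan the reverse strand and tolerate truncated stop codons completed by polyadenylation.

```python
# Illustrative forward-strand ORF scan under translation table 5
# conventions. STARTS is a simplified, assumed set of table 5 starts.

STARTS = {"ATT", "ATA", "ATG", "GTG", "TTG", "ATC"}
STOPS = {"TAA", "TAG"}  # in table 5, TGA encodes Trp, so it is not a stop

def longest_orf(seq):
    """Return (start, end) of the longest start->stop ORF in half-open
    coordinates, or None. Scans all three forward frames."""
    seq = seq.upper()
    best = None
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon in STARTS:
                start = i
            elif start is not None and codon in STOPS:
                if best is None or (i + 3 - start) > (best[1] - best[0]):
                    best = (start, i + 3)
                start = None
    return best
```

In practice the candidate ORFs between annotated tRNA genes would then be confirmed against protein homology, as done here with SmartBLAST.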
Stop boundaries were determined by looking for plausible in-frame stop sequences surrounding the end region of matching SmartBLAST hits. The boundaries for the ribosomal RNA genes were determined by a BLAST search against the four previously compared parasite species. Finally, the AT-rich region was identified as the region between tRNA-Ala and tRNA-Pro.\n\nWe identified orthologues of cytochrome oxidase 1 (COX1), cytochrome B (CytB), and the large ribosomal RNA subunit (l-rRNA) in other rhabditid nematodes using BLAST, and collated a dataset from 49 taxa. Nucleotide sequences were aligned using clustalo32, trimmed with trimAl, and phylogenies estimated with RAxML under the GTRGAMMA model. Bootstrap values were calculated from 100 iterations. Figures were generated using FigTree v1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/). The Nippostrongylus brasiliensis sequences were placed within Strongylomorpha, as expected, and N. brasiliensis was found to be sister to Heligmosomoides polygyrus, a finding in keeping with morphological systematics. Many internal nodes have very low bootstrap values, suggesting either low or conflicting signal in the data. Some groups were well supported, but these tend to be within rather than between genera. Overall, the tree conforms to the classical morphological and global molecular phylogenies of the suborder, but cannot stand independently as an indicator of those relationships (Figure 3).\n\nPhylogenetic tree based on evidence from three mitochondrial-encoded genes: cytochrome oxidase 1, l-rRNA, and cytochrome B. This tree demonstrates sequence similarities for 47 species from the Rhabditida together with two outgroups (Pristionchus pacificus and Koerneria sudhausi). Branch lengths are nucleotide substitutions per bp. Nodes are labelled with sub-sequence deletion bootstrap values. Branch colours and widths represent bootstrap proportions.\n\nPark and colleagues32 used whole mitochondrial genomes (i.e. 
all 12 protein coding loci) to develop a phylogeny of Nematoda, with the goal of analysing the placement of some unusual mitochondria from Ascaridia species, but including many strongyles. Our analyses are largely congruent with theirs, albeit with lower support (as noted above).\n\nThe template and complement raw signal from the MinION reads mapped by GraphMap28 were extracted from the FAST5 files, and sorted into four groups:\n\n1. Template sequence, mapped to coding strand\n\n2. Template sequence, mapped to non-coding strand\n\n3. Complement sequence, mapped to coding strand\n\n4. Complement sequence, mapped to non-coding strand\n\nA summary of mapping counts can be found in Table 3. Reads where the template fragment mapped to the non-coding strand were about two-thirds as numerous as coding-strand-mapped reads, with a similar proportion of reads distributed between the template and complement read fragments.\n\nStatistics for the four different read mapping groups, showing reads that mapped to the Nippostrongylus brasiliensis mitochondrial genome with over 50% coverage.\n\nEvent information (generated by the ONT cloud base caller Metrichor dragonet, version 1.22.4) was extracted for these sorted reads, and per-group median event currents were calculated for each pentamer found in the reference mitochondrial genome. An ideal signal trace of the mitochondrial genome was generated using these statistics for the four different signal groups (see Figure 4; Supplementary File 7).\n\nIdeal event trace for 200 pentamers at the tail end of the Cytochrome B gene. 
The complement sequence has a slightly higher current than the template sequence for reads mapped to the coding strand, and a slightly lower current for reads mapped to the non-coding strand.\n\nMedian complement events mapped to coding strand pentamers had a slightly higher event current when compared to template events (median difference = 3.94 pA, 90% range: 1.2 ∼ 6.7, MAD = 1.53), and were slightly lower in events mapped to non-coding pentamers (median difference = −2.08 pA, 90% range: −5.7 ∼ 1.6, MAD = 2.93).\n\nThe median signal level for pentamers found in the N. brasiliensis mitochondrial genome has a very strong positive correlation between read directions for the coding strand (r = 0.982, 90% range: 0.980 ∼ 0.984) and the non-coding strand (r = 0.974, 90% range: 0.972 ∼ 0.978), whereas there is a weaker negative correlation between strands for the template direction (r = −0.67, 90% range: −0.70 ∼ −0.63) and the complement direction (r = −0.66, 90% range: −0.69 ∼ −0.62).\n\nRaw signal traces from both template and complement strands were converted to pA using scaling metadata in the FAST5 files, mapped to the GraphMap-aligned reference base positions using event metadata, and linearly interpolated to 11 samples per base using the R approx function (R version 3.3.1). Median signal traces (at a sub-base resolution) were generated by summarising the mapped signal at each interpolated location (Figure 5; Supplementary File 8).\n\nRaw signal plot for 100 bases at the start of the Cytochrome B gene for template read directions (top) and complement read directions (bottom). Median raw signal current is shown as a thick red line, with individual raw signal observations shown in grey. 
Ideal event current for the observed pentamers is shown as black circles.\n\nThe event data signal for template sequence mapped to the coding strand was loosely correlated with median raw signals in the middle of the interpolated region (r = 0.52, 90% range: 0.51 ∼ 0.53), with other read groups demonstrating lower correlations (r = 0.29 ∼ 0.44). This correlation disappeared when shifting the compared signal by one base in either direction (r = 0.03 ∼ 0.09).\n\n\nDiscussion\n\nUsing a long-read assembler and three passes of error correction with publicly-available data, we have created a full-length, essentially error-free, de novo assembly of the mitochondrial genome of N. brasiliensis. This genome has been annotated with gene and tRNA boundaries, and compared with other related parasite species. An additional preliminary “electrical” annotation was generated from mapped nanopore read sequences.\n\nLow-cost long-read sequencing has made possible full-length assemblies of a number of different megabase-length genomes from nanopore data alone (e.g. 11,33–35), so it is not surprising that a full-length mitochondrial assembly was also possible using nanopore reads. The vast wealth of publicly-available data allows fast, low-cost assembly, correction, and annotation of genomes, producing high-quality reference sequences that are of great benefit to medical research.\n\nWe were able to assemble the N. brasiliensis mitochondrial genome from a whole-genome sequencing nanopore dataset, by identifying assembly contigs with high relative coverage. The assembly is of high quality, based on read coverage, mapping of Illumina short reads, and annotation. The gene order is identical to that of Caenorhabditis elegans and other strongylomorph nematodes (see 36). 
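The gene-order comparison here amounts to checking whether one circular gene order is a rotation of the other; a minimal sketch of that check (the gene lists below are truncated illustrations, not the full mitochondrial gene order):

```python
# Sketch: are two circular gene orders equivalent (one a rotation of
# the other)? Gene lists below are truncated illustrations, not the
# full mitochondrial gene complement.

def same_circular_order(a, b):
    """True if list b is a rotation of list a (same circular order)."""
    if len(a) != len(b) or set(a) != set(b):
        return False
    doubled = a + a  # every rotation of a appears as a slice of a + a
    return any(doubled[i:i + len(a)] == b for i in range(len(a)))

ref = ["COX1", "COX2", "ND3", "ND5", "ND6", "ND4", "CYTB"]
rotated = ["ND5", "ND6", "ND4", "CYTB", "COX1", "COX2", "ND3"]
```

Comparing a genome annotated from a different starting point (a rotation) passes this check, while a reversed order does not; opposite-strand annotations would need the list reversed before comparison.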
Despite this shared structure, there is sufficient variation in sequences between species to generate resolved phylogenies32.\n\nDuring the final preparation of this paper for publication, the WTSI deposited an annotated mitochondrial genome for N. brasiliensis (accession id: AP017690.1). This complements the introduction of the WormBase ParaSite resource for helminth genomics9. While the associated reference for the WTSI N. brasiliensis mitochondrial genome is not yet published, it is expected that this mitochondrial genome was assembled using a similar method to the WTSI’s previous work37 (i.e. a reference-based iterative mapping procedure using MITObim38).\n\nThe sequence of this assembly differs from ours only by an additional T insertion in a 10-base poly-T tract in the l-rRNA gene. While such polynucleotide tracts are problematic for MinION, the poly-T region appears to be polymorphic, with some support for both variants in the WTSI reads (ERR063640). In addition, the WTSI annotation excludes the AT-rich region.\n\nAt the time of sequencing, no mitochondrial genome for N. brasiliensis was available. We thus explored the utility of the MinION data in species identification. As the mitochondrial genome is at a higher molarity than the nuclear genome, low-coverage sequencing of a target genome can yield deep coverage of the mitochondrion. Assembly of this replicon, followed by analysis in a phylogenetic context, successfully placed N. brasiliensis in the Strongylomorpha. We suggest that this approach would be a useful technology for identification of unknown specimens in clinical practice, biosurveillance or biodiversity research programmes. In addition, the nanopore electronic signal of the mitochondrial sequence could be used in a “read until” approach39 to diagnosis, using live monitoring to identify reads that likely derive from this or a very similar genome. 
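A crude software stand-in for such live filtering is k-mer containment against the mitochondrial reference: score each incoming read by the fraction of its k-mers present in the reference and keep reads above a threshold. A toy sketch (k, the threshold, and any sequences used are illustrative assumptions; real "read until" selection operates on the raw nanopore signal rather than base calls):

```python
# Sketch: crude k-mer containment filter to flag reads that likely
# derive from a known mitochondrial reference. Parameters are
# illustrative assumptions; real 'read until' selection works on the
# raw nanopore signal, not on base-called sequence.

def kmers(seq, k=7):
    """All k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def matches_reference(read, ref_kmers, k=7, min_fraction=0.3):
    """True if enough of the read's k-mers occur in the reference."""
    rk = kmers(read, k)
    if not rk:
        return False
    return len(rk & ref_kmers) / len(rk) >= min_fraction
```

A practical version would also test the reverse complement of each read and use a threshold tolerant of nanopore error rates.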
Usually, identification through sequencing relies on amplification of specific target loci in a specimen or sample, an approach known as DNA barcoding. Direct sequencing of the whole genome of a specimen on MinION would both allow barcoding and produce additional sequences that could be used, for example, for population genetic diversity analysis.\n\nNanopore reads were separated into four different read groups to provide information that could be used to establish whether or not there are different sequencing features associated with template and complement strands. In general, the coding and non-coding strands had similar electrical profiles, as demonstrated by the event data (e.g. see Figure 4).\n\nAs this investigation is the first attempt to categorise the electrical properties of a complete mitochondrial genome, errors in the data analysis (e.g. due to incorrect mapping, low read coverage, and incorrect scaling parameters) cannot be excluded as an explanation for the differences in current that were observed between event data and raw signal. A comparison of raw signal current to the ideal current suggests that the pentamer model is probably sufficient to fully describe variation in signal in the mitochondrial genome. Although correlation between the signal and the ideal pentamer model is low for all four sequencing groups (template coding, template non-coding, complement coding, complement non-coding), this variation could be explained by errors in the raw signal mapping process, and alternative mapping techniques (e.g. nanoraw40) may give better performance for linking raw signal to sequenced bases.\n\nIt is possible that the observed difference between the raw and ideal event signal may be due to methylation and other epigenetic modification of the mitochondrial genome. Methylation is a known feature of mitochondrial DNA (see 41), and methylation patterns can be observed as changes in the nanopore electrical signal42. 
Due to the lack of information about epigenetic patterns from de novo nanopore sequencing, this dataset is provided without additional epigenetic analysis as a source of discovery for other researchers.\n\n\nConclusions\n\nThe data presented here have been created from minimally-prepared whole-genome DNA from N. brasiliensis, combining nanopore reads with publicly-available datasets. Using non-targeted sequencing, we have been able to generate a fully-annotated (gap-free) mitochondrial genome, with an initial electrical signal annotation having a resolution that is finer than a single base. The analysis demonstrates that the MinION-generated mitochondrial genome of N. brasiliensis, produced efficiently from non-targeted data, is of high enough quality for phylogenetic use.\n\nWe hope that the procedures discussed here will be sufficient to guide other researchers in annotating mitochondrial genomes and generating consensus signal traces, and that these data will contribute more generally towards improving sequence base-calling algorithms for devices that implement sequencing by observation.\n\n\nData availability\n\nSequences have been deposited into NCBI GenBank, with accession number KY347017. Reads used to produce this assembly are associated with BioProject PRJNA328296. The assembly was error corrected using Illumina reads from a Wellcome Trust Sanger Institute sequencing run (ERR063640).\n\nThe mpileup2proportion.pl custom script that was used for error-correcting nanopore reads using Bowtie2-mapped short reads, as well as for generating count data for the read coverage plot, is available from David Eccles’ github repository (DOI: 10.5281/zenodo.164193)43. Read mapping group statistics were generated using the fastx-grep.pl and fastx-length.pl scripts also from this repository. 
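The proportion-based correction performed by such a script can be sketched as a per-position majority vote over mapped-read base counts. This is an assumed reimplementation of the general idea only, not the actual mpileup2proportion.pl logic; the threshold and input format are illustrative:

```python
# Sketch: per-position majority-vote correction of a draft sequence
# from mapped-read base counts (as from an mpileup). The threshold
# and input format are assumptions; this is not the actual
# mpileup2proportion.pl logic.

from collections import Counter

def correct_sequence(draft, pileup_counts, min_proportion=0.5):
    """pileup_counts: one Counter({base: count}) per draft position."""
    corrected = []
    for base, counts in zip(draft, pileup_counts):
        total = sum(counts.values())
        if total == 0:               # no coverage: keep the draft base
            corrected.append(base)
            continue
        best, n = counts.most_common(1)[0]
        # replace only when the majority allele clearly exceeds the cutoff
        corrected.append(best if n / total > min_proportion else base)
    return "".join(corrected)
```

Positions with no coverage retain the draft base, and exact ties fall back to the draft call.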
These scripts have also been included here as a supplementary file (Supplementary File 9).", "appendix": "Author contributions\n\n\n\nJC Preparation & extraction of DNA, phylogenetic analyses, manuscript preparation\n\nMC Maintenance & propagation of N. brasiliensis larvae\n\nTB N. brasiliensis sequencing project conception\n\nMB Interpretation of phylogenetic results\n\nGLG Project design & oversight\n\nDAE Project design, MinION sequencing, assembly, data analysis, manuscript preparation\n\nAll authors have read the paper and provided edit suggestions where appropriate.\n\n\nCompeting interests\n\n\n\nThe present project has been fully funded, but we have in the past received complimentary deliveries of flow cells and sequencing reagents from Oxford Nanopore Technologies, as part of the MinION Access Program.\n\nThe authors declare that there are no other competing interests.\n\n\nGrant information\n\nThis study was supported by program grant funding from the Health Research Council of New Zealand (14/003), the Marjorie Barclay Trust, and the Glenpark Foundation.\n\n\nAcknowledgements\n\nWe would like to thank Dr. Matt Berriman and the Wellcome Trust Sanger Institute for the unpublished draft genome data of N. brasiliensis, Kara Filbey for providing editorial suggestions for this manuscript, and G. Koutsovoulos and A. Buck for prepublication access to H. 
polygyrus genome data.\n\n\nSupplementary files\n\nSupplementary File 1 Original single-contig assembly generated by Canu v1.3.\n\nClick here to access the data.\n\nSupplementary File 2 ERR063640 reads mapped to Canu-assembled genome, digitally normalised to one read per base.\n\nClick here to access the data.\n\nSupplementary File 3 ERR063640 reads mapped to the first error corrected-genome, digitally normalised to one read per base.\n\nClick here to access the data.\n\nSupplementary File 4 ERR063640 reads mapped to the second error corrected-genome, digitally normalised to one read per base.\n\nClick here to access the data.\n\nSupplementary File 5 BED-format file of discovered mtDNA features (prior to correction of boundaries following protein translation).\n\nClick here to access the data.\n\nSupplementary File 6 FASTA file containing subsequences of the mitochondrial genome representing discovered features.\n\nClick here to access the data.\n\nSupplementary File 7 Event-level data aggregated for each base in the mitochondrial genome, including ideal current derived from pentamer signals.\n\nClick here to access the data.\n\nSupplementary File 8 Data file containing interpolated raw signal-level data.\n\nClick here to access the data.\n\nSupplementary File 9 Compressed file containing all Perl and R scripts used for data processing and analysis.\n\nClick here to access the data.\n\n\nReferences\n\nHotez PJ, Bethony JM, Diemert DJ, et al.: Developing vaccines to combat hookworm infection and intestinal schistosomiasis. Nat Rev Microbiol. 2010; 8(11): 814–826. PubMed Abstract | Publisher Full Text\n\nCamberis M, Le Gros G, Urban J Jr: Animal model of Nippostrongylus brasiliensis and Heligmosomoides polygyrus. Curr Protoc Immunol. 2003; Chapter 19: Unit 19.12. PubMed Abstract | Publisher Full Text\n\nBouchery T, Kyle R, Camberis M, et al.: ILC2s and T cells cooperate to ensure maintenance of M2 macrophages for lung immunity against hookworms. Nat Commun. 
2015; 6: 6970. PubMed Abstract | Publisher Full Text\n\nOhnmacht C, Schwartz C, Panzer M, et al.: Basophils orchestrate chronic allergic dermatitis and protective immunity against helminths. Immunity. 2010; 33(3): 364–374. PubMed Abstract | Publisher Full Text\n\nChen F, Wu W, Millman A, et al.: Neutrophils prime a long-lived effector macrophage phenotype that mediates accelerated helminth expulsion. Nat Immunol. 2014; 15(10): 938–946. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeill DR, Wong SH, Bellosi A, et al.: Nuocytes represent a new innate effector leukocyte that mediates type-2 immunity. Nature. 2010; 464(7293): 1367–1370. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolroyd N, Sanchez-Flores A: Producing parasitic helminth reference and draft genomes at the Wellcome Trust Sanger Institute. Parasite Immunol. 2012; 34(2–3): 100–107. PubMed Abstract | Publisher Full Text\n\nWellcome Trust Sanger Institute: Nippostrongylus brasiliensis genome sequencing. NCBI BioProject. 2014. Reference Source\n\nHowe KL, Bolt BJ, Shafie M, et al.: WormBase ParaSite - a comprehensive resource for helminth genomics. Mol Biochem Parasitol. 2016; pii: S0166-6851(16)30160-8. PubMed Abstract | Publisher Full Text\n\nIp CL, Loose M, Tyson JR, et al.: MinION Analysis and Reference Consortium: Phase 1 data release and analysis [version 1; referees: 2 approved]. F1000Res. 2015; 4: 1075. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLoman NJ, Quick J, Simpson JT: A complete bacterial genome assembled de novo using only nanopore sequencing data. Nat Methods. 2015; 12(8): 733–5. PubMed Abstract | Publisher Full Text\n\nCastro-Wallace SL, Chiu CY, John KK, et al.: Nanopore DNA sequencing and genome assembly on the International Space Station. bioRxiv. 2016. Publisher Full Text\n\nJSCNASA and r/Science: NASA AMA: We just sequenced DNA in space for the first time. Ask us anything! The Winnower. 2016. 
Publisher Full Text\n\nBrown C: Inside the skunkworx. In: London Calling. Oxford Nanopore Technologies. 2016. Reference Source\n\nSimpson J: Supporting R9 data in nanopolish. Simpson Lab Blog. 2016. Reference Source\n\nEdwards A, Debbonaire AR, Sattler B, et al.: Extreme metagenomics using nanopore DNA sequencing: a field report from Svalbard, 78°N. bioRxiv. 2016. Publisher Full Text\n\nBrown C: Cliveome onthg1 data release. GitHub repository. 2016. Reference Source\n\nAkeson M, Beggs AD, Nieto T, et al.: NA12878: Data and analysis for NA12878 genome on nanopore. GitHub repository. 2016. Reference Source\n\nBrown WM, George M Jr, Wilson AC: Rapid evolution of animal mitochondrial DNA. Proc Natl Acad Sci U S A. 1979; 76(4): 1967–1971. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartin SA: Mitochondrial DNA repair. In: DNA Repair – On the Pathways to Fixing DNA Damage and Error. InTech, 2011. Publisher Full Text\n\nPakendorf B, Stoneking M: Mitochondrial DNA and human evolution. Annu Rev Genomics Hum Genet. 2005; 6: 165–183. PubMed Abstract | Publisher Full Text\n\nCann RL, Stoneking M, Wilson AC: Mitochondrial DNA and human evolution. Nature. 1987; 325(6099): 31–36. PubMed Abstract | Publisher Full Text\n\nHarrison RG: Animal mitochondrial DNA as a genetic marker in population and evolutionary biology. Trends Ecol Evol. 1989; 4(1): 6–11. PubMed Abstract | Publisher Full Text\n\nBerlin K, Koren S, Chin CS, et al.: Assembling large genomes with single-molecule sequencing and locality-sensitive hashing. Nat Biotechnol. 2015; 33(6): 623–630. PubMed Abstract | Publisher Full Text\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H: Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv, 1303.3997v2. 2013. 
Reference Source\n\nJameson D, Gibson AP, Hudelot C, et al.: OGRe: a relational database for comparative analysis of mitochondrial genomes. Nucleic Acids Res. 2003; 31(1): 202–206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSović I, Šikić M, Wilm A, et al.: Fast and sensitive mapping of nanopore sequencing reads with GraphMap. Nat Commun. 2016; 7: 11307. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNawrocki EP, Kolbe DL, Eddy SR: Infernal 1.0: inference of RNA alignments. Bioinformatics. 2009; 25(10): 1335–1337. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReuter JS, Mathews DH: RNAstructure: software for RNA secondary structure prediction and analysis. BMC Bioinformatics. 2010; 11(1): 129. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMurphy FV 4th, Ramakrishnan V: Structure of a purine-purine wobble base pair in the decoding center of the ribosome. Nat Struct Mol Biol. 2004; 11(12): 1251–1252. PubMed Abstract | Publisher Full Text\n\nPark JK, Sultana T, Lee SH, et al.: Monophyly of clade III nematodes is not supported by phylogenetic analysis of complete mitochondrial genome sequences. BMC Genomics. 2011; 12(1): 392. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRisse J, Thomson M, Patrick S, et al.: A single chromosome assembly of Bacteroides fragilis strain BE1 from Illumina and MinION nanopore sequencing data. Gigascience. 2015; 4(1): 60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIstace B, Friedrich A, d’Agata L, et al.: De novo assembly and population genomic survey of natural yeast isolates with the Oxford Nanopore MinION sequencer. bioRxiv. 2016; 066613. Publisher Full Text\n\nDavis AM, Iovinella M, James S, et al.: Using MinION nanopore sequencing to generate a de novo eukaryotic draft genome: preliminary physiological and genomic description of the extremophilic red alga Galdieria sulphuraria strain SAG 107.79. bioRxiv. 2016. 
Publisher Full Text\n\nHu M, Gasser RB: Mitochondrial genomes of parasitic nematodes--progress and perspectives. Trends Parasitol. 2006; 22(2): 78–84. PubMed Abstract | Publisher Full Text\n\nHunt VL, Tsai IJ, Coghlan A, et al.: The genomic basis of parasitism in the Strongyloides clade of nematodes. Nat Genet. 2016; 48(3): 299–307. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHahn C, Bachmann L, Chevreux B: Reconstructing mitochondrial genomes directly from genomic next-generation sequencing reads--a baiting and iterative mapping approach. Nucleic Acids Res. 2013; 41(13): e129. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLoose M, Malla S, Stout M: Real-time selective sequencing using nanopore technology. Nat Methods. 2016; 13(9): 751–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStoiber MH, Quick J, Egan R, et al.: De novo identification of DNA modifications enabled by genome-guided nanopore signal processing. bioRxiv. 2016. Publisher Full Text\n\nGhosh S, Singh KK, Sengupta S, et al.: Mitoepigenetics: the different shades of grey. Mitochondrion. 2015; 25: 60–66. PubMed Abstract | Publisher Full Text\n\nSimpson JT, Workman R, Zuzarte PC, et al.: Detecting DNA methylation using the Oxford Nanopore Technologies MinION sequencer. bioRxiv. 2016; 047142. Publisher Full Text\n\nEccles D: Bioinformatics scripts: Initial citable release. Zenodo. 2016. Data Source" }
[ { "id": "19519", "date": "20 Feb 2017", "name": "Matthias Bernt", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper describes the sequencing of the mitochondrial genome of the Nippostrangylus brasiliensis with the novel Nanopore sequencing technique. To the best of my knowledge this seems to be one of the first mitogenomes that have been sequenced with this technology. The annotation of the genome and its use for phylogeny and taxonomic identification have been discussed.\nAnother group has sequenced the genome (including the mitogenome) has been sequenced using another NGS strategy. While this seem unfortunate its actually good for this study otherwise no reference data would have been available for comparison and error correction. I'm missing an analysis of the error rates of the sequencing without the correction that has used the read data from the other study. I'm wondering if the combination of data from MiniON sequencing and short read sequencing strategies might be a good general strategy?\nThe paper is well written and needs only a few corrections and additions. Details are given below.\n\nAbstract: =========\nThe term \"electrical consensus sequence\" might be puzzling for uninformed readers.\nIntroduction: =============\n\"L3\" is also difficult to understand for non experts. 
Maybe add 'stage'?\n\"highquality\" missing space\nMinION sequencing =================\n\"R7.3\" Can you explain what this means?\n\"89% pores\" is unclear to me.\nWhat are \"2D reads\"?\nScientific justification ========================\n\"strict maternal inheritance\": nothing in biology is strict. Check for paternal leakage or doubly-uniparental inheritance.\nThe term \"read until\" methodology is unclear.\nDNA extraction and library preparation ======================================\nExplain the abbreviation PBS\nError correction and circularisation ====================================\nIt needs to be explained what the custom script is doing.\n\"Repeated sections of the linear contig were merged... \" What happens with true repeats?\nSince not all readers might know the color chartreuse I would suggest ordering the colors as in the legend.\nMitochondrial genome annotation ===============================\nI'm wondering why automatic methods for genome annotation have been ignored. Not saying that the applied approach is wrong.\nWhen you use cmscan you need to state the used model as well.\n\"tRNA... codon sequences\" Do you mean anticodon?\nHow about non-canonical start codons? How do you define \"plausible\" in-frame stop?\nFor the truncated tRNAs there are examples known for Enoplea: see http://dx.doi.org/10.4161/rna.21630 and http://dx.doi.org/10.1016/j.biochi.2013.07.034\nPhylogenetic Analyses =====================\nReferences for RAxML and trimAL are missing.\nEvent Mapping =============\n\"Event information\" Specify what an event is.\n\"per-group\" and later on \"signal groups\" You should reformulate this. Currently it's a bit confusing.\nWhy pentamer?\nWhat is an ideal signal trace?\nRaw Signal Mapping ==================\nHas GraphMap been referenced?\nDiscussion ==========\nAre you really sure that the sequence is \"error free\"? 
At the end of the paper you write that it's of \"high enough quality...\".", "responses": [ { "c_id": "2637", "date": "12 Apr 2017", "name": "David Eccles", "role": "Author Response", "response": "Thank you very much for your review of our paper on mitochondrial genome sequencing with the MinION sequencer. We are currently working on updating the paper as per your report (and the report of Christian Rödelsperger), and will deliver a full response once the next revision of the paper is ready." } ] }, { "id": "21560", "date": "11 Apr 2017", "name": "Christian Rödelsperger", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe manuscript by Chandler et al describes the sequencing, assembly and annotation of the Nippostrongylus brasiliensis mitochondrial genome using Nanopore sequencing technology. The comparison of the resulting assembly with other N. brasiliensis data from the parasite sequencing initiative of the Sanger Institute and also with mitochondrial genomes from other nematodes supports that the produced assembly is of high quality.\n\nIn general, the structure of the article is a bit unusual. Methods and Results sections are combined, each part has multiple subsections that are not really connected. Some parts of the paper deal with the mitochondrial genome of N. brasiliensis, while other parts focus on very specific aspects of Nanopore sequencing. I would recommend concentrating on the mitochondrial genome of N. 
brasiliensis and keep the nanopore-specific questions for a separate methodological paper.\nSection: Introduction The Introduction basically describes the lifecycle of N. brasiliensis and the mode of infection. The authors might consider writing a more general introduction about nematodes, parasites, .. that it is important to study these parasites to develop treatments. In addition, there are multiple related parasites that are later part of the phylogenetic analysis. It would be good to give some information about those as well, e.g. what are their hosts?\nSection: Current reference genome Please provide the Genbank entry for the NCBI reference genome or provide the assembly that has been used for this study as supplemental data. Otherwise, it will be hard to reproduce the results.\nSection: Scientific justification please explain what a \"read until” methodology is and provide some reference for the use of ONT MinION in studies of infectious disease outbreak.\nDoes the N. brasiliensis isolate that was used for sequencing have a strain ID? If yes, please specify and at least register a biosample for it and give the accession number. Was it the same isolate that was used for the NCBI reference genome?\nSection: Whole-genome assembly with Canu How much sequencing data was obtained? Please provide some more details about the assembly results. How many contigs, and what total size?\nFor readers that would like to use Nanopore technology to sequence their genomes it would be interesting to compare the quality of the mitochondrial genome with nuclear contigs. I guess that the lower coverage of nuclear contigs should also result in a higher number of mismatches with regard to the reference genome. A major finding of the paper could be that based on current nanopore technology, it only makes sense to do the multicopy mitochondrial genome. 
Such a statement could help people to plan their projects.\n\nSection: Error correction and circularisation How many sites had to be corrected? Error correction only makes sense if the WTSI data is from the same isolate. Please clarify if this is the case. If it is the same isolate, where do the 2% mismatches in the \"Whole-genome assembly with Canu\" section come from?\nSection: Mitochondrial genome annotation \"The amino acid associated with each tRNA was identified using BWA-MEM to map annotated tRNA sequences from Oesophagostomum columbianum, N. americanus, Strongylus vulgaris, and A. duodenale.\" Using BWA-MEM to annotate tRNAs from other species sounds unusual. Do you have a reference where the performance of this methodology has ever been evaluated?\nSection: Phylogenetic analyses Please provide more information about the alignment, how many sites? What amount of missing data?\n\nPlease provide references for what is called \"the classical morphological and global molecular phylogenies\"\nSection: Read mapping / Event mapping / Raw signal mapping (Table 3, Fig 4 and 5) These sections seem to examine very specific aspects of the Nanopore sequencing technology and do not add any additional insights for the presented mitochondrial genome. I also have problems in understanding what kind of questions are asked. It seems to me as if the authors try to examine whether Nanopore data has a preference for the template or complement strand or whether there is a bias for coding or non-coding sequences. How well the sequencing signal corresponds to the basecalls in the final assembly and what features correlate with variation in sequencing signals. The presented results are not conclusive (no statistical tests have been done to assess the significance of the results) and are not really related to the rest of the manuscript. I would recommend using this and other comparable data for a separate, more methodological paper. 
One additional feature that could be tested would be how differences between raw and ideal event current, and sequencing coverage, depend on GC content.\nMinor comments\nSection: DNA extraction and library preparation The first paragraph should probably be labelled \"Worm culturing\" or something else. It has nothing to do with DNA extraction or library preparation.\nSection: MinION sequencing \"sequenced at 60 bases per second with a yield of about 200 Mb\": does that mean per sequencing run?\nThis section sounds a bit like a promotion of MinION sequencing. I would recommend reducing it to only the parts that are relevant for the current paper.\nhigh through-put sequencing -> high-throughput\nI wonder why the title has to include the information that R9 signal has been used. Probably most readers have heard about Nanopore sequencing but do not have a clue what R9 signal is. I would recommend putting this detailed information into the methods section but removing it from the title.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes
} ] }, { "id": "21379", "date": "12 Apr 2017", "name": "Jianbin Wang", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript, Chandler et al. described in detail how they used the Nanopore sequencing technology to assemble the mitochondrial genome of Nippostrongylus brasiliensis. They also annotated the mitochondrial genome and did phylogenetic analysis among a selected group of nematodes. In addition, they characterized the Nanopore sequencing features for this genome. Overall, the authors have demonstrated that they can produce the complete mitochondrial genome from their Nanopore sequencing dataset.\n\nThe authors were able to recover the mitochondrial genome from a genomic DNA library due to the much higher copy number (often hundreds or thousands of times) of the mitochondrial DNA when compared to the nuclear genome. This approach has also been extensively used to recover mitochondrial and chloroplast genomes in whole genome shotgun libraries. In principle, it should work for any type of sequencing technology. The Nanopore sequencing technology is relatively new and is still fast evolving. In this case the technology does not seem to me to have a clear advantage over the Illumina or other sequencing approaches on mitochondrial genome assembly. In addition, the authors eventually used the Illumina data to do the error correction to make the final assembly. 
Nevertheless, the authors have presented a complete genome assembled from a combination of Nanopore and Illumina data with a full description of how they did this.\n\nNot considering the novelty or significance of the work, I think the mitochondrial genome is properly assembled and annotated. The results are clear and the manuscript is well written.", "responses": [ { "c_id": "2638", "date": "12 Apr 2017", "name": "David Eccles", "role": "Author Response", "response": "Thank you very much for your review of our paper on mitochondrial genome sequencing with the MinION sequencer. We are currently working on updating the paper as per the reports of Christian Rödelsperger and Matthias Bernt, and intend to deliver a full response to them at that time. This paper was intended as a stepping stone for investigating techniques that could be used to assemble a parasite genome from unamplified genomic DNA using the MinION. We discovered that the run yield in this case was not sufficient for assembling the entire N. brasiliensis genome, but being able to assemble a mitochondrial genome as a single contig has given us confidence that the technology is capable of improving on the existing Illumina-derived whole genome assembly. We did not intend to wow the world with this paper; rather, it was an attempt to demonstrate methods and show how easy and quick it can now be to assemble a genome. Thank you for understanding this aspect of our paper. At the time of sequencing, the base-calling software was not sufficiently accurate to generate a reliable sequence at a single base level. Understanding this, we used MinION reads for scaffolding, and Illumina reads (from a different strain) to correct the abundant base call errors. This approach has allowed a relatively cheap and fast assembly of the mitochondrial genome, such that comprehensive phylogenetic analyses can be carried out on the mitochondrial genes. As you have mentioned, the nanopore sequencing technology is evolving fast. 
It is likely the case that updated base-calling software has improved base calling accuracy sufficiently that this approach can be carried out using MinION reads alone. I would like to carry out additional investigations on these data to discover if that is indeed the case, but would rather hold off on that until after we have published our attempts at whole genome assembly. Regardless, the mitochondrial sequences (including raw signal) are available for anyone else to determine themselves whether or not a high-quality MinION-only assembly is possible using re-called (but otherwise identical) nanopore sequence data." } ] } ]
1
https://f1000research.com/articles/6-56
https://f1000research.com/articles/6-53/v1
18 Jan 17
{ "type": "Research Article", "title": "Better than we thought? The diagnostic performance of an influenza point-of-care test in children, a Bayesian re-analysis", "authors": [ "Joseph Lee" ], "abstract": "Background: Point-of-care tests (POCTs) for influenza have been criticised for their diagnostic accuracy, with clinical use limited by low sensitivity. These criticisms are based on diagnostic-accuracy studies that often use the questionable assumption of an infallible gold standard. Bayesian latent class modelling can estimate diagnostic performance without this assumption. Methods: Data extracted from published diagnostic-accuracy studies comparing the QuickVue® influenza A+B POCT to reverse-transcriptase polymerase chain reaction (RT-PCR) in two different populations were re-analysed. Classical and Bayesian latent class methods were applied using the Modelling for Infectious diseases CEntre (MICE) web-based application. Results: Under classical analyses the estimated sensitivity and specificity of the QuickVue® were 66.9% (95% confidence interval (CI) 61.4-71.9) and 97.8% (95% CI 95.7-98.9), respectively. Bayesian latent class models estimated sensitivity of 97.8% (95% credible interval (CrI) 82.1-100) and specificity of 98.5% (95% CrI 96.5-100). Conclusions: Data from studies comparing the QuickVue® point-of-care test to RT-PCR are compatible with better diagnostic performance than previously reported.", "keywords": [ "Bayesian latent class models", "influenza", "diagnostic accuracy", "point-of-care test", "near-patient test", "primary care", "paediatrics" ], "content": "Introduction\n\nInfluenza is an infectious disease of global importance and is a target of many near-patient tests1,2. These tests have been criticised for reported low sensitivity. This relatively poor ability to ‘rule out’ infection has been given as a reason to avoid their use in clinical practice, and instead develop better tests3. 
There are reasons to suspect some diagnostic-accuracy studies of point-of-care tests (POCTs) may have systematically underestimated sensitivity. If this is the case, the diagnostic accuracy of existing tests may be better than previously thought, with implications for clinical practice and test development.\n\nClassic diagnostic-accuracy studies compare the performance of the index (new) test with a reference (pre-existing) test, on samples from the same patients. Although rarely explicitly stated, the reference test is assumed to be an infallible ‘gold standard’. Under this assumption, whenever the index test and the reference test results differ, the index test is assumed to be wrong. This prevents the index test outperforming the reference, and may systematically underestimate test performance. Many diagnostic-accuracy studies of point-of-care tests for influenza have used these classical methods, raising the possibility that their diagnostic performance has been artificially suppressed4.\n\nEstablished techniques for when a ‘gold standard’ is not available include: constructing a reference standard by multiple panels of tests, re-testing discrepant results, and statistical modelling5. Bayesian latent class models are one such statistical technique6,7. Unlike many other methods, they offer an opportunity to retrospectively analyse existing data, provided a test has been compared to the same reference standard in more than one population6. As far as I can tell, this study is the first attempt at Bayesian re-analysis of point-of-care tests for influenza.\n\nThis paper aims to examine the extent to which published estimates of influenza point-of-care test accuracy are constrained by the infallible gold standard assumption, with a view to informing clinical practice and future diagnostic-accuracy studies.\n\n\nMethods\n\nPublished data were re-analysed using Bayesian latent class modelling and classical analysis. 
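The classical analysis is the familiar 2x2 computation under the gold-standard assumption described above. As a minimal sketch (the counts below are purely illustrative, hypothetical values, not the actual cells from Gordon et al. or Harnden et al.):

```python
def classical_accuracy(tp, fp, fn, tn):
    """Classical 2x2 diagnostic accuracy: the reference test is treated
    as an infallible gold standard, so every disagreement is counted
    against the index test."""
    sensitivity = tp / (tp + fn)  # index-positive among reference-positive
    specificity = tn / (tn + fp)  # index-negative among reference-negative
    return sensitivity, specificity

# Hypothetical pooled counts (index test vs. reference test)
sens, spec = classical_accuracy(tp=300, fp=9, fn=148, tn=420)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

Because every index-positive/reference-negative sample lands in `fp`, the index test can never appear to outperform the reference under this scheme.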
Data were extracted from two studies8,9 comparing the same reference and index tests (reverse-transcriptase polymerase chain reaction (RT-PCR) vs. QuickVue® influenza A+B), in two separate primary care populations.\n\nAnalyses were performed using the free online application Modelling for Infectious diseases CEntre (MICE; http://mice.tropmedres.ac/home.aspx), which has been described elsewhere, and runs parallel analyses of Bayesian latent class models and classical frequentist statistics for diagnostic test accuracy7. Data are input into MICE via a simple online portal, and results are stored online or emailed to the user.\n\nMICE employs Markov Chain Monte Carlo (MCMC) simulations. These use the data provided to estimate all unknown parameters: the specificity and sensitivity of both reference and index tests, and the prevalence in the study population(s). The predicted combinations of test results are compared to the actual observed data, and the process is iterated, ideally until the estimates converge on the best-fitting values for specificity, sensitivity and prevalence. MICE presents these results in the form of a table, with further graphs of the iterated estimates to allow the user to check convergence of MCMC chains, and Bayesian p values to allow the fit of the final model to the observed data to be assessed.\n\nFor this study, the ‘two tests in one population’ model was selected and default values were used. Under the default settings, non-informative priors (beta distribution 0.5, 0.5) are used to initiate the analysis, with the specificity of both tests constrained to above 40%. MCMC simulation used default initial values for diagnostic accuracy: (90% and 30% for prevalence, 90% and 70% for sensitivity, 90% and 99% for specificity). 
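For intuition, the cell probabilities underlying the ‘two tests in one population’ latent class model can be written down directly. The sketch below is an illustrative reimplementation (not the MICE code), with parameter values borrowed from the Bayesian point estimates reported in this paper:

```python
from itertools import product

def cell_probs(prev, se, sp):
    """P(result pattern) for two conditionally independent tests in one
    population under the latent class model. se and sp are (test1, test2)
    tuples; a pattern (r1, r2) uses 1 for positive, 0 for negative."""
    probs = {}
    for pattern in product((1, 0), repeat=2):
        p_diseased = prev       # probability of true infection...
        p_healthy = 1 - prev    # ...and of no infection
        for r, s_e, s_p in zip(pattern, se, sp):
            p_diseased *= s_e if r else (1 - s_e)
            p_healthy *= (1 - s_p) if r else s_p
        probs[pattern] = p_diseased + p_healthy
    return probs

# RT-PCR and QuickVue® Bayesian point estimates, Nicaraguan population
probs = cell_probs(prev=0.34, se=(0.988, 0.978), sp=(0.801, 0.985))
assert abs(sum(probs.values()) - 1.0) < 1e-9  # the four patterns are exhaustive
```

MCMC then searches the parameter space for values whose predicted pattern probabilities best match the observed 2x2 counts.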
The analysis ran for 5000 iterations of pre-analysis adjustment (burn in), and 20,000 iterations.\n\n\nResults\n\nData were extracted from two studies comparing a QuickVue® point-of-care test to RT-PCR in children with influenza-like illness. Gordon et al8 studied 989 children in Nicaragua; Harnden et al9 included 157 children in England. Patient characteristics and study procedures were similar, with a low risk of bias (Table 1).\n\nMCMC chains converged for all estimates. Model fit, assessed by Bayesian p value, was close between observed and expected values, with the exception of cases positive by RT-PCR, but negative by point-of-care test in the English study: 26 were predicted and 34 observed (Bayesian p value 0.081; a value close to 0.5 indicates a good fit).\n\nIn both populations, estimated prevalence was lower with Bayesian analysis: 34.0% (95% credible interval (CrI) 29.6-40.3) vs. 45.3% (95% CI 41.2-49.5) in Nicaragua and 18.6% (95% CrI 12.7-27.6) vs. 38.9% (95% confidence interval (CI) 31.3-47.0) in England (Table 2).\n\nRT-PCR performance was assumed to be 100% under the classical model, and estimated by Bayesian modelling. Bayesian sensitivity and negative predictive values were close to the assumed values at 98.8% (95% CrI 94.3-100) and 99.3% (95% CrI 96.7-100), but specificity 80.1% (95% CrI 75.9-87.0) and positive predictive value 68.4% (95% CrI 62.0-81.0) were reduced (Table 2).\n\nThe performance estimates for the QuickVue® point-of-care test were markedly different under Bayesian assumptions. Sensitivity increased from 66.9% (95% CI 61.4-71.9) to 97.8% (95% CrI 82.1-100). Accordingly, the estimates for negative predictive value also increased, from 79.0% (95% CI 75.2-82.4) to 99.0% (95% CrI 90.7-100). Specificity was more similar between models (Classical 97.8%; 95% CI 95.7-98.9 vs. Bayesian 98.5%; 95% CrI 96.5-100), as was estimated positive predictive value (Classical 96.0%; 95% CI 92.3-98.0 vs. 
Bayesian 96.6%; 95% CrI 92.0-100) (Table 2).\n\n\nDiscussion\n\nThe classical results for QuickVue® presented here are typical. A systematic review of all point-of-care tests for influenza reported overall specificity of 98.2% (CI, 97.5% to 98.7%), but sensitivity only 62.3% (95% CI, 57.9% to 66.6%)2. In contrast, Bayesian analysis estimated sensitivity of 97.8% (95% CrI 82.1-100), with a negative predictive value of 99.0% (95% CrI 90.7-100), suggesting a test of clinical importance, where there is little room for improvement in the ability to ‘rule out’ infection, apparently answering one of the major criticisms of point-of-care tests3.\n\nThe findings suggest false positives by the ‘infallible’ RT-PCR reference test. RT-PCR multiplies nucleic acids exponentially, making it both highly sensitive and vulnerable to false positives. Even the smallest amount of contamination can lead to a false-positive result. This is well recognised, so laboratories often use multiple negative controls10. Gordon et al. did not mention negative controls; Harnden et al. used water.\n\nA weakness is that the original data were not collected specifically for this analysis; there may therefore be differences in study conduct between Gordon et al. and Harnden et al. other than the populations. Despite this, the studies appear to be remarkably similar. The imperfect fit of the data to an element of the Bayesian model should be balanced against classical modelling, where the fit of the data to the ‘perfect’ reference standard is rarely acknowledged, let alone assessed.\n\nOverall, the findings are consistent with higher sensitivity than previously reported, and this underestimation can be attributed to the use of RT-PCR as a ‘gold standard’. 
These findings have implications for clinical practice, test development, and diagnostic-accuracy studies.\n\n\nData availability\n\nData used in this analysis are from the articles ‘Performance of an influenza rapid test in children in a primary healthcare setting in Nicaragua’8 by Gordon et al. (available at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007907) and ‘Near patient testing for influenza in children in primary care: comparison with laboratory test’9 by Harnden et al. (available at http://www.bmj.com/content/326/7387/480).", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nJJL is a Career Progression Fellow funded by the UK National Institute for Health Research’s School for Primary Care Research ( https://www.spcr.nihr.ac.uk/).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nHayward AC, Fragaszy EB, Bermingham A, et al.: Comparative community burden and severity of seasonal and pandemic influenza: results of the Flu Watch cohort study. Lancet Respir Med. 2014; 2(6): 445–54. PubMed Abstract | Publisher Full Text\n\nChartrand C, Leeflang MM, Minion J, et al.: Accuracy of rapid influenza diagnostic tests: a meta-analysis. Ann Intern Med. 2012; 156(7): 500–11. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: WHO Public Health Research Agenda for Influenza. Public Health. 2009; 1–18. Reference Source\n\nPetrozzino JJ, Smith C, Atkinson MJ: Rapid diagnostic testing for seasonal influenza: an evidence-based review and comparison with unaided clinical diagnosis. J Emerg Med. 2010; 39(4): 476–490.e1. PubMed Abstract | Publisher Full Text\n\nRutjes AW, Reitsma JB, Coomarasamy A, et al.: Evaluation of diagnostic tests when there is no gold standard. A review of methods. Health Technol Assess. 2007; 11(50): iii, ix–51. 
PubMed Abstract | Publisher Full Text\n\nLimmathurotsakul D, Turner EL, Wuthiekanun V, et al.: Fool’s gold: Why imperfect reference tests are undermining the evaluation of novel diagnostics: a reevaluation of 5 diagnostic tests for leptospirosis. Clin Infect Dis. 2012; 55(3): 322–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim C, Wannapinij P, White L, et al.: Using a web-based application to define the accuracy of diagnostic tests when the gold standard is imperfect. PLoS One. 2013; 8(11): e79489. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGordon A, Videa E, Saborio S, et al.: Performance of an influenza rapid test in children in a primary healthcare setting in Nicaragua. PLoS One. 2009; 4(11): e7907. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarnden A, Brueggemann A, Shepperd S, et al.: Near patient testing for influenza in children in primary care: comparison with laboratory test. BMJ. 2003; 326(7387): 480. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLion T: Current recommendations for positive controls in RT-PCR assays. Leukemia. 2001; 15(7): 1033–7. PubMed Abstract | Publisher Full Text" }
[ { "id": "20109", "date": "10 Feb 2017", "name": "Nicolas Tremblay", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nIn this report, Lee uses a Bayesian latent class model to estimate the specificity and sensitivity of the QuickVue point-of-care test for influenza A & B. The author concludes that, based on his retrospective analysis of two studies (Gordon et al., 2009; Harnden et al., 2003), the sensitivity of the POCT is much higher than expected, in major part due to the gold standard used for comparison.\n\nMajor Issues\n1. The introduction is an oversimplification of the actual knowledge and matches the style of an editorial much more closely than an actual overview of the field. Many key points are not addressed:\n\nThe introduction does not discuss the most likely causes of heterogeneity in the sensitivity of influenza POCTs (patient age, duration of symptoms, type of specimen, season of sampling, etc.)\nThe introduction does not highlight that a positive test is able to ''rule in'' an influenza infection, which is of clinical significance for therapy initiation, infection control, reduction of ancillary tests, etc.\nThe introduction does not report any information on the gold standard test for influenza diagnostics with regard to advantages, limitations and turnaround.\n
The usefulness of the approach is not well anchored and raises concerns about the clinical or biological significance or usefulness of the anticipated results.\nThe rationale for the study is not well established by the information provided and, as such, it is difficult for the reader to understand the logical flow of ideas.\nMany key references are missing.\n2. The methods section is missing key elements\nThere is no information about the data that were extracted from the two selected studies.\nThere is no information about the methods of inclusion of the two selected studies.\nThere is no information or reference regarding the method used to assess the risk of bias of the two selected studies, such as QUADAS-2 or another tool.\n3. The discussion is missing several key elements\nThe discussion is missing references for many of the statements made in this section. As an example, ''The classical results for QuickVue presented here are typical''. The statement should be backed up by a reference and a comparison, such as a meta-analysis of POCTs.\nThere is no discussion about other gold standard tests and how the POCT performs against them.\nThe first paragraph makes a cookie-cutter conclusion that is well outside the scope of the present study.\nThe second paragraph does not discuss the limitations of a clinical RT-PCR appropriately. The contamination of a diagnostic sample, as the only example used by the author, is far from reflecting why RT-PCR might be too sensitive, with regard to clinical significance and benchmarking, as a gold standard test for influenza testing.\nThe author does not report or discuss the limitations of the Gordon et al. and Harnden et al. studies and concludes, without substance, that both studies are remarkably similar.\nThe last paragraph is an oversimplification of the results presented and does not account for various limitations of the study.\nMinor point\nThe 2x2 tables of the data extracted from the two selected studies are not provided. 
This makes it difficult to replicate the results. As an example, Gordon et al. have 2 stratified data sets available based on clinical presentation (Table 2; n=1157 and n=578). It is unclear which data were used.\nConclusions and Recommendations\nIt is the opinion of the referee that, while the paper raises an interesting idea and a novel approach, it is of limited significance because it presents major drawbacks and pitfalls. We suggest that the author review the comments made above to improve the study design and the overall presentation and content of the paper.", "responses": [] }, { "id": "20688", "date": "06 Mar 2017", "name": "Benjamin J Cowling", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis is an interesting re-analysis of published data on rapid test sensitivity, making the case that the rapid test might be more sensitive than previously thought, because of inaccuracy of the gold standard.\nMajor comment\nI rated the article \"approved with reservations\" because I believe the current description of methodology is inadequate. It is not sufficient to refer to an online application for further details. The methods section of this article should provide technical details, for example the likelihood function, and any other information that would be needed for independent replication of the results using a different software package. This information should be sufficient to reveal assumptions that have been made in the model. 
The basic idea of a Bayesian model for two tests is not complex and can easily be programmed; my point is that the author of this article is responsible for explaining what he has done, and it is not satisfactory to say that the data were plugged into an online tool.\nOther comments\nI do not oppose the idea of questioning the sensitivity of PCR. There are other studies using serology and PCR which show that many influenza virus infections occur (based on observed rises in antibody titers) but are not detectable by RT-PCR. However, I am not convinced by this work alone that the reason for the apparent low rapid test sensitivity is that PCR is an imperfect gold standard. I think the author should propose additional studies that could be done to confirm whether PCR is indeed an imperfect gold standard for the rapid test. I would suggest the author be more cautious in his enthusiasm for his alternative explanation, in the absence of stronger evidence for that explanation.\nI am not sure if the author is aware that other studies of rapid test sensitivity have examined the viral load in respiratory swabs, and found that rapid test sensitivity tends to be lower in specimens with lower viral load; specifically, the detection threshold seems to be around 1-2 logs higher for rapid tests than for PCR1. I would view this observation as reasonably good evidence for poorer sensitivity of rapid tests compared to PCR. Wouldn't it be useful to do further analysis of specimens that are positive by the rapid test but negative by PCR? What kind of analysis is suggested?", "responses": [] } ]
1
https://f1000research.com/articles/6-53
https://f1000research.com/articles/6-52/v1
18 Jan 17
{ "type": "Software Tool Article", "title": "The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows", "authors": [ "Brian D. O'Connor", "Denis Yuen", "Vincent Chung", "Andrew G. Duncan", "Xiang Kun Liu", "Janice Patricia", "Benedict Paten", "Lincoln Stein", "Vincent Ferretti", "Denis Yuen", "Vincent Chung", "Andrew G. Duncan", "Xiang Kun Liu", "Janice Patricia", "Benedict Paten", "Lincoln Stein", "Vincent Ferretti" ], "abstract": "As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. 
As a result, we created the Dockstore (https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH).", "keywords": [ "Docker", "containers", "genomics", "bioinformatics", "cloud", "big data" ], "content": "Introduction\n\nThe Dockstore project has its roots in the large-scale ICGC PanCancer Analysis of Whole Genomes (PCAWG; https://dcc.icgc.org/pcawg) cancer genomics project, which necessitated the creation of highly portable and self-contained computational tools1. PCAWG’s initial core goal was to consistently analyze approximately 2,800 cancer donors (~5,800 whole genomes), an effort that culminated in the re-alignment and somatic variant calling for these donors. This effort used considerable computational resources. At its peak, 14 cloud and HPC environments were utilized with over 16,000 cores in total, resulting in a cumulative dataset of nearly 1 Petabyte in size.\n\nOur initial approach for PCAWG was to utilize cloud Application Program Interfaces (APIs) to build computational worker nodes from scratch, rather than use the Docker virtualization technology2. In this approach, we used API calls to create virtual machines (VMs) and to install software on them using Linux Bash setup scripts and, later, Ansible playbooks (https://www.ansible.com). We found the use of cloud APIs and scripts to be a cumbersome and error-prone way to move algorithms to the data. Over time, dependencies and software versions would change, resulting in frequent failures of the setup scripts, or mysterious downstream analytical failures. 
Docker, a relatively new lightweight virtualization technology, mitigated these issues by providing a mechanism to encapsulate tools and their dependencies in a highly portable way (https://www.docker.com). This meant PCAWG workflow authors could create and set up their environments within a Docker image, including tools, library dependencies, reference files, and so forth, and then copy that image from cloud to cloud for analysis of data in place. This allowed us to very quickly create cloud-based VMs, install Docker, pull the current version of the Docker-based workflows, and be ready to perform analysis within a few minutes, highly simplifying our deployment strategy. The consistent, portable execution environment provided within a Docker container meant we could avoid issues caused by differences between cloud environments. Furthermore, the inherent portability of Docker images allowed us to leverage a multitude of computational environments, including non-cloud environments that were previously inaccessible to the project.\n\nGiven our positive experience using Docker to distribute analytical tools, we began exploring a generalized method for other projects to leverage the same approach. Our creation, the Dockstore (https://dockstore.org), generalizes the PCAWG approach in an easy-to-use web application that any tool developer or tool end user can utilize. The concept extends popular services used in Information Technology (IT) fields, in particular commercial sites, such as Quay.io (https://quay.io) and DockerHub (https://hub.docker.com), which provide hosted Docker registries where anyone can upload images containing tools or services.\n\nDockstore’s key innovation is its bridging of Docker image registries with a new, standardized approach to describing tools inside images. 
Up to this point, tools inside Docker images have had no standardized way to document how to call them, leading to the convention of using human-readable README files to describe tool invocation. This has made automation and integration among Docker images and execution systems cumbersome given the lack of machine-readable tool definitions. To solve this, we used the Common Workflow Language (CWL; https://dx.doi.org/10.6084/m9.figshare.3115156.v2) or Workflow Description Language (WDL; https://github.com/broadinstitute/wdl) tool definition syntaxes to define the commands available inside a Docker image, how to parameterize them, and their inputs/outputs and resource requirements. Dockstore also supports linking multiple tools together using CWL or WDL workflows; these multi-image workflows can then be registered on the site and used as building blocks to create more complex systems. The result is that Dockstore-based tools and workflows can be programmatically addressed and executed, enabling a new level of modularity, automation and integration.\n\nIn addition to providing a mechanism to bring together Docker-based tools and their corresponding machine-readable descriptors, the Dockstore provides a compelling and useful web-based interface, an instance of which is hosted at https://dockstore.org. This allows it to serve two communities: developers who want to register and share their tools through Dockstore, and users wishing to find genomics tools packaged in Docker and ready to execute in their own systems (Figure 1). The Dockstore web application provides a full suite of capabilities for these two types of users, including registering new Docker images and descriptors, searching for tools others have registered, and assisting users in executing tools on any platform that supports Docker.
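The descriptor idea described above is easiest to see in miniature. The following is a sketch of a CWL tool descriptor for a hypothetical read-counting tool; the Docker image name, input/output identifiers, and output file name are invented for illustration and do not correspond to an actual Dockstore entry (the real BAMStats descriptor is provided in Supplementary File 1).

```yaml
# Minimal CWL CommandLineTool sketch (illustrative; image and identifiers are hypothetical)
cwlVersion: v1.0
class: CommandLineTool
label: Counts reads in a BAM file
requirements:
  DockerRequirement:
    dockerPull: "quay.io/example-org/read-counter:1.0"
baseCommand: ["count-reads"]
inputs:
  bam_input:
    type: File
    inputBinding:
      position: 1
outputs:
  counts_report:
    type: File
    outputBinding:
      glob: "counts.txt"
```

A descriptor like this makes the tool machine-addressable: an execution engine can read it to learn which image to pull, how to construct the command line, and which output files to collect.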
The Dockstore also provides a command line interface for power users who want to script and automate their use of Dockstore.\n\nDevelopers can use Dockstore to register Docker images built by, or uploaded to, Quay.io and DockerHub with CWL/WDL machine- and human-readable descriptors from GitHub or Bitbucket. Users can then query and find tools of interest, parameterize them, and run them at a small scale locally or at large scale on commercial or open source execution engines supporting Docker and CWL/WDL. Execution takes place on cloud or HPC environments supported by the execution engine of choice.\n\nFinally, Dockstore is supported by the Global Alliance for Genomics and Health (GA4GH) organization3. The GA4GH’s mission is to accelerate progress in human health through establishing common frameworks for sharing genomics data and tools. The GA4GH Data Working Group focuses on data representation, storage, and analysis of genomic data. It provides an emerging standard web service API for accessing Docker-based tools and workflows (https://github.com/ga4gh/tool-registry-schemas). This Tool Registry API is being developed as part of a larger effort by the GA4GH Containers and Workflows task team to create a container registry API standard. Its implementation in Dockstore, and other sites, is a key goal of the standards effort and will allow for federated searches across tool registries that implement the GA4GH API.\n\n\nMethods\n\nThe Dockstore implementation can be divided into four facets: a tool and workflow registration process aimed at authors, a RESTful web API used to power the site, a web application that uses this API, and, finally, a command line utility that interacts with, and launches, tools and workflows present on Dockstore.\n\nTool registration process. The Dockstore does not itself act as a Docker image host or provide services to build Docker images automatically from source. 
These services are already provided reliably and at scale by sites, such as Quay.io and DockerHub. Instead, Dockstore provides a registry to link Docker-based tools hosted on Quay.io or DockerHub with tool metadata described in CWL or WDL and checked into a source control repository at GitHub or Bitbucket. It also acts as a workflow registry for CWL or WDL-based workflow definitions hosted on GitHub or Bitbucket. CWL and WDL provide the emerging standard for describing tools and their parameterizations (Supplementary File 1) along with overall computational workflows that string together multiple tools. This allows Dockstore to be lightweight and focus on the utility of presenting tools and workflows to the community through a searchable web application.\n\nFor developers adding tools to Dockstore, we recommend a method in which Docker-based tools are built automatically from public source repositories to maximize transparency and utility to the community. In our preferred approach, Quay.io is used to build the Docker image while GitHub or Bitbucket is used to store the Dockerfile and WDL/CWL descriptor (Figure 2A). This approach provides a considerable degree of automation for the developer, and encourages practices that result in a clear provenance for the tools during and after development. For example, this approach encourages developers to check in a Dockerfile, the key script used to make reproducible Docker images; the Dockerfile then provides a resource for other users who wish to extend the tool. Multiple releases of a Docker-based tool and its descriptors are supported and clearly associated with each other; the Dockstore web API allows tool developers to register one or more releases of a particular tool with a simple click in the web application. The Dockstore web API gathers descriptors and Dockerfiles via delegated OAuth authorization4. 
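To illustrate the kind of Dockerfile a developer might check into source control under this recommended process, here is a minimal sketch; the base image, installed packages, and download URL are placeholders, not taken from a real Dockstore tool.

```dockerfile
# Illustrative Dockerfile for a containerized analysis tool.
# Base image, packages, and the tool download URL are placeholders.
FROM ubuntu:14.04

# Install runtime dependencies for a (hypothetical) Java-based tool
RUN apt-get update && apt-get install -y \
    openjdk-7-jre-headless \
    wget \
 && rm -rf /var/lib/apt/lists/*

# Fetch the hypothetical tool jar into the image
RUN wget -q -O /opt/tool.jar https://example.org/downloads/tool.jar

ENTRYPOINT ["java", "-jar", "/opt/tool.jar"]
```

Checking a file like this in alongside the CWL/WDL descriptor is what gives other users the provenance and build reproducibility discussed above.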
Similarly, the command line tool supports a highly streamlined registration process for Docker images that are built following this automated process. While it is possible to use DockerHub in place of Quay.io, the lack of a public DockerHub API makes integration into Dockstore less streamlined and introduces manual steps.\n\nFor tools, users can use the fully automated approach (A), in which Docker images are built using Quay.io and the original source descriptors and Dockerfile are on Bitbucket or GitHub. Alternatively, they can register pre-built Docker images (C) that have been manually pushed to Quay.io or DockerHub. The former approach results in greater tool transparency and build reproducibility. Workflows in CWL/WDL do not require an image build process and can be directly registered from source control on Bitbucket or GitHub (B).\n\nIn addition to the recommended automated build process, Dockstore offers alternative manual processes that give developers greater control over how their tools are registered. For example, Dockstore supports tools built outside of the normal DockerHub/Quay.io automated build process (Figure 2C). This allows developers to build Docker-based tools themselves, possibly for performance reasons, and then push the finished image to DockerHub or Quay.io for inclusion in Dockstore. The drawback of this for developers is that the series of manual steps cannot necessarily be easily reproduced, while for end users these approaches can obscure how the Docker-based tool image was created. For these reasons we recommend the fully automated approach to developers sharing tools on Dockstore.\n\nWorkflow registration process. Workflows are not directly associated with Docker images. Instead, they reference multiple tools (ideally registered using the Dockstore process). For that reason, registering workflows in either CWL or WDL format is simpler, and only requires the workflow document to be checked into source control in Bitbucket or GitHub.
It can then be found and registered in the Dockstore (Figure 2B).\n\nRESTful application programming interface (API). The Dockstore web and command line interfaces are driven by a RESTful web API (DOI: 10.5281/zenodo.154185). This API includes endpoints that conform to the emerging GA4GH Tool Registry API standard (Figure 3), allowing multiple tools to interoperate with Dockstore and other sites that implement the standard. The API, currently in its 1.0.0 release, allows read-only access to list and retrieve details of registered Docker images on the site; for more information, see https://github.com/ga4gh/tool-registry-schemas. The standard defines the JSON schema used to describe a particular tool registration and includes items such as name, description, author information, tool versions, and test data, in addition to endpoints that allow for listing and filtering tools. In addition, the Dockstore API includes extended, non-standard endpoints that are used for additional features implemented on the site, such as user authentication, integration with third-party services like GitHub, and tool labelling.\n\nThese let systems find all tools in a given repository and get details on a particular tool, including versions, descriptors, and the original Dockerfile if available.\n\nWeb application interface. The Dockstore provides a simple-to-use web application that allows developers to register and manage tools and workflows while enabling end users to find and execute them. The site prominently displays search capabilities on the home page along with recently registered tools (Figure 4A). The search capability indexes names, descriptions, and versions and presents a list of matching tools. Once a user selects a given tool, the details are displayed, including links out to the Docker hosting service (Quay.io or DockerHub) for tools and the source repository (Bitbucket or GitHub) for tools and workflows (Figure 4C).
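Returning to the Tool Registry API schema items listed above (name, description, author information, tool versions), a registered tool's record might look roughly like the following sketch. The field names and values here are approximations for illustration only, not the normative schema; see https://github.com/ga4gh/tool-registry-schemas for the actual definition.

```json
{
  "id": "quay.io/example-org/read-counter",
  "name": "read-counter",
  "description": "Counts reads in a BAM file",
  "author": "Example Lab",
  "versions": [
    {
      "name": "1.0",
      "descriptor-type": ["CWL"],
      "image": "quay.io/example-org/read-counter:1.0"
    }
  ]
}
```

Because records of this shape are machine-readable, any client that speaks the standard can list, filter, and retrieve tools from Dockstore or from any other registry implementing the same API.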
The site also includes the ability for authors to tag their registered tools with labels that provide additional searchable annotations (Figure 4B). Together these features allow a user to quickly search for and identify tools and workflows that are available in Docker and are ready for execution in a variety of environments. The Dockstore web application also provides social features. Each entry incorporates Disqus (https://disqus.com), a comments system, and links to share entries via various social media sites.\n\n(A) The main page lists the most recent additions to Dockstore and allows for users to search and login. (B) A developer can easily publish their tools in Dockstore after logging in and linking to accounts. (C) Users can see details about each tool, discuss the tool, share with social media, and navigate back to source.\n\nDevelopers wishing to share their tools on Dockstore can log in using GitHub as an identity provider. Upon first login, they are presented with an onboarding wizard that assists in linking third party services that provide source code hosting (in order to host CWL and WDL descriptors) and Docker registries (in order to host Docker images). For source code, GitHub is linked to by default while Bitbucket is also supported. For Docker images, Quay.io is supported (DockerHub linking is not required since an API is not offered). Once linked, the developer is prompted to download and configure the Dockstore command line tool and is presented with an API token to be used with the underlying Dockstore web service. Developers wishing to build on top of Dockstore can use this token to authenticate against the Dockstore API and use it to make secure requests to GitHub, Bitbucket, and Quay.io.\n\nFollowing login through GitHub and the onboarding process to set up linked accounts and obtain the command line and API token, the developer is presented with a listing of the Docker images they have previously built with Quay.io. 
In the recommended build process, we link to the source code repository for the automated build in order to locate tool descriptors. By default the developers’ images are “unpublished” and not publicly visible in Dockstore. Valid images (images that can be linked to a WDL/CWL descriptor) can be toggled to “published”, making them visible to any Dockstore user (Figure 4B). The developer can use this interface to customize the WDL/CWL paths used, hide or show particular Docker image versions, and add labels to the tool. They can also “refresh” the particular Docker tool, causing Dockstore to re-query Quay.io and GitHub/Bitbucket to ensure the latest build image and associated descriptors are present in the system. For Docker images hosted in DockerHub, a more labor-intensive manual registration process is needed, given the current lack of a publicly available DockerHub API. Workflows are registered via a simpler process, since only the path to a CWL or WDL workflow document in GitHub or Bitbucket is required.\n\nCommand line interface. The Dockstore command line utility provides the registration and search functionality offered by the web interface, and additionally provides assistance for file provisioning and local execution of tools and workflows registered within the system. This functionality allows Dockstore users to find workflows and tools of interest and quickly execute them using a completely standardized approach.
Since every tool and workflow in Dockstore is described with CWL or WDL, the local execution of these tools is always done using the same command line and same parameterization process, greatly simplifying the learning curve for using any particular tool or workflow from Dockstore.\n\nLocal execution functionality proceeds through three distinct steps: 1) input files are staged; 2) cwl-runner (for CWL descriptors; https://github.com/common-workflow-language/cwltool) or Cromwell (for WDL descriptors; https://github.com/broadinstitute/cromwell) is called to invoke the tool in Docker or the workflow on the local host; and 3) output files are collected and staged to a final location. The parameterization of the Docker-based tool or workflow is encoded in a JSON document, a template of which can be created with the Dockstore command line. The command line launcher supports file downloads from HTTP/HTTPS, Amazon S3, FTP/SFTP, and local file paths, while file uploads are supported for Amazon S3, FTP/SFTP, and local file paths. The Dockstore command line supports file provisioning, since provisioning of files is beyond the scope of the specifications for CWL and WDL. The ability to execute tools from Dockstore is of particular value for development and user evaluation purposes, and the command line supports a batch processing mode as well. We anticipate that other systems, both open source and commercial, will ultimately enable larger-scale concurrent analysis with Dockstore-registered workflows and tools through a standard API.\n\nDockstore follows best practices for software development, including using source control through GitHub, continuous integration testing with Travis CI (https://travis-ci.org/), testing coverage prediction with Coveralls (https://coveralls.io/), and community engagement with Gitter (https://gitter.im/ga4gh/dockstore).
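To make the JSON parameterization document described above concrete, the short script below builds one for a hypothetical tool. The input/output keys and the CWL-style `File` objects are illustrative assumptions for a made-up tool, not an actual template generated by the Dockstore command line.

```python
import json

def make_params(bam_url: str, out_path: str) -> dict:
    """Build a CWL-style parameterization document for a hypothetical
    read-counting tool: inputs may be remote (HTTP/S3/FTP, matching the
    launcher's provisioning support), outputs name a destination for
    collected files."""
    return {
        "bam_input": {"class": "File", "path": bam_url},
        "counts_report": {"class": "File", "path": out_path},
    }

params = make_params(
    "https://example.org/data/sample.bam",  # hypothetical remote input
    "/tmp/counts.txt",                      # local output destination
)

# Write the document so a launcher could consume it
with open("/tmp/params.json", "w") as fh:
    json.dump(params, fh, indent=2)
```

The resulting file would then be handed to the launcher, for instance with a command of the shape `dockstore tool launch --entry <tool> --json /tmp/params.json`; the exact flag names are an assumption based on the command line utility described above.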
The Dockstore web application, https://dockstore.org, will remain an open and free site for users to register their public tool images and workflows. As an open source project5, we also encourage others to customize and install instances of Dockstore (both the UI and RESTful web API) at their own sites. Modifications to the source should be submitted back to the project via the standard GitHub “pull request” mechanism. We hope sites with sharable content participate in our federated network of GA4GH Tool Registry API-compliant services; see https://github.com/ga4gh/tool-registry-schemas.\n\nSince Dockstore is designed to use Quay.io and DockerHub as a backend, the server resources necessary for running it are modest. We recommend a Linux server or VM with 1–4 cores, 8GB of RAM, and 20GB of available disk space. Dockstore has been successfully installed on Ubuntu 14.04 and, while other distributions are possible, we currently only recommend this one.\n\n\nUse cases\n\nDockstore is a general platform for sharing tools and workflows, so the potential use cases the site supports are quite varied. However, we had three primary use cases in mind as the site was built: developers, individual users, and distributed projects performing large-scale computations (Figure 1).\n\nThe developer use case focuses on providing a standardized, best-practice development process for building portable tools and workflows. Using Dockstore necessitates that a tool or workflow author uses source control, leverages a Docker build/hosting service, and provides a standardized description of how to invoke the tool/workflow. This development process ensures a given tool or workflow is ready for distribution in a transparent and portable way. Standardized descriptor formats (in WDL or CWL) mean that the tool or workflow is self-documenting, easing the documentation burden on developers.
Example Dockerfile, CWL-descriptor, and JSON parameterization files for the BAMStats (http://bamstats.sourceforge.net) tool can be found in the Supplementary materials (Supplementary Files 1–3). As an outcome of registering their tools/workflows on Dockstore, developers can take advantage of the underlying GA4GH Tool Registry API standard. This means a growing number of services can find and launch tools from Dockstore, providing additional motivation for developers to redistribute tools and workflows using the site.\n\nFor individual users, Dockstore is a catalogue of available tools and workflows that all work in a consistent and reliable way. A user can use Dockstore to find tools and workflows of interest to their research and leverage the standard descriptor format, in either WDL or CWL, to provide clear documentation on how to execute the tool/workflow. Furthermore, the inclusion of known-good test JSON documents on Dockstore provides key examples of inputs and expected outputs, something of importance in the bioinformatics community given the variability in file standards (Supplementary File 3). In addition to providing clear usage information and example inputs/outputs, individual users can leverage Dockstore-based tools and workflows in a growing collection of execution environments that understand the GA4GH Tool Registry API standard supported by Dockstore. Users will also be able to find and use tools from other sites in a standardized way as more tool and workflow repositories support this API.\n\nLarge-scale, distributed computational projects are a special form of the developer and user use cases above. Since Dockstore was inspired directly by the lessons learned in the highly-distributed PCAWG project, we feel other large-scale, distributed analysis efforts, such as the upcoming ICGCmed (https://icgcmed.org) project, will be able to benefit from Dockstore infrastructure.
In these projects, Dockstore, or sites supporting the GA4GH Tool Registry standard, provide a standardized way to develop and share portable tools and workflows. Developers and researchers creating analytical tools and workflows for these projects can build, test, and distribute these tools/workflows using Dockstore. This is decoupled from the environments that run the tools and workflows, allowing tool and workflow authors to focus on their scientific content rather than compatibility with execution sites. For those tasked with executing Dockstore-based tools and workflows at scale, their inherent consistency means execution environments shown to run a given Dockstore-based tool or workflow are very likely to be able to run any other Dockstore-based tool or workflow. This separation of concerns, through the consistency provided by Dockstore and the portability provided by Docker and standards like CWL and WDL, means large-scale projects are much more likely to be successful in their distributed computing goals than a model where every tool and workflow needs to be validated across all compute environments used by the distributed project. This is particularly important when environments are changed, added, or removed over the life of the distributed project, or there are a large and dynamic number of tools and workflows being employed, such as in the DREAM challenges (http://dreamchallenges.org/).\n\n\nDiscussion\n\nThe Dockstore is unique in its synthesis of programmatically friendly tool descriptors (WDL or CWL) with Docker images hosted on high-quality commercial services. Together these two features allow tools to be utilized in a variety of automated systems, programmatically discovered, built into larger workflows, and shared with the community.
These features are key to supporting the next generation of large-scale genomics analysis projects, such as ICGCmed, which require a robust mechanism to encapsulate and move algorithms to data, integrate the efforts of multiple developers, and handle change management in a dynamic environment.\n\nIn contrast with generic Docker repositories, such as DockerHub, the Dockstore provides mechanisms to interpret the contents of one or more Docker images, link them together, and execute them on a variety of HPC and cloud environments without modification. Projects like Galaxy Toolshed6 and Bioconda (https://bioconda.github.io) provide methods for describing and linking tools, but do not use Docker to abstract the execution environments. Hence, the Dockstore approach combines the cloud-based flexibility and elasticity of Docker with the modularity of tool repositories like Galaxy Toolshed.\n\nA number of existing projects, such as BioShaDock7, Bioboxes8, and BioDocker (http://biodocker.org), focus on encapsulating bioinformatics tools in Docker images in a way similar to Dockstore. BioDocker encourages the use of bioinformatics tools in Docker images by curating them in a single GitHub repository that collaborators can contribute to. Bioboxes defines guidelines (https://github.com/bioboxes/rfc) for particular types of software, such as assemblers or binning applications, allowing for easy benchmarking and interoperability between tools in bioinformatics pipelines. BioShaDock is the most similar to Dockstore and provides a fully controlled environment to build and publish bioinformatics software. It also hosts Docker images locally. Dockstore, like these existing efforts, encourages the use of Docker as a technology for packaging and distributing bioinformatics tools. However, unlike Bioboxes and BioDocker, Dockstore has a heavy focus on CWL/WDL in order to collect Docker images that can be used as part of larger workflows.
Unlike BioShaDock, Dockstore is a lightweight registry that focuses on deep integration with commercial source code providers and the Quay.io Docker image registry. We believe that the combination of a standardized descriptor for bioinformatics tools and integration with third party services allows for a great deal of flexibility and a robust software development experience, enabling execution of tools in any CWL/WDL-compatible cloud environment. Furthermore, integration with commercial providers allows for a convenient registration experience that mimics popular services focused on the general software development community, such as Coveralls (https://coveralls.io/) and Travis CI (https://travis-ci.org/).\n\nIn the future, it should be possible to leverage multiple open source user interfaces (such as Galaxy) and commercial platforms (such as Seven Bridges Genomics, DNAnexus, DNAstack, and others) to provide a friendly environment for finding, combining, and executing Dockstore-based tools and workflows. To further this goal, the creation of the Tool Registry API standard through the GA4GH will be key for future interoperability between tool registries and the systems that scale the execution of tools they contain. The Dockstore is the first implementation of this emerging standard. We hope that other tool repositories will implement the standard, allowing the creation of a tool sharing network of registries. Multiple sites that have different models of how Docker-based tools should be built, shared, and secured, such as BioShaDock, Bioboxes, and BioDocker (http://biodocker.org), can flourish independently, but benefit from supporting the emerging GA4GH API standard. Such a network stands a good chance of gaining the critical mass to make scientific tool sharing a popular reality.\n\nFuture features of Dockstore will include the support of testing frameworks and execution environments.
The ability to specify test datasets for each tool and workflow will be extended, providing users with “known good” sample inputs for testing and instructional purposes. We will also add support for signed Docker images, providing a mechanism to support “verified” Dockstore entries that are validated to come from trusted sources. This will complement private registry support in Dockstore in order to facilitate sharing Docker-based tools and workflows with a select set of collaborators. A long term evolution of the Dockstore site will include a central registry index, complete with faceted search, for querying across the network of GA4GH-compliant tool registries described previously. Dockstore will also integrate with the related and complementary GA4GH Workflow and Task Execution API standards currently in development, enabling the use of compute resources to run Dockstore-based tools and workflows through standardized APIs. Dockstore’s support of these features, and emerging standards, will support future successors to large scale, distributed analysis projects such as PCAWG. This may include efforts, such as the ICGCmed (https://icgcmed.org/) and future DREAM challenges (http://dreamchallenges.org/), where Dockstore can enable the seamless interchange and execution of software tools across a variety of compute environments.\n\n\nSoftware availability\n\nSoftware available at: https://dockstore.org/\n\nDockstore source code available from the Global Alliance for Genomics and Health (GitHub): https://github.com/ga4gh/dockstore (web UI: https://github.com/ga4gh/dockstore-ui)\n\nArchived source code for Dockstore 1.0 release: https://zenodo.org/record/154185, DOI: 10.5281/zenodo.154185\n\nLicense: Apache 2.0", "appendix": "Author contributions\n\n\n\nBO conceived of and provided functional requirements and implementation guidance. DY provided architectural and software development supervision. VC, AD, XL, JP implemented the software.
BP, VF and LS provided strategic guidance for the project.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors wish to acknowledge the funding support from the Discovery Frontiers: Advancing Big Data Science in Genomics Research program (grant no. RGPGR/448167-2013, ‘The Cancer Genome Collaboratory’), which is jointly funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canadian Institutes of Health Research (CIHR), Genome Canada, and the Canada Foundation for Innovation (CFI), and with in-kind support from the University of Chicago and the Ontario Research Fund of the Ministry of Research and Innovation.\n\nResearch reported in this publication was also supported by the National Human Genome Research Institute of the National Institutes of Health (award no. U54HG007990). Computing resources were contributed by Microsoft through a grant to the UC Santa Cruz Genomics Institute.\n\n\nAcknowledgements\n\nThe authors wish to acknowledge the valuable feedback from the members of the Global Alliance for Genomics and Health (GA4GH). Specifically, the Containers and Workflow Task Team co-leaders Jeff Gentry and Peter Amstutz, and the team’s membership, including Kyle Ellrott who leads the development of the GA4GH Task Execution API standard. We also wish to thank the Big Data to Knowledge (BD2K) initiative, in particular contributors from the Center for Big Data in Translational Genomics, including David Haussler, for their valuable feedback and support.\n\n\nSupplementary material\n\nSupplementary File 1: Zipped file containing the following:\n\nA tool descriptor, in this case the Dockstore.cwl descriptor written for the BAMStats tool on Dockstore (https://dockstore.org/containers/quay.io/collaboratory/dockstore-tool-bamstats).
Descriptors define the key attributes like name, inputs and outputs of a tool, the system requirements, which Docker image to use, authorship information, and information making the construction of the command possible.\n\nA Dockerfile that includes the instructions on how to make a Docker image, in this case, one containing the BAMStats tool.\n\nThe Sample.json file provides sample parameterizations for this tool, including a “known good” input BAM file.\n\n\nReferences\n\nStein LD, Knoppers BM, Campbell P, et al.: Data analysis: create a cloud commons. Nature. 2015; 523(7559): 149–151. PubMed Abstract | Publisher Full Text\n\nMerkel D: Docker: lightweight linux containers for consistent development and deployment. Linux Journal. 2014; 239: 2. Reference Source\n\nLawler M, Siu LL, Rehm HL, et al.: All the World's a Stage: Facilitating Discovery Science and Improved Cancer Care through the Global Alliance for Genomics and Health. Cancer Discov. 2015; 5(11): 1133–1136. PubMed Abstract | Publisher Full Text\n\nLeiba B: OAuth web authorization protocol. IEEE Internet Computing. 2012; 16(1): 74–77. Publisher Full Text\n\nFielding RT: Architectural styles and the design of network-based software architectures. University of California, Irvine. 2000. Reference Source\n\nBlankenberg D, Von Kuster G, Bouvier E, et al.: Dissemination of scientific software with Galaxy ToolShed. Genome Biol. 2014; 15(2): 403. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoreews F, Sallou O, Ménager H, et al.: BioShaDock: a community driven bioinformatics shared Docker-based tools registry [version 1; referees: 2 approved]. F1000Res. 2015; 4: 1443. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBelmann P, Dröge J, Bremges A, et al.: Bioboxes: standardised containers for interchangeable bioinformatics software. Gigascience. 2015; 4: 47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYuen D, Duncan A, Liu V, et al.: ga4gh/dockstore: 1.0. 2016. Publisher Full Text" }
[ { "id": "19471", "date": "01 Feb 2017", "name": "Gaurav Kaushik", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript, the authors describe the motivation, design, architecture, and merit of Dockstore.org, a community-focused utility for sharing Docker-based tools and workflows for the sciences.\nThe authors should be commended for their overview of the significant challenges facing large-scale genomics efforts, such as maintaining consistent, reproducible analyses across environments, as well as the solution they’ve architected. They highlight important considerations that must be addressed in order to accelerate scientific progress and the improvement of human health. The technical description of the ICGC PCAWG project is illuminating for researchers and organizations wanting to organize or participate in large-magnitude informatics projects.\nOverall, we recommend that the manuscript be accepted pending minor revisions. Each revision item is discussed below:\nThe description of Dockstore architecture is thorough and each design decision is justified and informative to the reader. A few additions, however, may benefit audiences which are less conversant in Docker or cloud architecture. For example, though container-based workflow descriptions are becoming increasingly common, many researchers may not yet be familiar with CWL and WDL.
A more detailed description of the container-tool-workflow relationship and the benefit of modularizing workflows into containerized tools (as opposed to having whole workflows in a single container) may be helpful to newcomers.\nWe request that the authors cite the Common Workflow Language and Workflow Description Language as appropriate. For CWL, the appropriate citation is https://dx.doi.org/10.6084/m9.figshare.3115156.v2, as stated on commonwl.org.1 For WDL, we have previously cited their GitHub repository (https://github.com/broadinstitute/wdl) though a more appropriate citation may now exist and could be provided by their development team.\nThe authors mention that cloud APIs and scripts resulted in analytical failures. The manuscript may benefit from brief discussion of any design constraints when using containers and workflows that may introduce similar risks. If there are none or relatively few, please elucidate why such a technological advantage exists to the reader.\nFigure 2 may benefit from streamlining, as there are duplicate images and the discussion items (A-C) are mentioned out of order.\nRegarding the use of GitHub for automated builds and workflow descriptions, the reader may benefit from a small description of best practices in a supplement. For example, how does Dockstore handle tagging of Dockerfiles and how should users make use of them? 
This can be brief, but it may be helpful to describe how good git practices can complement the reproducibility benefits that Dockstore brings.\nOn page 7, “Dream” should be “DREAM”.", "responses": [] }, { "id": "20278", "date": "27 Feb 2017", "name": "Heinz Stockinger", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe article is very well written and discusses the implementation of a valuable tool for the community. The use of Docker is currently very popular, and the combination with CWL/WDL is very good.\nI have a minor comment for the on-line tool and the presented workflows (https://dockstore.org/search-workflows). Currently, there does not seem to be a description for the presented workflows. Example:\nhttps://dockstore.org/workflows/ICGC-TCGA-PanCancer/wdl-pcawg-bwa-mem-workflow "No description associated with this workflow. "\nThis makes it difficult for users to select a workflow. Adding a short description would be very helpful.", "responses": [] } ]
1. https://f1000research.com/articles/6-52
https://f1000research.com/articles/4-1091/v1
20 Oct 15
{ "type": "Research Article", "title": "Machine learning models identify molecules active against the Ebola virus in vitro", "authors": [ "Sean Ekins", "Joel S. Freundlich", "Alex M. Clark", "Manu Anantpadma", "Robert A. Davey", "Peter Madrid", "Joel S. Freundlich", "Alex M. Clark", "Manu Anantpadma", "Robert A. Davey", "Peter Madrid" ], "abstract": "The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro and several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and the EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. 
Tilorone is an investigational antiviral agent that has shown a broad array of biological activities, including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in vitro.", "keywords": [ "Drug repurposing", "Ebola Virus", "Computational models", "Machine learning", "Pharmacophore", "Pyronaridine", "Quinacrine", "Tilorone" ], "content": "Introduction\n\nIn 2014, the outbreak of the Ebola virus (EBOV) in West Africa highlighted the need for broad-spectrum antiviral drugs for this and other emerging viruses1. Several groups had previously performed high throughput screens (HTS) and identified FDA approved drugs (amodiaquine, chloroquine, clomiphene and toremifene) with in vitro growth inhibitory activities against EBOV2,3. It appears none of these molecules were tried during the epidemic in Africa4, likely due to the lack of efficacy data in higher order species. We have previously summarized the numerous small molecules described in the literature as possessing antiviral activity that could be further evaluated for their potential EBOV activity alongside the few new antivirals. We have found that there is considerable prior knowledge regarding these small molecules possessing activity against EBOV in vitro or in animal models5–8, and this includes a number of accessible FDA-approved drugs2,3,9. Another recent study has shown three approved ion channel blockers (amiodarone, dronedarone, and verapamil) inhibited EBOV cellular entry9. The drugs were given at concentrations that would be achieved in human serum, and were effective against several filoviruses9. 
None of the FDA approved drugs described in these various studies were designed to target the Ebola virus. For example, amodiaquine and chloroquine are well known antimalarials, clomiphene and toremifene are selective estrogen receptor modulators, while amiodarone, dronedarone, and verapamil are anti-arrhythmics4. It may or may not be of importance, but all of these compounds have a common tertiary amine feature10,11. What is important is that they are all orally bioavailable and generally safe for humans at their approved doses. Some have suggested that G-protein-coupled receptors (GPCRs) may play a role in filoviral entry and receptor antagonists could be developed as anti-EBOV therapies12. Compounds that are FDA-approved drugs for other diseases2,3,9 but have activity against EBOV in vitro or in vivo may represent useful starting points, with the advantage that much is known regarding their absorption, distribution, metabolism and excretion (ADME) and toxicity properties. Thus, these repurposed drugs may represent a more advanced starting point for therapeutic development and approval compared with new chemical entities for preventing the spread and mortality associated with EBOV.\n\nBeyond these early stage drugs, there are a number of other compounds that have also been identified as active against EBOV (summarized in a review13). A thorough literature search identified 55 molecules suggested to have activity against EBOV in vitro and/or in vivo, which were evaluated from the perspective of an experienced medicinal chemist as well as using simple molecular properties; ultimately 16 were highlighted as desirable14. This dataset overlaps to some extent with another review that identified over 60 molecules15. Two recent repurposing screens identified 5316 and 8017 compounds with antiviral activity which also overlap the earlier screens. Additional studies have identified a small number of inhibitors18,19. 
In total, several hundred compounds may now have been identified with activity against EBOV in vitro.\n\nApproaches with more capacity to screen compounds include using computational methods as a filter before in vitro testing. Computational models for anti-EBOV activity include one which used the average quasi valence number (AQVN) and the electron-ion interaction potential (EIIP), parameters determining long-range interactions between biological molecules, for virtual screening of DrugBank, and suggested hundreds of compounds to test20. A follow-up to this study proposed ibuprofen for testing21. Others have also used computational docking studies to propose multi-target inhibitors of VP40, VP35, VP30 and VP2422, inhibitors of VP4023 or have suggested molecules to test in the absence of computational approaches24,25. We are unaware of any validation of these compounds. A further computational approach used a pharmacophore26 that was generated from four FDA approved compounds resulting from the two earliest high throughput screens against EBOV2,3. This pharmacophore closely matched the receptor-ligand pharmacophores for the EBOV protein 35 (VP35)5. Follow-up docking studies suggested that these compounds may also have favorable inhibitory interactions with this receptor. The pharmacophore was further used to screen several compound libraries27. We proposed that if we could learn from the many compounds already screened for anti-EBOV activity, we could more efficiently find additional compounds and perhaps understand the key molecular features needed for antiviral activity14. We speculated then that Laplacian-corrected Naïve Bayesian classifier models might be useful, as they have been for M. tuberculosis28,29 and more recently for T. cruzi30. To our knowledge, machine learning approaches to identify EBOV inhibitors have not been attempted elsewhere. 
The current study extends the machine learning approach to EBOV and uses both commercially available methods (Bayesian, Support Vector Machines (SVM) and recursive partitioning) and open source Bayesian software for model generation and compound scoring. We report the identification of three novel EBOV inhibitors with nanomolar EC50 values as validation of this approach.\n\n\nMethods\n\nQuinacrine hydrochloride, pyronaridine tetraphosphate, and tilorone dihydrochloride (BOC Sciences, Shirley, NY), bafilomycin A1, and chloroquine diphosphate (Sigma, St. Louis, MO) were dissolved in either DMSO or water as 10 mM stock solutions and were stored at -20°C. The nucleus staining dye, Hoechst 33342, CellMask™ Deep Red cytoplasmic/nuclear stain, NHS-Alexa-488 dye, the Dual-Glo® Luciferase Assay System and CytoTox 96™ assay kit were purchased from Promega (Promega, Madison, WI). The modified MTT assay Cell Counting Kit 8 was procured from Dojindo Molecular Technologies (Dojindo Molecular Technologies, Gaithersburg, MD). The 96-well high-content imaging plates were obtained from BD (BD Biosciences, Franklin Lakes, NJ) and 96-well white-walled tissue culture plates were from Corning (Corning Life Sciences, MA). The Opera QEHS confocal imaging reader, Acapella™ and Definiens™ image analysis packages were purchased from PerkinElmer (PerkinElmer, USA). Image acquisition was done using a Nikon Ti Eclipse high content imaging enabled microscope running NIS-Elements high content imaging software (version 4.30.02).\n\n868 molecules from the viral pseudotype entry assay and the EBOV replication assay from a recent publication3,31 were made available as an SDF file3. Salts were stripped and duplicates removed using Discovery Studio 4.1 (Biovia, San Diego, CA)32–36. For each assay, compounds with IC50 values less than 50 μM were selected as actives. All other compounds were classed as inactives. 
Models were generated using a standard protocol with the following molecular descriptors: molecular function class fingerprints of maximum diameter 6 (FCFP_6)37, AlogP, molecular weight, number of rotatable bonds, number of rings, number of aromatic rings, number of hydrogen bond acceptors, number of hydrogen bond donors, and molecular fractional polar surface area. Models were validated using five-fold cross validation (leave out 20% of the database). Bayesian, Support Vector Machine, and Recursive Partitioning Forest and single tree models built with the same molecular descriptors in Discovery Studio were compared. For SVM models, we calculated interpretable descriptors in Discovery Studio and then used Pipeline Pilot to generate the FCFP_6 descriptors, followed by integration with R38. RP Forest and RP Single Tree models used the standard protocol in Discovery Studio. In the case of RP Forest models, ten trees were created with bagging. Bagging is short for “Bootstrap AGgregation”: for each tree, a bootstrap sample of the original data is taken, and this sample is used to grow the tree. RP Single Trees had a minimum of ten samples per node and a maximum tree depth of 20. In all cases, 5-fold cross validation or leave out 50% × 100 fold cross validation was used to calculate the receiver operating characteristic (ROC) curve for the models generated28,29.\n\nOpen Bayesian models for the Ebola datasets were developed using open source software39–41 and loaded into the Mobile Molecular Data Sheet (MMDS (http://molmatinf.com/)), and then the two models were used to score the three compounds selected by the earlier models. 
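The Laplacian-corrected Naïve Bayesian scoring used with fingerprint features can be sketched in a few lines. The following is a minimal, illustrative reimplementation in plain Python, not the Discovery Studio or Pipeline Pilot code the authors used; the feature names are hypothetical stand-ins for FCFP_6 fingerprint bits:

```python
from math import log

def laplacian_bayesian_scores(train):
    """Train a Laplacian-corrected naive Bayesian scorer on
    (feature_set, is_active) pairs. Returns per-feature log-odds
    contributions relative to the baseline active rate."""
    total = len(train)
    actives = sum(1 for _, a in train if a)
    p_base = actives / total                 # baseline P(active)
    counts = {}                              # feature -> (n_total, n_active)
    for feats, active in train:
        for f in feats:
            t, a = counts.get(f, (0, 0))
            counts[f] = (t + 1, a + (1 if active else 0))
    k = 1.0 / p_base                         # virtual samples for the correction
    scores = {}
    for f, (t, a) in counts.items():
        p_corr = (a + p_base * k) / (t + k)  # shrinks rare features toward p_base
        scores[f] = log(p_corr / p_base)
    return scores

def score_molecule(scores, feats):
    """Sum the contributions of present features; unseen features add 0."""
    return sum(scores.get(f, 0.0) for f in feats)

# Toy training set: feature sets standing in for FCFP_6 bits (hypothetical).
train = [
    ({"amine3", "quinoline"}, True),
    ({"amine3", "piperazine"}, True),
    ({"carboxylic_acid", "amide2"}, False),
    ({"carboxylic_acid", "penam"}, False),
]
model = laplacian_bayesian_scores(train)
# A tertiary-amine-bearing query outscores a carboxylic-acid-bearing one.
print(score_molecule(model, {"amine3"}) > score_molecule(model, {"carboxylic_acid"}))
```

The Laplacian correction is what makes the estimator stable for features seen only a few times: with no data, a feature's corrected probability collapses to the baseline rate and its contribution to zero.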
These two models are also openly accessible (http://molsync.com/ebola/) and can be uploaded into MMDS in order to score molecules of interest.\n\nPyronaridine was mapped to the recently published pharmacophore26, derived from the in vitro actives amodiaquine, chloroquine, clomiphene and toremifene, in Discovery Studio version 4.1 and a fit score was generated.\n\nRecombinant, infectious Ebola virus encoding green fluorescent protein (GFP) was used for testing efficacy of compounds and was originally provided by Dr. Heinz Feldmann, Rocky Mountain Laboratories. The strain that was used has the GFP gene inserted between the VP30 and VP24 genes. All viral infections were done in the BSL-4 lab at Texas Biomedical Research Institute. Briefly, 4,000 HeLa cells per well were grown overnight in 384-well tissue culture plates; the volume of DMEM (Fisher Scientific, Cat#MT10017CV) culture medium supplemented with 10% fetal bovine serum (Gemini Bio-Products, Cat#100106) was 25 µL. On the day of assay, test drugs were diluted to 1 mM concentration in complete medium. 25 µL of this mixture was added to the cells already containing 25 µL medium to achieve a concentration of 500 µM. All treatments were done in triplicate. 25 µL of medium was then removed from the first wells and added to the next wells. This type of serial dilution was done 12 times and treated cells were then incubated at 37°C in a humidified CO2 incubator for 1 hour. Final concentrations of 250, 125, 62.5, 31.25, 15.62, 7.81, 3.9, 1.9, 0.97, 0.48, 0.24 and 0.12 µM were achieved upon addition of 25 µL of infection mix containing wild type Ebola-GFP virus. Bafilomycin at a final concentration of 10 nM was used as a positive control drug. Infections were done to achieve an MOI of 0.05 to 0.15. Infected cells were incubated for 24 hours and then fixed by immersing the plates in formalin for 24 hours at 4°C. Fixed plates were decontaminated and brought out of the BSL-4. 
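The concentration series in this protocol follows directly from the two-fold transfers. As a worked check (illustrative arithmetic only, not the authors' code):

```python
# Drug is added to cells at 500 µM, serially diluted two-fold across 12 wells,
# and each well is then halved once more by adding an equal volume (25 µL)
# of infection mix.
start_uM = 500.0
in_well = [start_uM / 2 ** i for i in range(12)]   # 500 ... ~0.244 µM pre-infection
final = [c / 2 for c in in_well]                   # after adding infection mix

print([f"{c:.2f}" for c in final])
# Matches the reported series (which truncates some values):
# 250, 125, 62.5, 31.25, 15.62, 7.81, 3.9, 1.9, 0.97, 0.48, 0.24 and 0.12 µM
```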
Formalin from fixed plates was decanted and plates were washed thrice with PBS. EBOV-infected cells were stained for nuclei using Hoechst at 1:50,000 dilution and plates were imaged. Nuclei (blue) and infected cells (green) were counted using CellProfiler software (Broad Institute), version 2.1.1. The total number of nuclei (blue) was used as a proxy for cell numbers, and a loss of cell number was assumed to reflect cytotoxicity. Concentrations where total cell numbers were 20% less than the control were rejected from the analysis.\n\n\nResults\n\nUsing 5-fold cross validation, the Bayesian approach (Data S1 and Data S2) performed best for the EBOV replication data, was equivalent to the RP Forest approach (Table 1), and was better than SVM (Data S3 and Data S4) for the pseudotype data. The Open Bayesian models had ROC scores slightly lower than the Bayesian models built with Discovery Studio. A more exhaustive cross validation for the Bayesian models is the ‘leave out 50% repeated randomly 100 times’, which produced ROC values greater than 0.8, comparable to the 5-fold cross validation data. This indicated that the models are stable. For the EBOV pseudotype assay, alkoxyethylamino was a common feature amongst active compounds in the training set, as were 1,3-diaminopropyl and saturated six-membered heterocycles with an oxygen and perhaps an additional heteroatom in the ring (Figure 1A). Training set inactives commonly featured carboxylic acids, N,N'-disubstituted ureas, secondary and tertiary amides, pyrazoles, aromatic sulfonamides, tertiary cyclopentanols, 2-mercaptoethanol, and penams (Figure 1B). For the replication assay training set, active features included piperazine, phenothiazine, tertiary amines, and alkoxyethylamino (Figure 2A). Inactive features included secondary amides, disubstituted amines, cyclopropylmethyl, carboxylic acids, 1,3-oxathiolanes, tertiary alcohols, phenethyl, and penams (Figure 2B). 
A common active feature between both assays/models was alkoxyethylamino. Common inactive features between both were carboxylic acids, secondary amides, penams and tertiary alcohols, which may relate to properties that prevent the molecules from accessing cellular sites of viral activity.\n\nA. Active and B. Inactive features for the Discovery Studio pseudotype Bayesian model.\n\nA. Active and B. Inactive features for the Discovery Studio EBOV replication model.\n\nThe MicroSource Spectrum set of 2320 compounds was then scored with both Bayesian models (Data S5). Predicted actives were quantified as to their chemical similarity, or distance, from molecules in the training set. When excluding compounds in the training set, those scoring highly were considered most interesting and included the antiviral tilorone and the antimalarials quinacrine and pyronaridine (Figure 3). Perhaps not surprisingly, tertiary amines scored particularly well. These molecules were also scored with the open Bayesian models (Data S6) and all replication models scored the compounds highly (values close to or greater than 1). None of these three compounds has been described in recent reviews of small molecules with activity against EBOV14–16, to our knowledge.\n\nFor comparison, chloroquine scored 31.38 in the replication Discovery Studio Bayesian model, 24.55 in the Discovery Studio pseudovirus Bayesian model, 1.63 in the Open Bayesian replication model and 0.51 in the Open Bayesian pseudovirus model.\n\nThe MicroSource set had previously been screened with the published Ebola common feature pharmacophore26,27, using the van der Waals surface of amodiaquine (which was more potent than chloroquine3) to limit the number of hits retrieved42–44. Two of the three selected compounds – quinacrine (fit score 2.59) and tilorone (fit score 3.65) – were retrieved previously. 
We therefore used the ligand pharmacophore mapping protocol to map pyronaridine to the pharmacophore without the van der Waals surface (Figure 4; a fit score of 3.60 suggested this was a good match to the pharmacophore features).\n\nFit score of 3.60 (chloroquine (yellow) = 4.21).\n\nThe three selected compounds were tested in vitro alongside the positive control chloroquine, which gave an expected dose response curve (Figure 5, Table 2). Quinacrine, pyronaridine and tilorone had EC50 values of 350, 420 and 230 nM, respectively, which were lower than that for chloroquine (4.0 μM).\n\nCells were treated and then challenged with Ebola virus encoding GFP. Infection efficiency was calculated as infected cells (expressing GFP)/total cells and normalized to the infection efficiency seen in the untreated control. Shown is one representative experiment where each point is the average of 3 independent measurements of infection +/- standard deviation. Dose response curves were fitted by non-linear regression.\n\nThe cytotoxicity of the compounds is represented as a 50% cytotoxicity concentration (CC50), estimated as the lowest concentration of drug that produced ≥ 50% loss in cell number by nuclei counting.\n\n\nDiscussion\n\nOur recent work on neglected diseases has shown that we can learn from existing assay datasets. Specifically, we have previously analyzed large datasets for Mycobacterium tuberculosis to build machine learning models that use single point data, dose-response data43,45, combine bioactivity and cytotoxicity data (e.g. Vero, HepG2 or other model mammalian cells)28,29,46 or combinations of these sets47,48. These models in turn have been validated with additional non-overlapping datasets, demonstrating that it is possible to use publicly accessible data to find novel in vitro active antituberculars. We have also applied the same approach recently to identify a molecule with in vitro and in vivo activity against T. cruzi30. 
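The non-linear regression of the dose-response curves mentioned in the Results is typically done with a four-parameter logistic (Hill) model. The sketch below is illustrative only: it uses synthetic data and a naive grid search in place of the authors' regression, with the assay concentrations taken from the Methods.

```python
from math import inf

def hill(c, ec50, n=1.0, top=1.0, bottom=0.0):
    """Four-parameter logistic: normalized infection efficiency falls
    from `top` to `bottom` as drug concentration c rises past ec50."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** n)

# Synthetic "infection efficiency" readings at the assay concentrations (µM),
# generated from an assumed true EC50 of 0.35 µM (quinacrine-like, illustrative).
concs = [250, 125, 62.5, 31.25, 15.62, 7.81, 3.9, 1.9, 0.97, 0.48, 0.24, 0.12]
obs = [hill(c, ec50=0.35) for c in concs]

# Naive least-squares fit of EC50 over a log-spaced grid (a stand-in for
# proper non-linear regression).
best_ec50, best_sse = None, inf
for guess in (0.05 * 1.1 ** i for i in range(60)):   # ~0.05-15 µM
    sse = sum((hill(c, guess) - y) ** 2 for c, y in zip(concs, obs))
    if sse < best_sse:
        best_ec50, best_sse = guess, sse

print(round(best_ec50, 2))   # recovers a value near the assumed 0.35 µM
```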
In the current study we found that different machine learning methods produced similar 5-fold cross validation data, although the Bayesian models had ROC values consistently above 0.80, which is preferable. One of the issues with computational models is that they are rarely accessible to others due to commercial software licensing requirements. We have previously shown that models built with open source tools can produce validation statistics comparable to commercial modeling tools49. We recently made “function class fingerprints of maximum diameter 6” (FCFP6) and “extended connectivity fingerprints” (ECFP6) open source and have described their implementation with Chemistry Development Kit (CDK)50 components41. In addition, we described an open source Bayesian algorithm that can be used with these descriptors39,40. One way to make such models more accessible is to use mobile devices for their delivery, and we have developed cheminformatics mobile apps41,51–55. Several of these apps combine Bayesian models and open source fingerprint descriptors to enable models that can be used within a mobile app (TB Mobile, MMDS, Approved Drugs and MolPrime). This enables a scientist to select a molecule and score it with the models. In the current study we used the same training sets for anti-EBOV activity, using replication and pseudotype screening data, to build open source models that we can share with the community (http://molsync.com/ebola/).\n\nThe Bayesian models allowed us to select three compounds from the MicroSource compound library that scored highly and were not in the model training sets. The Open Bayesian models also scored the three hits favorably, which bodes well for screening other compounds of interest. Two of these molecules had also been identified with our earlier pharmacophore model26. 
When tested in vitro, the three compounds possessed EC50 values of 230–420 nM, much lower than the positive control chloroquine (EC50 4.0 μM) used in this study and identified previously3. Tilorone is an investigational agent that has been known for over 40 years as an antiviral56 and is an inducer of interferon in mice57. It has been shown to possess a broad array of biological activities, including cell growth inhibition in PC3 CDK5dn prostate cancer cells (IC50 8–12 μM)58, inhibition of primase DnaG from Bacillus anthracis (IC50 7.1 μM)59, antifibrotic activity in a mouse model of pulmonary fibrosis (decreased lung hydroxyproline content and expression of collagen genes)60, α7 nicotinic receptor (nAChR) agonist activity (Ki 56 nM)61 with activation of human α7 nAChR (EC50 2.5 μM)62, radioprotective activity63, potent modulation of HIF-mediated gene expression in neurons with neuroprotective properties64, and induction of the accumulation of glycosaminoglycans, which delays infectious prion clearance and prolongs prion disease incubation time65. Quinacrine is an old antimalarial drug now more widely used as an antiprotozoal for the treatment of giardiasis66 and as an anthelmintic. Pyronaridine is a potent antimalarial (IC50 13.5 nM)67, has activity against Babesia spp.68, is active in vitro (EC50 225 nM) and in vivo (85.2% efficacy after 4 days of treatment at 50 mg/kg) against T. cruzi30, and is a P-glycoprotein inhibitor69. Pyronaridine is used in combination with artesunate in the European Medicines Agency-approved Pyramax70, which has performed well in clinical trials for malaria71. 
As this molecule has already been approved, it may have a more direct path to clinical testing if it is found to be active in standard animal models infected with the Ebola virus.\n\nAs stated before in perspectives by us72 and others2,16,20,73, the fact that approved drugs may be repurposed for other diseases should not be viewed as a negative aspect of the small molecules, implying undesirable target promiscuity74. Instead, we prefer to reference recently published crystallographic analyses75 demonstrating that small molecules may bind multiple proteins in different types of binding sites and with distinct conformations to ultimately facilitate molecular repurposing. While it would be most desirable to repurpose an approved drug and, thus, catapult a discovery effort into a Phase II trial, one should not ignore the significance of utilizing the discovery of a new use for an old drug to seed efforts in the lead optimization phase76. Such an expedited program would be expected to have a high probability of producing novel small molecules, closely related to or inspired by the drug, with the opportunity to translate quickly to clinical trials.\n\nIn summary, this study has added to the previous work that identified several FDA approved compounds active against EBOV in vitro. We propose that these three molecules may warrant further evaluation in vivo, as they are significantly more active than chloroquine. Larger scale virtual screening could be performed on the millions of commercially available molecules, or on more complete sets of approved and older, no-longer-used drugs than have already been screened. These computational efforts can then prioritize molecules for testing. Such an approach may be a useful way to leverage the HTS data that have already been generated at great cost. 
In this study we have focused on just the data from a single group3,31, but it may also be possible to combine this with the data from the other high throughput screens2,16,17 to provide a much larger training set. There is also the opportunity to apply many different computational approaches beyond those described here to identify whole cell active compounds against EBOV. Ultimately, we should be able to identify additional compounds that could be immediately useful to treat patients with the disease while we await the approval of a vaccine.\n\n\nData availability\n\nSupplemental data contain results from the Bayesian models and SVM models as well as the output of predictions with the Bayesian models and open Bayesian models.\n\nThe training sets used in the models are available as SDF files (http://molsync.com/ebola/).", "appendix": "Author contributions\n\n\n\nConceived and designed the experiments: S.E., M.A., R.A.D., P.B.M.\n\nPerformed the experiments: S.E., M.A., R.A.D., P.B.M.\n\nAnalyzed the data: S.E., J.S.F., M.A., R.A.D., P.B.M.\n\nContributed reagents/materials/analysis tools: S.E., A.M.C., M.A., R.A.D., P.B.M.\n\nWrote the manuscript: S.E., J.S.F., A.M.C., M.A., R.A.D., P.B.M.\n\nAll authors have seen and agreed to the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nS.E. works for Collaborations in Chemistry and Collaborations Pharmaceuticals, Inc., and S.E. and A.M.C. consult for Collaborative Drug Discovery Inc.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nSE kindly acknowledges Biovia for providing Discovery Studio and Dr. Megan Coffee and Dr. Christopher Southan for initially stimulating interest in this topic.\n\n\nSupplementary materials\n\nSupplemental data S1–S4 and S6.\n\nSupplemental data S1. Pseudotype Bayesian model. Supplemental data S2. EBOV replication Bayesian model. Supplemental data S3. SVM output file for pseudotype model. Supplemental data S4. 
SVM output file for EBOV replication model, Supplemental Data S6. Predictions for Ebola activity using Open Bayesian models in the MMDS app.\n\nClick here to access the data.\n\nSupplemental data S5.\n\nMicroSource predictions with Bayesian models xl file.\n\nClick here to access the data.\n\n\nReferences\n\nEkins S, Southan C, Coffee M: Finding small molecules for the 'next Ebola' [version 2; referees: 2 approved]. F1000Res. 2015; 4: 58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohansen LM, Brannan JM, Delos SE, et al.: FDA-approved selective estrogen receptor modulators inhibit Ebola virus infection. Sci Transl Med. 2013; 5(190): 190ra79. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadrid PB, Chopra S, Manger ID, et al.: A systematic screen of FDA-approved drugs for inhibitors of biological threat agents. PLoS One. 2013; 8(4): e60579. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Coffee M: FDA approved drugs as potential Ebola treatments [version 2; referees: 2 approved]. F1000Res. 2015; 4: 48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrown CS, Lee MS, Leung DW, et al.: In silico derived small molecules bind the filovirus VP35 protein and inhibit its polymerase cofactor activity. J Mol Biol. 2014; 426(10): 2045–58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHan Z, Lu J, Liu Y, et al.: Small-molecule probes targeting the viral PPxY-host Nedd4 interface block egress of a broad range of RNA viruses. J Virol. 2014; 88(13): 7294–306. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOpsenica I, Burnett JC, Gussio R, et al.: A chemotype that inhibits three unrelated pathogenic targets: the botulinum neurotoxin serotype A light chain, P. falciparum malaria, and the Ebola filovirus. J Med Chem. 2011; 54(5): 1157–69. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohnson JC, Martinez O, Honko AN, et al.: Pyridinyl imidazole inhibitors of p38 MAP kinase impair viral entry and reduce cytokine induction by Zaire ebolavirus in human dendritic cells. Antiviral Res. 2014; 107: 102–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGehring G, Rohrmann K, Atenchong N, et al.: The clinically approved drugs amiodarone, dronedarone and verapamil inhibit filovirus cell entry. J Antimicrob Chemother. 2014; 69(8): 2123–31. PubMed Abstract | Publisher Full Text\n\nKazmi F, Hensley T, Pope C, et al.: Lysosomal sequestration (trapping) of lipophilic amine (cationic amphiphilic) drugs in immortalized human hepatocytes (Fa2N-4 cells). Drug Metab Dispos. 2013; 41(4): 897–905. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNadanaciva S, Lu S, Gebhard DF, et al.: A high content screening assay for identifying lysosomotropic compounds. Toxicol In Vitro. 2011; 25(3): 715–23. PubMed Abstract | Publisher Full Text\n\nCheng H, Lear-Rooney CM, Johansen L, et al.: Inhibition of Ebola and Marburg Virus Entry by G Protein-Coupled Receptor Antagonists. J Virol. 2015; 89(19): 9932–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Clercq E: Ebola virus (EBOV) infection: Therapeutic strategies. Biochem Pharmacol. 2015; 93(1): 1–10. PubMed Abstract | Publisher Full Text\n\nLitterman N, Lipinski C, Ekins S: Small molecules with antiviral activity against the Ebola virus [version 1; referees: 2 approved]. F1000Res. 2015; 4: 38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPicazo E, Giordanetto F: Small molecule inhibitors of ebola virus infection. Drug Discov Today. 2014; 20(2): 277–86. PubMed Abstract | Publisher Full Text\n\nKouznetsova J, Sun W, Martínez-Romero C, et al.: Identification of 53 compounds that block Ebola virus-like particle entry via a repurposing screen of approved drugs. Emerg Microbes Infect. 2014; 3(12): e84. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohansen LM, DeWald LE, Shoemaker CJ, et al.: A screen of approved drugs and molecular probes identifies therapeutics with anti-Ebola virus activity. Sci Transl Med. 2015; 7(290): 290ra89. PubMed Abstract | Publisher Full Text\n\nBasu A, Mills DM, Mitchell D, et al.: Novel Small Molecule Entry Inhibitors of Ebola Virus. J Infect Dis. 2015; 212(Suppl 2): S425–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLong J, Wright E, Molesti E, et al.: Antiviral therapies against Ebola and other emerging viral diseases using existing medicines that block virus entry [version 2; referees: 2 approved]. F1000Res. 2015; 4: 30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVeljkovic V, Loiseau PM, Figadere B, et al.: Virtual screen for repurposing approved and experimental drugs for candidate inhibitors of EBOLA virus infection [version 1; referees: 2 approved]. F1000Res. 2015; 4: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVeljkovic V, Goeijenbier M, Glisic S, et al.: In silico analysis suggests repurposing of ibuprofen for prevention and treatment of EBOLA virus disease [version 1; referees: 2 approved]. F1000Res. 2015; 4: 104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRaj U, Varadwaj PK: Flavonoids as Multi-target Inhibitors for Proteins Associated with Ebola Virus: In Silico Discovery Using Virtual Screening and Molecular Docking Studies. Interdiscip Sci. 2015; 1–10. PubMed Abstract | Publisher Full Text\n\nAbazari D, Moghtadaei M, Behvarmanesh A, et al.: Molecular docking based screening of predicted potential inhibitors for VP40 from Ebola virus. Bioinformation. 2015; 11(5): 243–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNishimura H, Yamaya M: A Synthetic Serine Protease Inhibitor, Nafamostat Mesilate, Is a Drug Potentially Applicable to the Treatment of Ebola Virus Disease. Tohoku J Exp Med. 2015; 237(1): 45–50. 
PubMed Abstract | Publisher Full Text\n\nDe Clercq E: Curious (Old and New) Antiviral Nucleoside Analogues with Intriguing Therapeutic Potential. Curr Med Chem. 2015. PubMed Abstract\n\nEkins S, Freundlich JS, Coffee M: A common feature pharmacophore for FDA-approved drugs inhibiting the Ebola virus [version 2; referees: 2 approved]. F1000Res. 2014; 3: 277. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S: A pharmacophore for Ebola active compounds - predictions searching Microsource library. Figshare. 2014. Publisher Full Text\n\nEkins S, Reynolds RC, Franzblau SG, et al.: Enhancing hit identification in Mycobacterium tuberculosis drug discovery using validated dual-event Bayesian models. PLoS One. 2013; 8(5): e63240. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Reynolds RC, Kim H, et al.: Bayesian models leveraging bioactivity and cytotoxicity information for drug discovery. Chem Biol. 2013; 20(3): 370–378. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, de Siqueira-Neto JL, McCall LI, et al.: Machine Learning Models and Pathway Genome Data Base for Trypanosoma cruzi Drug Discovery. PLoS Negl Trop Dis. 2015; 9(6): e0003878. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadrid PB, Panchal RG, Warren TK, et al.: Evaluation of Ebola Virus Inhibitors for Drug Repurposing. ACS Infect Dis. 2015; 1(7): 317–326. Publisher Full Text\n\nPrathipati P, Ma NL, Keller TH: Global Bayesian models for the prioritization of antitubercular agents. J Chem Inf Model. 2008; 48(12): 2362–70. PubMed Abstract | Publisher Full Text\n\nBender A, Scheiber J, Glick M, et al.: Analysis of pharmacology data and the prediction of adverse drug reactions and off-target effects from chemical structure. ChemMedChem. 2007; 2(6): 861–873.
PubMed Abstract | Publisher Full Text\n\nKlon AE, Lowrie JF, Diller DJ: Improved naïve Bayesian modeling of numerical data for absorption, distribution, metabolism and excretion (ADME) property prediction. J Chem Inf Model. 2006; 46(5): 1945–56. PubMed Abstract | Publisher Full Text\n\nHassan M, Brown RD, Varma-O'brien S, et al.: Cheminformatics analysis and learning in a data pipelining environment. Mol Divers. 2006; 10(3): 283–99. PubMed Abstract | Publisher Full Text\n\nRogers D, Brown RD, Hahn M: Using extended-connectivity fingerprints with Laplacian-modified Bayesian analysis in high-throughput screening follow-up. J Biomol Screen. 2005; 10(7): 682–6. PubMed Abstract | Publisher Full Text\n\nJones DR, Ekins S, Li L, et al.: Computational approaches that predict metabolic intermediate complex formation with CYP3A4 (+b5). Drug Metab Dispos. 2007; 35(9): 1466–75. PubMed Abstract | Publisher Full Text\n\nAnon. R. Available from: http://www.r-project.org/. Reference Source\n\nClark AM, Ekins S: Open Source Bayesian Models. 2. Mining a "Big Dataset" To Create and Validate Models with ChEMBL. J Chem Inf Model. 2015; 55(6): 1246–1260. PubMed Abstract | Publisher Full Text\n\nClark AM, Dole K, Coulon-Spektor A, et al.: Open Source Bayesian Models. 1. Application to ADME/Tox and Drug Discovery Datasets. J Chem Inf Model. 2015; 55(6): 1231–1245. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClark AM, Sarker M, Ekins S: New target prediction and visualization tools incorporating open source molecular fingerprints for TB Mobile 2.0. J Cheminform. 2014; 6: 38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLamichhane G, Freundlich JS, Ekins S, et al.: Essential metabolites of Mycobacterium tuberculosis and their mimics. MBio. 2011; 2(1): e00301–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Bradford J, Dole K, et al.: A collaborative database and computational models for tuberculosis drug discovery. Mol Biosyst.
2010; 6(5): 840–851. PubMed Abstract | Publisher Full Text\n\nZheng X, Ekins S, Raufman JP, et al.: Computational models for drug inhibition of the human apical sodium-dependent bile acid transporter. Mol Pharm. 2009; 6(5): 1591–1603. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Kaneko T, Lipinski CA, et al.: Analysis and hit filtering of a very large library of compounds screened against Mycobacterium tuberculosis. Mol Biosyst. 2010; 6(11): 2316–2324. PubMed Abstract\n\nEkins S, Casey AC, Roberts D, et al.: Bayesian models for screening and TB Mobile for target inference with Mycobacterium tuberculosis. Tuberculosis (Edinb). 2014; 94(2): 162–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Freundlich JS, Reynolds RC: Are bigger data sets better for machine learning? Fusing single-point and dual-event dose response data for Mycobacterium tuberculosis. J Chem Inf Model. 2014; 54(7): 2157–65. PubMed Abstract | Publisher Full Text\n\nEkins S, Freundlich JS, Hobrath JV, et al.: Combining computational methods for hit to lead optimization in Mycobacterium tuberculosis drug discovery. Pharm Res. 2014; 31(2): 414–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGupta RR, Gifford EM, Liston T, et al.: Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties. Drug Metab Dispos. 2010; 38(11): 2083–2090. PubMed Abstract | Publisher Full Text\n\nSteinbeck C, Han Y, Kuhn S, et al.: The Chemistry Development Kit (CDK): an open-source Java library for Chemo- and Bioinformatics. J Chem Inf Comput Sci. 2003; 43(2): 493–500. PubMed Abstract | Publisher Full Text\n\nEkins S, Clark AM, Williams AJ: Incorporating Green Chemistry Concepts into Mobile Chemistry Applications and Their Potential Uses. ACS Sustain Chem Eng. 2013; 1(1): 8–13. 
Publisher Full Text\n\nEkins S, Clark AM, Sarker M: TB Mobile: a mobile app for anti-tuberculosis molecules with known targets. J Cheminform. 2013; 5(1): 13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClark AM, Williams AJ, Ekins S: Cheminformatics workflows using mobile apps. Chem-Bio Informatics J. 2013; 13: 1–18. Publisher Full Text\n\nEkins S, Clark AM, Williams AJ: Open Drug Discovery Teams: A Chemistry Mobile App for Collaboration. Mol Inform. 2012; 31(8): 585–597. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilliams AJ, Ekins S, Clark AM, et al.: Mobile apps for chemistry in the world of drug discovery. Drug Discov Today. 2011; 16(21–22): 928–39. PubMed Abstract | Publisher Full Text\n\nKrueger RE, Mayer GD: Tilorone hydrochloride: an orally active antiviral agent. Science. 1970; 169(3951): 1213–4. PubMed Abstract | Publisher Full Text\n\nStringfellow DA: Comparison of interferon-inducing and antiviral properties of 2-amino-5-bromo-6-methyl-4-pyrimidinol (U-25,166), tilorone hydrochloride, and polyinosinic-polycytidylic acid. Antimicrob Agents Chemother. 1977; 11(6): 984–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWissing MD, Dadon T, Kim E, et al.: Small-molecule screening of PC3 prostate cancer cells identifies tilorone dihydrochloride to selectively inhibit cell growth based on cyclin-dependent kinase 5 expression. Oncol Rep. 2014; 32(1): 419–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiswas T, Green KD, Garneau-Tsodikova S, et al.: Discovery of inhibitors of Bacillus anthracis primase DnaG. Biochemistry. 2013; 52(39): 6905–10. PubMed Abstract | Publisher Full Text\n\nLeppäranta O, Tikkanen JM, Bespalov MM, et al.: Bone morphogenetic protein-inducer tilorone identified by high-throughput screening is antifibrotic in vivo. Am J Respir Cell Mol Biol. 2013; 48(4): 448–55.
PubMed Abstract | Publisher Full Text\n\nSchrimpf MR, Sippy KB, Briggs CA, et al.: SAR of α7 nicotinic receptor agonists derived from tilorone: exploration of a novel nicotinic pharmacophore. Bioorg Med Chem Lett. 2012; 22(4): 1633–8. PubMed Abstract | Publisher Full Text\n\nBriggs CA, Schrimpf MR, Anderson DJ, et al.: alpha7 nicotinic acetylcholine receptor agonist properties of tilorone and related tricyclic analogues. Br J Pharmacol. 2008; 153(5): 1054–61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim K, Damoiseaux R, Norris AJ, et al.: High throughput screening of small molecule libraries for modifiers of radiation responses. Int J Radiat Biol. 2011; 87(8): 839–45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRatan RR, Siddiq A, Aminova L, et al.: Small molecule activation of adaptive gene expression: tilorone or its analogs are novel potent activators of hypoxia inducible factor-1 that provide prophylaxis against stroke and spinal cord injury. Ann N Y Acad Sci. 2008; 1147: 383–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMayer-Sonnenfeld T, Avrahami D, Friedman-Levi Y, et al.: Chemically induced accumulation of GAGs delays PrPSc clearance but prolongs prion disease incubation time. Cell Mol Neurobiol. 2008; 28(7): 1005–15. PubMed Abstract | Publisher Full Text\n\nWolfe MS: Giardiasis. Clin Microbiol Rev. 1992; 5(1): 93–100. PubMed Abstract | Free Full Text\n\nOkombo J, Kiara SM, Mwai L, et al.: Baseline in vitro activities of the antimalarials pyronaridine and methylene blue against Plasmodium falciparum isolates from Kenya. Antimicrob Agents Chemother. 2012; 56(2): 1105–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRizk MA, El-Sayed SA, Terkawi MA, et al.: Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites. PLoS One. 2015; 10(4): e0125276. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nQi J, Wang S, Liu G, et al.: Pyronaridine, a novel modulator of P-glycoprotein-mediated multidrug resistance in tumor cells in vitro and in vivo. Biochem Biophys Res Commun. 2004; 319(4): 1124–31. PubMed Abstract | Publisher Full Text\n\nAnon. Pyramax® (pyronaridine artesunate). Available from: http://www.mmv.org/access-delivery/access-portfolio/pyramax®-pyronaridine-artesunate. Reference Source\n\nPoravuth Y, Socheat D, Rueangweerayut R, et al.: Pyronaridine-artesunate versus chloroquine in patients with acute Plasmodium vivax malaria: a randomized, double-blind, non-inferiority trial. PLoS One. 2011; 6(1): e14501. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Williams AJ, Krasowski MD, et al.: In silico repositioning of approved drugs for rare and neglected diseases. Drug Discov Today. 2011; 16(7–8): 298–310. PubMed Abstract | Publisher Full Text\n\nMartínez-Romero C, García-Sastre A: Against the clock towards new Ebola virus therapies. Virus Res. 2015; pii: S0168-1702(15)00236-1. PubMed Abstract | Publisher Full Text\n\nSeidler J, McGovern SL, Doman TN, et al.: Identification and prediction of promiscuous aggregating inhibitors among known drugs. J Med Chem. 2003; 46(21): 4477–4486. PubMed Abstract | Publisher Full Text\n\nBarelier S, Sterling T, O'Meara MJ, et al.: The recognition of identical ligands by unrelated proteins. ACS Chem Biol. 2015. PubMed Abstract | Publisher Full Text\n\nEkins S, Williams AJ: Finding promiscuous old drugs for new uses. Pharm Res. 2011; 28(8): 1785–1791. PubMed Abstract | Publisher Full Text
[ { "id": "10995", "date": "10 Nov 2015", "name": "Sanja Glisic", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nUsing machine learning models and molecular modeling, Ekins and his colleagues have successfully identified, from a collection of 2320 compounds, three promising anti-Ebola compounds with in vitro nanomolar activity. It is a perfect example confirming the suitability of in silico approaches for selecting molecules against Ebola virus. This result would be strengthened by suggesting possible therapeutic target(s) of the selected candidate drugs using resources of curated chemistry-to-protein relationships. Such information could help in further improvement of the proposed therapeutic molecules, as well as in the selection of other candidates for Ebola drugs.", "responses": [ { "c_id": "1738", "date": "04 Jan 2016", "name": "Sean Ekins", "role": "Author Response", "response": "We thank the reviewer for their constructive feedback.\n\nResponse: We are unsure which resources the reviewer is referring to and how these would help identify the antiviral target. We have tried to extensively describe the known activities of the three compounds against various targets outside of viruses. In addition, we have previously suggested that such antimalarials with Ebola activity may dock into VP35. Preliminary docking results suggest pyronaridine may dock into the same site, which is also indicated by the pharmacophore already provided (Figure 4).
We have added the statement “Future work may include identification of targets using computational or experimental approaches.” to the discussion." } ] }, { "id": "11173", "date": "03 Dec 2015", "name": "Sandeep Chakraborty", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nEkins et al. have presented a crisp and lucid manuscript on a very relevant topic. They have presented a methodology that implements machine learning techniques to learn from known active and inactive compounds (an ever-increasing set, which will tend to provide improved results as time goes by), and to score a larger set of compounds (the MicroSource Spectrum set of 2320 compounds). The in silico methodology described here provides an excellent method to quickly screen known compounds for possible therapies, against Ebola in particular and other viruses in general. Finally, they demonstrate (in vitro) the increased effectiveness of three compounds - the antiviral tilorone and the antimalarials quinacrine and pyronaridine - in comparison to the known active chloroquine (albeit at a higher cytotoxicity) in inhibiting viral infection of HeLa cells. Further, their efforts in ensuring open access to such tools are commendable as the next pathogen-caused humanitarian crisis looms in some nations.\n\nThe biggest open question is how good the molecular descriptors are, and how much of this is serendipitous.
For example, I find it hard to believe that molecular weight and the number of rotatable bonds can be good predictors of drug-protein interactions (although I may be wrong). I have been investigating promiscuous ligand-protein interactions for a couple of years on a molecular basis (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0032011, http://f1000research.com/articles/2-286/v3). One interesting example (unpublished) is suramin, used in the treatment of African sleeping sickness (African trypanosomiasis) and river blindness (onchocerciasis), infections caused by parasites. Suramin binds eight non-homologous proteins in the PDB database, through different parts of the molecule and in binding sites that share little similarity in the residues involved. Also, the molecule (in addition to the protein) undergoes conformational changes, underscoring the difficulty of computational methods in modeling such interactions. In the face of such data, the machine learning models appear too simplistic.\n\nAlso, it is not completely clear why the 23rd, 31st and 34th compounds were chosen from Table S5, which is ordered on column H (all three have amines; don't the previous ones have them?).\n\nSome minor comments: It would be an interesting case study to evaluate how favipiravir (which I understand is yet to be clinically approved in the US, but is approved in Japan) and BCX4430 would rank through the machine learning methodology. It would be good to have a set of images, and the corresponding nuclei counts obtained from CellProfiler. Is there a way to quantify the color green as a measure of viral infection? AlogP as a molecular descriptor has not been explained (page 3).", "responses": [ { "c_id": "1737", "date": "04 Jan 2016", "name": "Sean Ekins", "role": "Author Response", "response": "We thank the reviewer for their constructive feedback.
The approach we have taken uses FCFP_6 fingerprints as well as 8 interpretable descriptors, and therefore the models do not depend on molecular weight and rotatable bond number. On the whole this approach has been remarkably useful for predicting whole-cell activity, as we have described for Mycobacterium tuberculosis, T. cruzi and now Ebola. In all cases we are not considering a single target. This suggests the machine learning approach and descriptors used can discern the features of active molecules from those that are inactive and identify new molecules. The three compounds were chosen because those above them in the list were either compounds in the training set or antipsychotics and other CNS-acting compounds, etc., which were deemed less desirable.\n\nThe model could certainly be used to predict additional molecules. The two compounds suggested by the reviewer are structurally distinct from any of the actives in our current training set. We didn’t identify any of the classical antiviral polymerase-looking compounds in our screening against Ebola. However, we have previously collated and described many other diverse compounds active against Ebola in vitro, as described in this manuscript. Perhaps the next step would be to utilize all of the different HTS screening data to build a combined model that considers this structural diversity and may overcome limitations in the current models.\n\nResponse: We have now added a new figure with cell images (S7).\n\nResponse: AlogP is a widely used measure of hydrophobicity and we have referred to its use, along with the other descriptors, in the previous machine learning papers." } ] } ]
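The author response above describes Bayesian models built on binary fingerprint features (FCFP_6) plus a handful of descriptors. The core of such models is the Laplacian-modified naive Bayesian scoring of Rogers et al. (2005). The following minimal sketch illustrates that scoring with toy fingerprint bits; the function names and data are illustrative assumptions, not the authors' actual models or software.

```python
import math

def train_laplacian_bayes(fingerprints, labels):
    """Laplacian-corrected naive Bayesian weights over binary features
    (Rogers et al., 2005): w_F = log((A_F + 1) / (N_F * P + 1)), where
    A_F = actives containing feature F, N_F = all samples containing F,
    and P = overall fraction of actives in the training set."""
    p = sum(labels) / len(labels)
    counts = {}  # feature -> (actives_with_F, total_with_F)
    for fp, active in zip(fingerprints, labels):
        for f in fp:
            a, t = counts.get(f, (0, 0))
            counts[f] = (a + active, t + 1)
    return {f: math.log((a + 1) / (t * p + 1)) for f, (a, t) in counts.items()}

def bayes_score(weights, fp):
    """Score a query: sum of weights for features present; unseen features add 0."""
    return sum(weights.get(f, 0.0) for f in fp)

# Toy data: bits 1 and 2 enriched in actives, bits 8 and 9 in inactives.
train_fps = [{1, 2, 3}, {1, 2}, {8, 9}, {8, 9, 3}]
train_labels = [1, 1, 0, 0]
w = train_laplacian_bayes(train_fps, train_labels)
```

A query sharing the active-enriched bits (e.g. {1, 2}) then scores above one sharing the inactive-enriched bits ({8, 9}), while a bit seen equally in both classes (bit 3) contributes a weight of zero. In practice the features would be hashed FCFP_6 bits, and the trained weights would rank a screening library such as the MicroSource set.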
1
https://f1000research.com/articles/4-1091
https://f1000research.com/articles/5-2573/v1
25 Oct 16
{ "type": "Opinion Article", "title": "Processes of believing: Where do they come from? What are they good for?", "authors": [ "Rüdiger J. Seitz", "Raymond F. Paloutzian", "Hans-Ferdinand Angel" ], "abstract": "Despite the long scholarly discourse in Western theology and philosophy on religion, spirituality, and faith, explanations of what a belief is and what believing is are still lacking. Recently, cognitive neuroscience research addressed the human capacity of believing. We present evidence suggesting that believing is a human brain function which results in probabilistic representations with attributes of personal meaning and value and thereby guides individuals’ behavior. We propose that the same mental processes operating on narratives and rituals constitute belief systems in individuals and social groups. Our theoretical model of believing is suited to account for secular and non-secular belief formation.", "keywords": [ "Belief", "belief systems", "behavior", "credition", "cerebral networks", "meaning making", "narratives", "rituals", "representations", "perception", "prediction", "religion", "valuation" ], "content": "Summary\n\nAlthough largely neglected in contemporary science, believing is, as we will show, a fundamental brain function on which individual and societal behavior is grounded.\n\n\nIntroduction\n\nIn this communication we address, within the contemporary sciences, the neurophysiological and anthropological dimensions of the largely neglected but nevertheless important phenomenon of believing. Explanations are proposed for the putative physiological and psychological implementations of the process in the human brain (i.e., where do beliefs come from?), and their functions (i.e., what are beliefs good for?).
In sorting out the levels of building blocks and their functions, the neurophysiological processes underlying the behavioral process of believing, which can be studied empirically in the individual, are differentiated from more general belief system processes that operate in large collections of people such as communities and societies. It is nevertheless important to explore the relationship between what individuals believe, the processes by which they do so, and the more over-arching belief systems in a society. Fortunately, a new openness for understanding “religious phenomena” including “believing” as human abilities and activities is emerging (Connors & Halligan, 2015; Krueger & Grafman, 2013).\n\n\nA brief history of belief and believing\n\nSince the time of the great Greek philosophers Plato and Aristotle, the issue of what a belief is and how beliefs are related to knowledge and rationality has been a fundamental issue in Western philosophy. It raises the question of how best to understand the relation between knowledge and belief (Armstrong, 1973; Helm, 1999). Later, the writings of Saint Paul and other texts in the Christian Bible emphasized the importance of belief for understanding the role and meaning of Jesus Christ. Thus, in the history of Western thinking, religious beliefs became more and more integrated into a dogmatic set of what might be called “the Christian belief”.\n\nAfter the Enlightenment, religious truth claims became suspect. By the 20th century, under the widespread influence of psychoanalytic theory (Freud, 1928), religion was considered an obsessional neurosis and “belief” was negatively interpreted as a sign of human weakness. Thus, in psychology, religious phenomena including beliefs have from time to time been interpreted as deviant, or at least as unneeded, and subordinated under pathological labels such as neuroticism (Hills et al., 2004).
A consequence is that there have been few attempts to empirically study the phenomenon of belief or to conceptualize “normal” belief on a scientific basis (Connors & Halligan, 2015). Unfortunately, the “cognitive turn” in psychology, which occurred around 1960, did not succeed in making explicit the specific character and relevance of “believing” as a cognitive ability of humans. Nevertheless, it opened the door to examining a wide range of cognitive processes such as thinking, meaning making, and perception, and prepared psychology to integrate the believing process into further research. Recently, interest in the nature of human belief has increased both in scientific discourse and among the general public. For example, in the current psychology of religion there is increasing interest in belief and disbelief for both religious and atheist orientations (Bullivant & Ruse, 2013; Schnell & Keenan, 2011) and in the relationship of religiousness and specific religious beliefs to spirituality and health (Koenig et al., 2012). This interest includes extensive and recently intensified debates about the meaning and utility of concepts such as faith, belief, transcendence, and spirituality (Oviedo, 2016; Paloutzian & Park, 2013; Paloutzian & Park, 2015; Visela & Angel, 2016). In addition, in attempting to explore the neural correlates of religious experience, cognitive neuroscience implicitly assumes that believing is a component of normal mental activity (Azari et al., 2001; Harris et al., 2009).\n\nWhen we say holding a belief is a human ability, we mean that believing is envisioned as a mental activity generated by neural circuits in the brain (Boyer, 2003). Thus, a “normal belief” is a putative product of the brain of a believing individual and is generally entertained as a belief by humans. Beliefs serve a purpose in that they are linked to personal intuitive judgments about the subjective certainty of mental constructs and sensory perceptions (Harris et al., 2008).
Personal beliefs thereby function as part of the building blocks of intelligent behavior (Elliott et al., 1995; Howlett & Paulus, 2015; Taves, 2015).\n\n\nThe need for theory\n\nWe need to formulate a theoretical concept of normal believing in order to integrate findings such as the above into a coherent framework. Of course, the issue of what constitutes “normal” must first be settled. In a philosophical sense there is no self-evident concept that would explain terms like “normality,” “norms,” or “normative.” Deep discussion of this issue is beyond the concerns of this paper, but we emphasize that normal belief is neither a pathological brain state nor strictly limited to religions. Nevertheless, developing such an innovative understanding of “normal belief” requires some theoretical considerations, as follows.\n\nA) In the psychology of religion, belief is often understood as a religious phenomenon. However, a concept of “normal belief” does not equal religious belief, because people believe all manner of things, most of which are a-religious. Thus “belief” has to be understood generically, not only religiously. Consequently, “normal belief” is a proper characterization and is relevant for secular and religious domains.\n\nB) As a noun, “belief” is defined in different ways. Often it is treated as a “state” (Churchland & Churchland, 2013) or as an attitude towards someone or something, such as liking and favoring a person or disliking and seeing only the negative side of an issue. In contrast, in understanding “belief” as a mental activity generated by neural circuits in the human brain, we emphasize the procedural character of belief: it is not a state; believing is a mental process.
When understood this way, the notion of belief can be dissociated from concepts with static meanings, which are usually expressed in substantive terms like “belief,” “faith,” or “spirituality”.\n\nC) A model of the believing process, which is inherently a procedural concept, has to account for how a concept that includes fluctuation relates to “beliefs” that we perceive as more static. How the fluidity of the believing process is related to our perception of belief stability has to be integrated into the model of normal believing. In addition to this itself being illuminating, it will provide a new approach to topics such as the “formation of belief” and the maintenance of “belief systems” (Langdon & Coltheart, 2000).\n\nD) A concept of a “normal process of believing” has to integrate the complex notions of “time” and “process”. In philosophy, the understanding of time has been broadly reflected upon since antiquity and is still widely discussed (Le Poidevin & McBeath, 1993), indicating that it cannot be reduced to merely “measuring” time or to time being a “measurable” variable or property. Similarly, process thinking has a long tradition in Western philosophy, starting with Heraclitus' position “πάντα ῥεῖ” (panta rhei; everything flows) and developed in the field of process philosophy as spawned in the writings of Bergson, Merleau-Ponty, and especially Whitehead, indicating that process constitutes change and occurs through and interacts with time.\n\nE) A full account of the believing process has to explain what happens from the time the process starts until it comes to closure. This explanation needs to include an account of the subliminal aspects of the process.\n\n\nFrom neuroscientific to anthropological dimensions of believing\n\nAdvances in the natural sciences have made it possible to explore scientifically, to a certain degree, virtually all physiological processes enabling human life.
Analogous to the physical sciences, the life sciences seek to construct simplified models of biological processes that can be tested empirically by appropriate experiments. This approach was adopted by cognitive neuroscience with the aim of identifying fundamental processes underlying human behavior. There are four levels of exploration in this enterprise.\n\nA) There is a hermeneutic level rooted in philosophical issues. Cognitive neuroscience uses terms that come from different philosophical traditions and have their own history of meaning. We need to clarify which terms are most adequate to shape the theoretical paradigm of neuroscientific research and to reflect its findings. For example, the terms “process”, “relevance”, “imagination”, “meaning”, “value”, and “evidence” are less clear for cognitive neuroscience purposes than they might appear in everyday speech (and, therefore, are shaped to indicate specific, technical meanings). Such attaching of technical meanings to everyday words is often necessary and a normal part of doing science.\n\nB) The so-called linguistic turn in philosophy had significant implications for the use of language. Even our everyday phrases such as “to think,” “to know,” or “to do” imply a connection to reality. We cannot avoid terms of this sort in this paper, but space constraints do not allow for philosophical discussion of their use. Suffice it to say that the terminology used in this field, especially when distinguished from the language of folk psychology, includes its relation to reality as a key aspect.\n\nC) There is the behavioral level, which can be accounted for by heuristic cognitive models.
Models of this sort have been validated by empirical research, for example the multimodal networks of memory, attention, and language (Levelt, 1993; Mesulam, 1990).\n\nD) Modern neurophysiological methods such as functional imaging and electroencephalographic techniques allow us to explore the temporal order in which different brain areas are engaged during performance of specific behavioral tasks. Studies of this sort have yielded models of brain function that can be assessed in terms of both plausibility and topographic and temporal realization in the human brain. An example is the review on the recently widely debated issue of “free will” (Hallett, 2016).\n\nThe complexity of the psychological processes involved expands greatly when individuals interact with their environment to understand what is going on around them. From birth onwards, humans have to learn to analyze signals coming from the environment in order to behave appropriately in response to them (Seitz et al., 2009). Also, they must develop insight into their own sensory capacities and bodily strength, and rely on them. These capacities are linked to the concept of a bodily and mental identity combined with self-esteem and a sense of agency (Farrer & Frith, 2002; Northoff, 2011). This linkage allows people to retain a high degree of subjective certainty even when the situation is objectively unclear.\n\nWe can observe events at a behavioral level, but we have yet to learn what cognitive processes make for evaluative judgments at any level of specificity. Where do cognitive processes get the criteria they “use” to measure and evaluate internal and external stimuli? If we suppose the existence of a kind of “valuation system,” it may be best and intellectually sound to consider it as one of several aspects of a meaning system. This is because the concepts of measuring and valuing are intimately linked through meaning.
For example, what has positive meaning is also valued, and what is valued is or has been measured, which affords its meaning for the person. In fact, all of the recent scholarship on meaning systems either explicitly or implicitly includes values or valuing among the components of a meaning system (Markman et al., 2013; Park, 2005). Therefore, just as global meaning systems are psychological structures composed of interactive components that guide the interpretation of and response to information, so also is a “valuation system” a component of a meaning system that interacts with all the other components and contributes importantly to the operation of the whole system.

People combine formal analytic and subjective affective judgments to arrive at propositions of the form “I believe that …”. We assume that the basic processes of believing are universal but are also modulated by human individuality. For example, it has been shown that individuals differ in how they detect and interpret noisy optical signals, and some might be prone to magical ideation. In fact, magical ideation might influence the judgement of contingencies (Adelson, 1993; Brugger & Graves, 1997).

Recently, processes of believing have been labelled “credition” (Angel, 2013). Credition is a neologism based on the Latin verb credere (to believe). The notion of credition is different from faith, religion, and spirituality, and provides an empirical psychophysiological framework for the study of what believing is at the psychological, neuroscientific, and social levels of analysis. Doing this involves multilevel data mapping (Paloutzian & Park, 2013; Paloutzian & Park, 2015), or bi-directionally “translating” the data and concepts from one level of analysis to an adjacent level of analysis in order to assess the degree to which they correspond.
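The individual variation in detecting and interpreting noisy signals mentioned above is commonly described in terms of signal detection theory. The following sketch is purely illustrative (the parameter values are hypothetical and not drawn from the cited studies): an equal-variance Gaussian model shows how two observers with identical sensitivity but different decision criteria differ in how often they report a “signal” in pure noise.

```python
import math

def hit_and_false_alarm_rates(d_prime, criterion):
    """Equal-variance Gaussian signal-detection model.

    d_prime:   separation between the noise (mean 0) and signal (mean d') distributions
    criterion: decision threshold c; the observer reports "signal" above it
    Returns (hit_rate, false_alarm_rate).
    """
    def phi(x):  # standard normal cumulative distribution function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    hit = 1.0 - phi(criterion - d_prime)  # P(report signal | signal present)
    fa = 1.0 - phi(criterion)             # P(report signal | noise only)
    return hit, fa

# Two hypothetical observers with identical sensitivity but different criteria:
# a liberal criterion yields more hits, but also more "signals" seen in pure noise.
conservative = hit_and_false_alarm_rates(d_prime=1.5, criterion=1.0)
liberal = hit_and_false_alarm_rates(d_prime=1.5, criterion=0.2)
```

On this toy model, a shift of the criterion alone, with sensitivity held constant, changes how readily an observer reports meaningful structure in noise, one simple way to express stable individual differences in interpreting ambiguous input.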
In outlining the heuristic model of credition, Angel & Seitz (2016) summarized it as comprising the cognitive and emotional operations that afford believing. In particular, they presented correspondences between the operations of cognition, emotion, and credition and the neurophysiological processes of perception and valuation.

To understand the process of believing, it is essential to understand how people attribute personal meaning to specific sensory perceptions (Paloutzian & Mukai, 2016; Seitz & Angel, 2015). Two dynamic and reciprocal processes are at work to enable this.

(1) Perception deals with the formal characteristics of physical stimuli in the outside world (see Figure 1). The process employs sensory systems such as vision, audition, and somatosensation as well as higher order sensory information processing. The resulting representations comprise feature identification, stereognosis, associations of pragmatic use, object-name associations, etc. This is a physical process that involves highly complex interactions between explorative movements and object perception and results in comprehending the object’s features (Jeannerod, 1995; Roland & Mortensen, 1987). Internal mental states effectively represent external states in a probabilistic fashion (Friston et al., 2014). As illustrated in research on unstable picture puzzles, objects become more identifiable against a noisy background when either the signal-to-noise ratio or the duration of exposure increases (Takei & Nishida, 2010). Moreover, the physical characteristics of objects are processed such that the perceived composition of the components of an object is matched against that of a previously perceived item (Adelson, 1993). This process may distort the perception of the physical characteristics of the object but results in a meaningful illusion.

Perception refers to the formal comprehension of external items and events; valuation affords the attribution of personal meaning to them.
Both psychic functions operate in a dynamic and reciprocal fashion, leading to personal probabilistic representations in the human brain. Although these personal probabilistic representations are formed within milliseconds and are typically implicit, they may become explicit owing to high emotional loading and repetitive exploration. The subsequent actions are based on the personal probabilistic representations, which are loaded with probabilistic predictions of reward and cost.

The speed with which sensory information is processed is important. Nervous tissue is exceptionally fast at doing so owing to the high excitability of nervous membranes, which allows for rapid inter-cell communication. High-speed information processing is a phylogenetic advantage for the control of behavior. Because of this high speed, most sensory information is processed non-consciously in the brain; only some bits of information enter consciousness. Accurate material categorization of real-world images occurs in as little as 30 ms but increases in accuracy with longer exposure times, up to 120 ms (Sharan et al., 2014; van Gaal et al., 2011). As soon as information reaches the primary visual cortex, early automatic processing in the 100 ms range affects information transmission further downstream (Nortmann et al., 2015). This feed-forward processing has the feature of predictive coding. For example, early responses in the amygdala (40 to 140 ms) are unaffected by attentional load, while later responses (280 to 410 ms) in the amygdala are modulated by attention (Kuo et al., 2009).

The perception of objects has many commonalities with the perception of events, but there are also clear differences (Radvansky & Zacks, 2014). The most important difference is that objects are static entities, whereas events are fluid and evolve over time. Accordingly, events are perceived from a succession of patterns in which items of interest change over time in a coherent manner.
Thus, an event becomes a meaningful percept through temporal coding. As an example, in a virtual reality environment the impression of a ball moving towards the observer is generated by the increasing size of the ball against a static background (Cameirão et al., 2010). Similarly, the movement of a ball between two persons in a virtual landscape generates the impression that the two people are throwing a ball to each other. Thus, the observer constructs a meaning and attributes it to the observed, temporally evolving events. Other examples of temporal coding of events include the processes involved in writing, reading, and playing and listening to music. Again, decoding the temporal sequence of single events provides the meaning of the events. In fact, the rapid temporal evolution of the electrical activity in the peripheral nerves recorded during handwriting can be played back to the nerves and shown to be capable of producing the same limb movements (Aimonetti et al., 2007).

(2) In addition, a dynamic and reciprocal process is concerned with processing the affective value of physical stimuli in the outside world and attributing person-specific meaning to them (Figure 1). The personal probabilistic representations that result from these processes are typically implicit but can become explicit when the stimuli trigger high personal meaning (Friston et al., 2014). Prominent negative features of this sort are signals of potential harm or threat; prominent positive features may be signals of beauty or pleasantness (Rolls, 2006). Such affective labeling is behaviorally highly relevant because it may evoke opposite motivations and responses, such as avoidance or desire.

Objects of special relevance for humans are human faces, since facial expressions of emotion characterize interpersonal encounters and induce mentalizing processes leading to the interpretation of the mental as well as the emotional state of the counterpart (Potthoff & Seitz, 2015).
As in the case of the identification of other objects, these processes are fast; they take place within 40 ms, as is evident from behavioral and neurophysiological studies (Bar et al., 2006; Smith, 2012). In a more general sense, one may wonder how subjective categories such as aesthetic judgments become important. Pertinent to this issue, it has been shown that individual aesthetic preferences for faces are shaped by the environment associated with an individual, not by genes, as found in judgments of attractiveness in over 570 monozygotic twins (Germine et al., 2015).

Finally, repeated experience with the same environmental objects or events stabilizes their cognitive-emotional representations so that, for example, familiarity with an object or information promotes learning about it and increases a sense of trust in the object or information (Chang et al., 2010; d’Acremont et al., 2013; Henkel & Mattson, 2011). In addition, representations already formed will be updated as new items and information are accommodated to the store of acquired knowledge. Also, there is recent evidence that learning is accompanied by subjective emotional loading. For example, learning invokes a sense of confidence that increases in proportion to the number of observations of a task as well as task performance (Meyniel et al., 2015).

Perceived information is what motivates the generation of actions (Figure 1). Generating actions involves intentions to act, action selection, inhibition of unwanted acts, and predicting the reward and costs of acts (Nachev et al., 2008; Passingham et al., 2010). In general terms, this refers to deciding what to do next. The neuroscientific basis for decision making has been shown to be related to reward valuation (Grabenhorst & Rolls, 2011) and unconscious or intuitive selection that evolves within far less than 400 ms (Chen et al., 2010; Kahnt et al., 2010; Schultze-Kraft et al., 2016).
The processes that regulate the performance of actions are replete with probabilistic reward and cost predictions determined by the personal meanings attributed to the mental representations of the signals from the outside world (Friston et al., 2014). The action-perception-valuation triad has been postulated to account, in the context of a hierarchical dimension, for computations of physical, social, and cultural matters (Sugiura et al., 2015).

Functional anatomy of the believing process

Valuation of perceived information and making attributions intimately involve the medial frontal cortex (Grabenhorst & Rolls, 2011; Kahnt et al., 2010; Seitz et al., 2006; van Overwalle, 2009). This is also true for attributions about the mental states of others (Bird & Viding, 2014; Kanske et al., 2015). The dorsolateral prefrontal cortex is specifically involved in making attentive decisions (Gray et al., 2002; Niendam et al., 2012). These psychological processes include the valuation of delayed reward and engage extensive brain circuits including the medial and lateral prefrontal cortex (Niendam et al., 2012; Peters & Büchel, 2009; Thompson & Duncan, 2009). The activation of the dorsolateral prefrontal cortex during decision processes was correlated with gamma-activity and found to be related to the capacity of the working memory system and to fluid intelligence scores (Fedorenko et al., 2013; Roux et al., 2012). There is recent experimental evidence that believing can also affect activity in these brain areas (Howlett & Paulus, 2015; Ninaus et al., 2013). As dopaminergic midbrain areas are tightly connected with these areas and encode shifts in beliefs, they may contribute to belief updating (Schwartenbeck et al., 2016). This means that the control of a person’s behavior is mediated by extensive neural networks that include the medial and dorsolateral prefrontal cortex, related to comprehension of the formal sensory information and to emotional valuation of that information.
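The idea that neural activity encodes “shifts in beliefs” (Schwartenbeck et al., 2016) can be made concrete with a minimal Bayesian sketch. The scenario and numbers below are hypothetical and serve only to illustrate how a belief update can be quantified as the divergence between prior and posterior, and how successive identical observations produce progressively smaller shifts as evidence accumulates.

```python
import math

def bayes_update(prior, likelihoods):
    """Posterior over a discrete set of hypotheses after one observation (Bayes' rule)."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def belief_shift(prior, posterior):
    """KL divergence D(posterior || prior): one way to quantify the size of a belief update."""
    return sum(q * math.log(q / p) for q, p in zip(posterior, prior) if q > 0)

# Hypothetical example: two hypotheses about a partner's intent, "friendly" vs "hostile".
prior = [0.5, 0.5]
# A smile is assumed more likely if friendly: P(smile|friendly)=0.8, P(smile|hostile)=0.3.
posterior = bayes_update(prior, [0.8, 0.3])
shift = belief_shift(prior, posterior)

# A second, identical observation shifts beliefs less: updates shrink as certainty grows.
posterior2 = bayes_update(posterior, [0.8, 0.3])
shift2 = belief_shift(posterior, posterior2)
```

In this reading, the reported midbrain signals would track a quantity like `shift` rather than the belief itself; the sketch is an interpretation of the cited finding, not the authors’ model.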
As a model of the believing process, we studied the understanding of other people’s behaviour in terms of most probable explanations using multidimensional functional magnetic resonance imaging (fMRI). The set-up of the experiment summarized in Figure 2 (an event-related fMRI study) allowed us to separate a pre-decision phase, in which emotional information was presented, from the phase in which the subjects were required to make their decision on verbal material. Since the information on which the decisions had to be based was presented only below the level of awareness, the subjects were put in a state of uncertainty. This is a situation in which it is typical for someone to rely on what he or she already believes to be correct.

Red areas were functionally connected to activation of the posterior dorsolateral prefrontal cortex (pDLFC) during empathic evaluation in the pre-decision phase. The right pDLFC was connected with the left anterior insula (AI). Green areas belonged to a widespread network involving the anterior dorsolateral prefrontal cortex (aDLFC), which was activated in relation to verbal processing in the actual decision phase. The right aDLFC was connected to the right and left dorsolateral prefrontal cortex (DLFC), the mediodorsal prefrontal cortex (MFC), and the left inferior parietal lobule (IP). Note that masking of the subliminal (40 ms) visual stimulus required the subjects to make a decision according to what they believed was the right answer. Further details are given in Prochnow et al. (2015).

The figure shows two different functional circuits that include the brain areas described above.
In addition, these circuits included the supplementary motor area (SMA) in the dorsal portion of the medial frontal cortex, which provides the link to proactive movement control by adjusting the level of motor readiness affording response inhibition (Chen et al., 2010; Meder et al., 2016) as well as free-choice movement coding and behavioral tactics (Matsuzaka et al., 2016; Passingham et al., 2010). Meta-analytic studies of functional neuroimaging data have revealed that different nodes in the medial frontal cortex, including the SMA and pre-SMA, are involved in the proactive and inhibitory control of actions (Seitz et al., 2006; van Overwalle, 2009). The neural hubs identified in these studies were arranged in a caudo-rostral gradient of increasingly more abstract information processing. As evident from resting-state connectivity, the medial cortical areas and the parietal cortex were shown to be part of the so-called default mode of brain function in humans (Gusnard et al., 2001).

Prospective valuation of rewards in a familiar context was reported to be related to activation of key nodes of emotional and autobiographical memory retrieval and to be dynamically modulated by frontal-striatal connectivity (Sasse et al., 2015). The modulation of cortical information by processing in trans-striatal relay loops has been described as of key importance for learning routines and rules as well as their combinations (Graybiel & Grafton, 2015). Accordingly, the multiplex aspect of probabilistic cognitive-emotional representations involves cortical and subcortical networks. In effect, these data support the notion that believing in personal probabilistic representations is a normal brain function.

Belief systems

In initiating this scientific examination of believing and belief systems, several distinctions in the meanings of key terms must be kept in mind.
For example, in addition to “belief” too often being assumed to connote something religious or spiritual, it has also too often been assumed that issues of belief do not concern people who are nonreligious or generically secular (Stich, 1996). Sometimes it is assumed that they don’t have any beliefs (Bullivant & Ruse, 2013). But social science research documents that believing abounds in such persons even though the content differs from that of typical religious beliefs (Schnell & Keenan, 2011). Further, the meanings of specific terms should be meticulously teased apart in order to avoid confusion. For example, because “religion” is not one thing but many, it is better to talk about specific religions, because almost no statement about what “religion” does will hold for all of them (Paloutzian & Park, 2013). And it is circular to define religion with reference to “the sacred” or as the “search for significance in ways related to the sacred” (Pargament, 1992), because anything can be significant (i.e., matter to someone) and literally anything, including a rock, idea, or war, can be attributed the property of sacrality. Beyond that, it is important to acknowledge that “religiosity” and “religion” are not the same. For example, various and even contradictory expressions of religiosity may be associated with a given religion. Moreover, “religiousness” connotes the processes that mediate how one appropriates and manifests one’s religion in life, not “the religion” as such (whatever “the religion” might “really” mean).

Theoretically, “religious experience” cuts across most of the above constructs, in addition to being manifest in both the individual and collective realms. But “experience” is a phenomenologically private sphere. It is not a matter of public knowledge, even though the claims, words, and behaviors associated with purported experiences are.
For reasons such as these, “religion” explains little about how individuals make use of religion when fostering their “religiosity.” For instance, some may integrate their religion into their worldview in a more peaceful and harmonious way, while others may do it in a more aggressive or aversive manner. Likewise, for some, adopting one specific version of one religion is the key to this life and a life in the hereafter, whereas for others anything or nothing will do just fine. For some, William James’ (1985) emphasis on “religious experience” constitutes the defining moment of a life. In contrast, it has recently been argued that humans are born with an “implicit religiosity” (Schnell, 2003) and that they are “born believers” (Barrett, 2012). These ideas suggest that humans come automatically equipped to engage in the process of believing many things, whether secular or religious, ordinary and mundane or lofty and idealized. That such objects of belief become incorporated into over-arching belief systems is consistent with the accumulating empirical evidence that the human proclivity toward worldview construction can be conceptualized as a by-product of normal human cognitive processes (Boyer, 2003; Kapogiannis et al., 2009).

The raw phenomenological mental representations in a person’s mind are not accessible to scientific exploration; thus their veridicality in the purest sense is neither provable nor disconfirmable. They constitute personal beliefs (i.e., meanings made) that can be characterized as falling into two general categories. First, there are beliefs that everyone in a group or society will hold, such as the belief that they all see an apple sitting on the table and that they can eat it and it will taste good. Beliefs of this sort have been addressed earlier and are subject to some degree of public verifiability.
Second, there are beliefs apparently unique to one person, such as someone being certain that he or she saw God and heard God’s voice – a report not subject to public verification. Either way, we engage in an inherent functional valuation process that involves focusing attention on the incoming information in a dynamic bottom-up-top-down fashion, the result of which forms our probabilistic accounts of, and beliefs about, what is observed in the outside world (Wiese et al., 2014). Thus, the beliefs of individuals are created by mental processes that involve perception, attention, valuation, and storage as well as updating of information, as described in detail in the previous part of this communication.

Given the above, it is obvious that religion-related and belief-related words can easily become conflated and cause great confusion. To explain this confusion and help avoid some of it, let us probe simple language expressions commonly used in everyday circumstances. When someone says, “I believe God is the creator of the world,” they are stating a subjective proposition that cannot be empirically verified. Not uncommonly, the person becomes emotionally upset upon hearing others question or negate the statement. Similarly, the statement “I believe this is so and so” expresses a person’s subjective perspective that the observed fact or event has only a limited degree of certainty for him or her. The statement is similar to “I think this is so and so” but, in contrast, reflects the person’s higher degree of certainty, on which he or she builds an emotional inclination to defend this stance. This is probably because people’s intuitive belief system appears to represent beliefs as either true or false rather than on an uncertainty gradient (Johnson et al., 2015). Moreover, beliefs activated by cues can profoundly affect behaviour, as has been found for gaze and other behaviors similar to those that respond to primes (Kristjánsson & Campana, 2010; Wiese et al., 2014).
Conversely, hypnotic suggestion can be seen as a form of inducing an imagination that is temporarily accepted as if it were believed, since hypnosis can exert profound changes on a person’s mood, thoughts, perceptions, and behaviour (Halligan & Oakley, 2014). Consequently, we assume that personal probabilistic representations form the knowledge system of an individual, displaying a high degree of momentary subjective relevance.

As people grow up and are embedded in social groups, successful communication is fundamental to the exchange of meanings of perceptions, imaginations, and mental states. Thus, group evolution, in addition to and not in place of the evolution of individuals, becomes important. Owing to the wealth of information with which each individual is confronted every day, information is communicated from person to person by language. Language is characterized by the human capacity to combine meaningful units into an unlimited variety of larger structures, each differing systematically in meaning. The capacity to generate a limitless range of meaningful expressions from a hierarchical structure of finite elements differentiates human language from all other animal communication systems (Fitch & Hauser, 2004). This means that the most complete understanding of the processes of believing and communicating among humans requires that we examine the processes from micro to macro levels within a multilevel interdisciplinary paradigm (Emmons & Paloutzian, 2003). In this instance, the intersection of anthropology and neuroscience can help us understand the relationships between the socio-cultural contexts in which people live and human brain function (Keysers & Gazzola, 2010; Vogeley & Roepstorff, 2009).

Individuals are constantly faced with boundaries imposed by the surrounding people. There are universal dimensions of interpersonal and intergroup social cognition that guide the individual’s behavior (Fiske et al., 2007).
Thus, living in a society requires the generation of systems of probabilistic representations that are similar across individuals and reliable for communities, as they give meaning to people’s collective work. As detailed by Schnell (2012), narratives typically taught implicitly and explicitly in families and schools provide the historical and identity-relevant background information for social groups, through passive listening as well as active reading and reciting. Thereby, narratives provide the formal content of belief systems (Figure 3).

Narratives are socially transmitted in communities with many repetitions and ramifications in colloquial and formal settings, enabling individuals to comprehend their contents. Similarly, socially practiced rituals allow for the attribution of personal meaning to the belief systems. Importantly, subjects may be exposed to both sources repetitively over many years, internalizing their meaning through dedicated and reciprocal psychophysical processes that lead to complex probabilistic representations, i.e. belief systems. These belief systems are similar among individuals who belong to the same social group or society. The subsequent behavior of the individuals is loaded with probabilistic predictions about their future, termed hope and fear.

Narratives constitute a mental construct or meaning for the history of a community or society as well as for occasions of festive events throughout the year. People may be repeatedly exposed to a narrative (e.g., at annual special events); this affords them an opportunity to comprehend its meaning and learn it by heart. Acceptance of the narratives is strengthened as people participate in group rituals, which involve defined actions whose performance within a community or society leads to highly predictive experience by their members (La Cour & Hvidt, 2010; Seligman & Brown, 2010).
Through such group activities, emotional value and personal meaning are attributed to the belief systems (Figure 3). Looked at from a more molecular level of analysis, the psychophysical and neurophysiological processes affording the internalization of narratives and rituals in individuals have been summarized above.

People in communities and societies may be similar in their belief systems due to exposure to virtually identical narratives and rituals. Rituals play a key role in stabilizing beliefs owing to their standardized practice and regular repetition, at a given moment or, more importantly, at regular times each year. This is supported by high-fidelity imitation, which mediates conventional learning (Legare & Nielsen, 2015). Because rituals are rooted in narratives or myths that refer to the past, even beyond the limits of personal experience, their regular repetition produces feelings of familiarity, high predictability, reward, and transcendence. They can constitute the experience and knowledge and, thereby, the belief systems of individuals from childhood onwards (Seligman & Brown, 2010). With this background, the individual’s experience gets linked to narrative knowledge through instruction and/or associative learning – which is extremely powerful because it may take only one to two repetitions (Blechert et al., 2016). In addition, the combination of verbal and pragmatic information generates trust in the promise provided by the narratives, strengthening people against perceived threats, even to their physical integrity (Boyer & Liénard, 2006). Ultimately, it leads to the inference of moral and ethical standards that are derived from such narratives and limits the possible actions to be selected (Mesulam, 2008).

Nevertheless, belief systems have a personal aspect that reflects the experience, attitudes, and personality of the individual. As experience changes over time, belief systems are likely to change as well.
In addition, each individual has a unique intuitive experience of the world. For example, intuitive beliefs may originate from the naive but nevertheless fundamental dualistic experience of the surrounding immanent physical world and the seemingly immaterial sky. They can be a powerful component of an individual’s belief system. Belief systems include the individual’s implicit or explicit answers (or attempts at such) on how to cope with the future, and how to provide existential meaning, whether secular, spiritual, or religious (La Cour & Hvidt, 2010). They also address other issues such as what values to hold, what priorities to live by, and what is ultimately most important. The predictions for the future are probabilistic; they may provide hope of reward or fear of punishment depending on how one has lived (Figure 3). Also, depending on the promises of a particular belief system, the individual may be in a position to anticipate his or her future in the way most suited to his or her preference. At a neurophysiological level, it was shown that the cultural self-construal mind-set involves parieto-frontal brain areas, including the medial frontal cortex, as described above for spontaneous evaluation and behavioral control (Leuthold et al., 2015; Wang et al., 2013).

Overall, we are beginning to understand one of the most fundamental processes that enable humans to be human – the process of believing. We suggest that the mental processes described in this paper represent fundamental human brain functions that transform cognitive and emotional perspective taking into personal perspective making, i.e., into views of secular and non-secular transcendence. One limitation of the concept of believing as presented here is that it is rooted in Western thinking, especially in the English language.
Although beliefs and believing can have different connotations in various religions and cultural environments (Angel & Seitz, 2016), our model can nevertheless generate a diversified but collaborative discussion on how to relate empirical data to science-based models of believing.

Author contributions

RJS wrote and edited the manuscript and designed the figures. HFA and RLP contributed to its content and edited the manuscript.

Competing interests

No competing interests were disclosed.

Grant information

This work was supported by funds of the City of Graz, Austria, to the Credition Project.

References

Adelson EH: Perceptual organization and the judgment of brightness. Science. 1993; 262(5142): 2042–2044.

Aimonetti JM, Hospod V, Roll JP, et al.: Cutaneous afferents provide a neuronal population vector that encodes the orientation of human ankle movements. J Physiol. 2007; 580(Pt. 2): 649–658.

Angel HF: Credition. In: Runehov ALC, Oviedo L, Azari NP (eds), Encyclopedia of Sciences and Religion. Springer Reference, Dordrecht. 2013; 1: 536–539.

Angel HF, Seitz RJ: Process of believing as fundamental brain function: the concept of credition. SFU Research Bulletin. 2016; 3: 1–20.

Armstrong DM: Belief, truth, and knowledge. Cambridge: Cambridge University Press; 1973.

Azari NP, Nickel J, Wunderlich G, et al.: Neural correlates of religious experience. Eur J Neurosci. 2001; 13(8): 1649–1652.

Bar M, Neta M, Linz H: Very first impressions. Emotion. 2006; 6(2): 269–278.

Barrett JL: Born believers: The science of children’s religious belief. New York: Atria Books. 2012.
Bird G, Viding E: The self to other model of empathy: providing a new framework for understanding empathy impairments in psychopathy, autism, and alexithymia. Neurosci Biobehav Rev. 2014; 47: 520–532.

Blechert J, Testa G, Georgii C, et al.: The Pavlovian craver: Neural and experiential correlates of single trial naturalistic food conditioning in humans. Physiol Behav. 2016; 158: 18–25.

Boyer P: Religious thought and behaviour as by-products of brain function. Trends Cogn Sci. 2003; 7(3): 119–124.

Boyer P, Liénard P: Why ritualized behavior? Precaution systems and action parsing in developmental, pathological and cultural rituals. Behav Brain Sci. 2006; 29(6): 595–613; discussion 613–50.

Brugger P, Graves RE: Testing vs. believing hypotheses: Magical ideation in the judgement of contingencies. Cogn Neuropsychiatry. 1997; 2(4): 251–72.

Bullivant S, Ruse M: The Oxford Handbook of Atheism. Oxford: Oxford University Press. 2013.

Cameirão MS, Badia SB, Oller ED, et al.: Neurorehabilitation using the virtual reality based Rehabilitation Gaming System: methodology, design, psychometrics, usability and validation. J Neuroeng Rehabil. 2010; 7: 48.

Chang LJ, Doll BB, van ’t Wout M, et al.: Seeing is believing: trustworthiness as a dynamic belief. Cogn Psychol. 2010; 61(2): 87–105.

Chen X, Scango KW, Stuphorn V: Supplementary motor area exerts proactive and reactive control of arm movements. J Neurosci. 2010; 30(44): 14657–14675.

Churchland PS, Churchland PM: What are beliefs? In: Krueger F, Grafman J (eds), The neural basis of human belief systems.
Psychology Press, Hove, 2013; 1–17. Reference Source\n\nConnors MH, Halligan PW: A cognitive account of belief: a tentative road map. Front Psychol. 2015; 5: 1588. PubMed Abstract | Publisher Full Text | Free Full Text\n\nd'Acremont M, Schultz W, Bossaerts P: The human brain encodes event frequencies while forming subjective beliefs. J Neurosci. 2013; 33(26): 10887–10897. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElliott R, Jobber D, Sharp J: Using the theory of reasoned action to understand organizational behaviour: The role of belief salience. Br J Soc Psychol. 1995; 34(2): 161–172. Publisher Full Text\n\nEmmons RA, Paloutzian RF: The psychology of religion. Annu Rev Psychol. 2003; 54: 377–402. PubMed Abstract | Publisher Full Text\n\nFarrer C, Frith CD: Experiencing oneself vs another person as being the cause of an action: the neural correlates of the experience of agency. Neuroimage. 2002; 15(3): 596–603. PubMed Abstract | Publisher Full Text\n\nFedorenko E, Duncan J, Kanwisher N: Broad domain generality in focal regions of frontal and parietal cortex. Proc Natl Acad Sci U S A. 2013; 110(41): 16616–16621. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFiske ST, Cuddy AJ, Glick P: Universal dimensions of social cognition: warmth and competence. Trends Cogn Sci. 2007; 11(2): 77–83. PubMed Abstract | Publisher Full Text\n\nFitch WT, Hauser MD: Computational constraints on syntactic processing in a nonhuman primate. Science. 2004; 303(5656): 377–380. PubMed Abstract | Publisher Full Text\n\nFreud S: The future of an illusion. Translation from German by Robson-Scott WD, Strachey J. Hogarth Press, London; 1928. Reference Source\n\nFriston K, Sengupta B, Auletta G: Cognitive dynamics: from attractors to active inference. Proc IEEE. 2014; 102(4): 427–445. Publisher Full Text\n\nGermine L, Russell R, Bronstad PM, et al.: Individual Aesthetic Preferences for Faces Are Shaped Mostly by Environments, Not Genes. Curr Biol. 2015; 25(20): 2684–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrabenhorst F, Rolls ET: Value, pleasure and choice in the ventral prefrontal cortex. Trends Cogn Sci. 2011; 15(2): 56–67. PubMed Abstract | Publisher Full Text\n\nGray JR, Braver TS, Raichle ME: Integration of emotion and cognition in the lateral prefrontal cortex. Proc Natl Acad Sci U S A. 2002; 99(6): 4115–4120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGraybiel AM, Grafton ST: The striatum: where skills and habits meet. Cold Spring Harb Perspect Biol. 2015; 7(8): a021691. PubMed Abstract | Publisher Full Text\n\nGusnard DA, Akbudak E, Shulman GL, et al.: Medial prefrontal cortex and self-referential mental activity: relation to a default mode of brain function. Proc Natl Acad Sci U S A. 2001; 98(7): 4259–4264. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHallett M: Physiology of Free Will. Ann Neurol. 2016; 80(1): 5–12. Publisher Full Text\n\nHalligan PW, Oakley DA: Hypnosis and beyond: exploring the broader domain of suggestion. Psychology of Consciousness: Theory, Research, and Practice. 2014; 1(2): 105–122. Publisher Full Text\n\nHarris S, Kaplan JT, Curiel A, et al.: The neural correlates of religious and nonreligious belief. PLoS One. 2009; 4(10): e0007272. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarris S, Sheth SA, Cohen MS: Functional neuroimaging of belief, disbelief, and uncertainty. Ann Neurol. 2008; 63(2): 141–147. PubMed Abstract | Publisher Full Text\n\nHelm P: Faith and Reason. Oxford University Press, Oxford; 1999. Reference Source\n\nHenkel LA, Mattson ME: Reading is believing: the truth effect and source of credibility. Conscious Cogn. 2011; 20(4): 1705–1721. PubMed Abstract | Publisher Full Text\n\nHills P, Francis LJ, Argyle M, et al.: Primary personality trait correlates of religious practice and orientation. Pers Indiv Differ. 2004; 36(1): 61–73. 
Publisher Full Text\n\nHowlett JR, Paulus MP: The neural basis of testable and non-testable beliefs. PLoS One. 2015; 10(5): e0124596. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJames W: The varieties of religious experience: A study in human nature. Cambridge MA, Harvard University Press (Original work published in 1902); 1985. Reference Source\n\nJeannerod M: Mental imagery in the motor context. Neuropsychologia. 1995; 33(11): 1419–1432. PubMed Abstract | Publisher Full Text\n\nJohnson SGB, Merchant T, Keil FC: Predictions from uncertain beliefs. In: Noelle DC, Dale R, Warlaumont AS, Yoshimi J, Matlock T, Jennings CD, Maglio PP (Eds.), Proc 37th Ann Conf Cogn Sci Soc. Austin, TX: Cogn Sci Soc; 2015; 1003–1008. Reference Source\n\nKahnt T, Heinzle J, Park SQ, et al.: The neural code of reward anticipation in human orbitofrontal cortex. Proc Natl Acad Sci U S A. 2010; 107(13): 6010–6015. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKanske P, Böckler A, Trautwein FM, et al.: Dissecting the social brain: Introducing the EmpaToM to reveal distinct neural networks and brain–behavior relations for empathy and Theory of Mind. Neuroimage. 2015; 122: 6–19. PubMed Abstract | Publisher Full Text\n\nKapogiannis D, Barbey AK, Su M, et al.: Cognitive and neural foundations of religious belief. Proc Natl Acad Sci U S A. 2009; 106(12): 4876–4881. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKeysers C, Gazzola V: Social neuroscience: mirror neurons recorded in humans. Curr Biol. 2010; 20(8): R353–354. PubMed Abstract | Publisher Full Text\n\nKoenig H, King D, Carson VB: Handbook of religion and health. (2nd ed.). New York: Oxford University Press. 2012. Reference Source\n\nKristjansson A, Campana G: Where perception meets memory: a review of repetition priming in visual search tasks. Atten Percept Psychophys. 2010; 72(1): 5–18. PubMed Abstract | Publisher Full Text\n\nKrueger F, Grafman J: The neural basis of human belief systems. 
Hove, East Sussex; New York: Psychology Press, 2013. Reference Source\n\nKuo WJ, Sjöström T, Chen YP, et al.: Intuition and deliberation: two systems for strategizing in the brain. Science. 2009; 324(5926): 519–522. PubMed Abstract | Publisher Full Text\n\nLa Cour P, Hvidt NC: Research on meaning-making and health in secular society: secular, spiritual and religious existential orientations. Soc Sci Med. 2010; 71(7): 1292–1299. PubMed Abstract | Publisher Full Text\n\nLangdon R, Coltheart M: The cognitive neuropsychology of delusions. Mind Lang. 2000; 15(1): 184–218. Publisher Full Text\n\nLe Poidevin R, McBeath M (eds.): The Philosophy of Time. Oxford: Oxford University Press, 1993. Reference Source\n\nLegare CH, Nielsen M: Imitation and innovation: The Dual Engines of Cultural Learning. Trends Cogn Sci. 2015; 19(11): 688–699. PubMed Abstract | Publisher Full Text\n\nLeuthold H, Kunkel A, Mackenzie IG, et al.: Online processing of moral transgressions: ERP evidence for spontaneous evaluation. Soc Cogn Affect Neurosci. 2015; 10(8): 1021–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLevelt WJM: Speaking - from intention to articulation. MIT Press, 1993. Reference Source\n\nMarkman KD, Proulx T, Lindberg ML: The Psychology of Meaning. Wash. DC: American Psychological Association, 2013. Publisher Full Text\n\nMatsuzaka Y, Tanji J, Mushiake H: Representation of Behavioral Tactics and Tactics-Action Transformation in the Primate Medial Prefrontal Cortex. J Neurosci. 2016; 36(22): 5974–5987. PubMed Abstract | Publisher Full Text\n\nMeder D, Haagensen BN, Hulme O, et al.: Tuning the Brake While Raising the Stake: Network Dynamics during Sequential Decision-Making. J Neurosci. 2016; 36(19): 5417–5426. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMesulam MM: Large-scale neurocognitive networks and distributed processing for attention, language, and memory. Ann Neurol. 1990; 28(5): 597–613. 
PubMed Abstract | Publisher Full Text\n\nMesulam M: Representation, inference, and transcendent encoding in neurocognitive networks of the human brain. Ann Neurol. 2008; 64(4): 367–378. PubMed Abstract | Publisher Full Text\n\nMeyniel F, Schlunegger D, Dehaene S: The Sense of Confidence during Probabilistic Learning: A Normative Account. PLoS Comput Biol. 2015; 11(6): e1004305. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNachev P, Kennard C, Husain M: Functional role of the supplementary and pre-supplementary motor areas. Nat Rev Neurosci. 2008; 9(11): 856–869. PubMed Abstract | Publisher Full Text\n\nNiendam TA, Laird AR, Ray KL, et al.: Meta-analytic evidence for a superordinate cognitive control network subserving diverse executive functions. Cogn Affect Behav Neurosci. 2012; 12(2): 241–268. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNinaus M, Kober SE, Witte M, et al.: Neural substrates of cognitive control under the belief of getting neurofeedback training. Front Hum Neurosci. 2013; 7: 914. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNorthoff G: Self and brain: what is self-related processing? Trends Cogn Sci. 2011; 15(5): 186–187. PubMed Abstract | Publisher Full Text\n\nNortmann N, Rekauzke S, Onat S, et al.: Primary visual cortex represents the difference between past and present. Cereb Cortex. 2015; 25(6): 1427–1440. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOviedo L: Recent Scientific Explanations of Religious Beliefs: A Systematic Account. In: Angel H-F, Oviedo L, Paloutzian R F, Runehov A L C, Seitz R.J (Eds.). Process of Believing: The Acquisition, Maintenance, and Change in Creditions. Dordrecht, Heidelberg: Springer, in press. 2016.\n\nPaloutzian RF, Mukai KJ: Believing, Remembering, and Imagining: The Roots and Fruits of Meanings Made and Remade. In: Angel H-F, Oviedo L, Paloutzian R F, Runehov A L C, Seitz R.J (Eds.). 
Process of Believing: The Acquisition, Maintenance, and Change in Creditions. Dordrecht, Heidelberg: Springer, in press, 2016. Reference Source\n\nPaloutzian RF, Park CL: Directions for the future of psychology of religion and spirituality: research advances in methodology and meaning systems. In: Paloutzian RF, Park CL (eds) Handb Psychology Religion Spirituality. 2nd edition, Guilford Press, 2013; 651–665. Reference Source\n\nPaloutzian RF, Park CL: Religiousness and spirituality: The psychology of multilevel meaning-making behavior. Religion Brain Behavior. 2015; 5(2): 166–178. Publisher Full Text\n\nPargament KI: Of means and ends: Religion and the search for significance. The Int J Psychology Religion. 1992; 2(4): 201–229. Publisher Full Text\n\nPark CL: Religion and meaning. In: Handb Psychology Religion Spirituality. Paloutzian RF, Park CL (eds) Guildford Press, New York, 2005; 295–314.\n\nPassingham RE, Bengtsson SL, Lau HC: Medial frontal cortex: from self-generated action to reflection on one‘s own performance. Trends Cogn Sci. 2010; 14(1): 16–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeters J, Büchel C: Overlapping and distinct neural systems code for subjective value during intertemporal and risky decision making. J Neurosci. 2009; 29(50): 15727–15734. PubMed Abstract | Publisher Full Text\n\nPotthoff D, Seitz RJ: Role of the first and second person perspective for control of behaviour: understanding other people’s facial expressions J Physiol Paris. 2015; pii: S0928-4257(15)30003-6. PubMed Abstract | Publisher Full Text .\n\nProchnow D, Brunheim S, Kossack H, et al.: Anterior and posterior subareas of the dorsolateral frontal cortex in socially relevant decisions based on masked affect expressions [version 1; referees: 2 approved with reservations]. F1000Res. 2015; 3: 212. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRadvansky GA, Zacks JM: Event Cognition. NY: Oxford University Press. 2014. 
Publisher Full Text\n\nRoland PE, Mortensen E: Somatosensory detection of microgeometry, macrogeometry and kinaesthesia in man. Brain Res Rev. 1987; 12(1): 1–42. Publisher Full Text\n\nRolls ET: Brain mechanisms underlying flavour and appetite. Philos Trans R Soc Lond B Biol Sci. 2006; 361(1471): 1123–1136. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoux F, Wibral M, Mohr HM, et al.: Gamma-band activity in human prefrontal cortex codes for the number of relevant items maintained in working memory. J Neurosci. 2012; 32(36): 12411–12420. PubMed Abstract | Publisher Full Text\n\nSasse LK, Peters J, Büchel C, et al.: Effects of prospective thinking on intertemporal choice: The role of familiarity. Hum Brain Mapp. 2015; 36(10): 4210–4221. PubMed Abstract | Publisher Full Text\n\nSchnell T: A framework for the study of implicit religion: the psychological theory of implicit religiosity. Implicit Religion. 2003; 6(2): 86–104. Publisher Full Text\n\nSchnell T: Spirituality with and without religion. Arch Psychol Religion. 2012; 34(1): 33–62. Publisher Full Text\n\nSchnell T, Keenan WJF: Meaning-making in an atheist world. Arch Psychol Religion. 2011; 33(1): 55–78. Publisher Full Text\n\nSchultze-Kraft M, Birman D, Rusconi M, et al.: The point of no return in vetoing self-initiated movements. Proc Natl Acad Sci U S A. 2016; 113(4): 1080–1085. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwartenbeck P, FitzGerald THB, Dolan R: Neural signals encoding shifts in beliefs. Neuroimage. 2016; 125: 578–586. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeitz RJ, Angel HF: Psychology of Religion and Spirituality: Meaning Making and processes of believing. Religion Brain Behav. 2015; 5(2): 139–147.Publisher Full Text\n\nSeitz RJ, Franz M, Azari NP: Value judgments and self-control of action: the role of the medial frontal cortex. Brain Res Rev. 2009; 60(2): 368–378. 
PubMed Abstract | Publisher Full Text\n\nSeitz RJ, Nickel J, Azari NP: Functional modularity of the medial prefrontal cortex: involvement in human empathy. Neuropsychology. 2006; 20(6): 743–751. PubMed Abstract | Publisher Full Text\n\nSeligman R, Brown RA: Theory and method at the intersection of anthropology and cultural neuroscience. Soc Cogn Affect Neurosci. 2010; 5(2–3): 130–137. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSharan L, Rosenholtz R, Adelson EH: Accuracy and speed of material categorization in real-world images. J Vis. 2014; 14(9): pii: 12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith ML: Rapid processing of emotional expressions without conscious awareness. Cereb Cortex. 2012; 22(8): 1748–1760. PubMed Abstract | Publisher Full Text\n\nStich S: From Folk Psychology to Cognitive Science. The Case against Belief. 2nd Ed., Cambridge Massachusetts, 1996.\n\nSugiura M, Seitz RJ, Angel HF: Models and neural bases of the believing process. JBBS. 2015; 5: 12–23. Publisher Full Text\n\nTakei S, Nishida S: Perceptual ambiguity of bistable visual stimuli causes no or little increase in perceptual latency. J Vis. 2010; 10(4): 23.1–15. PubMed Abstract | Publisher Full Text\n\nTaves A: Reverse engineering complex cultural concepts: Identifying building blocks of “religion”. J Cogn Culture. 2015; 15(1-2): 191–216. Publisher Full Text\n\nThompson R, Duncan J: Attentional modulation of stimulus representation in human fronto-parietal cortex. Neuroimage. 2009; 48(2): 436–448. PubMed Abstract | Publisher Full Text\n\nvan Gaal S, Scholte HS, Lamme VA, et al.: Pre-SMA gray-matter density predicts individual differences in action selection in the face of conscious and unconscious response conflict. J Cogn Neurosci. 2011; 23(2): 382–390. PubMed Abstract | Publisher Full Text\n\nVan Overwalle F: Social cognition and the brain: a meta-analysis. Hum Brain Mapp. 2009; 30(3): 829–858. 
PubMed Abstract | Publisher Full Text\n\nVisala A, Angel HF: The theory of credition and philosophical accounts of belief: looking for common ground. In: Angel H-F, Oviedo L, Paloutzian R F, Runehov A L C, Seitz R.J (Eds.). Process of Believing: The Acquisition, Maintenance, and Change in Creditions. Dordrecht, Heidelberg: Springer, in press. 2016. Reference Source\n\nVogeley K, Roepstorff A: Contextualising culture and social cognition. Trends Cogn Sci. 2009; 13(12): 511–516. PubMed Abstract | Publisher Full Text\n\nWang C, Oyserman D, Liu Q, et al.: Accessible cultural mind-set modulates default mode activity: evidence for the culturally situated brain. Soc Neurosci. 2013; 8(3): 203–216. PubMed Abstract | Publisher Full Text\n\nWiese E, Wykowska A, Müller HJ: What we observe is biased by what other people tell us: beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues. PLoS One. 2014; 9(4): e94529. PubMed Abstract | Publisher Full Text | Free Full Text
[ { "id": "17438", "date": "08 Nov 2016", "name": "Motoaki Sugiura", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a good comprehensive review and model of the scientific status of the religious and secular belief or believing process.\nMajor point:\nIt would be more helpful for the readers if there was emphasis on what the key aspects of the current model are that make it new and different from previous models by the authors or other researchers.\nMinor point:\nWhile the two figures have a very similar format, Figure 1 gives a specific quantity of time (i.e. 40 ms) but Figure 2 does not; the former looks like experimental data and the latter like a conceptual schema. It may be better to clarify whether the two figures play similar or very different roles in the manuscript (i.e. data or schema).", "responses": [ { "c_id": "2417", "date": "17 Jan 2017", "name": "Rudiger Seitz", "role": "Author Response", "response": "We thank the reviewer for his comments and now make clear on page 14 what the specific aspects of our model of believing are as compared to previous accounts. We assume that for some reason the time scale was not in the print-out of Figure 3. We absolutely agree that a time scale is important for this figure, since in contrast to Figure 1 the scale goes over years. We respectfully submit that the scale is shown in the figure." } ] }, { "id": "17817", "date": "21 Nov 2016", "name": "Peter W. 
Halligan", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nIn this paper, the authors discuss the notion of belief and its explanatory role across a wide range of academic disciplines. These include, for example, psychology, religion, philosophy, anthropology, and cognitive neuroscience. The authors aim to demonstrate that believing is a core function of the brain that guides much of our behaviour. They also argue that these cognitive processes operate in narratives and rituals across both religious and secular contexts. To address these goals, the authors provide a brief, if sometimes dense, discussion of belief, which includes the history and philosophy of the concept, the role of belief in anthropology and neuroscience, and belief’s more specific functions in religion, rituals, and narratives.\n\nThe paper provides an interesting contribution to a historically neglected area of psychology and cognitive neuroscience, namely understanding and characterising the nature of belief. That said, the paper is perhaps too ambitious in scope, covering many different topics – including a conceptual history of belief and the role of belief in areas such as philosophy, religion, anthropology, perception, and cognitive neuroscience. Consequently, the paper at times appears a little unfocused and difficult to follow. It could be improved by having a clearer structure and possibly outlining how the various sections contribute to the central goals of the paper. 
Given the breadth of coverage, many of the topics cannot be covered in the detail they deserve. One could argue, given the authors’ central argument, that much of the historical review – including the sections on the history and philosophy of belief – could be omitted without detracting from the main purpose of the paper.\n\nThe main focus of the paper remains the anthropology and cognitive neuroscience of belief. The authors here outline four levels of explanation for understanding belief – hermeneutic, linguistic, behavioural, and neurophysiological. These levels of explanation are not, however, always fully justified. It is not clear, for example, what the distinction between the hermeneutic and linguistic is and also whether the behavioural level can be necessarily grouped with the cognitive. It also remains unclear how the levels of explanation inform the rest of the paper.\n\nIn one particular section, the authors discuss the importance of perception. Specifically, they note “To understand the process of believing, it is essential to understand how people attribute personal meaning to specific sensory perceptions” (p. 4). The authors then provide a description of the processes involved in perceiving the physical characteristics of stimuli and the processes involved in attributing affective value and meaning. This important section could be expanded to include more evidence in support of the claims being put forward. In its current form, it is unclear how much is intended as a description of fact and how much involves the authors’ own theoretical take.\n\nThe section on neuroscience is also important to the authors’ central thesis. The authors outline a model of believing as involving different brain regions. As this is possibly one of the most novel and significant parts of the paper, this section would benefit by being expanded to provide more details. 
For example, the authors make several claims about the role of different brain regions for selective cognitive functions (e.g., they suggest that valuation of perceived objects and attributions involves the medial frontal cortex) and then cite papers without much further explanation. The authors should engage more with the literature that they cite and explain how the research supports their claims. They could also outline their model more clearly and how it relates to belief overall (rather than just sub-component processes, like the valuation of percepts).\n\nA further issue in these sections that could be addressed is that it is not clear whether the authors reach their overall goal. They state in the paper’s abstract: “We present evidence suggesting that believing is a human brain function which results in probabilistic representations with attributes of personal meaning and value and thereby guides individuals’ behaviour” (p. 1). However, much of these sections on perception and neuroscience is descriptive and, as already noted, relies on citations of previous papers without further explanation of how these support their claims. The authors could more specifically highlight what they consider to be evidence for their model and discuss more critically how it supports their account.\n\nIn the final section, the authors turn to belief systems in religion, rituals, and narratives. The authors here suggest that similar belief processes are common across both religious and secular contexts. It is not clear, however, why people would doubt this idea in the first place. The discussion, like other sections, contains some interesting material, but again is limited somewhat by selective engagement with existing literature. For example, the authors suggest “because ‘religion’ is not one thing but many, it is better to talk about specific religions, because almost no statement about what ‘religion’ does will hold for all of them” (p. 
6), then appear to make general claims about religion, religiosity, and religious experience. Apart from this apparent inconsistency, it is unclear whether this is a consensus opinion by all religious scholars. Other minor examples include claims about how “humans are born with an ‘implicit religiosity’” (p. 6) and that rituals “constitute the experience and knowledge and, thereby, the belief systems of individuals from childhood onwards” (p. 7), where it is similarly unclear how established these claims are.\n\nIn sum, the paper addresses an important and timely topic, and presents proposals for a potentially novel perceptual/neural model for understanding belief. The paper, however, would benefit by focusing and clarifying its underlying purpose and reducing the breadth of coverage. The paper could also be improved by discussing the evidence and arguments supporting their central claims in greater detail.", "responses": [ { "c_id": "2415", "date": "17 Jan 2017", "name": "Rudiger Seitz", "role": "Author Response", "response": "We appreciate the constructive comments of the reviewers, which helped us to revise our manuscript as we explain in the following point-by-point response. According to their suggestion we rephrased the Introduction to outline more clearly how the various sections contribute to the goal of this paper. Furthermore, the section about the history and philosophy of belief was shortened and rephrased with a couple of additional citations to highlight our standpoint. We have now introduced further explanations to make clear the different levels of explanation, namely the hermeneutic, linguistic, behavioral and cognitive levels, that define the process of believing (page 5). Following the suggestions of the reviewers we have expanded our description of the physiological basis of perception and probabilistic coding (pages 12, 13, 14). Also, we now spell out more clearly our standpoint on this theoretical matter (page 10). 
Admittedly, however, we can only sketch out the topics relevant for our discussion (page 14), while a comprehensive review of the literature would go beyond the limits of this opinion paper. We substantiated how the cited references relate to our manuscript and describe that a number of physiologically well-described processes are essential for the processes of believing. We do not claim that they are exclusive. But in our view empirical evidence provides the framework for their contribution to the processes of believing, as we now state explicitly on page 5. We have rephrased the criticized statements about religion, religiosity and religious experience (page 20). Also, we introduced appropriate references in the final section to substantiate our claims." } ] }, { "id": "18600", "date": "19 Dec 2016", "name": "Tatjana Schnell", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors provide a broad introduction to a novel theoretical concept, the process of believing. It is complemented by references to empirical findings that lend support to the theoretical claims. The concept’s usefulness for neuroscience and anthropology, in particular, is discussed.\n\nAs the authors note, the act, or process, of believing has largely been ignored by scholars. While psychology offers insights into both cognition and emotion, believing seems not to be well covered by these theories. 
The authors suggest defining processes of believing as probabilistic representations, associated with specific personal meaning and value. Rooted in neural processes, processes of believing are proposed to guide human behaviour.\n\nThe endeavour is topical and worthwhile, since much of our proposed knowledge is actually belief, and clear differentiation between both will be fruitful. Furthermore, belief is all too often associated with the realm of the supernatural, and thus viewed as something that is only of concern for studies of religion, etc. But believing is widespread, also in mundane everyday life, as the authors suggest.\n\nIn some places, the text might benefit from more coherence. In the first section, e.g., we had the sense that the provided list of definitions is of no consequence for the rest of the manuscript.\n\nDue to the novelty of the research subject, the terminology is of special importance. The authors call “normal belief” a “brain product” linked to personal intuitive judgements about the subjective certainty of mental constructs and sensory perceptions. The term “normal belief” might not be the best choice of term, since it seems to refer to something normative, or some belief held by the majority of people (thus being normal).\n\nThe reader is also left wondering about the relationship between “belief” as in “predictions about the next event”, or judgments where religious issues play no role, and religious belief. If the purpose of the term “normal belief” is to root religious belief into belief as a more general cognitive process, it should be renamed accordingly and more explicitly.\n\nThe process of perception is described as resulting in “comprehending the object’s features” (p. 4), and, later, as resulting “in a meaningful illusion” (p. 4). The two descriptions seem to contradict each other; instead of calling the outcome an illusion, it might better be viewed as a “Gestalt”, i.e. 
a meaningful perception that provides clues for action to follow1,2.\n\nStarting from page 4, the authors review neural circuits of stimulus perception, stimulus valuation (including its emotional tone) and decision making. The authors propose that this circuit underlies belief formation (\"personal probabilistic representations\").\nIt should be noted that the neuroscience they refer to here deals with predictions about events whose valuation, and consequent motivational effects, are entirely utilitarian3,4. The purported computational processes associated with these networks concern prediction of rewarding or aversive outcomes (probabilistic predictions of reward or fear of punishment, p. 8). This leaves the reader wondering if this has the consequence that there is nothing specific about religious beliefs, relative to other utilitarian or adaptive cognitive-behavioural strategies. The ventromedial prefrontal cortex, where, as the authors note, complex properties of stimuli5-8 or the context of their presentation9 may be integrated to compute stimulus valuation and its emotional quality10 has been touted as the neurological machinery that allows consistent decision making, in effect computing utilities as in the economic understanding of this term4,11.\n\nIt is interesting to conjecture that these same networks could be the starting point of beliefs, but the evidence does not imply this in any straightforward manner. If predictions are beliefs, then Pavlov’s dog, too, was ‘believing’ after hearing the bell. Importantly, one may not be conscious of anticipating anything but still bear the motivational consequences of these unconscious predictions12, as when presented with conditioned cues subliminally. 
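The reviewers' contrast between utilitarian prediction and belief can be made concrete with a toy model. The sketch below is an editorial illustration added alongside the review, not part of the manuscript under discussion: it implements the classic Rescorla-Wagner rule, in which a conditioned cue's predictive strength is driven by prediction error, so that "Pavlov's dog" comes to expect the outcome. The function name and parameter values are our own assumptions.

```python
# Rescorla-Wagner sketch (editorial illustration; names and values are assumptions).
# A cue's associative strength V is the learner's "prediction" that the outcome
# will follow the cue; V is nudged by the prediction error on every trial.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return the trajectory of associative strength V over cue-outcome trials.

    trials: sequence of 1 (outcome present) or 0 (outcome absent)
    alpha:  learning rate (0 < alpha <= 1)
    lam:    asymptote of learning when the outcome occurs
    """
    v = 0.0
    history = []
    for outcome in trials:
        target = lam if outcome else 0.0
        v += alpha * (target - v)  # error-driven update
        history.append(v)
    return history

# Ten reinforced trials: the prediction climbs toward the asymptote.
vs = rescorla_wagner([1] * 10)
assert vs[0] == 0.3 and vs[-1] > 0.95
```

On this account a "belief" would be nothing more than such a learned expectation, which is precisely the reduction the reviewers caution against.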
An issue that seems to be relevant in this respect is that these networks are very likely not monolithic (as the authors themselves note in commenting on Figure 2), but are thought to involve parallel valuation and decision mechanisms, associated with different strategies to process the inputs and compute responses13. But then one would expect the notion of belief to be explicated in terms that are more specifically relevant for some, but not all, of these mechanisms.\nPrevious functional neuroimaging of generic beliefs14-15 or of beliefs that are more specifically religious16 raise the issue of whether cognitive decision making bears the traces of different strategies to arrive at a response, associated with different brain networks. This may be particularly relevant for beliefs, as there is good reason to single out those cognitive decisions that are marked by bias. The ventromedial prefrontal cortex seems again to play an important role in this respect11,14,17, as an important hypothesis is that beliefs may be a specific class of intuitive, pre-evidential judgments, demonstrably produced when a specific part of the valuation-decision making network gains the upper hand. The relationship of religious belief with biased judgment seems important, but is an issue that the manuscript leaves entirely open.\n\nSeveral findings also implicate the amygdala in choices where emotions and intuitions play an important role 17,18. Considering the role of neural substrates directly linked to emotion when choosing between sources of meaning could add to the bigger picture of how acts of believing are linked to non-rational processes. 
Decision making between potential sources of meaning such as religion/spirituality, nature, or community may be related to the affective network in the brain, but not to more refined goal-directed mechanisms in the prefrontal cortex, as is suggested by a recent study19.\n\nOn a psychological level, further construct clarification could ensue from discussing relationships between processes of believing and intuition. Research findings on intuition20 might help to better understand processes of believing, and to identify the boundaries between the two.\n\nMinor points:\n\nSchnell, 200321 is quoted as suggesting that “humans are born with an implicit religiosity” (p. 6). This seems to be a misreading of the publication, which aims to clarify in which cases implicit religiosity is present: “All those contents which are structured as myth, ritual or transcendent experience, and evaluated as meaningful by an individual, can be said to constitute implicit religiosity” (p. 89). Not every person is implicitly religious, and it is not an inborn characteristic – although the propensity to elevate what is personally meaningful, and to express it by means of stories, rituals, and extraordinary experiences that go beyond the rational, appears to be common to all humans.\n\nWhy does a process of believing necessarily have to have a start and a closure, as suggested by the authors?", "responses": [ { "c_id": "2413", "date": "17 Jan 2017", "name": "Rudiger Seitz", "role": "Author Response", "response": "We thank the reviewers for their constructive comments and thoughtful questions concerning a number of aspects in our manuscript that stimulated us to engage in further clarifying these points. We will successively detail the corresponding changes we made in the manuscript as follows: As suggested, we reviewed the first section carefully and rephrased it to enhance the visibility of coherence within the section and with respect to the rest of the paper. 
We explain that the notion of \"normal belief\" is usually contrasted to its pathological manifestations in brain diseases (page 4). In accordance with the reviewer, we dropped this expression from the manuscript, since it may evoke associations mentioned by the reviewers that were not intended. We are now more specific concerning the questioned relation of \"belief\" and religious belief. We do not want to expand on the adjective \"religious\", which is ambiguous, as we have outlined elsewhere (Angel and Seitz 2016). With respect to the question of the reviewer and for improved clarity we transposed paragraphs in the final section and now state on page 20 of the revised manuscript that \"religious belief\" in the sense of \"non-secular belief\" and \"secular belief\" are hypothesized to be brought forward by similar, if not identical, processes of believing but that they differ by their specific contents. In fact, they differ by narratives as we state on page 19. Please note that believing is considered a fundamental brain process entirely separate from religious beliefs, as stated explicitly on page 20. We are entirely happy with the suggestion of the reviewers to refer to \"Gestalt\". This notion was developed by von Weizsäcker in the 1940s. To his work we now refer in the revised manuscript on page 11. With respect to the discussion about prediction we would like to point out that believing and beliefs should not be reduced to prediction. We now state on page 14 that believing pertains to experience, i.e. knowledge acquired in the past. Moreover, we outline that believing links the past to the future as it allows the individual to make predictions (pages 12, 14). For example, acquired knowledge and predictions about the physical world are entirely probabilistic. This becomes apparent in the automatic unconscious servo control of the hand for object grasping (Diamond et al, Pélisson et al; not cited on page 12). 
Likewise, the meaning a subject attributes to an object or event reflects his/her prior experience with such an object or event, providing a basis for estimating its future implication; specifically, the question is whether the object or event is beneficial or satisfying for the subject, or adverse or deleterious (page 14). Finally, decision making is consistent only as long as a belief has not been modified in response to new and violating information. This topic is explained in greater detail elsewhere (Angel and Seitz 2017), now cited on page 21. The comment about Pavlov's reflex is noteworthy. As the reviewer points out, this is a subconscious process that cannot be influenced voluntarily by the subject. And certainly the onset of salivation induced by the conditioning ringing of a bell reflects an automatic prediction. We can take this argument further, extending it to all types of reflexes, including the eyelid reflex and the muscle tendon reflexes: the neuronal machinery operates with the goal of protecting the integrity of the organism against predicted harm or of preparing it optimally for an upcoming action. If we accept this, we may want to argue that even the basic neural circuits are equipped with machinery that internalizes sensory experience to code for appropriate behaviour for the integrity and survival of the organism. Higher cognitive and emotional functions that constitute believing are more complex, working in parallel cortico-cortical and cortico-subcortical circuits in the human brain. These complex brain systems are suited to safeguard the integrity and survival of the individual human and of groups of humans, including societies, as we now state on page 21. We agree that beliefs are based on intuitive, pre-evidential and probabilistic judgments. We now state on page 20 that rituals are likely to bias humans to accept narratives as beliefs held in social groups as well as in secular and religious faith groups. 
Decision making is a brain function following and engaging judgments and predictions. In other words, decision making is downstream of believing. We absolutely agree that different types of information processed in parallel and highly interconnected brain circuits come into play. The point we would like to make, however, is that the interface for such complex self-oriented computations seems to be the medial prefrontal cortex, which comprises multiple functional units, as evident from Figure 2 and outlined in greater detail elsewhere (Seitz et al. 2006, van Overwalle 2009). We now emphasize this issue on page 16 and refer to it again later in the manuscript (page 21). We are grateful for the clarification concerning the notion of implicit religiosity and have modified this phrase accordingly (page 23). In our empirical model we use the novel sensory stimulus as the starting point of the believing process and the establishment of a probabilistic representation as the endpoint of the believing process. This has now been stated clearly on page 11. However, as we write on page 21, new experiences constantly prompt updates of such a representation, reflecting the potentially fluid nature of beliefs." } ] } ]
1
https://f1000research.com/articles/5-2573
https://f1000research.com/articles/5-2816/v1
05 Dec 16
{ "type": "Opinion Article", "title": "FAIRness in scientific publishing", "authors": [ "Philippa C. Matthews" ], "abstract": "Major changes are afoot in the world of academic publishing, exemplified by innovations in publishing platforms, new approaches to metrics, improvements in our approach to peer review, and a focus on developing and encouraging open access to scientific literature and data. The FAIR acronym recommends that authors and publishers should aim to make their output Findable, Accessible, Interoperable and Reusable. In this opinion article, I explore the parallel view that we should take a collective stance on making the dissemination of scientific data fair in the conventional sense, by being mindful of equity and justice for patients, clinicians, academics, publishers, funders and academic institutions. The views I represent are founded on oral and written dialogue with clinicians, academics and the publishing industry. Further progress is needed to improve collaboration and dialogue between these groups, to reduce misinterpretation of metrics, to reduce inequity that arises as a consequence of geographic setting, to improve economic sustainability, and to broaden the spectrum, scope, and diversity of scientific publication.", "keywords": [ "Academic publishing", "peer review", "impact factor", "metrics", "data visualization", "open access" ], "content": "Introduction\n\nSubstantial and positive changes are currently underway in academic publishing; now is an important time to capitalize on the opportunity to explore the many potential benefits that can stem from new ways to share and disseminate scientific data1. Despite the improvements that are emerging, it remains the case that discussions in academia frequently focus on the pitfalls, frustrations and difficulties of publishing. 
Managing a piece of work from conception to publication can indeed be a long and complicated journey, and elements of the process can often feel ‘unfair’.\n\nAdvocates of data dissemination suggest that we should aspire to the principles enshrined in the ‘FAIR’ acronym; work should be Findable, Accessible, Interoperable and Reusable2. However, as well as endorsing these attributes of any work, I here represent the view that we should also develop a collective responsibility to make data sharing fair in the conventional sense; the way we generate, represent, review, share and use data should also be underpinned by justice. This means our handling of the whole process is fair to everyone involved, including academic institutions, funders, authors, reviewers, publishers, research participants and patients.\n\nAs well as being driven by ethical and moral imperatives to improve our approaches to publishing scientific research, questions around data sharing have to be set in the context of the exponential increases in the volume of data generated; a responsible and robust approach to archiving, cataloguing and managing access to such datasets is crucial to allow optimum, equitable and collaborative approaches to Big Data.\n\nI embarked on this journey as a result of investigating the best way to publish a database. Rather than seeking output via a conventional ‘paper’, I was keen to produce something live, open access, creative, evolving, promoting new collaborations, and linking to other relevant resources. These discussions around the dissemination of my own data led to presentations at a meeting hosted by the University of Oxford’s Interactive Data Network (https://idn.web.ox.ac.uk/event/data-visualisation-and-future-academic-publishing) and subsequently at a conference of publishers (Annual meeting of the Association of Learned and Professional Society Publishers, http://www.alpsp.org/). 
In order to represent a wider cross-section of academic medicine, I collected opinions from my peers and colleagues within academic medicine using an online questionnaire (https://www.surveymonkey.co.uk/), and then followed this up with a parallel approach to seek feedback from the publishing industry.\n\nThis piece is a representation of some of the key themes that arose as a result of the two-pronged questionnaire, and the ongoing discussions between the medical research community and the publishing industry. The feedback that I present is intended to represent individual and collective opinion, to prompt and challenge further discussion, to build bridges between publishing and academia, and to help us move forward with constructive dialogue.\n\n\nQuestionnaire results\n\nDetails of the questionnaires, and the entire dataset collected from each of the two questionnaires used to collect quantitative and qualitative feedback from 102 academics and 37 representatives of the publishing industry, are available to view and download as PPT files from 3 and 4 respectively.\n\nThe feedback I have collated represents individual opinion, collective discussion, and sometimes emotion, and the resulting work is my own personal synthesis of this experience. This does not aspire to be a formal or scientific study, but rather to represent views on some important themes in academic publishing, and to underpin further dialogue.\n\n\nDomains for discussion in academic publication\n\nDelays in the conventional routes to publication commonly amount to weeks and months consumed by submission, peer-review, editorial decisions, potential corrections and resubmission, followed not infrequently by a re-initiation of the whole process5. 
Among academic survey respondents, over 70% agreed or strongly agreed with the statement ‘I am frustrated by the length of time it takes to publish my work’3, and over 80% of publishers agreed that reducing the timelines involved in academic publication should be a ‘crucial priority’4.\n\nDelays suppress and stifle scientific progress in a variety of ways. Over the long time courses of publication, data frequently decay such that they are out of date before they go to press, and it is impossible for authors to provide a written context for their work that reflects the very latest literature or advances in the field6,7. Delay also leads to academic paralysis: until their work is published, academics may refrain from presenting or discussing their work publicly, thereby limiting its impact, reducing the possibility of developments and collaborations, and allowing flaws and discrepancies to go unchallenged. There is also personal paralysis – delays can cause difficulty in progressing with the next phase of an existing body of work, moving on to a new project, recruiting a team, or applying for an academic post or funding3,7.\n\nReducing delays is clearly an important aspiration but one that comes with practical caveats. One publisher says: ‘Timeliness is important. So is quality control. The latter negatively impacts the former’4. In conventional models of publishing this may have been the case, but we should now strive to dismantle the view that long delays are an inevitable consequence of producing work that is robust, high quality, and endorsed by expert peer review. Happily, this framework is shifting as a result of parallel improvements in allowing academics to post their own work online, and in new approaches to post-publication peer review (discussed in more detail in the section below).\n\nAsked to respond to the statement ‘peer review functions equitably and contributes to improving the quality of my work’, 58% of academics agreed or strongly agreed3. 
This seems to reflect a general consensus that the process remains valuable and fit for purpose, though evidently tempered by background ambivalence and anxieties, and not endorsed by everyone.\n\nPeer review is intended to provide quality assurance, a principle that is of universal importance to authors, readers, publishers, funders and academic institutions. However, no-one doubts the potential pitfalls of such a process: a reviewer may not be impartial, may be less expert than the authors of the work for which they are providing critique, may not give the task the time that it deserves, and may – on occasion – just get it wrong8. There can also be concern, as stated by one academic, that ‘creativity is stifled in this process’. On these grounds, peer review has continued to be accepted as the ‘least worst’ model8, only persisting for lack of a better alternative.\n\nHowever, many new approaches to peer review are evolving, with support and enthusiasm from both academics and publishers3,4. These include:\n\nMaking peer reviews open access (e.g. F1000, https://f1000research.com and PeerJ, https://peerj.com/), or providing double-blind peer review8;\n\nUsing structured formats or templates for critical review, and developing collaborative peer review so that a consensus opinion is provided by a team (e.g. Frontiers, http://home.frontiersin.org/);\n\nPromoting a model that seeks online feedback from the entire scientific community (now a component of many open access review systems, including those at https://f1000research.com);\n\nAsking reviewers to suggest additional experiments only when these are deemed essential to the work and can be conducted within an agreed time frame (e.g. 
eLife, https://elifesciences.org/);\n\nImproving editorial oversight and influence to ensure the process is conducted fairly and to arbitrate in cases where there is conflict of opinion.\n\nAdjustments to the timeline that put publication ahead of review can also have substantial influence on the process. Authors have the potential to disseminate their work through pre-publication archives (e.g. BioRxiv, http://biorxiv.org/) or on data-sharing platforms (e.g. Figshare, https://figshare.com/). Alternatively, post-publication peer review has been adopted as an inventive compromise that reduces delays and promotes data sharing, without sacrificing a quality assurance framework, for example by the F1000Research and Wellcome Open Research platforms (https://f1000research.com, https://wellcomeopenresearch.org/)1,9.\n\nRecognising and rewarding the substantial contribution made by reviewers is also crucial, and strides forward are afoot in providing formal acknowledgement of the body of work undertaken by reviewers; this includes the potential for logging this in a systematic way (e.g. using Publons, https://home.publons.com/). Reviews themselves are becoming independently accredited pieces of scientific work that are a recognised part of a formal academic portfolio (including visibility on ORCID, http://orcid.org/), can be ranked and rated, are published with a DOI to make them accessible and citable, and can lead to the award of CME points10,11.\n\nMuch of the communication between academia and publishers happens in a one-way direction through rigid online portals. True open dialogue frequently seems to be lacking, potentially leading to frustrations on both sides. Only 23% of academic respondents agreed or strongly agreed that they would feel ‘comfortable and confident contacting editors and publishers to discuss work before submitting for publication’3. 
In response to the same question about dialogue with academics, publishers fared slightly better with over half being comfortable pursuing dialogue4.\n\nOnly one in three academic respondents reported having experienced positive interactions with editors and publishers to help them present and disseminate their work in the best way. Interestingly, academics’ views on this point also reflect a degree of uncertainty about whether discussion with editors and publishers is appropriate at all: they raise concerns that this amounts to ‘coercion’ or is in some way ‘cheating’ the system3.\n\nCollective responses to how communication should be improved include the need for improving formal and public interdisciplinary discussion at workshops, conferences and seminars, as well as the more personal view from academics who ask editors and publishers to provide a reliable and named point of contact for authors. There is also a collective responsibility for both publishers and academics to promote and participate in communication, to recognize the ways in which appropriate dialogue can improve the content or accessibility of our work, and to promote an environment in which we work in partnership.\n\nThe impact factor, the most widely quoted metric, has disproportionate influence over the behaviour of academics, despite never being designed as a measure of the quality of any given piece of work5. To quote one publisher, impact factor is ‘embedded in researcher culture’4. Although it can still exert a very potent effect, there has been increasing recognition that the metrics of any individual piece of work should be of more importance than the metrics of the journal in which it is published, and that we should move away from assessing ourselves, or each other, based on this criterion7,12. 
It is also important to be mindful that citations can be relatively easy to amass for articles written on major topics, while if you publish in a niche field, your work may be of equal scientific rigour and quality, but has a much smaller audience.\n\n‘The impact factor is broken’ stated one academic medic3. Only 19% of publishers disagreed with this statement, and others added their own descriptions of the impact factor as ‘misused and outdated’, ‘obsolete’ and ‘a horrible obsession for editors and authors’4. We should collectively be encouraged to assess output using a much broader approach, for which an increasing number of tools is becoming available, including online resources such as Google Analytics (https://analytics.google.com/) or Google Scholar (https://scholar.google.com/), Altmetric (https://www.altmetric.com/), author-specific metrics such as h-index, and – most importantly - the application of common sense to viewing and interpreting metrics in the right context12–14.\n\nOpen access publication offers a system that should be inherently fair in promoting free access to published resources. However, the challenge to equity here is an economic one15. In a traditional, non open access model, the fees required for access to a journal or individual manuscript are frequently prohibitive for individuals; access therefore depends on institutional subscriptions. In the open access model, in order to make the work freely accessible to their readers, the publisher passes the costs on to their authors. Both systems discriminate strongly against those in less affluent settings.\n\nUnsurprisingly, open access publication can influence article metrics, as those articles that are freely available may be more frequently accessed and cited16. So authors from wealthy institutions can potentially feed their own personal metrics by publishing their work in open access fora. 
In reality, the situation is more complicated, as the open access citation advantage is not consistent across studies17, many publishing houses waive fees for authors from under-resourced settings, and there are now increasing options for free data sharing (including those discussed above, such as self-publishing, archiving in online repositories, or pre-print publication).\n\nInsisting on consistency in the presentation of scientific work can be a way that individual publishers or journals contribute to quality control and maintaining their unique identity through preservation of a ‘house style’. However, academics often see the process as an array of trivial but time-consuming formatting obligations, demanded of them before the work has even been accepted for publication, and without any appreciable benefit to quality3. In addition to manuscript formatting, multiple journal-specific details are frequently requested for online submission. Among publishers, a more diverse body of opinion is reflected, with an equal split between those who are in favour of relaxing (or unifying) formatting requirements, those who have no strong opinion, and those who do not feel any change is required4.\n\nThe conveyor-belt process of conventional publication can be very constraining. An academic manuscript usually has to be assembled into a standardised package that meets strict formatting requirements, most obviously with respect to manuscript layout, length, and the number of figures, tables and references that can be included. This dates from the – now bygone – era in which a paper was indeed just that, printed across several pages of a glossy journal into whose binding it needed to be neatly fitted. 
Online publication should be providing an escape route from these constraints – albeit not one that has been consistently deployed or accepted.\n\nHowever, there is also a broader boundary in operation which may be less immediately apparent – that which governs so strictly the fundamental nature of a piece of work, that which inhibits (or even prohibits) publication of a work-in-progress, or an unproved hypothesis, or results that are negative, unexplained or in conflict with previous data. Only 9% of academics agreed with the statement ‘the process of publication is flexible, supports innovation, and allows me to be creative’, and none strongly agreed3.\n\nThis should be of significant concern when new ideas and novel approaches are so crucial to our collective progress, and in an era in which there is ever better recognition of the risks and costs associated with the suppression of negative results18,19. Furthermore, when new ideas and novel approaches underpin so much true scientific progress, why are such tight restraints imposed on the nature, style, content and substance of academic output? We should move towards a system that welcomes the publication of a diversity of innovation and ideas: there is much for us all to gain from encouraging dissemination of a wider body of work. This might include new concepts, methods and strategies, diverse commentary and critique, approaches that have been tried and failed, negative results, unfinished projects, protocols and registries for clinical trials, and live datasets that evolve over time.\n\nThe traditional publication of an academic ‘paper’ makes it impossible to add incremental advances or updates, and the only way to correct inconsistencies that emerge post-publication is to submit and publish a formal erratum. This is a substantial missed opportunity for quality improvement. 
The version control option offered by newer publishing platforms allows authors to maintain their work in its best possible form, adding updates, corrections and refinements, while preserving records of the original work. This is the approach I have ultimately been able to pursue for my own data, via the Wellcome Open Research platform (https://wellcomeopenresearch.org/)20.\n\n\nCaveats to this work\n\nThe discussions represented here took place over a short time frame and are based on opinions collected from a small section of academia3 and from an even smaller slice of the publishing fraternity4. Taking the opportunity to share feedback from academic clinicians does not mean that I represent all academic clinicians, or that the views of other sectors of academia are congruent. Although I have engaged in productive and interesting discussions with publishers, as well as seeking written anonymous feedback, it is not possible for me to represent this sector cohesively, and further commentary is undoubtedly needed.\n\n\nFuture challenges\n\nDespite the marked improvements, new ideas, and increased flexibility emerging around data sharing, there are still some substantial challenges to be addressed around the publication of academic data.\n\nA publishing process perceived as equitable by one individual or institution may not operate in the best interests of another. In particular, we have a crucial collective responsibility to be mindful of the resource gap between different settings. Generating high quality scientific output, and publishing and disseminating this appropriately, is significantly influenced by access to library services, IT infrastructure, institutional access to online resources, funding, manpower and skills. 
Real fairness means reallocation of resources, waivers for institutions unable to pay access or publishing fees, better sharing of skill sets, balanced review, and capacity building in resource-poor settings21.\n\nDiminishing or diluting quality is a potential concern as we enter an era in which a greater number of authors release a more diverse pool of work without pre-publication review. However, experts in the dissemination of open access literature have argued that market forces will tend to operate to maintain quality, and that the overall benefits of increasing data availability substantially outweigh any potential risk to quality22.\n\nChange can be difficult; old habits die hard and new approaches to data sharing can be met with suspicion or opposition5. Many authors are either overtly or subliminally wedded to the idea of a journal based impact factor and to blind peer review. Some authors also express anxiety arising from the potential conflict between wanting to share their output yet needing to retain ownership of the work. Substantial power is still held by a small subset of traditional journals and editorial boards; the undue influence of the publishing industry on science output has even been described as ‘toxic’23. It will take time for confidence in the newer publishing systems and models to grow. Vigilance is required for ‘predatory’ journals that often send unsolicited emails trying to entice authors with offers including rapid and open access publication, but that may not deliver on their promises, fail to provide suitable peer review, or publish the work only on receipt of a substantial fee9,21,24.\n\nI have not set out to include detailed discussion of economic cost, but it is clear that a substantial financial investment is crucial to support innovative approaches to publishing, to develop new metrics, to support accredited peer review, and to maintain publishing platforms ranging from journals to internet sites. 
Academia has to be willing to accept and underwrite these costs, and the publishing industry to develop a system that is lean and competitive, and that offers value for money.\n\n\nConclusions\n\nWe are in an era in which the process of disseminating scientific work is becoming quicker and more flexible, in which we can retain ownership while gaining the benefits of public sharing, in which metrics are more about our own output than the collective assessment of the journal that publishes our work, and in which a ‘paper’ no longer has to be a carbon-copy manuscript of a pre-specified length and format.\n\nThere is still much progress to be made. We should continue to be flexible, creative and open-minded in developing the best ways to present and share scientific work. The process has to be underpinned by good communication between academia and publishing, and significant effort is required to dismantle taboos around communication, particularly the view that open dialogue is in some way ‘cheating’ the system. We should be more discerning about metrics, using them appropriately and in context, and not allowing impact factor to drive behaviour, stifle creativity or delay output. 
Careful thought is required to support, develop and sustain output from under-resourced settings, and to ensure that diverse options for data dissemination and access are not confined to wealthy institutions in rich countries.\n\nAs well as promoting the FAIR principles, changes in the way we publish scientific output are increasingly moving towards a process that is genuinely fair – something that is timely, that we can all access and judge for ourselves but that can still be scrutinized by a process of equitable peer review, that demands rigour and scrutiny while at the same time making efforts to minimise delays, that can be shared, reproduced and collectively applied for the advancement of understanding.", "appendix": "Competing interests\n\n\n\nPCM was an invited speaker at the Association of Learned and Professional Society Publishers (ALPSP) annual conference in September 2016.\n\n\nGrant information\n\nPCM is funded by a Wellcome Trust Intermediate Fellowship Grant, Ref. 110110/Z/15/Z.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThis work is founded on support from several individuals and agencies, who have provided me with expert feedback and discussion, opportunities to speak at publishing meetings, and direct input into the design and distribution of questionnaires. In particular, I would like to acknowledge Robert Kiley (Wellcome Trust), Howard Noble (Academic IT, University of Oxford), Louise Page (PLOS), Juliet Ralph (Bodleian Library, University of Oxford), and Isabel Thompson (Oxford University Press). The questionnaires were distributed with the support of Oxford University Clinical Academic Graduate School (OUCAGS), the Peter Medawar Building for Pathogen Research, and the Association of Learned and Professional Society Publishers (ALPSP). 
I am grateful to all those individuals within academia and publishing who contributed generously to completing questionnaires in order to develop and inform the discussions represented here.\n\n\nReferences\n\nTracz V, Lawrence R: Towards an open science publishing platform [version 1; referees: 2 approved]. F1000Res. 2016; 5: 130.\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018.\n\nMatthews PC: Experiences, reflections, gripes and a wish list: representing how academic clinicians relate to the publishing industry [v1; not peer reviewed]. F1000Res. 2016; 5: 2286 (slides), [cited November 2016].\n\nMatthews PC: Improving dialogue between publishing and academia: results of a questionnaire to the publishing industry [v1; not peer reviewed]. F1000Res. 2016; 5: 2617 (slides), [cited November 2016].\n\nTracz V: The five deadly sins of science publishing [version 1; referees: not peer reviewed]. F1000Res. 2015; 4: 112.\n\nBjörk BC, Solomon D: The publishing delay in scholarly peer-reviewed journals. 2016, [cited 2016].\n\nPowell K: Does it take too long to publish research? Nature. 2016; 530(7589): 148–151.\n\nSmith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006; 99(4): 178–182.\n\nTeixeira da Silva JA, Dobránszki J: Problems with traditional science publishing and finding a wider niche for post-publication peer review. Account Res. 2015; 22(1): 22–40.\n\nSchekman R, Watt F, Weigel D: The eLife approach to peer review. eLife. 2013; 2: e00799.\n\nSammour T: Publons.com: credit where credit is due. ANZ J Surg. 2016; 86(6): 512–513.\n\nCallaway E: Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016; 535(7611): 210–211.\n\nKreiner G: The Slavery of the h-index-Measuring the Unmeasurable. Front Hum Neurosci. 2016; 10: 556.\n\nMasic I, Begic E: Scientometric Dilemma: Is H-index Adequate for Scientific Validity of Academic's Work? Acta Inform Med. 2016; 24(4): 228–232.\n\nTennant JP, Waldner F, Jacques DC, et al.: The academic, economic and societal impacts of Open Access: an evidence-based review [version 3; referees: 3 approved, 2 approved with reservations]. F1000Res. 2016; 5: 632.\n\nPiwowar HA, Vision TJ: Data reuse and the open data citation advantage. PeerJ. 2013; 1: e175.\n\nDavis PM, Lewenstein BV, Simon DH, et al.: Open access publishing, article downloads, and citations: randomised controlled trial. BMJ. 2008; 337: a568.\n\nDirnagl U, Lauritzen M: Fighting publication bias: introducing the Negative Results section. J Cereb Blood Flow Metab. 2010; 30(7): 1263–1264.\n\nGoldacre B, Heneghan C: How medicine is broken, and how we can fix it. BMJ. 2015; 350: h3397.\n\nLumley S, Noble H, Hadley M, et al.: Hepitopes: A live interactive database of HLA class I epitopes in hepatitis B virus [version 1; referees: 1 approved]. Wellcome Open Res. 2016; 1: 9.\n\nSiriwardhana C: Promotion and Reporting of Research from Resource-Limited Settings. Infect Dis (Auckl). 2015; 8: 25–29.\n\nHawkes N: Full access to trial data holds many benefits and a few pitfalls, conference hears. BMJ. 2012; 344: e3723.\n\nMolinie A, Bodenhausen G: On toxic effects of scientific journals. J Biosci. 2013; 38(2): 189–199.\n\nViale PH: Publishing in open-access journals: potential pitfalls. J Adv Pract Oncol. 2013; 4(4): 195–196." }
[ { "id": "18242", "date": "07 Dec 2016", "name": "Gustav Nilsonne", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is an opinion article, which discusses several areas where unresolved questions exist in the transition to more open scientific publication practices. The discussion is underpinned by survey data, although a full description of the survey and its results is not within the scope of the paper.\nAreas covered in this paper are publication delays in scientific publishing, peer review, communication between scientists and publishers, metrics such as the impact factor, models for open access publication, journals' formatting requirements, and boundaries imposed by traditional publishing on paper, which need not persist in a time of online publishing, but still do.\nThe paper provides a timely discussion, based on survey data, and on relevant and valid arguments. The abstract promises to explore the notion of fairness from the points of view of many different stakeholders. This point of departure lacks clear justification. In particular, it is not obvious why changes in scientific publishing should need to be perceived as fair by publishers. Also, not all of the stakeholder perspectives are explicitly addressed in the main text.\nThe survey data are available in two linked slide presentations. I recommend that the data be made available as a data frame in a non-proprietary file format.
This will facilitate re-use and further exploration of the data set. Best practice is to use a repository that provides access to data in a format that is time-stamped, immutable, and permanent, and with a persistent identifier and an open licence. Documentation including metadata that describes how the survey was performed can be provided with the data or in this paper.\nIn the current movement towards more open publication practices, it is important to find out how scientists and other stakeholders perceive barriers and possibilities. This paper makes a valuable contribution in gathering scientists' views and arguments surrounding publication practices. I am happy to approve it with a reservation about the format of openly published survey data.", "responses": [ { "c_id": "2382", "date": "22 Dec 2016", "name": "Philippa Matthews", "role": "Author Response", "response": "Thank you Dr Nilsonne for the positive feedback and helpful critique. I have uploaded the metadata to Oxford University Research Archive; this record can be accessed using the following link: https://doi.org/10.5287/bodleian:J5aekGAMy. I will address the other suggestions in more detail in a revised version of the article." 
} ] }, { "id": "19008", "date": "06 Jan 2017", "name": "Dragan Pavlović", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nGeneral description.\nThis is a well written “opinion” article where the author examined the parallel view that “we should take a collective stance on making the dissemination of scientific data fair in the conventional sense, by being mindful of equity and justice for patients, clinicians, academics, publishers, funders and academic institutions.” The views are based on oral and written dialogue (including 2 online questionnaires, with 102 academics and 37 representatives of the publishing industry) with clinicians, academics and the publishing industry. Parts of the work were presented earlier at 2 meetings. It is concluded that further progress is needed to improve collaboration and dialogue between these groups, to reduce misinterpretation of metrics, to reduce inequity that arises as a consequence of geographic setting, to improve economic sustainability, and to broaden the spectrum, scope, and diversity of scientific publication.\n\nMajor comments.\n\nThis is in some way a relatively “short” text, mostly presented as a letter or a superficial comment, yet as such it appears to be quite long. There is no precise analysis of the announced results of the 2 online questionnaires, with 102 academics and 37 representatives of the publishing industry.
The text remains, as indeed announced, just an opinion.\n\nIt looks to me that the text could be much shorter and much more focused on the acute problems, like the expertise of peer reviewers, negligence of editors and the editorial boards of journals, co-authorship, and commercialization of open access journals (‘predatory’ journals). Probably also the last paragraphs (Future challenges and Conclusions) could be substantially shortened. Or, if the questionnaires were appropriate, it would be possible to develop a much more relevant and informed study. It is hard to see how such a study would look, since the questionnaires are not available.\n\nThe explosion in the number of journals worldwide in the last decade or so was not discussed, and there is no mention of the problem of printed journals facing their slow disappearance.\n\nThe discussion does not reach deep enough to provide more concrete solutions to the problems that are presented in the paper.\n\nMinor comments\n\nPeer review Insisting on the expertise of the reviewers is justified, although the existing methods - some are mentioned in the text - do not guarantee it. It should be mentioned that journals should have more secure methods to choose the relevant experts for peer review. Maybe reviewers should supply some evidence of the kind of expertise they have in relation to the paper on which they give an opinion, and journals should be obliged to respect it.\n\nMetrics The problem of co-authorship and possible unjustified benefits for co-authors was not mentioned.\n\nOpen access The problem of commercialization (of the ‘predatory’ journals) could be further elaborated.\n\nFormatting requirements Probably some negative comments are not fully justified. I personally find it impossible to review an article that, even if well written, is badly formatted. Badly presented text, even if it is of high quality, inevitably loses its impact.
Please revise if you agree that your judgment was not carefully measured.", "responses": [] }, { "id": "18988", "date": "16 Jan 2017", "name": "Oyewale Tomori", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe title for this article is appropriate.\nThe abstract adequately summarises the article. In addition, the article goes beyond the FAIR acronym (Findable, Accessible, Interoperable and Reusable) for authors and publishers, and addresses FAIRNESS on equity and justice for patients, clinicians, academics, publishers, funders and academic institutions. This is an opinion and not an \"opinionated\" paper. It is backed by a succinct and balanced analysis of responses to a distributed questionnaire and discussions with stakeholders in publishing, authors and editors.\nThe conclusions are balanced and justified on the basis of the results.\nI have also made some additional comments to the manuscript.", "responses": [] } ]
1
https://f1000research.com/articles/5-2816
https://f1000research.com/articles/5-1307/v1
09 Jun 16
{ "type": "Method Article", "title": "Digital methodology to implement the ECOUTER engagement process", "authors": [ "Rebecca C. Wilson", "Oliver W. Butters", "Tom Clark", "Joel Minion", "Andrew Turner", "Madeleine J. Murtagh" ], "abstract": "ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research) – French for ‘to listen’ – is a new stakeholder engagement method incorporating existing evidence to help participants draw upon their own knowledge of cognate issues and interact on a topic of shared concern. The results of an ECOUTER can form the basis of recommendations for research, governance, practice and/or policy. This paper describes the development of a digital methodology for the ECOUTER engagement process based on currently available mind mapping freeware software. The implementation of an ECOUTER process tailored to applications within health studies is outlined for both online and face-to-face scenarios. Limitations of the present digital methodology are discussed, highlighting the requirement of purpose-built software for ECOUTER research purposes.", "keywords": [ "digital methodology", "Stakeholder engagement", "freeware software" ], "content": "Introduction\n\nEngaging stakeholders is understood to be essential to produce responsible practice in research as well as in business and public provision of social and health services. Stakeholder engagement brings together individuals or groups who have an interest, a stake, in a topic or issue. It makes sense that the people most involved or most affected by research, business or public actions would best understand how these practices affect them.
However, achieving effective stakeholder engagement – engagement which represents all stakeholders equally, not just the articulate and powerful, and which engages at a depth and breadth that is appropriate to the issue at hand – is known to be potentially difficult, time consuming and expensive1. Most existing methods rely on being able to bring people together in real time in a single or small number of locations. Because of these difficulties, stakeholder engagement often represents only a partial understanding of an issue and may not take account of potentially important perspectives. Even when there is a genuine commitment to giving voice to diverse perspectives, stakeholder engagement may exclude the very people it seeks to involve because its structures aren’t sufficiently agile, inclusive or accessible. Our aim was to develop a method and mechanism that was simple and accessible, yet allowed for a depth of analysis needed to uncover and disentangle the complexities and nuances that emerge when bringing together numerous personal understandings and experiences to understand an issue or topic.\n\nEmploying COnceptUal schema for policy and Translation Engagement in Research (ECOUTER, http://www.bristol.ac.uk/ecouter) is a new methodology for stakeholder engagement, utilising concept mapping to collaboratively address a question of interest in a defined stakeholder community (the method is summarised in the ECOUTER introductory video). Taken from the French verb ‘to listen’, ECOUTER brings together the knowledge, skills and experience of stakeholder contributors and supports a two-way process of informing and generating evidence and understandings of the issue in question from those who know it best. Social science methods of analysis (such as those described by Glaser2) of contributions made during the engagement process are applied iteratively resulting in qualitative findings and recommended actions. 
The ECOUTER process as outlined below can lead to the development of recommendations for research, governance, policy and practice.\n\nIn this paper we describe the development of a digital methodology for the ECOUTER engagement process and share the instructions for its implementation. A forthcoming paper (Murtagh, MJ., Minion, J.T., Turner, A., Wilson, R.C., Blell, M., Ochieng, C.A., Murtagh, B.M., Roberts, S. and Butters, O.W. ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research): a tool to engage research stakeholders, 2016 unpublished report) fully describes the rationale and development of the ECOUTER stakeholder engagement process, including the analysis of a number of use cases.\n\nIn practice, ECOUTER is a four-stage process based on:\n\n1. Engagement and knowledge exchange: This stage involves defining a central question/issue and relevant stakeholder group(s) to facilitate discussion and contributions. The exchange in question may be undertaken online or face-to-face, though the online mechanism is anticipated to be of greatest utility for engaging stakeholders who are geographically distributed. We therefore describe the essential components for online engagement below.\n\n2. Analysis: Once the ECOUTER has been conducted, the data are analysed using social science methods.\n\n3. Concept and recommendation development: Analytic findings are then summarised in a conceptual schema; that is, a map of key concepts, their nature and relationships.\n\n4. Feedback and refinement: The conceptual schema is fed back to the contributors and wider community along with recommendations for research, governance, policy and practice.\n\n\nECOUTER technical development and implementation\n\nAn essential aspect of the ECOUTER methodology has been the requirement for contributions to be linked/threaded within a structured discussion space.
A mind mapping approach offered an appropriate solution, providing a mechanism for the relationships between comments as well as enabling the comments themselves to be captured and visualised. The nature of the ECOUTER methodology necessitated the capacity to run within a number of stakeholder groups in multiple localities simultaneously. A synchronised, digital, software solution (rather than a paper-based one) was needed. A range of open-source and proprietary software exists for mind mapping. Given the importance of removing cost as a barrier to participation in an ECOUTER, we assessed and trialed a selection of open-source software and freeware solutions based on the research and user requirements outlined below. An online web-based solution rather than an installed computer program was identified as more inclusive, enabling real-time contributions across different platforms (across Windows, Linux, Mac) and from internet-enabled devices (e.g. smartphones, tablets, computers, etc.) regardless of a contributor’s physical location.\n\nECOUTER initiators required a simple user interface to administer, setup and manage the collaborative mind map. In addition, the data needed to be exported from the software in both image-based and text-based formats prior to analysis, with a mechanism to trace how the collaborative discussion space evolved. From a user perspective, it was essential for the software to have a simple user interface that enabled multi-user contributions within a mind map whilst retaining the anonymity of individual contributors.\n\nThe web-based collaborative mind mapping freeware Mind42 was identified as an appropriate solution to be used within the ECOUTER framework. It has a simple user interface via a website, allowing researchers to initiate an ECOUTER mind map and to manage invited contributors. Multi-user, collaborative mind maps are possible in Mind42, with users remaining anonymous both to other contributors and to the researchers. 
The software includes versioning and a periodic history, documenting how the mind map evolves.\n\nMind42 has both a manual web accessible mechanism and an application program interface (API) to export the mind map in multiple formats. These include exports as an image, pdf, text record of all contributions, and formats compatible with other mind mapping software. Furthermore, the Mind42 native data format (a Mind42 .m42 file) is nested and hierarchical following a JSON file format, retaining metadata about the mind map including anonymised identifiers for individual contributors linked to their contributions, the number of individual contributions, and the date and time of individual contributions.\n\nThe Mind42 software was capable of being implemented in both an online ECOUTER and a face-to-face ECOUTER as summarised below. Full documentation on the ECOUTER wiki provides complete instructions on how to set up, run and manage an ECOUTER in either format.\n\nImplementation Online. An online ECOUTER implementation has been developed to enable running an ECOUTER over a longer time period (e.g. weeks to months or longer). It has the benefits of allowing people to contribute to discussions regardless of time zone or geographic location, and supports contributors dropping in and out of discussions over the entire time period.\n\nAn ECOUTER administrator account is set up on Mind42 allowing ECOUTER facilitators to initiate, manage and moderate a mind map from start to finish. Once a stakeholder group has been defined, invitations are sent to potential participants followed by the creation of individual accounts on Mind42. The information required to create a Mind42 account is minimal: only an email address and a password.\n\nECOUTER facilitators seed the mind map with themes and evidence (based on review of existing evidence) using the administrator account on Mind42. 
This task can include linking to online material including videos, photos, papers, articles and other mind maps. Figure 1 contains an example of a seeded ECOUTER mind map based on the ECOUTER question ‘What are the ethical, legal and social issues related to trust in data linkage?’, undertaken in a pilot conducted with the Public Population Project in Genomics and Society (P3G) in late 2014. The ECOUTER question is in a blue box in the map centre, with seeded themes in capitalised text forming the primary branches and subsequent branches containing further seeded comments and evidence in the form of web links.\n\nMind42 stores versions of the visual mind map periodically. At this stage, back-ups of the mind map in the desired file formats (typically .png and .m42 (JSON) file formats) can be taken by the ECOUTER facilitators manually after seeding the map, and regularly during the ECOUTER to prevent data loss. This is particularly important as Mind42 has no advanced user management facility and contributors are able to over-write and delete contributions made by others. Alternatively, auto-backup of the ECOUTER mind map data can be implemented at this stage using the Mind42 API, including a snapshot of the initial seeded mind map. An example back-up script is available on the ECOUTER Github repository.\n\nECOUTER facilitators invite registered stakeholders to contribute to the seeded mind map and moderate contributions, checking data backups periodically whilst the ECOUTER is running. Once the ECOUTER period is finished, facilitators close the mind map to new contributions and check the final data backups. An open source script available on the ECOUTER Github repository is used to flatten the native Mind42 data file, creating a human-readable table (.csv) of the mind map metadata, individual text contributions and preserving the final map structure. The data are then imported into computer assisted qualitative research tools (e.g.
NVivo) for analysis, with a second copy archived.\n\nImplementation face to face. A face-to-face implementation has been developed to run an ECOUTER over a shorter time period from hours to one day. It has the benefits of allowing people to contribute to discussions within an exhibition-style setting, which may be placed in a high traffic public place, conference or exhibition venue.\n\nECOUTER facilitators initiate, manage and arrange data backups in the same manner as outlined previously. Facilitators create a number of generic ECOUTER participant accounts on Mind42 through which individuals can contribute to the mind map. Internet enabled laptops and/or tablets, provided as part of an ECOUTER exhibition stand are each logged into the ECOUTER using these accounts on Mind42, thus allowing anonymous contributions to the mind map.\n\nIn an exhibition setting it is also possible to publish the mind map online on the Mind42 website so that it is publicly viewable (read-only) including its live evolution. The live mind map can then be displayed using a large-screen television or monitor at the exhibition stand or made available to participants via a QR code or similar.\n\n\nDiscussion and conclusions\n\nThe ECOUTER method utilising Mind42 has now been implemented and piloted five times:\n\n1. September to November 2014 in collaboration with the Public Population Project in Genomics and Society, Montreal, Canada. What are the ethical, legal and social issues related to trust and data linkage? Online, internationally available ECOUTER implementation over a period of several weeks.\n\n2. November 2014 during the ESRC Festival of Social Research, Bristol. Your medical records - hand over or hands off? Facilitated digital face-to-face ECOUTER implementation in a public space on one Saturday in a busy shopping centre.\n\n3. June 2015 during the Translation in Healthcare conference, Oxford. 
Translation and emerging technologies: what are your views on the social, ethical and legal issues? Digital face-to-face ECOUTER implementation during the lunch break of an international academic conference.\n\n4. July 2015 during the BioSHaRE tool roll out meeting, Milan. BioSHaRE Tools - Where to now? Manual ECOUTER implementation (paper-based, without Mind42) during a day-long workshop.\n\n5. May to November 2016 during the data collection clinic of the Avon Longitudinal Study of Parents and Children (ALSPAC, publicly known as Children of the 90’s) cohort study asking study participants What areas would you like Children of the 90s to research? Online ECOUTER over a period of six months.\n\nExperience from the above pilots established the efficacy of using the free mind mapping platform Mind42 during an ECOUTER. While it was an appropriate solution for the initial specification, the pilots did, however, highlight a series of critical limitations and technical issues that need to be resolved before the full potential of the ECOUTER methodology can be realised.\n\nMind42 is free to use because it generates revenue via targeted advertising, with ECOUTER initiators having no control over the advertisements users are exposed to. Advertising has the potential to distract or influence ECOUTER contributors; users are able to remove adverts only by paying a fee to Mind42. In addition, there are concerns around confidentiality and analysis given that the data sits with Mind42, a company located in Austria. Data are therefore subject to Austrian law.\n\nFurthermore, several additional features are required within a collaborative mind mapping tool tailored to the ECOUTER process, to facilitate and strengthen data analysis. These include:\n\nEnhancements to facilitate ECOUTER management.\n\nAdvanced permission management is essential during an ECOUTER to manage users and user groups.
This could be used to help define administrator, moderator and contributor roles during an ECOUTER and ensure secure use of the mind map (e.g. preventing contributions from being modified or deleted by others).\n\nEnhancements to user experience.\n\nAdvanced mind map formatting and customisation will enhance readability and user experience. Mind42 has limited formatting capabilities, with only basic methods to format the size of text and colour of mind map branches. As an ECOUTER mind map grows, it can become difficult to navigate the volume of contributions without the use of more advanced formatting features such as bold or italic faces, multiple fonts, font size and text colour. Furthermore, it would be useful for researchers to customise publication grade mind maps for visual impact.\n\nAgree/disagree buttons would allow contributors to agree/disagree with contributions made by others and to enable researchers to gauge agreement with a comment among the stakeholder community.\n\nEnhancements for ECOUTER analysis.\n\nCategorisation of contributors would provide researchers with additional information about participants which may be relevant to the ECOUTER question (e.g. level/area of expertise, gender, age) whilst still retaining their anonymity.\n\nAdvanced analytics such as activity auditing would assist researchers in understanding and evaluating how the mind mapping tool is used by contributors during an ECOUTER process.\n\nFinally, reliance on third party freeware poses risks to long life-cycle research projects because the software may change substantially in functionality and/or terms and conditions. The software may also shut down, cease to be maintained, or leave software errors unfixed. An open source self-built solution may be preferable for long term sustainability as an ECOUTER tool and mind mapping service that addresses both researcher and user requirements.\n\n\nData and software availability\n\n1.
The latest versions of the ECOUTER implementation scripts are available from the ECOUTER Github repository: https://github.com/beccawilson/ecouter\n\n2. Link to the software repository doi on Zenodo: http://dx.doi.org/10.5281/zenodo.513523\n\n3. Software license: GPLv3", "appendix": "Author contributions\n\n\n\nMM developed the ECOUTER process. RW developed technical and implementation methodology, contributed to the ECOUTER Github repository and wrote the ECOUTER wiki. OB and TC contributed to the ECOUTER Github repository. JM developed and monitored ECOUTER processes during pilot implementation. All authors reviewed and edited the manuscript for important intellectual content.\n\n\nCompeting interests\n\n\n\nThe authors have no professional or personal connections to Mind42 (http://www.Mind42.com.)\n\n\nGrant information\n\nThe research leading to these results was supported by the Biobank Standardisation and Harmonisation for Research Excellence in the European Union (BioSHaRE-EU) program which received funding from the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement no 261433, and the University of Bristol’s ‘Thinking Futures Festival’ which received funding from the Economic and Social Research Council, Festival of Social Science.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors are grateful to all those who have supported the development of ECOUTER through their contributions in the Exploring innovative mechanisms to build trust in human health research biobanking (Brocher Foundation workshop, June 2013) and the participants and supporting institutions involved in the ECOUTER use cases: Avon Longitudinal Study of Parents and Children (ALSPAC), University of Bristol; Economic and Social Research Council (ESRC) Festival of Social Research; ELSI 2.0: International Collaboratory for Genomics and Society; Centre for 
Health, Law and Emerging Technologies (HeLEX), University of Oxford; and, Public Population Project in Genomics and Society.\n\n\nReferences\n\nThomson R, Murtagh M, Khaw FM: Tensions in public health policy: patient engagement, evidence-based public health and health inequalities. Qual Saf Health Care. 2005; 14(6): 398–400. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGlaser BG: The constant comparative method of qualitative analysis. Soc Prob. 1965; 12(4): 436–445. Publisher Full Text\n\nWilson RC, Butters OW, Clark T, et al.: ecouter: ECOUTER implementation scripts. Zenodo. 2016. Data Source" }
[ { "id": "14352", "date": "14 Jun 2016", "name": "Danya F. Vears", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper constitutes a valuable contribution to the literature. The title appropriately represents the article and the abstract nicely summarises the content of the paper. All components of the article have been explained to a high standard and are appropriate to the authors' assessment of the mind mapping tool as an appropriate digital methodology for the ECOUTER engagement process. The conclusions drawn are balanced and justified given the assessments made.\n\nI would encourage the authors to consider elaborating on the following minor points for clarity:\n\nThe paper states that the participants' remain anonymous to both the other participants and also the researchers. However, when implementing ECOUTER online using Mind42, an account is generated for the participants using their email address (which often includes participants' names). Can the administrator see who has registered? Can they see which participants are saying what and if they are part of the research team do they think this has any impact on the study?\n\nHow can it be assured that the seeded ECOUTER mind map is unbiased and representative of the current state of the literature/debate on the topic?\n\nOn page 3 (column 2, paragraph 3) the authors state that the facilitator moderates contributions. 
Could they very briefly clarify what this involves?\n\nIn the discussion and conclusions, they list 5 successfully implemented pilot studies, the last of which is stated to be running from May to November 2016. Is this year correct? If so, I would exercise caution in calling something that was only set up one month ago and is set to run for another five months \"successful\".\nWell done to the authors on a very interesting initiative and a good assessment of the strengths and limitations of their current approach.", "responses": [ { "c_id": "2310", "date": "22 Nov 2016", "name": "Rebecca Wilson", "role": "Reader Comment", "response": "We are grateful for the comments provided by Dr Vears and have submitted revisions for the paper to clarify these points.  In particular we have re-structured the section Implementation Online and added additional text to describe how ECOUTER participants retain anonymity during contributions." } ] }, { "id": "14267", "date": "16 Jun 2016", "name": "Michael Morrison", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis article describes a novel method for generating responses, opinions and perspectives from stakeholders in a given issue, using a digital engagement processes. The article represents a useful overview of this particular technique and represents a worthwhile contribution to what is an increasingly important area of research.
This appraisal comes with the caveat that the article could be improved by addressing the following issues relating to the article content:\n\nGiven that a forthcoming paper describing the ECOUTER process and authored by the same research group is explicitly referenced in the introduction, it would be helpful to have a clearer delineation of how the article in F1000Research is distinct from the forthcoming paper (i.e. what is the specific aim of this paper), or why two separate papers describing the ECOUTER process are warranted.\n\nThe authors state that mind-mapping data generated through the ECOUTER process are analysed using ‘social science methods’. This description is unduly vague. As the ECOUTER process generates primarily textual data, it would seem that methods of textual analysis, whether qualitative (e.g. grounded theory-driven thematic analysis) or quantitative (e.g. content analysis) are the most relevant and most applicable subset of social sciences methods (with appropriate methodological additions if visual data is also collected).\n\nThe concept of ‘mind mapping’ is employed but is not explained. It is not necessarily self-evident and should be explained in more depth or a suitable reference provided.\n\nImplementation online: As pointed out by other reviewers, email addresses can be used to identify individuals especially when combined with other data.", "responses": [ { "c_id": "2311", "date": "22 Nov 2016", "name": "Rebecca Wilson", "role": "Reader Comment", "response": "We are grateful for the comments provided by Dr Morrison and Dr Finlay and have submitted revisions for the paper to clarify these points.  In particular we have added additional text to clarify the scope of this digital methods paper and that of the theory paper in press by Murtagh et al." } ] } ]
1
https://f1000research.com/articles/5-1307
https://f1000research.com/articles/6-41/v1
12 Jan 17
{ "type": "Research Article", "title": "Genome-wide measurement of spatial expression in patterning mutants of Drosophila melanogaster", "authors": [ "Peter A. Combs", "Michael B. Eisen" ], "abstract": "Patterning in the Drosophila melanogaster embryo is affected by multiple maternal factors, but the effect of these factors on spatial gene expression has not been systematically analyzed. Here we characterize the effect of the maternal factors Zelda, Hunchback and Bicoid by cryosectioning wildtype and mutant blastoderm stage embryos and sequencing mRNA from each slice. The resulting atlas of spatial gene expression highlights the intersecting roles of these factors in regulating spatial patterns, and serves as a resource for researchers studying spatial patterning in the early embryo. We identify a large number of genes with both expected and unexpected patterning changes, and through integrated analysis of transcription factor binding data identify common themes in genes with complex dependence on these transcription factors.", "keywords": [ "Drosophila melanogaster", "embryo", "Zelda", "Hunchback", "Bicoid" ], "content": "Introduction\n\nIn the early Drosophila melanogaster embryo, the spatially restricted activity of several maternally deposited factors establishes the main body axes of the animal by triggering cascades of patterned gene expression. Until recently, however, it was not practical to systematically characterize the effects of these factors on patterned expression, as the dominant method for studying spatial expression, in situ hybridization, does not scale well.\n\nWe previously introduced a method for the genome-wide measurement of spatial patterns of expression in the Drosophila embryo based on cryosectioning individual embryos along the anteroposterior axes and sequencing the mRNA from each slice. 
Here we extend this method to characterize embryos mutant for three maternal transcription factors (TFs): Bicoid, Zelda, and Hunchback.\n\nAll of these factors are crucial for proper patterning of the embryo. Bicoid is the primary, maternally provided anterior specifier that directly regulates many key gap and pair-rule genes1–4. More recently, Zelda (also known as vfl) has been identified as an important factor in establishing cis-regulatory chromatin domains of patterned genes5–12. Finally, Hunchback is both maternally and zygotically expressed, and helps to specify the expression domains and levels of various gap and pair-rule genes in the thorax4,13,14. Although broad-reaching examinations of the effects of mutating these genes on their targets have not previously been possible, mutating bicoid results in the anterior adopting a posterior-like fate15,16, mutating zelda leads to shifts of expression patterns in time and space12, and Hunchback binding has been implicated in multiple eve stripes, among many other expression patterns4,17; all of these mutations are lethal. Given the crucial roles of each of these factors in spatial patterning, we expected that perturbing their levels would lead to widespread direct and indirect effects on patterned genes.\n\n\nResults\n\nWe sliced embryos and sequenced the resulting mRNA from 4 mutant genotypes (Figure 1A): a zld germline clone, an RNAi knockdown for bcd, a knockdown for hb, and an overexpression line for bcd with approximately 2.4× wildtype expression. We chose two time points: cycle 13 (determined using nuclear density of either DAPI stained embryos or of the Histone-RFP present in the zld line) and mid-to-late cycle 14 (determined using 50–65% membrane invagination at stage 5) (Figure 1B).
Genes expressed in cycle 13 are towards the end of the early round of genome activation and are enriched for Zld binding10,19, but are early enough that the majority of patterning disruptions are likely to be direct effects of the mutants. By contrast, we chose the stage 5 time point in order to highlight the full extent of the patterning changes across the network.\n\nA) We fixed embryos in methanol, then selected individual embryos at the correct stage, aligned them in sectioning cups, and sliced to the indicated thickness. We extracted RNA from individual slices, prepared barcoded libraries, then pooled them prior to sequencing. B) Overview of the mutant genotypes used. Two replicates at each of two time points, based on nuclear density and morphology. C) Cartoon of heatmaps. Each genotype is assigned its own color (matching those in B), with darker colors representing higher expression and white representing no expression detected in that slice. Each boxed column represents a single individual, and within that column, slices are arranged posterior to the left and anterior to the right.\n\nIn order to show the range of patterning differences observed, we generated heatmaps of all the gene expression present in the dataset (Figure 2). Of the 7104 genes with at least 15FPKM in at least one slice, approximately 3000 had uniform expression in all the wild-type embryos that was not greatly perturbed in any of the mutants. The total number of expressed genes is very consistent with previous estimates of the number of maternally deposited and zygotically transcribed genes19.
Within a genotype, darker colors correspond to higher expression and white to zero expression, on a linear scale normalized for each gene separately to the highest expression for that gene in the embryo or to 10FPKM, whichever is greater. Slices that did not match quality control standards are replaced by averaging the adjacent slices, and are marked with hash marks. Rows are arranged by using Earth Mover’s Distance to perform hierarchical clustering, so that genes with similar patterns across all of the embryos are usually close together.\n\nThe set of genes with anterior or posterior localization recapitulate the known literature16 and general expectations in the bcd- case: those expressed in the anterior typically lose expression (Figure 3A), and those in the posterior also frequently gain an expression domain in the anterior (Figure 3B). Surprisingly, most of these patterns are qualitatively unaffected in the other mutants. In the absence of zld, most of these genes are able to retain the proper anterior patterning (although they may have differences in expression levels). Similarly, these genes seem not to be strongly dependent on maternal Hb for patterning information, with most genes retaining a distinct anterior expression domain. As described in Liang et al.12, there are some genes that are normally ubiquitously expressed in the wild-type that become localized to the poles in the zld- embryo (Figure 3C).\n\nWe manually selected subsets of the larger heatmap in Figure 2 that showed clear differences between the genotypes. Each individual embryo is represented by one boxed column in the heatmap. Within a column, slices are arranged anterior to the left, and posterior to the right. Each embryo is colored according to genotype, with green for the bcd over-expression, blue for wild-type, magenta for bcd knockdown, cyan for hb knockdown, and orange for the zld mutant. 
Within a genotype, darker colors correspond to higher expression and white to zero expression, on a linear scale normalized for each gene separately to the highest expression for that gene in the embryo or to 10FPKM, whichever is greater. Genes identified in Liang et al.12, Nien et al.53, and Staller et al.16 as responsive to either Bcd or Zld have red labels. A) Genes with anterior expression in the wild type embryos. B) Genes with posterior expression in wild type embryos. C) Genes with broad expression in wild type and anterior expression in later stage zld- embryos.\n\nWe next compared expression patterns from each of the mutant lines at late cycle 14 to similar expression patterns in wild-type. Because the number of slices differs both between the wild-type and mutant flies, and between replicates of the mutant flies, we decided to use Earth Mover’s Distance (EMD) to compare patterns20. This metric captures intuitive notions about what kinds of patterns are dissimilar, yielding higher distances for dissimilar distributions of RNA, and zero for identical distributions. Patterns were normalized to have the same maximum expression, in order to highlight changes in positioning of patterns, rather than changes in absolute level. In contrast to traditional RNA-seq differential expression metrics, this approach takes advantage of the spatial nature of the data, and with the fine slices, adjacent slices are able to function as “pseudo-replicates”. Adjacent slices are, on average, much more similar than those from farther away in the same embryo (Figure S1).\n\nThe overall level of divergence in pattern across all genes is, in most cases, slightly larger than when comparing nearby time-points in wild-type or replicates of the same genotype and time point (Figure 4). Notably, the zld- mutant is more similar to wild-type than the mutants of the other, spatially distributed transcription factors.
This suggests that Zld is a categorically different TF, consistent with its role as a pioneer factor rather than a direct activator. However, the low level of divergence is a reflection of the fact that the majority of genes are not dynamically expressed in either time or space.\n\nAdjacent time points from the wild-type dataset in Combs and Eisen44 are colored green, and replicates of the same genotype and time point are colored blue. Median distances are marked in red. A) Cycle 13 and B) mid cycle 14.\n\nIn order to demonstrate that these mutants are more likely to affect already known Bcd regulatory systems, we examined genes that were close to 64 Bcd dependent enhancers previously identified21–27. Although the bulk of these enhancers do not have validated associations with particular genes, we assumed that they would be relatively close to the genes that they drive. Of the 66 genes whose transcription start sites (TSSs) were the closest in either direction and within 10kb of the center of the tested CRM, only 32 were expressed at greater than 10FPKM in at least one slice of any of the wild-type embryos. Of these, only 10 had an obvious anterior localization bias (31%), with the majority of the rest being approximately uniformly expressed across the embryo (Figure 5). The majority of genes with ubiquitous or central localization did not radically change in either the Bicoid overexpression or knockdown conditions. As expected, genes with anterior localization suffered a loss of patterning in the depletion mutant, and a posterior shift in the over-expression condition. We assume that genes that are not localized to the anterior are either driven by multiple enhancers, such that loss of expression from one does not severely affect the overall expression, or that they are merely close to the enhancer, but unrelated.\n\nA) Each individual is represented in its own heatmap.
The magenta heatmaps are from the bcd- embryos, blue from wild-type, and green from 2.4× bicoid. B) Each gene with anterior localized expression in WT, with data from each individual as its own row, to highlight position changes across the mutant genotypes.\n\nWe next sought to demonstrate that the technique of cryoslicing mutants is useful for identifying the effects of these early patterning genes. In comparison to Figure 3, where we looked for known patterning changes that we would expect from the literature, we also wanted to make sure that the largest and most common patterning changes that naturally arise from the data recapitulate the known literature. For each mutant genotype, we identified the 100 genes with the largest patterning change in that genotype, then averaged the pattern in all of the cycle 14 embryos (Figure 6).\n\nEach individual is represented in its own heatmap. The magenta heatmaps are from the bcd- embryos, blue from wild-type, and green from 2.4× bicoid. A–D) The average pattern in each of the embryos of the 100 genes with the greatest change in bcd knockdown (A), bicoid overexpression (B), hunchback knockdown (C), and zelda knockout (D).\n\nUnsurprisingly, depletion of TFs known to be important for patterning is likely to make an otherwise non-uniform pattern more uniform (Figure 1). Of the 465 genes that have clearly non-uniform patterning in the wild-type at cycle 14D, 12–20% are affected in each depletion mutant, either losing expression entirely or becoming uniform. The over-expression line is at the low end of this range, also at 12%.\n\nWe measured EMD for each gene at cycle 14D in each genotype compared to a uniform distribution. We considered genes uniform if they had an EMD<0.04, and non-uniform if they had an EMD>0.08.
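The EMD-based uniformity call described above can be sketched in pure Python. This is a minimal illustration, not the authors' code: it assumes each expression profile has already been interpolated to a common set of equally spaced slices, and the helper names (`emd_1d`, `classify_uniformity`) are hypothetical.

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D expression profiles.

    Each profile is normalized to sum to 1 and treated as a distribution
    over equally spaced slice positions; the result is in units of embryo
    lengths (0 = identical, up to ~1 = all mass moved end to end).
    """
    assert len(p) == len(q), "profiles must share a common slice grid"
    p = [x / sum(p) for x in p]
    q = [x / sum(q) for x in q]
    running, total = 0.0, 0.0
    for a, b in zip(p, q):
        running += a - b          # running difference of the two CDFs
        total += abs(running)     # mass * distance moved, slice by slice
    return total / len(p)

def classify_uniformity(profile, lo=0.04, hi=0.08):
    """Apply the paper's thresholds: EMD<0.04 uniform, EMD>0.08 non-uniform."""
    n = len(profile)
    d = emd_1d(profile, [1.0 / n] * n)
    if d < lo:
        return "uniform"
    if d > hi:
        return "non-uniform"
    return "ambiguous"
```

A flat profile scores 0 against the uniform distribution and is called "uniform", while a sharp anterior spike lands well above the 0.08 cutoff.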
We then considered genes with at least 15 FPKM in at least one slice in both wild-type and the mutant line.\n\nHowever, this is not always simply abrogating expression—a large number of genes seem to have higher expression everywhere. In the case of bcd depletion, approximately a third of these cases are genes that are restricted to the anterior in wild-type that become approximately uniform throughout the embryo (Figure 7). While some of these are due to genes with an early uniform pattern that fails to properly resolve into spatially restricted domains, approximately half are true ectopic expression (Figure S2).\n\nEach embryo is normalized independently.\n\nAs a first step to identifying likely regulatory motifs, we used binding data for 9 non-pair-rule AP TFs28 and for Zld10 to search for factors with differential rates of binding among the sets of genes with patterning changes (Figure 2). This analysis highlights that Zelda operates in a qualitatively different manner from the other transcription factors—in its absence other TFs are likely to continue expression, though in abnormal patterns. Additionally, Zld is crucial for maintaining patterned expression, as the most common change is from patterned genes integrating one or more AP factors to minimal overall expression.\n\nUsing the genes with identified patterning changes in Figure 1, we performed a χ2 test with a Bonferroni-corrected p-value of 0.05.\n\nFurthermore, bicoid stands out as a major factor involved in AP patterning. In all of the mutant conditions except bcd and zld depletion, having a Bcd binding site is associated with an increase in patterned expression. In all of the conditions, a Bcd binding site is associated with a ubiquitously expressed gene becoming patterned, and this pattern is often anterior expression.\n\nIn addition to patterning changes, some genes with ubiquitous localization actually showed the same response in absolute level as a result of both bcd depletion and over-expression.
Of these genes, 1002 showed at least 1.5 fold higher expression in both conditions, and 414 showed a 1.5 fold decrease in expression. Such a scenario suggests that these genes are, at wild-type levels, tuned to a particular level of Bicoid expression.\n\nIt is difficult to reconcile increases of expression in the posterior with any local model of transcription factor action. Bcd protein is only present at approximately 5nM at 50% embryo length, and at negligible levels more posteriorly29. It is conceivable that Bcd activates a repressor gene somewhere in the anterior, which then diffuses more rapidly than Bcd to cover at least some of the posterior of the embryo. Nevertheless, there have previously been hints that Bicoid can function as far to the posterior as hairy stripe 730.\n\nWe next asked whether patterning changes in one genotype could be used to predict whether the pattern changes in another. Therefore, we plotted the EMD between wild-type and the bicoid RNAi line on the X axis, and wild-type to the zelda GLC on the Y axis (Figure 8). Unsurprisingly, the majority of genes did not change, but of those that did, only a small fraction of them changed in one condition but not the other (the blue and green regions near the axes). We grouped genes according to whether they were in the top 20% of the EMD distribution for each genotype independently, then performed a Pearson’s χ2 test of independence of change in bcd- versus zld-. The result was highly significant (p<1×10^-100), with the largest overrepresentation coming from the case where both changed. Repeating this across all combinations of wild-type and two other mutant genotypes yielded the same results: in every case, there were between 2.2 and 2.7 times as many genes that changed in both categories as would be expected (Figure S3).\n\nChange versus the wild-type is plotted on the x and y axes.
Each point is colored according to its ΔD score, calculated in Equation 1, in order to highlight genes that change differently between the two conditions.\n\nOf these genes that do change in both conditions, the majority changed in effectively identical ways. We computed a modified EMD that down-weights genes that are very similar to wild-type in at least one mutant genotype:\n\nΔD = EMD(M1, M2) − |EMD(M1, WT) − EMD(M2, WT)| (1)\n\nwhere EMD(x, y) is the Earth Mover’s Distance between identically staged embryos of genotype x and genotype y. Even among only the set of genes that change in both conditions, ΔD is small (mean of 3.5%, 95th percentile of 11.9%)—equivalent to a shift of the entire pattern by about 1 or 2 slices in either direction. However, there are 13 genes that change differently between wild-type, bcd-, and zld- (ΔD>20%). These genes have noticeably different patterns in all three genotypes (Supplemental Figure S4).\n\nWe sought to understand what is different about genes with a high ΔD, compared to those that change relative to wild-type, but have a low ΔD (that is, those that change in the same way in response to distinct mutant conditions). We found that genes with a high ΔD score were strongly enriched for a number of TF binding sites (Table 3 and Table S1).\n\nχ2 test results for TF binding within 10kb of the TSS for the wild-type/bcd-/zld- three-way comparison. We examined the top 50 genes by ΔD, compared to the 200 genes closest to the median ΔD of genes that change in response to both mutations. Base frequency indicates the fraction of genes expressed at this time point with at least one ChIP peak for that TF. In this comparison, only Dichaete and Zelda binding were not significant at a Bonferroni-corrected p-value of 0.05.\n\nNext, we binned genes by ΔD score, then examined trends in combinatorial transcription factor binding. As ΔD score increases, genes are more likely to be bound by multiple TFs (Figure 9).
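Equation 1 can be sketched directly in Python. This is a minimal illustration under the assumption that the three profiles for a gene have already been interpolated to a common number of slices; the function names are illustrative, not the authors' code.

```python
def emd_1d(p, q):
    """1-D Earth Mover's Distance via cumulative sums of normalized profiles."""
    p = [x / sum(p) for x in p]
    q = [x / sum(q) for x in q]
    running, total = 0.0, 0.0
    for a, b in zip(p, q):
        running += a - b
        total += abs(running)
    return total / len(p)

def delta_d(m1, m2, wt):
    """Equation 1: EMD(M1, M2) minus |EMD(M1, WT) - EMD(M2, WT)|.

    High when the two mutants diverge from wild-type in *different*
    directions; near zero when both change the same way, or when one
    mutant simply matches wild-type (so its divergence from the other
    mutant is explained by the other mutant's divergence from wild-type).
    """
    return emd_1d(m1, m2) - abs(emd_1d(m1, wt) - emd_1d(m2, wt))
```

For a gene shifted anteriorly in one mutant and posteriorly in the other, ΔD is large; if both mutants are identical, ΔD is exactly zero.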
Due to the high background rate of binding, assaying the presence of at least 3 factors is not readily able to distinguish between genes with high and low ΔDs, as nearly 70% of all genes expressed have at least 3 TFs bound. Assaying for the presence of more factors is better able to identify which genes are likely to change, and the top 50 genes all have at least 8 factors bound.\n\nWe grouped genes into non-overlapping windows of 50 genes by ΔD score, and calculated the fraction of those genes with at least 3, 4, etc., of the 10 early AP TFs bound (including Zld). We also plotted a simple linear regression on the binned points.\n\nWe sought to understand the extent to which genes with the same pattern of upstream regulators had the same responses to perturbation. We grouped genes according to the complement of ChIP-validated TF binding sites near that gene, then examined the patterning changes. Although with 10 different TFs there are potentially over one thousand distinct combinations of binding patterns, in practice the dense, combinatorial patterns found around patterning enhancers reduce this set to a much more manageable 157 different combinations, of which only 52 had at least 30 genes.\n\nWithin these sets of genes with similar TF binding profiles, we then asked whether the distribution of patterning changes was any different from the distribution of patterning changes for all genes. For each gene, we summed the EMD scores for the 2.4×bcd, bcd-, and zld-, then performed a KS-test between the summed EMD scores of genes with a given binding pattern and the summed scores for all expressed genes. We found only 2 binding patterns with a Bonferroni-corrected p-value less than 0.05. Both of these sets were highly bound, and they were also very similar to each other in their binding, differing only in the presence of a Knirps (Kni) binding site (Figure 10).\n\nDespite the similar binding patterns near these genes, there is a wide range of responses.
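The per-binding-pattern screen described above can be sketched in pure Python. This is an illustration under stated assumptions, not the authors' pipeline: it computes the two-sample KS statistic and applies a Bonferroni cutoff to already-computed p-values (converting the KS statistic to a p-value, which the paper's analysis would do, is omitted here), and all names and data are hypothetical.

```python
from bisect import bisect_right

def ks_statistic(sample, reference):
    """Two-sample Kolmogorov-Smirnov statistic.

    The maximum absolute gap between the two empirical CDFs, evaluated
    at every observed value; 0 for identical samples, 1 for disjoint ones.
    """
    a, b = sorted(sample), sorted(reference)
    d = 0.0
    for v in a + b:
        d = max(d, abs(bisect_right(a, v) / len(a) - bisect_right(b, v) / len(b)))
    return d

def bonferroni_hits(pvalues, alpha=0.05):
    """Keep only groups significant after Bonferroni correction.

    `pvalues` maps a binding-pattern label to its raw KS-test p-value;
    the cutoff is alpha divided by the number of patterns tested.
    """
    cutoff = alpha / len(pvalues)
    return sorted(k for k, p in pvalues.items() if p < cutoff)
```

For example, with three tested patterns the Bonferroni cutoff is 0.05/3, so a raw p-value of 0.04 no longer counts as significant.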
The wild-type expression patterns run nearly the complete gamut, including uniform expression, anterior stripes, posterior stripes, and central expression domains. Additionally, the presence of a Kni site seems to yield an increased number of genes with an anterior expression domain.\n\n\nDiscussion\n\nWe have generated a dataset that is unparalleled in its coverage of patterning changes in mutant conditions. When these patterning mutants have been described previously, either major morphological readouts like cuticle staining or in situ hybridization have been used to illustrate the effects on downstream target genes12,16,31. However, in situ hybridization suffers from a strong selection bias in the genes that are chosen. By assaying spatial differences in the patterning of every gene in the genome, we demonstrate the full effect that these TFs have on developmental gene expression networks.\n\nDespite the importance of the factors we chose for establishing spatially and temporally correct patterning, only a relatively small number of genes have significant expression pattern changes. Many of the targets that do show clearly abnormal expression patterns are, themselves, key transcription factors. This suggests that, even though key, maternally provided patterning factors bind to thousands of places throughout the genome28, many of those binding sites are not functional in any meaningful sense. Certainly some of this binding is due to artifacts in the ChIP data, and even reproducible, non-artifactual binding should not be confused with function32,33.
However, the fact that genes near binding sites for multiple factors tend to have more complicated responses to mutation suggests that there is some truth to the idea that gene regulation in complex animals tends to be combinatorial, even if the ChIP data are imperfect.\n\nWe were surprised by how much proper bicoid expression seems to be required for proper patterning at all points along the embryo, not just in the anterior. The fact that bcd is normally understood to be an activator, while the plurality of genes with higher, ubiquitous expression in the mutant are normally localized to the anterior in wild-type, suggests that this effect is mediated through one or more repressors that depend on bcd. As one of three TFs overrepresented at genes with this phenotype, gt is likely to be involved in this global derepression, but since it is itself neither ubiquitous throughout the embryo nor universally bound at the genes that change, it is likely not the only player.\n\nThe mutants we examined seemed to produce very similar changes in their downstream targets, despite the wild-type TFs having widely varying spatial distributions. Our initial expectation was that there would be many more ways to fail to properly pattern expression, and that different mutations would have different average effects from each other. Indeed, relying on different mutations having different responses has been the key to genetic analysis of fine scale patterns such as the eve stripes17,34–36. Although averaging across the most different genes in a mutant genotype does yield different patterns (Figure 6), for any given gene, excursions from the correct spatial expression pattern seem to be largely canalized (Figure 8).
This seemingly canalized expression change may be a consequence of the types of genes we can easily measure patterning changes among—we cannot resolve individual pair-rule stripes, for instance—so genes with coarser patterns may be more likely to have a single “failure” phenotype, as compared to those with finer patterns, which have more layers of regulation to perturb.\n\nWe do recognize a number of distinct limitations of this dataset for predicting gene expression change as a function of mutation. The spatial resolution is still much coarser than in situ hybridization-based experiments. This is especially concerning near regions where there are fine stripes of expression, which cannot be resolved between adjacent slices, or at regions where there is a transition between expression domains, where it is possible that the slicing axis is not perfectly aligned with the domain border. Finally, it is worth remembering that especially in the later stages examined, the gap gene positions will also be perturbed, so any observed changes in pattern positioning are likely to be a combination of direct effects and downstream effects of the original mutation.\n\nA number of recent studies have used various technical or experimental techniques to improve the resolution of RNA-seq maps of gene expression in developing embryos. Iterated sectioning of different embryos in all three dimensions can be deconvolved to yield estimates of the original pattern37. Similarly, sequencing mRNA from dissociated nuclei allows for the maximum possible spatial resolution, assuming the original location of those nuclei can be estimated38,39. While these approaches are worthwhile for establishing a baseline map of expression patterns in wild-type embryos, the expense of sequencing still makes single-dimensional studies worthwhile.
Furthermore, the single-cell approaches in Satija et al.38 and Achim et al.39 require some prior knowledge of spatial gene expression, which may be significantly perturbed in patterning mutants. Other approaches for multiplexed in situ profiling of mRNA abundance have been described, but are not yet cheap or reliable enough to be readily useful for screening mutants40,41.\n\nAdditionally, the time and expense required for a single individual necessarily means that we have profiled only a small number of individuals. We were therefore careful to choose only highly penetrant mutations for analysis, and to choose individuals at as similar a stage as possible. However, even for genes with a consistent, precise time-dependent response between individuals, the differences in staging are likely to be a significant contributor to variation. Furthermore, we only examined two relatively distant time points in this study (approximately 45 minutes apart), making comparisons across time fraught at best.\n\nNevertheless, this experiment suggests a number of genes for more detailed follow-up studies. As our predictive power for relatively well-studied model systems, such as the eve stripes, improves, it will be especially important to take these insights to other expression patterns in the embryo. The risk of over-fitting increases with the depth of study of any particular model system, even if any given study is relatively well controlled. Therefore, by demonstrating that particular insights hard-won in these model systems are broadly applicable, we can gain some confidence in the results, and we approach having a rigorous, broadly applicable predictive model of gene regulation.\n\nUltimately, we believe more datasets addressing chromatin state in response to different conditions will be necessary for accurate prediction of spatial responses to mutation. 
In a ChIP-seq dataset on embryos with different, uniform levels of bicoid expression, hundreds of peaks seem to vary with differing affinities to Bcd protein (Colleen Hannon and Eric Wieschaus, personal communication, March 2015). The zygotically expressed genes near these differential peaks also have different spatial localization in wild-type, and different average responses to the mutants presented here. In addition to spatially resolved expression measurements, spatially resolved binding and chromatin accessibility data will likely be necessary. While ChIP-seq experiments currently require several orders of magnitude more input material than can be reasonably collected from spatially resolved samples, recent method developments in measuring chromatin accessibility have shown that it is possible to collect data from as few as 500 mammalian nuclei42. A similar amount of DNA is present in a single Drosophila embryo, which suggests that spatially resolved chromatin accessibility data may be achievable.\n\n\nMaterials and methods\n\nZelda germline clone flies (w zld- FRT/FM7a; His2Av RFP) were a gift of Melissa Harrison, and were mated and raised as described previously10. Embryos were collected from mothers 3–10 days old.\n\nThe bcd and hb RNAi flies were constructed as described previously43 and were obtained from the DePace Lab at Harvard Medical School. Briefly, we generated F1s from the cross of maternal tubulin Gal4 mothers (line 2318) with UAS-shRNA-bcd or UAS-shRNA-hb fathers (lines GL00407 and GL01321, respectively), then collected embryos from the sibling-mated F1s. In order to take advantage of the slowed oogenesis and resulting greater RNAi efficiency, we aged the F1 mothers for approximately 30 days at 25°C.\n\nThe bcd overexpression lines were a generous gift of Thomas Gregor at Princeton University. We used line 20, which has 2.4× wild-type levels of an eGFP-Bcd fusion. 
Flies were kept in uncrowded conditions, and embryos were collected at 25°C from 3–7 day old mothers.\n\nWe washed, dechorionated, and fixed the embryos according to our standard protocol (see 44), incubated in 3 µM DAPI for 5 minutes, washed twice with PBS, and then imaged on a Nikon 80i microscope with a Hamamatsu ORCA-Flash4.0 CCD camera. We did not DAPI stain the zld- embryos because they had a histone RFP marker. After selecting embryos of the appropriate stage according to the density of nuclei in histone-RFP or DAPI staining and membrane invagination for the cycle 14 embryos, we washed embryos with methanol saturated with bromophenol blue (Fisher), aligned them in standard cryotome cups (Polysciences Inc), covered them with VWR Clear Frozen Section Compound (VWR, West Chester, PA), and froze them at -80°C.\n\nWe sliced the embryos as in Combs and Eisen44. Single slices were placed directly in non-stick RNase-free tubes (Life Technologies), and kept on dry ice until storage at -80°C.\n\nWe performed RNA extraction in TRIzol as previously described44. All RNA quality was confirmed using a BioAnalyzer 2100 RNA Pico chip (Agilent).\n\nWe generated libraries of the zld- embryos using the TruSeq mRNA unstranded kit (Illumina). As described previously, we added in 70 ng of yeast total RNA as a carrier and performed reactions in half-sized volumes to improve concentration44.\n\nWe generated libraries from the RNAi and overexpression embryos using the SMARTseq2 protocol; we skipped the cell lysis steps because RNA had already been extracted45,46. As described previously, tagmentation steps were performed at 1/5th volume to reduce costs46.\n\nAll data were compared to FlyBase genome version r6.03 (ftp://ftp.flybase.net/releases/FB2014_06/). Mapping was performed using RNA-STAR v2.3.0.147, and expression estimates were generated using Cufflinks v2.2.1 on only the D. melanogaster reads48. Reads from Combs and Eisen44 were re-mapped to the new genome version. 
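Downstream pattern analyses treat each embryo as a genes-by-slices matrix of FPKM values. As a hedged illustration only (hypothetical file names; not the study's actual pipeline code), per-slice Cufflinks gene-level tables, which report `tracking_id` and `FPKM` columns, could be combined into such a matrix like this:

```python
import csv

def read_fpkm_tracking(path):
    # Parse one Cufflinks *.fpkm_tracking table into {gene: FPKM}.
    fpkms = {}
    with open(path) as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            fpkms[row["tracking_id"]] = float(row["FPKM"])
    return fpkms

def build_expression_matrix(slice_paths):
    # Combine per-slice tables (ordered anterior to posterior) into
    # {gene: [FPKM in slice 1, FPKM in slice 2, ...]}; genes absent
    # from a slice's table are treated as unexpressed there.
    per_slice = [read_fpkm_tracking(path) for path in slice_paths]
    genes = sorted(set().union(*per_slice))
    return {g: [s.get(g, 0.0) for s in per_slice] for g in genes}
```

Each row of such a matrix corresponds to one gene's spatial profile along the anterior-posterior axis, the unit on which the heatmaps and pattern-distance comparisons in this paper operate after per-embryo normalization.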
When carrier RNA was used (data from Combs and Eisen44 and the zld- embryos), we discarded as ambiguous any reads whose best alignments to the two species differed by 3 or fewer mismatches. Due to the extensive divergence between the yeast carrier RNA and fly target RNA, the vast majority of mapped reads (>99.99%) were unambiguous as to the species of origin. After mapping, we removed samples that had fewer than 500,000 D. melanogaster reads, and samples with less than a 70% mapping rate when no carrier RNA was used; no other filtering or corrections were performed.\n\nSpecific analysis code was custom-written in Python 2.7.6. Custom analysis and heatmap generation code is available from https://github.com/petercombs/EisenLab-Code. All analyses presented here and all data figures were made using commit 2c144be (doi: 10.5281/zenodo.160787). EMDs were calculated using the python-emd package by Andreas Jansson (no version number available; version used archived under doi: 10.5281/zenodo.160797). Violin plots, histograms, and scatter plots were made using Matplotlib v1.4.2 and Numpy v1.9.2,49–51. Linear regressions (stats.linregress), Kolmogorov-Smirnov tests (stats.ks_2samp), and χ2 tests (stats.chi2_contingency) were performed using Scipy v0.14.0.\n\nNewly generated sequencing reads have been deposited at the Gene Expression Omnibus under accession GSE71137. Additional files and a searchable database are available at http://eisenlab.org/supplements/combs2016/.\n\n\nData availability\n\nNewly generated sequencing reads have been deposited at the Gene Expression Omnibus under accession GSE71137 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE71137).\n\nCustom analysis code: https://github.com/petercombs/EisenLab-Code\n\nArchived custom analysis code at the time of publication: http://dx.doi.org/10.5281/zenodo.160787", "appendix": "Author contributions\n\n\n\nConceived and designed the experiments: PAC MBE. Performed the experiments: PAC. Analyzed the data: PAC. 
Algorithms used in analysis: PAC. Contributed reagents/materials/analysis tools: PAC MBE. Wrote the paper: PAC.\n\n\nCompeting interests\n\n\n\nMBE is on the International Advisory Board for F1000.\n\n\nGrant information\n\nThis work was supported by a Howard Hughes Medical Institute investigator award to MBE. PAC was supported by the National Institutes of Health under a Genomics Training Grant (5T32HG000047-13).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Steven Brenner, Han Lim, and Lior Pachter for helpful comments on the manuscript as part of PAC’s dissertation.\n\n\nSupplementary material\n\nSupplemental Figure 1: Adjacent slices are more similar than distant ones. Violin plots of the Spearman Rank correlations between adjacent slices and pairs of slices separated by more than one third of the embryo length.\n\nClick here to access the data\n\nSupplemental Figure 2: Figure 7 normalized to expression in wild-type cycle 14D highlights absolute expression level changes. Slices with higher expression are clipped to the maximum expression in wild-type.\n\nClick here to access the data\n\nSupplemental Figure 3: Genes that change tend not to change in only one condition. Three-way comparisons, as in Figure 8, between wildtype and the remaining combinations of bcd depletion, bcd overexpression, hb depletion, and zld depletion. The wildtype-mutant comparison is indicated on each axis, and the color indicates the ΔD score between the two mutants. G20 indicates the bicoid overexpression line, line #20 from 52.\n\nClick here to access the data\n\nSupplemental Figure 4: Only a handful of genes change differently between the different conditions. In the WT vs bcd- vs zld- three-way comparison, only 13 genes had a ΔD score above 20%. 
Thumbnails indicate wild-type pattern in blue, bcd- pattern in both replicates in pink, and zld- pattern in orange. All expression is scaled to the highest in each individual.\n\nClick here to access the data\n\nSupplemental Table 1: TF binding is enriched near differentially changing genes across all three-way comparisons. χ2 test results for TF binding within 10kb of the TSS for the indicated three-way comparison. We examined the top 50 genes by ΔD, compared to the 200 genes closest to the median ΔD of genes that change in response to both mutations. Base frequency indicates the fraction of genes with at least one ChIP peak for that TF and that are expressed at this time point in all three conditions.\n\nClick here to access the data\n\n\nReferences\n\nBerleth T, Burri M, Thoma G, et al.: The role of localization of bicoid RNA in organizing the anterior pattern of the Drosophila embryo. EMBO J. 1988; 7(6): 1749–1756. PubMed Abstract | Free Full Text\n\nDriever W, Nüsslein-Volhard C: The bicoid protein is a positive regulator of hunchback transcription in the early Drosophila embryo. Nature. 1989; 337(6203): 138–143. PubMed Abstract | Publisher Full Text\n\nKraut R, Levine M: Spatial regulation of the gap gene giant during Drosophila development. Development. 1991; 111(2): 601–609. PubMed Abstract\n\nSmall S, Kraut R, Hoey T, et al.: Transcriptional regulation of a pair-rule stripe in Drosophila. Genes Dev. 1991; 5(5): 827–839. PubMed Abstract | Publisher Full Text\n\nSun Y, Nien CY, Chen K, et al.: Zelda overcomes the high intrinsic nucleosome barrier at enhancers during Drosophila zygotic genome activation. Genome Res. 2015; 25(11): 1703–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXu Z, Chen H, Ling J, et al.: Impacts of the ubiquitous factor Zelda on Bicoid-dependent DNA binding and transcription in Drosophila. Genes Dev. 2014; 28(6): 608–621. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi XY, Harrison MM, Villalta JE, et al.: Establishment of regions of genomic activity during the Drosophila maternal to zygotic transition. eLife. 2014; 3: e03737. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSatija R, Bradley RK: The TAGteam motif facilitates binding of 21 sequence-specific transcription factors in the Drosophila embryo. Genome Res. 2012; 22(4): 656–665. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKanodia JS, Liang HL, Kim Y, et al.: Pattern formation by graded and uniform signals in the early Drosophila embryo. Biophys J. 2012; 102(3): 427–433. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarrison MM, Li XY, Kaplan T, et al.: Zelda binding in the early Drosophila melanogaster embryo marks regions subsequently activated at the maternal-to-zygotic transition. PLoS Genet. 2011; 7(10): e1002266. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarrison MM, Botchan MR, Cline TW: Grainyhead and Zelda compete for binding to the promoters of the earliest-expressed Drosophila genes. Dev Biol. 2010; 345(2): 248–255. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiang HL, Nien CY, Liu HY, et al.: The zinc-finger protein Zelda is a key activator of the early zygotic genome in Drosophila. Nature. 2008; 456(7220): 400–403. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWimmer EA, Carleton A, Harjes P, et al.: Bicoid-independent formation of thoracic segments in Drosophila. Science. 2000; 287(5462): 2476–2479. PubMed Abstract | Publisher Full Text\n\nJaeger J: The gap gene network. Cell Mol Life Sci. 2011; 68(2): 243–274. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrohnhöfer HG, Nüsslein-Volhard C: Organization of anterior pattern in the Drosophila embryo by the maternal gene bicoid. Nature. 1986; 324(6092): 120–125. 
Publisher Full Text\n\nStaller MV, Fowlkes CC, Bragdon MD, et al.: A gene expression atlas of a bicoid-depleted Drosophila embryo reveals early canalization of cell fate. Development. 2015; 142(3): 587–596. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmall S, Blair A, Levine M: Regulation of two pair-rule stripes by a single enhancer in the Drosophila embryo. Dev Biol. 1996; 175(2): 314–324. PubMed Abstract | Publisher Full Text\n\nTadros W, Lipshitz HD: The maternal-to-zygotic transition: a play in two acts. Development. 2009; 136(18): 3033–3042. PubMed Abstract | Publisher Full Text\n\nLott SE, Villalta JE, Schroth GP, et al.: Noncanonical compensation of zygotic X transcription in early Drosophila melanogaster development revealed through single-embryo RNA-seq. PLoS Biol. 2011; 9(2): e1000590. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRubner Y, Tomasi C, Guibas LJ: A metric for distributions with applications to image databases. In: Sixth International Conference on Computer Vision, 1998; 59–66. Publisher Full Text\n\nChen H, Xu Z, Mei C, et al.: A system of repressor gradients spatially organizes the boundaries of Bicoid-dependent target genes. Cell. 2012; 149(3): 618–629. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOchoa-Espinosa A, Yucel G, Kaplan L, et al.: The role of binding site cluster strength in Bicoid-dependent patterning in Drosophila. Proc Natl Acad Sci U S A. 2005; 102(14): 4960–4965. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchroeder MD, Pearce M, Fak J, et al.: Transcriptional control in the segmentation gene network of Drosophila. PLoS Biol. 2004; 2(9): E271. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiemar F, Zinzen R, Ronshaugen M, et al.: Spatial regulation of microRNA gene expression in the Drosophila embryo. Proc Natl Acad Sci U S A. 2005; 102(44): 15907–15911. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHartmann B, Reichert H, Walldorf U: Interaction of gap genes in the Drosophila head: tailless regulates expression of empty spiracles in early embryonic patterning and brain development. Mech Dev. 2001; 109(2): 161–172. PubMed Abstract | Publisher Full Text\n\nKantorovitz MR, Kazemian M, Kinston S, et al.: Motif-blind, genome-wide discovery of cis-regulatory modules in Drosophila and mouse. Dev Cell. 2009; 17(4): 568–579. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRiddihough G, Ish-Horowicz D: Individual stripe regulatory elements in the Drosophila hairy promoter respond to maternal, gap, and pair-rule genes. Genes Dev. 1991; 5(5): 840–854. PubMed Abstract | Publisher Full Text\n\nMacArthur S, Li XY, Li J, et al.: Developmental roles of 21 Drosophila transcription factors are determined by quantitative differences in binding to an overlapping set of thousands of genomic regions. Genome Biol. 2009; 10(7): R80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGregor T, Tank DW, Wieschaus EF, et al.: Probing the limits to positional information. Cell. 2007; 130(1): 153–164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLa Rosée A, Häder T, Taubert H, et al.: Mechanism and Bicoid-dependent control of hairy stripe 7 expression in the posterior region of the Drosophila embryo. EMBO J. 1997; 16(14): 4403–4411. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDriever W, Nüsslein-Volhard C: The bicoid protein determines position in the Drosophila embryo in a concentration-dependent manner. Cell. 1988; 54(1): 95–104. PubMed Abstract | Publisher Full Text\n\nGraur D, Zheng Y, Price N, et al.: On the immortality of television sets: \"function\" in the human genome according to the evolution-free gospel of ENCODE. Genome Biol Evol. 2013; 5(3): 578–590. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeytelman L, Thurtle DM, Rine J, et al.: Highly expressed loci are vulnerable to misleading ChIP localization of multiple unrelated proteins. Proc Natl Acad Sci U S A. 2013; 110(46): 18602–18607. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrasch M, Levine M: Complementary patterns of even-skipped and fushi tarazu expression involve their differential regulation by a common set of segmentation genes in Drosophila. Genes Dev. 1987; 1(9): 981–995. PubMed Abstract | Publisher Full Text\n\nFrasch M, Warrior R, Tugwood J, et al.: Molecular analysis of even-skipped mutants in Drosophila development. Genes Dev. 1988; 2(12B): 1824–1838. PubMed Abstract | Publisher Full Text\n\nAndrioli LP, Vasisht V, Theodosopoulou E, et al.: Anterior repression of a Drosophila stripe enhancer requires three position-specific mechanisms. Development. 2002; 129(21): 4931–4940. PubMed Abstract\n\nJunker JP, Noël ES, Guryev V, et al.: Genome-wide RNA Tomography in the zebrafish embryo. Cell. 2014; 159(3): 662–675. PubMed Abstract | Publisher Full Text\n\nSatija R, Farrell JA, Gennert D, et al.: Spatial reconstruction of single-cell gene expression data. Nat Biotechnol. 2015; 33(5): 495–502. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAchim K, Pettit JB, Saraiva LR, et al.: High-throughput spatial mapping of single-cell RNA-seq data to tissue of origin. Nat Biotechnol. 2015; 33(5): 503–9. PubMed Abstract | Publisher Full Text\n\nLee JH, Daugharthy ER, Scheiman J, et al.: Highly multiplexed subcellular RNA sequencing in situ. Science. 2014; 343(6177): 1360–1363. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen KH, Boettiger AN, Moffitt JR, et al.: RNA imaging. Spatially resolved, highly multiplexed RNA profiling in single cells. Science. 2015; 348(6233): aaa6090. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBuenrostro JD, Giresi PG, Zaba LC, et al.: Transposition of native chromatin for fast and sensitive epigenomic profiling of open chromatin, DNA-binding proteins and nucleosome position. Nat Methods. 2013; 10(12): 1213–1218. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStaller MV, Yan D, Randklev S, et al.: Depleting gene activities in early Drosophila embryos with the \"maternal-Gal4-shRNA\" system. Genetics. 2013; 193(1): 51–61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCombs PA, Eisen MB: Sequencing mRNA from cryo-sliced Drosophila embryos to determine genome-wide spatial patterns of gene expression. PLoS One. 2013; 8(8): e71820. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPicelli S, Faridani OR, Björklund AK, et al.: Full-length RNA-seq from single cells using Smart-seq2. Nat Protoc. 2014; 9(1): 171–181. PubMed Abstract | Publisher Full Text\n\nCombs PA, Eisen MB: Low-cost, low-input RNA-seq protocols perform nearly as well as high-input protocols. PeerJ. 2015; 3: e869. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDobin A, Davis CA, Schlesinger F, et al.: STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013; 29(1): 15–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrapnell C, Roberts A, Goff L, et al.: Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nat Protoc. 2012; 7(3): 562–578. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJones E, Oliphant T, Peterson P, et al.: SciPy: Open source scientific tools for Python. 2001. Reference Source\n\nHunter JD: Matplotlib: A 2D graphics environment. In: Computing In Science & Engineering. 2007; 9(3): 90–95. Publisher Full Text\n\nMcKinney W: Data Structures for Statistical Computing in Python. In: Proceedings of the 9th Python in Science Conference. Ed. by Stéfan van der Walt and Jarrod Millman. 2010; 51–56. 
Reference Source\n\nLiu F, Morrison AH, Gregor T: Dynamic interpretation of maternal inputs by the Drosophila segmentation gene network. Proc Natl Acad Sci U S A. 2013; 110(17): 6724–6729. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNien CY, Liang HL, Butcher S, et al.: Temporal coordination of gene networks by Zelda in the early Drosophila embryo. PLoS Genet. 2011; 7(10): e1002339. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "19327", "date": "20 Feb 2017", "name": "Thomas Gregor", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper by Combs and Eisen extends the previously published dataset of RNA-seq measurements in cryo-sliced Drosophila embryos to mutants of patterning genes. As the authors state, most common methods for gene expression measurements are either limited in throughput or average over spatial coordinates. The development and application of an experimental approach addressing this gap is therefore of high interest. Datasets produced with this approach are an extremely valuable resource to diverse studies in the community.\n\nIn the presented application the authors go beyond the characterization of wild type embryos to the examination of Zelda, Hunchback and Bicoid mutants. This study thus offers the opportunity to examine the contribution of these key factors to the formation of spatial patterns of gene expression in the early embryo. That said, the description of the analyses carried out and the presentation of the results could be improved to further facilitate the usefulness of this dataset and of the approach.\n\nWhile the analyses focus on comparisons between samples (and here a single slice from a single embryo at a single time point can be considered a sample), the authors provide little information on how these comparisons are done. 
It would be beneficial to specifically note what type of assumptions are made in the analysis, how samples are normalized with respect to one another, and how these normalizations influence the quantitative nature of the statements that can be made. For example, does the analysis performed implicitly assume that the total number of RNA produced (and captured) in each slice is the same? Can the authors address more extensively how they deal with differences in the exact number and size of slices? As patterns are then normalized to their maximal level prior to comparison between conditions, it seems genes are not assumed to maintain the same maximal activity, yet claims about absolute changes in level cannot be made; can the authors clarify if this is indeed the case? Given these issues, it would be useful to include a more explicit description of how a “uniform” vs a “patterned” gene is defined. Maybe the authors could show this graphically using an average plot of all of the heat map data for the genes classified this way. A clarification of the quantitative meaning of phrases such as a “greatly perturbed pattern”, “Higher expression everywhere” or “same response in absolute level” would help elucidate the insights obtained from the data.\n\nIt would also be useful if the authors provided a more extensive explanation of their usage of EMD, a method that was not employed in their 2013 paper. This can be included in the methods/supplemental information. How would the similarity in the changes between genes/mutants ascertained with this measure compare to other measures (e.g. correlating patterns or changes in patterns)? In Figure 10, could the summation of the EMD scores mask some of the signal? (Is the result the same for each EMD score?) What is the initial diversity in patterns among genes that belong to the same set in this analysis (i.e. sharing a similar ChIP binding pattern)? 
The authors present figure 10 as an investigation of how genes with binding sites for the same TFs respond to perturbation (assuming “patterned responses” means responses to mutation), but only discuss their wild-type patterns. This is a good example of the heatmaps having too much information to be illustrative. It is difficult to relate the EMD to physical changes in patterns of target genes. It's not obvious, based on the barplot and the heatmaps in the figure, what it is about these expression patterns that resulted in these two groups of genes being significantly different from each other. Maybe a better explanation of the EMD would clarify this. It is not directly clear mathematically how EMD measures how quantitatively different two groups are. For example, only two cohorts of TFs showed a significant difference in the EMD of the genes in their group; the histograms show the distributions of EMDs in genes with each cohort of TF binding sites vs. the EMDs in the total dataset; it looks like the two significantly different groups are both skewed toward higher EMDs than the total dataset, but it is unclear what makes them different from each other, based on the figure and the explanations provided.\n\nAdditionally, despite showing the replicates in most figures, I still find it hard to assess to what extent patterns are variable between replicates (due to both biological and technical reasons) and what the magnitude of these differences is compared to those observed in the mutants. For instance, in Figure 4A, do the zld replicates show bigger differences than most of the other presented comparisons? How should the significance of the differences between the presented distributions be assessed?\n\nWhile the large heatmaps are useful for initially appreciating the scope of the method and for showing overall patterns, it is difficult to assess the change in a pattern for a specific gene (or subset of genes). 
Possibly presenting the data in each mutant genotype as the change from wild-type would help in that regard?\n\nThe figure legends could be more elaborate and clearer. For example, I am not sure I understand the analyses presented in Figure 6. Is the pattern or the change in pattern averaged and presented? The replicates in this figure seem quite different from one another, and it is hard to see how the “changes recapitulate known TF localization and function”, as stated in the title. Isn’t averaging over the 100 genes with the largest changes masking the patterning changes (unless they are highly similar among all of these genes)? Only some of the colors presented are mentioned in the legend. There is a reference to panels A-D but these do not appear in the figure. Some figures include “G20”, yet only in a supplementary figure legend is it specified that it indicates the bicoid overexpression line.\n\nFinally, as the method is still rather new, and it is very hard to assess experimental error, can the authors pick one of the more surprising observations, like a gene with increased expression in the posterior in the bcd mutant, and provide a control experiment with an alternative method (e.g. in situ) demonstrating at least a qualitative agreement?", "responses": [] }, { "id": "19328", "date": "22 Feb 2017", "name": "Angela H. 
DePace", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper by Combs and Eisen examines spatially patterned gene expression in Drosophila embryos in response to mutation of key developmental transcription factors. The unique contribution of this work is the scale of their measurement — mRNA expression assessed genome-wide in slices along the anterior/posterior axis of individual embryos. Typically, genome-wide measurements are not spatially resolved; conversely, spatially resolved measurements are typically only made for a handful of genes at once. This technique was described and validated in a previous paper. Here they apply it to characterize the response to mutations of bicoid, zelda and hunchback.\nThe dataset is valuable and the underlying data collection is well described. Their results also point to a number of interesting conclusions. In particular, it’s notable that despite being bound widely across the genome, knocking down these TFs only modestly affects patterning on average. It is also notable that Bicoid knockdown can affect patterning in the posterior, where it is not expressed. My suggestions for improving the paper are below.\nThe introduction does not introduce a biological question that they will be able to address with their method, but instead focuses on the novelty of the measurement. 
They state that “Given the crucial roles of each of these factors in spatial patterning, we expected that perturbing their levels would lead to widespread direct and indirect effects on patterned genes.”  It would be useful to expand on this hypothesis in terms specific to the data they will collect and how they will analyze it. Which features of the data would confirm or refute this hypothesis? What will they find interesting or surprising? For example, they do some of this in the discussion. “Our initial expectation was that there would be many more ways to fail to properly pattern expression, and that different mutations would have different average effects from each other.” Can their expectation be made more specific given the diversity of patterns they can detect in WT embryos? Why did they have this expectation about the average effects when so many genes are unpatterned? Bringing some of the more interesting points in the discussion to the introduction to frame the paper would motivate the reader to understand the various analyses.\n\nThe quality, reproducibility and structure of the data should be more explicitly discussed, and used to explain their choice of analytical strategy. For example, they justify their use of the Earth Mover’s Distance to examine expression patterns: “Because there are a different number of slices both between the wild type and mutant flies, and between replicates of the mutant flies…”  Does this sentence mean that there are more slices per embryo for a WT versus a mutant embryo, or that there are in aggregate more mutant slices than WT slices (because they measured more mutants)? How does this impact their choice of analyses? Given the structure of the data and its variability, what are the limits of analysing effects on single genes (which would be more familiar to developmental biologists)? 
Simple language could help more readers appreciate the utility and limits of their data for future analyses.\nIn addition, a narrative description of the EMD would be helpful. The authors state: “...we decided to use Earth Mover’s Distance (EMD) to compare patterns[20]. This metric captures intuitive notions about what kinds of patterns are dissimilar, yielding higher distances for dissimilar distributions of RNA, and zero for identical distributions.” This states that the range will go from 0 (identical) to “higher” (dissimilar) but does not justify their use of this metric. It would be useful to explain the value of this choice of metric over available alternatives.\n\nThe figure legends and titles could be more informative. In general, the legends state only what is depicted and do not point the reader to the most relevant comparisons. While this isn’t strictly necessary, it can help readers who are unfamiliar with the particular type of plot used. Figure 6 refers to panels A - D, which are not labeled in the figure. The legend of Figure 7 is a single sentence stating “Each embryo is normalized separately.” In some cases, the grammar of the titles is difficult to parse. For example, the title of Figure 4 is: “Distributions of patterning differences show that mutants have wide-spread subtle patterning effects and more genes with large patterning differences than replicates.” The final clause of this sentence is unclear. The title of Figure 5 is similarly difficult to parse.", "responses": [] } ]
1
https://f1000research.com/articles/6-41
https://f1000research.com/articles/5-2520/v1
14 Oct 16
{ "type": "Research Article", "title": "Human brain harbors single nucleotide somatic variations in functionally relevant genes possibly mediated by oxidative stress", "authors": [ "Anchal Sharma", "Asgar Hussain Ansari", "Renu Kumari", "Rajesh Pandey", "Rakhshinda Rehman", "Bharati Mehani", "Binuja Varma", "Bapu K. Desiraju", "Ulaganathan Mabalirajan", "Anurag Agrawal", "Arijit Mukhopadhyay", "Anchal Sharma", "Asgar Hussain Ansari", "Renu Kumari", "Rajesh Pandey", "Rakhshinda Rehman", "Bharati Mehani", "Binuja Varma", "Bapu K. Desiraju", "Ulaganathan Mabalirajan", "Anurag Agrawal" ], "abstract": "Somatic variation in DNA can cause cells to deviate from the preordained genomic path in both disease and healthy conditions. Here, using exome sequencing of paired tissue samples, we show that the normal human brain harbors somatic single base variations measuring up to 0.48% of the total variations. Interestingly, about 64% of these somatic variations in the brain are expected to lead to non-synonymous changes, and as much as 87% of these represent G:C>T:A transversion events. Further, the transversion events in the brain were mostly found in the frontal cortex, whereas the corpus callosum from the same individuals harbors the reference genotype. We found a significantly higher amount of 8-OHdG (oxidative stress marker) in the frontal cortex compared to the corpus callosum of the same subjects (p<0.01), correlating with the higher G:C>T:A transversions in the cortex. We found significant enrichment for axon guidance and related pathways for genes harbouring somatic variations. This could represent either a directed selection of genetic variations in these pathways or increased susceptibility of some loci towards oxidative stress. 
This study highlights that oxidative stress possibly influences single nucleotide somatic variations in the normal human brain.", "keywords": [ "somatic variations", "exome sequencing", "oxidative stress", "8-OHdG", "brain", "SNVs", "axon guidance", "G:C>T:A transversions." ], "content": "Introduction\n\nSomatic variations are an inevitable consequence of continuous cell divisions in multicellular complex organisms and can lead to genomic heterogeneity. Somatic variations can arise due to replication error, with or without external environmental factors such as mutagens or exposure to UV rays, and accumulate over time as the organism ages. Depending on how early a somatic variation occurs in a particular cell lineage and the rate of division of that cell, somatic variations may clonally expand and cross the threshold of detection by genome sequencing technology. As reviewed by De, somatic variations can range from single nucleotides to whole chromosomes and can be found in both ‘healthy’ and ‘diseased’ tissues – cancer being a unanimously accepted example1. The contribution of somatic variations is widely reported in cases where the DNA from affected tissue was found to harbor causal mutations that were absent in the DNA from peripheral blood2–4. Mutation reversal due to somatic variation has also been reported in Mendelian diseases, indicating the stochastic nature of these variations5. The rates of somatic variations have been a matter of debate, ranging between 10^-4 and 10^-8 per base pair per generation, depending on whether the estimates were genome-wide (lower estimates) or locus-specific (higher estimates)6,7. In addition, it has been speculated that rates of somatic variations might differ across tissue types and developmental times, but this remains to be clarified1.\n\nSomatic variations acquired and accumulated during the course of development have been modelled, predicting a higher risk of cancer and neurodegenerative diseases. 
The multitude of possible outcomes would increase with increased complexity of the tissue type8–10. Thus, somatic variations could be of great importance for an organ like the mammalian brain, which has complex structural and functional organization, high plasticity, and limited regenerative capabilities. The extreme interconnectivity of cortical neurons permits a disproportionately large impact of small changes while retaining robust adaptive mechanisms. Recently, there have been attempts to explore the extent of somatic variations in diverse healthy human tissues at the level of microarray- or deep sequencing-based copy number changes11,12. Interestingly, the normal human brain, especially in neuron-rich regions, has been shown to harbor a wide variety of somatic variations – ranging from whole chromosomes13,14 and large-scale retrotranspositions15,16 to copy number variations at the single-neuron level17. Recent reports have also started revealing the importance of such variations in neurological disorders10. But there have been very few systematic studies so far exploring the nature, extent and impact of somatic variations at single-nucleotide resolution in neuron-rich parts of the healthy human brain18,19.\n\nIn this study, we have analyzed single-nucleotide somatic variations between the frontal cortex (rich in neurons) and corpus callosum (lacking neurons) of healthy individuals and correlated the findings with markers of oxidative stress.\n\n\nMethods\n\nPaired tissues were taken from a total of nine individuals, with ages ranging from 23 to 45 years (Supplementary table S1). For four individuals (post-mortem, road accident victims), tissue sections from two different parts of the brain, viz. frontal cortex (FC) and corpus callosum (CC), were procured from NIMHANS Brain Bank, Bangalore, India. 
For the other five individuals, peripheral blood and saliva were taken from healthy volunteers, representing circulatory cell types with high turnover and therefore a high likelihood of spontaneous somatic variations. The project was approved by the institutional human ethics committee of the CSIR-Institute of Genomics and Integrative Biology and adhered to international ethical guidelines (Declaration of Helsinki). For brain tissues, DNA was isolated using the Omniprep Genomic DNA isolation kit (G-Biosciences, USA); DNA from the two other cell types, viz. leukocytes (from blood) and epithelial cells (saliva), was isolated using the Qiagen DNA kit (Qiagen, USA) and Oragene Saliva kit (DNA Genotek, Canada), respectively, following the manufacturers’ recommended protocols.\n\nExome capture of the isolated DNA was done using the Illumina TruSeq Exome capture kit (62 Mb) and the exome libraries were sequenced (100 bp paired-end) on an Illumina HiSeq 2000 (Illumina, USA), following the manufacturer’s recommended protocols. A total of ~1.5 billion reads were generated for all 18 samples. Supplementary figure S1 represents the overall analysis pipeline; the following text explains each step in detail. Raw data were checked for per-base quality score: reads having 80% of bases with a phred quality score of 30 (Q30) or greater were carried forward for downstream analysis and the rest were discarded. The last few low-quality bases were also trimmed from all reads (4–10 bases, depending on sample quality). This was done using Fastx (version 0.0.13) and FastQC (version 0.10.0). About 9–14% of the data was removed from each sample. After checking the quality of the data, reads (Read 1 and Read 2 for each sample) were aligned to the reference genome (hg19) using BWA (version 0.6.1)20, allowing a maximum of 2 mismatches. More than 98% of the data was aligned to the reference for each sample. 
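The read-quality rule described above (keep a read only if at least 80% of its bases reach Q30) can be sketched in Python. This is an illustrative reimplementation under the standard Phred+33 FASTQ encoding, not the authors' actual Fastx/FastQC workflow; the function names are hypothetical.

```python
# Sketch of the read-filtering rule: retain a read only if >= 80% of its
# bases have a Phred quality score of 30 or higher. Illustrative only.

def phred_scores(quality_string, offset=33):
    """Convert a FASTQ quality string (Phred+33 encoding) to numeric scores."""
    return [ord(c) - offset for c in quality_string]

def passes_q30_filter(quality_string, min_quality=30, min_fraction=0.8):
    """Return True if >= min_fraction of bases meet the Phred threshold."""
    scores = phred_scores(quality_string)
    if not scores:
        return False
    good = sum(1 for q in scores if q >= min_quality)
    return good / len(scores) >= min_fraction

# 'I' encodes Q40 and '#' encodes Q2 in Phred+33.
print(passes_q30_filter("IIIIIIIIII"))  # all bases Q40 -> True
print(passes_q30_filter("III#######"))  # only 30% of bases >= Q30 -> False
```

In practice this filtering was done with Fastx/FastQC as stated above; the sketch only makes the threshold logic explicit.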
Data were also checked for PCR duplicates, which were removed. Only reads with mapping quality (MQ) above 40 were taken forward for further analysis. The sequence depth ranged between 91×–120× (average 100×) for the FC-CC samples and 25×–86× (average 51×) for the blood-saliva samples (Supplementary figure S2). Data have been deposited in the NCBI Sequence Read Archive under accession number SRP045655.\n\nThe Varscan221 (version 2.3.5) somatic module, along with Samtools (version 0.1.18)22, was used to call somatic variations from all paired samples (CC vs. FC and blood vs. saliva). First, .bam files were processed with Samtools to make .mpileup files, which were then processed through Varscan2 to call somatic variations using the somatic SNP module. The following parameters were used when calling variations:\n\nMinimum coverage to call a somatic variation at a locus was kept at 8 reads\n\nMinimum variant frequency to call a heterozygote was kept at 0.1 of total reads for that position.\n\nMinimum variant frequency to call a homozygote was kept at 0.90 of total reads for that position.\n\nVariants with more than 90% strand bias were removed\n\nMinimum base quality score was kept at 20\n\nMinimum mapping quality of a read was kept at 40\n\nAfter calling variations, the data were checked for read bias: using in-house Perl scripts, we excluded variations for which all reads carrying the variant allele came from only one direction (either F1R2 or F2R1). Variations with more than 90% of reads from one strand were removed. Annotation of all called variations was done using Annovar (version 2012-03-08)23 and VcfCodingSnps (v1.5). Sites falling in regions with 87% or greater identity to another genomic region were also removed from the data. Sites were termed somatic if they had different genotypes in the two tissues and the somatic p-value was less than 0.05. 
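The post-calling filters listed above can be summarized as a single per-site predicate. The sketch below is a minimal illustration of that logic applied to a hypothetical per-site summary record; the field names are invented for illustration, and the authors' actual filtering used VarScan2 options plus in-house Perl scripts.

```python
# Minimal sketch of the per-site filters described above (coverage, base and
# mapping quality, strand bias of variant-supporting reads, somatic p-value).
# Field names are hypothetical, not from the authors' pipeline.

def passes_site_filters(site):
    """site: dict with coverage, forward_var_reads, reverse_var_reads,
    base_qual, map_qual, somatic_p."""
    if site["coverage"] < 8:                       # minimum coverage: 8 reads
        return False
    if site["base_qual"] < 20 or site["map_qual"] < 40:
        return False
    var_reads = site["forward_var_reads"] + site["reverse_var_reads"]
    if var_reads > 0:
        # >90% of variant-supporting reads on one strand -> strand bias
        strand_frac = max(site["forward_var_reads"],
                          site["reverse_var_reads"]) / var_reads
        if strand_frac > 0.9:
            return False
    return site["somatic_p"] < 0.05                # somatic p-value cutoff

site = {"coverage": 60, "forward_var_reads": 5, "reverse_var_reads": 4,
        "base_qual": 30, "map_qual": 55, "somatic_p": 0.01}
print(passes_site_filters(site))  # True
```

A site with all ten variant reads on one strand, or with fewer than eight total reads, would be rejected by the same predicate.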
Overall, out of a total of 371 somatic sites (with p<0.05) in brain, 93.8% had at least 4 reads for the variant allele. All sites confidently called as the same genotype in both tissue types (p<0.05) were termed germline variations. Further, somatic variations with reads supporting the variant allele at >10% of total reads were considered heterozygotes, and those with <5% of reads supporting the variant allele were called homozygotes. All somatic sites with their details are provided as CSV files under ‘Data availability’. The percentage of somatic variations across sample-pairs was calculated as: (number of somatic variations/number of total variations) × 100, where the number of somatic sites was defined as sites with different genotypes between the two tissues of the same individual, and the number of total variations refers to all sites whose genotype differed from the reference genome (hg19) in the two tissues of the same individual. We also used the MuTect24 (version 1.4.4) variant caller to call somatic variations from the exome sequencing data for the brain samples. Details of the concordance between the two callers are described in Supplementary table S2. Up to 78.8% of the somatic sites called by Varscan2 were also called as somatic by MuTect. Concordant sites between the two callers also showed an enrichment of G:C>T:A transversions, with FC harboring the heterozygotes. Pathway analysis of somatic variations was done using Gene Set Enrichment Analysis25 (release 2.2.0). The total of 359 genes harbouring somatic variations across all brain samples (from the Varscan2 dataset) was included for pathway analysis. Pathway analysis of the concordant sites showed similar results.\n\nWestern blotting for NeuN: Total tissue lysates from FC and CC were made using radio-immuno precipitation assay (RIPA) buffer from 5 samples (3 samples viz. Brain_152, Brain_156 and Brain_202 that were sequenced, and an extra 2 samples viz. 
Brain_174 and Brain_119 that were not sequenced). The two extra samples were included to show that the contrast in NeuN between FC and CC tissue holds across samples and is not specific to the ones that were sequenced. Protein estimation was done using the bicinchoninic acid (BCA) assay method, and 30 µg of protein was used for western blotting for NeuN, which was selected as a marker for detecting neuronal cell bodies in FC and CC. An anti-NeuN monoclonal antibody (ab104224, Abcam), raised in mouse, was used at a dilution of 1:1000. This antibody gives 2–3 bands between 46–48 kDa per the manufacturer’s datasheet, which we also observed. GAPDH (2118L, Cell Signaling, 1:2000) was used as a loading control. A 5% stacking gel and a 12% resolving gel were used (both with 30% acrylamide; all reagents were from Sigma-Aldrich, USA). Images were analyzed using ImageJ version 1.48.\n\nImmuno-fluorescence for NeuN: The same antibody was used for immuno-fluorescence in both tissues, performed on 8 µm sections of FC or CC. Briefly, tissue sections were fixed in chilled acetone, permeabilized with 0.1% Triton X-100, blocked in 1.5% FBS, incubated with the anti-NeuN monoclonal antibody (1:500, 16–18 hrs at 4°C) and an anti-mouse Alexa 488-conjugated secondary antibody (1:200, 1 hr at RT), and mounted with a DAPI-containing solution before taking images using a confocal microscope (at 63×) [Zeiss-510 Meta, Carl Zeiss, Zen-2009 software (December 2010 release)].\n\n8-OHdG, an oxidative DNA damage marker, was measured in lysates of FC and CC by competitive ELISA (Cayman, USA). 
Briefly, 3 µg lysates were incubated with conjugate and antibody for 18 hrs, developed using Ellman’s reagent, and absorbance readings were taken at 405 nm; results are expressed in pg/3 µg for each lysate.\n\nFor validation using the Illumina MiSeq Low Sample (LS) protocol, we randomly selected 20 sites from Brain_156 (15% of its total sites), chosen because sufficient DNA was available and this pair had the highest number of somatic sites. Amplicons ranging from 200 bp to 350 bp for 14 sites were generated using PCR. The PCRs were run for 35 cycles with annealing temperatures of either 62°C or 58°C (primer details are provided under ‘Data availability’). All amplicons from each tissue type were then pooled together in equimolar ratios, resulting in 2 freshly prepared libraries. These pooled amplicons were processed using the LS protocol of the MiSeq sample preparation kit. About 12 million total reads were obtained from the two libraries. For quality control, the 3’ ends of reads were trimmed and reads with fewer than 80% of bases at Q30 phred quality were filtered out. The frequency of the variant allele (as observed in the HiSeq data) was then checked in both tissues (FC and CC) in the MiSeq data. A site was considered validated if the variant allele was supported by a minimum of 100 reads, along with at least a two-fold difference in variant allele frequency between the two tissues. On average, a read depth of 5000× was obtained across all captured sites.\n\n\nResults\n\nWe analyzed about 1.5 billion reads from whole exome sequencing of 18 samples, each having ~60 Mb coverage, and identified 249,607 single nucleotide variations (SNVs), with an average of ~27,700 sites per sample (Supplementary figure S3). 
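The MiSeq validation rule described in the Methods above (at least 100 variant-supporting reads and at least a two-fold difference in variant allele frequency between FC and CC) can be sketched as a small predicate. This is a hypothetical reading of the rule, here interpreting the 100-read minimum as the combined variant read count across the two tissues; the variable names are illustrative, not from the authors' pipeline.

```python
# Sketch of the validation criterion: >= 100 reads supporting the variant
# allele, plus a >= 2-fold difference in variant allele frequency (VAF)
# between frontal cortex (FC) and corpus callosum (CC). Illustrative only.

def is_validated(var_reads_fc, total_fc, var_reads_cc, total_cc,
                 min_reads=100, min_fold=2.0):
    if var_reads_fc + var_reads_cc < min_reads:
        return False
    freq_fc = var_reads_fc / total_fc
    freq_cc = var_reads_cc / total_cc
    lo, hi = sorted([freq_fc, freq_cc])
    if lo == 0:
        return hi > 0  # variant reads in one tissue but none in the other
    return hi / lo >= min_fold

print(is_validated(400, 5000, 100, 5000))  # VAF 8% vs 2% -> True
print(is_validated(150, 5000, 140, 5000))  # VAF 3.0% vs 2.8% -> False
```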
To assess the technical confidence of the genotype calls, all samples were genotyped on Illumina Infinium 660W-Quad microarrays, and we observed 99.8%–99.9% genotype concordance between the NGS and the microarray data (Supplementary table S3).\n\nThe number of high-confidence somatic variation calls for each pair ranged between 32–132 for the FC-CC pairs and between 13–35 for the blood-saliva pairs (Figure 1). The percentage of somatic variations ranged between 0.1%–0.48% for the brain sample-pairs (FC-CC) and between 0.03%–0.17% for the blood-saliva pairs. The number of germline variations across all samples was comparable (Supplementary figure S3). The observed somatic variations do not show any bias for the position of the variant loci: the distribution was 37–65% in CDS, up to 57% in 3’ and 5’ UTRs, and a minor proportion in other regions such as non-coding RNAs, splice sites etc. A comparison of these distributions between germline and somatic variations is represented in Supplementary figure S4.\n\nBlue bars represent brain samples and red bars represent blood-saliva samples.\n\nAs much as 64% of the somatic variations in the brain samples would lead to non-synonymous changes at the protein level (Figure 2A), whereas for the germline variations the trend was towards more synonymous variations, as expected (Figure 2B). For the blood-saliva pairs, such a consistent trend towards non-synonymous variations among the somatic sites was not observed (Supplementary figure S5). 
We analyzed the possible effect of the somatic sites at the amino-acid level for all samples and did not find a consistent bias for any particular amino acid change (Supplementary figure S6).\n\nDistribution of synonymous and non-synonymous variations in brain samples A) for somatic variations and B) for germline variations.\n\nHaving observed somatic variation in the brain with a trend towards more non-synonymous changes, we analyzed the level of transversions over transitions amongst the somatic sites. Up to 87% of the total somatic variations found in FC-CC pairs were G:C>T:A transversions (Figure 3A). The germline variations (for all samples) matched the expected and reported distribution, where A:T>G:C (36%) and G:C>A:T (38%) transitions were the most common types of changes (Figure 3B). In Figure 3C we represent the enrichment of G:C>T:A transversion events as a proportion of somatic variations, relative to the germline variations. The blood-saliva pairs did not show such consistent positive enrichment for the same class of transversion events (Supplementary figure S7).\n\nFor each pair of FC-CC samples the absolute numbers of different types of changes are plotted for somatic (A) and germline variations (B). (C) The difference in proportion between somatic SNVs and germline SNVs is plotted for the brain samples. Positive values on the vertical axis denote enrichment of that type of variation in the somatic subset, while a negative value indicates enrichment in the germline subset. As evident in the figure, only the G:C>T:A transversions show enrichment among the somatic variations in brain.\n\nSurprisingly, for the G:C>T:A transversion sites, 70–100% of GT and CA heterozygotes were present in the frontal cortex while the corresponding homozygotes were observed in the corpus callosum (Figure 4). 
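The strand-symmetric substitution classes used in Figure 3 (e.g. both G>T and C>A calls counted as G:C>T:A) can be made concrete with a short sketch. This is an illustrative reimplementation of the standard collapsing convention, not the authors' analysis code.

```python
# Collapse single-base substitutions into the six strand-symmetric classes
# used in Figure 3: a change on either strand is reported as the base-pair
# change with a purine (A or G) reference. Illustrative only.
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def substitution_class(ref, alt):
    if ref not in COMPLEMENT or alt not in COMPLEMENT or ref == alt:
        raise ValueError("not a single-base substitution")
    if ref in ("C", "T"):  # fold onto the complementary strand
        ref, alt = COMPLEMENT[ref], COMPLEMENT[alt]
    return f"{ref}:{COMPLEMENT[ref]}>{alt}:{COMPLEMENT[alt]}"

# Example: G>T and C>A calls both fall into the G:C>T:A class.
calls = [("G", "T"), ("C", "A"), ("A", "G"), ("G", "T")]
counts = Counter(substitution_class(r, a) for r, a in calls)
print(counts["G:C>T:A"])  # 3
```

Tabulating somatic and germline calls through the same function, and subtracting the two class proportions, reproduces the kind of enrichment comparison shown in Figure 3C.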
For all other sub-classes of somatic variations in brain and blood-saliva pairs, the distribution of reference homozygotes versus heterozygotes did not show such a consistent and overriding bias (Supplementary figure S8).\n\nThe figure shows that across the four pairs, for the somatic G:C>T:A transversion sites, the majority of the heterozygotes (variant alleles) were present in the frontal cortex while the corresponding corpus callosum DNA had the homozygous (reference) genotype.\n\nThe genes harboring somatic variations in all the brain samples were analyzed for canonical pathways. The genes were enriched for axon guidance and neuronal process-related pathways (Figure 5), with p-values ranging from 0.04 to 4.9×10^-8 (FDR corrected). When analyzed by individual sample, three out of four samples showed similar enrichment; for Brain_152, although the relevant genes were present in the dataset, enrichment could not be established due to the small number of variations. It has been shown that DSBs in neuronal cells tend to occur in long genes involved in neuronal functions26. We find a similar trend in our data: 46% of the pathway genes harboring somatic variations were long (>100kb), compared to 18% of all genes (p<0.002).\n\nEnrichment of genes harboring somatic variations in brain for pathways related to neuronal function. In the figure the horizontal axis shows the negative logarithm of the FDR-corrected p-value. Different biological processes are indicated on the left.\n\nThe most common cause of G:C>T:A transversions is mis-pairing of G to A (instead of G to C) due to modification of deoxyguanosine (dG) to 8-hydroxy-2’-deoxy-guanosine (8-OHdG) mediated by oxidative/metabolic stress27–29. 
We found significantly higher levels of 8-OHdG in the frontal cortex samples compared to the corpus callosum of the same individuals (Figure 6), corresponding with the abundance of neuronal cells in the frontal cortex (Figure 7). Thus an increased accumulation of 8-OHdG in the frontal cortex DNA (compared to the corpus callosum of the same individual) might result in localized DNA variations with a bias towards G:C>T:A transversions. Using immunoblotting and immuno-fluorescence staining against the neuron-specific marker NeuN, we confirmed that the majority of cells in the FC samples were neurons whereas the CC samples were almost devoid of neurons (Figure 7) – implicating a direct correlation between the abundance of neuronal cells and the accumulation of 8-OHdG, leading to G>T somatic transversions in the FC samples. Post-mortem effects could be ruled out as a contributing factor, as the time of tissue collection, storage conditions and DNA isolation protocol were exactly the same for both tissues.\n\nFrontal cortex samples show significantly higher 8-OHdG accumulation. The figure represents the concentration of 8-OHdG in the frontal cortex and corpus callosum samples. For each tissue type, five independent samples were analyzed by ELISA. The error bars represent standard deviation (SD for FC, 57.5; SD for CC, 24.4). The p-value was calculated using a paired t-test.\n\n(A): Western blot for NeuN in five pairs of FC-CC samples (for three of them sequence data is presented) shows a distinct abundance of neurons in the FC in contrast to the CC. GAPDH expression is used as a control. (B): The same antibody is used in immuno-fluorescence to show the neuronal abundance in FC. Note the near absence of signal for NeuN in the corpus callosum panel.\n\nAmplicon sequencing using MiSeq (Illumina) was performed for validation of the somatic variations. 
Out of the total somatic sites across all brain samples, 20 sites from Brain_156 (15% of its total sites) were chosen for validation, because sufficient DNA was available and this pair had the highest number of somatic sites. Data for 14 sites were obtained, and 10 of these 14 sites showed differences in variant allele frequencies between the two tissues (CC vs. FC) in accordance with the HiSeq data (71% validation rate) (Supplementary Table S4). Rows shaded red are the ones validated on both platforms. It should be noted that although the allele frequencies of the variations in the validation set were not exactly the same as in the HiSeq data, the trend (the same tissue harboring the higher amount of the variant allele) was the same. In general, lower variant allele frequencies were obtained in the MiSeq data compared to the HiSeq data, which could be attributed to the very high read depth per site in the MiSeq data. A trend of allele frequency varying with read depth was observed: the lower the read depth, the higher the allele frequency (and vice versa), explaining the higher frequencies in the HiSeq data (average read depth 100×) compared to the MiSeq data (average read depth 5000×).\n\n\nDiscussion\n\nHere, we report somatic variations in normal human brain at single-base resolution in whole exomes. Earlier studies have reported somatic genomic rearrangements such as aneuploidy, insertions/deletions, CNVs etc. in neurons as part of normal brain development and neurogenesis9,13–17,30–32. In our data the percentage of somatic variants in brain is 0.1%–0.48%, and these variants were significantly enriched for axon guidance genes (details below). It has been reported that somatic events present in a small fraction of cells can bring about striking phenotypic consequences in the brain10,33. 
We observed an enrichment of somatic non-synonymous variations that was not found in variants common to both tissues (germline variations, Figure 2) – implying functional neutrality or an advantage of such variants. In the absence of additional tissues and parental information for the individuals, we cannot definitively distinguish between inherited and acquired genotypes.\n\nWe estimated error rates to rule out the possibility that most of our variations arise from errors in the sequencing experiments. To this end, we performed genome-wide genotyping of the same samples; genotype discordance (error rates) between the microarray and the sequencing data varied from 0.002 to 0.0008 (Supplementary table S3). Our observed somatic site frequencies were higher than that. In addition, to further strengthen the criteria, we chose to accept only those sites as variants where the variant allele was represented by at least 10% of the total reads for that position. We also performed targeted amplicon sequencing to validate a subset of the somatic sites.\n\nWe observed a higher proportion of somatic variations between the FC-CC samples compared with the proportion found between blood-saliva samples. A recent report studied somatic sites between the brain and blood of the same individual and found more somatic sites in the blood34. This apparent contrast between the two findings is perhaps because our study did not compare blood and brain of the same individuals. 
In addition, a lower proportion of somatic variations between DNA from blood and saliva can be due to inter-mixing of the two cell types and/or the faster regeneration and circulatory nature of these cells, resulting in dilution of clonal populations of cells harboring the somatic sites.\n\nOur data show all possible types of nucleotide changes amongst the somatic SNVs, as would be expected for a random event, but with an unexpected bias (up to 87%) for G:C>T:A transversions (Figure 3A). Moreover, more than 70% of these transversion events were found in the frontal cortex, while DNA from the corpus callosum harbored homozygotes for the reference alleles (Figure 4). Recently, two studies18,19 have looked into somatic SNVs at the single-neuron level. Although neither of these studies reports enrichment of G:C>T:A transversions, these transversions are the second most abundant class of somatic variations reported by Hazen et al.19 (16% of total somatic variations) in their dataset. Moreover, the choice of tissue in these two studies (single neurons) and ours (frontal cortex vs. corpus callosum) is different, which might be the reason for the differences between the classes of somatic variations observed.\n\nIt is well known that G:C>T:A transversions are mediated primarily by oxidative stress, which modifies deoxyguanosine (dG) to 8-hydroxy-2’-deoxy-guanosine (8-OHdG)27. Accordingly, when we probed oxidative stress levels in the two tissues, we found significantly higher levels of 8-OHdG in the frontal cortex compared to the same individual’s corpus callosum, correlating the enriched transversion events with increased oxidative stress (Figure 6). A recent study reported G:C>T:A transversions arising in sequencing data as an artefact of DNA shearing stress35. We have tested for this bias in both the germline and somatic datasets. 
A major fraction of the somatic sites initially called by our pipeline was observed to have this bias and was removed in the modified analysis workflow. However, it is still possible that in the remaining dataset presented here, some of the G:C>T:A transversions are actually artefacts. Interestingly, even in such a scenario, these artefacts are not randomly observed in all tissue types analysed, unlike the earlier report35. Instead, we observed that the GT and CA heterozygotes were almost exclusively in the FC samples. Further, the observation of higher 8-OHdG in FC was independent of shearing stress, as the experiments were performed on lysates isolated from fresh sections of the same tissue samples. Whether the specific cell types in FC make them more prone to either in-vivo (biological) or in-vitro (artefactual) stress-mediated variations needs to be explored further.\n\nIt is known that having an adenine (A) 3’ to the oxidized G significantly reduces the efficiency of the repair process, thereby enhancing the possibility of a G>T transversion36. We also find a bias for 3’ adenine at the somatic G:C>T:A sites in our data (Supplementary figure S9). As reported in the above-mentioned study, the artefactual C>A transversions have an enrichment of the CCG motif, which perhaps makes the base (underlined) more susceptible to oxidation35. We did not observe any enrichment of this motif at the somatic sites found in our study.\n\nOur data indicate that the normal brain accumulates single nucleotide somatic variations with age during the lifetime of an individual. This might happen through various mechanisms, high oxidative stress generated during normal physiological brain activity being one of them. Physiological levels of oxidative free radicals are essential in various key cellular processes such as cellular differentiation, proliferation and survival37, though pathological levels are detrimental to cellular health. 
Oxidative stress is also induced during normal neurogenesis in adults38, and oxidative stress-susceptible genetic alleles in Drosophila are connected to axon guidance39. A recent study showed that physiological levels of H2O2 are essential for neurogenesis: exposure to H2O2-mediated oxidative stress promoted neurogenesis of neural progenitor cells (NPCs) in rats40. In this context, interestingly, the somatic transversion events we found in brain samples were enriched for genes involved in processes like axonal guidance and neurogenesis (Figure 5). This indicates that the accumulation of somatic variations could be a possible molecular explanation for physiological oxidative stress-mediated enhancement of neuronal differentiation from NPCs. Other linked processes, like the interaction between L1 and ankyrins, NCAM signaling, and long-term potentiation, were also found to be significantly enriched. This evidence indicates that the acquired somatic variations might provide the functional diversity required in growing neurons during development as well as during adult neurogenesis.\n\n\nConclusions\n\nOur study shows the presence of somatic SNVs in functionally relevant genes in different parts of the brain, possibly influenced by oxidative stress along with other known contributing factors. Recent reviews suggested that local somatic events could strike a balance between the plasticity and robustness of the genome, indicating a continuum from normal through disease1. A study showed that oxidative stress-mediated double-strand breaks (DSBs) in the DNA of neuronal cells, and their delayed repair, were a feature of the normal mouse brain related to its learning ability41. 
Along similar lines, our study also indicates that the acquired somatic variations might provide the required functional diversity during development as well as during adult neurogenesis.\n\n\nData availability\n\nData have been deposited in the NCBI Sequence Read Archive under accession number SRP045655.\n\nF1000Research: Dataset 1. Raw data for ‘Human brain harbors single nucleotide somatic variations in functionally relevant genes possibly mediated by oxidative stress’, 10.5256/f1000research.9495.d13550742", "appendix": "Author contributions\n\n\n\nAnchal Sharma, Asgar Hussain Ansari, Bharati Mehani and Bapu K. Desiraju performed the entire analysis of the exome data and subsequent analyses. Renu Kumari performed the sample preparation for the NGS experiments. Rajesh Pandey played a crucial role in the MiSeq validation of the somatic sites. Rakhshinda Rehman helped with the immunofluorescence and ELISA experiments under supervision from Ulaganathan Mabalirajan and Anurag Agrawal. Arijit Mukhopadhyay conceptualised the study, acquired the necessary funding and wrote the manuscript along with Anchal Sharma and Asgar Ansari.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work was funded by the Council of Scientific and Industrial Research (CSIR), Government of India (grant number BSC-0123; Arijit Mukhopadhyay is a PI). In addition, BSC-0403 (imaging facility) and BSC-0121 (computing facility) are acknowledged for central facilities. Anchal Sharma acknowledges the Department of Science and Technology (DST), Govt. of India for her INSPIRE fellowship.
Funding for open access charge will be from BSC-0123 funded by the Council of Scientific and Industrial Research (CSIR), Government of India.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nWe acknowledge the Human Brain Tissue Repository at the National Institute of Mental Health and Neuro Sciences (NIMHANS), Bengaluru, India for providing the brain samples. We thank Dr. Mohammed Faruq, Ms. Kiran Narta and Mr. Pradeep Tiwari for their technical help in the next generation sequencing experiments and statistical analyses. Mr. Manish Kumar is acknowledged for technical help in imaging and the high-performance computing facility at CSIR-IGIB is also acknowledged for the computational infrastructure. We thank Dr. Nidhan K. Biswas from the National Institute of Bio-Medical Genomics, West Bengal, India for critical comments on the analysis of the exome data. Dr. Sheetal Gandotra and Ms. Mansi Vishal are acknowledged for their critical comments on the manuscript.\n\n\nSupplementary tables and figures\n\nClick here to access the data.\n\n\nReferences\n\nDe S: Somatic mosaicism in healthy human tissues. Trends Genet. 2011; 27(6): 217–223. PubMed Abstract | Publisher Full Text\n\nEllis NA, Ciocci S, German J: Back mutation can produce phenotype reversion in Bloom syndrome somatic cells. Hum Genet. 2001; 108(2): 167–173. PubMed Abstract | Publisher Full Text\n\nGottlieb B, Chalifour LE, Mitmaker B, et al.: BAK1 gene variation and abdominal aortic aneurysms. Hum Mutat. 2009; 30(7): 1043–1047. PubMed Abstract | Publisher Full Text\n\nLindhurst MJ, Sapp JC, Teer JK, et al.: A mosaic activating mutation in AKT1 associated with the Proteus syndrome. N Engl J Med. 2011; 365(7): 611–619. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGregory JJ Jr, Wagner JE, Verlander PC, et al.: Somatic mosaicism in Fanconi anemia: evidence of genotypic reversion in lymphohematopoietic stem cells. Proc Natl Acad Sci U S A. 2001; 98(5): 2532–2537. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAraten DJ, Golde DW, Zhang RH, et al.: A quantitative measurement of the human somatic mutation rate. Cancer Res. 2005; 65(18): 8111–8117. PubMed Abstract | Publisher Full Text\n\nBaer CF, Miyamoto MM, Denver DR: Mutation rate variation in multicellular eukaryotes: causes and consequences. Nat Rev Genet. 2007; 8(8): 619–631. PubMed Abstract | Publisher Full Text\n\nKennedy SR, Loeb LA, Herr AJ: Somatic mutations in aging, cancer and neurodegeneration. Mech Ageing Dev. 2012; 133(4): 118–126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim J, Shin JY, Kim JI, et al.: Somatic deletions implicated in functional diversity of brain cells of individuals with schizophrenia and unaffected controls. Sci Rep. 2014; 4: 3807. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPoduri A, Evrony GD, Cai X, et al.: Somatic mutation, genomic variation, and neurological disease. Science. 2013; 341(6141): 1237758. PubMed Abstract | Publisher Full Text | Free Full Text\n\nO'Huallachain M, Karczewski KJ, Weissman SM, et al.: Extensive genetic variation in somatic human tissues. Proc Natl Acad Sci U S A. 2012; 109(44): 18018–18023. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPiotrowski A, Bruder CE, Andersson R, et al.: Somatic mosaicism for copy number variation in differentiated human tissues. Hum Mutat. 2008; 29(9): 1118–1124. PubMed Abstract | Publisher Full Text\n\nRehen SK, Yung YC, McCreight MP, et al.: Constitutional aneuploidy in the normal human brain. J Neurosci. 2005; 25(9): 2176–2180. 
PubMed Abstract | Publisher Full Text\n\nYurov YB, Iourov IY, Vorsanova SG, et al.: Aneuploidy and confined chromosomal mosaicism in the developing human brain. PLoS One. 2007; 2(6): e558. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCoufal NG, Garcia-Perez JL, Peng GE, et al.: L1 retrotransposition in human neural progenitor cells. Nature. 2009; 460(7259): 1127–1131. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKano H, Godoy I, Courtney C, et al.: L1 retrotransposition occurs mainly in embryogenesis and creates somatic mosaicism. Genes Dev. 2009; 23(11): 1303–1312. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcConnell MJ, Lindberg MR, Brennand KJ, et al.: Mosaic copy number variation in human neurons. Science. 2013; 342(6158): 632–637. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLodato MA, Woodworth MB, Lee S, et al.: Somatic mutation in single human neurons tracks developmental and transcriptional history. Science. 2015; 350(6256): 94–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHazen JL, Faust GG, Rodriguez AR, et al.: The Complete Genome Sequences, Unique Mutational Spectra, and Developmental Potency of Adult Neurons Revealed by Cloning. Neuron. 2016; 89(6): 1223–1236. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–1760. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoboldt DC, Zhang Q, Larson DE, et al.: VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res. 2012; 22(3): 568–576. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–2079. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang K, Li M, Hakonarson H: ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 2010; 38(16): e164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCibulskis K, Lawrence MS, Carter SL, et al.: Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat Biotechnol. 2013; 31(3): 213–219. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSubramanian A, Tamayo P, Mootha VK, et al.: Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005; 102(43): 15545–15550. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWei PC, Chang AN, Kao J, et al.: Long Neural Genes Harbor Recurrent DNA Break Clusters in Neural Stem/Progenitor Cells. Cell. 2016; 164(4): 644–655. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheng KC, Cahill DS, Kasai H, et al.: 8-Hydroxyguanine, an abundant form of oxidative DNA damage, causes G→T and A→C substitutions. J Biol Chem. 1992; 267(1): 166–172. PubMed Abstract\n\nOhno M, Sakumi K, Fukumura R, et al.: 8-oxoguanine causes spontaneous de novo germline mutations in mice. Sci Rep. 2014; 4: 4689. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShibutani S, Takeshita M, Grollman AP: Insertion of specific bases during DNA synthesis past the oxidation-damaged base 8-oxodG. Nature. 1991; 349(6308): 431–434. PubMed Abstract | Publisher Full Text\n\nFaggioli F, Wang T, Vijg J, et al.: Chromosome-specific accumulation of aneuploidy in the aging mouse brain. Hum Mol Genet. 2012; 21(24): 5246–5253. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMkrtchyan H, Gross M, Hinreiner S, et al.: The human genome puzzle - the role of copy number variation in somatic mosaicism. Curr Genomics. 2010; 11(6): 426–431.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaillie JK, Barnett MW, Upton KR, et al.: Somatic retrotransposition alters the genetic landscape of the human brain. Nature. 2011; 479(7374): 534–537. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPoduri A, Evrony GD, Cai X, et al.: Somatic activation of AKT3 causes hemispheric developmental brain malformations. Neuron. 2012; 74(1): 41–48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolstege H, Pfeiffer W, Sie D, et al.: Somatic mutations found in the healthy blood compartment of a 115-yr-old woman demonstrate oligoclonal hematopoiesis. Genome Res. 2014; 24(5): 733–742. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCostello M, Pugh TJ, Fennell TJ, et al.: Discovery and characterization of artifactual mutations in deep coverage targeted capture sequencing data due to oxidative DNA damage during sample preparation. Nucleic Acids Res. 2013; 41(6): e67. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHatahet Z, Zhou M, Reha-Krantz LJ, et al.: In search of a mutational hotspot. Proc Natl Acad Sci U S A. 1998; 95(15): 8556–8561. PubMed Abstract | Free Full Text\n\nRay PD, Huang BW, Tsuji Y: Reactive oxygen species (ROS) homeostasis and redox regulation in cellular signaling. Cell Signal. 2012; 24(5): 981–990. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWalton NM, Shin R, Tajinda K, et al.: Adult neurogenesis transiently generates oxidative stress. PLoS One. 2012; 7(4): e35264. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJordan KW, Craver KL, Magwire MM, et al.: Genome-wide association for sensitivity to chronic oxidative stress in Drosophila melanogaster. PLoS One. 2012; 7(6): e38722. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerez Estrada C, Covacu R, Sankavaram SR, et al.: Oxidative stress increases neurogenesis and oligodendrogenesis in adult neural progenitor cells. Stem Cells Dev. 2014; 23(19): 2311–2327.
PubMed Abstract | Publisher Full Text\n\nSuberbielle E, Sanchez PE, Kravitz AV, et al.: Physiologic brain activity causes DNA double-strand breaks in neurons, with exacerbation by amyloid-β. Nat Neurosci. 2013; 16(5): 613–621. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSharma A, et al.: Dataset 1 in: Human brain harbors single nucleotide somatic variations in functionally relevant genes possibly mediated by oxidative stress. F1000Research. 2016. Data Source" }
[ { "id": "17217", "date": "26 Oct 2016", "name": "Kunihiko Sakumi", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the manuscript, Sharma et al. reported that adult human brains harbor single nucleotide somatic variations in the genome. Using whole exome sequencing with the Illumina TruSeq and HiSeq2000 system, the authors compared single nucleotide variations between the frontal cortex (FC) and corpus callosum (CC) of human brains. These variations were partially validated by amplicon sequencing using MiSeq. The authors found that G to T transversions and 8-OHdG levels were increased in FC. They observed that the somatic variations were enriched in genes belonging to axon guidance and neuronal process-related pathways. They propose that oxidative stress influences single nucleotide somatic variations in the normal human brain.\nThere is a problem with the interpretation of the data. The standard TruSeq protocol uses 100 ng of DNA, which is equivalent to 10^7 human cells. To get 10% of alt reads, 2 × 10^6 cells should carry the same variation in the heterozygous state. In adult brain tissue, it is hardly possible for an 8-OHdG-caused mutation to occur at the same site in the genomes of 2 × 10^6 cells. If this observation is correct, at least 21 DNA replication events are required to expand the cell population carrying the same somatic variation. Such mosaicism is well known in development, as described by the authors in the Introduction.
Probably, the seed of the observed single nucleotide variation caused by 8-OHdG was generated around birth, a stage when neuronal cells are rapidly replicating, and not in adulthood.\nAnother possibility to explain the 10% of alt reads is contamination of blood cells in the brain tissue sample. There is no quantitative purity analysis of the tissue (the source of DNA). If blood or another cell type contaminated the FC sample, somatic mosaicism might be detectable in this system. If the authors could provide the SNVs of blood DNA from the same brain donor, we could get the answer.\n\nI recommend the authors reconsider the data and prepare a model, including a time scale of mutation accumulation, reflecting these results.\n\nComments:\nFigure 4 - In this figure, the proportion of genotypes exhibits a completely symmetrical pattern between CC and FC. I cannot understand the reason. The authors should discuss this observation with any possible mechanism.\n\np9, 1st paragraph - To avoid inappropriate bias, the authors should use a high-fidelity DNA polymerase for PCR, and the Sanger sequencing method to validate the variant ratio quantitatively. By showing the chromatogram pattern, we can discuss the variant ratio quantitatively.\n\nDataset 1 - I could not read the 9 somatic variation.csv files in Dataset 1.", "responses": [ { "c_id": "2349", "date": "05 Dec 2016", "name": "arijit mukhopadhyay", "role": "Author Response", "response": "Rev Comment: In the manuscript, Sharma et al. reported that adult human brains harbor single nucleotide somatic variations in the genome. Using whole exome sequencing with the Illumina TruSeq and HiSeq2000 system, the authors compared single nucleotide variations between the frontal cortex (FC) and corpus callosum (CC) of human brains. These variations were partially validated by amplicon sequencing using MiSeq. The authors found that G to T transversions and 8-OHdG levels were increased in FC.
They observed that the somatic variations were enriched in genes belonging to axon guidance and neuronal process-related pathways. They propose that oxidative stress influences single nucleotide somatic variations in the normal human brain. There is a problem with the interpretation of the data. The standard TruSeq protocol uses 100 ng of DNA, which is equivalent to 10^7 human cells. To get 10% of alt reads, 2 × 10^6 cells should carry the same variation in the heterozygous state. In adult brain tissue, it is hardly possible for an 8-OHdG-caused mutation to occur at the same site in the genomes of 2 × 10^6 cells. If this observation is correct, at least 21 DNA replication events are required to expand the cell population carrying the same somatic variation. Such mosaicism is well known in development, as described by the authors in the Introduction. Probably, the seed of the observed single nucleotide variation caused by 8-OHdG was generated around birth, a stage when neuronal cells are rapidly replicating, and not in adulthood.   Ans: We agree with the reviewer. As is known in the literature, in the human brain the neurogenesis process starts at E13 and finishes by E108. The cell division is asymmetric: after a certain point each division yields one mitotic and one post-mitotic neuron, and hence accurate prediction of the number of dividing neurons within this time frame is not possible. According to estimates, an infant brain has neurons in the order of 10^10–10^11. As pointed out by the reviewer, following our criteria at least 10^6 cells need to carry the genomic variation. So, we also hypothesize that the variations we are observing occurred before birth or within the first couple of years after birth.
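The back-of-envelope arithmetic in this exchange can be reproduced in a short sketch (an illustration of the calculation only; the equivalence of 100 ng of DNA to ~10^7 cell genomes is taken from the reviewer's comment, not independently verified here):

```python
import math

def cells_carrying_het_variant(total_cells, vaf):
    # A heterozygous cell contributes 1 variant allele out of its 2 alleles,
    # so the fraction of cells carrying the variant is twice the variant-allele frequency.
    return 2 * vaf * total_cells

def symmetric_doublings(n_cells):
    # Divisions needed for a single founder cell to expand clonally to n_cells.
    return math.ceil(math.log2(n_cells))

n = cells_carrying_het_variant(1e7, 0.10)  # ~2 x 10^6 cells
d = symmetric_doublings(n)                 # ~21 doublings, as the reviewer notes
```

The 21 doublings follow directly from 2^21 ≈ 2.1 × 10^6, which is why a variant at 10% allele frequency in bulk DNA must have arisen early, during a phase of rapid cell division.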
Looking at the specific type of the majority of the changes (G>T transversions), this is even more compelling, as the high metabolic demand of the cortical region would create local oxidative stress and such ‘errors’ might undergo rapid clonal expansion if they enhanced the ‘fitness’ of the local population. Rev Comment: Another possibility to explain the 10% of alt reads is contamination of blood cells in the brain tissue sample. There is no quantitative purity analysis of the tissue (the source of DNA). If blood or another cell type contaminated the FC sample, somatic mosaicism might be detectable in this system. If the authors could provide the SNVs of blood DNA from the same brain donor, we could get the answer.     Ans: We understand the reviewer’s concern. However, we would like to argue that the observed results cannot come from contamination, for the following reasons: (1) We see a specific pattern of heterozygous sites in the frontal cortex which is very different from that of the corpus callosum obtained from the same brain; such a pattern would not appear for each sample if it were due to contamination. (2) We see enrichment of relevant biological processes for each individual only for the relevant tissue type; such a pattern is also not expected from a random contamination event. We agree that data on blood SNVs for the same samples would be useful in addressing this in a more robust way – but unfortunately, such samples were not available.   Rev Comment: I recommend the authors reconsider the data and prepare a model, including a time scale of mutation accumulation, reflecting these results.  Ans: We attempted to devise a model but finally could not do it, as the neuronal division is asymmetric. It starts at E13 initially in a symmetric manner (i.e.
two mitotic daughter cells from one mother cell) but soon becomes asymmetric (one mitotic and one post-mitotic daughter cell), a switch that relies on a milieu of chemical cues – both inducing and inhibitory. Thus any model would be highly inaccurate. Instead, in the revised version we have now added the following sentences under the discussion section. We hope the reviewer will find this acceptable.   “From our observed results, it seems that most of the variations appeared around the time of birth, when neurons are rapidly dividing. It is known in the literature that the process of neurogenesis spans from E13 to E108 and that the number of neurons in an infant brain is in the order of 10^10–10^11. With our analysis threshold of at least 10% abundance of the variant allele, the variation should be present in 10^6–10^7 cells – which is consistent with this developmental time frame. However, studies with more tissues (from within and outside the brain) from the same individuals would be needed to rule out other possible reasons.” Rev Comments: Figure 4 - In this figure, the proportion of genotypes exhibits a completely symmetrical pattern between CC and FC. I cannot understand the reason. The authors should discuss this observation with any possible mechanism.   Ans: The reason is our analysis pipeline. As described in the manuscript, a variation qualifies as a somatic variation when the genotype of one tissue (e.g. frontal cortex) differs from that of its pair (e.g. corpus callosum). Hence, by definition, if the FC genotype at a locus is heterozygous then CC has the homozygous genotype – and vice versa. This is why we see a mirror image of the genotype distribution in Figure 4. p9, 1st paragraph - To avoid inappropriate bias, the authors should use a high-fidelity DNA polymerase for PCR, and the Sanger sequencing method to validate the variant ratio quantitatively. By showing the chromatogram pattern, we can discuss the variant ratio quantitatively.
Ans: A high-fidelity DNA polymerase was used in the amplification steps. For Sanger sequencing, the majority of these sites would not be picked up due to the low abundance of the variant allele. Sanger sequencing is more efficient when the variant allele frequency is 20% and above. This is why we opted for a very deep sequencing approach like MiSeq. Dataset 1 - I could not read the 9 somatic variation.csv files in Dataset 1. Ans: We are sorry for the technical difficulty. We have freshly uploaded the files (New Dataset 1.zip) and hope that they are readable now." } ] }, { "id": "17435", "date": "07 Nov 2016", "name": "Arindam Maitra", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe manuscript is well written. However, I would like to point out that oxoG artefacts can also be generated during fragmentation of DNA by sonication (a standard step in the method followed by the authors). Given that only 71% of these mutations could be verified (the verification was not by an orthogonal method but by the same sequencing technology), I am not sure whether there are some residual artefacts in the final dataset (even after all filtration and taking into account the fact that both tissue and blood DNA were subjected to the same method).", "responses": [ { "c_id": "2348", "date": "05 Dec 2016", "name": "arijit mukhopadhyay", "role": "Author Response", "response": "Rev Comment: The manuscript is well written.
However, I would like to point out that oxoG artefacts can also be generated during fragmentation of DNA by sonication (a standard step in the method followed by the authors). Given that only 71% of these mutations could be verified (the verification was not by an orthogonal method but by the same sequencing technology), I am not sure whether there are some residual artefacts in the final dataset (even after all filtration and taking into account the fact that both tissue and blood DNA were subjected to the same method). Ans: We understand the reviewer’s concern and in general agree with him. It is possible that if we were able to validate all the sites reported in the study, a good proportion (at least 30%, by the validation rate) of the sites might turn out to be false positives due to oxoG artefacts as well as other less-known causes. It is also true that, given our stringent analysis thresholds, we would also have some false negatives. Recently published estimates of somatic variations in the whole genomes of single neurons also point to this possibility. Given the scope of the present study, we are not able to accurately quantify either type of error. However, we would like to point out that the main point of our study is not the exact locations of the somatic variations but the fact that physiological oxidative stress, in moderation, can generate such variants, which have biological significance (in terms of pathway enrichment). We hope that the reviewer will see merit in this argument. The reviewer is right to point out that our validation platform was not an orthogonal technique. We chose MiSeq over other methods considering not only orthogonality but also the robustness of the platform and its error rates. Comparing all parameters, MiSeq was our method of choice, as we reasoned that even with the same sequencing chemistry we should be able to validate true positives given the very high depth of reads obtained." } ] } ]
1
https://f1000research.com/articles/5-2520
https://f1000research.com/articles/6-34/v1
11 Jan 17
{ "type": "Case Report", "title": "Case Report: Severe hypernatremia from psychogenic adipsia", "authors": [ "Sarah Manning", "Rehan Shaffie", "Shitij Arora" ], "abstract": "Hypernatremia is a common emergency room presentation and carries a high mortality. We describe the case of a 56-year-old male patient who presented with a refusal to drink water for several weeks leading to his admission. He was diagnosed with psychogenic adipsia and was treated successfully with fluids, mirtazapine and clonazepam.", "keywords": [ "hypernatremia", "adipsia", "osmolality", "altered mental status", "osmotic demyelination", "paranoia", "stroke", "infection" ], "content": "Introduction\n\nHypernatremia is a common electrolyte abnormality seen in the emergency department and can carry an estimated mortality of 40–60% depending on the degree of severity1. Psychogenic adipsia is a rare cause of hypernatremia and represents a subgroup in which chronic long-term management is critical, as these patients are likely to relapse. There has been a reported case in which hypernatremia was corrected with hemodialysis using a high-sodium dialysate to prevent osmotic demyelination syndrome2,3. One of the mechanisms involved in psychogenic adipsia is the destruction or improper functioning of the osmoreceptors in the hypothalamus that control the thirst mechanism; this may be the result of a congenital malformation or acquired, as in the case of stroke, trauma, or infection4,5.\n\nWe report a case of hypernatremia with serum sodium as high as 181 mEq/L and no neurologic manifestations, after the patient refused to drink water in the nursing home.\n\n\nCase presentation\n\nThe patient being described is a 56-year-old male with cognitive developmental delay and anxiety who was sent from his assisted living facility with hypernatremia on routine labs and documented refusal to drink water. There were no other complaints. He was afebrile and recorded a blood pressure of 99/97 mmHg.
Physical exam showed a cachectic male with dry mucous membranes. A complete medication list provided by the patient’s assisted living facility included famotidine 40 mg daily, docusate 100 mg daily, and a daily multivitamin.\n\nLaboratory analysis showed a plasma sodium concentration of 181 mEq/L, plasma chloride concentration of 138 mEq/L, and plasma potassium concentration of 4.6 mEq/L. Serum osmolality was 359 mOsm/kg. The urine sodium level was less than 20 mEq/L and the urine chloride level was also less than 20 mEq/L. Urine osmolality was 1080 mOsm/kg. The patient was immediately rehydrated with D5 1/2 normal saline solution, with care not to exceed a correction rate of 6–8 mEq/L of sodium per day. The patient continued to refuse most oral intake and denied thirst. A CT scan was obtained without contrast and showed mild microvascular ischemic disease without evidence of intraparenchymal hemorrhage, acute infarct, or hydrocephalus. No hypothalamic infarct, other mass lesion or focal mass effect was seen. Later in the course of his admission he admitted to severe stress from a recent emotional break-up. He was started on mirtazapine 7.5 mg daily and clonazepam 0.25 mg twice daily to address his anxiety, which led to an improvement in appetite and a regained thirst mechanism. He was stable when discharged back to his assisted living facility; his sodium remained within the normal range at discharge and was normal at 6-month post-discharge follow-up.\n\n\nDiscussion\n\nThe above case describes a patient with profound hypernatremia who was devoid of thirst and remarkably asymptomatic on neurologic exam. There are at least two very interesting phenomena that can be discussed through this case. One is the presence of an intense emotional response and its effect on the thirst mechanism. Thirst is a very powerful mechanism meant to protect against hypernatremia.
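The cautious correction rate described above can be illustrated with the standard free-water-deficit estimate (a hedged sketch: the 70 kg body weight, the 0.6 total-body-water fraction and the 140 mEq/L target are assumed for illustration and do not appear in the case report):

```python
def free_water_deficit_l(weight_kg, serum_na, tbw_fraction=0.6, target_na=140):
    # Classic estimate: deficit = total body water x (serum Na / target Na - 1).
    return tbw_fraction * weight_kg * (serum_na / target_na - 1)

def min_correction_days(serum_na, max_rate=8, target_na=140):
    # Minimum days needed if sodium falls by at most max_rate mEq/L per day.
    return (serum_na - target_na) / max_rate

deficit = free_water_deficit_l(70, 181)   # ~12.3 L for an assumed 70 kg patient
days = min_correction_days(181)           # ~5.1 days at <= 8 mEq/L per day
```

At a ceiling of 6–8 mEq/L per day, a sodium of 181 mEq/L therefore implies roughly five or more days of controlled rehydration, which is consistent with the gradual correction pursued in this case.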
Functional MRI studies have identified the anterior cingulate gyrus as the core area associated with the consciousness of thirst6. The same area is also implicated in a number of psychiatric disorders such as schizophrenia, depression and autism7. While the osmoreceptors sense plasma sodium levels, the consciousness of thirst involves a very different and complex limbic circuit. A patient with a plasma sodium concentration of 150 mEq/L or more who is alert but not thirsty has, by definition, a hypothalamic lesion affecting the thirst center8. Psychiatric illness affecting the osmoreceptors of the hypothalamus appears to be very rare and very few cases have been reported; one of them involved a 17-year-old boy with psychosis who displayed an impaired thirst mechanism similar to that of the patient described above. When the psychosis was treated and began to resolve, the thirst mechanism returned9. A detailed psychiatric history should be very useful in preventing recurrences and identifying cases of psychogenic adipsia.\n\nThe case also highlights the cerebral adaptation to chronic hypernatremia that results in absent neurologic sequelae. This adaptation involves an initial uptake of sodium and potassium, followed by the later accumulation of osmolytes, mainly myo-inositol and the amino acid glutamine. The delayed efflux of these osmolytes, as seen when the sodium is corrected too rapidly, is what results in cerebral edema, seizures and coma10.\n\nIn conclusion, psychogenic adipsia represents a rare cause of severe hypernatremia, and this case highlights the importance of the psychiatric history in patients who present with severe chronic or recurrent hypernatremia.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s details was obtained from the patient.", "appendix": "Author contributions\n\n\n\nSM wrote the manuscript and performed the literature search.
RS and SA conceptualized and were involved in patient care.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nMount DB: Fluid and Electrolyte Disturbances. In: Kasper D, Fauci A, Hauser S, Longo D, Jameson J, Loscalzo J. eds. Harrison's Principles of Internal Medicine. 19e. New York, NY: McGraw-Hill; 2015. Reference Source\n\nHan MJ, Kim DH, Kim YH, et al.: A Case of Osmotic Demyelination Presenting with Severe Hypernatremia. Electrolyte Blood Press. 2015; 13(1): 30–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSabatine MS: Pocket Medicine. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins; 2011. Reference Source\n\nRobertson GL: Disorders of the Neurohypophysis. In: Kasper D, Fauci A, Hauser S, Longo D, Jameson J, Loscalzo J. eds. Harrison's Principles of Internal Medicine. 19e. New York, NY: McGraw-Hill; 2015. Reference Source\n\nFarley PC, Lau KY, Suba S: Severe hypernatremia in a patient with psychiatric illness. Arch Intern Med. 1986; 146(6): 1214–1215. PubMed Abstract | Publisher Full Text\n\nEgan G, Silk T, Zamarripa F, et al.: Neural correlates of the emergence of consciousness of thirst. Proc Natl Acad Sci U S A. 2003; 100(25): 15241–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYücel M, Yücel SJ, Fornito A, et al.: Anterior cingulate dysfunction: implications for psychiatric disorders? J Psychiatry Neurosci. 2003; 28(5): 350–4. PubMed Abstract | Free Full Text\n\nAlpern RJ, Giebisch GH, Hebert SC, et al.: Seldin and Giebisch's The Kidney: Physiology and Pathophysiology. Amsterdam: Elsevier Acad. Press; 2008. Reference Source\n\nHossam HA, Al Aseri ZA, Suriya OM: Behavioural induced severe hypernatremia without neurological manifestations. Saudi J Kidney Dis Transpl. 2010; 21(1): 113–117.
PubMed Abstract\n\nHeilig CW, Stromski ME, Blumenfeld JD, et al.: Characterization of the major brain osmolytes that accumulate in salt-loaded rats. Am J Physiol. 1989; 257(6 Pt 2): F1108–16. PubMed Abstract" }
[ { "id": "19526", "date": "20 Jan 2017", "name": "Amarpali Brar", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSarah Manning and co-authors present a case of psychogenic adipsia and hypernatremia. Overall the case report is well written. Although previously described, as cited by the authors, and additionally reported by others as listed below1,2, this is a known clinical presentation.\n\nThis case report will add to the literature on this rare clinical presentation.\nUrine osmolality units should be changed to mOsm/kg.\nAdd other published reports about this presentation in the discussion.", "responses": [] }, { "id": "19222", "date": "20 Jan 2017", "name": "Bijin Thajudeen", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nInteresting case. It would be more interesting if the authors could comment on the role played by mirtazapine in curing the adipsia. One of the side effects of mirtazapine is increased thirst. Mirtazapine increases dopaminergic neurotransmission, and dopamine has a role in the modulation of thirst.
On reviewing some of the case reports which deal with adipsia and psychiatric disorders (depression, schizophrenia), the treatments or interventions (e.g. electroconvulsive therapy) used have one thing in common: they all increase dopaminergic activity, supporting the hypothesis that a deficiency of dopamine, or lack of dopaminergic activity, may play a role in the pathogenesis of adipsia.\nAdipsic hypernatremia is uncommon in patients with psychiatric disorders. Hence secondary causes like tumour, histiocytosis and sarcoidosis involving the hypothalamus should be ruled out. An MRI of the brain with or without contrast would be the most appropriate investigation of choice rather than a CT head without contrast.\n\nPatients with adipsic hypernatremia associated with psychiatric disorders will have a normal ADH response and an appropriate increase in urine osmolality.", "responses": [] } ]
1
https://f1000research.com/articles/6-34
https://f1000research.com/articles/4-177/v1
01 Jul 15
{ "type": "Research Note", "title": "Trialling the Use of Google Apps Together with Online Marking to Enhance Collaborative Learning and Provide Effective Feedback", "authors": [ "Nicky J. D. Slee", "Marty H. Jacobs" ], "abstract": "This paper describes a new approach to an ecology practical where the cohort was divided into four groups to collect data. Each group studied a different habitat, and each group was further subdivided into seven subgroups to collect field data. Each of the four groups collaborated through Google Drive on descriptions and images of the habitat site, and also collaborated at the subgroup level on their own habitat data. The four groups then shared habitat descriptions with the aim of providing enough information to enable everyone to understand the entire data set.\n\nGroup work was assessed online and feedback was given at both the group and subgroup levels. At the end of the first stage, peer assignment of all the work was carried out on an individual basis to engage students in the other habitats. A complete set of data was finally provided to all students, so that individuals could carry out their own analysis of all four habitats; work was again assessed online and feedback given to each individual.\n\nThe three-stage assignment from group work to peer assessment to individual analysis was a success. The collaborative work through Google Drive enabled students to produce high quality documents that were valuable for the next step. The peer assignment enabled students to gain information on expected Minimum Standards and exposed them to a variety of habitats. The final stage was open ended and challenged students.
This approach is recommended but the data collection process needs modification, and students need more guidance when completing the final stage of the assignment.", "keywords": [ "Collaboration", "Digital Literacy", "Ecology", "Fieldwork", "Google Docs", "Google Drive", "Peer assessment", "Self-Directed" ], "content": "Introduction\n\nThis article is aimed at university lecturers and outlines a method that enables students to enhance their knowledge and understanding of ecology by working in the field, in the lab, and online as members of a small group. The method makes use of free online tools—provided by Google—to facilitate collaborative activities that can be easily monitored by a lecturer; these activities also aim to increase the digital literacy of both the staff and students involved.\n\nThis report demonstrates that an academic with limited technological ability, but with an open mind and a small amount of support from a learning technologist, can create and manage an online assignment method that enables students to collaborate within a framework that is both defined and monitored. The approach also meets the “Challenger Philosophy” that is a key driving force at the University of Essex.\n\nThe rationale for trying new methods came from a Society for Experimental Biology (SEB) “Researchers – Teachers – Learners” conference (2012). Other influences included the concept of learner autonomy to encourage student engagement (Scott, 2012), Voelkel’s work (2012) on staged assignment to engage students, and the use of technology to promote student engagement and self-directed learning (Mello, 2012). 
The approach was also developed as part of a process to enrich large-scale teaching (Biggs, 2003) and to encourage student-student interaction, as well as student-led group work to enhance cognitive understanding.\n\nThe approach taken involved a new cohort of first-year students taking part in a three-stage assignment designed to give individuals an opportunity to gain practical experience in ecology, both in the field and in the lab, and to increase their knowledge of their local surroundings. This combined field and lab work represented 50% of the practical work associated with their first year ecology module (BS111). The students gained first-hand knowledge of one field site and this, together with the group work, gave them the confidence to analyse data from other unfamiliar habitats; Scott et al. (2012) concluded that first-hand exposure to field work enhanced pupils’ knowledge in ecology, and thus made them capable of engaging in self-directed learning. It was hoped that the practical work would give students a feeling of ownership over the data collected, and hence make it possible for them to take part in self-directed tasks connected with their data, as well as associated unrelated data (which the students would be able to contextualise through their hands-on experience of collecting data at one of the other four habitats).\n\nIt was also hoped that this new approach would help the students—new to university life—make the transition from the small group session format popular at most colleges, to the large group teaching scenarios that are commonplace within a university setting. The divided practical work aimed to create a small, friendly atmosphere which would enable students to ask each other questions, and encourage them to engage in discussion with the Graduate Lab Assistant or academic leading the group.\n\nThe students were given open-ended templates to enable creativity within the parameters of the standards required. 
This fulfils the educator’s task to create an environment with a high level of control where student learning is possible (Brown, 2004). Brown states that a series of tasks are needed to enable successful learning, and that students should start with simple tasks, but later be challenged to complete more complex work. With this in mind, students were first asked to produce group documents; these documents were then used by the whole cohort for peer assignment, and then by the lecturer as a means to accurately assess each student's contribution to the task as a whole.\n\nStudents regularly work in pairs for practical tasks (School of Biological Sciences, University of Essex), but generally write up reports independently. However, scientific research papers are frequently written by multiple authors. Therefore, another aim of the assignment was to teach students to work collaboratively, using digital tools and self-directed methods, to produce work that met with Minimum Undergraduate Standards. See School of Biological Sciences UG Handbook for full details of these standards.\n\nThe Google Apps service was selected to support the assignment process because Google is a familiar brand and is responsible for running the world’s most popular search engine; it was understood that this familiarity would make engagement more likely. Ease of access and versatility were also important deciding factors when selecting technology to support the assignment. Office 365 and wiki tools were also considered when deciding what technology to use, as they can both be used for online collaborative tasks (Doolan, 2007). However, the Google Apps service has a stronger emphasis on real-time collaboration. Also, the various apps use concepts that students are already familiar with (borrowing several features from traditional office software packages). 
Google Docs has also been used for effective collaborative English Language Learning (Hosseini, 2014) and has been used by business schools in a similar manner (see unpublished paper by Schneckenberg). The apps enable multiple users to work on a single document via a web browser, and the document owner can retrieve an earlier version if necessary. Knowledge can be created, edited and exchanged (Doolan, 2007), which empowers learners to develop team skills. By using a platform not controlled by the University, students were given greater ownership of the learning environment, which helped with engagement and motivation.\n\nThe rationale for a three-stage process of assignment was to encourage students to engage in tasks outside of their comfort zone, and to enhance deep learning rather than surface learning (Biggs, 2003; Rust, 2008). The multi-staged assignment was also designed to promote student engagement at each stage of the assignment (Brown, 2004; Voelkel, 2012).\n\nA summative assignment was used to improve student engagement in the task. A peer assignment element was also included to promote a deep learning strategy (Biggs, 2003). (The quality of the peer assignment activity was also a key component of the summative assignment.) The learning outcome of engagement in peer assignment is twofold: firstly, the students were exposed to the habitats they did not visit, which helped them with their independent task; secondly, the peer assignment process forced students to understand the task requirements and, as a result, involved them in a deeper learning experience (Biggs, 2003).\n\n\nMethods\n\nA new cohort of students (n=76), from the School of Biological Sciences, taking a first-year module in Ecology (BS111) were involved in the assignment. Five students did not submit work for the collaborative coursework and four did not complete any part of the three-stage assignment. There may be a variety of reasons why students failed to submit work, e.g.
transferring to another module.\n\nTwo practical sessions were used to carry out fieldwork, followed up by subsequent analysis of samples in the laboratory. The cohort was divided into four groups of 20, each of which was, in turn, split into 7 subgroups.\n\nThe assignment was split into three stages:\n\n1. Collaborative work (group & subgroup collaboration) - Students worked within one of the four groups on shared Google Slides and Docs files. Each group worked on their own set of documents. The seven subgroups within each group used a separate shared Google Sheets file to record their dataset.\n\n2. Peer assignment - This involved individual assessment of the group work, to make all students engage with the group work data from all sites, and also demonstrate their gained knowledge of the Minimum Undergraduate Standards.\n\n3. Independent work - To produce a document for individual assessment.\n\nDue to the early scheduling of the practical session in the autumn term, it was not possible to have a lecture to prepare the students prior to the session. Therefore, information was disseminated via Moodle, the University’s Learning Management System, and through a PowerPoint presentation given during one of the first practical sessions of the term.\n\nEach group (W, X, Y and Z) went to a different habitat on campus. The habitats used were as follows:\n\nGroup W - Benton’s Top Heath & Hay Meadows (Site Number 16).\n\nGroup X - Bluebell Wood (Site Number 15).\n\nGroup Y - Kingfisher Lake (Site Number 14).\n\nGroup Z - Campus farm and pond (Site Number 13).\n\n(For more information about these habitats and site numbers, visit the University’s biodiversity web page.)\n\nEach subgroup collected data from two quadrats within their selected habitat, and samples of soil, plants and invertebrates were taken for further analysis in the laboratory. The site was also surveyed to identify the different plant species present, whether monocot or dicot, and each species was allocated a code to be used by the group.
Samples and photos were also taken. This information was used during data collection from the quadrats, which were thrown randomly, twice per subgroup. Field measurements taken included the number of different plant species, the percentage cover for each species, and bare ground.\n\nAdditionally, a point quadrat was used to assess the height of the vegetation at 10 cm intervals across the quadrat and also to determine the number of hits (i.e. the number of leaves contacted before the pin hits the ground when following a vertical path). Temperatures of the air, plants (IR temperature gun) and soil (temperature probe) were also recorded. Soil was assessed in the field for dampness, odour, and colour. A sample was taken from the centre of the quadrat using a trowel to a depth of 10 cm. The vegetation was put in one labelled sample bag and the soil in a second. In the lab, fresh and dry weight measurements were determined for the vegetation and a subsample of the mixed soil core. Soil samples were also analysed for pH using the method described in the soil test kits (Palintest, 1997); texture was determined from feeling wet and dry samples, and from sedimentation profiles and porosity. Plant samples collected at group level were identified, and the range and type of invertebrates found in soil samples were recorded.\n\nInstructions were given via Moodle and verbally during a practical session, so that each student could create a personal Google Account; in addition, a handwritten record of their account details was collected during this practical session. Students were instructed to navigate to https://drive.google.com/ to access the Google Apps service.\n\nThe lecturer created a folder for the module (BS111) containing four group folders (labelled W, X, Y and Z). Within the group folders there was one folder that was shared at the group level and seven subfolders, each shared only with the relevant subgroup.
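The folder hierarchy just described (one module folder, four group folders, one group-level shared folder and seven subgroup subfolders each) can be mirrored locally; a minimal sketch using Python's pathlib, in which all folder names are illustrative (the authors actually duplicated the structure via the Google Drive desktop application, not code):

```python
from pathlib import Path
import tempfile

# Build BS111/<group>/... locally; folder names are hypothetical.
root = Path(tempfile.mkdtemp()) / "BS111"
for group in "WXYZ":
    group_dir = root / group
    # One folder shared at group level (would hold the Docs/Slides templates).
    (group_dir / "shared").mkdir(parents=True)
    # Seven subfolders, each for one subgroup's data-collection Sheets file.
    for n in range(1, 8):
        (group_dir / f"subgroup_{n}").mkdir()

# 4 groups x 7 subgroups = 28 subgroup folders in total.
n_subgroup_dirs = sum(1 for p in root.rglob("subgroup_*"))
print(n_subgroup_dirs)  # 28
```

Scripting the structure (whether locally or through a cloud storage API) avoids the tedium of creating 30-odd folders by hand and guarantees consistent naming across groups.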
The Google Drive desktop application was used to duplicate a template folder/file structure, as this proved to be much more time efficient than creating all of the folders and files through the web interface (see this support article for more details about the desktop application and how it works).\n\nEach group folder (W to Z) contained two template documents: a Google Docs file called “BS111 Practical 1 Description of Habitat” (see Supplementary materials 1) and a Google Slides file named “BS111 Photo Submission for Group Work” (Supplementary materials 2); the group folder was shared with all students in the group, who were granted full editing rights over the content of the folder. Within each group folder there were a further seven separate sub-folders; these folders contained the Google Sheets document for the data collection task, and were only shared with members of the subgroup. The Google Sheets included pre-formatted tables, with some of the column headers already containing units. The documents at the group and subgroup level also contained inline instructions for the collaborative work (see Supplementary materials 3 for an example).\n\nStudents worked collaboratively at the subgroup and group level; each student downloaded a copy of the Slides and Docs files, which were then uploaded to the University’s online coursework submission service (FASER). The group Slides and Docs files were exported as PDF files and the data collection Sheets were exported in Excel format (to make further manipulation possible).\n\nOnce this point was reached, the lecturer removed the editing rights of the students in Google Drive, so that further editing was not possible, effectively making all of the collaborative documents “read only”.
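The pre-formatted tables described above, with units carried in the column headers rather than in the data cells, can be sketched as a simple export convention; a minimal illustration in which every header name and value is invented, not taken from the actual class Sheets:

```python
import csv, io

# Hypothetical column headers carrying units, mirroring the
# pre-formatted Google Sheets templates.
HEADERS = ["species_code", "percent_cover (%)",
           "veg_height (cm)", "soil_temp (C)"]

# Toy quadrat records; values stay unit-free because units live in headers.
rows = [
    {"species_code": "PM", "percent_cover (%)": 40,
     "veg_height (cm)": 12, "soil_temp (C)": 11.2},
    {"species_code": "BW", "percent_cover (%)": 25,
     "veg_height (cm)": 30, "soil_temp (C)": 11.2},
]

# Export in a CSV/Excel-friendly form, as the subgroup Sheets were.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=HEADERS)
writer.writeheader()
writer.writerows(rows)

first_line = buf.getvalue().splitlines()[0]
print(first_line)
```

Keeping units out of the data cells is what makes the later numerical work (averages, correlations) possible without manual cleaning, which is exactly the problem the raw subgroup sheets created.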
The documents were then uploaded to Moodle and students used these copies for stages 2 and 3 of the assignment.\n\nThe lecturer also established a discussion forum site on the BS111 Moodle page to facilitate student discussion of the fieldwork.\n\nA pro forma for peer assignment was issued to the students (see Supplementary materials 4). Each student peer assessed the other group work (including the subgroup data collection sheet). This was worth 10% of the overall assignment mark.\n\nThe lecturer assessed two aspects of the student peer assignment process:\n\n1. Evidence of engagement in peer assignment, which required students to review information produced by the other groups and subgroups for the other three habitats;\n\n2. Evidence of engagement/research into definition of question terms used and correct application of Minimum Undergraduate Standards.\n\nPrinted instructions, also available in Moodle, were issued. The final part of the assignment was individual analysis of the four habitats for patterns, correlations, variability and species richness. This was an open-ended task (see Supplementary materials 5), facilitated by a guidance file (Excel document), which was provided on Moodle for download. Students requested further help and support, which was subsequently given.\n\nThe individual assignment files were uploaded to FASER by each student. Electronic marking and feedback was returned at an individual level via FASER.\n\n\nResults\n\nAs in the previous section, the results have been sorted into the three distinct stages of the assignment to ease understanding.\n\nThe students mainly succeeded in setting up Google Accounts, and the lecturer shared the documents at the appropriate level to enable group work in Google Drive. Handwritten records of the student Google Accounts were nearly complete for two of the four groups; the other two groups emailed the missing account details to the lecturer. 
There was a good level of online collaboration, evidenced by the group-level documents. The group work demonstrated that students were more aware of the campus grounds, and had an increased understanding of ecology through collecting and analysing the biotic and abiotic characteristics of the campus habitats.\n\nThree groups successfully collected all of the expected field data; the fourth group collected half the expected amount, i.e. from single quadrats rather than two. A considerable amount of data was collected within the two three-hour practical sessions. This involved a lot of collaboration in the field, as well as in the lab, to ensure data sampling and recording was accurate.\n\nStudents successfully accessed the documents on Google Drive to create a written description of the habitat. All groups also managed to curate photographs of the habitat site and plant species found using Google Slides. Self-directed learning clearly took place during the creation of the group documents, which surpassed the expectations of the lecturer in terms of overall quality and attention to detail. The descriptions and figures for the submissions were of a very high standard, especially considering the students had no formal training on expected undergraduate academic standards.\n\nAn example of the type of data created from one group (X) can be seen in Figure 1; it shows the habitat site ‘The Bluebell Wood’ and one of the plants identified, Plantago major (Figure 2). This data demonstrates that students did research the Minimum Undergraduate Standards required, as evidenced by the correct use of binomial names and the use and position of figure legends. Figure 3a provides evidence that the students engaged in the task, researching and identifying plants to species-level. They produced a high quality document with expected standards, all within six weeks of arriving at university. In general, correct standards were applied to the group level documents. 
The task instructions were purposely brief to enable students to make their own contribution. Figure 1 and Figure 2 show that some students used this opportunity to deepen their learning and understanding; the inclusion of additional notes shows that several groups were engaging in high quality research into their allocated habitat and the species discovered there (Figure 3a).\n\nFigure 3. (a) Example of feedback comments on the descriptive task of stage 1 of the assignment. (b) Lecturer comments added to the student descriptive task in Figure 3a.\n\nThe Google Sheets data, collected during the subgroup task, highlighted a greater range of student ability and was less successful overall. Raw and calculated data were expected; the submitted data ranged from no data, to incomplete data sets, to data sets complying with Minimum Undergraduate Standards.\n\nThe approach required students to be responsible for keeping data safe until it was inputted online, and to understand the Minimum Undergraduate Standards for tables. Evidence suggests that many students were unaware of these expectations, as more than one of the data sheets had no data submitted and many others contained data with the units in the body of the table.\n\nThe small size of the subgroups meant that there was greater variation in the standard of work. (This is to be expected for a normal distribution of data.) This demonstrates that the subgroups were working within the team that they were assigned to. The large number of data sheets generated (21) meant it was difficult to quickly achieve a single, combined dataset in a format that was suitable for the students to access for the second and third parts of the assignment. The deadline for the assignment was adjusted to take account of the time needed to collate and disseminate the data to the cohort.\n\nInteresting group dynamics were evident from monitoring the Google Drive activity.
Members of one group engaged in the process very actively, and some members became natural leaders. These leaders then became a little too controlling and removed the other group members’ editing permissions. The lecturer intervened by sending an email out via Moodle to let them know this was not acceptable behaviour, and reinstated the permissions for all group members. Students used the Moodle forum after this intervention. The students also engaged in group discussions using the Google Apps comments feature. This was particularly useful when students were refining their documents; the comments stream acted as a private, student-led forum for discussion.\n\nThe second part of the assignment, the peer assignment of the group work, was aimed at making students aware of the marking process and the criteria that the final individual work would be marked against (questions set and standards expected). The process also exposed individuals to the entire dataset collected. By engaging students in peer review of the document and data collected, it was hoped that individuals would become familiar with the habitats they had not visited, and not just rely on the site that they were actively involved in.\n\nThe peer assignment required students to look at the eight group documents, descriptions and figures of the four habitats, as well as the 28 data sheets. A high proportion of students fully engaged in the task (68.4%), i.e. they reviewed the minimum standards and used their observations to peer assess the work of the other three groups (see Figure 4 for an example). This enabled them to achieve greater than a 2.1 mark for the task. It was also found that 52% of students obtaining a first class mark for the peer assignment also obtained a first class mark for their independent work.\n\nA number of students struggled with the peer assignment process. The assignments in these cases had arbitrary marking patterns and over-inflated scores, e.g. 
5/5, and comments were very brief or absent. 5.3% of students obtained lower second class marks for this second stage of the assignment and demonstrated that they had limited knowledge of expected requirements and types of answers.\n\nThe last set of students (26%) did not engage in stage 2 of the assignment at all. There was no evidence that these students knew about Minimum Undergraduate Standards and this was reflected in the work they submitted for stage 3, as the maximum mark obtained for this set of students was 55%.\n\nThe independent work was an open-ended task that challenged the students to work within, and then beyond, their current knowledge base, e.g. use the tutorial guidance to find out about how to carry out averages and correlation. Students also needed to write some accompanying text. The initial tasks (e.g. finding averages) should have been accessible to all students. Stage 3 assumed that stages 1 & 2 had been achieved to a high standard. However, the data sheets between subgroups were not consistent—one group had a complete data set missing and other groups had sheets with incomplete data.\n\nThe endpoint of the entire assignment was to carry out correlations to establish if there were any discernible patterns in the data from the different habitats. Students found that the data sheets were not in a suitable format to enable correlation, and had to make difficult decisions on what data to include/exclude and how to standardise the results. As a consequence, the students asked for guidance on sorting and analysing the data. Some individuals became stressed, evidenced by emails sent to the lecturer. An emergency tutorial was run in a computer lab to support these students; however some had overcome these issues by that stage, e.g. they had produced correlation tables and figures which they submitted in their individual assignment (see Figure 5 and Figure 6). 
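The correlation endpoint described above, including the need to exclude incomplete records before any analysis, can be sketched with the standard library alone; the paired variables and all values below are invented for illustration and are not the class data:

```python
import math

# Toy paired measurements across quadrats; None marks missing entries,
# mirroring the incomplete subgroup data sheets.
soil_ph   = [6.8, 5.9, None, 7.2, 6.1, 6.5]
n_species = [5,   3,   4,    7,   None, 4]

# Keep only complete pairs before correlating.
pairs = [(x, y) for x, y in zip(soil_ph, n_species)
         if x is not None and y is not None]
xs, ys = zip(*pairs)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(xs, ys)
print(len(pairs), round(r, 2))  # 4 0.96
```

The decision of which rows to drop (and how to standardise the rest) is exactly the judgement call the students had to make when the sheets arrived in inconsistent formats.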
During the tutorial, the majority of the attendees felt anxious because they had not achieved the ultimate endpoint of the assignment. As a result, the task was modified to include a simpler outcome that they could achieve. Students had put a huge amount of time into the activity; the lecturer feedback indicated that they had done well and listed achievements for figures and descriptions, and highlighted the fact that most had created tables and figures that mainly complied with expected academic standards.\n\n\nDiscussion/Conclusions\n\nThe three-stage assignment was a successful learning experience for most of the students involved, who had no prior knowledge of Google Drive or the collaborative Google Apps learning tools. The lecturer embraced the University ethos which involves being tenacious, bold and challenging. The collaborative activity stretched the lecturer as well as the students; both learnt how to use these new technologies together.\n\nThe division of the cohort into four smaller learning groups and seven subgroups provided an effective and friendly environment for practical work, which provided a good situation to meet the challenge of online collaborative work. However, improved management, e.g. rotating the groups around the laboratory-based activities, would increase the efficiency and reduce the pressure on the limited equipment available in the laboratory.\n\nIt was evident from the practicals and data collection stage that there was too much data to collect and analyse; this could be simplified in the future, e.g. reduce sample collection to one quadrat per sub group. This would decrease the time pressure for staff and students during the practical sessions and would also simplify the class data set.\n\nA better system is needed for subgroup data collection, as the method used was cumbersome and not as effective as alternative methods. 
Other options include the use of a single document at the group level, the use of electronic voting devices (clickers) to collect data, or the use of an online form to collect class data in a standard format (Google also has a tool to facilitate this). This would help team leaders provide class data in a usable form. These alternative approaches would make it easier (and quicker) for students to collect and clean the data.\n\nStudents collaborated within Google Drive and, in some cases, managed to produce high quality documents (Figure 1 to Figure 3). There was a noticeable variation of style between the four groups. This indicates that students worked within their own group and were not using other mechanisms to exchange information with students outside of their allocated group. Therefore this part of the assignment met its intended learning outcomes.\n\nThe group work, together with the use of comments within Google Docs, demonstrates that students engaged in self-directed learning. The open-ended nature of the assignments allowed the students to deepen their learning, which is evident from the research notes added to Figure 1 and Figure 2. Students demonstrated emergent behaviour, which included self-discovery of features in Google Apps, e.g. the discussion stream and document permission settings, which suggests student-student learning took place. Therefore, this met the intended outcome of this approach.\n\nDuring the group work, it emerged that students were using three methods of electronic communication: discussing and using comments and emails in Google Drive; using the Moodle forum; and emailing each other via Outlook. The communication tools within Google Drive were the most advantageous, as they acted as a private forum for student discussion. The disadvantage of the Moodle forum was that it was accessible to the whole cohort rather than just the group.
Also, more importantly, all lecturers teaching first year modules in the School of Biological Sciences were able to see comments made; this might colour a lecturer’s view of a particular student when setting and marking their future work. Therefore, it would be wise if the lecturer directed the students to a single method (most likely within the Google ecosystem), as this would avoid such problems.\n\nThe success of the three-stage assignment required the rapid creation and collection of Google Account details from students. Overall this worked well, but students who did not create accounts left no evidence of collaboration, and hence did not pass stage 1 of the assignment. Reasons for lack of student engagement were varied: some individuals were reluctant to try something new, others distrusted the online services, and a few did not want to put their data in the “Google cloud”. Another possible reason for this poor engagement is the timing of the task; it took place early in the term, so students had to contend with many new things, such as some subgroup members changing degree schemes.\n\nIn the future, it would be advantageous to use Google Apps for Education or the Microsoft Office 365 suite of tools, as all members of the University now have access to both of these services. (However, Office 365 would only be used if the collaboration features are as good as those found in Google Drive.)\n\nUsing Google Apps for Education would give the lecturer greater control, e.g. the ability to establish student accounts for Google Drive before the start of the practical session. Google takes the education arm of its business very seriously, which should reassure students that their data is safe. Showing the students a short video revealing the security measures that Google enforces at its data centres would perhaps ease any concern, e.g. Inside a Google data center.
Getting students to look at the Google Drive privacy policies and terms of service would also be helpful.\n\nUsing Google Drive/Apps did make it a little difficult to see which students had done what, as the activity updates are presented as one continuous stream of information. Using Google Apps for Education would also help improve this situation.\n\nIt would be possible to provide formative feedback for the stage 1 work rather than wait for formal feedback. This environment is good for self-directed learning, experimentation, and for encouraging informal communication (which will help students engage at a deeper level and help with a more comprehensive understanding of the subject).\n\nThe peer assignment provided the link between the group and individual work. It also ensured that the students engaged in data that they were not involved in creating. Those students who engaged in the peer assignment activity went on to do a more thorough individual report (third stage of the assignment). This was in stark contrast to the students who had dealt with the peer assignment superficially, who achieved a much lower score.\n\nStage 2 enabled students to regain entry into the assignment process if they had failed to complete stage 1. By engaging students at this stage of the assignment, it was hoped that they would become familiar with the habitats they had not visited and not just rely on the site they sampled. Students did analyse all of the data sets, so the peer assignment activity met its intended aim.\n\nFor the third stage of the assignment, some students used the tutorial and managed to achieve all of the expected outcomes, i.e. correlation figures and identified plants to species level through their own endeavours (Figure 5 and Figure 6). 
It is important to let students know that there is no correct answer expected, and that the assignment is designed to force them to work outside of their comfort zone.\n\nThe fact that there were three stages to the assignment allowed any student not succeeding at a particular stage to still be successful at a later stage. Another benefit of this staggered approach was that it required a variety of skills for completion, thus enabling a diverse range of learners to succeed at one or more parts of the assignment.\n\nThe overall assignment demanded a high level of engagement, so students who succeeded were clearly involved in a deep learning experience. Many students met this challenge, but some became stressed as they had other assignments to carry out in the same time frame; there was a definite cost versus benefit relationship between the stress created and the learning achieved. To shift the balance towards increased learning, rather than increased stress, a simpler data set together with a more directed final assignment, with some scope for an open-ended approach, will be used in the future.\n\nFinally, the collaborative behaviour that students took part in represents the development of important transferable life skills. The students have been encouraged to use Google Apps for their third year project, but this time taking ownership themselves rather than leaving it with the lecturer.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for 'Trialling the use of Google Apps together with online marking to enhance collaborative learning and provide effective feedback', 10.5256/f1000research.6520.d50329\n\n\nConsent\n\nConsent has been obtained from BS111 students using an electronic voting system and only students giving consent had data used.", "appendix": "Author contributions\n\n\n\nNicola Slee (NS) and Marty Jacobs (MJ) conceived the study. NS designed the experiments.
MJ carried out the research and training of Google Drive collaborative tools and integration with FASER. MJ provided support to NS and BS111 students. NS contributed to the design of experiments and assignments. NS and MJ prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nThe authors declared no competing interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe would like to thank the students of BS111 (2014–15) for their participation and granting permission to use their data in this research note. Also, we would like to show our appreciation to Google for their provision of free tools and resources.\n\n\nSupplementary materials\n\nAim to apply minimum Undergraduate Standards of Presentation. See Undergraduate Handbook for Guidance: http://www.essex.ac.uk/bs/current_students/default.aspx (see page 27–32).\n\nProvide a written description of the habitat you visited and sampled including details of its location. Apply Minimum Undergraduate Standards of Presentation and appropriate details for term “Describe” (see Undergraduate Handbook for guidance). Ideal word count 500–750 words.\n\n[10% of overall mark Practical 2 and 3.]\n\nProvide photos of site and some of the species found. Apply standards required for figures (See guidance in Undergraduate Handbook, section on Minimum Standards of Presentation Diagram/Figures). Ideal number of Figures: 12–20.\n\n[10% of overall mark Practical 2 and 3.]\n\nBS111 Ecology Practical 2 and 3\n\nPart A Subgroup Data Sheet\n\nIt is important that you all access your own version of this via your Google Drive. Divide the tasks up, e.g.
for your subgroup, one member can put in raw data from the data sheets, one member can work on the calculations needed, and other members can check.\n\nOne member to check requirements of Minimum Undergraduate Standards for tables, and apply them.\n\nOne member can cross-check with descriptions and photos for details on species identification.\n\nFinal version to be downloaded as an Excel document from Google Drive. The final version will be uploaded to the BS111 Moodle page after the Part A deadline to be used for Part B (evaluation and analysis).\n\nThis will be worth 10% of your assignment.\n\nAdd units, etc, to comply with Minimum Undergraduate Standards.\n\nWorksheet for BS111 Practicals 2 & 3 Part B: Individual submission:\n\nAim to apply minimum Undergraduate Standards of Presentation. See Undergraduate Handbook for Guidance: http://www.essex.ac.uk/bs/current_students/default.aspx (see page 27–32).\n\nUsing all the group data that is accessible on Moodle BS111, i.e.\n\nthe written description of the 4 different habitats, photos of habitats and plant species of the four sites, and group data from quadrats and cores (including vegetation, soil and invertebrate analysis), carry out the following for this worksheet.\n\nPart B submission: (Stage 2 & 3)\n\nWork to be assessed (see Practical 2 & 3 Part B):\n\n1. Peer assessment & submitted with Part B. Use the Peer assessment form from the BS111 Practical 2 & 3 handbook to assess the other groups’ information. [10% of overall mark Pract 2 & 3.]\n\nInstructions:\n\n1. Access the group data which will be uploaded to Moodle in section for BS111 Practical 2 & 3. Using Peer assessment form peer assess the following:\n\na. Group descriptions for each habitat (excluding own group).\n\nb. Figures of the site and species (excluding own group).\n\nc.
Group data sheets (excluding own group).\n\nUse the FEEDBACK Checklist of Expectations and Common Errors to guide you in your peer assessment & Minimum Undergraduate Standards (p27–32 of Undergraduate Handbook).\n\nUse Guidance given in instructions and example excel workbook.\n\nWorksheet for BS111 Practical 2 & 3 Part B:\n\nIndividual work submission:\n\n1. BS111 Practical 2 & 3 Peer Assessment form.\n\n[10% of overall mark Pract 2 & 3, part of Part B mark.]\n\nFeedback sheet used by students to complete Peer assignment task and used by lecturer for Stage 3 assessment.\n\nPart 3: Mark out of 60\n\nPart 3: Assignment of Marks for Stage 2 Assignment: Individual Submission\n\nStage 3: Instructions for Individual Work to be assessed (see Practical 2 & 3 Part B):\n\nQuestions\n\n1. Individual evaluation of class data on the 4 habitats. Details\n\nYou will have access to 3 excel documents that will provide you with\n\n1. Information and examples of the types of analysis to carry out.\n\n2. Class data for the four habitat sites in one workbook where each habitat will have a separate page in the excel workbook. There is also the original downloaded sub-group data in another excel workbook.\n\n3. In the class data workbook, save with your surname BS111 2 & 3 Part B then Analyse to determine habitats species richness and variability in habitats following quadrat sampling & further laboratory analysis: Some guidelines of potential analysis.\n\na. For each habitat calculate the average, standard deviation and standard error for each parameter (numerical data); just a few are shown in the example document.\n\nb. Tabulate the average data into a new table (must comply with Minimum Undergraduate Standards).\n\nc. Decide how many decimal places (dp) are appropriate for numbers.\n\nd. Paste Table(s) and, if appropriate, Figures into a Word document, reformatting as necessary to comply with expected standards.\n\ne.
Analysis of data to find out if there are patterns and correlations in the habitats investigated. See information at the end of worksheet on how to carry out analysis.\n\nf. Your aim is to create a correlation table of the different parameters.\n\ng. Work out if there is a significant correlation between the different factors.\n\nh. For two significant correlations produce figures of the scatter plots of the data but add a different symbol or colour for the different habitat sites. (See examples on example sheet).\n\ni. Paste relevant tables and Figures of the correlation data into this document.\n\nSteps e to i became a bonus section.\n\nj. Write your interpretation of the habitat species richness and variability results in a text box (ideal word count <750 words).\n\n\nReferences\n\nBiggs J: Teaching for Quality Learning at University: What the student does. 2nd Ed. The Society for Research into Higher Education & Open University. 2003. Reference Source\n\nBrown G: How Students Learn. A supplement to the RoutledgeFalmer Key Guides for Effective Teaching in Higher Education Series. 2004. Reference Source\n\nDoolan MA: Setting up online collaborative learning groups using Wiki technology - a tutors’ guide. SEDA (Staff and educational development association) Educational Developments 8.2. 2007; 12–13.\n\nHosseini D: Using Google Docs as a tool for collaborative learning at the University of St Andrews. JISC Scotland, Showcase, 2014. Reference Source\n\nMello LV: The use of technology promoting student engagement and self-directed learning in a PGT cohort. (University of Liverpool). Researchers – Teachers – Learners We’re all in this together Society For Experimental Biology Education and Public Affairs Section Symposium EPA1.15. 2012. Reference Source\n\nPalintest: Palintest Soil Tests Manual Photometer 5000 System, Soil. 4. Soil pH Colour Match Method. ELE Operating Instructions.
1997.\n\nRust C: Presentation at University of Essex ‘Assessment: Assessment & Learning’, from Oxford Brookes University, ASKe (Assessment Standards Knowledge exchange), Centre for Excellence in Teaching and Learning. 2008. Reference Source\n\nSEB: “Researchers – Teachers – Learners” conference. 2012. Reference Source\n\nScott G: Learner autonomy as a means to promote engagement. (University of Hull) Researchers – Teachers – Learners We’re all in this together Society For Experimental Biology Education and Public Affairs Section Symposium EPA1.6. 2012. Reference Source\n\nScott G, Churchill H, Grassam M, et al.: Can the integration of field and classroom-based learning enhance writing? The life on our shore case study. Education 3–13: International Journal of Primary, Elementary and Early Years Education. 2012; 40(5): 547–560. Publisher Full Text\n\nSlee N, Jacobs M: Dataset 1 in: Trialling the use of Google Apps together with online marking to enhance collaborative learning and provide effective feedback. F1000Research. 2015. Data Source\n\nVoelkel S: Engagement and learning through formative e-assessment. (University of Liverpool) Researchers – Teachers – Learners We’re all in this together Society For Experimental Biology Education and Public Affairs Section Symposium EPA1.11. 2012. Reference Source
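The summary-statistics steps in the Part B worksheet above (per-habitat averages, standard deviations and standard errors, then correlations between parameters) can be sketched in a few lines of Python. This is an illustrative sketch only: the habitat measurements and parameter names below are invented, not taken from the class data sheets.

```python
import math
from statistics import mean, stdev

# Hypothetical per-quadrat measurements for one habitat (values invented
# for illustration; the real class data sheets define their own parameters).
soil_ph = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2]
species_richness = [12, 9, 14, 11, 10, 13]

def summarise(values):
    """Average, standard deviation and standard error, as in worksheet step a."""
    n = len(values)
    sd = stdev(values)  # sample standard deviation
    return mean(values), sd, sd / math.sqrt(n)

def pearson_r(x, y):
    """Correlation between two parameters, as in worksheet steps f-g."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

avg, sd, se = summarise(soil_ph)
r = pearson_r(soil_ph, species_richness)
print(f"pH: mean={avg:.2f} sd={sd:.2f} se={se:.2f}; r(pH, richness)={r:.2f}")
```

In practice students carried out these steps in Excel; the sketch just makes the arithmetic behind "average, standard deviation and standard error" and the correlation table explicit.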
[ { "id": "9316", "date": "10 Jul 2015", "name": "Kay Yeoman", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting paper which gives a lecturer who is new to learning technology an opportunity to try a relatively straight forward intervention. The paper describes the use of Google Web 2.0 technology tools, namely Google Docs and Google Slides to help with the ecology field work of first year students. I think it would be good to alter the use of the terminology, and refer to first year students as ‘Level 4’. I would also like to know the range of degree programmes which these students are enrolled on, and what the gender ratio is.  The first link in the introduction ‘Challenger Philosophy’ requires the use of a password to enter the content. I think it would be good in the introduction to say something about what the challenger philosophy is, as readers will not be able to access this information via the link. In the development of this idea, it would be good to know what had been done in previous years, and what promoted the change to this new approach, or was this a new module? The introduction gives a good rationale for this choice of technology intervention, e.g. enhanced group work, opportunity for peer assessment, and a greater opportunity for cohorts to get to know each other and to discuss data generated. Other educational literature is cited to support the rationale.
In the results, the authors stated that the students ‘mainly succeeded’ in setting up Google Accounts. This suggests that some did not? If they did not, what were the barriers? (This was actually subsequently looked at in the discussion). The authors indicate that after the group work the students showed a greater awareness of ecology techniques and the campus; how was this information gathered? Was it through informal discussion, or was it through comparison to previous experiences of work produced in this module? This needs some clarification. There were interesting reflective comments from the authors, but I would like to see some information on the student evaluation; it would be particularly interesting to see if any themes were emerging from student free text comments on the skills which they considered they had developed. If this module has run before, without this technology intervention, is there a difference in the marks obtained for the module?  The authors mentioned ‘ownership’ of the information several times during the paper. I think some referral to the wealth of literature surrounding research-led teaching where ownership is a key outcome of this type of learning, would enhance the paper.", "responses": [ { "c_id": "1525", "date": "10 Jan 2017", "name": "Nicky Slee", "role": "Author Response", "response": "We thank the reviewer for her helpful comments. We will address these, along with others, in a revision of our paper once the other reviews have come in. Nicky Slee" }, { "c_id": "2322", "date": "10 Jan 2017", "name": "Nicky Slee", "role": "Author Response", "response": "The authors would like to thank Kay Yeoman for reviewing the paper and giving comments to improve it. We have addressed the comments raised and incorporated them into the updated version of the paper. The terminology has been updated: first year students are referred to as Level 4 students. Details have been added about the degree programmes studied and the gender ratio.
An updated link has been provided for the Challenger Philosophy which provides the information needed. The rationale for the new approach and reasons for changing the previous practical have been given and information about the lecturer’s experience has been added. Greater awareness of ecology and campus was based on work submitted rather than from informal discussions with students or comparison to other years. Literature has been added about student centered learning and proposed benefits of the approach. Student evaluation was not included in this work but would be an important addition in the future." } ] }, { "id": "9314", "date": "20 Jul 2015", "name": "Melanie Link-Perez", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis article describes a multiple-week ecological exercise that has both field and laboratory components and combines opportunities for collaborative learning, peer assessment, and individual work, with instructor feedback included at several stages. An important component of the three-stage assignment is the pivotal role of technology, in the form of free Google online tools, in order to facilitate the collaboration between students and the delivery of feedback from the instructor. The introduction does a nice job of providing context and articulating the rationale behind the design of the method described.
The assignment is well-conceived, and there is a natural progression from group work and collaboration toward individual demonstration of learning outcomes. Especially valuable is the opportunity for the students to engage in assessment of their peers, a high-impact activity that the authors noted led to higher achievement in the final stage of the assignment for those students who fully participated in the peer-review. This method should be easy to implement for instructors willing to do so. General comments to consider during revision:Change “peer assignment” to “peer assessment”; better reflects the nature of the activity, less confusing. Make the abstract a bit more explicit; for example, in the third paragraph it states that the assignment “was a success.” Based on? How so? The answers are there, but the reader has to wait for them and is not certain if she has correctly identified them. Go all the way when making a statement; don’t lead the reader part of the way there, and expect him to complete the thought the way you intended. Be explicit. This comment goes for the article in general. In the introduction, the authors refer to the “Challenger Philosophy”; the hyperlink requires additional log-in information and is not generally accessible. Please summarize the key points and remove the hyperlink. The authors repeatedly reference the “Minimum Undergraduate Standards,” which appear on pages 27-32 of a 79-page PDF that is hosted elsewhere online. These standards should at a minimum be summarized (so that the reader doesn’t have to find them in the aforementioned document in order to know what type of criteria are included). Since the PDF that contains them is likely to be updated regularly the by School of Biological Sciences at University of Essex, therefore causing the page reference to change, the authors may wish to create a static, free-standing document containing this information. 
Under Methods, Stage 1: What kind of information was disseminated via Moodle to prepare students for the habitat visits? What background information did the students have? I was often distracted by the lack of information provided about the field and laboratory portion of this assignment (although I could clearly follow the collaborative and individual written work). What size quadrats did the students use? How did they go about species identification? When the soil samples were collected, the authors state that “the vegetation was put in one labeled sample bag and the soil in a second.” Was that the upper, vegetative layer of the soil sample, and for what purpose? How were the insects collected? By Berlese funnel from the soil samples? Were they just the invertebrates that were observed while the students were in the field? Add more information about Instructor Feedback in the Methods section; what kind of feedback was provided at each stage? Were students assessed primarily based upon the presentation criteria (following guidelines regarding Tables and Figures, etc.) or were they also assessed to some degree on accuracy/correctness of information presented (for example, in Figure 2, the student notes state that Plantago major is a monocotyledon, which is erroneous; it is a Eudicot)? Based on the article title, a reader will expect more information about this aspect of the research. I like the examples of student generated work. It might be nice to include a few more examples under “Supplemental Materials” for those who would like to view them. I was a bit surprised by the amount of students who didn’t participate in stage 2 (26%). Any suggestions for how to resolve this issue? Why the low “buy in” by these students? I think it would be nice to include a little more information about the lecturer’s experience. Could some of this be added to the Results and/or Discussion? The final paragraph of the paper ends oddly. 
It seems to reference a future event (having students use Google Apps for their third year project) but talks about it in the past tense (“but this time they took ownership”); consider revising. It is not a good idea to end an article with a parenthetical comment (weakens it). Quick fixes: Data are plural (see 5th paragraph under Results). Second to last paragraph under Results, Stage 1: “The large number of data sheets generated (21) meant…” I think the number in parentheses should be 28. Overall, a nice teaching module that I would encourage instructors to try; I can envision several standard laboratory or field exercises that could be modified and expanded to include the collaborative group work, peer assessment, and individual analyses presented here. It is a really nice model. The article itself can be strengthened by providing more details, where relevant, and by being more explicit.", "responses": [ { "c_id": "1526", "date": "10 Jan 2017", "name": "Nicky Slee", "role": "Author Response", "response": "We thank the reviewer for her helpful comments. We will address these, along with others, in a revision of our paper once the other reviews have come in. Nicky Slee" }, { "c_id": "2321", "date": "10 Jan 2017", "name": "Nicky Slee", "role": "Author Response", "response": "The authors would like to thank Melanie Link-Perez for her review and helpful comments to improve the paper. We have addressed the comments raised and incorporated them into the updated version of the paper, e.g. terminology has been changed from peer assignment to peer assessment. The hyperlinks have been updated (Challenger Philosophy) or changed, e.g. Minimum Undergraduate Standards is now in Supplementary Materials. Information has been added about the Lecturer's background and experience. The methods section has been expanded to give more details about the ecological sampling. Information on feedback has been included.
In the new version, numerical data has been added to make information more explicit, together with definitions of success criteria (performance related to average for stage 1 and success related to degree class marks). The abstract and paper have been updated to include specific details on success at each stage, looking at the different success criteria that have been defined; statistical analysis has been carried out and reported; the results are the final results assigned to the students for this work. An example of one student's group work, a Google Slides document, is added in the supplementary materials." } ] } ]
1
https://f1000research.com/articles/4-177
https://f1000research.com/articles/5-2533/v1
18 Oct 16
{ "type": "Opinion Article", "title": "Puzzles in modern biology. III. Two kinds of causality in age-related disease", "authors": [ "Steven A. Frank" ], "abstract": "The two primary causal dimensions of age-related disease are rate and function. Change in rate of disease development shifts the age of onset. Change in physiological function provides necessary steps in disease progression. A causal factor may alter the rate of physiological change, but that causal factor itself may have no direct physiological role. Alternatively, a causal factor may provide a necessary physiological function, but that causal factor itself may not alter the rate of disease onset. The rate-function duality provides the basis for solving puzzles of age-related disease. Causal factors of cancer illustrate the duality between rate processes of discovery, such as somatic mutation, and necessary physiological functions, such as invasive penetration across tissue barriers. Examples from cancer suggest general principles of age-related disease.", "keywords": [ "cancer", "neurodegeneration", "genetics", "epidemiology" ], "content": "Introduction\n\nIf you inherit certain mutations of the p53 gene, you have an increased risk of cancer1. If you do not inherit such mutations, but nonetheless develop cancer, your tumor likely has a somatically acquired mutation in the apoptotic pathways associated with p532.\n\nIn each case, p53-associated mutation has a causal effect on cancer.\n\nThe inherited mutation increases the rate of cancer development and shifts disease onset to earlier ages. Shift in age of onset defines a cause of cancer.\n\nThe physiological change, breakdown of apoptosis, provides a necessary function in cancer development. Physiological necessity defines a cause of cancer.\n\n\nDuality of rate and function\n\nA factor that shifts the age of onset may not be important physiologically.\n\nFor example, a rise in somatic mutation may increase the rate of breakdown in apoptosis. 
Rapid breakdown in apoptosis shifts the age of onset. In this case, increased mutation directly changes the rate of onset but does not itself directly change physiological function.\n\nA factor that changes physiology may not shift the age of onset.\n\nFor example, tumors often adapt their metabolism to hypoxic conditions3,4. The necessary physiological changes may arise relatively rapidly in response to hypoxia. The functional changes are a necessary cause of tumor development. However, rapidly acquired changes do not causally influence the rate of cancer development or the age of onset.\n\nThe duality of rate and function recurs. Each causal factor must be evaluated simultaneously in two dimensions. How does a causal factor alter the rate of tumor development? How does a causal factor alter the physiological function of the tumor?\n\n\nIdentifying causal factors\n\nWhat sort of evidence could we collect to show that a factor plays a causal role in cancer?\n\nShift in age of onset is often studied in experiments5. Start with a particular mouse genotype. Create a knockout variant that lacks expression of a particular gene. Compare the age of tumor onset between the initial and knockout types. If the incidence curve in the knockout shifts to earlier ages, then loss of the target gene is a potential cause of cancer.\n\nIn general, we can relate the change in a potential causal factor to the change in the rate of cancer development and age of onset.\n\nAlternatively, studies may focus on physiological function. Experimentally, one may reverse a physiological change and measure the abrogation of a cancerous state. Success points to a physiologically necessary function.\n\nIn general, we can relate the change in a potential causal factor to the change in the physiological function of a tumor.\n\nLarge datasets allow one to correlate changes with cancer. A strong correlation suggests a candidate cause.
However, the correlation may identify a factor that either increases the rate of cancer development or has a necessary physiological function in tumors.\n\n\nSolving different puzzles\n\nFull analysis requires simultaneous study of rate and function. The relative roles of the two causal dimensions vary with particular puzzles.\n\nTreatment requires a dual focus on interfering with cancer’s physiological function and on altering the rate of escape from treatment. One typically begins by finding a way to block an essential physiological function. An initially successful block loses value in proportion to the rate at which the tumor escapes control.\n\nPrevention depends only on slowing the rate of onset. Physiologically important functions may provide targets for slowing onset. However, some processes may significantly slow the rate of onset yet be physiologically unimportant. For example, the rate of onset may be increased by wound healing associated with a temporary increase in the rate of cell division, by increased epigenetic instability, or by increased mutagenesis. Reduction of these rate-enhancing processes aids prevention.\n\nEarly detection may focus on direct evidence of functional change. Small precancerous tumors associate with cancerous changes in physiology. Elevated levels of specific markers associate with cancerous physiological changes. Alternatively, one may focus on indicators associated with rate processes that shift the age of onset. Such indicators suggest elevated risk and the need to screen more carefully for direct signs of physiological change.\n\nBasic understanding of onset ultimately depends only on rate. Each causal factor must be evaluated within the complex interacting ensemble of processes that determine the overall rate of onset5. One must study how change in a causal factor shifts the age of onset within a particular background of other rate processes.
Although only rate matters, function provides clues about which factors may influence rate.\n\nBasic understanding of physiology depends only on function. An important function does not necessarily influence rate.\n\n\nRate is the search, function is the find\n\nIn general, the relation between rate and function is similar to the relation between the process of discovery and the actual discovery itself6. In tumor evolution, the duality becomes the relation between the processes that change physiological function and the physiological function itself. For example, somatic mutation and natural selection between cellular lineages are processes that change physiological function. Acquired ability to invade across tissue barriers is a common physiological function of tumors.\n\n\nAge-related disease\n\nAge-related disease expresses the same duality of rate and function. Factors that influence rate alter the timing of disease onset. Factors that influence physiological function may be important targets for treatment, prevention and early detection.\n\nBasic understanding always demands a clear separation of rate and function. Only from that two-dimensional perspective can one solve particular puzzles. The solutions inevitably express the interactions of rate and function.", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Science Foundation grant DEB–1251035 supports my research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nKamihara J, Rana HQ, Garber JE: Germline TP53 mutations and the changing landscape of Li-Fraumeni syndrome. Hum Mutat. 2014; 35(6): 654–662. PubMed Abstract | Publisher Full Text\n\nVogelstein B, Lane D, Levine AJ: Surfing the p53 network. Nature. 2000; 408(6810): 307–310. 
PubMed Abstract | Publisher Full Text\n\nSemenza GL: Hypoxia-inducible factors: mediators of cancer progression and targets for cancer therapy. Trends Pharmacol Sci. 2012; 33(4): 207–214. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGilkes DM, Semenza GL, Wirtz D: Hypoxia and the extracellular matrix: drivers of tumour metastasis. Nat Rev Cancer. 2014; 14(6): 430–439. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrank SA: Dynamics of Cancer: Incidence, Inheritance, and Evolution. Princeton (NJ): Princeton University Press; 2007. PubMed Abstract\n\nFrank SA: Puzzles in modern biology. II. Language, cancer and the recursive processes of evolutionary innovation [version 1; referees: 1 approved]. F1000Res. 2016; 5: 2289. Publisher Full Text" }
[ { "id": "17104", "date": "21 Nov 2016", "name": "Anya Plutynski", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe title is somewhat misleading. Are rate and functional changes genuinely two different \"kinds\" of causality? How exactly are kinds of causality distinguished?  As a general rule, I'm not a fan of multiplying kinds (whether of causation, or other entities, processes, etc.) without good reason. While the author is right to point out that one can (and should) distinguish how a disruption affects rate of onset versus how disruptions affect specific functions, I'm not entirely sure that this warrants the claim that these are two distinct kinds of causation.\nAlso, regarding the general thesis: surely it's true that changes to rate of onset can involve compromises in function (in some sense), and compromises in function can also change rate of onset? Suppose a gene (e.g., BRCA) is associated with genetic stability or appropriate chromosomal division during mitosis. Mutations such a gene can lead to earlier onset of cancer, but surely also mutations to such genes compromise a function (namely, cell division). Surely the two are not altogether independent?\nOther than this major worry, most of my concerns have to do with clarity of expression:\nSome of the explication of key ideas is all too brief, or the writing is a bit unclear, or difficult to understand.  
E.g.,“Causal factors of cancer illustrate the duality between rate processes of discovery” - I’m not sure what “rate processes of discovery” means… does the author mean rates of incidence? The causes of rates of actual discovery of a tumor, via screening, or perhaps diagnoses on the basis of symptoms surely include but are not limited to biological causes (e.g., the skill of pathologists, the effectiveness of our screening tools, etc.).  i.e., \"rate processes of discovery\" is potentially misleading.\n\nAlso the claim that there is a “duality between rate… and necessarily physiological function\" is somewhat difficult to interpret….  I think that the author simply means that these two outcomes (rate of onset and functional disruption) are different, and their causes are different as well. Moreover, I think that the author simply means that we ought to be clear about which outcome interests us, and not assume that whenever we affect function, we also affect rate of onset, and vice versa? Is this a common conceptual confusion in the literature? If so, an example or two as illustration would motivate the reader to see this as a serious concern worth policing in future.\n\nAlso, the claim that X or Y functional change is a “necessary cause of tumor development” is somewhat misleading. Few very specific functional changes are \"necessary\" for cancer, though some may be more important than others. To be sure, some \"generic\" functional changes are necessary for cancer, but I don't think that the author means to suggest that ONLY IF this particular function were disrupted in this particular way, would cancer eventuate. Many functions are disrupted in a variety of different ways – the same pathway may be compromised in quite different manners.\n\nI’m also skeptical of claims about the notion of “physiologically necessary function.”  The author claims, “Experimentally, one may reverse a physiological change and measure the abrogation of a cancerous state. 
Success points to a physiologically necessary function.” Many functions in biological systems are robust exactly because there are duplicated gene or blocks of genes that play similar functions.  E.g., redundancy is a common feature of biological systems. So, while we may think of a particular realization of function as \"necessary,\" it's of course possible that when such a function is compromised, another (similar) mechanism could play a similar functional role.", "responses": [ { "c_id": "2414", "date": "10 Jan 2017", "name": "Steven Frank", "role": "Author Response F1000Research Advisory Board Member", "response": "I appreciate Anya Plutynski's thoughtful comments. I understand her critical perspective and agree with many of her specific points. However, in my comments below, I suggest that we may be focusing on different aspects of the problem. Both perspectives are valuable. In that regard, it is very helpful to have this exchange included as part of the published version of the article.  Italics quote from Plutynski’s review. Are rate and functional changes genuinely two different \"kinds\" of causality? How exactly are kinds of causality distinguished?  As a general rule, I'm not a fan of multiplying kinds (whether of causation, or other entities, processes, etc.) without good reason. While the author is right to point out that one can (and should) distinguish how a disruption affects rate of onset versus how disruptions affect specific functions, I'm not entirely sure that this warrants the claim that these are two distinct kinds of causation. I appreciate that philosophers have refined understanding of notions such as “causality” and “function.” If I had written for a philosophy journal, I would have taken a different approach or, more likely, I would have collaborated with a philosopher to help in getting things right.  In my view, I was simply describing what I have repeatedly encountered in the biological literature. 
Biologists search for what they think of as the causal basis of disease, without reflecting deeply on what they mean. I addressed only one specific aspect of the difficulties that arise from not giving sufficient thought to the motivating goal. I believe that difficulty hinders progress in biological research.  In particular, Plutynski has correctly identified my goal as distinguishing between how a disruption affects rate of onset versus how disruptions affect specific functions. I had in mind biologically trained readers. I chose a language that I believe will communicate most effectively with those readers. I welcome this exchange as an addendum that helps to cross the language divide between biology and philosophy, a very useful step for both sides. Also, regarding the general thesis: surely it's true that changes to rate of onset can involve compromises in function (in some sense), and compromises in function can also change rate of onset? Suppose a gene (e.g., BRCA) is associated with genetic stability or appropriate chromosomal division during mitosis. Mutations such a gene can lead to earlier onset of cancer, but surely also mutations to such genes compromise a function (namely, cell division). Surely the two are not altogether independent? I agree. The first sentence of my abstract is: “The two primary causal dimensions of age-related disease are rate and function.” A two-dimensional space does not imply exclusivity. Rather, it provides a way to locate a factor simultaneously with respect to the two aspects. Later in the article I say: “The duality of rate and function recur. Each causal factor must be evaluated simultaneously in two dimensions. How does a causal factor alter the rate of tumor development? How does a causal factor alter the physiological function of the tumor?” Some of the explication of key ideas is all too brief, or the writing is a bit unclear, or difficult to understand.  
E.g.,“Causal factors of cancer illustrate the duality between rate processes of discovery” - I’m not sure what “rate processes of discovery” means… does the author mean rates of incidence? The causes of rates of actual discovery of a tumor, via screening, or perhaps diagnoses on the basis of symptoms surely include but are not limited to biological causes (e.g., the skill of pathologists, the effectiveness of our screening tools, etc.).  i.e., \"rate processes of discovery\" is potentially misleading. F1000Research has alternative word limits that set the kind of article and the associated open access fees. I wrote this article to fit within the limit of 1000 words, leading to brevity. To provide examples, I coupled this article with a following article in this series, which includes discussion of neurodegeneration, cancer and heart disease (http://dx.doi.org/10.12688/f1000research.9790.1). In my revision, I have added a final section “Prospect” with a pointer to the following article. My article also cited my extensive summary of cancer in my 2007 book (ref. 5). Also the claim that there is a “duality between rate… and necessarily physiological function\" is somewhat difficult to interpret….  I think that the author simply means that these two outcomes (rate of onset and functional disruption) are different, and their causes are different as well. Moreover, I think that the author simply means that we ought to be clear about which outcome interests us, and not assume that whenever we affect function, we also affect rate of onset, and vice versa?  Correct. Is this a common conceptual confusion in the literature?  Yes. If so, an example or two as illustration would motivate the reader to see this as a serious concern worth policing in future. Because of the length restriction and the following article noted above, this article is designed only to sketch the problem in the briefest manner. 
The following article (see above) notes some applications, for example, in the section Candidate Mechanisms.  Also, the claim that X or Y functional change is a “necessary cause of tumor development” is somewhat misleading. Few very specific functional changes are \"necessary\" for cancer, though some may be more important than others. To be sure, some \"generic\" functional changes are necessary for cancer, but I don't think that the author means to suggest that ONLY IF this particular function were disrupted in this particular way, would cancer eventuate. Many functions are disrupted in a variety of different ways – the same pathway may be compromised in quite different manners. I mostly agree. I was trying to express what I see as a common mode of expression in the biological literature. For example, most or perhaps nearly all colorectal tumors seem to have acquired abrogation of apoptosis. My interpretation of the biological literature is that, in this case, abrogation of apoptosis is indeed thought of as a necessary cause of tumor development. Of course, all biologists know that words such as “necessary” or “always” are always wrong, because the fundamental lesson of biology is variability. Nonetheless, I think there remains an implicit sense of thinking this way, and thus I wanted to reflect that thought and redirect it to the duality of rate and function, rather than take on refinements of expression. I’m also skeptical of claims about the notion of “physiologically necessary function.”  The author claims, “Experimentally, one may reverse a physiological change and measure the abrogation of a cancerous state. Success points to a physiologically necessary function.” Many functions in biological systems are robust exactly because there are duplicated gene or blocks of genes that play similar functions.  E.g., redundancy is a common feature of biological systems. 
So, while we may think of a particular realization of function as \"necessary\," it's of course possible that when such a function is compromised, another (similar) mechanism could play a similar functional role. I said if A leads to B, then success points to a physiologically necessary function, in which A is “reverse a physiological change” and B is “abrogation of a cancerous state.” I have changed my wording in my revision to “success points to a candidate for a physiologically necessary function.” I had meant my original “points to” in exactly this way, in the sense of providing a clue." } ] }, { "id": "18447", "date": "03 Jan 2017", "name": "Marta Bertolaso", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nReading the title of this paper, I have at least two questions to pose to the author: first, what are ‘age-related diseases’ and how does cancer qualify as such; second, what is the puzzle here. As for the second question, I don’t find in the paper any puzzle. Instead, what I find is a pretty clear exposition of two different approaches to finding and evaluating causal factors. Four arguments and a final suggestion follow.\n\nAlthough philosophers would be more demanding in using terms such as “function” and “kinds of causality”, I think Frank’s distinction between rate and function can be useful and bring clarity in the following domains: setting up experiments, interpreting results, and making mathematical models. 
In the paper, the section entitled “Identifying causal factors” is the most illustrative under this respect, the rest of the paper lacking a bit of context for the reader to understand the importance of what is being argued.\n\nAbout age-related diseases, I would not think of cancer primarily as an age-related disease, and for good reasons I think, although I agree that there are important theoretical links. Cancer is one of those diseases that include alterations of the dynamic stability of the organism. Age is not enough, not essential, although, of course, it increases the risk. Therefore, the role that the emphasis on cancer as an age-related disease plays in this paper should be clarified. Rather, cancer may be more properly defined as a life history related disease (indeed, it might help us in reconceptualising aging itself). I think this is a key feature that Frank captures in his distinction between “the physiological function itself” and “the processes that change physiological function”. These latter processes have to do with the dynamic stability of the organism. If the author wants to stress this point, in my view, it would be more useful to talk about “the modified processes that don’t stabilize function anymore”, therefore letting new functions emerge. This way of expressing would be no slight difference, since it would adopt an organism centered perspective as opposed to a cancer centered one (centered on genetic mutations in individual cells and cell populations). The author’s own distinction, reformulated as a distinction between ‘function change’ and ‘process change’, would then be more sounding and would express all its usefulness to understand scientific practice.\n\nThe author seems to be adopting a strict selectionist perspective (cancer is an evolutionary process of discovery). It is in this perspective that “rate” makes sense: it is the rate of an ongoing process of discovery of particular biological functions that are necessary for cancer. 
The cancer centered selectionist perspective imbues the language of prevention, leading to paradoxical statements such as saying that wound healing, cell division, epigenetic instability, or increased mutagenesis are “physiologically unimportant”. Yet, several available interpretations and models of cancer, both in the lab and in the clinic, seem to suggest a different preventive and therapeutic approach, one that empowers the organism and helps it to find new stable states. In a recent book (Bertolaso 2016) I have reviewed some of this territory and argued for a more articulated perspective that recomposes some paradoxes.\n\nFrank’s reductionist approach seems useful for the methodological purposes mentioned above, but it might be misleading if adopted as a general approach to carcinogenesis and cancer onset. “Basic understanding of onset”, as it is expressed here, seems to be a rather impossible goal: we should find all the factors that influence rate (also by using factors functions as a clue) and then study their influence “within a particular background of other rate processes” and “within the complex interacting ensemble of processes that determine the overall rate of onset”. This seems to be the kind of approach of ‘getting complexity by aggregation’ that has been declared as desperate by Weinberg (2014). I would make clearer that the selectionist perspective (with the associated ‘rate’ talk) is one possible useful reduction of a disease that is probably best seen as affecting a tissue, an organ or an organism and their dynamic stability and stabilizing functionalities.\n\nIn summary, I would suggest the author to reframe and clarify the structure of the paper so as to make clear its scope: i.e., proposing a clear distinction (not a puzzle) that must be taken into account in modeling and experiment. I would add a little bit of background to emphasize this importance. 
Comments in n.2 can be of help although the wider perspective I am suggesting is not strictly needed if the methodological approach and the explanatory import of the experimental context of this paper is adequately narrowed down.", "responses": [ { "c_id": "2416", "date": "10 Jan 2017", "name": "Steven Frank", "role": "Author Response F1000Research Advisory Board Member", "response": "I thank Marta Bertolaso for her many thoughtful comments that extend the scope of the discussion. This exchange is included as part of the final publication of my article, so I confine most of my response to these comments. In a few cases, I note revisions in Version 2 that address some issues.  Italics quote from Bertolaso’s review. Reading the title of this paper, I have at least two questions to pose to the author: first, what are ‘age-related diseases’ and how does cancer qualify as such; second, what is the puzzle here.  “Age-related diseases” are diseases that change with age, implicitly, diseases that change with age in some regular manner. Colorectal cancer incidence increases with age. Retinoblastoma incidence increases very early in life, then the incidence declines to nearly zero after age five or so. Both diseases are related to age. With regard to the puzzle here, the abstract states: “The rate-function duality provides the basis for solving puzzles of age-related disease.” Thus, I agree that this article does not pose a puzzle, but rather provides some tools that can help in solving puzzles. The article contains a section with the title “Solving different puzzles” that emphasizes this goal. In my revision, I have also added a final section “Prospect” to further emphasize that point.  In the paper, the section entitled “Identifying causal factors” is the most illustrative under this respect, the rest of the paper lacking a bit of context for the reader to understand the importance of what is being argued. 
F1000Research has alternative word limits that set the kind of article and the associated open access fees. I wrote this article to fit within the limit of 1000 words, leading to brevity. To provide examples, I coupled this article with a following article in this series, which includes discussion of neurodegeneration, cancer and heart disease (http://dx.doi.org/10.12688/f1000research.9790.1). In my revision, I have added a final section “Prospect” with a pointer to the following article. My article also cited my extensive summary of cancer in my 2007 book (ref. 5). About age-related diseases, I would not think of cancer primarily as an age-related disease, and for good reasons I think, although I agree that there are important theoretical links. Cancer is one of those diseases that include alterations of the dynamic stability of the organism. Age is not enough, not essential, although, of course, it increases the risk. … This comment and the following one raise a stimulating and nuanced view of dynamic stability and disease (see the full text of comments 2 and 3 in Bertolaso’s original review). I am not going to change course and take up that view, because it is not the way in which I was thinking about the problem. I do see the value and appreciate being introduced to this alternative. I think there is an opportunity, in the future, to consider the relative merits of Bertolaso’s framework in relation to my views. Perhaps, over time, there will be a merging of the best aspects of the alternative perspectives into something that will help us to understand these problems more clearly.    Frank’s reductionist approach seems useful for the methodological purposes mentioned above, but it might be misleading if adopted as a general approach to carcinogenesis and cancer onset. 
“Basic understanding of onset”, as it is expressed here, seems to be a rather impossible goal: we should find all the factors that influence rate (also by using factors functions as a clue) and then study their influence “within a particular background of other rate processes” and “within the complex interacting ensemble of processes that determine the overall rate of onset”. This seems to be the kind of approach of ‘getting complexity by aggregation’ that has been declared as desperate by Weinberg (2014).  I never suggested that one could find “all the factors that influence rate” or that one should try to do so. My book Dynamics of Cancer (ref 5) discusses at length how to go about studying individual processes that influence rate within the background of many other often unidentified processes that must also be acting. The essential approach combines two aspects. First, one must have a hypothesis about how a particular rate process alters progression, within a conceptual or theoretical framework for disease progression. Second, one must test that hypothesis by perturbing the particular process and observing the change in the age-incidence curve. If one can consistently predict the pattern of change in age-incidence curves with respect to perturbation of hypothesized rate processes, then one is moving in the right direction. This approach is the opposite of “getting complexity by aggregation.” It is instead a method to isolate putative causes in a testable manner. In this way, one can parse apparent complexity by a practical method of simplification, without oversimplifying.    In summary, I would suggest the author to reframe and clarify the structure of the paper so as to make clear its scope… My revised Version 2 adds a brief section “Prospect” to clarify the intended scope of the article, a small step in the suggested direction but not a complete reframe of the structure." } ] } ]
1
https://f1000research.com/articles/5-2533
https://f1000research.com/articles/6-29/v1
10 Jan 17
{ "type": "Research Article", "title": "Characterization of BRCA1/2 mutations in patients with family history of breast cancer in Armenia", "authors": [ "Sofi Atshemyan", "Andranik Chavushyan", "Nerses Berberian", "Arthur Sahakyan", "Roksana Zakharyan", "Arsen Arakelyan" ], "abstract": "Background. Breast cancer is one of the most common cancers in women worldwide. The germline mutations of the BRCA1 and BRCA2 genes are the most significant and well characterized genetic risk factors for hereditary breast cancer. Intensive research in the last decades has demonstrated that the incidence of mutations varies widely among different populations. In this study we attempted to perform a pilot study for identification and characterization of mutations in BRCA1 and BRCA2 genes among Armenian patients with family history of breast cancer and their healthy relatives. Methods. We performed targeted exome sequencing for BRCA1 and BRCA2 genes in 6 patients and their healthy relatives. After alignment of short reads to the reference genome, germline single nucleotide variation and indel discovery was performed using GATK software. Functional implications of identified variants were assessed using ENSEMBL Variant Effect Predictor tool. Results. In total, 39 single nucleotide variations and 4 indels were identified, from which 15 SNPs and 3 indels were novel. No known pathogenic mutations were identified, but 2 SNPs causing missense amino acid mutations had significantly increased frequencies in the study group compared to the 1000 Genome populations. Conclusions. 
Our results demonstrate the importance of screening of BRCA1 and BRCA2 gene variants in the Armenian population in order to identify specifics of mutation spectrum and frequencies and enable accurate risk assessment of hereditary breast cancers.", "keywords": [ "breast cancer", "BRCA1", "BRCA2", "mutation screening", "targeted exome sequencing" ], "content": "Introduction\n\nBreast cancer (BC) is one of the most common cancers in females worldwide1 and particularly in Armenia2. Beyond its high prevalence in developed countries, it has also become highly prevalent in developing countries (50% of all cancer cases), where it is characterized by a high mortality rate (58% of all breast cancer-related deaths)3.\n\nThe germline mutations of the BRCA14 and BRCA25 genes are the most significant and well characterized genetic risk factors for hereditary breast cancer, which constitutes about 5–10% of all cases6. Inherited mutations in BRCA1 and BRCA2 genes account for 30–50% of all known mutations associated with this disease7,8. Women who carry BRCA1 mutations are particularly susceptible to the development of breast cancer before the age of 35–40 with a probability rate of 45%–60%, whereas women who inherit a BRCA2 mutation have a 25%–40% risk of developing breast cancer7,8. The association of BRCA1/BRCA2 gene mutations with breast cancer was first well described in Ashkenazi Jews8–11. Intensive research in the last decades has demonstrated that the incidence of mutations in high-risk families varies widely among different populations6. For example, the mutations in BRCA1 and BRCA2 were each estimated to account for 45–50% of families with multiple cases of breast and ovarian cancer in the UK and USA3,12, whereas mutation prevalence among African–Americans with family breast and ovarian cancer history was 16.3% for BRCA1 and 11.3–14.4% for BRCA213,14, which is significantly lower compared to Caucasian populations. 
Identification of the BRCA1/BRCA2 mutations in different populations and ethnic groups is an important endeavor, which enables geneticists and oncologists to make more specific choices in genetic testing of members of high-risk families15–17.\n\nHere we have attempted to perform a pilot study for identification and characterization of mutations in BRCA1 and BRCA2 genes among Armenian patients with family history of breast cancer and their healthy relatives.\n\n\nMaterials and methods\n\nSix patients with confirmed family history of breast cancer (at least two cases in a family) and their first-degree healthy relatives were recruited in this study (except for the BC10 patient, see Table 1). Patients were admitted to the National Center of Oncology MH RA and ARTMED Medical Rehabilitation CJSC. Written informed consent forms were obtained from all the study participants. This study was approved by the Institutional Review Board (IRB00004079) of the Institute of Molecular Biology NAS RA.\n\nBlood samples were collected in EDTA-containing tubes and genomic DNA was extracted according to the protocol described elsewhere18. The A260/A280 ratio, measured for evaluation of quality and quantity of extracted DNA, was in the range of 1.8–2.\n\nBRCA1 and BRCA2 exome sequencing was performed by an external service provider (Admera Health LLC, South Plainfield, NJ, USA) using the proprietary breast cancer panel iBRCA™, which detects genetic variations in all exons of BRCA1 and BRCA2. According to the service provider’s description, this panel utilizes the targeted amplicon (166 amplicons) sequencing method, based on the Seq-Ready™ TE Panels protocol (WaferGen Biosystems Inc, Fremont, CA, USA). Reagent cocktails and samples were aliquoted into a 384-well sample source plate. The source plate and BRCA1/2 SmartChip™ were pre-dispensed with Seq-Ready™ TE BRCA1/2 Primers and were placed into the SmartChip™ Multisample Nanodispenser. 
The SmartChip™ was then amplified with the Bio-Rad T100 SmartChip™ TE Cycler. PCR product was then purified with Agencourt AMPure XP (Beckman Coulter, Inc.), according to the manufacturer’s instructions. Samples were then quantified with a Qubit® 2.0 Fluorometer (Thermo Fisher Scientific, Inc.) and quality analyzed with a Tapestation (Agilent Technologies). Sequencing was performed with the Illumina MiSeq platform on a single lane. Raw reads for each sequenced sample were stored in separate fastq files. DNA samples were shipped on ice to avoid degradation and passed internal quality checks before processing.\n\nFor each sample, raw sequences were aligned to the human reference genome sequence (hg19, see Public genome data section) using Burrows-Wheeler Aligner (BWA) version 0.7.10 with default parameters. The resulting bam files were used in downstream variant discovery analysis.\n\nVariant discovery was performed using Genome Analysis Tool Kit (GATK) version 3.6 according to recommended workflows for germline single nucleotide variation (SNV) and indel discovery in whole genome and exome sequencing data19. Base quality score recalibration, indel realignment and mate pair fixing were performed on bam files. Variant calling was performed without duplicate read removal. SNV and indel discovery and genotyping were performed simultaneously across all samples using standard hard filtering parameters19.\n\nFor the alignment, we have used the human reference genome sequence (GRCh37/hg19) from the UCSC (University of California, Santa Cruz) database (http://genome.ucsc.edu). Known SNPs (single nucleotide polymorphisms) were annotated using the UCSC database (single nucleotide polymorphism database, dbSNP version 135). 1000 Genomes phase 1 genotype data was used for human genetic variation filtration (ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/). 
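The alignment and joint variant-calling steps described above can be sketched as command builders. This is a hypothetical sketch, not the authors' actual pipeline: the flags follow BWA 0.7.x and GATK 3.x command-line conventions, and all file names and sample IDs are invented for illustration.

```python
# Hypothetical sketch of the BWA 0.7.x alignment and GATK 3.x joint-calling
# steps described in the Methods; file names and sample IDs are invented.

def bwa_mem_cmd(ref_fasta, fq1, fq2, sample):
    """Paired-end alignment; the @RG read group is required later by GATK."""
    # BWA itself converts the literal "\t" sequences into tab characters.
    read_group = f"@RG\\tID:{sample}\\tSM:{sample}\\tPL:ILLUMINA"
    return ["bwa", "mem", "-R", read_group, ref_fasta, fq1, fq2]

def haplotype_caller_cmd(ref_fasta, bam_files, out_vcf):
    """Joint germline SNV/indel calling across all samples (GATK 3.x -T syntax)."""
    cmd = ["java", "-jar", "GenomeAnalysisTK.jar",
           "-T", "HaplotypeCaller", "-R", ref_fasta]
    for bam in bam_files:
        cmd += ["-I", bam]  # one -I per sample enables simultaneous genotyping
    return cmd + ["-o", out_vcf]
```

In practice each command list would be handed to a process runner (e.g. `subprocess.run`), with the BWA output sorted and indexed before calling.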
Allelic frequencies of detected variants were compared against 1000 Genomes phase 3 genotypes, as well as with the genome-wide association study (GWAS) data from 54 healthy Armenian females that were genotyped in the framework of a population genetics study by Haber et al.20 (ftp://ngs.sanger.ac.uk/scratch/project/team19/Armenian). Data on clinically significant BRCA1 and BRCA2 variants were obtained from the Breast Cancer Information Core database maintained by the National Human Genome Research Institute (https://research.nhgri.nih.gov/bic/).\n\nComparison of allele frequency distributions in the study group with 1000 Genomes and healthy Armenians was performed using Fisher’s exact test available in the R 3.3.2 base package. Variant functional annotation was performed using the ENSEMBL Variant Effect Predictor tool21.\n\n\nResults\n\nIn this study we have performed exome sequencing of BRCA1 and BRCA2 genes in patients with a positive family history of breast cancer and their healthy relatives of Armenian origin. Patients’ clinical data and family structure of the studied subjects are presented in Table 1. The aligned sequencing data is available in the NCBI Sequence Read Archive (SRA, https://www.ncbi.nlm.nih.gov/sra/) under accession SRP095082. For each sample, a total of 166 different primer pairs were used to amplify all the coding regions of BRCA1 and BRCA2 (as described in the Methods section). The average sequencing depth per base per sample was 6696±606. Detailed NGS statistics are presented in Table 2 and Supplementary file S1.\n\nIn total, variant calling resulted in detection of 232 sequence variations (200 SNVs and 32 indels, Supplementary datasets S2 and S3). 
Thirty-nine SNVs and 4 indels passed the thresholds after applying hard filters (Table 3).\n\nThis table provides functional annotation of mutations in the BRCA1 and BRCA2 genes that passed filters during variant calling with GATK.\n\nHGVSg – genomic position of the mutation in Human Genome Variation Society notation; Consequence – consequence of the mutation; Impact – functional impact of the mutation (MD – modifier, MO – moderate, L – low, H – high); HGVSp – protein sequence name in Human Genome Variation Society notation; SIFT – prediction of protein function change depending on the amino acid substitution, using SIFT software (http://sift.jcvi.org/); PolyPhen – prediction of protein function change depending on the amino acid substitution, using PolyPhen software (genetics.bwh.harvard.edu/pph2/).\n\nOf these variants, 18 were novel (15 SNVs and 3 indels), and the rest have already been described in 1000 Genomes populations (Table 4). The novel variants were detected in only one or two subjects (8 in healthy relatives and 7 in patients). We identified 12 missense variants (5 in BRCA1 and 7 in BRCA2), 8 synonymous variants (5 in BRCA1 and 3 in BRCA2), 15 intronic variants (8 in BRCA1 and 7 in BRCA2) and 4 in untranslated regions of BRCA2. The frequency distributions of known BRCA1/2 variants were similar to those in 1000 Genomes populations and/or the GWAS of healthy Armenians, except for g.32914236 C>T (pFisher=8.35E-24 vs Armenians, pFisher=0.013 vs 1000 Genomes) and g.41245471 C>T (pFisher=0.013 vs Armenians, pFisher=4.7E-05 vs 1000 Genomes). 
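The two-sided Fisher's exact comparisons behind these p-values were computed in R; the same test on a 2x2 table of allele counts can be sketched with the Python standard library alone. The counts in the usage line are hypothetical, for illustration only.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g. minor/major allele counts in patients vs a reference panel.

    Sums hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed one.
    """
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)

    def prob(x):                      # P(first cell == x) under H0
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = prob(a) * (1 + 1e-9)      # tolerance for floating-point ties
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs)

# Hypothetical allele counts: 12/44 in the study group vs 150/1850 in a
# reference panel (not the study's actual data).
p = fisher_exact_2x2(12, 44, 150, 1850)
```

The exact-tie tolerance mirrors how standard implementations include tables whose probability equals that of the observed table.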
No known clinically significant variants were detected in the breast cancer patients or their healthy relatives.\n\nThe frequency distributions of identified mutations in the study group were compared with data from the 1000 Genomes population, as well as the genome-wide association study of 54 healthy Armenian females20.\n\nMAF – minor allele frequency; RAF – reference allele frequency.\n\n\nDiscussion\n\nThis study provides a preliminary characterization of variations in the BRCA1 and BRCA2 genes in Armenian patients with a family history of breast cancer. Our data suggest that no known clinically significant variants22 contribute to disease development in these patients. Meanwhile, two other frequent mutations were identified that cause missense substitutions in the coding regions of BRCA1 and BRCA2 and were predicted to have pathogenic consequences. The results of this study are in agreement with a previous report, which also failed to identify known high-risk mutations of the BRCA1 and BRCA2 genes in Armenian patients using a high-resolution melting PCR approach23,24.\n\nMutations in the BRCA1 and BRCA2 genes are known markers for hereditary breast/ovarian cancer25. Currently, more than 100 clinically important mutations and polymorphisms have been described. Genetic testing of these mutations was among the first included in the guidelines for cancer prognostics3,4. Nowadays, in many countries genetic testing is routinely prescribed to patients in high-risk groups for hereditary breast and ovarian cancer26–28. However, it has also become apparent that the distribution and appearance of particular risk alleles in the BRCA1 and BRCA2 genes are population dependent, and in many cases population-specific mutations are being identified8–11. This is especially relevant to populations that have for a long time remained culturally and genetically isolated8–11, as in the case of Armenians. 
Recent research has demonstrated that the genetic structure of Armenians “stabilized” about 4000 years ago and has remained almost unchanged since that time20. Furthermore, our own data indicate that the frequencies of genetic variations associated with various complex human diseases share similarities with both European and Asian populations29–31. On the other hand, Armenian genomes are highly underrepresented in current human genome sequencing initiatives, and little is known about genetic predisposition to complex diseases in this particular population.\n\nIn conclusion, despite the small sample size limitation, our results demonstrate the importance of screening BRCA1 and BRCA2 gene variants in the Armenian population in order to identify the specifics of mutation spectra and frequencies and enable accurate assessment of the risk of hereditary breast cancers.\n\n\nData availability\n\nThe aligned sequencing data are available in the NCBI Sequence Read Archive (SRA) under accession number SRP095082 (https://www.ncbi.nlm.nih.gov/sra/?term=SRP095082). Scripts and vcf files with called and filtered genotypes are available: DOI, 10.5281/zenodo.21561532.", "appendix": "Author contributions\n\n\n\nAA conceived the study, performed data analysis and drafted the manuscript. SA, AC and RZ performed experiments, data analysis and participated in drafting. NB and SA were responsible for patient selection, data analysis and contributed to manuscript writing.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research received a grant from the Armenian National Science & Education Fund (ANSEF) [#molbio-4334] to AC and SA.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary materials\n\nSupplementary File 1. 
Sequencing statistics: Coverage and target enrichment statistics.\n\nThis file contains details on sequencing coverage and enrichment, which were extracted from the QC report compiled by Admera Health LLC, South Plainfield, NJ, USA.\n\n\nReferences\n\nPorter P: “Westernizing” women’s risks? Breast cancer in lower-income countries. N Engl J Med. 2008; 358(3): 213–6. PubMed Abstract | Publisher Full Text\n\nWright HZ, Simonsen K, Cheng Y: High breast cancer-related mortality in Armenia: Examining the breast cancer knowledge gap. Ann Glob Health. 2014; 80(3): 230. Publisher Full Text\n\nFerlay J, Soerjomataram I, Ervik M, et al.: GLOBOCAN 2012 v1.0. Cancer Incidence and Mortality Worldwide: IARC CancerBase No. 11 [Internet]. International Agency for Research on Cancer. Lyon, France: 2013. Reference Source\n\nMiki Y, Swensen J, Shattuck-Eidens D, et al.: A strong candidate for the breast and ovarian cancer susceptibility gene BRCA1. Science. 1994; 266(5182): 66–71. PubMed Abstract | Publisher Full Text\n\nWooster R, Neuhausen SL, Mangion J, et al.: Localization of a breast cancer susceptibility gene, BRCA2, to chromosome 13q12-13. Science. 1994; 265(5181): 2088–90. PubMed Abstract | Publisher Full Text\n\nSzabo CI, King MC: Population genetics of BRCA1 and BRCA2. Am J Hum Genet. 1997; 60(5): 1013–20. PubMed Abstract | Free Full Text\n\nFerla R, Calò V, Cascio S, et al.: Founder mutations in BRCA1 and BRCA2 genes. Ann Oncol. 2007; 18(Suppl 6): vi93–vi98. PubMed Abstract | Publisher Full Text\n\nAntoniou A, Pharoah PD, Narod S, et al.: Average risks of breast and ovarian cancer associated with BRCA1 or BRCA2 mutations detected in case series unselected for family history: A combined analysis of 22 studies. Am J Hum Genet. 2003; 72(5): 1117–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeuhausen S, Gilewski T, Norton L, et al.: Recurrent BRCA2 6174delT mutations in Ashkenazi Jewish women affected by breast cancer. 
Nat Genet. 1996; 13(1): 126–8. PubMed Abstract | Publisher Full Text\n\nFriedman LS, Szabo CI, Ostermeyer EA, et al.: Novel inherited mutations and variable expressivity of BRCA1 alleles, including the founder mutation 185delAG in Ashkenazi Jewish families. Am J Hum Genet. 1995; 57(6): 1284–97. PubMed Abstract | Free Full Text\n\nTonin P, Weber B, Offit K, et al.: Frequency of recurrent BRCA1 and BRCA2 mutations in Ashkenazi Jewish breast cancer families. Nat Med. 1996; 2(11): 1179–83. PubMed Abstract | Publisher Full Text\n\nNeuhausen S, Gilewski T, Norton L, et al.: Recurrent BRCA2 6174delT mutations in Ashkenazi Jewish women affected by breast cancer. Nat Genet. 1996; 13(1): 126–8. PubMed Abstract | Publisher Full Text\n\nEaston DF, Bishop DT, Ford D, et al.: Genetic linkage analysis in familial breast and ovarian cancer: results from 214 families. The Breast Cancer Linkage Consortium. Am J Hum Genet. 1993; 52(4): 678–701. PubMed Abstract | Free Full Text\n\nNanda R, Schumm LP, Cummings S, et al.: Genetic testing in an ethnically diverse cohort of high-risk women: a comparative analysis of BRCA1 and BRCA2 mutations in American families of European and African ancestry. JAMA. 2005; 294(15): 1925–33. PubMed Abstract | Publisher Full Text\n\nHuo D, Senie RT, Daly M, et al.: Prediction of BRCA Mutations Using the BRCAPRO Model in Clinic-Based African American, Hispanic, and Other Minority Families in the United States. J Clin Oncol. 2009; 27(8): 1184–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKauff ND, Domchek SM, Friebel TM, et al.: Risk-reducing salpingo-oophorectomy for the prevention of BRCA1- and BRCA2-associated breast and gynecologic cancer: a multicenter, prospective study. J Clin Oncol. 2008; 26(8): 1331–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRebbeck TR, Kauff ND, Domchek SM: Meta-analysis of risk reduction estimates associated with risk-reducing salpingo-oophorectomy in BRCA1 or BRCA2 mutation carriers. 
J Natl Cancer Inst. 2009; 101(2): 80–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEvans DG, Baildam AD, Anderson E, et al.: Risk reducing mastectomy: outcomes in 10 European centres. J Med Genet. 2009; 46(4): 254–8. PubMed Abstract | Publisher Full Text\n\nSambrook J, Russell DW: Molecular Cloning: A Laboratory Manual. 3rd ed. New York: Cold Spring Harbor Laboratory Press; 2001. Reference Source\n\nVan der Auwera GA, Carneiro M, Hartl C, et al.: From FastQ data to high confidence variant calls: the Genome Analysis Toolkit best practices pipeline. Curr Protoc Bioinformatics. 2013; 43(1110): 11.10.1–33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaber M, Mezzavilla M, Xue Y, et al.: Genetic evidence for an origin of the Armenians from Bronze Age mixing of multiple populations. Eur J Hum Genet. 2016; 24(6): 931–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLaren W, Gil L, Hunt SE, et al.: The Ensembl Variant Effect Predictor. Genome Biol. 2016; 17(1): 122. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeto J, Collins N, Barfoot R, et al.: Prevalence of BRCA1 and BRCA2 gene mutations in patients with early-onset breast cancer. J Natl Cancer Inst. 1999; 91(11): 943–9. PubMed Abstract | Publisher Full Text\n\nBabikyan DT, Sarkisian TF: Preliminary genetic investigation of high-risk breast cancer patients in Armenia. Eur J Hum Genet. 2009; 17(2): 191.\n\nMkrtchyan AG: Genetic analysis of hereditary breast cancer (review). Proceedings of Yerevan State Medical University post-graduate students research. 2004; 2: 73–80.\n\nJanavičius R: Founder BRCA1/2 mutations in the Europe: implications for hereditary breast-ovarian cancer prevention and control. EPMA J. 2010; 1(3): 397–412. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeorge A, Riddell D, Seal S, et al.: Implementing rapid, robust, cost-effective, patient-centred, routine genetic testing in ovarian cancer patients. Sci Rep. 2016; 6: 29506. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJung J, Kang E, Gwak JM, et al.: Association between basal-like phenotype and BRCA1/2 germline mutations in Korean breast cancer patients. Curr Oncol. 2016; 23(5): 298–303. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNakamura S, Kwong A, Kim SW, et al.: Current Status of the Management of Hereditary Breast and Ovarian Cancer in Asia: First Report by the Asian BRCA Consortium. Public Health Genomics. 2016; 19(1): 53–60. PubMed Abstract | Publisher Full Text\n\nde Bruin MA, Kwong A, Goldstein BA, et al.: Breast cancer risk factors differ between Asian and white women with BRCA1/2 mutations. Fam Cancer. 2012; 11(3): 429–39. PubMed Abstract | Publisher Full Text\n\nNair AK, Baier LJ: Complex Genetics of Type 2 Diabetes and Effect Size: What have We Learned from Isolated Populations? Rev Diabet Stud. 2015; 12(3–4): 299–319. PubMed Abstract | Publisher Full Text\n\nZou WB, Boulling A, Masamune A, et al.: No Association Between CEL-HYB Hybrid Allele and Chronic Pancreatitis in Asian Populations. Gastroenterology. 2016; 150(7): 1558–60.e5. PubMed Abstract | Publisher Full Text\n\nArakelyan A: Raw BRCA1/2 variants in breast cancer patients and healthy relatives produced with GATK. [Data set]. Zenodo. 2016. Data Source" }
[ { "id": "20365", "date": "01 Mar 2017", "name": "David A. Goukassian", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBreast cancer is an important health problem in Armenia, and identifying specific genetic factors that may predispose to breast cancer development, especially in the families of patients already diagnosed with this condition, may significantly improve the dire situation with breast cancer prevention in Armenia. Although based on a small number of patients and family members, the manuscript presents a good step forward and sets an example of how genetic studies in larger cohorts of breast cancer patients and members of their families could identify clinically relevant BRCA1/2 variants known outside the Armenian population, as well as variants that could be specific to the Armenian population.\nThe title of the manuscript is appropriate and the abstract summarizes the reported findings well. The study design is appropriate, albeit with a small number of patients. The materials and methods and data analyses are suitable for the design, and the conclusions are justified. The methodology provides sufficient information and references for replication of the experiments, as well as for building up the database with a larger cohort of patients and their family members.\nA few suggestions to make the discussion of the results better:\nIn this study there were no known clinically relevant variants identified. 
Could this be because of the small number of patients, in addition to the notion that the genetic structure of Armenians “stabilized” 4000 years ago? Could there be other predisposing factors as well? This needs a bit more discussion.\n\nWhat is the value of the novel variants identified in this study? Could these novel variants be specific to the Armenian population? Are there any other “close ethnic groups” that have shown novel variants that are not clinically relevant for the \"mainstream population\" but became relevant for the specific ethnic group? A brief discussion will suffice.", "responses": [] }, { "id": "21044", "date": "17 Mar 2017", "name": "Lusine Nazaryan-Petersen", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is a very important pilot study characterizing variations in the BRCA1 and BRCA2 genes in Armenian patients with a family history of breast cancer. It provides a good background for a further large-scale study in Armenia.\nI have a few notes to consider:\nIn the Methods section, the authors used 1000 Genomes phase 1 genotype data for variant filtration. Is there any reason why they preferred phase 1 data over phase 3, which they used for assessing allelic frequencies?\n\nI noticed that the authors did not verify the NGS-detected variants by other methods, e.g. by Sanger sequencing. 
It is especially important to confirm the detected novel mutations to exclude the possibility that they are false positives.\n\nIn Table 3, the authors report a frameshift variant 13:g.32913172delC, which has a high functional impact on BRCA2. Was it detected in a patient or in a healthy relative? Could it be a novel mutation specific to the Armenian population? It is known that PolyPhen and SIFT may fail to predict the impact of some variants. The authors might consider verifying this mutation by other methods, e.g. Sanger sequencing, and reporting it to the appropriate databases. I would suggest mentioning this variant in the Discussion section.", "responses": [] } ]
1
https://f1000research.com/articles/6-29
https://f1000research.com/articles/6-25/v1
09 Jan 17
{ "type": "Research Note", "title": "Healthcare benefits linked with Below Poverty Line registration in India: Observations from Maharashtra Anaemia Study (MAS)", "authors": [ "Anand Ahankari", "Andrew Fogarty", "Laila Tata", "Puja Myles", "Andrew Fogarty", "Laila Tata", "Puja Myles" ], "abstract": "A 2015 Lancet paper by Patel et al. on healthcare access in India comprehensively discussed national health programmes where some benefits are linked with the country’s Below Poverty Line (BPL) registration scheme. BPL registration aims to support poor families by providing free/subsidised healthcare. Technical issues in obtaining BPL registration by poor families have been previously reported in the Indian literature; however, there are no data on the family assets of BPL registrants. Here, we provide evidence of family-level assets among BPL registration holders (and non-BPL households) using original research data from the Maharashtra Anaemia Study (MAS).\n\nSocial and health data from 287 pregnant women and 891 adolescent girls (representing 1178 family households) across 34 villages in Maharashtra state, India, were analysed. Several assets were shown to be similarly distributed between BPL and non-BPL households; a large proportion of families who would probably be eligible were not registered, whereas BPL-registered families often had significant assets that should not make them eligible. This is likely to be the first published evidence in which asset distributions such as agricultural land, housing structures and livestock are compared between BPL and non-BPL households in a rural population. These findings may help in planning BPL administration to allocate health benefits equitably, which is an integral part of national health programmes.", "keywords": [ "India", "Below Poverty Line", "Healthcare benefits", "Maharashtra Anaemia Study" ], "content": "Introduction\n\nPatel et al. 
(2015) provided a comprehensive picture of the current Indian healthcare structure, and also mentioned the National Health Mission’s (NHM) initiative to target inequalities in healthcare access1. Such national health programmes use the ‘Below Poverty Line’ (BPL) registration status to identify deprived families and provide them with free/subsidised healthcare services2. The registration is allocated at the family level, based on a scoring system calculated using family-level assets such as agricultural land, housing structures, electricity supply and household equipment. The scoring system varies among Indian states. BPL status provides access to free healthcare facilities along with monthly access to subsidised food products including, but not limited to, wheat, rice, cooking oil and sugar.\n\nThere are no data on the family assets of BPL registrants. Therefore, in this study, we provide evidence of family-level assets among BPL registration holders (and non-BPL households) using research data we collected previously for the Maharashtra Anaemia Study (MAS)3–5. The MAS was conducted through a joint collaboration of the Halo Medical Foundation (HMF), India and the University of Nottingham, UK.\n\n\nMethods\n\nThe MAS was conducted to identify risk factors associated with anaemia in pregnant women (3 to 5 months’ gestation), and in 13- to 17-year-old adolescent girls, living in 34 villages of the Osmanabad district of Maharashtra state, India. MAS collected information on health and social conditions along with blood investigations to examine anaemia risks in rural Indian communities. Additional details of the MAS project are published elsewhere3–5.\n\nData collection also included information on family assets such as agricultural land, housing structure, livestock, automobiles, employment, and home electronics. In this research note, we evaluated family-level assets in relation to BPL registration. 
The comparison was made between BPL and non-BPL holders for each asset using chi-square statistics in Stata software (V.13.1, Texas, USA).\n\nIn total, 287 pregnant women and 1010 adolescent girls participated in data collection, giving an overall response rate of 95%. We selected one person per household at random for the analysis, which resulted in 287 pregnant women (Dataset 1)6, all from unique households, and 891 adolescent girls (Dataset 2)7. Therefore, 1178 total households across 34 villages (a population of approximately 65,500) were used in the analyses. Written approval was obtained from each study participant and their guardian prior to data collection, and was countersigned by the primary investigator (AA). The study was approved by the Institutional Ethics Committee of Government Medical College of Aurangabad, India (Reference number: Pharma/IEC/GMA/196/2014), and also by the Nottingham University Medical School Research Ethics Committee (Reference number: E10102013).\n\n\nResults\n\nOverall, 36.4% of adolescent girls (325/891) and 37.6% of pregnant women (108/287) in our study had current BPL registration. 32.3% (105/325) of adolescent girl families with BPL registration had more than 5 acres of farming land, and 54.4% (177/325) had a colour television. Overall, of the 6 assets we assessed, 3 showed no significant differences in distribution (p>0.05) between BPL-registered and non-registered families of adolescent girls (Table 1).\n\n+: Those who are likely to be ineligible but hold BPL registration.\n\n*: Those who appeared to be eligible but did not have registration.\n\nAnnual income is also presented in British pounds (GBP) based on a conversion rate of 1 GBP = 100 Indian Rupees (INR).\n\nNote: Family income/assets was defined as an immediate family’s resources only. 
For example: for adolescent girls, it included the participants’ parents’ (mother and father only) income/assets; among pregnant women, it included the participants’ (pregnant woman) and husbands’ income/assets only. P values were calculated using the chi-square test.\n\nAmong families of pregnant women, 6 out of 9 assets assessed showed no significant differences (p>0.05) between BPL-registered and non-registered families. Furthermore, 2% of the families of BPL registrants (2/108) had an annual income greater than 100,000 INR (~1000 GBP), 27.8% had more than 5 acres of land (30/108), and 8.4% had three/four-wheeler vehicles (9/108).\n\n\nDiscussion\n\nNon-eligible families holding BPL registration are likely to increase the burden on healthcare services, while those with the greatest need may remain untreated due to the absence of BPL registration, or an inability to pay for healthcare services out of their own pockets2,8. Subsidising non-eligible BPL holders also increases the burden on government finances, which, in light of the current fragile economic situation, is an important issue to address8.\n\nWe observed several participants from both study groups in the MAS who appeared eligible for the BPL scheme but had not obtained the registration. Many participants reported technical difficulties as the reason for not having BPL registration. These technical difficulties included problems procuring the required documents from government officials, and being unable to complete the paperwork and other legal documents needed to submit the BPL application. This suggests a need to re-evaluate and strengthen the current BPL registration system, and also demands further monitoring to ensure that poor families in need receive vital healthcare and other subsidy benefits. The National Health Mission’s initiatives are well meant and have the potential to provide universal health coverage in India; however, implementation is challenging. 
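The chi-square comparisons described in the Methods can be reproduced without Stata. A minimal Python sketch for a 2x2 asset-by-registration table follows; the example counts for non-BPL families are hypothetical (only the 105/325 BPL figure appears in the text).

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square test (df = 1, no continuity correction) for the
    2x2 table [[a, b], [c, d]], e.g. asset present/absent by BPL status.

    Returns (statistic, p_value). For one degree of freedom the survival
    function reduces to erfc(sqrt(x / 2)), since chi-square(1) is the
    square of a standard normal variable.
    """
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = n * (a * d - b * c) ** 2 / denom
    return chi2, erfc(sqrt(chi2 / 2))

# Hypothetical example: 105 of 325 BPL families vs an assumed 180 of 566
# non-BPL families holding more than 5 acres of land.
stat, p = chi_square_2x2(105, 220, 180, 386)
```

The erfc-based p-value is exact for one degree of freedom, which keeps the sketch free of external statistics libraries.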
Strengthening the current BPL registration system and improving the identification of poor and needy families might help with achieving the universal health model. This may also help in revising the current health budget to allocate funds for the improvement of the governmental health system. We welcome the review from Patel et al. (2015) and suggest continuing evaluation of both national health projects and the BPL registration process, which will be useful in underpinning healthcare facilities whilst widening access.\n\n\nData availability\n\nDataset 1: Pregnant Women MAS Project. The dataset contains 287 pregnant women participants with self-explanatory variables on BPL registration and related assets analysed in the paper.\n\ndoi, 10.5256/f1000research.10556.d1487436\n\nDataset 2: Adolescent Girls MAS Project. The dataset contains 891 adolescent girl participants with self-explanatory variables on BPL registration and related assets analysed in the paper.\n\ndoi, 10.5256/f1000research.10556.d1487447\n\n\nEthics statement\n\nThe study was approved by the Institutional Ethics Committee of Government Medical College of Aurangabad, India (Reference number: Pharma/IEC/GMA/196/2014), and also by the Nottingham University Medical School Research Ethics Committee (Reference number: E10102013). All participants and their guardians provided signed informed consent for the survey and blood withdrawal separately. Each consent was countersigned by the primary investigator (AA). Other than those who declined to participate, all adolescent girls and pregnant women received a standardised health report including information on their haemoglobin level and anaemia status along with facilitated access to educational materials on anaemia through the health NGO, Halo Medical Foundation’s (HMF) village-based services. 
Participant health reports were also provided to the village health worker/government nurse, with arrangements for free consultation and assistance if any significant health problems requiring further assessment or treatment were identified during the study. HMF’s hospital was also made available for free consultation as a primary referral centre if more specialist assessment or treatment was needed. On completion of data collection, an additional reminder letter was issued to village health workers indicating details of each severely anaemic case in their village to ensure that necessary medical advice and treatment were available.", "appendix": "Author contributions\n\n\n\nThe MAS project was designed by AF, AA, PM and LT. The data collection, analysis and manuscript preparation were carried out by AA with additional advisory support from AF, PM and LT.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe Maharashtra Anaemia Study (MAS) was conducted as part of Dr Anand Ahankari’s PhD programme with the University of Nottingham, UK, which was sponsored by the University’s Vice Chancellor Scholarship for Research Excellence International 2013 (Tuition fee support, Ref 12031). The anaemia project conducted in Maharashtra, India, was a joint collaboration between the University of Nottingham and the Halo Medical Foundation (HMF), with the latter providing laboratory testing and data storage facilities. Project management and data collection were funded by Dr Hardikar through the Maharashtra Foundation, USA. Dr Ahankari also received a bursary from the Durga Devi Charitable Trust, India during his PhD studies.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nProfessor (Mr) and Mrs Chawathe, Mumbai, India provided generous support for Dr Ahankari’s study. The authors acknowledge the support of Ms. 
Sandhya Rankhamb in data collection, data entry, and verification, and recognise her contribution in the project. The authors thank HMF village health workers for providing field level support for this study. Support for publication of this article was obtained from the University of Nottingham, UK.\n\n\nReferences\n\nPatel V, Parikh R, Nandraj S, et al.: Assuring health coverage for all in India. Lancet. 2015; 386(10011): 2422–2435. PubMed Abstract | Publisher Full Text\n\nAlkire S, Seth S: Identifying BPL Households: A Comparison of Methods. SSRN Electron J. 2012. Publisher Full Text\n\nAhankari AS, Fogarty AW, Tata LJ, et al.: Assessment of a non-invasive haemoglobin sensor NBM 200 among pregnant women in rural India. BMJ Innov. 2016; 2: 70–77. Publisher Full Text\n\nAhankari AS, Myles PR, Fogarty AW, et al.: Prevalence of iron deficiency anaemia and risk factors in 1,010 adolescent girls from rural Maharashtra, India: a cross-sectional survey. Public Health. 2017; 142: 159–166. Publisher Full Text\n\nAhankari AS, Dixit JV, Fogarty AW, et al.: Comparison of the NBM 200 non-invasive haemoglobin sensor with Sahli’s hemometer among adolescent girls in rural India. BMJ Innov. 2016; 2: 144–148. Publisher Full Text\n\nAhankari A, Fogarty A, Tata L, et al.: Dataset 1 in: Healthcare benefits linked with Below Poverty Line registration in India: Observations from Maharashtra Anaemia Study (MAS). F1000Research. 2017. Data Source\n\nAhankari A, Fogarty A, Tata L, et al.: Dataset 2 in: Healthcare benefits linked with Below Poverty Line registration in India: Observations from Maharashtra Anaemia Study (MAS). F1000Research. 2017. Data Source\n\nMishra RK, Raveendran J, editors: Millennium Development Goals: The Indian Journey. New Delhi: Allied Publishers Pvt Ltd. 2011; 279. Reference Source" }
[ { "id": "19107", "date": "19 Jan 2017", "name": "Sunil M. Sagare", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors have appropriately conveyed the importance of the subject, clarified ethical considerations, presented self-explanatory results and discussed the issue well.\n\nThe authors pinpointed the below poverty line (BPL) issue in the form of non-eligible BPL registrants and eligible BPL non-registrants. The article raises the following questions:\nUndue healthcare advantage taken by non-eligible BPL registrants.\nDisadvantage to needy eligible BPL families due to their non-registration.\nThe suggestions given for the above-mentioned questions are very relevant and, if implemented, will help to reduce the burden on government and healthcare finances as well.", "responses": [ { "c_id": "2533", "date": "06 Mar 2017", "name": "Anand Ahankari", "role": "Author Response", "response": "Dear Dr Sagare, Thank you for reviewing and submitting your comments, much appreciated.  Dr Anand Ahankari." 
} ] }, { "id": "19814", "date": "06 Feb 2017", "name": "Umesh Wadgave", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is interesting and valid research tapping into the vital topic of the distribution of BPL cards in India. The overall methodology of the research is sound.\nThere are certain issues which require clarification from the authors:\nThe assets considered for issuing BPL cards in Maharashtra state are quite different from the assets considered in the present study (for details, follow this link: http://mahafood.gov.in/website/english/PDS.aspx). These differences should be considered before making recommendations about policy change in BPL card distribution.\n\nThe authors state that “Overall, of the 6 assets we assessed, 3 showed no significant differences in distribution (p>0.05) between BPL registered and non-registered families of adolescent girls (Table 1).” However, all three assets which did not show a significant difference are not listed in the criteria for issuing BPL cards in Maharashtra state.\n\nTable 1 shows that about 44% of non-BPL holders don’t have farming land, which is far more than among BPL card holders, of whom only 21% don’t have farming land. However, in Table 2 it’s just the opposite: 16.7% of non-BPL holders and 36.1% of BPL holders don’t have farming land. 
This contradictory finding needs to be justified in the discussion.\n\nWhy were different assets considered for adolescent girls (5 assets) and pregnant women (9 assets)?\n\nWhat is the necessity of stratifying the analysis for adolescent girls and pregnant women separately? The authors could have pooled the data and performed the analysis, which would increase the power of the study.\n\nThe sample for the present study consists of households having adolescent girls and pregnant women, which will limit the generalizability of the study findings. Hence, the issue of the generalizability of the study findings should be discussed.\n\nIn the discussion, it is necessary to discuss the study limitations and the future scope of research on this topic.", "responses": [ { "c_id": "2584", "date": "27 Mar 2017", "name": "Anand Ahankari", "role": "Author Response", "response": "Dear Dr Wadgave and Dr Khairnar,  Thank you for your valuable time to review our research article. I have provided a brief response to your comments below.  Regarding comments 1, 2, 4, 6 and 7: This paper used a retrospective dataset available from the Maharashtra Anaemia Study, where the primary objective was to identify risk factors associated with adolescent and pregnancy anaemia in the rural population of Maharashtra state, India. We had limited data on social class and family assets, which was used against the BPL status in the analysis. We agree that our variables are different compared to the listed BPL indicators. Nevertheless, our data reported discrepancies in the current BPL status, which is an incidental finding. We suggested investigating the current challenges in BPL administration to ensure appropriate allocation and monitoring. We have not suggested any policy changes based on our research. We acknowledge the limitations of our study, and further work is necessary to address the outlined challenges. Due to article length restrictions, we could not add more on the study strengths and limitations. 
We hope that readers will find this comment section useful for additional clarification.  Regarding comment 3: The article has only one table (Table 1). In this table, there are 2 sections, adolescent girls and pregnant women. Each section used different variables (from 2 individual datasets). Therefore, the statistical presentations in Table 1 are different for each study group. Regarding comment 5: As you may see in the attached datasets, we collected different sets of variables from the study participants, thus the analysis was conducted independently for the two study groups. The data from pregnant women had more variables compared to the adolescent girl dataset. Therefore, we could not combine these to form a single source.  I hope that F1000Research readers will find this section useful. Thank you once again for providing this opportunity to respond to your comments.  Dr Anand Ahankari" } ] } ]
1
https://f1000research.com/articles/6-25
https://f1000research.com/articles/6-23/v1
09 Jan 17
{ "type": "Research Note", "title": "Conservation of gene essentiality in Apicomplexa and its application for prioritization of anti-malarial drug targets", "authors": [ "Gajinder Pal Singh" ], "abstract": "New anti-malarial drugs are needed to address the challenge of artemisinin resistance and to achieve malaria elimination and eradication. Target-based screening of inhibitors is a major approach for drug discovery, but its application to malaria has been limited by the availability of few validated drug targets in Plasmodium. Here we utilize the recently available large-scale gene essentiality data in Plasmodium berghei and a related apicomplexan pathogen, Toxoplasma gondii, to identify potential anti-malarial drug targets. We find significant conservation of gene essentiality in the two apicomplexan parasites. The conservation of essentiality could be used to prioritize enzymes that are essential across the two parasites and show no or low sequence similarity to human proteins. Novel essential genes in Plasmodium could be predicted based on their essentiality in T. gondii. Essential genes in Plasmodium showed higher expression, evolutionary conservation and association with specific functional classes. We expect that the availability of a large number of novel potential drug targets would significantly accelerate anti-malarial drug discovery.", "keywords": [ "Plasmodium falciparum", "Toxoplasma gondii", "drug targets", "essential genes" ], "content": "Introduction\n\nMalaria killed an estimated half a million people in 2015; 70% of them were children under the age of five1. The emergence and spread of Plasmodium falciparum strains resistant to all currently used anti-malarial drugs2 have created an urgent need to discover new drugs. New anti-malarial drugs are also needed for malaria elimination and global eradication, for which the currently available drugs are not adequate3. 
There are two main approaches for drug discovery against pathogens: phenotype screening and the target-based approach4. In phenotype screening, compounds are identified that inhibit the cellular growth of the pathogen. Large-scale screening of millions of compounds against the erythrocytic stage of P. falciparum has identified thousands of such inhibitors5. Some of these inhibitors have progressed to clinical trials6. In the target-based approach, compounds are identified that inhibit the activity of a protein essential for the viability of the pathogen. Thus the target-based approach requires prior knowledge of the genes that are essential for the pathogen. Only a few essential genes have been identified in P. falciparum, hampering the target-based approach for anti-malarial drug discovery. Consequently, the target-based approach has identified only a few anti-malarial candidates6. However, recent large-scale screening of about 2500 genes in the rodent malaria parasite P. berghei has identified about 1200 essential genes7,8. A recent genome-scale CRISPR screen in the related apicomplexan parasite Toxoplasma gondii has identified about 3000 essential genes9. Here we analyse these data and find significant conservation of gene essentiality in these two pathogens. From this, we identified potential anti-malarial drug targets that exhibit conserved essentiality in apicomplexan parasites; we predict novel essential genes in Plasmodium based on the essentiality of their orthologs in T. gondii. These targets could serve as starting points for target-based anti-malarial drug discovery.\n\n\nMethods\n\nThe genome-wide CRISPR screening data on the relative fitness of T. gondii genes during infection of human fibroblast cells was obtained from Sidik et al.9. The authors defined the log2 fold change in the abundance of single guide RNAs (sgRNAs) targeting a given gene as the “phenotype” score for that gene9. 
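The score-and-threshold classification used in these Methods can be sketched in a few lines. This is an illustrative sketch only (the original analyses were run in R); the function names are invented, and the cutoffs (-2 and 0) are the ones reported for the T. gondii screen:

```python
import math

def phenotype_score(final_abundance, initial_abundance):
    """Log2 fold change in sgRNA abundance for a gene (illustrative)."""
    return math.log2(final_abundance / initial_abundance)

def classify_gene(score):
    """Apply the reported cutoffs: < -2 essential, > 0 non-essential,
    anything in between left unclassified."""
    if score < -2:
        return "essential"
    if score > 0:
        return "non-essential"
    return "unclassified"
```

For example, a gene whose sgRNA counts drop from 64 to 4 has a phenotype score of log2(4/64) = -4 and would be classified as essential.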
It was found that for a previously determined set of 81 essential and non-essential genes, a phenotype score of less than -2 identified most of the essential genes, but none of the non-essential genes9. We thus defined all genes with a phenotype score of less than -2 as essential (2870 genes). Genes with a phenotype score greater than 0 were defined as non-essential (3071 genes), while those with a phenotype score between 0 and -2 were not classified (2210 genes). The in vivo relative growth rate data for 2574 genes of P. berghei were obtained from the PlasmoGEM database7,8 (http://plasmogem.sanger.ac.uk/phenotypes). The authors generated knockout mutants by transfection with large pools of barcoded gene knockout vectors. The in vivo growth rate in BALB/c mice was obtained by counting barcodes by next generation sequencing daily between days 4 and 8 post transfection7. Essential genes were defined as genes with a growth rate not significantly different from 0.1 (growth rate of the wild type taken as 1), while non-essential genes were defined as genes with growth rate not significantly different from 17.\n\nProteome sequences of P. falciparum 3D7, P. berghei ANKA, P. chabaudi chabaudi, P. cynomolgi B, P. knowlesi H, P. reichenowi CDC, P. vivax Sal1, P. yoelii 17X were downloaded from the PlasmoDB database10 (http://plasmodb.org/common/downloads/release-27/). Proteome sequences for six apicomplexan species were obtained from EuPathDB11: Cryptosporidium hominis TU502 (http://cryptodb.org/common/downloads/release-29/ChominisTU502/); T. 
gondii GT1 (http://toxodb.org/common/downloads/release-29/TgondiiGT1/); Eimeria brunetti Houghton (http://toxodb.org/common/downloads/release-29/EbrunettiHoughton/); Babesia bovis T2Bo (http://piroplasmadb.org/common/downloads/release-29/BbovisT2Bo/); Theileria annulata Ankara (http://piroplasmadb.org/common/downloads/release-29/TannulataAnkara/); and Gregarina niphandrodes (http://cryptodb.org/common/downloads/release-29/GniphandrodesUnknown/). Proteome sequences for Homo sapiens were downloaded from EBI (http://www.ebi.ac.uk/reference_proteomes). Homologs of P. berghei genes in H. sapiens were identified with an E-value cut-off of 1e-6, with soft mask set as true. Orthologous sequences were identified using the best bidirectional hit algorithm12.\n\nRNA-seq data (FPKM values) for different stages of P. berghei were obtained from Otto et al.13. Proteomics data on different stages of P. berghei and dN, dN/dS values were obtained from Hall et al.14. Gene Ontology information for P. falciparum was obtained from PlasmoDB10, and these functions were assigned to their orthologous proteins in P. berghei. Enzyme Commission (EC) numbers for P. berghei and P. falciparum were also obtained from PlasmoDB. Trans-membrane regions were identified using TMHMM15. All statistical analyses were performed in the R software version 3.3.1 (https://www.r-project.org/).\n\n\nResults\n\nThe relative in vivo growth rate of knockout mutants for 2574 P. berghei genes (out of a total of 5076 genes in P. berghei) has recently been measured, of which 1198 genes (46%) with very low growth rate were classified as essential7,8. Similarly, the in vivo relative fitness of knockout mutants for 8151 T. gondii genes has been measured9, of which 2870 genes (35%) with very low relative fitness values were classified as essential (see Methods). Of the 2574 P. berghei genes with fitness data, 1617 genes have an ortholog in T. gondii. P. berghei genes with an ortholog in T. 
gondii were significantly more likely to be essential, compared to P. berghei genes without an ortholog in T. gondii (53% vs. 36%; Fisher test p = 7e-18; Figure 1A). P. berghei genes with an essential ortholog in T. gondii were significantly more likely to be essential, compared to P. berghei genes with a non-essential ortholog in T. gondii (71% vs. 17%; Fisher test p = 6e-59; Figure 1A). There was a significant correlation in relative fitness values of P. berghei and T. gondii (Spearman correlation coefficient 0.47; p = 3e-89; n = 1617; Figure 1B). The essentiality of 2502 P. berghei genes was not tested, but the essentiality information of T. gondii orthologs may be used to predict their essentiality in P. berghei. Of these, 687 genes had an essential ortholog in T. gondii and thus may be predicted as essential in P. berghei (Dataset 116).\n\n(A) P. berghei genes with an ortholog in T. gondii were more likely to be essential, compared to P. berghei genes without an ortholog in T. gondii (Fisher test p = 7e-18). P. berghei genes with an essential ortholog in T. gondii were significantly more likely to be essential compared to P. berghei genes with a non-essential ortholog in T. gondii (Fisher test p = 6e-59). (B) There was a significant correlation in relative fitness values of P. berghei and T. gondii (Spearman correlation coefficient 0.47; p = 3e-89; n = 1617). Genes classified as essential in both species are colored red. Genes classified as non-essential in both species are colored blue. 
Genes that are essential in only one of the species are colored green.\n\nWe argue that genes identified as essential in both apicomplexan parasites could be more useful drug targets for the following reasons: 1) Genome-scale fitness screens often involve significant false positives and false negatives7, thus genes identified as essential in independent experiments in different parasites could be more confidently assigned as essential; 2) the substantial conservation of gene essentiality between the two parasites demonstrates that essentiality information in T. gondii offers relevant information about gene essentiality in P. berghei; 3) genes that are essential in both P. berghei and T. gondii should be more likely to be essential in human malarial species, such as P. falciparum and P. vivax; 4) genes that are essential in both P. berghei and T. gondii should be more likely to be essential across different developmental stages of Plasmodium, which is a highly desirable property of Plasmodium drug targets17. We thus identified 710 genes that were essential in both species. A total of 289 of these 710 genes encode enzymes, which are typically used as drug targets against pathogens. Of these 289 genes, 245 had an ortholog in all Plasmodium species and did not have more than one trans-membrane segment. We removed proteins with more than one trans-membrane segment, as these are often difficult to purify for in vitro assays. Of the 245 proteins, 30 showed no significant sequence similarity to any human proteins (listed in Table 1), and 83 showed less than 30% identity and 151 showed less than 40% identity to any human protein (Dataset 116). Figure 2 shows the flow chart of the selection process.\n\nAmong the P. berghei enzymes that were not tested for essentiality, 186 had an essential ortholog in T. gondii and thus may be predicted as essential in P. berghei. 
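The winnowing just described (essential in both parasites, enzyme, conserved across Plasmodium, at most one trans-membrane segment, low identity to human proteins) is a straight sequence of filters. A toy sketch follows; the field names and example records are invented for illustration and are not from the published dataset:

```python
def prioritize_targets(genes, max_human_identity=40.0):
    """Keep enzymes essential in both parasites, conserved across
    Plasmodium, with at most one trans-membrane segment and low
    sequence identity to any human protein (field names illustrative)."""
    return [g["id"] for g in genes
            if g["essential_pb"] and g["essential_tg"]
            and g["is_enzyme"]
            and g["ortholog_in_all_plasmodium"]
            and g["tm_segments"] <= 1
            and g["best_human_identity"] < max_human_identity]

# Invented example records, for illustration only.
example = [
    {"id": "gene_A", "essential_pb": True, "essential_tg": True,
     "is_enzyme": True, "ortholog_in_all_plasmodium": True,
     "tm_segments": 0, "best_human_identity": 25.0},
    {"id": "gene_B", "essential_pb": True, "essential_tg": False,
     "is_enzyme": True, "ortholog_in_all_plasmodium": True,
     "tm_segments": 0, "best_human_identity": 25.0},
    {"id": "gene_C", "essential_pb": True, "essential_tg": True,
     "is_enzyme": True, "ortholog_in_all_plasmodium": True,
     "tm_segments": 3, "best_human_identity": 25.0},
]
```

Here only gene_A survives: gene_B fails the essential-in-both filter and gene_C has too many trans-membrane segments.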
To increase confidence that these genes are essential in Plasmodium, we considered 53 genes that were conserved across Plasmodium and apicomplexan species. Among the enzymes tested for essentiality, this criterion yielded a set in which 77% of the enzymes were essential, suggesting high enrichment for essentiality among the predicted essential enzymes. In total, 28 of these enzymes had low sequence similarity (<40% identity) with human proteins and thus may also be considered as potential drug targets (Dataset 116).\n\nEssential genes show different expression, evolutionary and functional properties9. We thus tested whether similar patterns would be observed for P. berghei. Essential P. berghei genes showed higher mRNA expression levels in asexual stages, but lower expression levels in sexual stages compared to non-essential genes (Figure 3A). Proteins encoded by essential genes were more likely to be detected by mass spectrometry in different developmental stages compared to non-essential genes (Figure 3B). Essential genes showed a lower evolutionary rate (dN and dN/dS) and higher conservation in apicomplexan species (Figure 3C). Essential genes were significantly enriched in functional classes, such as “Translation”, “Ribosome”, “DNA replication”, “Intracellular protein transport”, “Cytoplasm”, and “Nucleus” (Figure 4).\n\n(A) Essential P. berghei genes showed higher mRNA expression levels in asexual stages, but lower mRNA expression levels in sexual stages. The mean FPKM values for the essential and non-essential genes were calculated for different developmental stages and their log2 ratio was taken. All stages except ‘ookinete 24h’ showed a statistically significant difference between essential and non-essential genes (t-test; p < 0.05). The RNA-seq data were taken from Otto et al.13. (B) Proteins encoded by essential genes were more likely to be detected by mass spectrometry in different stages compared to non-essential genes. 
All stages except ‘sporozoites’ showed a significant difference between essential and non-essential genes (Chi-square test; p < 0.05). Overall, 47% of the tested genes were essential. The proteomics data were obtained from Hall et al.14. (C) Essential genes showed a lower evolutionary rate and higher conservation across apicomplexan species. The mean dN and dN/dS values for essential and non-essential genes were calculated and their log2 ratio was taken. These data were taken from Hall et al.14. The mean number of apicomplexan species (out of six), in which an ortholog was identified, was calculated for essential and non-essential genes and their log2 ratio was taken. dN and conservation in apicomplexan species showed a statistically significant difference between essential and non-essential genes (t-test; p < 0.05), but not dN/dS.\n\nThe Gene Ontology information for Plasmodium falciparum genes was obtained from PlasmoDB10 and assigned to their P. berghei orthologs. Classes with a significant difference in the proportion of essential genes (Chi-square test; p < 0.05) are marked with *.\n\n\nDiscussion\n\nThe recent availability of gene essentiality data from P. berghei and the related apicomplexan T. gondii provides an unprecedented opportunity to identify potential drug targets to accelerate anti-malarial drug discovery. We find a significant correlation of gene essentiality between P. berghei and T. gondii (Figure 1). Thus, the information about gene essentiality in T. gondii provides independent experimental support for gene essentiality in P. berghei, which not only increases confidence in gene essentiality in P. berghei, but also increases the likelihood that these genes would be essential in other Plasmodium species that cause human malaria, and probably in different Plasmodium developmental stages. Drug targets that are essential in multiple species and stages of Plasmodium are particularly desirable17. 
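The enrichment comparisons reported in the Results (e.g., essentiality by ortholog status) are Fisher exact tests on 2x2 tables. The paper ran these in R; a minimal one-sided version can be written with only the Python standard library. The counts in the examples below are toy values, not the study's:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided (enrichment) Fisher exact test for the 2x2 table
    [[a, b], [c, d]]: P(count in cell a >= observed) under the
    hypergeometric null of fixed row and column totals."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom
```

For the toy table [[3, 1], [1, 3]] this gives 17/70, about 0.243, and the maximally enriched table [[5, 0], [0, 5]] gives 1/252, about 0.004.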
Novel essential genes in Plasmodium could also be predicted based on the essentiality of their orthologs in T. gondii. Further prioritization of these genes could be made based on their conservation across Plasmodium and apicomplexan species, low sequence similarity to human proteins, as well as practical information, such as previous availability of clones, assays, protein structure and inhibitors18,19. The high conservation of essentiality between P. berghei and T. gondii may allow prediction of essential genes in other apicomplexan pathogens, such as Cryptosporidium.\n\nWe found gene and protein properties significantly associated with essentiality in P. berghei. At the mRNA level, essential genes, compared to non-essential genes, were expressed at higher levels in asexual stages, but at lower levels in sexual stages (Figure 3A). Since gene essentiality was measured at the asexual stage, this might explain the positive correlation between essentiality and mRNA expression in asexual stages. Proteins encoded by essential genes were more likely to be detected by mass spectrometry in different developmental stages (Figure 3B). Essential genes showed lower evolutionary rates and higher conservation across apicomplexan species (Figure 3C). The higher evolutionary conservation of essential genes is well-documented20. We find Gene Ontology classes “Translation”, “Ribosome”, “DNA replication”, “Intracellular protein transport”, “Cytoplasm”, and “Nucleus” to be significantly enriched in essential genes (Figure 4). The “Translation” class was also enriched in essential genes after excluding “Ribosome” genes (69% essential; Chi-square test; p = 0.0001), suggesting that enrichment of essential genes in the “Translation” category is not only due to ribosomal genes. Thus, enzymes involved in protein translation may be important targets for anti-malarial drug discovery.\n\n\nData availability\n\nThe in vivo relative growth rate data for 2574 genes of P. 
berghei was obtained from the PlasmoGEM database (http://plasmogem.sanger.ac.uk/phenotypes)8. The genome-wide CRISPR screening data for the relative fitness of 8151 T. gondii genes during infection of human fibroblast cells was obtained from Sidik et al.9.\n\nDataset 1: Fitness, expression, functionality, conservation and evolutionary information of Plasmodium berghei genes. doi: 10.5256/f1000research.10559.d14869816", "appendix": "Author contributions\n\n\n\nG.P.S. conceived and designed the study, performed the research and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work is supported by an Early Career Fellowship to G.P.S. by the Wellcome Trust/DBT India Alliance (IA/E/15/1/502297).\n\n\nAcknowledgements\n\nThe author would like to acknowledge suggestions and criticism on the manuscript by Ms. Preeti Goel.\n\n\nReferences\n\nWorld Health Organization: The World Malaria Report 2015. 2015. Reference Source\n\nDuru V, Witkowski B, Ménard D: Plasmodium falciparum Resistance to Artemisinin Derivatives and Piperaquine: A Major Challenge for Malaria Elimination in Cambodia. Am J Trop Med Hyg. 2016; 95(6): 1228–1238. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlonso PL, Brown G, Arevalo-Herrera M, et al.: A research agenda to underpin malaria eradication. PLoS Med. 2011; 8(1): e1000406. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGilbert IH: Drug discovery for neglected diseases: molecular target-based and phenotypic approaches. J Med Chem. 2013; 56(20): 7719–7726. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSpangenberg T, Burrows JN, Kowalczyk P, et al.: The open access malaria box: a drug discovery catalyst for neglected diseases. PLoS One. 2013; 8(6): e62906. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWells TN, Hooft van Huijsduijnen R, Van Voorhis WC: Malaria medicines: a glass half full? Nat Rev Drug Discov. 
2015; 14(6): 424–442. PubMed Abstract | Publisher Full Text\n\nGomes AR, Bushell E, Schwach F, et al.: A genome-scale vector resource enables high-throughput reverse genetic screening in a malaria parasite. Cell Host Microbe. 2015; 17(3): 404–413. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwach F, Bushell E, Gomes AR, et al.: PlasmoGEM, a database supporting a community resource for large-scale experimental genetics in malaria parasites. Nucleic Acids Res. 2015; 43(Database issue): D1176–D1182. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSidik SM, Huet D, Ganesan SM, et al.: A Genome-wide CRISPR Screen in Toxoplasma Identifies Essential Apicomplexan Genes. Cell. 2016; 166(6): 1423–1435.e12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAurrecoechea C, Brestelli J, Brunk BP, et al.: PlasmoDB: a functional genomic database for malaria parasites. Nucleic Acids Res. 2009; 37(Database issue): D539–D543. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAurrecoechea C, Barreto A, Basenko EY, et al.: EuPathDB: the eukaryotic pathogen genomics database resource. Nucleic Acids Res. 2016; 45(D1): D581–D591, pii: gkw1105. PubMed Abstract | Publisher Full Text\n\nWolf YI, Koonin EV: A tight link between orthologs and bidirectional best hits in bacterial and archaeal genomes. Genome Biol Evol. 2012; 4(12): 1286–1294. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOtto TD, Böhme U, Jackson AP, et al.: A comprehensive evaluation of rodent malaria parasite genomes and gene expression. BMC Biol. 2014; 12: 86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHall N, Karras M, Raine JD, et al.: A comprehensive survey of the Plasmodium life cycle by genomic, transcriptomic, and proteomic analyses. Science. 2005; 307(5706): 82–86. PubMed Abstract | Publisher Full Text\n\nKrogh A, Larsson B, von Heijne G, et al.: Predicting transmembrane protein topology with a hidden Markov model: application to complete genomes. 
J Mol Biol. 2001; 305(3): 567–580. PubMed Abstract | Publisher Full Text\n\nSingh G: Dataset 1 in: Conservation of gene essentiality in Apicomplexa and its application for prioritization of anti-malarial drug targets. F1000Research. 2017. Data Source\n\nBurrows JN, van Huijsduijnen RH, Möhrle JJ, et al.: Designing the next generation of medicines for malaria control and eradication. Malar J. 2013; 12: 187. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMagariños MP, Carmona SJ, Crowther GJ, et al.: TDR Targets: a chemogenomics resource for neglected diseases. Nucleic Acids Res. 2012; 40(Database issue): D1118–D1127. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrowther GJ, Shanmugam D, Carmona SJ, et al.: Identification of attractive drug targets in neglected-disease pathogens using an in silico approach. PLoS Negl Trop Dis. 2010; 4(8): e804. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDoyle MA, Gasser RB, Woodcroft BJ, et al.: Drug target prediction and prioritization: using orthology to predict essentiality in parasite genomes. BMC Genomics. 2010; 11: 222. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "19085", "date": "16 Jan 2017", "name": "Gregory J. Crowther", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis paper analyzes genome-wide data on gene essentiality from two apicomplexan parasites: Plasmodium berghei (the cause of malaria in rodents) and Toxoplasma gondii (the cause of toxoplasmosis). The paper is a new analysis of previously reported data (rather than a presentation of new wet-lab results), which is fine. Those whole-genome datasets are so rich that the papers with the original data cannot possibly cover every interesting angle, so I am happy to see interesting follow-up papers such as this one, which offers additional insight into the datasets.\n\nThe following comments go from broad to specific.\n\nBroad\n\nWhile the analysis is interesting, I’m not fully convinced that it advances malaria drug discovery in important ways; it might actually be most useful as an investigation of basic apicomplexan parasite biology.  Target-based drug discovery researchers are certainly glad to know whether particular genes of interest (corresponding to specific enzymes or pathways in which they have expertise) are essential or not. However, the figures present genome-wide trends that, while interesting, don’t seem that helpful in prioritizing possible drug targets.\n\nFigure 1 is probably the most relevant to drug discovery. It shows that genes found to be essential in one species (P. berghei or T. 
gondii) are more likely to also be essential in the other; thus, P. berghei genes not covered by the Gomes et al. (2015) screen1 are fairly likely to be essential if their T. gondii orthologs are essential.\n\nFigure 2 shows a prioritization exercise which is not incorrect, but I don’t think sequence similarity to human proteins is an especially useful criterion. (This is also a limitation of Table 1, in my view). The hope is that we can avoid toxicity by targeting parasite proteins that are dissimilar to human proteins; however, overall sequence similarities tell us very little about whether a parasite protein will have any binding pockets (each of which represents a small part of the total amino acid sequence) that, in three dimensions, closely resemble any binding pockets of human proteins.\n\nFigure 3 shows gene expression data at the level of transcripts and proteins; I don’t think this information really applies to drug discovery. (For example, I don’t think anyone should say of a particular target, “Well, this isn’t highly expressed; maybe it isn’t a good/essential target after all.” If I recall correctly, some excellent targets such as DHFR and PfATP4 are not expressed that highly.)\n\nFigure 4 shows that some functional classes of proteins have a higher percentage of essential proteins than others – but I don’t think this helps us choose possible drug targets either. Even the right-most categories have plenty of essential genes, which is why, for example, there is interest in targeting fatty acid metabolism, the second-lowest category in terms of percent essentiality (see, for example, Shears et al.2). Likewise, the unimpressive-looking “transport” category (~52% essential) includes PfATP4, a red-hot target of current Plasmodium research (see Wells et al.3). 
Drug discovery researchers do not usually think in terms of the big broad categories shown in Figure 4, so knowing percent essentiality by category won’t help them much with target selection.\n\nThe above observations lead me to the overall recommendation to revise the paper in one of two ways. Option 1 is to emphasize the drug-discovery stuff less and the basic biology more. Option 2 is to enhance the drug-discovery theme by addressing my concerns about the figures (i.e. explaining why they are more relevant to drug discovery than I’m giving them credit for) and/or adding analyses that have clearer, stronger relevance to drug discovery. The paper does not currently try to combine the essentiality data with genome-wide predictions of “druggability” (which are hard!), but perhaps a collaborator could be enlisted to help with that. In general, most proteins (including most essential proteins) are not that druggable, so essentiality information in the absence of druggability information does not get us that far down the drug-discovery road.\n\nSpecific\nFigure 1B: The legend says that green dots represent “non-conserved” proteins. I think that only conserved proteins are shown in this panel, and the green dots are proteins that are neither essential in both species nor nonessential in both species. Please check.\n\nFigure 2: Aside from my above-mentioned concern about homology to human proteins, it might make sense to show the arrows as follows: 710 => 289 => 245 => 151 => 83 => 30, thus showing the winnowing of the targets with additional criteria. In its current form, the figure initially led me to think, incorrectly, that the 245 genes could be split into subgroups of 30, 83, and 151.\n\nFigure 3: For 3A and 3B, the transcriptome data (relative abundance) don’t seem to correlate that closely with the proteome data (detectable or not). 
For example, essential gene expression in the sexual stages looks low at the level of RNA in 3A but average-to-high at the protein level in 3B. Are such discrepancies surprising/interesting? Discuss in the Discussion! Also, briefly define dN and dS (nonsynonymous and synonymous substitutions; 3C) somewhere in the paper. Also, to improve clarity, consider using one color for the bars corresponding to the asexual stages and another color for the bars corresponding to the sexual stages.\n\nFigure 4: Others must have done analyses like this for other (non-apicomplexan) species, e.g., of bacteria. Please compare the Figure 4 data to previous work in the Discussion. Also, why did the “cytoplasm” category come out as statistically significant? Are there a huge number of genes in that category?\n\nCan a paragraph be added to the Discussion on what sort of specific work might follow naturally from the present analysis? That would help readers appreciate the significance of the present work.", "responses": [] }, { "id": "19792", "date": "13 Feb 2017", "name": "Didier Picard", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis Research Note reports on an interesting and potentially useful exercise to identify and to prioritize candidates for target-based drug development in Plasmodium. The whole approach is relatively straightforward and provides a list of candidates to think about, not more, not less. 
Additional considerations could subsequently be applied by others to home in on reasonable targets to focus on. Overall, this short report was worth publishing, but would benefit from some revisions outlined below.\nSpecific comments:\nHow many genes are experimentally essential in both species is mentioned in the text at a relatively late stage of the presentation. It would be helpful to mention it earlier, e.g. in the legend to Figure 1 (the number of red dots).\n\nAt some point, the author focuses on enzymes as targets. I do not think that enzymes are the only druggable targets. But if that's what the author wants to focus on, the term \"enzyme\" should be defined. Is it just based on the GO term associated with these genes/proteins?\n\n40% sequence identity is still a lot, and may be too much if active sites are even more highly conserved. Moreover, in this context I also agree with point 2 of the referee report by Gregory Crowther1.\n\nWhile I agree with Gregory Crowther's comment 31 about the relevance to drug discovery of the data in Figures 3 and 4, I still find this analysis interesting and not superfluous in the context of the overall story presented here.\n\nFigure 2: I share the confusion with Gregory Crowther1 with respect to the math here. The text at the bottom of page 3 clearly suggests that 245 = 30+83+151, which of course cannot be. This needs to be fixed/clarified.", "responses": [] } ]
1
https://f1000research.com/articles/6-23
https://f1000research.com/articles/6-21/v1
09 Jan 17
{ "type": "Software Tool Article", "title": "Pathogen Sequence Signature Analysis (PSSA): A software tool for analyzing sequences to identify microorganism genotypes", "authors": [ "Karina Salvatierra", "Hector Florez" ], "abstract": "Introduction The chikungunya virus (CHIKV) is an arbovirus vectored by Aedes mosquitoes that infects humans in tropical and sub-tropical areas of Asia and Africa. Recently, outbreaks have been reported in tropical and sub-tropical areas of countries that were previously unaffected (e.g., Brazil, Colombia). Currently, the following geographical genotypes have been identified through phylogenetic analysis of CHIKV E1 gene sequences: the West African (WAf), East/Central/South African (ECSA), and Asian genotypes. Outbreaks in a geographical area can happen with the same or different genotypes. Determining which genotypes are circulating in an outbreak is important for public health management. Objectives To create a computer-based system available online that is suitable for detecting changes in CHIKV nucleotide and amino acid sequences and identifying their corresponding geographical genotype. Methods We used several computer frameworks, tools, programming languages, algorithms, and infrastructure systems to build a software tool that analyzes changes in nucleotide and amino acid sequences and identifies different geographical genotypes through phylogenetic analysis. Results We have built an online software tool called Pathogen Sequence Signature Analysis (PSSA) that allows researchers to analyze nucleotide and amino acid sequence variations between sample CHIKV sequences taken from infected patients and obtained through conventional Sanger sequencing, to identify their corresponding geographical genotype. Conclusion PSSA is able to analyze sequences in a simple and effective manner, and includes proper documentation (i.e., UML diagrams) and also basic examples that serve to test the algorithm. 
Furthermore, PSSA provides various ways to visualize the data in order to aid understanding and interpretation of results. Results provided by PSSA will be useful for the identification of circulating CHIKV genotypes and public health surveillance. PSSA is available at: http://pssa.itiud.org.", "keywords": [ "Chikungunya virus", "Public health", "sequences", "information system", "phylogenetic analysis." ], "content": "Introduction\n\nChikungunya virus (CHIKV) is an arbovirus (arthropod-borne virus), which is part of the Alphavirus genus and belongs to the Togaviridae family. It is vectored by Aedes mosquitoes and infects humans in tropical and sub-tropical areas of Asia and Africa1. CHIKV has a positive-sense, single-stranded RNA genome of 12 kb, which can persist for years in humans. Symptoms include rash and febrile illness associated with severe arthralgia2.\n\nCurrently, the following geographical genotypes have been identified through phylogenetic analysis of CHIKV E1 gene sequences: the West African (WAf), East/Central/South African (ECSA), and Asian genotypes3–5. However, most CHIKV phylogenetic studies have used only fragmented sequences of the E1 envelope glycoprotein gene, which precludes accurate assessment of the relationships between strains and their evolutionary dynamics.\n\nRecently, some complete sequences from the CHIKV genome have been made available, so we have used the available data for a complete E1 gene to develop an automated computational algorithm that can be used for accurate and rapid identification of this pathogen.\n\nThe online software tool that we have created, called Pathogen Sequence Signature Analysis (PSSA), will allow researchers to analyze nucleotide and amino acid sequence variations between sample CHIKV sequences taken from infected patients, and determine the corresponding genotype from phylogenetic analysis of the results. 
PSSA also provides various ways to visualize the data in order to aid understanding and interpretation of results.\n\n\nMethods\n\nTo build PSSA, we used standard computer-based tools, programming languages, and infrastructure systems. PSSA is based on the Object Oriented Paradigm; thus, for its design, we used the Unified Modelling Language (UML)6. For its development, we used version 7.1.0 of the PHP language (https://www.php.net/) supported by the application server Apache version 2.4.23 (http://www.apache.org/). PSSA’s front end was developed based on version 3.3.7 of Bootstrap (http://getbootstrap.com/). Using Bootstrap is very convenient for this project because it is a framework that properly integrates JavaScript, CSS, and HTML for creating responsive web applications. PSSA uses version 3.1.1 of the jQuery library (https://www.jquery.com/) to facilitate the use of JavaScript functionalities. After PSSA performs an analysis, it provides results in several formats. One result corresponds to an automatically generated report in pdf format, created based on version 0.0.8 of the PHP library ezpdf (https://github.com/rebuy-de/ezpdf). The other results are a force-directed graph, a radial tree, and a cartesian tree, which were developed using version 3.5.17 of the JavaScript library Data-Driven Documents, known as D3 (https://www.d3js.org/). D3 provides services for deploying data via interactive visualizations.\n\nThe geographical genotype of a sample sequence was determined based on well-defined phylogenetic clusters whose origins have been linked to a given geographic region. We analyzed all whole genomes available in the GenBank database (www.ncbi.nlm.nih.gov/genbank/). However, since the E1 gene has been previously used in several studies, including Nunes et al.7, Laiton-Donat et al.8, and Volk et al.9, to determine the genotype of sample sequences, we decided to use the E1 gene. 
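The cluster-based genotype assignment described above can be sketched in miniature. The article does not specify PSSA's actual distance measure or substitution model, so the simple p-distance and the toy reference fragments below are placeholders that illustrate the general idea only:

```javascript
// Illustrative sketch only: assign a genotype by smallest pairwise distance
// to labelled reference sequences. The p-distance used here is a stand-in;
// PSSA's actual algorithm and substitution model are not specified.
function pDistance(a, b) {
  if (a.length !== b.length) {
    throw new Error("sequences must be aligned to equal length");
  }
  let diffs = 0;
  let compared = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i] === "-" || b[i] === "-") continue; // skip missing data
    compared += 1;
    if (a[i] !== b[i]) diffs += 1;
  }
  return compared === 0 ? 1 : diffs / compared;
}

function assignGenotype(sample, references) {
  // references: array of { genotype, seq } with aligned sequences
  let best = null;
  for (const ref of references) {
    const d = pDistance(sample, ref.seq);
    if (best === null || d < best.distance) {
      best = { genotype: ref.genotype, distance: d };
    }
  }
  return best;
}
```

For example, with made-up fragments { genotype: "ECSA", seq: "ACGTACGT" } and { genotype: "Asian", seq: "ACGTTTTT" }, the sample "ACGTACGA" would be assigned to ECSA (distance 1/8 versus 4/8).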
In addition, we performed extensive testing to be sure that our reference strains can accurately classify other sequences.\n\nPSSA stores all information related to nucleotide and amino acid analysis in one relational database. In this project we created this database using version 5.7.16 of the MySQL community server (http://dev.mysql.com/downloads/). The database was designed to handle the required information of the proposed nucleotide and amino acid analysis, as well as phylogenetic analyses. The database is managed using version 4.6.5.2 of phpMyAdmin (https://www.phpmyadmin.net/), which is a tool for administering MySQL databases. The MySQL community also offers MySQL Workbench (https://www.mysql.com/products/workbench/), of which version 6.3 was used to design PSSA’s database.\n\nIn addition, version 4.1 of the Integrated Development Environment (IDE) EclipsePHP (https://eclipse.org/pdt/) was used to develop PSSA. EclipsePHP allows for creation of PHP-based projects supporting PHP, CSS, and JavaScript languages. In addition, EclipsePHP provides git services for storing projects in desired repositories. Thus, we started the development of PSSA by creating a PHP project in EclipsePHP; next, we created all required files and wrote the source code for the algorithms that performed the desired analyses; and finally, we created a git configuration to store the project in the GitHub repository system. To host PSSA, the SUSE Linux Enterprise Server 12 SP2 (https://www.suse.com) was used. This server includes all applications mentioned above that are necessary for PSSA to operate correctly.\n\nThe various libraries, frameworks, and software we used to develop PSSA are all under the GNU General Public License (http://www.gnu.org/licenses/licenses.en.html). 
This means that only free software was used to develop our software tool.\n\nAs reference sequences we used the ECSA genotype HM045811-Ross, the Asian genotype HM045810, and the West African genotype HM045807 (the first identified isolates of the three genotypes), selected from all nucleotide sequences of the CHIKV E1 gene available in the GenBank database (www.ncbi.nlm.nih.gov/genbank/).\n\nThe accession numbers for the representative or alternative CHIKV E1 sequences used in the phylogenetic analysis are as follows:\n\nECSA genotype: HM045823, AM258993, EF012359, AM258991, AB455494, GU199352, FJ445426, FN295485, GU301781, HM045784, HM045822, KP164568, KP164570, KP164569.\n\nAsian genotype: HM045813, HM045800, HM045790, HM045789, EF027140, EF027141, FN295483, L37661, EF452493, FJ807897, HE806461, KF318729, KJ451622, AB860301, KP164567, KP164572, KP164571, KJ451624, KP851709, KT211035, KT211049.\n\nWest African genotype: HM045816, HM045785, HM045815, HM045818, AY726732, HM045820, HM045817.\n\nPSSA has been developed to run in Google Chrome, Mozilla Firefox, Internet Explorer, and Safari; nevertheless, PSSA might run in other browsers such as Opera. To run PSSA, the URL http://pssa.itiud.org must be typed in the web browser.\n\nHowever, if the user wants to run PSSA locally, the following steps need to be followed:\n\n1. Download PSSA from the GitHub repository: https://github.com/florezfernandez/pssa.\n\n2. Install the local server software: Apache, PHP, MySQL, and phpMyAdmin (optional).\n\n3. Run the database script, which is available when the project is restored from the repository.\n\n4. The project contains a file called “connection.php” in which the information regarding the connection to the database is configured. Update the information of the server, database, database user, and database password with the information of the local server. 
The default values provided with PSSA are: server = localhost, database = pssa, database user = “root”, and database password = “”.\n\n\nResults\n\nThe most important feature of PSSA is that the algorithm has been developed to analyze sequences taken from multiple patients. Several sequences can also be submitted per patient. The analysis process is carried out via the following steps:\n\n1. Once the user has accessed PSSA by using the corresponding web address, they must access the “Sequence Analysis” menu, in which the user can select the menu item “Chikungunya Virus”. PSSA then presents the name, description, reference and alternative sequences of available gene(s) for CHIKV (Figure 1). For each gene (e.g., E1), two icons appear on the right side. The first icon displays the reference sequences, while the second displays the alternative sequences.\n\n2. There are two different types of analysis in PSSA. By selecting the reference sequences, users proceed with the mutation analysis, which analyzes nucleotide and amino acid changes in patient sequences, and by selecting alternative sequences users proceed with the phylogenetic analysis, which establishes the phylogenetic relationship between the submitted sequences and determines which genotype they belong to.\n\n3. Once the type of analysis has been selected, the user can provide the patient sequences through FASTA files. PSSA includes an example dataset that can be used to test the system. The symbol '-' can be included in desired sequences in order to specify possible missing data; but tabs, blank spaces, and any symbols other than the ones used in this kind of sequence (i.e., A, C, T, G) are not accepted.\n\nThe first icon represents the reference sequence, while the second icon represents the alternative sequences of the CHIKV E1 gene. 
By selecting the reference sequences, users proceed with the mutation analysis, and by selecting alternative sequences users proceed with the phylogenetic analysis.\n\nAfter patient sequences have been provided, the analysis algorithm is run and the system presents the corresponding results.\n\nFor the mutation analysis the system provides an online report that includes the nucleotide and corresponding amino acid changes in each patient’s sequence (Figure 2). The report can be sent to the user via e-mail in pdf format, and contains both a summary of the results and complete details of the analysis. It also provides a force-directed graph, which presents each sequence as a node, where the set of nodes deployed using the same color represents one patient (Figure 3). In addition, nodes that belong to each patient are clustered based on the number of nucleotide and amino acid changes. When a sequence contains a substantial part of the E1 gene, these results are reliable and can be used for further purposes. Users can confirm that results are reliable by reading the pdf report and comparing it to the force-directed graph.\n\nThe force-directed graph presents each sequence as a node and each set of nodes of the same color as one patient. In addition, nodes that belong to each patient are clustered based on the number of nucleotide and amino acid changes.\n\nFor the phylogenetic analysis, the system presents results as a radial tree (Figure 4) and a cartesian tree (Figure 5) to establish the phylogenetic relationship between sequences and determine which genotype they belong to, based on an array of alternative sequences corresponding to the E1 gene.\n\nThe algorithm is an iterative process. Thus, for each patient file, all sequences are collected by the algorithm; then, for each sequence, some instructions of the algorithm are used to compare the iterated sequence to the selected reference sequence as well as to the alternative sequences. 
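The per-sequence comparison step described above can be sketched as follows. The function names and the change notation are illustrative only (not PSSA's actual code); the sketch combines the input rules from the Results section (A, C, T, G, with '-' for missing data) with a reference-versus-sample comparison:

```javascript
// Sketch of the per-sequence comparison step (illustrative, not PSSA's
// actual implementation): validate the allowed alphabet and list
// nucleotide changes in a sample relative to the reference sequence.
const ALLOWED = new Set(["A", "C", "T", "G", "-"]);

function validateSequence(seq) {
  for (const ch of seq) {
    if (!ALLOWED.has(ch)) {
      throw new Error(`invalid symbol '${ch}' in sequence`);
    }
  }
}

function nucleotideChanges(reference, sample) {
  if (reference.length !== sample.length) {
    throw new Error("incorrect sequence length");
  }
  validateSequence(sample);
  const changes = [];
  for (let i = 0; i < reference.length; i++) {
    if (sample[i] === "-") continue; // missing data, not a mutation
    if (reference[i] !== sample[i]) {
      // conventional notation: <reference base><1-based position><sample base>
      changes.push(`${reference[i]}${i + 1}${sample[i]}`);
    }
  }
  return changes;
}
```

For instance, comparing the sample "ACTT" to the reference "ACGT" would report the single change "G3T", while a '-' in the sample is skipped rather than counted as a mutation.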
All information regarding the analysis is stored and used to generate the reports through the visualizations described above. It is important to mention that there are two different types of analysis in PSSA, even though they are both closely related. The mutation analysis presents results as a report of the nucleotide and amino acid variations in each patient’s sequence and a force-directed graph, whilst the phylogenetic analysis is based on the nucleotide substitution model and results are presented as a radial and cartesian tree to establish the phylogenetic relationship between sequences and determine which genotype they belong to.\n\nThe three genotypes are separated into the different branches, where each branch corresponds to one sequence obtained from the GenBank database.\n\nIt presents the same information as the radial tree, but it shows the three genotypes separated into the different branches more clearly.\n\n\nConclusions\n\nPSSA is an online software tool that provides an automated computational algorithm that guarantees accurate and reliable detection of nucleotide and amino acid sequence variations and provides various ways to visualize the data in order to aid understanding and interpretation of results.\n\nPSSA is different from BMA10, which is another analysis tool developed in our research group, because it not only provides information regarding nucleotide and amino acid changes, but it also compares the sequences with multiple alternative sequences to identify the genotype in a phylogenetic tree.\n\nPSSA will be useful for the identification of circulating CHIKV genotypes in an outbreak and public health surveillance. 
It is a flexible tool, which implies that it could be used for evaluating other microorganisms, such as bacteria (e.g., Mycobacterium tuberculosis), parasites (e.g., Leishmania) or other viruses (e.g., Dengue, Zika).\n\n\nSoftware availability\n\nSoftware available from: http://pssa.itiud.org\n\nLatest source code: https://github.com/florezfernandez/pssa\n\nArchived source code as at the time of publication:\n\nhttp://dx.doi.org/10.5281/zenodo.17992211\n\nLicense: GNU General Public License (GPL)", "appendix": "Author contributions\n\n\n\nKS performed the literature review and drafted the manuscript. HF designed and developed the PSSA software and helped draft the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work presented in this paper has been supported by the Information Technologies Innovation (ITI) Research Group.\n\n\nAcknowledgments\n\nThe authors would like to thank Professor Jorge E. Osorio, Department of Pathobiological Sciences, University of Wisconsin-Madison (USA), for his collaboration in the project.\n\n\nReferences\n\nRobinson MC: An epidemic of virus disease in Southern Province, Tanganyika Territory, in 1952–53. I. Clinical features. Trans R Soc Trop Med Hyg. 1955; 49(1): 28–32. PubMed Abstract | Publisher Full Text\n\nJohnston RE, Peters CJ: Alpha viruses associated primarily with fever and polyarthritis. In: Fields BN, Knipe DM, Howley PM (Eds.), Field Virology. Lippincott-Raven Publishers, Philadelphia, 1996; 843–898.\n\nPowers AM, Brault AC, Tesh RB, et al.: Re-emergence of Chikungunya and O’nyong-nyong viruses: evidence for distinct geographical lineages and distant evolutionary relationships. J Gen Virol. 2000; 81(Pt 2): 471–479. PubMed Abstract | Publisher Full Text\n\nSchuffenecker I, Iteman I, Michault A, et al.: Genome microevolution of Chikungunya viruses causing the Indian Ocean outbreak. PLoS Med. 2006; 3(7): e263. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPowers AM, Logue CH: Changing patterns of Chikungunya virus: re-emergence of a zoonotic arbovirus. J Gen Virol. 2007; 88(Pt 9): 2363–2377. PubMed Abstract | Publisher Full Text\n\nRumbaugh J, Jacobson I, Booch G: The Unified Modeling Language Reference Manual. Pearson Higher Education. 2004. Reference Source\n\nNunes MR, Faria NR, de Vasconcelos JM, et al.: Emergence and potential for spread of Chikungunya virus in Brazil. BMC Med. 2015; 13: 102. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaiton-Donat K, Usme-Ciro JA, Rico A, et al.: Análisis filogenético del virus del chikungunya en Colombia: evidencia de selección purificadora en el gen E1. Biomédica. 2016; 36(Supl.2): 25–34. Publisher Full Text\n\nVolk SM, Chen R, Tsetsarkin KA, et al.: Genome-scale phylogenetic analyses of chikungunya virus reveal independent emergences of recent epidemics and various evolutionary rates. J Virol. 2010; 84(13): 6497–650. PubMed Abstract | Publisher Full Text\n\nSalvatierra K, Florez H: Revised Biomedical Mutation Analysis (BMA): A software tool for analyzing mutations associated with antiviral resistance [version 2; referees: 2 approved]. F1000Res. 2016; 5: 1141. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSalvatierra K, Florez H: PSSA. Zenodo. 2016. Data Source" }
[ { "id": "21444", "date": "18 Apr 2017", "name": "Easwaran Sreekumar", "expertise": [ "Host-pathogen interaction", "virus evolution", "Chikungunya", "Dengue" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe article by Salvatierra & Florez describes the development of a software tool and web interface for analyzing sequences of microorganisms. Essentially, the software is currently configured for analyzing only chikungunya sequences.\nThe reviewer has a number of comments/clarifications to make:\nIntroduction: There is no confirmed evidence that the virus can persist in humans for years, as claimed by the authors \"CHIKV has a positive-sense, single-stranded RNA genome of 12 kb, which can persist for years in humans\". It might be possible that the reviewer has missed such reports in the literature; so a reference citation is required to support this point.\n\nThe reviewer feels that since the reference data included in the software is only suitable for Chikungunya virus, the authors should refrain from making a broad claim in the title of the manuscript that it can be used for other microorganisms.\n\nThe reviewer tried to use the software to analyze two input sequences of differing length. 
The user interface simply prompts ‘incorrect sequence length’, without giving any clue that the input sequences should be of equal length (which I ‘guessed’ from the test data set).\n\nThere are no readily accessible user guidelines (help) or links given in the web interface so that the user can do troubleshooting easily. What happens for the mutation analysis if a user gives out-of-frame sequences? Will it again simply prompt ‘incorrect sequence length’?\n\nThe manuscript does not describe the requirements for the input sequence (minimum length, maximum length, reading frame, etc.), except that it should be in FASTA format.\n\nWhat is the exact algorithm used in the phylogenetic tree? Does it use the neighbor-joining method, maximum likelihood analysis, or any other method? What are the default settings of the ‘some instructions’ of the algorithms and the nucleotide substitution model used to compare the iterated sequences?\n\nIs there any provision to do on-screen editing of the sequence (I think, no) or is it that each time one needs to input an edited sequence file?\n\nIt provides on-screen outputs. Is there any way to save these outputs, and in which format?\n\nThe reviewer feels that the software does not give added advantage over many of the stand-alone, free programs such as BioEdit or MEGA, unless it has more user-friendly features. It provides a ready reference for a small set of known CHIKV Genotypes which would be useful for a newcomer in the field to identify the genotypes. But even for a little more advanced user, the interface has no features for a customizable analysis.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? 
Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly", "responses": [] }, { "id": "22414", "date": "15 May 2017", "name": "Massimo Ciccozzi", "expertise": [ "Evolutionary analysis and molecular epidemiology" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nSalvatierra & Florez describe the development of a software tool and web interface for analyzing only Chikungunya sequences.\nThe idea is very interesting, but I have several concerns about the utility of its application.\nIt is not well documented that Chikungunya virus can persist in humans for a long time.\n\nIn the title it must be underlined that the system identifies Chikungunya virus only; the eventual possibility of expanding it could be mentioned in the text (Discussion section).\n\nThe authors have to better describe the requirements for the sequences used in this tool.\n\nIt is important in phylogenetic analysis to identify the algorithm used, but no mention has been made in the article, no model chosen in the case of a maximum likelihood algorithm, and so on.\n\nI think that in this form, without detailed information for users, it is not possible to accept the article. 
After major revision, this can be a useful and detailed guide.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? No\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No", "responses": [] } ]
1
https://f1000research.com/articles/6-21
https://f1000research.com/articles/5-2348/v1
20 Sep 16
{ "type": "Software Tool Article", "title": "biojs-io-biom, a BioJS component for handling data in Biological Observation Matrix (BIOM) format", "authors": [ "Markus J. Ankenbrand", "Niklas Terhoeven", "Sonja Hohlfeld", "Frank Förster", "Alexander Keller" ], "abstract": "The Biological Observation Matrix (BIOM) format is widely used to store data from high-throughput studies. It aims at increasing interoperability of bioinformatic tools that process this data. However, due to multiple versions and implementation details, working with this format can be tricky. Currently, libraries in Python, R and Perl are available, whilst one for JavaScript is lacking. Here, we present a BioJS component for parsing BIOM data in all format versions. It supports import, modification, and export via a unified interface. This module aims to facilitate the development of web applications that use BIOM data. Finally, we demonstrate its usefulness by two applications that already use this component. Availability: https://github.com/molbiodiv/biojs-io-biom, https://dx.doi.org/10.5281/zenodo.61698", "keywords": [ "biom-format", "ecology", "meta-genomics", "biojs", "parser", "meta-barcoding" ], "content": "Introduction\n\nIn recent years, there has been an enormous increase in biological data available from high-throughput studies. Despite this increase, for many of these studies the general layout of the data after bioinformatic processing is similar to that of traditional assessments, yet complications arise due to the increased size of the data tables. This is the case for transcriptomic and marker-gene community data, where the central matrix consists of counts for each observation (e.g. 
gene or taxon) in each sample, plus a second and third matrix for metadata of both taxa and samples, respectively.\n\nTo avoid handling three matrices and to standardize the data deposition for downstream analyses, the Biological Observation Matrix (BIOM) Format was developed1. One main purpose of the BIOM format is to enhance interoperability between different software suites. Many current leading tools in community ecology and metagenomics support the BIOM format, e.g. QIIME2, MG-RAST3, PICRUSt4, phyloseq5, VAMPS6 and Phinch7. Additionally, libraries exist in Python1, R8 and Perl9 to propagate the standardized use of the format.\n\nInteractive visualization of biological data in a web browser is becoming more and more popular10,11. For the development of web applications that support BIOM data, a corresponding library is currently lacking and would be very useful, since several challenges arise when trying to handle BIOM data. While BIOM format version 1 builds on the JSON format and thus is natively supported by JavaScript, the more recent BIOM format version 2 uses HDF5 and can therefore not be handled natively. Also, the internal data storage can be either dense or sparse, so applications have to handle both cases. Furthermore, application developers need to be very careful when modifying BIOM data, as changes that do not abide by the specification will break interoperability with other tools. Here we present biojs-io-biom, a JavaScript module that provides a unified interface to read, modify, and write BIOM data. It can be readily used as a library by applications that need to handle BIOM data for import or export directly in the browser. To demonstrate the utility of this module it has been used to implement a simple user interface for the biom-conversion-server12. Additionally, the popular BIOM visualization tool Phinch7 has been forked and extended with new features, in particular support for BIOM version 2 by integrating biojs-io-biom13. 
This fork is available as Blackbird via https://github.com/molbiodiv/Blackbird.\n\n\nThe biojs-io-biom component\n\nThe biojs-io-biom library can be used to create new objects (called Biom objects for brevity) by either loading file content directly via the static parse function or by initialization with a JSON object:\n\n\n\nThe data is checked for integrity and compliance with the BIOM specification. Missing fields are created with default content. All operations that set attributes of the Biom object with the dot notation are also checked and prompt an error if they are not allowed.\n\n\n\nBesides checking and maintaining integrity, the biojs-io-biom library implements convenience functions. This includes getters and setters for metadata as well as data accession functions that are agnostic to internal representation (dense or sparse). But one of the main features of this library is the capability of handling BIOM data in both versions 1 and 2 by interfacing with the biom-conversion-server12. Handling of BIOM version 2 in JavaScript directly is not possible due to its HDF5 binary format. The only reference implementation of the format is in C and trying to transpile the library to JavaScript using emscripten14 failed due to strong reliance on file operations (see discussions: 15,16). Using the conversion server allows developers to use BIOM of both versions transparently. Biom objects also expose the function write, which exports the object as version 1 or version 2.\n\n\nApplication\n\nTo demonstrate the utility of this module it has been used to implement a user interface for the biom-conversion-server12. Besides providing an API it is now also possible to upload files using a file dialog. The uploaded file is checked using this module and converted to version 1 on the fly if necessary. It can then be downloaded in either version 1 or 2. 
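The representation-agnostic data access mentioned above can be illustrated with a minimal mock (this is NOT the real biojs-io-biom API, only a sketch of the pattern). BIOM version 1 stores "dense" data as nested rows and "sparse" data as [row, column, value] triples, with omitted entries equal to zero, so an accessor only needs to branch on the matrix type:

```javascript
// Minimal mock of a representation-agnostic accessor, in the spirit of the
// component described above. BIOM v1 marks the encoding via matrix_type:
// "dense" data is an array of rows; "sparse" data is a list of
// [row, column, value] triples where absent entries are implicitly zero.
function getDataAt(biom, row, col) {
  if (biom.matrix_type === "dense") {
    return biom.data[row][col];
  }
  for (const [r, c, v] of biom.data) {
    if (r === row && c === col) return v;
  }
  return 0; // not stored in the sparse triples => zero count
}
```

With this, the same observation matrix yields identical lookups in both encodings, e.g. { matrix_type: "dense", data: [[0, 5], [3, 0]] } and { matrix_type: "sparse", data: [[0, 1, 5], [1, 0, 3]] } both return 5 at position (0, 1).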
As most of the functionality is provided by the biojs-io-biom module, the whole interface is simply implemented with a few additional lines of code.\n\nAs a second example, the Phinch framework7 has been forked to Blackbird13 and enhanced to allow BIOM version 2. Phinch visualizes the content of BIOM files using a variety of interactive plots. However, due to the difficulties of handling HDF5 data, only BIOM version 1 is supported. This is unfortunate as most tools nowadays return BIOM version 2 (e.g. QIIME from version 1.9,12 and Qiita17). It is possible to convert from version 2 to version 1 without loss of information but that requires an extra step using the command line. By including this biojs-io-biom module and the biom-conversion-server into Blackbird it was possible to add support for BIOM version 2 along with some other improvements13.\n\nAs the biojs-io-biom module resolves the import and export challenges, one of the next steps is the development of a further BioJS module to present BIOM data as a set of data tables. In order to do that for large datasets, sophisticated accession functions capitalizing on the sparse data representation have to be implemented.\n\n\nConclusion\n\nThe module biojs-io-biom was developed to enhance the import and export of BIOM data into JavaScript. Its utility and versatility have been demonstrated in two example applications. It is implemented using the latest web technologies, well tested and well documented. It provides a unified interface and abstracts from details like version or internal data representation. 
Therefore it will facilitate the development of web applications that rely on the BIOM format.\n\n\nSoftware availability\n\nLatest source code https://github.com/molbiodiv/biojs-io-biom\n\nArchived source code as at the time of publication https://zenodo.org/record/61698\n\nLicense MIT\n\nLatest source code https://github.com/molbiodiv/biom-conversion-server\n\nArchived source code as at the time of publication https://zenodo.org/record/61704\n\nLicense MIT\n\nLatest source code https://github.com/molbiodiv/Blackbird\n\nArchived source code as at the time of publication https://zenodo.org/record/61721\n\nLicense BSD 2-Clause", "appendix": "Author contributions\n\n\n\nMethodology: MJA and SH. Investigation: MJA and NT. Software: MJA. Supervision: AK and FF. Writing - original draft: MJA. Writing - review and editing: All authors.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nMJA was supported by a grant of the German Excellence Initiative to the Graduate School of Life Sciences, University of Würzburg (Grant Number GSC 106/3). This publication was supported by the Open Access Publication Fund of the University of Würzburg.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe are grateful to Franziska Saul for fruitful discussions on user interface design. We further thank members of the biom-format, Phinch and hdf5.node projects for quick, kind and helpful responses to our requests.\n\n\nReferences\n\nMcDonald D, Clemente JC, Kuczynski J, et al.: The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome. Gigascience. 2012; 1(1): 7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010; 7(5): 335–336. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeyer F, Paarmann D, D’Souza M, et al.: The metagenomics RAST server –a public resource for the automatic phylogenetic and functional analysis of metagenomes. BMC Bioinformatics. 2008; 9: 386. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLangille MG, Zaneveld J, Caporaso JG, et al.: Predictive functional profiling of microbial communities using 16S rRNA marker gene sequences. Nat Biotechnol. 2013; 31(9): 814–821. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcMurdie PJ, Holmes S: Phyloseq: an R package for reproducible interactive analysis and graphics of microbiome census data. PLoS One. 2013; 8(4): e61217. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuse SM, Mark Welch DB, Voorhis A, et al.: VAMPS: a website for visualization and analysis of microbial population structures. BMC Bioinformatics. 2014; 15: 41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBik HM; Pitch Interactive: Phinch: An interactive, exploratory data visualization framework for–Omic datasets. bioRxiv. 2014; 009944. Publisher Full Text\n\nMcMurdie PJ; The biom-format team: An interface package (beta) for the BIOM file format. R package version 0.3.12. 2014. Reference Source\n\nAngly FE, Fields CJ, Tyson GW: The Bio-Community Perl toolkit for microbial ecology. Bioinformatics. 2014; 30(13): 1926–1927. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorpas M, Jimenez R, Carbon SJ, et al.: BioJS: an open source standard for biological visualisation – its status in 2014 [version 1; referees: 2 approved]. F1000Res. 2014; 3: 55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorpas M: The BioJS article collection of open source components for biological data visualisation [version 1; referees: not peer reviewed]. F1000Res. 2014; 3: 56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnkenbrand MJ: Biom-conversion-server: Version 1.0.0. 2016. 
Publisher Full Text\n\nAnkenbrand MJ: Blackbird: Version 1.2.1. 2016; Accessed: 2016-09-09. Reference Source\n\nKripken/emscripten: Emscripten: An LLVM-to-JavaScript Compiler. Accessed: 2016-09-08. Reference Source\n\nBiom javascript module. Issue #699. biocore/biom-format. Accessed: 2016-09-08. Reference Source\n\nhdf5 javascript in a webbrowser. Issue #29. HDF-NI/hdf5.node. Accessed: 2016-09-08. Reference Source\n\nQiita. Accessed: 2016-09-08. Reference Source
[ { "id": "16546", "date": "03 Oct 2016", "name": "Daniel McDonald", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn Ankenbrand et al., the authors develop a library to enable interaction with BIOM, a file format common in the microbiome field, from the JavaScript programming language. JavaScript is a staple of web-development, and the ability to interact with BIOM formatted files via JavaScript will facilitate the development of web-based tools for microbiome research. As the authors note, libraries for interaction with BIOM files have only been implemented so far in Python, R and Perl. And while Python and Perl have a strong web presence, they are not natively supported in modern web browsers as JavaScript is, and often rely on server-side processing as opposed to the client-side paradigms which JavaScript excels at.\nGeneral comments\nThe API provided by BioJS is minimal. Notably, methods for partitioning, collapsing, transforming, filtering and subsampling are not present. While developers will be able to access sample or observation profiles as a whole, the current release of BioJS pushes much of the common manipulation logic onto the consumer of the library.\n\nThe in-memory representation of the data following a parse by BioJS is either a dense matrix or a dict-of-keys style sparse representation.
As the authors note, specialized methods will need to be created to handle large data efficiently, however the authors may wish to consider placing emphasis instead on specialized data structures such as compressed sparse row or column.\n\nThe highlight with Blackbird is great to see but we were confused by the intention of the Github fork. The codebase suggests that it is more than just a proof of concept to highlight BioJS as there is project-specific branding. Would the authors consider clarifying their position with Blackbird?\n\nThe primary motivator for the development of BIOM-format 2.1.0 was scaling limitations inherent with the JSON-based representation of 1.0.0. Specifically, the “data” key of the JSON string must be parsed in full in order to randomly access individual sample or observation data. This removes the possibility of algorithms which depend on efficient random access patterns for data too large for main memory. Additionally, the overhead associated with representing a large JSON object in memory is high. While we acknowledge HDF5 possesses challenges for web-based interaction with these data, it is important to note that the 1.0.0 JSON-based format is not recommended for modern sized studies using hundreds to thousands to tens of thousands of samples.\n\nThe use of the conversion server is very cool and could be taken a step further by layering a light communication API on top to allow a client to request arbitrary samples. This separation would remove the burden of the client needing to read HDF5 formatted files, greatly lower the memory footprint of the client, and likely be more performant than a pure client-side model as the client would only need to know about what it had requested. This expansion of biojs-io-biom, in our opinion, would have the greatest impact for expanding the use of BIOM formatted data within a web application.\nMajor\nWhen the authors refer to BIOM v2, we believe they are actually referring to BIOM v2.1.0.
There are important distinctions between the format versions. Would the authors consider clarifying the minor version number in discussion?\nMinor\nThe two uses of “accession functions” reads awkwardly as these types of methods are generally described as “accessor functions.” Would the authors consider revising the phrasing?\nDisclosures Daniel McDonald and Evan Bolyen are developers for the BIOM-Format Project.", "responses": [ { "c_id": "2383", "date": "09 Jan 2017", "name": "Markus J. Ankenbrand", "role": "Author Response", "response": "We thank the reviewers for their constructive comments that helped us improve the manuscript. Find our point by point answers below (original comments in bold): The API provided by BioJS is minimal. Notably, methods for partitioning, collapsing, transforming, filtering and subsampling are not present. While developers will be able to access sample or observation profiles as a whole, the current release of BioJS pushes much of the common manipulation logic onto the consumer of the library. Thanks for pointing that out. We continuously add more functions to make use of our library more convenient. I opened a dedicated issue listing the functions that are present in the python library but lacking in ours (https://github.com/molbiodiv/biojs-io-biom/issues/16). We already implemented functions for transformation, normalization and filtering in order to get more feature complete. The in memory representation of the data following parse by BioJS are either in a dense matrix, or in a dict of keys style sparse representation. As the authors note, specialized methods will need to be created to handle large data efficiently, however the authors may wish to consider placing emphasis instead on specialized data structures such as compressed sparse row or column. That is a very good point and something we are evaluating at the moment. The highlight with Blackbird is great to see but we were confused by the intention of the Github fork. 
The codebase suggests that it is more than just a proof of concept to highlight BioJS as there is project-specific branding. Would the authors consider clarifying their position with Blackbird? After feedback from Holly Bik (Principal Investigator on the Phinch framework) we agreed to remove the Blackbird branding and instead merge our improvements back into Phinch. Therefore, we removed references to Blackbird from the manuscript. For more details see the referee report by Holly Bik (18 Oct 2016) and this discussion on GitHub: https://github.com/PitchInteractiveInc/Phinch/issues/63 The primary motivator for the development of BIOM-format 2.1.0 were scaling limitations inherent with the JSON-based representation of 1.0.0. Specifically, the “data” key of the JSON string must be parsed in full in order to random access to individual sample or observation data. This removes the possibility of algorithms which depend on efficient random access patterns for data too large for main memory. Additionally, the overhead associated with representing a large JSON object in memory is high. While we acknowledge HDF5 possesses challenges for web-based interaction with these data, it is important to note that the 1.0.0 JSON-based format is not recommended for modern sized studies using hundreds to thousands to tens of thousands of samples. This is a valid point. By using the JSON representation for our library we re-introduce the limitations of BIOM-format 1.0. We hope to support the HDF5 format in the future. However even with support of HDF5 loading full tables with tens of thousands of samples into the browser might be too memory intensive. Therefore, the next thing we would like to try is the extension of the conversion server with the communication API as you suggested. We added a short paragraph clearly stating our shortcoming and discussing the possible solution at the end of the Application section. 
The use of the conversion server is very cool and could be taken a step further by layering a light communication API on top to allow a client to request arbitrary samples. This separation would remove the burden of the client needing to read HDF5 formatted files, greatly lower the memory footprint of the client, and likely be more performant than a pure client-side model as the client would only need to know about what it had requested. This expansion of biojs-io-biom, in our opinion, would have the greatest impact for expanding the use of BIOM formatted data within a web application. This is a great suggestion and we are eager to work on that for the next major release. We also added this as a future prospect to the manuscript. When the authors refer to BIOM v2, we believe they are actually referring to BIOM v2.1.0. There are important distinctions between the format versions. Would the authors consider clarifying the minor version number in discussion? We added the minor version number whenever we refer to the BIOM format. We left the patch level out as the documentation on biom-format.org only lists the three versions (1.0, 2.0, 2.1). If you feel that the patch level is relevant as well we will gladly add that, too. The two uses of “accession functions” reads awkwardly as these types of methods are generally described as “accessor functions.” Would the authors consider revising the phrasing? Thanks a lot. We revised the phrasing." } ] }, { "id": "16436", "date": "18 Oct 2016", "name": "Holly M. 
Bik", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis manuscript describes the biojs-io-biom toolkit, which includes a conversion library and server for re-formatting Biological Observation Matrix (BIOM) files between versions 1.x (JSON-formatted) and 2.x (HDF5-formatted).\nThe conversion library itself is extremely useful, since it will allow users to convert quickly between BIOM file formats without having to go back to the command line (e.g. QIIME) and easily reformat files for use in various applications.\nI do not have the necessary javascript expertise to comment on the codebase and conversion server backend, so I will offer some general comments on the practical applications outlined in the text:\nSince this project is based on the Phinch framework, I find the \"Blackbird\" rebranding of the fork to be very problematic. The \"Blackbird\" instance is really just an updated release of the Phinch framework, with some bug fixes, added features, and implementation of the new BIOM conversion server.
The rebranding/renaming is confusing for the end user (see comment by other peer reviewer below), and mistakenly implies a number of scenarios that are not accurate: 1) that the authors were involved in the original development of data visualization tools, 2) that the Blackbird rebranding and design changes were approved by the original developers, and 3) the \"Blackbird\" project represents a significant expansion or retooling of the current Phinch framework. I’m fully aware that this is open source software and the authors are free to reuse and share the Phinch codebase, but I don't really see the utility of the \"Blackbird\" rebranding, and creating an additional web instance that mostly replicates the functionality of http://phinch.org will confuse end users.\nSince the authors here are really community contributors to the original Phinch project, I would recommend eliminating the \"Blackbird\" rebranding of the project, and reverting back to Phinch branding (citing the framework release as Phinch v2.0). We will then initiate a pull request to update the bug fixes and integrate the new biojs-io-biom source code to be live on http://phinch.org. The visual layout for Phinch (name, logo and visualization layout) was thoughtfully constructed, and the new Blackbird logo and visual modifications will likely interfere with “brand recognition” that should be attributed to the original Phinch framework.\nOnce this pull request is initiated and completed, the “Application” manuscript text should be updated to reflect the live implementation of the conversion library on a v2.0 Phinch framework at phinch.org.\nOther minor comments:\nCan you please provide details on how and where the \"Blackbird\" instance and biom-conversion-server are currently hosted (e.g. Amazon AWS)?\n\nPlease list the public landing page for the applications mentioned in the text (in case users want to access these tools directly) - e.g.
https://biomcs.iimog.org\n\nThe biom-conversion-server does not appear to be backwards compatible (I could not upload and convert a BIOM 1.x file to 2.x format) - this one-way conversion functionality should be clearly indicated in the first paragraph of the “Application” section. In addition, if users try to upload a BIOM 1.0 file they should be presented with an appropriate error message (I didn’t see one - the tool just froze when I attempted to upload a BIOM 1.0 file).\n\nThere are other BIOM conversion servers that exist, e.g. implementations within the Galaxy framework - see https://toolshed.g2.bx.psu.edu/repository/display_tool?repository_id=b3ae8ca9317b000e&render_repository_actions_for=tool_shed&tool_config=%2Fsrv%2Ftoolshed%2Fmain%2Fvar%2Fdata%2Frepos%2F002%2Frepo_2436%2Fbiom_convert.xml&changeset_revision=501c21cce614 - these alternate tools should be mentioned in the text. How does the biom-conversion-server compare with (and potentially improve on) such Galaxy based tools?", "responses": [ { "c_id": "2384", "date": "09 Jan 2017", "name": "Markus J. Ankenbrand", "role": "Author Response", "response": "Thanks a lot for taking the time to review this article and for the good suggestions for improvement. Find our point by point answers below (original comments in bold): Since this project is based on the Phinch framework, I find the \"Blackbird\" rebranding of the fork to be very problematic. The \"Blackbird\" instance is really just an updated release of the Phinch framework, with some bug fixes, added features, and implementation of the new BIOM conversion server.
The rebranding/renaming is confusing for the end user (see comment by other peer reviewer below), and mistakenly implies a number of scenarios that are not accurate:  1) that the authors were involved in the original development of data visualization tools,  2) that the Blackbird rebranding and design changes were approved from by the original developers, and  3) the \"Blackbird\" project represents a significant expansion or retooling of the current Phinch framework. I’m fully aware that this is open source software and the authors are free to reuse and share the Phinch codebase, but I don't really see the utility of the \"Blackbird\" rebranding, and creating an additional web instance that mostly replicates the functionality of http://phinch.org will confuse end users. Since the authors here are really community contributors to the original Phinch project, I would recommend eliminating the \"Blackbird\" rebranding of the project, and reverting back to Phinch branding (citing the framework release as Phinch v2.0).We will then initiate a pull request to update the bug fixes and integrate the new biojs-io-biom source code to be live on http://phinch.org The visual layout for Phinch (name, logo and visualization layout) was thoughtfully constructed, and the new Blackbird logo and visual modifications will likely interfere with “brand recognition” that should be attributed to the original Phinch framework. Once this pull request is initiated and completed, the “Application” manuscript text should be updated to reflect the live implementation of the conversion library on a v2.0 Phinch framework at phinch.org. Thanks for sharing your thoughts on this delicate topic. We are grateful to you for suggesting a more satisfactory solution. As you suggested we prepared the pull request that integrates the additional features into Phinch and removed Blackbird branding from our fork. We look forward to the changes going live on phinch.org. 
We will use the same procedure for future improvements as long as you are interested in merging them. Can you please provide details on how and where the \"Blackbird\" instance and biom-conversion-server are currently hosted (e.g. Amazon AWS)? The biom-conversion-server and the Phinch preview instance are both docker containers currently running on a virtual machine with Ubuntu 16.04 (2GB RAM, 1CPU) on a dedicated server hosted by Hetzner. Please list the public landing page for the applications mentioned in the text (in case users want to access these tools directly) - e.g. https://biomcs.iimog.org Added links to the manuscript The biom-conversion-server does not appear to be backwards compatible (I could not upload and convert a BIOM 1.x file to 2.x format) - this one-way conversion functionality is should be clearly indicated in the first paragraph of the “Application” section. In addition, if users try to upload a BIOM 1.0 file they should be presented with an appropriate error message (I didn’t see one - the tool just froze when I attempted to upload a BIOM 1.0 file). In general the biom-conversion-server is not limited to one way conversion. Attempts to replicate the described behaviour were not successful so it might be a problem with a specific BIOM file. We are eager to find the cause of this issue and opened a bug report here: https://github.com/molbiodiv/biom-conversion-server/issues/4 However we need your assistance in tracking down this bug. There are other BIOM conversion servers that exist, e.g. implementations within the Galaxy framework - see https://toolshed.g2.bx.psu.edu/repository/display_tool?repository_id=b3ae8ca9317b000e&render_repository_actions_for=tool_shed&tool_config=%2Fsrv%2Ftoolshed%2Fmain%2Fvar%2Fdata%2Frepos%2F002%2Frepo_2436%2Fbiom_convert.xml&changeset_revision=501c21cce614 - these alternate tools should be mentioned in the text. How does the biom-conversion-server compare with (and potentially improve on) such Galaxy based tools? 
Thanks for pointing that out. We included the Galaxy biom_convert tool in our discussion." } ] }, { "id": "16545", "date": "25 Oct 2016", "name": "Joseph Nathaniel Paulson", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAnkenbrand et al. provide a javascript library to interact with the microbial consortia BIOM format version 1 class. As the authors note, a javascript library could be a great benefit to the community as many commonly used tools like QIIME and Mothur produce BIOM formatted objects. However, the article and software are missing a few key components for a fully positive review.\n\nMajor comments:\n\nThere is a historical context that Ankenbrand et al. miss in discussing biom-format and subsequently imply that the biom-format is more widely adopted than being field specific format. If the authors leave the introduction more general, then I would suggest they include more background on the history of high-throughput data storage and reproducibility in programmatic languages, perhaps starting with the Minimum Information About a Microarray Experiment - MIAME format 1 and exprSet classes developed in R about 15 years ago before the genomics standards consortium (formed in 2005), for which biom-format is a member.\n\nThe authors posit that the BIOM format version 2 / 2.1 that moved to HDF5 made it impossible for javascript libraries to manipulate it natively.
We found a javascript library that “takes advantage of the compatibility of V8 and HDF5”. Were the authors unable to build from this library to take advantage of the version 2 BIOM format? The BIOM version 2 / 2.1 formats were designed specifically to handle many of the shortcomings of the version 1 in terms of memory and design. It would be advantageous of the users to build from this if possible to at least read in the BIOM v2.1 HDF5 files.\n\nIn my own installation of the software, I keep getting error messages when I attempt to create a biom object, see here: http://tinyurl.com/f1000-review. If the reviewers could please clarify the installation guide on the github repo.\n\nMinor comments:\n\nThe second sentence needs clarification. “Despite this increase, for many of these studies the general basic layout of the data is similar to traditional assessment after bioinformatical processing, yet complications arise due to the increased size of the data tables.”\n\nThe citation for the BIOM interface R package has been deprecated. The appropriate citation is: Paul J. McMurdie and Joseph N Paulson (2015). biomformat: An interface package for the BIOM file format. R/Bioconductor package version 1.0.0.2.", "responses": [ { "c_id": "2385", "date": "09 Jan 2017", "name": "Markus J. Ankenbrand", "role": "Author Response", "response": "Thanks a lot for the thorough review and the good suggestions for improvement. Find our point by point answers below (original comments in bold): There is a historical context that Ankenbrand et al. miss in discussing biom-format and subsequently imply that the biom-format is more widely adopted than being field specific format. 
If the authors leave the introduction more general, then I would suggest they include more background on the history of high-throughput data storage and reproducibility in programmatic languages, perhaps starting with the Minimum Information About a Microarray Experiment - MIAME format 1 and exprSet classes developed in R about 15 years ago before the genomics standards consortium (formed in 2005), for which biom-format is a member. As suggested we extended the introduction to cover more of the historical context. The authors posit that the BIOM format version 2 / 2.1 that moved to HDF5 made it impossible for javascript libraries to manipulate it natively. We found a javascript library that “takes advantage of the compatibility of V8 and HDF5”. Were the authors unable to build from this library to take advantage of the version 2 BIOM format? The BIOM version 2 / 2.1 formats were designed specifically to handle many of the shortcomings of the version 1 in terms of memory and design. It would be advantageous of the users to build from this if possible to at least read in the BIOM v2.1 HDF5 files. There is a fine distinction between JavaScript inside a browser and on a server (nodejs) that we previously did not make sufficiently clear in our manuscript. For the nodejs environment there is in fact a library that handles data in HDF5 format (https://github.com/HDF-NI/hdf5.node). As our library is supposed to work equally well in both environments we tried to port this library to the browser. Unfortunately, that proved to be infeasible even after contacting the developers of the library (see https://github.com/HDF-NI/hdf5.node/issues/29). We adjusted the manuscript to make clear that HDF5 is not natively supported in the browser rather than in javascript in general. Further, we added a section discussing the downside of being limited to JSON and plans to overcome that at the end of the Application section.
In my own installation of the software, I keep getting error messages when I attempt to create a biom object, see here: http://tinyurl.com/f1000-review. If the reviewers could please clarify the installation guide on the github repo. Thanks for finding that issue. We fixed the bug creating your issue, added a minimum required version of nodejs and improved the documentation. The second sentence needs clarification. “Despite this increase, for many of these studies the general basic layout of the data is similar to traditional assessment after bioinformatical processing, yet complications arise due to the increased size of the data tables.” Rephrased The citation for the BIOM interface R package has been deprecated. The appropriate citation is: Paul J. McMurdie and Joseph N Paulson (2015). biomformat: An interface package for the BIOM file format. R/Bioconductor package version 1.0.0.2. Fixed" } ] } ]
1
https://f1000research.com/articles/5-2348
https://f1000research.com/articles/5-2339/v1
19 Sep 16
{ "type": "Opinion Article", "title": "Crafting minds and communities with Minecraft", "authors": [ "Benjamin C. Riordan", "Damian Scarf" ], "abstract": "Minecraft is a first-person perspective video game in which players roam freely in a large three-dimensional environment. Players mine the landscape for minerals and use these minerals to create structures (e.g., houses) and mould the landscape. But can Minecraft be used to craft communities and minds? In this opinion piece, we highlight the enormous potential of Minecraft for fostering social connectedness and collaboration, and its potential as an educational tool. We highlight the recent use of Minecraft to aid socialization in individuals with Autistic Spectrum Disorder (ASD) and promote civic engagement via the United Nations Human Settlement Program. We further discuss the potential for the recently released Minecraft: Education Edition and provide novel links between Minecraft and recent work on the role of social cures and community empowerment in enhancing mental health, wellbeing, and resilience.", "keywords": [ "Minecraft", "Education", "Gamification", "Technology", "Developmental Psychology" ], "content": "\n\nMinecraft is a first-person perspective sandbox game – a three-dimensional, procedurally generated, Lego-like environment made up of blocks of different compounds (Duncan, 2011; Mojang, 2016). Players mine these compounds and re-place them to create various structures or shape the landscape. Minecraft has been purchased over 100 million times and in every country in the world (incl. Antarctica; Mojang, 2016).
We think Minecraft is more than a global phenomenon, and may represent a critical and historical transition: an eminently popular videogame that is highly social and collaborative (Bainbridge, 2007; Entertainment Software Association, 2016; Granic et al., 2014; Przybylski, 2014), casting doubt on the depiction of video gamers as disconnected adolescents (Zimbardo & Coulombe, 2016). While early videogame research focused predominantly on the negative impacts of gaming (Strasburger et al., 2010), researchers are now starting to focus on the positives and Minecraft is at the forefront of this change (Granic et al., 2014; Nebel et al., 2016).\n\nUnlike other games, when played in its traditional settings, Minecraft has no aim or specific goals, which allows players the freedom to immerse themselves in their own narrative, build, create, and explore. Players can build alone, or join/create servers to play cooperatively. Given the creative nature of Minecraft and the open world environment, it is unsurprising that some have used the platform to create immersive worlds, artworks, and performances (Bukvic et al., 2014; Duncan, 2011). Importantly, Minecraft lends itself to socialization. The nature of the game has led to the formation of communities and groups that share and support creative work. Given the move towards social and online gaming, it is unsurprising that a recent review found that videogame play is associated with social outcomes (Greitemeyer & Mügge, 2014). But unlike other games, Minecraft may be used to actively promote socialization.\n\nAn example of this is Autcraft, a semi-private Minecraft server and online community formed around those with Autistic Spectrum Disorder (ASD; Ringland et al., 2016). Those with ASD often struggle with face-to-face social interactions, but they still express a desire to connect socially.
Playing Minecraft can help these players meet their social goals and gain the positive effects of socialization (Ringland et al., 2016). Although the effectiveness of Autcraft on wellbeing has not been tested, ethnographic research has suggested that the platform has successfully promoted collaboration, socialization, and community. Ensuring that individuals meet their social goals is critical in improving health and wellbeing (Holt-Lunstad et al., 2010; Jetten et al., 2012; Scarf et al., 2016), and the Autcraft blueprint can be used to help other groups meet their social goals (e.g., older adults who have become housebound or geographically isolated; Osmanovic & Pecchioni, 2016).\n\nThe collaborative nature of Minecraft can also promote prosocial behavior outside the videogame context (Gentile et al., 2009; Greitemeyer & Mügge, 2014). For example, to harness the prosocial behavior Minecraft instills in players and to promote civic engagement, Minecraft partnered with the United Nations Human Settlement Program (UN-Habitat) to engage communities in planning urban public spaces (Block By Block, 2016). The program provided residents with Minecraft and computer access so they could cooperatively recreate their cities and show city planners how they want their cities to look. The ubiquity of Minecraft and ease of play makes it the perfect game to promote bottom-up approaches and engage and empower communities to reimagine their city spaces (Baba et al., 2016). The program has helped communities all over the world create parks, city squares, sidewalks, seawalls, and marketplaces.\n\nBeyond social applications, Minecraft actively promotes the problem solving skills, creativity, planning, and persistence skills necessary for future success. Employers are actively recruiting gamers from online videogame leaderboards (Carr-Chellman, 2016) and gamers’ unique skills are credited with helping scientists solve complex unanswered problems (Cooper et al., 2010). 
Minecraft fosters these skills as the environment requires the player to interact with unfamiliar environments, experiment, calculate, plan ahead, and develop complex mental representations to understand the world. In fact, longitudinal research suggests that videogames are related to greater problem solving skills. For example, in a high school population, strategic videogame play predicted self-reported problem solving skills, which in turn predicted better academic grades (Adachi & Willoughby, 2013). Research has also experimentally manipulated videogame play to determine whether this correlation is causal. When undergraduates were assigned to play a strategy-based game (Portal 2), relative to a group that ironically played a brain-training game (Lumosity), the strategy group displayed a significantly greater improvement in problem solving and persistence (Shute et al., 2015).\n\nTo help educators craft minds in the classroom, Minecraft: Education Edition was released in September 2016 (Mojang, 2016). While using games in education is not a new concept, using a commercial game with the popularity of Minecraft is. The use of Minecraft in education may not only increase motivation for learning, but allow students to take a more active role in their education. Already, a number of educators have taken advantage of the collaborative nature of Minecraft to plan immersive lessons and homework in subjects such as math, earth and ocean science, chemistry, molecular biology, and history (Nebel et al., 2016). Although there are few studies on using Minecraft in the classroom, educators have consistently reported that Minecraft has improved interest and motivation for learning (Nebel et al., 2016). For example, to measure the effectiveness of Minecraft as a teaching tool, one 7th grade class was taught with Minecraft and another with traditional lecture-based learning. 
Both groups showed improvement, however, post-tests indicated those who had been taught with Minecraft performed significantly better (Wang & Towey, 2013).\n\nWhile more empirical data is needed, the use of Minecraft to foster socialization, engage and empower communities, and enhance students’ interest in education and creation suggests that Minecraft is crafting minds and opening a new chapter in video game research.", "appendix": "Author contributions\n\n\n\nBR wrote the initial draft of the manuscript and provided scope for the manuscript. DS conceived the idea and co-wrote the initial draft of the manuscript. Both authors agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAdachi PJ, Willoughby T: More than just fun and games: the longitudinal relationships between strategic video games, self-reported problem solving skills, and academic grades. J Youth Adolesc. 2013; 42(7): 1041–1052. PubMed Abstract | Publisher Full Text\n\nBaba C, Kearns A, McIntosh E, et al.: Is empowerment a route to improving mental health and wellbeing in an urban regeneration (UR) context? Urban Stud. 2016. Publisher Full Text\n\nBainbridge WS: The scientific research potential of virtual worlds. Science. 2007; 317(5837): 472–476. PubMed Abstract | Publisher Full Text\n\nBlock By Block. Blockbyblock.org. 2016; [Accessed July 3, 2016]. Reference Source\n\nBukvic II, Cahoon C, Wyatt A, et al.: OPERAcraft: Blurring the lines between real and virtual. Ann Arbor, MI: Michigan Publishing, University of Michigan Library. 2014; 2014. Reference Source\n\nCarr-Chellman A: Why video games shouldn’t freak parents out. 2016; [Accessed July 3, 2016]. Reference Source\n\nCooper S, Khatib F, Treuille A, et al.: Predicting protein structures with a multiplayer online game. Nature. 2010; 466(7307): 756–760. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuncan SC: Minecraft, beyond construction and survival. Well Played: A journal on video games, value and meaning. 2011; 1(1): 1–22. Reference Source\n\nEntertainment Software Association: Essential facts about the computer and video game industry. 2016; [Accessed July 3, 2016]. Reference Source\n\nGentile DA, Anderson CA, Yukawa S, et al.: The effects of prosocial video games on prosocial behaviors: International evidence from correlational, longitudinal, and experimental studies. Pers Soc Psychol Bull. 2009; 35(6): 752–763. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGranic I, Lobel A, Engels RC: The benefits of playing video games. Am Psychol. 2014; 69(1): 66–78. PubMed Abstract | Publisher Full Text\n\nGreitemeyer T, Mügge DO: Video games do affect social outcomes: a meta-analytic review of the effects of violent and prosocial video game play. Pers Soc Psychol Bull. 2014; 40(5): 578–589. PubMed Abstract | Publisher Full Text\n\nHolt-Lunstad J, Smith TB, Layton JB: Social relationships and mortality risk: a meta-analytic review. PLoS Med. 2010; 7(7): e1000316. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJetten J, Haslam C, Haslam SH: The social cure: Identity, health and well-being. Hove, England: Psychology Press, 2012. Reference Source\n\nMojang: Minecraft.2016. Reference Source\n\nNebel S, Schneider S, Rey GD: Mining learning and crafting scientific experiments: a literature review on the use of minecraft in education and research. Journal of Educational Technology & Society. 2016; 19: 355–366. Reference Source\n\nOsmanovic S, Pecchioni L: Beyond Entertainment: Motivations and Outcomes of Video Game Playing by Older Adults and Their Younger Family Members. Games and Culture. 2015; 11(1–2): 130–149. Publisher Full Text\n\nPrzybylski AK: Electronic gaming and psychosocial adjustment. Pediatrics. 2014; 134(3): e716–e722. 
PubMed Abstract | Publisher Full Text\n\nRingland KE, Wolf CT, Faucett H, et al.: “Will I always be not social?”: Re-Conceptualizing Sociality in the Context of a Minecraft Community for Autism. Proceedings of ACM CHI Conference on Human Factors in Computing Systems. 2016; 1256–1269. Publisher Full Text\n\nScarf D, Moradi S, McGaw K, et al.: Somewhere I Belong: Long-term increases in adolescents’ resilience are predicted by perceived belonging to the in-group. Br J Soc Psychol. 2016; 55(3): 588–599. PubMed Abstract | Publisher Full Text\n\nShute V, Ventura M, Ke F: The power of play: The effects of Portal 2 and Lumosity on cognitive and noncognitive skills. Computers & Education. 2015; 80: 58–67. Publisher Full Text\n\nStrasburger VC, Jordan AB, Donnerstein E: Health Effects of Media on Children and Adolescents. Pediatrics. 2010; 125: 756–767. PubMed Abstract | Publisher Full Text\n\nWang T, Towey D: A Mobile Virtual Environment game approach for improving student learning performance in integrated science classes in Hong Kong International Schools. IEEE International Conference on: Teaching, Assessment and Learning for Engineering (TALE). 2013. Publisher Full Text\n\nZimbardo P, Coulombe ND: Man Disconnected: How technology has sabotaged what it means to be male. London, England: Rider, 2016." }
[ { "id": "17059", "date": "24 Oct 2016", "name": "Mark Lorch", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nThe article is a very brief introduction to some uses of Minecraft outside of its core gamer base. The authors highlight some interesting projects, but do not discuss them at any length.\n\nBeyond providing a handful of examples, no real attempt has been made to discuss how Minecraft can or might be used to craft minds and communities. Nor do the authors express much of an opinion on the topic.\nFurthermore, some fundamental and high-profile attempts to use Minecraft as an educational tool have been overlooked. For example, the authors imply that the first version of Minecraft designed for educational use was launched in September 2016. However, MinecraftEDU has been available since 2011. Other educational projects, such as work by the Royal Geological Society and the Tate Modern, are not mentioned.\nThe article does cover a very interesting and rich area for which I would like to see a comprehensive review or opinion piece. This article is a good start; however, much more depth, in terms of examples, discussion, and opinion, is needed.", "responses": [ { "c_id": "2402", "date": "09 Jan 2017", "name": "Damian Scarf", "role": "Author Response", "response": "We thank the reviewer for their thoughts and suggestions and address some of their comments below. We hope that our amendments to the manuscript meet their concerns. 
The article is a very brief introduction to some uses of Minecraft outside of its core gamer base. The authors highlight some interesting projects, but do not discuss them at any length. Beyond providing a handful of examples no real attempt has been made to discuss how Minecraft can or might be used to craft minds and communities. Nor do the authors express much of an opinion on the topic.   Unfortunately, we are limited by the amount we can say due to a tight word limit. The aim was not to discuss these points at great length but to provide a succinct and accessible overview of the promise of Minecraft in a number of areas. Very few studies using Minecraft have used adequate control groups, so where possible we have drawn on other videogame literature that has used more robust methodologies. At this stage we believe the research is not fleshed out enough for a large scale review or meta-analysis.   However, we agree that we may be light on an opinion and make our thoughts more explicit. We now draw on social psychology literature to make more of an argument for the use of Minecraft to help craft communities and some potential limitations.     Furthermore some fundamentals and high profile attempts to use Minecraft as an educational tool projects have been overlooked. For example the authors imply that the first versions of Minecraft designed for educational uses was launched in September 2016. However MinecraftEDU has been available since 2011. Other educational projects such work by the Royal Geological Society, the Tate Modern are not mentioned.   We thank the reviewer for these suggestions and have added in more high profile uses of Minecraft.   We agree we have made an error in implying that Minecraft Education Edition is the first being used for education. This was not our intention. We make this point more explicit.   The article does cover a very interesting and rich area for which I would like to see a comprehensive review or opinion piece. 
This article is a good start however much more depth, both in terms of examples, discuss and opinion is needed.   We agree that this is an important area and are excited to see some more results. We hope more pieces like these will compel other researchers to run randomised controlled trials to address some of the gaps in the literature. We believe Minecraft can and should be more than just a ‘cool’ way to present education." } ] } ]
1
https://f1000research.com/articles/5-2339
https://f1000research.com/articles/6-19/v1
09 Jan 17
{ "type": "Research Article", "title": "Genetic polymorphisms in the serotonin receptor 7 (HTR7) gene are associated with cortisol levels in African American young adults", "authors": [ "Grace Swanson", "Stephanie Miller", "Areej Alyahyawi", "Bradford Wilson", "Forough Saadatmand", "Clarence Lee", "Georgia Dunston", "Muneer Abbas", "Grace Swanson", "Stephanie Miller", "Areej Alyahyawi", "Bradford Wilson", "Forough Saadatmand", "Clarence Lee", "Georgia Dunston" ], "abstract": "Introduction: Serotonin is a neurohormone involved in biological processes, such as behavior and immune function. Chronic psychosocial stressors may cause serotonin release resulting in immune system dysregulation, as evidenced by increased or decreased levels of cortisol, a blood biomarker of stress and immune function. We hypothesize that genetic polymorphisms in the HTR7 gene are associated with both hypo- and hyper-cortisolism. Methods: The study population included 602 African American subjects between 18-34 years of age, living in Washington, D.C. Five single nucleotide polymorphisms (SNPs) in HTR7, rs2420367, rs12412496, rs2185706, rs7089533, and rs7093602, were genotyped by restriction fragment length polymorphism or the TaqMan assay. Statistical analysis, using the program SNPstat, was performed to determine their associations with cortisol measured in the study population. Results: While an increased risk of hypocortisolism was found to be associated with rs2420367, rs2185706, and rs7093602 in a gender-specific manner, no genotypes could be associated with hypercortisolism. Conversely, a decreased risk of hypocortisolism was found with the haplotype CGGCC (p=0.033), which remained significant in males. When adjusting for gender, females were associated with the haplotype AGACC. The haplotypes AAACC (p=0.042) and AAGTT (p=0.001) were also associated with a decreased risk of hypercortisolism. 
Discussion: Based on these results, genetic variation in the HTR7 gene may contribute to both stress and inflammation, and will provide a new glimpse into stress-related inflammation psychophysiology.", "keywords": [ "Serotonin", "SNP", "HTR7", "cortisol", "African Americans", "health disparities" ], "content": "Introduction\n\nPsychosocial stressors, such as exposure to interpersonal and community violence, may impact immune function (O'Connor et al., 2000), leading to the development of a number of health disparities (Black, 2003; Murali et al., 2007). During exposure to a stressor, many hormones are released in the brain, including serotonin (5-HT). 5-HT is a neurohormone that functions in the regulation of a variety of psychological and physiological processes, including behavior and inflammation (Idzko et al., 2004; Mikulski et al., 2010). Serotonin receptor 7 (HTR7) is the most recently described serotonin receptor, and it is found expressed on a number of immune cells, including macrophages and T lymphocytes (Ahern, 2011). Studies indicate that HTR7 activation results in the production of pro-inflammatory cytokines from immune cells, such as microglial cells, dendritic cells, and monocytes (Dürk et al., 2005; Mahé et al., 2005; Müller et al., 2009). Cytokines, such as interleukin-6 (IL-6), are then involved in the production of a variety of blood biomarkers, including cortisol, the major stress hormone (de Kloet et al., 2005; Howren et al., 2009; Maeda et al., 2010).\n\nCortisol is involved in the regulation of a number of biological processes, such as cellular metabolism and immune function (Anagnostis et al., 2009; Lundberg, 2005; Webster Marketon & Glaser, 2008). Both hypercortisolism and hypocortisolism have recently been associated with a number of health disparities, including cardiovascular disease and asthma (Buske-Kirschbaum et al., 2003; Manenschijn et al., 2013). 
During a stress response, cortisol concentration is increased resulting in a shift towards an inflammatory and humoral response (Murali et al., 2007; Straub et al., 2002). Under normal circumstances, this response will be limited and shut off when cortisol levels decrease back to normal levels (Ehlert et al., 2001). If a stressor persists for an extended period of time, the cortisol concentration will continue to increase until the system exhausts its supply (McEwen, 2004). Both cases, whether amplified or depressed, can lead to serious conditions, such as metabolic syndrome and irritable bowel syndrome (Anagnostis et al., 2009; Fries et al., 2005).\n\nAn increasing number of single nucleotide polymorphisms (SNPs) in genes for neurohormone receptors have also been linked to a number of health disparities. These diseases include breast cancer, depression, diabetes, and asthma (Deming et al., 2012; Kim et al., 2011; Kring et al., 2009; Lucae et al., 2010). While SNPs in these genes have been identified, there is limited data on the molecular pathways involved in these relationships. With regard to SNPs found in serotonin receptors, the mechanisms used by serotonin receptor 2A (HTR2A) in relation to disease state have begun to be identified (Beretta et al., 2008; Snir et al., 2013). It is possible that SNPs within HTR7 modulate the induction of pro-inflammatory cytokines from immune cells during times of stress, leading to a predisposition towards the development of health disparities. 
Previous findings prompted us to investigate whether genetic variation in HTR7 associates with the changes in function of the stress and immune system measured by cortisol levels in African American (AA) young adults.\n\n\nMethods and materials\n\nAll DNA and sample data were collected during a previous study done between 2010 and 2012 entitled “Gender Differences in the Experience of Violence, Discrimination, and Stress Hormone in African Americans: Implications for Public Health”, which examined the genetic markers for alcohol and depression, violence exposure, and drug use in AA young adults (unpublished study; Jackson L, Shestov M, Abbas M and Saadatmand F). In total, DNA samples from 602 AA individuals living in the Washington, D. C. area were available for genotyping. All individuals, both male and female, ranged between the ages of 18 and 34 years. In an Audio Computer-Assisted Self-Interviewing (ACASI) survey, only 590 participants provided behavioral data, including information concerning housing, income, and violence exposure (Table 1). Blood biomarkers, including cortisol (472 out of 602), were determined after blood collection in the morning (Dataset 1; Swanson et al., 2016a). All analyses were performed in a case-control manner for both hypercortisolism (case1, participant cortisol concentration >14 µg/dl; control1, participant cortisol concentration <14 µg/dl) and hypocortisolism (case2, participant cortisol concentration <5 µg/dl; control2, participant cortisol concentration >5 µg/dl). 
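The case-control groupings above amount to a simple threshold rule on each participant's morning cortisol concentration. A minimal sketch in Python (the function name and return convention are illustrative, not from the study's analysis code):

```python
# Case-control groupings from the Methods (thresholds in µg/dl):
#   hypercortisolism: case1 if cortisol > 14, otherwise control1
#   hypocortisolism:  case2 if cortisol < 5,  otherwise control2

def classify_cortisol(cortisol_ug_dl):
    """Return (hypercortisolism_case, hypocortisolism_case) flags
    for one participant's cortisol measurement."""
    hyper_case = cortisol_ug_dl > 14.0
    hypo_case = cortisol_ug_dl < 5.0
    return hyper_case, hypo_case

# Example: a participant at 3.2 µg/dl falls into case2 (hypocortisolism)
print(classify_cortisol(3.2))
```

Note that coded values in the raw data (e.g., -9 for unmeasurable cortisol, NA for missing samples) would need to be excluded before applying such a rule.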
Consent was obtained from each participant in the study, and approval was obtained from Howard University’s Institutional Review Board (approval number, IRB-16-MED-03).\n\nDemographics include gender, age, housing and income status, and violence exposure.\n\nSNPs in HTR7 were downloaded from NCBI using the SNP: Gene View (https://www.ncbi.nlm.nih.gov/SNP/snp_ref.cgi?chooseRs=all&locusId=3363&mrna=NM_019859.3&ctg=NT_030059.14&prot=NP_062873.1&orien=reverse&refresh=refresh) for the comparison of allele frequencies between the CEU (Caucasian descent) and YRI (Yoruban descent) populations. Differences between the CEU and YRI populations were of interest, as African Americans are likely to possess allele frequencies that fall between those of Caucasian and Yoruban descent. Four SNPs (rs12412496, rs2185706, rs7089533, and rs7093602) were selected for inclusion in the study based on a large difference in allele frequency between the CEU and YRI populations. The SNP rs2420367 was selected because no frequency data had been recorded for any population besides CEU (Table 2). All of the chosen SNPs were located in the first intron region of the gene, as indicated on NCBI’s SNP: Gene View.\n\nListed populations are as follows: YRI- Yoruban descent population listed in HapMap; CEU- Caucasian descent population listed in HapMap; AA- African American population from this study.\n\nGenotypes were determined using one of two methods. Restriction fragment length polymorphisms (RFLPs) were used for the genotyping of rs2420367, rs7089533, and rs7093602. Samples were amplified by polymerase chain reaction (PCR) in 96-well plates. The Taq polymerase and dNTP mix utilized for PCR amplification was obtained from Thermo Fisher Scientific (Waltham, MA, USA). SNP primers were supplied by Integrated DNA Technologies (Coralville, IA, USA). The concentration of magnesium chloride was 1.8 mM (rs2420367, rs7089533) and 1.5 mM (rs7093602). 
A total of 40 cycles were used, in which the annealing temperature was decreased by 2°C every 5 cycles until reaching the optimum annealing temperature of 57°C for each primer set. After confirming proper DNA amplification, the PCR products were digested with the appropriate restriction enzyme. The restriction enzymes SmlI, AflII, and AvaII (New England BioLabs, Ipswich, MA, USA) were used at a volume of 0.2 µL for the three SNPs rs2420367, rs7089533, and rs7093602, respectively. Incubation time for SmlI was increased to two hours at 55°C, while incubation for AflII and AvaII remained at the recommended 15 minutes. Visualization of the digestion was done using a 3% agarose gel.\n\nThe TaqMan SNP genotyping assay was used to genotype rs12412496 and rs2185706. A volume of 1.5 µL of DNA was used, along with 10 µL of the prepared master mix, as per the manufacturer’s instructions. The plates were then run using the TaqMan SNP genotyping assay protocol by Applied Biosystems.\n\nThe genotype and allele frequencies for each SNP were determined using the data generated from all 602 DNA samples. Genotype and haplotype associations were made using the software program SNPstat, which tested the data for Hardy-Weinberg equilibrium and performed linear and logistic regression and linkage disequilibrium analyses (http://bioinfo.iconcologia.net/SNPstats).\n\n\nResults\n\nAll 602 samples were used for the determination of allele and genotype frequencies within the AA population (Table 2). In determining the haplotype frequencies within the population, the majority of the population possessed one of three haplotypes: AGGCC (18.19%), AGACC (16.72%), and AAGCC (13.22%), ordered by the SNPs rs2420367, rs12412496, rs2185706, rs7089533, rs7093602 (Dataset 2; Swanson et al., 2016b).\n\nCortisol, the major glucocorticoid in the stress response, is also an important biomarker of an immune response (Cavigelli & Chaudhry, 2012; Straub et al., 2002). 
Cortisol is shown to first be involved in the development of an inflammatory response, before eventually acting to depress this response (Elenkov, 2008; Kunz-Ebrecht et al., 2003). It has also been shown to shift the adaptive immune response towards the humoral response (Cavigelli & Chaudhry, 2012; Murali et al., 2007). As such, it was desired to identify potential associations between both hyper- and hypocortisolism and the five intronic SNPs in HTR7. None of the five SNPs were found to be associated with hypercortisolism.\n\nHypocortisolism was found to be associated with rs2420367, rs2185706, and rs7093602 when adjusting for gender (Table 3 and Table 4). Females showed an association with the genotype A/C (rs2420367) by an 11-fold increase in the risk of hypocortisolism (OR=11.64[1.52-89.24]), when categorizing the interaction first by SNP then by gender (SNP within gender; Table 4). Similarly, males showed a 2-fold increased risk of hypocortisolism with the genotype A/A (rs2420367) when the interaction was categorized first by gender then by SNP (gender within SNP; Table 3). Only males showed an association to hypocortisolism when categorizing in a gender within SNP manner for both rs2185706 and rs7093602. Males with the genotype A/A (rs2185706) were found to be at a 5-times greater risk (OR=5.23[1.50-18.21]), while those with the genotype C/C (rs7093602) were associated with a 2-times greater risk (OR=2.13[1.06-4.28]; Table 3). This indicates that some SNPs within HTR7 are associated with hypocortisolism in a gender specific manner in the AA population.\n\nData was analyzed as gender within the SNPs.\n\nData was analyzed as SNPs within gender.\n\nWhen analyzing for haplotype associations to hypercortisolism, two haplotypes provided a decreased risk in the population. 
The haplotype AAACC (p=0.042) was found associated with a 77% decreased risk (OR=0.23[0.06-0.95]; Table 5), while the AAGTT haplotype (p=0.001) was associated with a 98% decreased risk in the AA population (OR=0.02[0.00-0.21]; Table 5). When analyzing the data for hypocortisolism, only the haplotype CGGCC (p=0.033) was associated with 79% decreased risk (OR=0.19[0.04-0.87]; Table 5).\n\nSNP ordering for haplotype analysis as follows; rs2420367, rs12412496, rs2185706, rs7089533, rs7093602.\n\nWhen adjusting for gender, males remained associated with hypocortisolism by the CGGCC haplotype (OR=0.01[0.00-0.19]) providing a 99% decreased risk (Table 6). This relationship held true regardless of the categorization of the interaction. While females did not remain associated with the CGGCC haplotype, a 91% decreased risk was found associated with hypocortisolism by the haplotype AGACC (OR=0.11[0.02-0.69]) and by 83% for the grouping of extremely rare haplotypes (OR=0.17[0.03-0.88]) (Table 6). These associations held when categorized in a haplotype within gender manner. Due to the low number of individuals denoted as case for both analyses (case1=22; case2=85), the majority of haplotypes were unable to be used for analysis purposes. The results obtained indicate that certain haplotypes are associated with cortisol level in the AA population by decreasing the risk of having either hyper- or hypocortisolism.\n\nAnalysis was performed as a haplotype and gender cross-classification interaction.\n\n\nDiscussion\n\nDespite the existing data on the impact of both genetic and environmental factors on immune function, few studies have examined relationships between these variables, especially in African Americans. We know relatively little about potentially sequential relationships between genetic risk factors, environmental stress, and immune function. 
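The odds ratios and 95% confidence intervals reported in the Results (e.g., OR=0.19[0.04-0.87] for CGGCC) follow the standard 2×2 computation that SNPstat performs internally. A minimal sketch, using hypothetical counts since the per-cell haplotype counts are not reported in the text:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = cases with the haplotype,    b = cases without it
    c = controls with the haplotype, d = controls without it
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: an OR below 1 indicates a decreased risk,
# as for the protective haplotypes reported above.
or_, lo, hi = odds_ratio_ci(3, 82, 60, 320)
```

With small case counts (case1=22, case2=85 here), the Wald interval is wide and rare haplotypes cannot be tested at all, which matches the authors' note that most haplotypes were unusable for analysis.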
In the brain, 5-HT is produced primarily by neurons in the midbrain, especially the dorsal raphe nucleus, which acts to innervate the majority of the brain, allowing 5-HT to modulate the response to stress (Holmes, 2008; Hornung, 2003). HTR7 is located throughout the brain and periphery, functioning in the regulation of circadian rhythmicity and smooth muscle tone, both in the cardiovasculature and gastrointestinal tract (Abdouh et al., 2004; Hornung, 2003; Mahé et al., 2005). During times of stress, HTR7 expression is shown to increase and has been linked to depression (Guscott et al., 2005; Holmes, 2008). SNP genotypes and haplotypes in various genes, including serotonin receptors, have also been associated with changes in immune function or disease state (Ikeda et al., 2006; Snir et al., 2013; Zlojutro et al., 2011). As such, it follows that SNP genotypes and haplotypes in the HTR7 gene may also be associated with changes in the stress response, resulting in an effect on immune function. In this paper, we report the allele, genotype, and haplotype frequencies for five intronic SNPs of the HTR7 gene in an African American population.\n\nDuring times of stress, cortisol secretion will initially increase, and over time may lead to hypercortisolism if the stressor remains persistent (Fries et al., 2005). In the case of chronic stress, it is possible to reach a state of adrenal exhaustion or hypocortisolism (McEwen, 2004). Both states, hyper- and hypocortisolism, have been associated with a number of stress-related diseases. While hypercortisolism has been shown to contribute to diseases such as depression, heart disease, and type 2 diabetes (Lundberg, 2005; Tse & Bond, 2004), hypocortisolism contributes to post-traumatic stress disorder, asthma, and irritable bowel syndrome (Buske-Kirschbaum et al., 2003; Ehlert et al., 2001; Fries et al., 2005; Heim et al., 1999). 
To test the hypothesis that there would be an association between SNPs in the HTR7 gene and both hyper- and hypocortisolism, the program SNPstat was used. While hypercortisolism was not associated with any specific SNP genotypes, associations with hypercortisolism were found for the haplotypes AAACC and AAGTT. These haplotypes provided a decreased risk of 77% and 98%, respectively, and may identify individuals who are less likely to develop hypercortisolism, and the potential resulting health disparities.\n\nAn increased risk of hypocortisolism was found to be associated with three SNP genotypes in a gender-specific manner. For rs2420367, females with the A/C genotype and males with the A/A genotype showed an 11-fold and 2-fold increased risk, respectively. Males, but not females, also showed a 5-fold and 2-fold increased risk of hypocortisolism with the genotypes A/A (rs2185706) and C/C (rs7093602), respectively. As such, it is possible to conclude that individuals with these genotypes will be at an increased risk of developing hypocortisolism, and this may also increase the likelihood of developing health disparity diseases.\n\nThe haplotype CGGCC was determined to provide a 79% decreased risk of hypocortisolism in the population, and this association remained in males when adjusting for gender. Females, however, showed a decreased risk of 78% with the haplotype AGACC. These haplotypes may identify the individuals in the AA population who are at a decreased risk of developing hypocortisolism and associated diseases.\n\nIn this work, we demonstrate that genetic variation in HTR7 influences the production of cortisol in response to chronic stress. While individual SNP genotypes indicate a risk towards the development of hypocortisolism, haplotypes within HTR7 seem to be protective against both hyper- and hypocortisolism. This protective effect may be a biological indicator of resilience towards maladaptive effects of chronic stress. 
This supports previous findings that individual genetics are involved in the degree of adaptability seen in response to stress (Feder et al., 2009; Gillespie et al., 2009). To our knowledge, this is the first evidence for a functional link between a genetic polymorphism in the HTR7 receptor gene and immunologically important subphenotypes related to differential levels of a blood biomarker for both stress and the immune response.\n\nFurther study is required to investigate this relationship between genetic polymorphisms in HTR7, the stress response, and the immune response. Due to the low number of individuals in the case grouping, haplotype analysis was incomplete. By increasing the number of participants with cortisol measurements, a clearer and more detailed picture of the relationship between these five SNPs and cortisol production may be formed. Furthermore, the cytokine shift induced by cortisol during a response to stress suggests that a relationship may exist between these five SNPs and both the inflammatory and humoral response (Maeda et al., 2010; Murali et al., 2007; Pepys & Hirschfield, 2003), and it would be beneficial to analyze whether any association exists between the five intronic SNPs and biomarkers of the inflammatory and humoral response. This would aid in the understanding of how stress exposure contributes to the development of disease, and the role that genetics plays in this pathway.\n\n\nData availability\n\nDataset 1. Raw data for genetic polymorphisms in the serotonin receptor 7 (HTR7) gene are associated with cortisol levels in African American young adults. 
Cortisol measurements were determined in micrograms per deciliter (µg/dL) and recorded either as a numerical value or as one of three codes: -9 indicates that no cortisol was measurable in the participant; NA indicates that a blood sample was not taken; values marked √ were taken as case values for analysis. NA for the sex and age of a participant indicates that the participant did not wish to specify. Within SNP genotypes, NA indicates that no genotype was determined. (doi: 10.5256/f1000research.10442.d146883; Swanson et al., 2016a).\n\nDataset 2. Haplotype frequencies for five SNPs in HTR7 in African Americans. SNP order is as follows: rs2420367, rs12412496, rs2185706, rs7089533, rs7093602. (doi: 10.5256/f1000research.10442.d146884; Swanson et al., 2016b).", "appendix": "Author contributions\n\n\n\nMA conceived the project and designed the experiments. GS, SM, and AA carried out the experiments. GS performed the statistical analysis and prepared the manuscript. MA, CL, GD, FS, and BW were involved in the revision of the manuscript. All authors agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe “Biological and Social Correlates of Drug Use in African American Adults” dataset was collected under Dr. Kathy Sanders-Phillips and was supported by the National Institute of Minority Health and Health Disparities (grant number, 5P20MD000198) and the National Institutes of Health (NIH) “Re-Engineering the Clinical Research Enterprise” (grant #UL1TR000101). The research reported in this publication was supported by the National Institute on Drug Abuse (NIDA) of the NIH (award number, R24DA021470), as well as by the National Science Foundation under the Louis Stokes Alliance for Minority Participation (award numbers, HRD-1000286 and HRD-1503192). 
The content reported is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the National Science Foundation.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAbdouh M, Albert PR, Drobetsky E, et al.: 5-HT1A-mediated promotion of mitogen-activated T and B cell survival and proliferation is associated with increased translocation of NF-kappaB to the nucleus. Brain Behav Immun. 2004; 18(1): 24–34. PubMed Abstract | Publisher Full Text\n\nAhern GP: 5-HT and the immune system. Curr Opin Pharmacol. 2011; 11(1): 29–33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnagnostis P, Athyros VG, Tziomalos K, et al.: Clinical review: The pathogenetic role of cortisol in the metabolic syndrome: a hypothesis. J Clin Endocrinol Metab. 2009; 94(8): 2692–2701. PubMed Abstract | Publisher Full Text\n\nBeretta L, Cossu M, Marchini M, et al.: A polymorphism in the human serotonin 5-HT2A receptor gene may protect against systemic sclerosis by reducing platelet aggregation. Arthritis Res Ther. 2008; 10(5): R103. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlack PH: The inflammatory response is an integral part of the stress response: Implications for atherosclerosis, insulin resistance, type II diabetes and metabolic syndrome X. Brain Behav Immun. 2003; 17(5): 350–364. PubMed Abstract | Publisher Full Text\n\nBuske-Kirschbaum A, von Auer K, Krieger S, et al.: Blunted cortisol responses to psychosocial stress in asthmatic children: a general feature of atopic disease? Psychosom Med. 2003; 65(5): 806–810. PubMed Abstract | Publisher Full Text\n\nCavigelli SA, Chaudhry HS: Social status, glucocorticoids, immune function, and health: can animal studies help us understand human socioeconomic-status-related health disparities? Horm Behav. 2012; 62(3): 295–313. 
PubMed Abstract | Publisher Full Text\n\nde Kloet ER, Joëls M, Holsboer F: Stress and the brain: from adaptation to disease. Nat Rev Neurosci. 2005; 6(6): 463–475. PubMed Abstract | Publisher Full Text\n\nDeming SL, Lu W, Beeghly-Fadiel A, et al.: Melatonin pathway genes and breast cancer risk among Chinese women. Breast Cancer Res Treat. 2012; 132(2): 693–699. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDürk T, Panther E, Müller T, et al.: 5-Hydroxytryptamine modulates cytokine and chemokine production in LPS-primed human monocytes via stimulation of different 5-HTR subtypes. Int Immunol. 2005; 17(5): 599–606. PubMed Abstract | Publisher Full Text\n\nEhlert U, Gaab J, Heinrichs M: Psychoneuroendocrinological contributions to the etiology of depression, posttraumatic stress disorder, and stress-related bodily disorders: the role of the hypothalamus-pituitary-adrenal axis. Biol Psychol. 2001; 57(1–3): 141–152. PubMed Abstract | Publisher Full Text\n\nElenkov IJ: Neurohormonal-cytokine interactions: implications for inflammation, common human diseases and well-being. Neurochem Int. 2008; 52(1–2): 40–51. PubMed Abstract | Publisher Full Text\n\nFeder A, Nestler EJ, Charney DS: Psychobiology and molecular genetics of resilience. Nat Rev Neurosci. 2009; 10(6): 446–457. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFries E, Hesse J, Hellhammer J, et al.: A new view on hypocortisolism. Psychoneuroendocrinology. 2005; 30(10): 1010–1016. PubMed Abstract | Publisher Full Text\n\nGillespie CF, Phifer J, Bradley B, et al.: Risk and resilience: genetic and environmental influences on development of the stress response. Depress Anxiety. 2009; 26(11): 984–992. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuscott M, Bristow LJ, Hadingham K, et al.: Genetic knockout and pharmacological blockade studies of the 5-HT7 receptor suggest therapeutic potential in depression. Neuropharmacology. 2005; 48(4): 492–502. 
PubMed Abstract | Publisher Full Text\n\nHeim C, Ehlert U, Hanker JP, et al.: Psychological and endocrine correlates of chronic pelvic pain associated with adhesions. J Psychosom Obstet Gynaecol. 1999; 20(1): 11–20. PubMed Abstract | Publisher Full Text\n\nHolmes A: Genetic variation in cortico-amygdala serotonin function and risk for stress-related disease. Neurosci Biobehav Rev. 2008; 32(7): 1293–1314. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHornung JP: The human raphe nuclei and the serotonergic system. J Chem Neuroanat. 2003; 26(4): 331–343. PubMed Abstract | Publisher Full Text\n\nHowren MB, Lamkin DM, Suls J: Associations of depression with C-reactive protein, IL-1, and IL-6: a meta-analysis. Psychosom Med. 2009; 71(2): 171–186. PubMed Abstract | Publisher Full Text\n\nIdzko M, Panther E, Stratz C, et al.: The serotoninergic receptors of human dendritic cells: identification and coupling to cytokine release. J Immunol. 2004; 172(10): 6011–6019. PubMed Abstract | Publisher Full Text\n\nIkeda M, Iwata N, Kitajima T, et al.: Positive association of the serotonin 5-HT7 receptor gene with schizophrenia in a Japanese population. Neuropsychopharmacology. 2006; 31(4): 866–871. PubMed Abstract | Publisher Full Text\n\nKim TH, An SH, Cha JY, et al.: Association of 5-hydroxytryptamine (serotonin) receptor 4 (5-HTR4) gene polymorphisms with asthma. Respirology. 2011; 16(4): 630–638. PubMed Abstract | Publisher Full Text\n\nKring SI, Werge T, Holst C, et al.: Polymorphisms of serotonin receptor 2A and 2C genes and COMT in relation to obesity and type 2 diabetes. PLoS One. 2009; 4(8): e6696. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKunz-Ebrecht SR, Mohamed-Ali V, Feldman PJ, et al.: Cortisol responses to mild psychological stress are inversely associated with proinflammatory cytokines. Brain Behav Immun. 2003; 17(5): 373–383. 
PubMed Abstract | Publisher Full Text\n\nLucae S, Ising M, Horstmann S, et al.: HTR2A gene variation is involved in antidepressant treatment response. Eur Neuropsychopharmacol. 2010; 20(1): 65–68. PubMed Abstract | Publisher Full Text\n\nLundberg U: Stress hormones in health and illness: the roles of work and gender. Psychoneuroendocrinology. 2005; 30(10): 1017–1021. PubMed Abstract | Publisher Full Text\n\nMaeda K, Mehta H, Drevets DA, et al.: IL-6 increases B-cell IgG production in a feed-forward proinflammatory mechanism to skew hematopoiesis and elevate myeloid production. Blood. 2010; 115(23): 4699–4706. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahé C, Loetscher E, Dev KK, et al.: Serotonin 5-HT7 receptors coupled to induction of interleukin-6 in human microglial MC-3 cells. Neuropharmacology. 2005; 49(1): 40–47. PubMed Abstract | Publisher Full Text\n\nManenschijn L, Schaap L, van Schoor NM, et al.: High long-term cortisol levels, measured in scalp hair, are associated with a history of cardiovascular disease. J Clin Endocrinol Metab. 2013; 98(5): 2078–2083. PubMed Abstract | Publisher Full Text\n\nMcEwen BS: Protection and damage from acute and chronic stress: allostasis and allostatic overload and relevance to the pathophysiology of psychiatric disorders. Ann N Y Acad Sci. 2004; 1032: 1–7. PubMed Abstract | Publisher Full Text\n\nMikulski Z, Zaslona Z, Cakarova L, et al.: Serotonin activates murine alveolar macrophages through 5–HT2C receptors. Am J Physiol Lung Cell Mol Physiol. 2010; 299(2): L272–280. PubMed Abstract | Publisher Full Text\n\nMüller T, Dürk T, Blumenthal B, et al.: 5–hydroxytryptamine modulates migration, cytokine and chemokine release and T-cell priming capacity of dendritic cells in vitro and in vivo. PLoS One. 2009; 4(7): e6453. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMurali R, Hanson MD, Chen E: Psychological stress and its relationship to cytokines and inflammatory diseases. 
Cytokines, Stress and Immunity. 2007; 29–49. Reference Source\n\nO'Connor TM, O'Halloran DJ, Shanahan F: The stress response and the hypothalamic-pituitary-adrenal axis: from molecule to melancholia. QJM. 2000; 93(6): 323–333. PubMed Abstract | Publisher Full Text\n\nPepys MB, Hirschfield GM: C-reactive protein: a critical update. J Clin Invest. 2003; 111(12): 1805–1812. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnir O, Hesselberg E, Amoudruz P, et al.: Genetic variation in the serotonin receptor gene affects immune responses in rheumatoid arthritis. Genes Immun. 2013; 14(2): 83–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStraub RH, Schuld A, Mullington J, et al.: The endotoxin-induced increase of cytokines is followed by an increase of cortisol relative to dehydroepiandrosterone (DHEA) in healthy male subjects. J Endocrinol. 2002; 175(2): 467–474. PubMed Abstract | Publisher Full Text\n\nSwanson G, Miller S, Alyahyawi A, et al.: Dataset 1 in: Genetic polymorphisms in the serotonin receptor 7 (HTR7) gene are associated with cortisol levels in African Americans young adults. F1000Research. 2016a. Data Source\n\nSwanson G, Miller S, Alyahyawi A, et al.: Dataset 2 in: Genetic polymorphisms in the serotonin receptor 7 (HTR7) gene are associated with cortisol levels in African Americans young adults. F1000Research. 2016b. Data Source\n\nTse WS, Bond AJ: The impact of depression on social skills. J Nerv Ment Dis. 2004; 192(4): 260–268. PubMed Abstract | Publisher Full Text\n\nWebster Marketon JI, Glaser R: Stress hormones and immune function. Cell Immunol. 2008; 252(1–2): 16–26. PubMed Abstract | Publisher Full Text\n\nZlojutro M, Manz N, Rangaswamy M, et al.: Genome-wide association study of theta band event-related oscillations identifies serotonin receptor gene HTR7 influencing risk of alcohol dependence. Am J Med Genet B Neuropsychiatr Genet. 2011; 156B(1): 44–58. PubMed Abstract | Publisher Full Text | Free Full Text" }
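The discussion above reports protective haplotypes as a "percent decreased risk" (77%, 98%, 79%, 78%), which is the usual transformation of a protective odds ratio below 1. A minimal sketch of that conversion follows; the odds-ratio values used are illustrative, back-calculated from the reported percentages, and are not taken from the paper's tables:

```python
def percent_risk_reduction(odds_ratio: float) -> float:
    """Convert a protective odds ratio (0 < OR < 1) into the
    'percent decreased risk' figure reported in association studies."""
    if not 0.0 < odds_ratio < 1.0:
        raise ValueError("expected a protective odds ratio in (0, 1)")
    return (1.0 - odds_ratio) * 100.0

# Illustrative values only: an OR of 0.23 corresponds to a 77%
# decreased risk, and an OR of 0.02 to a 98% decreased risk.
for odds_ratio in (0.23, 0.02):
    print(round(percent_risk_reduction(odds_ratio)))
```

The conversion is symmetric: an n-fold increased risk (e.g. the 11-fold figure for rs2420367 in females) is simply an odds ratio above 1 reported without the percentage transformation.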
[ { "id": "19064", "date": "24 Jan 2017", "name": "Kaustubh Adhikari", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript by Swanson et al. is a rather simple analysis of a few SNPs in the HTR7 gene, checking for association with cortisol levels in a small cohort of African-Americans. However, it suffers from numerous severe problems in the description, planning and analysis of the data, to the extent that, after careful consideration of the results, its major conclusions are no longer supported by the presented evidence.\n\nA) One of the most striking problems is that this manuscript conducts many different statistical analyses but, surprisingly, does not employ any multiple-testing correction such as the Bonferroni correction. For example, there are 26 different observed haplotypes (out of 2^5=32 possible haplotypes) and 3 gender analyses (combined, male, female). The appropriate significance threshold after Bonferroni correction would be 0.05/(26x3)=0.00064, at which level no reported p-values are significant any more. This means that none of the reported associations in the manuscript stand up to proper statistical scrutiny. The manuscript, including the title, abstract and conclusions, should therefore be rewritten to reflect the fact that no associations are significant and that no association between HTR7 and cortisol can be drawn.\n\nB) The reportedly significant haplotypes for males and females associated with cortisol are very different. 
Yet no biological explanation is provided as to why that may be the case. Such instances suggest even more strongly that the observed associations could be artifacts of improper statistical analysis.\n\nProblems with the sample description:\n\nC) Since the study referred to when presenting the cohort is unpublished, readers can gain no further idea about the sample. It should therefore be described in more detail, possibly in a supplement.\n\nD) Is there any information available about the ancestry proportions of the samples? A typical GWAS will include genetic PCs to adjust for population substructure/stratification/admixture when performing association tests with SNP genotypes/haplotypes. As the authors mention, these African-Americans are likely admixed, having some European and some African ancestry. Because whole-genome SNP genotypes are not available, it is understood that the authors cannot adjust using PCs. But that is a substantial weakness and a possible cause of biased results. At least some idea about ancestry would help to shed light on the possible degree of substructure in the data.\n\nE) Table 1 provides several variables that the authors seemingly consider to be interesting in the cohort. If so, why aren’t any of these variables used in the analysis? Otherwise, what is the purpose of presenting them?\n\nProblems with phenotyping:\n\nF) People with health issues, such as tumors in the pituitary or adrenal glands, should be excluded, as these issues will affect the normal production of cortisol. Were such data collected for the participants?\n\nG) The choice of cut-offs for defining hypo- and hyper-cortisolism is arbitrary and unexplained. An explanation and reference should be provided.\n\nProblems with the choice of markers:\n\nH) It is not clear why HTR7 specifically was selected for testing against cortisol levels. Further biological explanation should be provided.\n\nI) The rationale provided for the choice of SNPs is rather weak. 
It is not clear why the only SNPs of interest would be those showing a large allele frequency difference between CEU and YRI. As it happens, this rule ends up selecting only intronic SNPs. Selecting functional variants mentioned in the literature, such as non-synonymous SNPs, would be more interesting, as such SNPs are more likely to have a biological effect, if any.\n\nJ) And even if the SNPs are intronic, it should at least be examined whether they have any functional features, such as regulatory annotations.\n\nOther problems with statistical analysis:\n\nK) The ‘statistical analysis’ section is extremely brief and doesn’t present any details on the methods or analysis procedure. The various analyses done in the manuscript are haphazard, without any planned, systematic presentation.\n\nL) The authors use the term “adjusted for gender” several times, but it doesn’t make sense given the analyses performed.\n\nM) The OR for gender within SNP genotype groups doesn’t seem relevant, as it is not related to the research question.\n\nN) The OR for one genotype category, AC against AA, is significant for one gender in one SNP, but this is not true for CC against AA, in either effect size or p-value, so the postulated effect of the C allele doesn’t seem reliable. Also, one wonders why the authors didn’t test for allelic effects in addition to genotypic effects.\n\nO) The authors should provide some power calculations, particularly considering that the statistical analyses don’t actually show anything as significant.\n\nOther comments:\n\nP) “as African American’s are likely to possess frequencies that fall between those of Caucasian and Yuroban descent.” – references should be provided.\n\nQ) “Yuroban” should be Yoruban.", "responses": [] } ]
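The Bonferroni arithmetic in reviewer point A can be reproduced in a few lines. This is a sketch of the standard correction the reviewer applies, not code from the paper under review:

```python
# Bonferroni correction as described in reviewer point A:
# 26 observed haplotypes tested in 3 analyses (combined, male, female).
alpha = 0.05
n_tests = 26 * 3
threshold = alpha / n_tests

# Any reported p-value must fall below this threshold to remain
# significant after the correction.
print(f"{n_tests} tests -> threshold {threshold:.5f}")
```

Printed to five decimal places this gives the reviewer's 0.00064; the correction is deliberately conservative, which is why the reviewer also asks for power calculations.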
1
https://f1000research.com/articles/6-19
https://f1000research.com/articles/6-14/v1
05 Jan 17
{ "type": "Review", "title": "EPHect – the Endometriosis Phenome (and Biobanking) Harmonisation Project – may be very helpful for clinicians and the women they are treating", "authors": [ "Laura M. Miller", "Neil P. Johnson" ], "abstract": "This article acts as a summary of the recently published papers by the World Endometriosis Research Foundation aiming to set up the Endometriosis Phenome and Biobanking Harmonisation Project. The objective of this project is to standardise recording of patient history and characteristics, recording of surgical procedure and extent of disease, as well as collection, processing and storage of specimens, and consequently create a reliable resource for research into endometriosis.", "keywords": [ "Endometriosis", "Phenome Harmonisation Project", "research", "clinical history", "laparoscopy" ], "content": "\n\nThe World Endometriosis Research Foundation (WERF) set up the Endometriosis Phenome and Biobanking Harmonisation Project (EPHect), which has recently published four papers1–4 aimed at the standardisation of reporting and pathological processing. This consensus was reached after two workshops in 2013 covering all four topics (surgical phenotype;1 clinical and covariate phenotype;2 fluid biospecimen collection, processing and storage;3 tissue collection, processing and storage4) with 54 leaders in endometriosis research, sampling and management worldwide.\n\nEPHect is, as far as we are aware, unprecedented in its endeavour to provide standards to harmonise phenotypic data collection and biological sampling protocols for a specific disease. 
It remains to be seen whether the initiative will truly attain its objective of facilitating global collaborative research in endometriosis, but the ball is now in the court of national leaders in endometriosis research to ensure that this unique opportunity is not missed.\n\nEPHect is primarily designed to “facilitate large-scale internationally collaborative, longitudinal, epidemiologically robust, translational, biomarker and treatment target discovery research in endometriosis” (a more noble endeavour we cannot envisage – and we couldn’t have put it better ourselves!). Although not specifically designed (nor even intended) for such, we believe that this also presents an opportunity for clinical leaders and clinicians, even in a purely clinical setting, to harmonise the collection of standardised clinical and covariate phenotype data1,2. Doing so will mean that we gradually become more fluent in a common endometriosis language and more versed in a true understanding of what has to date appeared a heterogeneous disease, not uncommonly described as an enigma1,2. This level of standardisation of documentation will surely be just as important in the future for optimising patient-focused individualised care as for research. Familiarity with the standard documentation will also enable clinical leaders to engage seamlessly in collaborative research efforts.\n\nCertainly the collection of “standardized detailed information” and “thus optimizing the surgical phenotype”1 should become a non-negotiable standard each time surgery is undertaken for endometriosis, as the recognition dawns that only a small percentage of women suffering from endometriosis worldwide will ever have the chance to undergo surgery. 
It is thus imperative that as much information as possible is gained each time a woman undergoes laparoscopic surgery for endometriosis and that this information is comparable with that of any other woman in the world having surgery for endometriosis.\n\nTwo surgical data forms were created1: the standard (recommended) surgical form (EPHect SSF) and the minimum required surgical form (EPHect MSF). We believe that ‘surgeons with expertise’ (relating to the ‘networks of expertise’ that the World Endometriosis Society consensus group on current management of endometriosis has previously described5) should collect information to complete the SSF. The EPHect SSF has two parts. The first asks for clinical covariates (details of clinical relevance) such as last menstrual period, current medical therapy and previous endometriosis surgery and findings. The second part focuses on intraoperative findings such as duration of procedure, extent of endometriosis (location, size and colour of endometriotic lesions, as well as surgical treatment undertaken), location of biopsies, intraoperative complications, extent of residual endometriosis at the end of the surgery and any other pathological findings. Standardisation of the way in which we undertake surgery is something we have identified as lacking in the diagnosis and classification of endometriosis5. Thus this EPHect initiative should produce a standardised way in which we all undertake laparoscopy and laparoscopic removal of endometriosis. 
Specifically, the EPHect laparoscopic surgical technique calls for:\n\n- Meticulous search of the entire pelvis and abdominal cavity with a “close tip technique” (2–5 cm distance between laparoscope tip and peritoneal surface)\n\n- Limited handling of the peritoneum during the diagnostic phase of the laparoscopy to minimise petechiae formation\n\n- Video documentation of exploration and the surgical procedure if possible\n\n- Photographic documentation of surgery is considered an acceptable standard and we encourage all endometriosis surgeons to familiarise themselves with the standardised photograph zones (the pelvis is split into seven zones, three in the midline and two for each side, and photographs should be taken with the laparoscope 5–10 cm from the peritoneum). If small lesions or extra-pelvic lesions are present, or more detail is deemed appropriate, then additional pictures may be required for full documentation. An additional photograph of the pelvis at the end of the procedure should document any residual disease.\n\n- Documentation of other pathology such as adhesions, scarring and uterine fibroids.\n\n- Where feasible, the use of electric or light energy should be avoided in removing tissue samples, as these may cause artifacts that may impact on the histological interpretation of tissue samples – and all energy sources used for this purpose should be recorded. 
The type of thermal energy recommended, if required surgically, is laser or plasma jet, and, in these circumstances, an excision margin of 5 mm is recommended.\n\n- Extent of residual endometriosis at the end of the procedure should be described.\n\n- The temperature of carbon dioxide insufflation gas entering the peritoneal cavity and the presence or absence of a dehumidifier should be recorded.\n\n- When sampling of different tissues is required, the sequence of sampling should be the order of priority, with the most important tissue or tissue of greatest interest being sampled first.\n\nTwo clinical and covariate phenotype data forms have also been created, in a similar fashion to the surgical phenotype data: a standard endometriosis patient questionnaire (EPQ-S) and the minimum required endometriosis patient questionnaire (EPQ-M)2. These clinical data, again, are geared towards research; however, clinicians must seriously consider moving towards routine clinical data collection in order to manage their patients optimally and to determine how research data translate to their own patient population and to individual women. The EPQ-S includes questions on the following:\n\n1. Pain – Quantified on an 11-point scale for intensity (0 being no pain and 10 being worst imaginable pain). Pain affect is captured with the short-form McGill Pain Questionnaire (SF-MPQ)6. However, it is recommended to use the most recent SF-MPQ-2, as this again uses an 11-point scale and has 7 additional questions, and so would allow for calculations on a total of 4 domains (continuous pain, intermittent pain, neuropathic pain and affective). However, investigators are required to sign a user agreement in order to access the SF-MPQ-2.\n\n2. Depression, anxiety, and health-related quality of life – There are already multiple validated measures to assess this. 
They include the Endometriosis Health Profile Questionnaire (EHP-30)7 or the Short-Form Health Status survey,8 but both require registration and/or payment. The Beck Depression Inventory9 and State-Trait Anxiety Inventory10 are also tools that could be used. Alternatively, there are combined measures such as the Hospital Anxiety and Depression Scale11 (however, this measures overall psychological distress rather than determining the degree of anxiety or depression). Institutions may decide which scale to adopt.\n\n3. Menstrual history – This includes details on age at menarche, cycle characteristics (including frequency, duration and amount) and changes in menstrual patterns over age ranges, as well as documenting hormone use.\n\n4. Fertility – Information on the length of time a subject has tried to become pregnant without success, with subfertility being assessed as >6 months of trying. Information is gathered on fertility investigation and treatment. Pregnancy history and outcomes are also documented.\n\n5. Medical and surgical history – This section also includes questions on urinary symptoms as well as bowel symptoms (including questions from the Rome III12 criteria irritable bowel syndrome module).\n\n6. Medication use – Patterns and types of analgesia/herbal supplements/sleeping aids are documented in this segment.\n\n7. Personal information – Ethnicity, age, BMI (also lowest and highest weight since the age of 18 years, somatotype and body shape by age range), level of education, smoking, exercise and alcohol consumption. Interestingly, there are also questions on hair and eye colour, aimed at marking genetic subpopulations.\n\nThe third and fourth papers in this sequence dealt respectively with standard operating procedures for collection, processing and long-term storage of fluid biospecimens3 and tissue4. These papers have less direct relevance to clinicians.\n\n\nSummary\n\nThe four papers are freely available for clinicians to obtain further information. 
It is hoped that implementation will be easily achievable, as much of what is required is standard practice. The project will ensure there is full documentation of patient demographics, history, care, surgical findings and specimen control, enabling clinicians' everyday gold-standard care to be used in robust research studies, finally allowing a significant sample size to be obtained and, hopefully, firm recommendations to be made on patient care.", "appendix": "Author contributions\n\n\n\nBoth authors contributed equally to this manuscript.\n\n\nCompeting interests\n\n\n\nNeil Johnson has received conference expenses from Bayer Pharma, Merck-Serono, and MSD, research funding from AbbVie, and is a consultant to Vifor Pharma and Guerbet.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nBecker CM, Laufer MR, Stratton P, et al.: World Endometriosis Research Foundation Endometriosis Phenome and Biobanking Harmonisation Project: I. Surgical phenotype data collection in endometriosis research. Fertil Steril. 2014; 102(5): 1213–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVitonis AF, Vincent K, Rahmioglu N, et al.: World Endometriosis Research Foundation Endometriosis Phenome and Biobanking Harmonization Project: II. Clinical and covariate phenotype data collection in endometriosis research. Fertil Steril. 2014; 102(5): 1223–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRahmioglu N, Fassbender A, Vitonis AF, et al.: World Endometriosis Research Foundation Endometriosis Phenome and Biobanking Harmonization Project: III. Fluid biospecimen collection, processing, and storage in endometriosis research. Fertil Steril. 2014; 102(5): 1233–1243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFassbender A, Rahmioglu N, Vitonis AF, et al.: World Endometriosis Research Foundation Endometriosis Phenome and Biobanking Harmonisation Project: IV. 
Tissue collection, processing, and storage in endometriosis research. Fertil Steril. 2014; 102(5): 1244–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohnson NP, Hummelshoj L; World Endometriosis Society Montpellier Consortium: Consensus on current management of endometriosis. Hum Reprod. 2013; 28(6): 1552–68. PubMed Abstract | Publisher Full Text\n\nDworkin RH, Turk DC, Revicki DA, et al.: Development and initial validation of an expanded and revised version of the Short-form McGill Pain Questionnaire (SF-MPQ-2). Pain. 2009; 144(1–2): 35–42. PubMed Abstract | Publisher Full Text\n\nJones G, Jenkinson C, Kennedy S: Evaluating the responsiveness of the Endometriosis Health Profile Questionnaire: the EHP-30. Qual Life Res. 2004; 13(3): 705–713. PubMed Abstract | Publisher Full Text\n\nBrook RH, Ware JE Jr, Davies-Avery A: A conceptualization and measurement of health for adults in the health insurance study. Santa Monica: Rand Corp. 1979.\n\nBeck AT, Steer RA, Ball R, et al.: Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. J Pers Assess. 1996; 67(3): 588–597. PubMed Abstract | Publisher Full Text\n\nSpielberger C, Gorsuch R, Lushene R, et al.: Manual for the State-Trait Anxiety Inventory (form Y). Palo Alto, CA: Consulting Psychologists Press; 1983.\n\nZigmond AS, Snaith RP: The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983; 67(6): 361–370. PubMed Abstract | Publisher Full Text\n\nDrossman DA, Dumitrascu DL: Rome III: New standard for functional gastrointestinal disorders. J Gastrointestin Liver Dis. 2006; 15: 237–241. PubMed Abstract" }
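The EPQ-S items summarised above lend themselves to strict, standardised capture. As a purely hypothetical illustration (this schema is invented for this note and is not part of EPHect), a record type might validate the 11-point pain intensity scale and the >6-months subfertility definition described in the review:

```python
from dataclasses import dataclass


@dataclass
class EpqPainItem:
    """Hypothetical record for the EPQ-S 11-point pain intensity item
    (0 = no pain, 10 = worst imaginable pain); not an official EPHect schema."""
    intensity: int

    def __post_init__(self) -> None:
        if not 0 <= self.intensity <= 10:
            raise ValueError("pain intensity must lie on the 0-10 scale")


@dataclass
class EpqFertilityItem:
    """Hypothetical record for the EPQ-S fertility item; subfertility is
    assessed as more than 6 months of trying to conceive."""
    months_trying: int

    @property
    def subfertile(self) -> bool:
        return self.months_trying > 6
```

Validating at entry time, rather than at analysis time, is the point of the standardised forms: every centre rejects the same out-of-range values, so pooled datasets stay comparable.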
[ { "id": "21915", "date": "18 Apr 2017", "name": "Alan Lam", "expertise": [ "Clinical and surgical management of endometriosis" ], "suggestion": "Approved", "report": "Approved\n\nThis review article is a timely ‘call-to-arms’ to clinicians caring for women suffering from endometriosis world-wide to utilise the Endometriosis Phenome and Biobanking Harmonisation Project (EPHect).\n\nPublished by the World Endometriosis Research Foundation (WERF) in 2014, EPHect provides a fundamental framework for collaborative international endometriosis research through global standardization of phenotypic data compilation and biological sample collection and storage.\n\nConcerned that the uptake of EPHect has been sporadic and not uniform, the authors believe that the collection of “standardized detailed information”, using the standard surgical form (EPHect SSF) and standard endometriosis patient questionnaire (EPQ-S), should be ‘non-negotiable’ whenever surgery is undertaken for endometriosis.\n\nIn practice, there are a number of challenges to the uptake of EPHect which this review article has not addressed. Firstly, the time taken for busy clinicians and surgeons to complete the standard surgical form (SSF) is a potential impediment. Secondly, the completion of the standard endometriosis patient questionnaire (EPQ-S) or the minimum questionnaire (EPQ-M) by the patient also requires motivation and time. Thirdly, the storage of the large amount of paper-based information is space-consuming. 
Fourthly, the transfer of information collected onto an electronic database is time-consuming, with privacy coding an issue. Finally, the storage, extraction, collation and analysis of data for the purpose of collaborative international research is likely going to be an expensive endeavour.", "responses": [] }, { "id": "22139", "date": "08 May 2017", "name": "Maria Grazia Porpora", "expertise": [ "Reviewer Expertise Clinical and surgical management of endometriosis" ], "suggestion": "Approved", "report": "Approved\n\nThe aim of this paper is to summarize four articles published by the World Endometriosis Research Foundation, aiming to set up the Endometriosis Phenome and Biobanking Harmonisation Project. This is a global initiative with the mission to develop a consensus on standardization and harmonization of phenotypic surgical/clinical data and biological sample collection methods in endometriosis management and research. It provides forms for collection of data related to surgery, clinical, and epidemiological phenotyping characteristics as well as standard operating procedures, processing, and long-term storage of biological samples from affected women. 
This project will allow data analysis of a large number of cases using the same forms worldwide, and the possibility of sharing the obtained information.\nThis paper offers a clear and comprehensible recapitulation of the standard surgical forms and standard endometriosis patient questionnaires to facilitate their use by the clinicians in the clinical practice. Nevertheless, this paper does not analyze the limitations of the project such as the possible differences in resources and logistics among centers that could affect data collection and implementation of standard operating procedures. The extent and type of data collected and the effort to record data in an online database could be influenced by availability of time, local organizational structures and experience as well as motivation of the surgeon.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-14
https://f1000research.com/articles/6-13/v1
05 Jan 17
{ "type": "Research Article", "title": "A simple method for calculation of basic molecular properties of nutrients and their use as a criterion for a healthy diet", "authors": [ "Veljko Veljkovic", "Vladimir Perovic", "Marko Anderluh", "Slobodan Paessler", "Milena Veljkovic", "Sanja Glisic", "Garth Nicolson" ], "abstract": "Background: Healthy nutrition is vital for good health and well-being. Despite the important role of a healthy nutritional diet, recommendations for healthy eating remain elusive and are mainly based on general properties of nutrients. The present study proposes an improved characterization of the molecular characteristics of nutrients, which are important for biological functions and can be useful in describing a healthy diet. Methods: We investigated the electronic properties of some known nutrient ingredients. In this analysis, we used the average quasi valence number (AQVN) and the electron-ion interaction potential (EIIP), which are molecular descriptors that represent the basic electronic properties of organic molecules. Results: Our results show that most nutrients can be represented by specific groups of organic compounds according to their basic electronic properties, and these differ from the vast majority of known chemicals. Based on this finding, we have proposed a simple criterion for the selection of food components for healthy nutrition. 
Discussion: Further studies on the electronic properties of nutrients could serve as a basis for better understanding of their biological functions.", "keywords": [ "healthy diet", "nutrients", "human milk", "molecular descriptors" ], "content": "Abbreviations\n\nBCS - basic chemical space\n\nd1 - domain to the left of BCS\n\nd2 - domain to the right of BCS\n\nN(d1) - fraction of nutrients in the d1 domain\n\nN(d2) - fraction of nutrients in the d2 domain\n\n\nIntroduction\n\nHealthy eating behavior and physical activity patterns promote good mental and physical health and reduce the rates of chronic morbidity and mortality. The Centers for Disease Control and Prevention estimate that in the U.S. alone “poor diet and physical inactivity cause 310,000 to 580,000 deaths per year and are major contributors to disabilities that result from diabetes, osteoporosis, obesity and stroke”1.\n\nThe world’s population is aging, and there is a world-wide increase in the prevalence of chronic diseases (http://www.who.int/ageing/publications/global_health.pdf). The continuing increase in overweight individuals and obesity, which predispose susceptible populations to chronic disease (http://www.un.org/esa/population/publications/worldageing19502050), emphasizes the importance of understanding the impact of nutrition in chronic disease prevention and control.\n\nOptimal nutrition starts with healthy eating habits and assumes a diet that provides the body with essential nutrition, including adequate calories and essential amino acids from proteins, essential fatty acids, vitamins, minerals, and trace nutrients. The crucial part of healthy nutrition is providing a balanced diet, which means consuming foods from all the different nutrient groups (whole grains, fruit and vegetables, dairy, protein, fat and sugar) in appropriate quantities. It is also recommended that a healthy nutritional diet favors plant-based foods over animal-based foods. 
This superficial and elusive definition of a “healthy diet” is often confusing and leads to inappropriate selections of foods. Some animal-based foods should not be selectively avoided, because they contain important nutrients. For example, seafood is an excellent source of long-chain omega-3 fatty acids, and organ meats, such as liver, kidney and heart, as well as beef, sardines, and mackerel, are rich in coenzyme Q10 and trace nutrients (https://en.wikipedia.org/wiki/Nutrient). On the other hand, vegetables and fruits contain nutrients with very different biological properties (for example, polyphenols and carotenoids), which are important in a balanced diet.\n\nIn addition, increasing protein (“protein diet”) for weight management has become popular, despite some potential adverse effects of this diet, including the ingestion of carcinogens from the consumption of heated or processed meats2–4. All current recommendations for healthy nutrition suggest avoiding consumption of foods containing high concentrations of saturated fat, due to serious long-term risks of contracting cardiovascular diseases (CVD). However, a recent study on the association between food consumption and CVD, which included data collected from 42 European countries in the period 1980–2008, showed a lack of connection between saturated fat and CVD. The authors called for serious reconsideration of current dietary recommendations5.\n\nTo obtain a better definition of healthy nutrition, it is necessary to know how nutrients execute their biological function. 
Recently, Norheim et al.6 proposed the concept of “molecular nutrition research,” which they defined as “science concerned with the effect of nutrients and foods/food components on whole body physiology and health status at a molecular and cellular level”6.\n\nPreviously, we proposed that the electronic properties of organic molecules, represented by the average quasi valence number (AQVN) and the electron-ion interaction potential (EIIP), play an essential role in the determination of their biological properties7. These molecular descriptors, which characterize the long-range molecular interactions (distances between 5 and 1000 Å) in biological systems7, are derived from Mendeleev’s periodic table and determined only by atomic and valence numbers of atoms in a molecule8,9.\n\nWe previously showed that 90.5% of 45,010,644 compounds randomly selected from the PubChem database (http://pubchem.ncbi.nlm.nih.gov) have EIIP and AQVN values in the intervals (0.00 – 0.10 Ry) and (2.4 – 3.2), respectively10. This domain of the EIIP/AQVN space, encompassing the majority of known chemical compounds, was referred to as the “basic chemical space” (BCS)10. The domains to the left of BCS (d1) and to the right of BCS (d2) encompass 4.3% and 5.3% of analyzed compounds from PubChem, respectively. Compounds located within the domain d1 have strong electron-donor properties and compounds in the domain d2 are strong electron-acceptors7. It was also shown that biological properties of organic molecules (e.g. antibiotics, cytostatics, antiviral compounds, neurotoxins, pheromones, antiparasitic molecules, etc.) are characterized by electronic properties represented by specific domains of the AQVN/EIIP space7,10,11. Recently, this finding served as a basis for the development of a criterion for the in silico screening of approved drugs for candidate anti-Ebola drugs12. This analysis suggested ibuprofen as a candidate molecule for treatment of Ebola virus disease13. 
The anti-Ebola activity of ibuprofen was later experimentally confirmed14.\n\nHere we present a molecular descriptor analysis of 227 essential and non-essential nutrients and phytonutrients. The comparison of these substances and biologically active compounds in the PubChem and ChemBank databases reveals that food components are characterized by specific electronic properties represented by AQVN and EIIP. Therefore, specific AQVN/EIIP molecular descriptors could be regarded as a simple quantitative structure-activity relationship (QSAR) criterion for the selection of food components in healthy diets. Further studies of these molecular descriptors of nutrients, which distinguish them from most other known organic molecules, will help in better understanding their role in essential biological processes.\n\n\nMethod\n\nThe following compounds were assessed by the present study: 227 commonly used organic nutrients (Dataset 115) (https://en.wikipedia.org/wiki/List_of_micronutrients; https://en.wikipedia.org/wiki/List_of_macronutrients; https://en.wikipedia.org/wiki/List_of_phytochemicals_in_food); 4,667 biologically active compounds from the small bioactive molecule database of ChemBank (Dataset 216) (http://chembank.broad.harvard.edu); 126 organic nutrients from human milk (Dataset 317) (http://doublethink.us.com/paala/wp-content/uploads/2012/11/whats-in-breastmilk-poster-canada.jpg), 101 compounds isolated from pomegranate (Dataset 418)19; 42 ingredients of the liquid diet Fresubin (Dataset 520) (http://www.fresenius-kabi.co.uk/4824_4889.htm).\n\nMolecular descriptors AQVN and EIIP, determining the long-distance (>5Å) intermolecular interactions in biological systems7, were derived from the “general model pseudopotential”8,9 and were defined by the following equations:\n\nW = 0.25 Z* sin(1.04 π Z*) / (2π)   (Eq. 1)\n\nwhere Z* is the AQVN determined by:\n\nZ* = (1/N) Σ(i=1..m) n_i Z_i   (Eq. 2)\n\nwhere Z_i is the valence number of the i-th atomic component, n_i is the number of atoms of the i-th component, m
is the number of atomic components in the molecule, and N is the total number of atoms. The EIIP values calculated according to equations (Eq. 1) and (Eq. 2) are in Rydbergs (Ry).\n\n\nResults\n\nThe present analysis of 227 essential and non-essential organic nutrients and phytonutrients (Dataset 115) showed significantly different distributions of these compounds in the AQVN/EIIP space compared to compounds from the PubChem database10 (Figure 1A). Domains d1 and d2 contained 26.2% (N(d1)) and 31.8% (N(d2)) of nutrients, respectively. This result showed that the basic electronic properties defined by AQVN/EIIP of most nutrients (58.2%) significantly differed from the electronic properties of the vast majority of known chemical compounds (Figure 1B).\n\n(A) Distribution of 227 nutrients (blue) and 45,010,644 compounds randomly selected from the PubChem database (red) in the AQVN/EIIP space. (B) Percentage of nutrients and compounds from the PubChem database in the AQVN/EIIP domains BCS, d1 and d2. AQVN, average quasi valence number; EIIP, electron-ion interaction potential; BCS, basic chemical space.\n\nTo verify that the placement of compounds in domains d1 and d2 was due to specific characteristics of nutrients, we compared them with 4,667 biologically active compounds obtained from ChemBank (Dataset 216). The results presented in Figure 2 show that the distribution of biologically active compounds in the AQVN/EIIP space is similar to the distribution of compounds from PubChem (Figure 1B), but it is different from the allocation of nutrients in this space. The percentage of biologically active compounds that reside outside of the limits of the BCS was also significantly different from the percentage of nutrients (23.9 vs. 58.2%). This confirms that the basic electronic properties of most nutrients also differ from those of other biologically active compounds.\n\nDistribution of 227 nutrients (blue) and 4667 biologically active compounds (red). 
AQVN, average quasi valence number; EIIP, electron-ion interaction potential.\n\nAs an example of a complex natural nutrient fluid, we next analyzed 126 ingredients in human milk (Dataset 317) in order to see how the presence of nutrients with the electronic properties of N(d1) and N(d2) reflected the composition of a complex food that contains all components necessary for development and growth of human tissue. This analysis showed that 56.4% of ingredients of human milk are located outside BCS (38.1% N(d1) and 18.3% N(d2)). Although the general range of distribution of known nutrients in the AQVN/EIIP space (Dataset 115) and the range of distribution of ingredients in human milk (Dataset 317) were similar (Figure 3A), the percentage content of N(d1) and N(d2) components was different. In contrast to the nutrients that contain similar percentages of N(d1) and N(d2) components, the content of components in human milk was found to be two-times higher in d1 than in the d2 domain (Figure 3B). Of note is that the content of essential nutrients (essential fatty acids, essential amino acids, vitamins) and non-essential nutrients (unsaturated fatty acids, saturated fatty acids, carbohydrates, non-essential amino acids) presented in Dataset 115 is five-times higher in d1 than in the d2 domain (42 vs. 8%). These results suggest that N(d1) components are on average more important for function at the beginning of life, when the growth processes of the human organism are much more intensive than in the older organism.\n\n(A) Distribution of 227 nutrients (blue) and 126 organic ingredients in human milk (red) in the AQVN/EIIP space. (B) Percentage of nutrients and organic ingredients of human milk in the AQVN/EIIP domains BCS, d1 and d2. 
AQVN, average quasi valence number; EIIP, electron-ion interaction potential; BCS, basic chemical space.\n\nAn interesting comparison can be made with pomegranate (Punica granatum), which has drawn a great deal of attention from both the scientific community and the general public, due to its demonstrated ability to suppress the formation of cancers21–23 and assist in protection against cardiovascular diseases24,25 and infections26. Analysis of 101 compounds isolated from pomegranate (Dataset 418) showed that 67% of these compounds are located outside the BCS (26% in d1 and 41% in d2). The extraordinarily high percentage of pomegranate ingredients in the d2 domain, which was significantly higher than the percentage of ingredients of human milk located in this AQVN/EIIP domain, suggests that these ingredients are likely to be essential for the protective effects of this fruit against the development of various diseases.\n\nFinally, we analyzed the distribution of 42 ingredients (Dataset 520) from the liquid supplement Fresubin, a nutritionally complete liquid diet designed for patients suffering from malnutrition or obstructions of the gastrointestinal tract (http://www.fresenius-kabi.co.uk/4824_4889.htm). The majority of ingredients (62%) in this liquid nutritional product were distributed outside the BCS. Of these ingredients, 50% were found to be N(d1) and 12% were found to be N(d2). Comparison of the distribution in the AQVN/EIIP space of Fresubin and human milk ingredients is provided in Figure 4.\n\nDistribution of 126 organic ingredients of human milk (blue) and 42 organic ingredients of Fresubin (red). AQVN, average quasi valence number; EIIP, electron-ion interaction potential.\n\n\nDiscussion\n\nMost nutrients, according to their basic electronic properties, differ from the majority of known chemical compounds, which are preferentially located within the BCS domain. 
Nutrients outside the BCS domain can be divided into two groups, N(d1) and N(d2), which are located in domains to the left of BCS and to the right of BCS, respectively. We found that in the contents of human milk the number of N(d1) was two-times higher than N(d2). Taking into account the essential role of breast milk in human development, we can speculate that N(d1) is important for the maintenance of basic functions in human organisms, especially in the early phases of intensive growth and development. This assumption was supported by the fact that the N(d1) in a liquid diet designed for chronic illness patients was four times higher than N(d2). As this supplement’s formulation was designed based on empirical rules and is recommended for patients with malnutrition, it is not a coincidence that it resembles milk by composition and is a role model for a balanced diet. More importantly, it offers proof-of-concept that the AQVN/EIIP defines food with similar compositions and clearly distinguishes nutrients within a vast chemical space. A high fraction of N(d2) in pomegranate components, which represents a nutritional mixture useful as a supplement in a wide range of human diseases, suggests a protective role of N(d2)-rich compounds. This conclusion is in accord with the fact that polyphenols, which have been confirmed as useful protective agents against different chronic and infectious diseases27–29, are largely represented in the d2 domain.\n\nThese findings allow the division of foods into two categories, according to their AQVN/EIIP properties: N(d1)- and N(d2)-rich foods. 
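Because the AQVN and EIIP descriptors of Eq. 1 and Eq. 2 depend only on a molecule’s atomic composition, the d1/BCS/d2 assignment described above can be reproduced from a molecular formula alone. A minimal sketch (the valence table and the AQVN cut-offs 2.4 and 3.2 follow the values quoted in the text; the function names are ours):

```python
import math

# Valence numbers Z_i for elements common in nutrients.
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "P": 5, "S": 6}

def aqvn(atoms):
    """Eq. 2: Z* = (1/N) * sum(n_i * Z_i), with N the total atom count."""
    total_atoms = sum(atoms.values())
    return sum(n * VALENCE[el] for el, n in atoms.items()) / total_atoms

def eiip(atoms):
    """Eq. 1: W = 0.25 * Z* * sin(1.04 * pi * Z*) / (2 * pi), in Rydbergs."""
    zs = aqvn(atoms)
    return 0.25 * zs * math.sin(1.04 * math.pi * zs) / (2 * math.pi)

def domain(atoms):
    """Assign a compound to d1 / BCS / d2 using the AQVN interval (2.4 - 3.2)."""
    zs = aqvn(atoms)
    if zs < 2.4:
        return "d1"   # strong electron donors
    if zs > 3.2:
        return "d2"   # strong electron acceptors
    return "BCS"

# Glucose C6H12O6: Z* = (6*4 + 12*1 + 6*6)/24 = 3.0 -> inside the BCS.
# Stearic acid C18H36O2: Z* = 120/56 ~ 2.14 -> d1 (fatty acids as electron donors).
# Gallic acid C7H6O5: Z* = 64/18 ~ 3.56 -> d2 (a polyphenol, electron acceptor).
```

With these helpers, glucose lands inside the BCS, stearic acid in the electron-donor domain d1, and the polyphenol gallic acid in d2, matching the pattern the paper reports for fatty acids and polyphenols.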
This division serves as a basis for the selection of foodstuffs for a healthy diet with an N(d1)/N(d2) ratio corresponding to human milk and for a diet that is richer in N(d2) and which could have some protective effect against chronic and infectious diseases.\n\nIn conclusion, our results demonstrate that most nutrients represent specific groups of organic compounds that can be identified according to their basic electronic properties, which can be effectively calculated with AQVN/EIIP. In the present study, nutrients were found to differ in their electronic properties from the majority of known chemicals. Additional studies on these properties could help to develop an improved understanding of the role of nutrients in the development and function of human organisms, as well as in protection against various diseases.\n\n\nData availability\n\nDataset 1: The AQVN and EIIP of 227 commonly used organic nutrients collected from various sources. DOI, 10.5256/f1000research.10537.d14779615\n\nDataset 2: The AQVN and EIIP of 4667 biologically active compounds from the small bioactive molecule database ChemBank. DOI, 10.5256/f1000research.10537.d14779716\n\nDataset 3: The AQVN and EIIP of 126 organic nutrients from human milk collected from various sources. DOI, 10.5256/f1000research.10537.d14779817\n\nDataset 4: The AQVN and EIIP of 42 ingredients of the liquid diet Fresubin. DOI, 10.5256/f1000research.10537.d14779918\n\nDataset 5: The AQVN and EIIP of 101 compounds isolated from pomegranate. DOI, 10.5256/f1000research.10537.d14780020", "appendix": "Author contributions\n\n\n\nConceived and designed the study: VV. Analyzed the data: VV SG GN MV VP. Wrote the paper: VV GN MA SP SG. 
All authors agreed to the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nDepartment of Health and Human Services: Physical Activity and Health: A Report of the Surgeon General. Atlanta: Department of Health and Human Services, Centers for Disease Control and Prevention; 1996. Reference Source\n\nLiu M, Li M, Liu J, et al.: Elevated urinary urea by high-protein diet could be one of the inducements of bladder disorders. J Transl Med. 2016; 14: 53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nByun SY, Kim DB, Kim E: Curcumin ameliorates the tumor-enhancing effects of a high-protein diet in an azoxymethane-induced mouse model of colon carcinogenesis. Nutr Res. 2015; 35(8): 726–735. PubMed Abstract | Publisher Full Text\n\nDelimaris I: Adverse Effects Associated with Protein Intake above the Recommended Dietary Allowance for Adults. ISRN Nutr. 2013; 2013: 126929. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrasgruber P, Sebera M, Hrazdira E, et al.: Food consumption and the actual statistics of cardiovascular diseases: an epidemiological comparison of 42 European countries. Food Nutr Res. 2016; 60: 31694. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNorheim F, Gjelstad IM, Hjorth M, et al.: Molecular nutrition research: the modern way of performing nutritional science. Nutrients. 2012; 4(12): 1898–1944. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVeljkovic V: A theoretical approach to preselection of carcinogens and chemical carcinogenesis. Gordon & Breach, York: 1980. Reference Source\n\nVeljković V, Slavić I: Simple general model pseudopotential. Phys Rev Lett. 1972; 20: 105–107. Publisher Full Text\n\nVeljković V: The dependence of the Fermi energy on the atomic number. Phys Lett. 1973; 45A: 41–42. 
Publisher Full Text\n\nVeljkovic N, Glisic S, Perovic V, et al.: The role of long-range intermolecular interactions in discovery of new drugs. Expert Opin Drug Discov. 2011; 6(12): 1263–70. PubMed Abstract | Publisher Full Text\n\nGlisic S, Sencanski M, Perovic V, et al.: Arginase Flavonoid Anti-Leishmanial in Silico Inhibitors Flagged against Anti-Targets. Molecules. 2016; 21(5): pii: E589. PubMed Abstract | Publisher Full Text\n\nVeljkovic V, Loiseau PM, Figadere B, et al.: Virtual screen for repurposing approved and experimental drugs for candidate inhibitors of EBOLA virus infection [version 1; referees: 2 approved]. F1000Res. 2015; 4: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVeljkovic V, Goeijenbier M, Glisic S, et al.: In silico analysis suggests repurposing of ibuprofen for prevention and treatment of EBOLA virus disease [version 1; referees: 2 approved]. F1000Res. 2015; 4: 104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhao Y, Ren J, Harlos K, et al.: Toremifene interacts with and destabilizes the Ebola virus glycoprotein. Nature. 2016; 535(7610): 169–172. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVeljkovic V, Perovic V, Anderluh M, et al.: Dataset 1 in: A simple method for calculation of basic molecular properties of nutrients and their use as a criterion for a healthy diet. F1000Research. 2016. Data Source\n\nVeljkovic V, Perovic V, Anderluh M, et al.: Dataset 2 in: A simple method for calculation of basic molecular properties of nutrients and their use as a criterion for a healthy diet. F1000Research. 2016. Data Source\n\nVeljkovic V, Perovic V, Anderluh M, et al.: Dataset 3 in: A simple method for calculation of basic molecular properties of nutrients and their use as a criterion for a healthy diet. F1000Research. 2016. 
Data Source\n\nVeljkovic V, Perovic V, Anderluh M, et al.: Dataset 4 in: A simple method for calculation of basic molecular properties of nutrients and their use as a criterion for a healthy diet. F1000Research. 2016. Data Source\n\nLansky EP, Newman RA: Punica granatum (pomegranate) and its potential for prevention and treatment of inflammation and cancer. J Ethnopharmacol. 2007; 109(2): 177–206. PubMed Abstract | Publisher Full Text\n\nVeljkovic V, Perovic V, Anderluh M, et al.: Dataset 5 in: A simple method for calculation of basic molecular properties of nutrients and their use as a criterion for a healthy diet. F1000Research. 2016. Data Source\n\nTurrini E, Ferruzzi L, Fimognari C: Potential Effects of Pomegranate Polyphenols in Cancer Prevention and Therapy. Oxid Med Cell Longev. 2015; 2015: 938475. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVini R, Sreeja S: Punica granatum and its therapeutic implications on breast carcinogenesis: A review. Biofactors. 2015; 41(2): 78–89. PubMed Abstract | Publisher Full Text\n\nVlachojannis C, Zimmermann BF, Chrubasik-Hausmann S: Efficacy and safety of pomegranate medicinal products for cancer. Evid Based Complement Alternat Med. 2015; 2015: 258598. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSahebkar A, Ferri C, Giorgini P, et al.: Effects of Pomegranate juice on blood pressure: A systematic review and meta-analysis of randomized controlled trials. Pharmacol Res. 2016; 115: 149–161. PubMed Abstract | Publisher Full Text\n\nSahebkar A, Simental-Mendía LE, Giorgini P, et al.: Lipid profile changes after pomegranate consumption: A systematic review and meta-analysis of randomized controlled trials. Phytomedicine. 2016; 23(11): 1103–1112. PubMed Abstract | Publisher Full Text\n\nIsmail T, Sestili P, Akhtar S: Pomegranate peel and fruit extracts: a review of potential anti-inflammatory and anti-infective effects. J Ethnopharmacol. 2012; 143(2): 397–405. 
PubMed Abstract | Publisher Full Text\n\nDaglia M: Polyphenols as antimicrobial agents. Curr Opin Biotechnol. 2012; 23(2): 174–181. PubMed Abstract | Publisher Full Text\n\nLamoral-Theys D, Pottier L, Dufrasne F, et al.: Natural polyphenols that display anticancer properties through inhibition of kinase activity. Curr Med Chem. 2010; 17(9): 812–825. PubMed Abstract | Publisher Full Text\n\nGonzález R, Ballester I, López-Posadas R, et al.: Effects of flavonoids and other polyphenols on inflammation. Crit Rev Food Sci Nutr. 2011; 51(4): 331–362. PubMed Abstract | Publisher Full Text" }
[ { "id": "19213", "date": "30 Jan 2017", "name": "Timothy K Roberts", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper represents a further publication of a novel approach to defining functional properties of biological molecules. Previously the authors have proposed that the average quasi valence number (AQVN) and the electron-ion interaction potential (EIIP) play a determining role in the biological function of an organic molecule, presumably through facilitating molecular interactions. Calculations of these values are based on the atomic and valence numbers of atoms in the molecule in question. This paper addresses the question of consistency in AQVN and EIIP values in relation to known nutritional function. The data support the conclusion that known nutritionally important substances conform to definite subgroupings of AQVN and EIIP values. The paper raises many questions in the mind of a traditional biochemist, not the least of which is how this form of analysis relates to the protein components of milk, for example. It is well known that the nutritional and biological function of milk is not only dependent on the small molecules but is also dependent on the protein growth factors present as well.", "responses": [] }, { "id": "22039", "date": "08 May 2017", "name": "William W. 
Stringer", "expertise": [ "Reviewer Expertise Human Biochemistry", "Medicine", "and Exercise Physiology" ], "suggestion": "Approved", "report": "Approved\n\nSummary: Veljkovic et al. have investigated the electronic properties of some known human nutrients and biomolecules. They utilize the average quasi valence number (AQVN) and the electron-ion interaction potential (EIIP) as molecular descriptors of the properties of organic molecules (as they have in prior publications). They were able to represent groups of organic compounds according to their basic electronic properties. The authors conclude that a simple criterion for the selection of food components for healthy nutrition can be identified, and will serve as a basis for better understanding of the nutrients' biological functions in the future.\n\nMajor Issues: None\n\nMinor Issues: This reviewer wonders if Fresubin (being an \"unflavoured liquid consisting of protein (milk and soya), fat (soya, MCT, linseed, sunflower and fish oils), carbohydrate (maltodextrin), vitamins, minerals and trace elements.\") and human milk compare so favorably (Figure 4) due to similar starting materials (milk proteins, minerals, and trace elements)?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-13
https://f1000research.com/articles/6-10/v1
05 Jan 17
{ "type": "Research Note", "title": "Abnormal expression of ATP1A1 and ATP1A2 in breast cancer", "authors": [ "Alexey Bogdanov", "Fedor V. Moiseenko", "Michael Dubina" ], "abstract": "Breast cancer ranks first in incidence and second in mortality among all solid tumors occurring in women. The identification of molecular genetic abnormalities in breast cancer is important to improve the results of treatment. In the present study, we analyzed microarray data of breast cancer expression profiling (NCBI GEO database, accession GSE65194), focusing on Na+/K+-ATPase coding genes. We found overexpression of ATP1A1 and down-regulation of ATP1A2. We expect that our research could help to improve the understanding of predictive and prognostic features of breast cancer.", "keywords": [ "breast cancer", "Na+/K+-ATPase", "gene expression", "abnormality", "ATP1A1", "ATP1A2" ], "content": "Introduction\n\nBreast cancer is one of the most common and deadly female solid tumors1. According to reports from Perou et al.2, further confirmed by other investigators3,4, breast cancer is a highly molecularly heterogeneous disease. The identification of molecular genetic abnormalities in breast cancer is important to improve the results of treatment and, for instance, to reveal new targets for specific therapies. Recent studies based on original retrospective analysis of digitalis use in breast cancer patients have demonstrated the anticancer effect of cardiac glycosides5, which directly inhibit Na+/K+-ATPase (NKA) activity. Signaling functions of NKA after interaction with cardiac glycosides have also been shown6. It seems rational that expression of NKA might influence breast cancer prognosis.\n\nNKA is a significant integral membrane protein. NKA’s main function is the creation and maintenance of electrochemical gradients for sodium and potassium ions in the living cell. 
These gradients are critically important for the control of cell volume, osmolarity and resting potential7,8. The minimal functional NKA consists of two associated alpha- and beta-subunits. The catalytic alpha-subunit is responsible for converting ATP energy into transport of Na+ and K+ across the cell membrane, and carries the ATP and cardiac glycoside binding sites. It may be present in human tissues in four different isoforms (α1, α2, α3 and α4, the last found only in the testes). The beta-subunit is responsible for delivery and insertion of the alpha-subunit into the cell membrane and has three distinct isoforms in humans (β1, β2, β3)8–10. NKA subunits are variably expressed in different human tissues11. Changes in the relative expression of the different isoforms are associated with a number of pathological processes, including malignant transformation12,13. Both down- and up-regulation of alpha- and beta-subunits have been shown in solid tumors of different origin14–19.\n\nIn the present study, we analyzed public breast cancer expression profiles obtained with the Affymetrix Human Genome U133 Plus 2.0 Array (NCBI GEO database20, accession GSE65194) for the expression of the alpha subunits of NKA. We found abnormalities in ATP1A1 (coding the α1-subunit) and ATP1A2 (coding the α2-subunit) expression (Table 1) in breast cancer samples relative to their expression in normal breast tissue. ATP1A1 was overexpressed approximately 1.5-fold in all groups of breast cancer samples (p<0.05). Concurrently, ATP1A2 expression was decreased more than 2-fold (p<0.05). No differences were observed in the expression of ATP1A3 (coding the α3-subunit).\n\n\nMethods\n\nPreanalytical procedures consisted of a robust multi-array average (RMA) algorithm21, including background correction, probe set signal integration, and quantile normalization. For this purpose, we used Expression Console 1.4 software (Affymetrix, Inc. USA). We utilized Transcriptome Analysis Console 3.0 software (Affymetrix, Inc. 
USA) to analyze the obtained CHP files and to detect differentially expressed genes using one-way between subjects ANOVA. Array data for 41 triple negative samples (TNBC group), 30 Her2-positive (Her2 group), 30 Luminal B (Lum B group), 29 Luminal A (Lum A group) breast cancer samples and 11 normal breast tissue samples were investigated.\n\n\nConclusions\n\nUsing a public microarray dataset we found abnormalities in the expression of ATP1A1 and ATP1A2 in breast cancer samples. This may correlate with digitalis anticancer activity, but requires additional research. We expect that our research could help to improve the understanding of predictive and prognostic features of breast cancer.\n\n\nData and software availability\n\nRaw data for Table 1 are available at:\n\nhttps://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE65194&format=file22.\n\nExpression Console 1.4 software and Transcriptome Analysis Console 3.0 software (Affymetrix, Inc. USA) are available after free customer registration at:\n\nhttp://www.affymetrix.com/support/technical/software_downloads.affx.", "appendix": "Author contributions\n\n\n\nAB, FM and MD conceptualized the study, collected data and performed data analysis. All authors were involved in the writing and revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by The Ministry of Education and Science of Russian Federation (unique identifier of applied research: RFMEFI60414X0070).\n\n\nReferences\n\nSiegel RL, Miller KD, Jemal A: Cancer statistics, 2016. CA Cancer J Clin. 2016; 66(1): 7–30. PubMed Abstract | Publisher Full Text\n\nPerou CM, Sørlie T, Eisen MB, et al.: Molecular portraits of human breast tumours. Nature. 2000; 406(6797): 747–52. 
PubMed Abstract | Publisher Full Text\n\nSørlie T, Perou CM, Tibshirani R, et al.: Gene expression patterns of breast carcinomas distinguish tumor subclasses with clinical implications. Proc Natl Acad Sci U S A. 2001; 98(19): 10869–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSørlie T, Tibshirani R, Parker J, et al.: Repeated observation of breast tumor subtypes in independent gene expression data sets. Proc Natl Acad Sci U S A. 2003; 100(14): 8418–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrassas I, Diamandis EP: Novel therapeutic applications of cardiac glycosides. Nat Rev Drug Discov. 2008; 7(11): 926–35. PubMed Abstract | Publisher Full Text\n\nSchoner W, Scheiner-Bobis G: Endogenous and exogenous cardiac glycosides: their roles in hypertension, salt metabolism, and cell growth. Am J Physiol Cell Physiol. 2007; 293(2): C509–C36. PubMed Abstract | Publisher Full Text\n\nSkou JC: The influence of some cations on an adenosine triphosphatase from peripheral nerves. Biochim Biophys Acta. 1957; 23(2): 394–401. PubMed Abstract | Publisher Full Text\n\nSkou JC, Esmann M: The Na,K-ATPase. J Bioenerg Biomembr. 1992; 24(3): 249–61. PubMed Abstract\n\nMcDonough AA, Geering K, Farley RA: The sodium pump needs its beta subunit. FASEB J. 1990; 4(6): 1598–605. PubMed Abstract\n\nMercer RW: Structure of the Na,K-ATPase. Int Rev Cytol. 1993; 137C: 139–68. PubMed Abstract\n\nBlanco G, Mercer RW: Isozymes of the Na-K-ATPase: heterogeneity in structure, diversity in function. Am J Physiol. 1998; 275(5 Pt 2): F633–50. PubMed Abstract\n\nBabula P, Masarik M, Adam V, et al.: From Na+/K+-ATPase and cardiac glycosides to cytotoxicity and cancer treatment. Anticancer Agents Med Chem. 2013; 13(7): 1069–87. PubMed Abstract | Publisher Full Text\n\nSuhail M: Na(+), K(+)-ATPase: Ubiquitous Multifunctional Transmembrane Protein and its Relevance to Various Pathophysiological Conditions. J Clin Med Res. 2010; 2(1): 1–17. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuñol M, Cusi V, Cruz O, et al.: Immunohistochemical analyses of alpha1 and alpha3 Na+/K+-ATPase subunit expression in medulloblastomas. Anticancer Res. 2011; 31(3): 953–8. PubMed Abstract\n\nRajasekaran SA, Huynh TP, Wolle DG, et al.: Na,K-ATPase subunits as markers for epithelial-mesenchymal transition in cancer and fibrosis. Mol Cancer Ther. 2010; 9(6): 1515–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nInge LJ, Rajasekaran SA, Yoshimoto K, et al.: Evidence for a potential tumor suppressor role for the Na,K-ATPase beta1-subunit. Histol Histopathol. 2008; 23(4): 459–67. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEspineda C, Seligson DB, James Ball W Jr, et al.: Analysis of the Na,K-ATPase alpha- and beta-subunit expression profiles of bladder cancer using tissue microarrays. Cancer. 2003; 97(8): 1859–68. PubMed Abstract | Publisher Full Text\n\nRajasekaran SA, Ball WJ Jr, Bander NH, et al.: Reduced expression of beta-subunit of Na,K-ATPase in human clear-cell renal cell carcinoma. J Urol. 1999; 162(2): 574–80. PubMed Abstract | Publisher Full Text\n\nMijatovic T, Ingrassia L, Facchini V, et al.: Na+/K+-ATPase alpha subunits as new targets in anticancer therapy. Expert Opin Ther Targets. 2008; 12(11): 1403–17. PubMed Abstract | Publisher Full Text\n\nEdgar R, Domrachev M, Lash AE: Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 2002; 30(1): 207–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIrizarry RA, Bolstad BM, Collin F, et al.: Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Res. 2003; 31(4): e15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGEO accession GSE65194, Dubois T: Expression profiling of breast cancer samples from Institut Curie (Maire cohort) --Affy CDF. 2015. Reference Source" }
[ { "id": "20402", "date": "21 Feb 2017", "name": "Mikhail Fedyanin", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOver the past years several papers were published concerning prognostic role of ATP1A1 expression in hepatocellular carcinoma, lung cancer, and esophageal cancer. The authors of the present study show that increased expression of ATP1A1 observed at all breast cancer phenotypes compared to normal tissue.\nI would like to note that the authors studied gene expression only, but did not appreciate the immunohistochemical (IHC) changes in the content of gene products. In the absence of data of the IHC expression of ATP1A1, it is desirable to represent the differences in gene expression of ATP1A1 compared to referent genes for membrane transporters (http://bmcmolbiol.biomedcentral.com/articles/10.1186/1471-2199-7-29 ). Given a sufficiently large number of patients included in the study, it is interesting to evaluate the prognostic and predictive value of these findings. 
But I can conclude that this article is interesting for medical oncologists and molecular biologists.", "responses": [] }, { "id": "21728", "date": "10 May 2017", "name": "Jen-Tsan Ashley Chi", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nI think the analysis is appropriate to examine the relative expression of ATP1A1 and ATP1A2 among different breast cancer cells. The data analysis is standard and appropriate. One helpful step would be to validate the findings in other breast cancer expression datasets beyond this discovery dataset. Another relevant question is whether the abnormal expression of these genes is associated with varying clinical outcomes.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] } ]
1
https://f1000research.com/articles/6-10
https://f1000research.com/articles/5-2935/v1
30 Dec 16
{ "type": "Research Article", "title": "Health communication, information technology and the public’s attitude toward periodic general health examinations", "authors": [ "Quan-Hoang Vuong" ], "abstract": "Background: Periodic general health examinations (GHEs) are gradually becoming more popular as they employ subclinical screenings, as a means of early detection. This study considers the effect of information technology (IT), health communications and the public’s attitude towards GHEs in Vietnam. Methods: A total of 2,068 valid observations were obtained from a survey in Hanoi and its surrounding areas. Results: In total, 42.12% of participants stated that they were willing to use IT applications to recognise illness symptoms, and nearly 2/3 of them rated the healthcare quality at average level or below. Discussion: The data, which was processed by the BCL model, showed that IT applications (apps) reduce hesitation toward GHEs; however, older people seem to have less confidence in using these apps. Health communications and government’s subsidy also increased the likelihood of people attending periodic GHEs. The probability of early check-ups where there is a cash subsidy could reach approximately 80%.", "keywords": [ "general health examination", "subclinical screenings", "information and communication technology", "healthcare subsidy" ], "content": "Introduction\n\nNowadays, people tend to avoid taking clinical treatments, instead, they prefer having subclinical tests and screenings as preventive medicine1–4. Using mobile applications (apps) in medical care is now becoming more popular thanks to the proliferation of information technology (IT)5–8 (http://www.mobihealthnews.com/4740/physician-smartphone-adoption-rate-to-reach-81-in-2012). 
As of 2012, mobile technology was being used in medical care in 114 countries worldwide9, and a total of 165,000 mobile health apps were on the market in 2015 (http://www.imedicalapps.com/2015/09/ims-health-apps-report/), used in a variety of specialities from orthopaedics to cardiology10,11. West (2012) indicated that mobile technology was helping with chronic disease management, empowering the elderly and expectant mothers, reminding people to take medication at the proper time, extending services to underserved areas, and improving health outcomes and medical system efficiency9. In the same vein, some other studies also underscored the effectiveness of these apps in remote treatment in developing countries12–14. This efficiency was attributed to their supporting faster decision-making and quicker message transmission, thereby saving money9,15. However, Buijink et al. argued that almost all these mobile apps lacked authenticity or professional involvement, which could result in wrong diagnoses that may harm users10,18.\n\nDue to the above limitations, many people still prefer to have direct clinical check-ups with doctors for prevention and early detection through periodic general health examinations (GHEs). However, the clinical treatments, subclinical screenings and preventive services used typically cost a substantial amount of money19–21. People are more worried about increasing healthcare costs than about unemployment or terrorism22, since the financial burden could push them into poverty or even destitution23. Yet the quality of medical services is still not commensurate with what patients pay, as the majority of patients have low satisfaction with doctors and nursing care, especially with waiting time24,25. Responsiveness is usually the top factor that patients expect26,27, but the reality still falls far short of their expectations24,25,28,29. 
Those who have a higher educational background are more likely to demand higher standards of medical quality30,31. Conversely, the elderly tend to be more easily satisfied, with evidence from different countries32,33.\n\nHealth communications, usually delivering case information, social consequences and policy messages, also have a certain influence on people’s behaviours and attitudes toward medical services33. Vivid, fearful and credible messages are apparently more persuasive22,33–35. Younger people prefer social consequence communications, whereas older people are more influenced by physical consequences33. Furthermore, women respond to emotional messages with social consequences for oneself or health consequences to near and dear ones, whereas men are more influenced by unemotional messages that emphasise personal physical health consequences33.\n\nThe majority of Vietnamese households still take advice from relatives or friends rather than from professionals when making clinical treatment-related decisions36. Families are the primary units for health education across most countries, whatever the level of economic development, and help establish culturally engrained beliefs about health and illness37. Family members and friends are major sources of health information that can affect prevention, control and care activities38. Moreover, the social networks surrounding each health consumer also have powerful influences on their health beliefs and behaviours39. The quality of information and professional credibility are critical factors that help patients choose a healthcare provider40. 
However, it is not productive to encourage people to seek early detection, diagnosis and treatment when they have limited access to care, which is a reality in many developing countries41.\n\nIn this study, four models are employed to examine the influence of factors, including health communications, IT apps, age, educational background, willingness/hesitation toward periodic GHEs and government subsidies, on people’s attitudes and behaviours toward preventive, subclinical or GHE decisions.\n\n\nMethods\n\nA survey was conducted by the research team from the office of Vuong & Associates (http://www.vuongassociates.com/home), who directly interviewed people in the areas of Hanoi and Hung Yen (Vietnam) between September and October 2016. The study was performed under a license granted by the joint Ethics Board of Hospital 125 Thai Thinh, Hanoi, and Vuong & Associates Research Board (V&A/07/2016; 15 September 2016). Written informed consent was obtained from the participants prior to starting the survey. The questions selected were fairly simple and easy to understand, which, when coupled with the enthusiasm of the participants, led to straightforward interviews. The subjects of the survey were chosen completely at random and there were no exclusion criteria. The obtained dataset contained 2,068 observations (Dataset 142).\n\nRegarding the data collection process, since the sample is random, no specific criteria for selecting particular groups of people, such as by gender, age or occupation, were imposed. The survey team targeted places where most people were willing to spend time taking part in the survey. The interviewing places were public and private hospitals, junior high and high schools and business offices around Hanoi. Each respondent was given 10 to 20 minutes for the questionnaire, and the survey took place after the participant had understood the research ethics, content of the survey and ways of responding to the questions. 
The full questionnaire was delivered in Vietnamese, with a clear statement of research ethics standards, and is provided in Supplementary File 1 (an English translation can be found in Supplementary File 2).\n\nApart from the basic descriptive statistics, the present study employed statistical methods of categorical data analysis for modelling baseline category logits (i.e., BCL models), with the existence of continuous variables, as provided in Table 2. The practical estimations of categorical data following BCL models follow23.\n\nThe data were entered into Microsoft Office Excel 2007, then processed by R (3.3.1). The estimates in the study were made using BCL logistic regression models23 to predict the likelihood of a category of response variable Y in various conditions of predictor variable x.\n\nThe general equation of the baseline-category logit model is:\n\nln(πj(x)/πJ(x)) = αj + βj′x,       j = 1, …, J−1.\n\nin which x is the independent variable; and πj(x) = P(Y=j|x) is its probability. Thus πj = P(Yij=1), with Y being the dependent variable.\n\nIn the logit model in consideration, the probability of an event is calculated as:\n\nπj(x) = exp(αj + βj′x)/[1 + ∑h=1…J−1 exp(αh + βh′x)]\n\nwith ∑j πj(x) = 1; αJ = 0 and βJ = 0; n is the number of observations in the sample, j is the categorical value of an observation i, and h indexes the non-baseline categories, see 23. In the analysis, z-value and p-value are the bases to conclude the statistical significance of predictor variables in the models, with P < 0.05 being the conventional level of statistical significance required for a positive result.\n\n\nResults\n\nThe sample totalled 2,068 participants, of which 1,510 had an educational level of university or above (73.02%). 
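The baseline-category logit (BCL) probabilities used throughout the Results can be sketched in a few lines of code. A minimal Python illustration (the function and variable names are hypothetical and not part of the study's R scripts):

```python
import math

def bcl_probabilities(logits):
    """Given the J-1 baseline-category logits ln(pi_j/pi_J) = alpha_j + beta_j'x,
    return all J category probabilities; the baseline category J comes last."""
    exps = [math.exp(l) for l in logits]   # exp(alpha_j + beta_j'x), j = 1..J-1
    denom = 1.0 + sum(exps)                # the baseline contributes exp(0) = 1
    return [e / denom for e in exps] + [1.0 / denom]

# With a single zero logit the two categories are equally likely,
# and the returned probabilities always sum to one.
probs = bcl_probabilities([0.0])
```

Each probability reported in the paper (e.g., πclinic or πyes) is obtained in this way from the fitted intercepts and coefficients.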
A total of 1,073 participants expressed hesitation toward attending GHEs because they do not consider them urgent or important (Table 1).\n\nWhen seeing clinical signs, many respondents choose clinics as the first priority (43.04%), while 29.45% seek relatives’ or friends’ advice and 27.51% prefer to self-study. Furthermore, the majority (86.32%) are ready to pay for healthcare if the cost of a periodic GHE is less than VND 2 million.\n\nOf the participants, 42.12% were willing to use mobile health apps if they are supposedly credible. If the apps reveal some health problems, 78.96% of participants may or will certainly go to the clinic to receive a check-up. Regarding the quality of medical services, most of the respondents expressed poor experiences; 1,291 participants scored the quality of medical services as medium, while 60 scored it as low.\n\nRegarding people’s assessments of GHE quality, a scale of 5 (1 is lowest, 5 is highest) was used. “Respon” is the element that was assessed lowest among the five elements (Responsiveness, Tangibility, Reliability, Assurance and Empathy) with 3.38 points (Tangibility 3.61 points; Reliability 3.57 points; Assurance 3.69 points; and Empathy 3.47 points) and is 0.17 points lower than the composite score (3.55). On the contrary, when it comes to health communications, ‘sufficiency of information’ achieved 3.01 points (95% CI: 2.96 - 3.06), which is the highest among the four components constituting the factor of health communications, apart from ‘the efficiency of health communications’, which at 2.83 is 0.18 points higher than the average (the two other components are attractiveness (2.69 points) and emphasis of information (2.82 points)).\n\n*Note: Codes of variables used in R estimations in brackets\n\nPropensities toward the first choice when experiencing disease symptoms. 
Employing logistic regression estimations with the dependent variable “StChoice” against four independent variables “Edu”, “Age”, “Respon” and “PopularInfo”, introduced in Table 2, the results reported in Table 3 show that there are relationships between the choice people prioritise when they recognise their symptoms and their age, educational background, physicians’ responsiveness and the sufficiency of health information.\n\n*Note: Variables “Respon”, “PopularInfo” and “SuffInfo” have the lowest value of 1 and highest 5.\n\nSignif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1; z-value in square brackets; baseline category for: “Edu”=“Uni”. Residual deviance: 4304.03 on 4126 degrees of freedom.\n\n(Eq.1) and (Eq.2) are established based on Table 3 as follows:\n\nln(πaskrel/πselfstudy) = 1.004 + 0.712×Hi.Edu – 0.025×Age – 0.225×Respon + 0.123×PopularInfo                (Eq.1)\n\nln(πclinic/πselfstudy) = –0.673 + 0.578×Hi.Edu + 0.026×Age – 0.067×Respon + 0.158×PopularInfo                (Eq.2)\n\nFrom the two formulas above, the probability that a person aged 30 with a high-school education or below, giving 3.38 points for doctors’ responsiveness and 2.8 points for the efficiency of health communications (the average points), chooses to go to a clinic as the first choice is:\n\nπclinic = e^(–0.673+0.578+0.026×30–0.067×3.38+0.158×2.8)/[1 + e^(–0.673+0.578+0.026×30–0.067×3.38+0.158×2.8) + e^(1.004+0.712–0.025×30–0.225×3.38+0.123×2.8)] = 0.474\n\nIn the same manner, the probability calculated in the case that this person has a university or higher educational background is 42.74%.\n\nDecision to attend periodic GHE after using IT apps. 
The results of logistic regression with the independent variables “Age”, “UseIT”, “PopularInfo” and the dependent variable “AfterIT” show the effect of age, the efficiency of health communications and the readiness to use IT health apps on the decision to attend a GHE if the apps identify health problems.\n\nFrom that, in ln(πmaybe/πyes), the intercept β0=1.624 (P<0.001, z=6.833), the coefficient of “Age” β1=0.001 (P<1, z=0.165); the coefficient of “UseIT” at “no” is β2=-1.744 (P<0.001, z=-9.816) and at “yes” is β3=-2.558 (P<0.001, z=-19.870). The coefficient of “PopularInfo” β4=-0.008 (P<1, z=-0.169).\n\nIn ln(πno/πyes), the intercept β0=-1.290 (P<0.001, z=-3.785), the coefficient of “Age” β1=0.026 (P<0.001, z=3.470); the coefficient of “UseIT” at “no” is β2=2.022 (P<0.001, z=9.095) and at “yes” β3=-1.774 (P<0.001, z=-6.859). For the coefficient “PopularInfo”, β4=-0.210 (P<0.01, z=-3.094).\n\nThe two formulas below describe the relationships between the factors:\n\nln(πmaybe/πyes) = 1.624 + 0.001×Age – 1.744×no.UseIT – 2.558×yes.UseIT – 0.008×PopularInfo                (Eq.3)\n\nln(πno/πyes) = –1.290 + 0.026×Age + 2.022×no.UseIT – 1.774×yes.UseIT – 0.210×PopularInfo                (Eq.4)\n\nBased on (Eq.3) and (Eq.4), the probability of a patient attending a GHE after IT apps reveal health problems, with “Age”=30, “PopularInfo”=2.80 and “UseIT”=“yes”, is 68.84%. In the case “UseIT”=“no”, πyes=22.66%.\n\nEmploying a logistic regression model with the response “QualExam” and two continuous independent variables “SuffInfo” and “PopularInfo”, the results are described as follows. In ln(πhi/πmed), the intercept β0=-1.525 (P<0.001, z=-10.317), the coefficients of “SuffInfo” and “PopularInfo” are β1=0.114 (P<0.05, z=2.298) and β2=0.204 (P<0.001, z=4.169), respectively. 
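As a numerical check, the probabilities quoted for (Eq.3) and (Eq.4) can be reproduced directly from the listed coefficients. A short Python sketch (the function name is hypothetical; the coefficients are those quoted above):

```python
import math

def p_yes_after_it(age, popular_info, use_it):
    """Probability of the baseline answer 'yes' (will attend a GHE after
    IT apps reveal health problems), from the (Eq.3)/(Eq.4) coefficients."""
    no_it = 1.0 if use_it == "no" else 0.0
    yes_it = 1.0 if use_it == "yes" else 0.0
    # (Eq.3): ln(pi_maybe / pi_yes)
    l_maybe = 1.624 + 0.001 * age - 1.744 * no_it - 2.558 * yes_it - 0.008 * popular_info
    # (Eq.4): ln(pi_no / pi_yes)
    l_no = -1.290 + 0.026 * age + 2.022 * no_it - 1.774 * yes_it - 0.210 * popular_info
    # 'yes' is the baseline category, contributing exp(0) = 1 to the denominator.
    return 1.0 / (1.0 + math.exp(l_maybe) + math.exp(l_no))

p_yes_after_it(30, 2.80, "yes")  # ≈ 0.6884, the reported 68.84%
p_yes_after_it(30, 2.80, "no")   # ≈ 0.2266, the reported 22.66%
```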
In addition, for ln(πlow/πmed), the intercept β0=-1.454 (P<0.001, z=-4.235), and the coefficients of “SuffInfo” and “PopularInfo” are β1=-0.635 (P<0.001, z=-4.080) and β2=-0.005 (P<1, z=-0.035), respectively.\n\nThe two regression equations:\n\nln(πhi/πmed) = –1.525 + 0.114 × SuffInfo + 0.204 × PopularInfo                (Eq.5)\n\nln(πlow/πmed) = –1.454 – 0.635 × SuffInfo – 0.005 × PopularInfo                (Eq.6)\n\nThe relationship between hesitation toward GHEs (due to perceived non-urgency and unimportance), readiness due to community subsidy, affordable costs and the use of the subsidy is confirmed by the following results: In ln(πallsoon/πpartly), the intercept β0=1.868 (P<0.001, z=12.763), the coefficient of “NotImp” at “yes” is β1=-0.350 (P<0.01, z=-2.706), the coefficient of “ComSubsidy” at “yes” is β2=0.097 (P<1, z=0.751), the coefficient of “AffCost” at “hi” is β3=0.699 (P<0.05, z=2.477) and at “low” is β4=-0.752 (P<0.001, z=-5.490).\n\nLikewise, in ln(πlater/πpartly), the intercept β0=0.910 (P<0.001, z=5.464), the coefficient of “NotImp” at “yes” is β1=0.303 (P<0.05, z=1.989), the coefficient of “ComSubsidy” at “yes” is β2=-0.672 (P<0.001, z=-4.459), and “AffCost” at “hi” is β3=0.790 (P<0.01, z=2.622) and at “low” is β4=-0.916 (P<0.001, z=-5.714).\n\nRegression equations (Eq.7) and (Eq.8) are built based on the above results:\n\nln(πallsoon/πpartly) = 1.868 – 0.350×yes.NotImp + 0.097×yes.ComSubsidy + 0.699 × hi.AffCost – 0.752×low.AffCost                (Eq.7)\n\nln(πlater/πpartly) = 0.910 + 0.303×yes.NotImp – 0.672×yes.ComSubsidy + 0.790 × hi.AffCost – 0.916×low.AffCost                (Eq.8)\n\nFrom (Eq.7) and (Eq.8), the probability that a person who is ready to participate in a GHE thanks to a community subsidy, has no hesitation and is willing to pay a high cost uses all of the subsidy soon is calculated as follows:\n\nπallsoon = e^(1.868+0.097+0.699)/[1 + e^(1.868+0.097+0.699) + e^(0.910–0.672+0.790)] = 0.791\n\nThe same procedure could be used to compute other likelihoods (Supplementary File 
3).\n\n\nDiscussion\n\nComparing πclinic=47.4% when “Edu”=“Hi” with πclinic=42.74% when “Edu”=“Uni”, it can be concluded that people with lower levels of education (high school or less) are more likely to go to clinics than those with a higher education (university or above). Also, a change of πclinic from 43.7% to 51.6% as “PopularInfo” runs from 1 to 5 points indicates that effective communication will increase the likelihood of people going to clinics when finding illness symptoms. Similarly, πclinic also increases if physicians’ responsiveness is rated at a high level. Moreover, it can be seen that the older people are, the higher the probability that they prioritise visiting clinics (Table 4a).\n\nFrom the two equations (Eq.3) and (Eq.4), it can be observed that the absolute value of the coefficient corresponding to the variable “UseIT” is the largest, with β3=-2.558 (P < 0.001) in (Eq.3) and β2=2.022 (P < 0.001) in (Eq.4). This means that the probability of attending a GHE after using IT apps is affected most strongly by the readiness or hesitation toward using IT health apps. In addition, Table 4b shows that the likelihood of attending a GHE after using IT apps decreases as age increases. In contrast, this figure increases as health communication becomes more popular.\n\nRegarding assessment of the quality of healthcare services, the probability of a high score is larger than that of a low score in all conditions; in particular, when the efficiency of communication and the sufficiency of information reach the highest point (5 points), the probability that healthcare quality is assessed highly is largest (πhi > 40%). Therefore, it can be stated that the more widely and adequately information is disseminated, the more likely people are to feel positive about healthcare quality (Table 4c).\n\nIt can be seen that the regression coefficient β1 of variable “NotImp” in (Eq.7) is negative and is positive in (Eq.8). 
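The subsidy-use probability πallsoon ≈ 0.791 can likewise be re-derived from the intercepts and coefficients reported for ln(πallsoon/πpartly) and ln(πlater/πpartly). A minimal Python check for the scenario NotImp="no", ComSubsidy="yes", AffCost="hi" (variable names are illustrative only):

```python
import math

# Linear predictors for NotImp="no", ComSubsidy="yes", AffCost="hi",
# built from the quoted intercepts and coefficients.
l_allsoon = 1.868 + 0.097 + 0.699   # ln(pi_allsoon / pi_partly)
l_later = 0.910 - 0.672 + 0.790     # ln(pi_later / pi_partly)

# 'partly' is the baseline category, contributing exp(0) = 1.
denom = 1.0 + math.exp(l_allsoon) + math.exp(l_later)
p_allsoon = math.exp(l_allsoon) / denom  # ≈ 0.791
```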
Therefore, those who are hesitant, due to considering GHEs as not urgent or important, are less likely to make use of the total subsidy in the near future. The influence of “ComSubsidy” and “AffCost” is clarified through the analyses of Figure 1.\n\nThe figure represents trends of changing probabilities of using funds available for GHEs, controlling for the provision of community cash support. With community subsidies, respondents showed a stronger propensity to quickly use up the funds for GHEs.\n\nFirstly, it can be seen that the probability lines of “using all the money soon” (“allsoon”) in both charts in Figure 1 have downward trends when moving from point “hi” to point “low” of “AffCost”, whereas the opposite trend occurs for the “later_partly” line. This means that the probability of using all the money soon falls when people are only willing to pay a low cost for a GHE. Moreover, (Eq.7) and (Eq.8) also imply that acceptable costs have the strongest impact on the use of provided money for GHEs.\n\nFurthermore, the probability line of “allsoon” ranges from over 55% to nearly 70% in Figure 1 (left panel) and from over 47% to nearly 53% in the right panel. Therefore, participants tend to use all the money for an early GHE if they receive a subsidy from the community or government.\n\nFinally, the two probability lines in Figure 1 (left panel) lie separately, while those in the right panel intersect with one another. This suggests that when a person demonstrates a willingness toward GHEs due to a community subsidy, they tend to give priority to GHEs.\n\n\nConclusion\n\nThe analyses in the present study help to provide some valuable conclusions, as follows:\n\nIT apps increase the likelihood of GHE participation, as 83% of participants said they might or would definitely visit a doctor if the apps reveal health problems or illness symptoms. The remainder expressed doubts about the reliability of the apps. 
This usually occurred in older people; nearly ¾ of people aged above 50 years did not completely trust the quality of these mobile apps.\n\nEducational attainment also strongly influences the decision to participate in GHEs (with β2=0.712 (P<0.001) in (Eq.1) and β2=0.578 (P<0.001) in (Eq.2)). The preventive medicine and subclinical tests applied in GHEs require inquiry and a certain amount of knowledge, which is limited among people with a lower level of education. In this case, the clinical methods appear more effective. These people are eager to get direct advice from relatives, friends or doctors, while only about 18% of participants preferred self-study.\n\nBy contrast, effective health communications helped participants obtain enough information and thus form a more trustworthy basis of comparison, instead of relying on purely emotional and personal conclusions, so that evaluations tend to be improved and more objective. The proof is that nearly 70% of respondents rated the quality of healthcare services highly if they rated the sufficiency and coverage of information highly. Moreover, ITs also reduce the cost of information36. However, health communications in Vietnam are still deficient, especially in their limited reach (assessment of efficacy: 2.8 out of 5 points; Table 2). Therefore, people expect better coverage of health information.\n\nApart from ICTs, community/government subsidies are also one measure that promotes GHEs. People tend to attend early GHEs when they receive cash subsidies (58.4 – 79.1%). However, about 52% of participants do not appreciate the importance of regular check-ups (Table 1). This may be due to limited finances (accounting for 60.8%), but might also be because they feel GHEs are not really necessary; therefore, they could use the subsidy for other, improper purposes (accounting for 37.81%). 
For that reason, the authorities and communities need to provide support in a well-designed manner in order to further promote the public’s readiness toward GHEs for themselves and their families.\n\nAlso, it cannot be denied that the quality of healthcare services in clinics and hospitals, particularly the responsiveness of nurses and doctors, remains low. With an average of 3.38 out of 5 points, responsiveness is rated lowest among the five elements included, whereas the empirical average score for the quality of medical services is only at a medium level (3.55 out of 5 points). This somewhat reduces people’s desire to go to hospitals to check their health. It is therefore clearly necessary to improve the quality of medical services in Vietnam, especially in public hospitals, since people tend to be more satisfied with private hospitals31.\n\n\nData availability\n\nDataset 1: Raw data gathered from the survey. DOI, 10.5256/f1000research.10508.d14754842. The data table used for providing descriptive statistics and preparing data subsets for statistical analysis (see also Supplementary Table 1).", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: The survey questionnaire is provided in full (in Vietnamese).\n\nClick here to access the data.\n\nSupplementary File 2: The survey questionnaire is provided in full (in English).\n\nClick here to access the data.\n\nSupplementary File 3: Estimations in R. These data files are available for verification and re-confirmation of the results found by the present study.\n\nClick here to access the data.\n\nSupplementary Table 1: Contingency table for estimations. 
Counts for relevant factors involved in statistical analysis.\n\nClick here to access the data.\n\n\nReferences\n\nCherrington A, Corbie-Smith G, Pathman DE: Do adults who believe in periodic health examinations receive more clinical preventive services? Prev Med. 2007; 45(4): 282–289. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurton LC, Steinwachs DM, German PS, et al.: Preventive services for the elderly: would coverage affect utilization and costs under Medicare? Am J Public Health. 1995; 85(3): 387–391. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFinkelstein MM: Preventive screening. What factors influence testing? Can Fam Physician. 2002; 48: 1494–1501. PubMed Abstract | Free Full Text\n\nNakanishi N, Tatara K, Fujiwara H: Do preventive health services reduce eventual demand for medical care? Soc Sci Med. 1996; 43(6): 999–1005. PubMed Abstract | Publisher Full Text\n\nGarritty C, El Emam K: Who’s using PDAs? Estimates of PDA use by health care providers: a systematic review of surveys. J Med Internet Res. 2006; 8(2): e7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaldwin LP, Low PH, Picton C, et al.: The use of mobile devices for information sharing in a technology-supported model of care in A&E. Int J Electron Healthc. 2007; 3(1): 90–106. PubMed Abstract | Publisher Full Text\n\nOzdalga E, Ozdalga A, Ahuja N: The smartphone in medicine: a review of current and potential use among physicians and students. J Med Internet Res. 2012: 14(5): e128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoehler N, Vujovic O, McMenamin C: Healthcare professionals’ use of mobile phones and the internet in clinical practice. JMTM. 2013; 2: 3–13. Publisher Full Text\n\nWest D: How mobile devices are transforming healthcare. Issues in Tech Innovation. 2012; 18: 1–14. Reference Source\n\nHamilton AD, Brady RR: Medical professional involvement in smartphone ‘apps’ in dermatology. Br J Dermatol. 2012; 167(1): 220–221. 
PubMed Abstract | Publisher Full Text\n\nAbboudi H, Amin K: Smartphone applications for the urology trainee. BJU Int. 2011; 108(9): 1371–1373. PubMed Abstract | Publisher Full Text\n\nKaplan WA: Can the ubiquitous power of mobile phones be used to improve health outcomes in developing countries? Global Health. 2006; 2: 9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartinez AW, Phillips ST, Carrilho E, et al.: Simple telemedicine for developing regions: camera phones and paper-based microfluidic devices for real-time, off-site diagnosis. Anal Chem. 2008; 80(10): 3699–3707. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFraser HS, Jazayeri D, Nevil P, et al.: An information system and medical record to support HIV treatment in rural Haiti. BMJ. 2004; 329(7475): 1142–1146. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVentola CL: Mobile devices and apps for health care professionals: uses and benefits. P T. 2014; 39(5): 356–364. PubMed Abstract | Free Full Text\n\nBuijink AW, Visser BJ, Marshall L: Medical apps for smartphones: lack of evidence undermines quality and safety. Evid Based Med. 2013; 18(3): 90–92. PubMed Abstract | Publisher Full Text\n\nRosser BA, Eccleston C: Smartphone applications for pain management. J Telemed Telecare. 2011; 17(6): 308–312. PubMed Abstract | Publisher Full Text\n\nVisvanathan A, Hamilton A, Brady RR: Smartphone apps in microbiology--is better regulation required? Clin Microbiol Infect. 2012; 18(7): E218–220. PubMed Abstract | Publisher Full Text\n\nGandjour A, Lauterbach KW: Preventive care and the prospect of cost savings. Eur J Health Econ. 2006; 3: 1–2.\n\nFletcher RH: Review: Periodic health examination increases delivery of some clinical preventive services and reduces patient worry. Evid Based Med. 2007; 12(4): 118. PubMed Abstract | Publisher Full Text\n\nMerenstein D, Daumit GL, Powe NR: Use and costs of nonrecommended tests during routine preventive health exams. Am J Prev Med. 
2006; 30(6): 521–527. PubMed Abstract | Publisher Full Text\n\nGurchiek K: Health contributions tied to workers’ pay. HR Magazine. 2005; 50(2): 28–32. Reference Source\n\nVuong QH: Be rich or don’t be sick: estimating Vietnamese patients’ risk of falling into destitution. Springerplus. 2015; 4: 529. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim PC, Tang NK: A study of patients' expectations and satisfaction in Singapore hospitals. Int J Health Care Qual Assur Inc Leadersh Health Serv. 2000; 13(6–7): 290–299. PubMed Abstract | Publisher Full Text\n\nKhan MH, Hassan R, Anwar S, et al.: Patient satisfaction with nursing care. RMJ. 2007; 32(1): 28–30. Reference Source\n\nBleich SN, Ozaltin E, Murray CK: How does satisfaction with the health-care system relate to patient experience? Bull World Health Organ. 2009; 87(4): 271–278. PubMed Abstract | Free Full Text\n\nValentine N, Darby C, Bonsel GJ: Which aspects of non-clinical quality of care are most important? Results from WHO's general population surveys of “health systems responsiveness” in 41 countries. Soc Sci Med. 2008; 66(9): 1939–1950. PubMed Abstract | Publisher Full Text\n\nEpner JE, Levenberg PB, Schoeny ME: Primary care providers' responsiveness to health-risk behaviors reported by adolescent patients. Arch Pediatr Adolesc Med. 1998; 152(8): 774–780. PubMed Abstract\n\nAndaleeb SS: Service quality perceptions and patient satisfaction: a study of hospitals in a developing country. Soc Sci Med. 2001; 52(9): 1359–1370. PubMed Abstract | Publisher Full Text\n\nGonzález-Valentín A, Padín-López S, de Ramón-Garrido E: Patient satisfaction with nursing care in a regional university hospital in southern Spain. J Nurs Care Qual. 2005; 20(1): 63–72. PubMed Abstract\n\nUzun Ö: Evaluation of satisfaction with nursing care of patients hospitalized in surgical clinics of different hospitals. IJCS. 2015; 8(1): 19–24. 
Reference Source\n\nCoulter A, Jenkinson C: European patients' views on the responsiveness of health systems and healthcare providers. Eur J Public Health. 2005; 15(4): 355–360. PubMed Abstract | Publisher Full Text\n\nKeller PA, Lehmann DR: Designing effective health communications: a meta-analysis. JPP&M. 2008; 27(2): 117–130. Publisher Full Text\n\nBlock LG, Keller PA: Effects of self-efficacy and vividness on the persuasiveness of health communications. J Consum Psychol. 1997; 6(1): 31–54. Publisher Full Text\n\nSutton SM, Eisner EJ, Burklow J: Health communications to older Americans as a special population. The National Cancer Institute's consumer-based approach. Cancer. 1994; 74(7 Suppl): 2194–2199. PubMed Abstract | Publisher Full Text\n\nVuong QH: Information expensiveness perceived by Vietnamese patients with respect to healthcare provider's choice. Acta Inform Med. 2016; 24(5): 280–283. Publisher Full Text\n\nKreps GL: Communication and health education. In Communication and health: Systems and Applications. (ed. Eileen B, & Lewis D). Routledge, 1990; 187–203. Reference Source\n\nKreps GL, Kunimoto EN: Effective communication in multicultural health care settings. Sage Publications, 1994. Publisher Full Text\n\nPatrick K, Intille SS, Zabinski MF: An ecological framework for cancer communication: implications for research. J Med Internet Res. 2005; 7(3): e23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVuong QH, Nguyen TK: Vietnamese patients' choice of healthcare provider: in search of quality information. Int J Behav Healthcare Res. 2015; 5(3/4): 184–212. Publisher Full Text\n\nKanavos P: The rising burden of cancer in the developing world. Ann Oncol. 2006; 17(Suppl 8): viii15–viii23. PubMed Abstract | Publisher Full Text\n\nVuong QH: Dataset 1 in: Health communication, information technology and the public’s attitude toward periodic general health examinations. F1000Research. 2016. Data Source" }
[ { "id": "18862", "date": "03 Jan 2017", "name": "Cuong Viet Nguyen", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThank you for giving me a chance to review the paper ‘Health communication, information technology and the public’s attitude toward periodic general health examinations’. I find the paper interesting and important for health care. In Vietnam, as in other countries, there is an increasing number of smartphones but low healthcare utilization. The paper shows that the development of reliable IT apps is a useful way to increase healthcare utilization. The title and abstract are appropriate. The study design is well described. Overall, the paper is well written. I would like to suggest this paper for indexing.\n\nBest regards,\n\nCuong", "responses": [ { "c_id": "2399", "date": "03 Jan 2017", "name": "Quan-Hoang Vuong", "role": "Author Response", "response": "I would like to thank Professor Cuong V. Nguyen of the National Economics University (Vietnam) for the comment, and especially for the valid point on utilizing IT devices and facilities. Given the widespread problems of cancer and diabetes in Vietnam and many other emerging market economies, where rising economic standards have not necessarily been followed by improved health and healthcare standards and the populace is subject to unequal access to quality health services, ICT solutions appear to have been untapped. 
I believe the theme will invite further research efforts by both health economists and health social scientists, which would likely attract increasing interest from a broader scholarly community for the sake of bettering the population's health." } ] }, { "id": "18859", "date": "13 Jan 2017", "name": "Bach Xuan Tran", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe study findings enrich the literature on factors that influence health behaviors and health care service seeking in Vietnam. The analysis was sufficiently robust and the study has enough scientific merit for indexing.\nAs for sampling, the author may consider describing the size and development of the sample frame and the selection approach. If it was simple random sampling, how did the author randomly select and approach the subjects?", "responses": [ { "c_id": "2428", "date": "13 Jan 2017", "name": "Quan-Hoang Vuong", "role": "Author Response", "response": "I would like to thank Professor Bach Xuan Tran for the review report and related comment. With respect to Prof. Tran's suggestion on a further description of the sample and the sampling practice, I find the point both valid and useful. I will elaborate when an opportunity for a data article arises, and will provide the details when ready. Sincerely, Quan-Hoang Vuong" } ] } ]
1
https://f1000research.com/articles/5-2935
https://f1000research.com/articles/5-2934/v1
30 Dec 16
{ "type": "Research Article", "title": "Inducible targeting of CNS astrocytes in Aldh1l1-CreERT2 BAC transgenic mice", "authors": [ "Jan Winchenbach", "Tim Düking", "Stefan A. Berghoff", "Sina K. Stumpf", "Swen Hülsmann", "Klaus-Armin Nave", "Gesine Saher" ], "abstract": "Background: Studying astrocytes in higher brain functions has been hampered by the lack of genetic tools for the efficient expression of inducible Cre recombinase throughout the CNS, including the neocortex. Methods: Therefore, we generated BAC transgenic mice, in which CreERT2 is expressed under control of the Aldh1l1 regulatory region. Results: When crossbred to Cre reporter mice, adult Aldh1l1-CreERT2 mice show efficient gene targeting in astrocytes. No such Cre-mediated recombination was detectable in CNS neurons, oligodendrocytes, and microglia. As expected, Aldh1l1-CreERT2 expression was evident in several peripheral organs, including liver and kidney. Conclusions: Taken together, Aldh1l1-CreERT2 mice are a useful tool for studying astrocytes in neurovascular coupling, brain metabolism, synaptic plasticity and other aspects of neuron-glia interactions.", "keywords": [ "Astrocyte", "Bergmann glia", "inducible Cre recombinase", "tamoxifen", "neuroscience" ], "content": "Introduction\n\nCre-mediated recombination of target genes in adult astrocytes requires the use of an inducible expression system, because many promoters of the astrocyte lineage are also active in multipotential neural stem cells in the subventricular and subgranular zones (Christie et al., 2013). Thus, transgenic mouse lines have been generated for tamoxifen-inducible Cre recombination of target genes in mature astrocytes (Chow et al., 2008; Ganat et al., 2006; Hirrlinger et al., 2006; Mori et al., 2006; Slezak et al., 2007). 
However, none of them achieves sufficient recombination to study the function of genes in the majority of cortical and spinal cord astrocytes.\n\nThe aldehyde dehydrogenase 1 family member L1 (Aldh1l1), also known as 10-formyltetrahydrofolate dehydrogenase (EC 1.5.1.6), converts 10-formyltetrahydrofolate to tetrahydrofolate and CO2 together with the reduction of NADP+ (Kutzbach & Stokstad, 1971). The Aldh1l1 gene is expressed in a subset of radial glia in the midline of the embryonic CNS (Anthony & Heintz, 2007) and neuronal precursors (Foo & Dougherty, 2013). By transcriptional profiling in postnatal brain, Aldh1l1 was identified as being specifically expressed in astrocytes (Cahoy et al., 2008), which increase Aldh1l1 expression about tenfold with maturation (Zhang et al., 2014). To date, Aldh1l1 is regarded as a pan-astrocyte marker, as determined in BAC transgenic mice with a fluorescent reporter protein or constitutive Cre expression under control of the Aldh1l1 promoter (Heintz, 2004; Yang et al., 2011). Therefore, we selected the Aldh1l1 regulatory region and a similar BAC transgenic strategy to target transgenic expression of CreERT2 to mature astrocytes.\n\n\nResults and discussion\n\nWe generated Aldh1l1-CreERT2 transgenic mice by inserting a CreERT2 cassette (Sauer, 1994) under control of the Aldh1l1 promoter in a murine BAC (BAC RP23-7M9). Targeting the first coding exon of Aldh1l1 by homologous recombination, we substituted the open reading frame of exon 2 with the CreERT2 cDNA (Figure 1a). Three lines of BAC transgenic mice were obtained by pronuclear injection, and crossbred with the Cre reporter mice ROSA26-Tdto or ROSA26-Eyfp (Madisen et al., 2010; Srinivas et al., 2001). Based on the degree of expression, one of the three lines of Aldh1l1-CreERT2 mice was selected for detailed characterization of double-transgenic offspring.\n\na) Scheme of the cloning strategy of the Aldh1l1-CreERT2 BAC transgene. 
b) Immunoblot detecting RFP (tdTomato) in cortex (CTX), cerebellum (CB) and spinal cord (SC) lysates of two animals each, as indicated. GAPDH shows comparable loading of protein. c) Direct fluorescence of the Cre-reporter tdTomato in sagittal sections of Aldh1l1-CreERT2*ROSA26-Tdto mice. d) Immunolabeling of the astrocyte marker S100beta in the cortex reveals almost complete overlap with the tdTomato Cre reporter in astrocytes. Scale, 20 µm. e) CCD camera image of a tdTomato positive astrocyte with the position of the patch pipette outlined as dashed lines (scale, 20 µm, left) that showed a typical passive response to the voltage step protocol (middle). The IV-curve of this cell is shown (right panel, open circles) together with the averaged IV curve of all 18 analyzed cells (mean ± sd).\n\nFirst, we determined the leakiness of reporter expression in adult Aldh1l1-CreERT2 mice. After corn oil injections in Aldh1l1-CreERT2*ROSA26-Tdto mice, we found very few labeled cells (less than 5 per section), demonstrating that the inducible Cre system operates tightly. In parallel experiments, adult Aldh1l1-CreERT2 mice were analyzed 7 days after tamoxifen induction. Sagittal brain sections revealed numerous tdTomato Cre reporter expressing cells, which in the forebrain exhibited the typical morphology of protoplasmic astrocytes (Figure 1). Co-labeling revealed that almost all S100beta (S100 calcium-binding protein B) positive cells in hippocampus and cerebral cortex expressed tdTomato (Figure 1, Table 1).\n\nEfficiency and specificity of inducible Cre mediated recombination in adult Aldh1l1-CreERT2 mice crossbred with Cre reporter ROSA26-Tdto or ROSA26-Eyfp. For each value shown (average percentage), cells were counted on eight confocal images and two sections for each of n=4 animals. Efficiency is expressed as percent Cre reporter positive cells of all S100beta labeled cells. 
Specificity is expressed as the percentage of all Cre reporter positive cells that lack immunolabeling for S100beta.\n\nFor comparison, when using a less sensitive EYFP Cre reporter line (Srinivas et al., 2001) in corresponding experiments, only two thirds of all S100beta positive cells in the cortex were also EYFP positive (Table 1). Thus, although both Cre reporter lines were generated as a knock-in into the endogenous ROSA26 locus, the recombination efficacy achieved is clearly different, in agreement with previous reports (Madisen et al., 2010; Srinivas et al., 2001). This finding illustrates the need to determine recombination efficiency individually for each combination of Cre allele and floxed target gene.\n\nTo characterize the identity of targeted cells functionally, we patched in total 18 tdTomato expressing cells in the cortex (Figure 1e). As expected, all cells displayed the electrophysiological signature of mature astrocytes (Grass et al., 2004; Schipke et al., 2001), with low input resistance (mean ± sd: 20.79 ± 9.26 MΩ; n=18) and negative resting membrane potential (-78.71 ± 3.22 mV).\n\nThe expression pattern of some astroglial marker proteins, such as GFAP (glial fibrillary acidic protein), differs between protoplasmic astrocytes in the cortex and fibrous astrocytes in white matter. We therefore assessed the efficacy of Cre recombination separately for the corpus callosum, fimbria, hippocampus and spinal cord. Again, in all these regions a large majority of astrocytes, as defined by S100beta or GFAP, expressed the Cre reporter, e.g. 85±1% in the corpus callosum and 94±2% in the fimbria (n=3 animals) (Figure 2, Table 1). 
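The efficiency and specificity values reported in Table 1 reduce to two ratios over cell counts. The small helper below makes the definitions concrete; the counts are made up for illustration and are not the paper's data:

```python
def efficiency_pct(reporter_and_s100b, s100b_total):
    """Table 1 'efficiency': % of S100beta-labeled cells that also express the Cre reporter."""
    return 100.0 * reporter_and_s100b / s100b_total

def specificity_pct(reporter_without_s100b, reporter_total):
    """Table 1 'specificity': % of all Cre-reporter-positive cells lacking S100beta labeling."""
    return 100.0 * reporter_without_s100b / reporter_total

# Made-up counts for one imaginary confocal image (not the paper's data):
s100b_total, double_pos, reporter_total = 100, 94, 97
print(efficiency_pct(double_pos, s100b_total))                       # 94.0
print(specificity_pct(reporter_total - double_pos, reporter_total))
```

With these toy counts, 94% of S100beta cells carry the reporter (efficiency), and about 3% of reporter-positive cells lack S100beta (the "specificity" measure as defined above).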
Co-labeling with GFAP was not used for cell counts because of the protein’s low abundance in cell bodies, which makes unequivocal quantification difficult.\n\nCo-immunolabeling of the astrocyte marker S100beta or GFAP with Cre reporter (direct tdTomato fluorescence, GFP anti EYFP or RFP anti tdTomato) in fimbria (a), hippocampus (b), cerebellum (c) and spinal cord (d) reveals almost complete overlap of the transgene with astrocytes. Scale, 50 µm.\n\nIn the cerebellum, a large fraction (89 ± 1%) of S100beta positive Bergmann glia cells expressed the Cre reporter EYFP (Figure 2c, Table 1). While 3.3 ± 0.3% of parvalbumin positive interneurons of the molecular layer expressed the tdTomato Cre reporter, none was double positive in corresponding experiments using the EYFP Cre reporter, confirming the higher sensitivity of the tdTomato reporter, albeit with a tendency for off-target recombination. Cre reporter expression was also observed in some neurons in the dentate gyrus and olfactory bulb, likely reflecting some recombination in adult neural stem cells in the subgranular and subventricular zones, followed by the migration of labeled progeny through the rostral migratory stream (Figure 1c).\n\nNext, we compared Aldh1l1-CreERT2 mediated recombination with the expression pattern of EGFP in Aldh1l1-Egfp transgenic mice, generated with a similar BAC based strategy (Heintz, 2004). As expected, reporter and EGFP expression was nearly identical in the cortex, confirming the high efficiency of CreERT2 mediated induction of the tdTomato reporter (Figure 3a).\n\na) Co-immunolabeling of the Cre reporter tdTomato (anti RFP) and EGFP (anti GFP) in triple transgenic mice (Aldh1l1-CreERT2*ROSA26-Tdto*Aldh1l1-Egfp) in cortical sections. Scale 50 µm. b) Direct fluorescence of the Cre-reporter tdTomato in spinal sections of Aldh1l1-CreERT2*ROSA26-Tdto and Slc1a3-CreERT2*ROSA26-Tdto transgenes. 
Scale 50 µm.\n\nFinally, in comparison with Slc1a3 (Glast)-CreERT2 (Mori et al., 2006), Aldh1l1-CreERT2 mediated recombination of the tdTomato reporter revealed nearly complete recombination of astrocytes in spinal cord white matter, whereas Slc1a3-CreERT2 mediated fluorescence appeared patchy (Figure 3b).\n\nNext, we tested the cell-type specificity of the Aldh1l1-CreERT2 transgene. Co-localization of tdTomato with markers for neurons (NSE, neuron specific enolase) or microglia (Iba1, ionized calcium binding adaptor molecule 1) was virtually absent (Figure 4, Table 2). However, we observed a small fraction of Cre reporter positive cells co-localizing with Olig2 (oligodendrocyte lineage transcription factor 2), a transcription factor found in all oligodendrocyte lineage cells, including oligodendrocyte precursor cells (Figure 4b). Similarly, in triple transgenic mice that additionally express EYFP under control of the endogenous NG2 (neural/glial antigen 2) promoter (Karram et al., 2008), we identified 3.4% of double labeled cells, presumably oligodendrocyte precursor cells based on their localization and morphology. However, co-localization with a marker of mature oligodendrocytes (CAII, carbonic anhydrase 2) was negligible 12d after tamoxifen injections, and did not increase in mice that were analyzed 27 weeks after recombination (tamoxifen induction at 16 weeks of age). This suggests that the small percentage of Aldh1l1-CreERT2 expressing NG2 glia does not give rise to oligodendrocytes. An independently generated line of Aldh1l1-CreERT2 mice (Srinivasan et al., 2016) shows some Olig1, Olig2, CNP and CAII but no NG2 expression, as determined by ribotag-dependent transcriptome profiling (Sanz et al., 2009). Whether this dissimilarity is caused by the different detection methods employed remains to be determined.\n\nSpecificity of inducible Cre mediated recombination in adult Aldh1l1-CreERT2*ROSA26-Tdto mice. 
For each value (average percentage), cells were counted on eight confocal images and two sections for each of n=4 animals. Specificity is expressed as the percentage of all cell-type-marker-positive cells that show Cre reporter expression.\n\n*analyzed in triple transgenic mice (Aldh1l1-CreERT2*ROSA26-Tdto*NG2-Eyfp)\n\na) Direct fluorescence of the Cre-reporter tdTomato and immunolabeling of neurons (NSE) and microglia (Iba1) on cortical sections. Scale, 50 µm. b) Direct fluorescence of the Cre-reporter tdTomato and immunolabeling of mature oligodendrocytes (CAII; scale, 50 µm) and oligodendroglia (Olig2; scale, 20 µm). c) Co-immunolabeling of the Cre reporter tdTomato (anti RFP) and EYFP (anti GFP) in triple transgenic mice (Aldh1l1-CreERT2*ROSA26-Tdto*NG2-Eyfp) revealing co-labeling in a small fraction of cells. Scale, 20 µm.\n\nAldh1l1 is an enzyme of folate metabolism that is expressed in various peripheral organs (Krupenko & Oleinik, 2002). In agreement, we detected Cre reporter expression in liver, kidney, lung, and small intestine by direct immunofluorescence and Western blotting (Figure 5). Cre reporter was not detected in heart muscle.\n\na) Direct fluorescence of the Cre-reporter in transgenic Aldh1l1-CreERT2*ROSA26-Tdto mice in liver, kidney, lung and intestine. Nuclei are shown in white (DAPI). Scale, 50 µm. b) Western blot detecting RFP (tdTomato) in lung, liver, kidney, small intestine, and heart, as indicated. GAPDH served as loading control.\n\n\nConclusion\n\nAldh1l1 is a general marker for astrocytes within the CNS, and our new line of tamoxifen-inducible Aldh1l1-CreERT2 transgenic mice can be used to genetically target astrocytes in the mature CNS with high efficiency and specificity. Provided that the corresponding genomic recombination in peripheral tissues is well tolerated, this line is suitable for studying gene functions in astroglial cells of adult mice. 
Aldh1l1-CreERT2 mice will be made freely available upon request to the corresponding author.\n\n\nMethods\n\nAll animal studies were performed at the Max Planck Institute of Experimental Medicine in compliance with the animal policies of the Max Planck Institute of Experimental Medicine and were approved by the German Federal State of Lower Saxony. All animals were housed in individually ventilated cages in groups of 3–5 mice per cage, kept in a room with controlled temperature (~23°C) under a 12 h light/dark cycle and had access to food and water ad libitum. In addition to the newly generated inducible Aldh1l1-CreERT2 mouse line (see below), we used BAC transgenic Aldh1l1-Egfp mice (Heintz, 2004), Slc1a3-CreERT2 mice (also called Glast-CreERT2; Mori et al., 2006), and NG2-EYFP knock-in mice (Karram et al., 2008). As Cre reporters we used the ROSA26 flox-stop-flox-Tdtomato line (ROSA26-Tdto; Madisen et al., 2010) and the ROSA26 flox-stop-flox-EYFP line (ROSA26-Eyfp; Srinivas et al., 2001). We used a total of 26 mice of both sexes at the age of 7–10 weeks unless otherwise stated (20–30 g body weight). All mice were analyzed as heterozygotes for the respective transgenic allele.\n\nBy PCR we introduced 50 bp of the Aldh1l1 intron 1/exon 2 sequence 5’ of the CreERT2 open reading frame. The bovine growth hormone poly A sequence (bGH pA), the frt (flippase recognition site) flanked kanamycin resistance cassette, and 50 bp of Aldh1l1 genomic sequence were inserted into an NheI site 3’ to the ERT2 sequence. The combined construct was introduced into exon 2 of the Aldh1l1 gene on the BAC RP23-7M9 (BACPAC Resources of the Children's Hospital Oakland Research Institute in Oakland), in frame with the start ATG, by homologous recombination in bacteria (EL250) as described (Lee et al., 2001). Excision of the resistance cassette was done by arabinose-induced flippase expression. 
The BAC insert was excised by NotI digestion and purified by size exclusion chromatography using a Sepharose column. Pronucleus injection gave rise to 5 transgenic founder mice. Genotyping was done by PCR of purified tail genomic DNA under standard conditions with the primers (5’-3’, final concentration 0.25 µM) CAACTCAGTCACCCTGTGCTC and TTCTTGCGAACCTCATCACTCG, amplifying the 3’ part of intron 1 of the Aldh1l1 gene to the 5’ part of the Cre open reading frame. Three out of five founder mice that were crossed with reporter mice showed expression in brain. Only one line (Aldh1l1-CreERT2 line 02) showed robust expression in forebrain astrocytic cells and minimal expression in other cell types of the brain.\n\nTamoxifen (Sigma, T5648) was dissolved in corn oil (Sigma, C8267) at a concentration of 7.5 mg/ml and injected intraperitoneally at 75 µg/g body weight on 5 consecutive days. Mice were analyzed 12 days (immunohistochemistry) and 20 days (electrophysiology) after tamoxifen induction.\n\nAfter perfusion with 4% paraformaldehyde (w/v) in phosphate buffered saline (PBS, pH 7.4) for 20 min, tissue specimens were either cut on a vibratome (40 µm) or cryoprotected in 30% sucrose/PBS, frozen and cut on a cryostat at -22°C (spinal cord 14 µm, peripheral organs 20 µm). Tissue sections were processed for immunohistochemistry by permeabilization in 0.4% Triton X-100 (Sigma, T8787) in PBS for 30 min, blocking in 4% horse serum (HS) and 0.2% Triton X-100 in PBS for 30 min, and incubation with primary antibody in 1% HS and 0.05% Triton X-100 in PBS at 4°C overnight or for 48 h (CAII and Olig2). Incubation with secondary antibodies and DAPI (4',6-diamidino-2-phenylindole) was in 1.5% HS in PBS for 2 h at room temperature, after which sections were mounted in AquaPolymount (Polysciences). 
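For reference, the tamoxifen regimen above (7.5 mg/ml stock, injected at 75 µg/g body weight) implies an injection volume of 10 µl of stock per gram of body weight. A short sketch of the arithmetic (the helper function is ours, for illustration, and not part of the original protocol):

```python
def injection_volume_ul(body_weight_g, dose_ug_per_g=75.0, stock_mg_per_ml=7.5):
    """Tamoxifen i.p. injection volume in microlitres.

    dose [µg] = body weight [g] * dose [µg/g]
    volume [µl] = dose [µg] / stock concentration [µg/µl]
    (a 7.5 mg/ml stock is numerically 7.5 µg/µl)
    """
    dose_ug = body_weight_g * dose_ug_per_g
    return dose_ug / stock_mg_per_ml  # mg/ml equals µg/µl numerically

# A 25 g mouse receives 1875 µg of tamoxifen in 250 µl of the corn-oil stock:
print(injection_volume_ul(25.0))  # 250.0
```

That is, each of the 5 daily injections delivers 10 µl of stock per gram of body weight.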
Specimens were analyzed by epifluorescence microscopy using a Plan-Apochromat 20x/0.8 objective (Zeiss Axio Observer.Z1 with ApoTome.2) and the ZEN 2 software (Zeiss). Confocal laser scanning microscopy was performed on a Leica SP2 (HC PL APO lambda blue 20x/0.7 objective) or a Leica SP5 (HCX PL APO CS 20x/0.7, HCX PL APO lambda blue 40x/1.25, and HCX PL APO CS 100x/1.44 objectives) using the Leica Confocal Software (Leica Microsystems). Images were processed with NIH ImageJ and Adobe Photoshop CS5.1 software. For quantification, cells were counted on eight confocal images for each of the n=4 animals.\n\nTissue was lysed in sucrose buffer containing 320 mM sucrose, 10 mM Tris-HCl (pH 7.4), 1 mM NaHCO3, 1 mM MgCl2, 1% Triton X-100, 2% lithium dodecyl sulfate, 0.5% sodium deoxycholate, and protease and phosphatase inhibitors (cOmplete™, PhosSTOP™, Roche). 25 µg (brain tissue) and 20 µg (lung, liver, kidney, small intestine, heart) of protein lysates were resolved on 12% SDS-polyacrylamide gels under denaturing conditions and electro-transferred to PVDF membranes (Hybond P; GE Healthcare). Blocking was performed for 1 h in Tris buffered saline / 0.05% Tween 20 (TBST) containing 5% milk powder, and membranes were incubated in primary antibody at 4°C overnight in the same solution. Membranes were washed in TBST prior to incubation with appropriate horseradish peroxidase (HRP)-conjugated secondary antibodies (1:5000, Dianova, Hamburg) for 1 h. Blots were developed by enhanced chemiluminescence (Pierce, Rockford) and scanned using the ChemoCam Imager (Intas Science Imaging Instruments, Goettingen).\n\nThe following primary antibodies were used in this study: S100beta (rabbit monoclonal, 1:200, Abcam, ab52642), NSE (rabbit polyclonal, 1:500, Chemicon, AB951), CAII (polyclonal rabbit, 1:100, generous gift from S. 
Ghandour), GFAP (monoclonal mouse, 1:200, Chemicon, MAB3402), Parvalbumin (polyclonal rabbit, 1:1000, Swant, PV-28), Iba1 (rabbit polyclonal, 1:1000, Wako, 019-19741), Olig2 (polyclonal rabbit, 1:100, generous gift from Charles Stiles and John Alberta), RFP (polyclonal rabbit, 1:500 (immunostaining) or 1:1500 (immunoblotting), Rockland, 600-401-379), GAPDH (monoclonal mouse, 1:2500, Stressgen, CSA-335), and GFP (polyclonal goat, 1:500, Rockland, 600-101-215). We used Alexa Fluor 488-conjugated (1:2000, Invitrogen, A21206, 21202, A11055), Alexa Fluor 555-conjugated (1:2000, Invitrogen, A31572) and DyLight 633-conjugated (1:500, YO Proteins 356) secondary antibodies.\n\nAcute forebrain slices from 8-week-old Aldh1l1-CreERT2*ROSA26-Tdto (n=3) mice were prepared as described previously (Schnell et al., 2015). Briefly, after deep isoflurane narcosis, animals were decapitated, and the forebrain was prepared and placed in ice-cooled, carbogen-saturated (95% O2, 5% CO2) artificial cerebrospinal fluid (aCSF; in mM: 118 NaCl, 3 KCl, 1.5 CaCl2, 1 MgCl2, 1 NaH2PO4, 25 NaHCO3, and 30 D-glucose; 330 mosmol/l, pH 7.4). Sagittal sections (300 µm) were cut on a vibroslicer (VT1200 S, Leica) and stored in aCSF at 35–36°C for at least 30 min. Subsequently, slices were transferred to the recording chamber and kept submerged by a platinum grid with nylon fibers for mechanical stabilization. The chamber was mounted on an upright microscope (Axioscope FS, Zeiss, Germany; 40x objective) and continuously perfused with aCSF at room temperature at a flow rate of 5–10 ml/min. Astrocytes were identified by their red fluorescence under epifluorescence illumination (white LED, Lumencor Sola SE II) using a tdTomato-optimized filter set (excitation 560/40 nm; dichroic mirror 595 nm; emission 645/75 nm; AHF Analysentechnik). For documentation, images of recorded tdTomato-expressing cells were taken with a CCD camera (Sensicam, PCO) and Imaging Workbench 6.0 software (Indec Biosystems). 
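The voltage-step characterization used in the whole-cell recordings that follow (holding at –80 mV, with relative steps of –80 to –10 mV and +10 to +110 mV in 10 mV increments) can be written down as a short command-value generator. This is an illustrative sketch of the step protocol as we read it, not code from the study:

```python
HOLD_MV = -80  # holding potential (mV)

def step_commands(hold_mv=HOLD_MV):
    """Absolute command potentials for the voltage-step protocol (our interpretation:
    relative steps of -80..-10 mV and +10..+110 mV; the 0 mV step is omitted)."""
    deltas = list(range(-80, 0, 10)) + list(range(10, 120, 10))  # relative amplitudes
    return [hold_mv + d for d in deltas]  # 19 levels spanning -160 mV to +30 mV
```

Enumerating the commands this way makes it easy to check that the protocol samples the membrane symmetrically enough around rest to reveal the passive (linear) current-voltage relation expected of astrocytes.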
Whole-cell voltage-clamp recordings were obtained with a MultiClamp 700B amplifier (Molecular Devices). Patch electrodes were pulled from borosilicate glass capillaries (Biomedical Instruments, Zülpich, Germany) using a horizontal pipette puller (Zeitz-Instrumente, Germany). Electrodes were filled with (in mM) 125 KCl, 1 CaCl2, 2 MgCl2, 4 Na2ATP, 10 EGTA, 10 HEPES (pH adjusted to 7.2 with KOH), resulting in tip resistances of 2–6 MΩ. Currents were low-pass filtered at 3 kHz, sampled at 10 kHz, recorded with pClamp 10 software (Molecular Devices), and stored for off-line analysis. Astrocytes were voltage-clamped at –80 mV and characterized by a voltage-step protocol: cells were hyperpolarized by –80 to –10 mV and depolarized by +10 to +110 mV, in 10 mV increments.\n\n\nData availability\n\nDataset 1: Raw data generated or analyzed during the present study in a zipped file. DOI, 10.5256/f1000research.10509.d147854 (Winchenbach et al., 2016).", "appendix": "Author contributions\n\n\n\nJW performed most of the experiments and analyzed data. TD, SAB and SKS were involved in tissue preparation, immunoblotting and histology. Electrophysiological recordings were done together with SH. KAN initiated this project and edited the manuscript. GS designed experiments, performed analyses, and wrote the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by the Deutsche Forschungsgemeinschaft (SFB/TR43 to KAN, SPP1757 SA 2014/2-1 to GS), by an ERC advanced grant to KAN, and by the CNMPB to SH.\n\n\nAcknowledgements\n\nWe thank Carolin Böhler, Ulrike Bode, and Jana Kroll for very valuable technical assistance, M.H. 
Schwab for advice on BAC cloning, and Said Ghandour, Charles Stiles and John Alberta for antibodies.\n\n\nReferences\n\nAnthony TE, Heintz N: The folate metabolic enzyme ALDH1L1 is restricted to the midline of the early CNS, suggesting a role in human neural tube defects. J Comp Neurol. 2007; 500(2): 368–383.\n\nCahoy JD, Emery B, Kaushal A, et al.: A transcriptome database for astrocytes, neurons, and oligodendrocytes: a new resource for understanding brain development and function. J Neurosci. 2008; 28(1): 264–278.\n\nChow LM, Zhang J, Baker SJ: Inducible Cre recombinase activity in mouse mature astrocytes and adult neural precursor cells. Transgenic Res. 2008; 17(5): 919–928.\n\nChristie KJ, Emery B, Denham M, et al.: Transcriptional regulation and specification of neural stem cells. Adv Exp Med Biol. 2013; 786: 129–155.\n\nFoo LC, Dougherty JD: Aldh1L1 is expressed by postnatal neural stem cells in vivo. Glia. 2013; 61(9): 1533–1541.\n\nGanat YM, Silbereis J, Cave C, et al.: Early postnatal astroglial cells produce multilineage precursors and neural stem cells in vivo. J Neurosci. 2006; 26(33): 8609–8621.\n\nGrass D, Pawlowski PG, Hirrlinger J, et al.: Diversity of functional astroglial properties in the respiratory network. J Neurosci. 2004; 24(6): 1358–1365.\n\nHeintz N: Gene expression nervous system atlas (GENSAT). Nat Neurosci. 2004; 7(5): 483.\n\nHirrlinger PG, Scheller A, Braun C, et al.: Temporal control of gene recombination in astrocytes by transgenic expression of the tamoxifen-inducible DNA recombinase variant CreERT2. Glia. 2006; 54(1): 11–20. 
Karram K, Goebbels S, Schwab M, et al.: NG2-expressing cells in the nervous system revealed by the NG2-EYFP-knockin mouse. Genesis. 2008; 46(12): 743–757.\n\nKrupenko SA, Oleinik NV: 10-formyltetrahydrofolate dehydrogenase, one of the major folate enzymes, is down-regulated in tumor tissues and possesses suppressor effects on cancer cells. Cell Growth Differ. 2002; 13(5): 227–236.\n\nKutzbach C, Stokstad EL: Mammalian methylenetetrahydrofolate reductase. Partial purification, properties, and inhibition by S-adenosylmethionine. Biochim Biophys Acta. 1971; 250(3): 459–477.\n\nLee EC, Yu D, Martinez de Velasco J, et al.: A highly efficient Escherichia coli-based chromosome engineering system adapted for recombinogenic targeting and subcloning of BAC DNA. Genomics. 2001; 73(1): 56–65.\n\nMadisen L, Zwingman TA, Sunkin SM, et al.: A robust and high-throughput Cre reporting and characterization system for the whole mouse brain. Nat Neurosci. 2010; 13(1): 133–140.\n\nMori T, Tanaka K, Buffo A, et al.: Inducible gene deletion in astroglia and radial glia--a valuable tool for functional and lineage analysis. Glia. 2006; 54(1): 21–34.\n\nSanz E, Yang L, Su T, et al.: Cell-type-specific isolation of ribosome-associated mRNA from complex tissues. Proc Natl Acad Sci U S A. 2009; 106(33): 13939–13944.\n\nSauer B: Site-specific recombination: developments and applications. Curr Opin Biotechnol. 1994; 5(5): 521–527.\n\nSchipke CG, Ohlemeyer C, Matyash M, et al.: Astrocytes of the mouse neocortex express functional N-methyl-D-aspartate receptors. FASEB J. 2001; 15(7): 1270–1272. 
Schnell C, Shahmoradi A, Wichert SP, et al.: The multispecific thyroid hormone transporter OATP1C1 mediates cell-specific sulforhodamine 101-labeling of hippocampal astrocytes. Brain Struct Funct. 2015; 220(1): 193–203.\n\nSlezak M, Göritz C, Niemiec A, et al.: Transgenic mice for conditional gene manipulation in astroglial cells. Glia. 2007; 55(15): 1565–1576.\n\nSrinivas S, Watanabe T, Lin CS, et al.: Cre reporter strains produced by targeted insertion of EYFP and ECFP into the ROSA26 locus. BMC Dev Biol. 2001; 1: 4.\n\nSrinivasan R, Lu TY, Chai H, et al.: New Transgenic Mouse Lines for Selectively Targeting Astrocytes and Studying Calcium Signals in Astrocyte Processes In Situ and In Vivo. Neuron. 2016; pii: S0896-6273(16)30898-4.\n\nWinchenbach J, Düking T, Berghoff SA, et al.: Dataset 1 in: Inducible targeting of CNS astrocytes in Aldh1l1-CreERT2 BAC transgenic mice. F1000Research. 2016.\n\nYang M, Roman K, Chen DF, et al.: GLT-1 overexpression attenuates bladder nociception and local/cross-organ sensitization of bladder nociception. Am J Physiol Renal Physiol. 2011; 300(6): F1353–1359.\n\nZhang Y, Chen K, Sloan SA, et al.: An RNA-sequencing transcriptome and splicing database of glia, neurons, and vascular cells of the cerebral cortex. J Neurosci. 2014; 34(36): 11929–11947.
[ { "id": "19456", "date": "18 Jan 2017", "name": "David H. Rowitch", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis manuscript by Winchenbach et al. describes the generation and characterization of a new transgenic tool to investigate astrocyte biology. The field is currently limited by Cre recombinase driver transgenic reagents that show poor coverage and/or temporal control during central nervous system development. The current paper uses Aldh1l1-CreERT2 BAC transgenic mice to address limitations in the field, developing a useful new tool. The data presented are technically sound and include crosses with two conditional reporter alleles, single-cell patch-clamp electrophysiological analysis to confirm astrocyte features, and inclusionary and exclusionary immunohistochemistry. 
The conclusions are supported by the data and also highlight the utility of this new transgenic allele compared with a commonly used transgenic driver mouse (GLAST-Cre), such that I think this new mouse line will be of significant interest to the glial biology community.", "responses": [] }, { "id": "18882", "date": "23 Jan 2017", "name": "Andrea Volterra", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe paper by Winchenbach et al. describes a newly generated astrocyte-specific Aldh1l1-CreERT2 mouse line resulting in Cre-mediated recombination after tamoxifen injection.\nThis new mouse line addresses the crucial need in the field for a conditional line in which Cre is both astrocyte-specific and present in the absolute majority of astrocytes homogeneously throughout the CNS. Prior models have reportedly suffered from either lower astrocyte specificity (such as certain Glast-CreERT lines) or lower levels of astrocytic expression, with significant differences depending on the brain area (such as in GFAP-CreERT2 lines). The authors commit to making their new mouse line freely available on request.\nOn balance, the new Aldh1l1-CreERT2 mouse line from Nave/Saher, together with a similar line independently generated and recently published by the Khakh lab 1, has large potential to advance astroglial research. Both labs have done a great service to the field by making their mice freely available. 
Consequently, these are likely to be highly useful tools for generating and studying astrocyte-specific knockouts and even knock-ins.\n\nDetailed comments:\nIn this paper the authors used 5 tamoxifen injections to induce recombination in Aldh1l1-CreERT2 animals crossbred with either Rosa26-tdTomato or -YFP mice, and quantified the % recombination in S100beta-positive cells (putative astrocytes) across different brain regions after 7 days by immunohistochemistry. A subset of cells was additionally examined after 20 days by electrophysiology. Other controls were also performed.\nCo-staining with neuronal markers shows very limited overlap with neurons; recombination in a small (3%) subset of parvalbumin-positive cells of the cerebellum is judged to be an artifact of the tdTomato reporter line, as it was absent in the YFP line. Neuronal expression in areas of adult neurogenesis is also reported, similar to what was already observed in prior astrocyte lines. Peripheral organs were also examined, and the authors report expected recombination in some of them.\nThe manuscript is technically sound and of high scientific quality. The paper provides a thorough characterization of the new line, which has the potential to be highly useful for the field.\nBelow we list some minor suggestions for improving the paper, as well as some general points applicable to the entire field.\nIssues to be addressed for Aldh1l1-CreERT2 mice:\nThe authors could add more information regarding image quantification (Tables 1 & 2). Do “eight confocal pictures” mean single-plane images of different areas of the slice, or stacks? Of what thickness? What was the zoom, axial/lateral resolution, field of view?\n\nThe authors provide a helpful electrophysiological confirmation of reporter-expressing cells as passive astrocytes. 
As a very minor methodological point, they could indicate whether they used liquid junction potential correction.\n\nDefinitive marker for “astrocytes.” Can the authors discuss why they think that S100beta is the best marker of astrocytes? This is relevant in particular for correctly interpreting the existing S100beta-negative population as astrocytic or non-astrocytic: indeed, the authors report that about 4–19% of S100beta-negative cells also express reporters (Table 1). What are those cells?\n\nRelated to the previous point, the authors have performed GFAP co-staining but do not currently report the % co-labelling due to difficulty in quantification. An image from spinal cord is shown in Fig. 2d. However, some summary statement, even based on a limited number of manually analyzed cells, would be helpful. Do they see some of the S100beta-negative (reporter-positive) cells also positive for GFAP?\n\nThe authors report that about 6% of cells double-label for markers of other cell types (e.g. oligodendrocyte precursors or microglia) in Table 2. Do these account for the 4–19% of S100beta-negative/reporter-positive cells in Table 1? Importantly, neuronal co-labelling is reported as nonexistent (with NSE), at least in the cortex. Is this the same in other areas?\n\nWith the publication of the present line, there are now two Aldh1l1-CreERT2 mouse lines openly available on the “market.” It remains for the field to determine which of the two lines has the most reliable and therefore useful profile, meaning the highest astrocyte specificity as well as the highest recombination efficiency in astrocytes. As the authors suggest, there may already be some differences regarding expression in e.g. NG2 cells, to be determined in follow-up studies.\n\nPreserving a “reference” strain of mice (e.g. via frozen sperm) may be helpful to avoid genetic drift and the future emergence of colonies with different properties. 
Ostensibly, this mechanism may explain conflicting results historically reported for other “astrocytic” mice in the field such as dnSNARE (see e.g. Fujita et al. J Neurosci 2014, reviewed in Bazargani and Attwell, Nat. Neurosci. 2016)\n\n“Titration” curve for tamoxifen induction. One of the strengths of this mouse line is the very high recombination efficiency across diverse astrocyte populations: ~90% of likely astrocytes after just five Tamoxifen injections. The authors emphasize the need to determine the % recombination for each individual line. Additionally, it would be useful in the future to know how differences in tamoxifen treatment regime correspond to different levels of recombination for some common lines (e.g. Rosa26-tdTomato). For instance, is a single injection sufficient to cause recombination in the bulk of astrocytes? Is it also possible to achieve sparser expression of the reporter (for single-cell imaging studies) with a reduced Tamoxifen administration (e.g. single-day)? Can the % recombination be raised over 90%, and if so, after how many injections? Obviously, the ultimate % recombination will depend also on a chosen reporter line (as apparent from Table 1), but more preliminary information for common lines would already be helpful.", "responses": [] }, { "id": "18885", "date": "20 Feb 2017", "name": "Jeffrey D. 
Rothstein", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis manuscript describes the generation of an Aldh1l1-CreERT2 mouse model to better study astrocytes in vivo. Due to the lack of Cre-driven astrocyte-specific rodent models, manipulating and studying astrocytes has been exceptionally challenging, and this new mouse model may open up many opportunities for scientists studying astrocytes. However, for the staining of astrocytes during cell counts and co-labeling, S100beta seems to label only a fraction of all astrocytes compared to reporter mouse models such as BAC-Glt1-eGFP or Aldh1l1-eGFP; an antibody that detects all astrocytes, or antibody combinations against Glt1, Aldh1l1, and/or Acsbg1, would be best for quantifying double-positive astrocytes. Ideally, one would generate a triple transgenic with Aldh1l1-CreERT2, Rosa26-tdTomato, and BAC-Glt1-eGFP to obtain more accurate cell counts and co-labeling, although this experiment would take considerable time to complete and might be a fine follow-up study. Overall, this study will provide a greatly needed tool for glial biology. Provided these staining approaches are considered, we approve this manuscript for indexing.", "responses": [] } ]
1
https://f1000research.com/articles/5-2934
https://f1000research.com/articles/5-2931/v1
29 Dec 16
{ "type": "Research Article", "title": "Lightning Injury is a disaster in Bangladesh? - Exploring its magnitude and public health needs", "authors": [ "Animesh Biswas", "Koustuv Dalal", "Jahangir Hossain", "Kamran Ul Baset", "Fazlur Rahman", "Saidur Rahman Mashreky" ], "abstract": "Background: Lightning injury is a global public health issue. Low- and middle-income countries in the tropical and subtropical regions of the world are most affected by lightning. Bangladesh is one of the countries at particular risk, with a high number of devastating lightning injuries in recent years, causing high mortality and morbidity. The exact magnitude of the problem is still unknown, and therefore this study investigates the epidemiology of lightning injuries in Bangladesh, using a nationally representative sample. Methods: A mixed-method design was used. The study is based on results from a nationwide cross-sectional survey performed in 2003 in twelve randomly selected districts. In the survey, a total of 819,429 respondents from 171,336 households were interviewed face-to-face. In addition, qualitative information was obtained by reviewing national and international newspaper reports of lightning injuries sustained in Bangladesh between 13 and 15 May 2016. Results: The annual mortality rate was 3.661 (95% CI 0.9313–9.964) per 1,000,000 people. The overall incidence of lightning injury was 19.89/100,000 people. Among the victims, 60.12% (n=98) were males and 39.87% (n=65) were females. Males were particularly vulnerable, with a 1.46 times increased risk compared with females (RR 1.46, 95% CI 1.06–1.99). Rural populations were more vulnerable, with an 8.73 times higher risk than urban populations (RR 8.73, 95% CI 5.13–14.86). About 43% of injuries occurred between 12 noon and 6 pm. The newspapers reported 81 deaths during 2 days of electric storms in 2016. 
Lightning has been declared a natural disaster in Bangladesh. Conclusions: The current study indicates that lightning injuries are a public health problem in Bangladesh. The study recommends further investigations to develop interventions to reduce lightning injuries, mortality and the related burden in Bangladesh.", "keywords": [ "Lightning injury", "incidence", "disaster", "Bangladesh" ], "content": "Introduction\n\nLightning injury is a global public health problem and one of the leading causes of weather-related death, after tornadoes, flash floods and hurricanes. The incidence rates of lightning injury are probably higher than recorded, since there is no referral and information centre where data are collected and stored1. Lightning strikes the earth more than 100 times each second, totalling 8 million times every day. An estimated 50,000 thunderstorms occur each day, causing fires and injuries2. Worldwide, mortality from lightning is estimated at between 0.2 and 1.7 deaths/1,000,000 people, affecting mainly the young and people who work outdoors3,4. Lightning injuries peak during the summer months. However, in some countries such as India and Vietnam, lightning mostly occurs during the rainy season5,6. Lightning injuries and related deaths mostly affect individuals who work outside or participate in outdoor recreational activities. Worldwide, men are five times more likely than women to be struck by lightning3,7. The most vulnerable age for lightning injury is estimated to be between 10 and 29 years3.\n\nLightning injuries cause high mortality and significant long-term morbidity. A previous study reports that in Bangladesh, the incidence of lightning fatalities is 0.9 per 1,000,000 people per year, which is higher than in high-income countries6. In 2016, the country experienced a lightning event with multiple strikes that caused 81 deaths, a particularly high toll. 
However, underreporting of lightning strikes is common, as the majority of lightning occurs in rural areas. People typically seek treatment from the local village doctor, pharmacist or traditional healer rather than from government health facilities, unless the community health provider fails to manage the injuries. Moreover, only a few cases are reported to the police, and government hospital records contain information only on those who seek treatment. Therefore, studying the epidemiology of lightning injuries in Bangladesh is very important. This study explores the epidemiology of lightning injury, using data from a nationwide survey and newspaper reports on lightning deaths on 13–15 May 2016.\n\nMethods\n\nThe study used a mixed-method design drawing on both quantitative and qualitative data. A cross-sectional study was conducted to understand the epidemiology of lightning injuries in Bangladesh (see below). In addition, we searched two of the most popular Bengali and another three national, English-language newspapers in Bangladesh. Furthermore, lightning news reported in another three international English-language daily newspapers and on three international media websites was retrieved and reviewed (Table 1). Qualitative data related to lightning injury in Bangladesh were collected to explore the magnitude of lightning injuries during 13–15 May 2016.\n\nA large cross-sectional study was conducted during January to December 2003 in twelve randomly selected districts in Bangladesh and also in Dhaka Metropolitan City. Multi-stage cluster sampling was employed to select 171,366 households (88,380 in rural areas and 45,183 in urban areas in the twelve districts, and 37,803 in Dhaka Metropolitan City). The current study is part of this larger study. Each district consists of several upazilas (subdistricts). From each district, one upazila was chosen. The upazilas contain smaller units called “unions”. 
A union is the lowest administrative unit, with a population of approximately 20,000. In this study, two unions from each of the upazilas were selected. Similarly, in urban settings, the mohalla is the lowest unit of the City Corporation. Systematic random sampling was performed to select a certain number of households in the selected mohallas.\n\nPrior to data collection, 48 trained interviewers visited the selected households and explained the study objectives and ethical issues. They then conducted the questionnaire survey. In total, 819,429 people of all age groups from 171,336 households in those twelve districts were selected and interviewed face-to-face.\n\nPersons who were injured by lightning and received treatment, or who were unable to perform their usual activities for at least 3 days because of lightning injury, were enrolled in the study. We also interviewed the next of kin of people who had died from lightning injuries. About 2.7% of households could not be interviewed because of the unavailability of respondents. A total of 166,766 households were included in the study. The methodology has been described elsewhere8,9.\n\nPopular daily national Bangladeshi and English-language electronic newspapers were searched for reports on lightning injuries. Two Bengali and three English-language national newspapers that are widely read in Bangladesh were selected. In addition, we searched international online news sites. Three international English-language newspapers and three purposively selected international online news sites were also included in the search.\n\nAs previously mentioned, a high number of lightning events had been reported in Bangladesh for the period 13–15 May 2016. Therefore, we reviewed newspapers to find information on these events. Two researchers collected the relevant information from the selected sources. 
The next morning, two different researchers sat together and read the news headlines to select relevant articles and eliminate duplicate news. They further read all collected news and then made a further selection pertaining to the study aims. The researchers, who were bilingual, also translated the Bengali news into English.\n\nOnly articles relevant to the aims of this study have been included in this study. Each newspaper article constitutes a unit of analysis. Two qualitative researchers conducted content analysis. To determine the overall content and framing of the article, the researchers read, re-read and annotated the news articles by attaching key words to segments of text10.\n\nStandard descriptive statistics using means, standard deviation (SD) and proportions were used to analyse the characteristics of lightning victims. The gender, age, and place of residence of cases of lightning injuries were determined. Cases were categorized into eight age groups. The yearly incidence of lightning injuries was estimated from the occurrence of lightning morbidity in 6 months, multiplied by 2. The reason was that data were collected with a recall period of 6 months. Rates were calculated and 95% confidence intervals (CIs) computed. We estimated the relative risks (RRs) in relation to different age groups, place of residence, and gender. We used cross-tables and EPI-Info 6 software.\n\nThe current study formed part of a larger study titled “Bangladesh Health and Injury Survey (BHIS)”. The study has received ethics approval from the Ethics Committee of the Institute of Child and Mother Health, Dhaka. Participants were informed about the benefits and objectives of the study. Written consent was obtained from each head of household before proceeding with the interviews. The participants were told that they had the right to withdraw from the study at any time and the study objective was explained to them. 
Data collectors were trained in ethical issues.\n\nMedia reports were publicly available. Information from the media was presented anonymously, without any direct quotation from the media reports, and the study did not use any personal identifying information related to the media reports.\n\n\nResults\n\nIncidence. A total of 163 people with lightning injuries were identified, 98 males (60.12%) and 65 females (39.8%). Of them, 160 (98.15%) had suffered non-fatal injury and three (1.84%) had died. The annual death rate was 3.661 (95% CI 0.9313–9.964) per 1,000,000 people. The overall incidence of lightning injury was 19.89 per 100,000 people. Males were more vulnerable, with a 1.46 times higher risk of being hit by lightning compared with females (RR 1.46, 95% CI 1.06–1.99). The mean age of the victims was 26.2 (SD±21.83) years (range 2–75 years). Altogether 84 (51.5%) of those struck by lightning were children. The highest incidence of injuries was found in the age group of 50 and above (Figure 1).\n\nMagnitude of the injury. The majority of victims were of poor socioeconomic status, 86.7% (n=139), with a monthly income of <US$100. Students (31.2%), agricultural workers (17.9%) and housewives (14.5%) were the main victims of lightning injury. Among the victims, 90.80% (n=148) were from rural areas and 9.20% (n=15) from urban areas. People from rural areas were more vulnerable, with an 8.73 times increased risk compared with urban populations (RR 8.73; 95% CI 5.13–14.86).\n\nAbout 36% (n=59) of the injuries took place between 6 am and 12 noon, while 43.2% (n=71) occurred between 12 noon and 6 pm, and 18% (n=29) from 6 pm to midnight. A total of 31.7% of victims were outside at work when lightning struck; 24.6% were travelling when they were hit by lightning. 
Home courtyards were the most common places (65.1%) for lightning strikes, followed by roads and footpaths (26%).\n\nThe leg was the most common site of injury (63.5%, n=97), followed by the hand (17.4%, n=27) and the abdomen (10.5%, n=16). Among the casualties, 95% (n=155) sought treatment from health care providers at different levels, with the majority (83.1%, n=134) seeking treatment from the village doctor or traditional healer. Only 7.3% (n=8) received treatment at a health facility. Among the injured, 41.8% (n=68) were unable to perform regular activities for 1–6 days, while 19.1% (n=31) were unable to do so for ≥1 week. Only 1% (n=2) of the injured reported the incident to the police (Table 2).\n\nBangladesh has had a high incidence of preventable deaths from lightning for decades. Data on the period 2005–2016 showed that the highest number of deaths in a single day was in May 2016, when lightning killed 81 people in 26 districts, mostly in rural north and central Bangladesh11–13. By comparison, lightning deaths between 2005 and 2008 totalled 41. Over the next few years, the number of deaths progressively increased. The English-language Bangladeshi newspaper Daily Star reports that from 2010 to 2016 a total of 645 people died in thunderstorms14. Another source, the Foundation for Disaster Forum in Bangladesh, reports 1,390 deaths due to lightning for the period 2010–201515 (Figure 2). Other newspapers have reported that an average of 300 people die every year in Bangladesh due to lightning; however, even this is likely an underestimate11,16,17.\n\nAccording to the newspaper reports, the youngest person who died from lightning was 13 years old, and the oldest lightning victim was 70 years old. In most cases, lightning struck outdoors in a rural area while the person was performing daily household work or other usual activities. 
One newspaper reported that 51% of the fatalities were farmers who were working in the fields.\n\nAccording to National Geographic, lightning storms in Bangladesh occur mostly in May and in the afternoon, when the temperature is high. The fact that the country is densely populated contributes to the high incidence of human lightning strikes. Other sources also mentioned increased deforestation, and the felling of tall trees, as a contributing factor. In addition, the use of metal objects such as mobile phones, and proximity to structures such as cell phone towers or electrical power distribution towers, can result in lightning deaths. It was also noted that in rural areas, taller trees usually attract lightning flashes. Internationally, scientists have warned that an increase in lightning storms may occur as part of climate change and global warming: global warming is causing more water evaporation, increasing cloud formation, the amount of rainfall and the potential for lightning storms18–20.\n\nAfter the fatal lightning event in May 2016, the Bangladesh government declared lightning a disaster, adding lightning injuries to the country’s list of official types of natural disasters, which includes droughts, floods, cyclones, storm surges, riverbank erosion, and earthquakes20,21. In 2016 the government pledged to compensate lightning strike victims and/or their families12,22.\n\n\nDiscussion\n\nLightning injury has been identified as one of the major causes of weather-related deaths in Bangladesh. In response to the event in 2016, when 81 lives were lost to lightning in just 2 days, the government of Bangladesh declared lightning a natural disaster21,23. The magnitude of the problem has grown worse in recent years. According to the current study, the annual incidence is 19.89 per 100,000 population. The majority of victims were males from rural communities, and most injuries were incurred in the afternoon.
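The incidence and relative-risk figures reported above follow standard epidemiological formulas. As an illustration only, the sketch below computes a crude incidence rate and a relative risk with a Wald 95% CI on the log scale; the group denominators used here are hypothetical, since the survey's actual male/female population split is not reproduced in this article.

```python
import math

def incidence_per_100k(cases, population):
    """Crude incidence rate per 100,000 population."""
    return cases / population * 100_000

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk with a Wald 95% CI computed on the log scale.

    a, b   -- number injured in the exposed / unexposed group
    n1, n2 -- population at risk in each group
    """
    rr = (a / n1) / (b / n2)
    # Standard error of ln(RR) for cohort-style counts
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical equal denominators, for illustration only:
rr, lo, hi = relative_risk(98, 410_000, 65, 410_000)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With real group denominators from the survey, the same function would reproduce the reported RR of 1.46 for males versus females.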
A labour-intensive agricultural economy, poor infrastructure, illiteracy, and a tropical climate play a role in the higher rates of lightning-related deaths and injuries in countries such as South Africa, Malaysia, India and Bangladesh19. For example, one study reports 6.3 deaths per 1,000,000 inhabitants in a region of the Highveld, South Africa, mainly populated by the urban poor7.\n\nBy contrast, a decline in lightning fatalities in recent decades has been reported from developed countries24–27, for reasons including: development of medical responses and treatments; education of the public; meteorological warnings; and improved building codes for lightning protection. The latter include housing structures with grounded plumbing, electrically conducting materials, improved fire resistance of homes, and lightning rods28.\n\nA previous study reports an annual death rate due to lightning in Bangladesh of 0.9 per 1,000,000 population6. Our study found an annual death rate of 3.66 (95% CI 0.93–9.96) per 1,000,000 people. However, these figures are probably underestimates because of a poor vital registration system. Lightning deaths are not currently captured in the health system or in the police recording system, either of which would otherwise be a reliable source for public health researchers26. In the United States the number of deaths due to lightning has declined significantly, but the challenge of accurately capturing the number of deaths remains25.\n\nWe found that males are most affected by lightning injuries. The majority of victims are from rural communities and were struck in the summertime, in the afternoon. These results are consistent with previous studies3,4,29. People living in rural communities in Bangladesh hold a number of misconceptions, including religious myths and superstitions, and there is social stigma attached to lightning injuries5,24,30,31.
An initiative has already been taken in Africa to raise public awareness of preventive measures against lightning injury and so reduce the number of lightning-related deaths and injuries each year25.\n\n\nConclusion\n\nLightning injuries are important to study epidemiologically. In Bangladesh, lightning has become a public health issue that requires urgent action. The country is becoming increasingly urbanized and has a very high population density; however, rural communities still make up about 70% of the total population. A public lightning awareness programme and the eradication of traditional or religious myths, along with other preventive measures such as installing lightning protection systems, can reduce the fatality rate. Multi-stakeholder involvement is required at this stage, including medical doctors, public health professionals, engineers, meteorologists and political leaders, to identify feasible and effective solutions for preventing lightning-related deaths. Moreover, it is also important to establish an emergency pre-hospital care system for lightning victims in rural communities, as well as a comprehensive vital registration system that records each death, to inform future preventive action.\n\n\nData availability\n\nData are stored at the Department of Public Health Sciences and Injury Prevention of CIPRB. Data sharing is subject to the ethical committee’s further permission due to sensitivity and other restrictions. Data can be made available upon detailed request to the corresponding author, who will then communicate directly with the ethical committee and act as liaison between it and the third party wishing to access the data.", "appendix": "Author contributions\n\n\n\nA.B., J.H., F.R. and S.R.M. conceived and designed the study. A.B. and J.H. analysed the findings. A.B., K.D., J.H., K.B., F.R. and S.R.M. wrote the paper.
KD critically reviewed the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe BHIS was funded by UNICEF, Bangladesh.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to UNICEF, Bangladesh, who funded the Bangladesh Health and Injury Survey (BHIS). In particular, we would like to thank Steve Wills of the Royal National Lifeboat Institution (RNLI), UK, for assistance with English editing.\n\n\nSupplementary material\n\nMorbidity questionnaire, mortality questionnaire, and a list of newspaper articles used in this study.\n\nClick here to access the data.\n\n\nReferences\n\nGarcía Gutiérrez JJ, Meléndez J, Torrero JV, et al.: Lightning injuries in a pregnant woman: a case report and review of the literature. Burns. 2005; 31(8): 1045–9. PubMed Abstract | Publisher Full Text\n\nOkafor UV: Lightning injuries and acute renal failure: a review. Ren Fail. 2005; 27(2): 129–34. PubMed Abstract | Publisher Full Text\n\nForster SA, Silva IM, Ramos ML, et al.: Lightning burn--review and case report. Burns. 2013; 39(2): e8–12. PubMed Abstract | Publisher Full Text\n\nAslar AK, Soran A, Yildiz Y, et al.: Epidemiology, morbidity, mortality and treatment of lightning injuries in a Turkish burns units. Int J Clin Pract. 2001; 55(8): 502–4. PubMed Abstract\n\nSumangala CN, Kumar MP: Lightning Death: A Case Report. J Indian Acad Forensic Med. 2015; 37(1): 93–95. Publisher Full Text\n\nHolle RL: Lightning Fatalities in Tropical and Subtropical Regions. Prepr 29th Conf Hurricanes Trop Meteorol. 2010; 1–10. Reference Source\n\nRitenour AE, Morton MJ, Mcmanus JG, et al.: Lightning injury: a review. Burns. 2008; 34(5): 585–94. PubMed Abstract | Publisher Full Text\n\nMashreky SR, Rahman A, Chowdhury SM, et al.: Epidemiology of childhood burn: yield of largest community based injury survey in Bangladesh. Burns. 
2008; 34(6): 856–62. PubMed Abstract | Publisher Full Text\n\nHossain J, Biswas A, Rahman F, et al.: Snakebite Epidemiology in Bangladesh — A National Community Based Health and Injury Survey. Health (Irvine Calif). 2016; 8: 479–86. Publisher Full Text\n\nBowen GA: Document Analysis as a Qualitative Research Method. 2009.\n\nLightning kills 81 people in two days in Bangladesh. The Indian EXPRESS. [Internet]. 2016. Reference Source\n\nLightning claims 81 lives across Bangladesh in two days. The Daily Samakal. [Internet]. 2016. Reference Source\n\nLightning strikes killed 81 people across Bangladesh during recent thunderstorms, says government. bdnews24.com. [Internet]. 2016. Reference Source\n\nLightning takes 17 more lives in 11 districts. The Daily Star. [Internet]. Reference Source\n\nAwareness can reduce death rate from lightning. Daily Sun. [Internet]. Reference Source\n\nBangladesh lightning death toll rises to 35. The Indian EXPRESS. [Internet]. 2016. Reference Source\n\nBangladesh lightning death toll rises to 35. The Hindu. [Internet]. 2016. Reference Source\n\nMore than 60 killed by lightning in Bangladesh in two days [Internet]. FOX NEWS World. 2016. Reference Source\n\nQuinn M: Death by Lightning a Danger in Developing Countries [Internet]. 2013. Reference Source\n\nIslam S: Bangladesh declares lightning strikes a disaster as deaths surge [Internet]. REUTERS. 2016. Reference Source\n\nLightning kills at least 93 as monsoon sweeps India. The Telegraph. [Internet]. 2016. Reference Source\n\nLightning now a disaster: Ministry. The Independent. [Internet]. 2016. Reference Source\n\nLightning: The New Natural Disaster. The Daily Star. [Internet]. 2016. Reference Source\n\nO’Keefe Gatewood M, Zane RD: Lightning injuries. Emerg Med Clin North Am. 2004; 22(2): 369–403. PubMed Abstract | Publisher Full Text\n\nGomes C, Kithil R, Ahmed M: Developing a Lightning Awareness Program Model for Third World Based on American-South Asian Experience.
Proc 28th Int Conf Light Prot. 2006; 5. Reference Source\n\nHolle RL: Annual rates of lightning fatalities by country. USA; 2003.\n\nHuss F, Erlandsson U, Cooray V, et al.: [Lightning injuries--a mixture of electrical, thermal and multiple trauma]. Lakartidningen. 2004; 101(28–29): 2328–31. PubMed Abstract\n\nPincus JL, Lathrop SL, Briones AJ, et al.: Lightning deaths: a retrospective review of New Mexico’s cases, 1977-2009. J Forensic Sci. 2015; 60(1): 66–71. PubMed Abstract | Publisher Full Text\n\nBlumenthal R: Lightning fatalities on the South African Highveld: a retrospective descriptive study for the period 1997 to 2000. Am J Forensic Med Pathol. 2005; 26(1): 66–9. PubMed Abstract\n\nTrengove E, Jandrell I: Lightning myths in southern Africa. Nat Hazards. 2015; 77(1): 101–10. Publisher Full Text\n\nIkpeme IA, Udosen AM, Asuquo ME, et al.: Lightning burns and traditional medical treatment: a case report. West Afr J Med. 2007; 26(1): 53–4. PubMed Abstract | Publisher Full Text" }
[ { "id": "19402", "date": "17 Jan 2017", "name": "Mohammad Delwer Hossain Hawlader", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe authors have described one of the most important public health needs for further action. The paper is well structured and well written. Although the findings are from the Bangladesh Health and Injury Survey, which was conducted in 2003, and the epidemiology and associated factors may have changed over the last 10 years, there are no alternative data for comparison, so this paper is important for decision makers at the policy level.\nThe English of the manuscript is understandable. I would recommend accepting and approving the article to be indexed.", "responses": [] }, { "id": "18837", "date": "24 Jan 2017", "name": "Aziz Rahman", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors have focused on an important area of public health research from the perspective of Bangladesh.
Due to the scarcity of available data, it is important to publish more evidence-based research. The findings of the study are interesting and would shed light on future awareness programs and possible policy changes. The following issues need to be addressed for my approval:\nThe title needs to be amended to make it grammatically correct.\n\nThe article, specifically the abstract, needs to be revised for grammatical and language errors.\n\nIntroduction: Needs more discussion of data availability from Bangladesh or neighboring countries.\n\nResults: The numbers presented under the incidence are confusing. The authors mentioned that 51.5% of those struck by lightning were children, whereas the next line says that the highest incidence was among those >50 years old. The authors should consider presenting more inferential analyses (only gender and residence location are presented).\n\nDiscussion: There should be a comprehensive discussion of the way forward and how to prevent such incidents, mentioning the steps taken in neighboring countries.", "responses": [] }, { "id": "19557", "date": "30 Jan 2017", "name": "Kazi Md Noor-ul Ferdous", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nIn the above article the authors have addressed an important but neglected public health issue in Bangladesh. The study is part of a national injury survey with country-representative data. They clearly present the magnitude of the problem and the factors associated with it.
The authors also matched the findings with newspaper reports from 2016.\nThe study is well designed, and the title is clear and specific. The objectives match the title and results. The conclusion points to the way forward; however, it would be interesting if the authors could discuss how different stakeholders can be involved in the process and work together to reduce lightning injuries and deaths in Bangladesh.\nYou may accept the paper to be indexed.", "responses": [] }, { "id": "19744", "date": "01 Feb 2017", "name": "Mithila Faruque", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nI have gone through the article and I think this is a public health concern; I would like to thank the authors for bringing this timely problem to readers' attention. It is certainly a good initiative. I have only the following comments for the authors to clarify:\n\nTitle: The title would be more appropriate as a single, simpler sentence.\nMethods:\nWhich areas were taken as urban and rural in the study should be specifically explained. Dhaka Metropolitan City and an Upazila cannot be treated as the same urban area. Why was Dhaka selected separately from the 12 randomly selected districts? What was your denominator in the calculation of the incidence rate over 6 months?\nDiscussion:\nIf you could compare the results (such as which subjects were most affected and the lightning circumstances) with other studies, it would add more value to the study.
‘People living in rural communities in Bangladesh have a number of misconceptions including religious myths and superstitions, as well as social stigma attached to lightning injuries’ – why is this included in the discussion? Did you obtain this type of information from your study?\nConclusion: The conclusion should contain what was actually found in the study; the rest may form part of the recommendations.", "responses": [] } ]
1
https://f1000research.com/articles/5-2931
https://f1000research.com/articles/5-2930/v1
29 Dec 16
{ "type": "Research Article", "title": "Measuring the relative importance of different agricultural inputs to global and regional crop yield growth since 1975", "authors": [ "Erik Nelson", "Clare Bates Congdon", "Clare Bates Congdon" ], "abstract": "Background: We identify the agricultural inputs that drove the growth in global and regional crop yields from 1975 to the mid-2000s. Methods: We compare and contrast the inputs that drove yield change as identified by econometrically estimated yield functions and decision trees that use yield change as the class attribute. Results: We find that improvements in agricultural science and management, increased fertilizer use, and changes in crop mix around the world explained most of the gain in global crop yields, although the yield impacts of input use varied across the latitudinal gradient. Climate change over this time period caused yields to be only slightly lower than they would have been otherwise. In some cases, cropland extensification had as much of a negative impact on global and regional yields as climate change. Conclusions: To maintain the momentum in yield growth across the globe 1) the transfer of agricultural chemicals and investment in agricultural science and management in the tropics must increase rapidly and 2) international trade in agricultural products must expand significantly.", "keywords": [ "climate change", "agricultural yields", "cropland extensification", "econometrics", "decision trees", "international trade" ], "content": "Introduction\n\nA consensus has emerged that recent climate change has had a negative effect on crop yields around the world (e.g., 1–4). Accelerating climate change is likely to put even more downward pressure on agricultural productivity around the world in coming years. Further, demand for food will grow quickly as the world races to a population of ~12 billion by 21005. 
Therefore, the vital question is: How can the world’s farmers increase crop productivity, as necessitated by global population growth, despite the expected drag on yields caused by climate change, while leaving the socially desirable amount of forest, grasslands, and other semi-natural land cover around the world?6\n\nBefore suggesting a way forward on this issue, we first have to determine what agricultural inputs are most important to yield growth around the world. Here we use global yield and agricultural input data from 1975 to the mid-2000s to determine what agricultural production inputs were most responsible for the growth in global and regional yields during this time period. The inputs we consider include growing season weather, crop choice, investment in irrigation capability, land, and machinery, agricultural science and management, fertilizer use, cropped footprint7, and cropped soil quality. We find that improvements in agricultural science and management (e.g., technology and chemical use), increased fertilizer use, and changes in crop mix around the world explained most of the gain in global crop yields from 1975 to the mid-2000s. Improvements in agricultural science and management were particularly important drivers of yield growth in the temperate region and changes in crop mix and increased fertilizer use were particularly important drivers of yield growth in the tropics. Further, the deleterious impacts of climate change on yield were small compared to the yield-augmenting factors noted above. 
Finally, cropland extensification over the last 40 years has dragged average global yields down as well, sometimes as much as climate change has.\n\nOur results indicate that 1) transferring better agricultural science and management and other inputs to the tropics, 2) encouraging countries to exclusively concentrate on growing the crops most suited to their soil-climate conditions (and trading for the rest of the crops their consumers want), and 3) focusing on increasing the productivity of existing cropland in lieu of concentrating on cropland extensification will be the most effective ways to ameliorate climate change’s expected drag on global yields.\n\n\nResults\n\nWe used two analytical methods to measure relative importance of agricultural inputs to the growth in global and regional crop yields between 1975 and the mid-2000s.\n\nFirst, we estimated country-level yield functions with a fixed-effects econometric model using a 1975 to the mid-2000s global panel dataset (Supplementary Table 1 and Supplementary Table 2; Dataset 1 and Dataset 28,9). We estimated country-level yield functions using both Mg ha-1 and M kcals ha-1 yield metrics: Mg or M kcal production across all crops in a country in year t divided by hectares of cropland in the country in year t. Second, we used the estimated yield functions and the panel data to obtain annual expected country-level yields, both in Mg ha-1 and M kcals ha-1, for the 1975 to the mid-2000s time period. Third, we generated global and regional expected crop yields in year t by taking the weighted average of expected country-level yields in year t using country-level cropped hectarage as weights. 
This process generated three expected “all-crop” yield curves, one for the globe, one for the temperate region, and one for the tropics region (see Figure 1 for the global Mg ha-1 and M kcals ha-1 expected yield functions).\n\nThe counterfactual global yield curves were constructed by holding all country-level agricultural inputs at 1975 levels except growing season weather. These graphs are based on “long” model results (based on the dataset with 1975 to 2007 data). Expected global yield grew 46.5% when measured in Mg ha-1 (A) and 58.8% when measured in M kcals ha-1 (B) between 1975 and 2007. Under the numeraire counterfactual global yield fell 2.1% when measured in Mg ha-1 (A) and 2.5% when measured in M kcals ha-1 (B) between 1975 and 2007. The light gray line indicates observed global yields.\n\nTo estimate the overall contribution of an agriculture production input or a group of inputs on 1975 to mid-2000s global or regional crop yield trends, we again found the expected global or region yield curve (as explained above) while holding the input or inputs in question fixed at observed 1975 levels (all other variables took on observed values). For example, to measure the impact of the change in cropped land soil quality on yield trends, the “soil quality” counterfactual yield curves were estimated with the quality of cropped land soil around the world remaining fixed at 1975 levels while all other inputs varied as observed. Then by integrating over the gap formed between the expected global or regional yield curve and the counterfactual global or regional yield curve we have measured the relative contribution of that input or group of inputs to 1975 to mid-2000s growth in global or regional yields, all else equal. The larger a counterfactual’s integral (in absolute terms), the greater the impact that the input or group of inputs in question had on global or regional yield trends from 1975 to the mid-2000s. 
A positive (negative) integral means that the 1975 to mid-2000s changes in the input in question had, on net, a positive (negative) impact on average global or regional yield.\n\nWhen discussing results below, we normalize the size of a counterfactual’s integral by measuring its size relative to the size of the integral formed by the numeraire counterfactual. In a numeraire counterfactual all inputs are held at 1975 levels, except growing season weather over each country’s crop production area, which varied as observed (the numeraire counterfactuals always form the largest integrals). We refer to a numeraire counterfactual’s integral as the ‘Mg gap’ or the ‘kcals gap’ (Figure 2). For example, the mean global “crop mix” counterfactual has an integral of 9.11 over the 1975 to 2007 period when yield is measured in Mg ha-1. The mean global “numeraire Mg” counterfactual produces an integral of 30.53. Thus, the mean global “crop mix” counterfactual makes up or explains 9.11/30.53 = 29.83% of the 1975 to 2007 global Mg gap. The larger the percentage, positive or negative, the more important the counterfactual’s input or group of inputs was to determining the 1975 to mid-2000s global or regional yield trend.\n\nIn (A) an estimated global or regional counterfactual yield curve (one or more inputs are held fixed at 1975 levels in each country), measured in Mg, is given by the dotted black line. Assume the integral of the area between the expected global or regional yield curve (the solid black line) and the estimated counterfactual global or region yield curve is 10.00. Further, assume the integral of the area between the expected global or regional yield curve (the solid black line) and the numeraire counterfactual yield curve (the solid blue line) is 30.53. 
Then the counterfactual explains 10/30.53 or 33% of the “global Mg gap.” In (B) the estimated global or regional counterfactual explains −5/30.53 or −16% of the “global Mg gap.”\n\nWe also used decision tree algorithms to obtain a “second opinion” on which agricultural inputs were most important in explaining the growth in global and regional crop yields between 1975 and the mid-2000s. A decision tree segregates a process’s outcomes (in our case, annual changes in observed country-level yields) based on the attributes of a process (in our case, annual changes in each country’s input levels). A tree can be interpreted as the rules that map attributes of a process to the outcome of the process. In our case we find rules – ranges in annual changes in input levels – that best predicted annual changes in country-level yields (Supplementary Figure 1–Supplementary Figure 12; Dataset 310). When using econometric techniques to build a yield function, we made several assumptions regarding the variable-generating process. In the decision tree analysis, a machine learning approach, we identified key features of the data without committing to statistical assumptions.\n\nFor each analytical method we discuss two sets of results. In one case, we derive results for the time period 1975 to 2007; however, this set of results does not include fertilizer as a production input. In the other case we derive results for the time period 1975 to 2002; this set of results does include fertilizer use as an explanatory variable. The source of much of our agriculture data changed its fertilizer collection methods beginning in 200311. Harmonizing the two fertilizer databases was not practical. Below we refer to results derived from the 1975 to 2002 dataset as the “wide” results and results derived from the 1975 to 2007 dataset as the “long” results.\n\nImprovements in agricultural science and management, crop-mix change, and increased fertilizer use have explained most recent yield growth.
When using either the long or wide datasets, time was the largest contributor to crop yield growth (both in terms of Mg ha-1 and M kcals ha-1) at the global and temperate region levels (Table 1 and Table 2 for the wide and long results, respectively). (Unless otherwise stated, we discuss mean results in the text.) At the global level, the time counterfactual’s integral makes up approximately 57% or 72% of the Mg gap (always wide and long results, respectively, unless otherwise stated) and 37% or 47% of the kcal gap. In the time counterfactual, we held the year variable fixed at 1975. In the temperate region, the time counterfactual makes up 79% or 90% of the Mg gap and 62% or 67% of the kcal gap. At the other extreme, the time counterfactual explains only -1.5% or 24% and -12.5% or 18% of the tropics’ Mg and kcal gaps, respectively.\n\nThe global model uses all countries while the regional models only use countries in the given region. The “Low” estimates are calculated with the 25th percentile annual yield estimates in each country. The “High” estimates are calculated with the 75th percentile annual yield estimates in each country. The cells in black indicate the integral if all agricultural inputs other than weather are fixed at 1975 levels (the numeraire counterfactuals; see Figure 1 and Figure 2). All other cells have an increasingly dark shade of green (red) as the integrals get more positive (negative). Pure white occurs at 0.\n\nSee the legend of Table 1 for more details.\n\nOur econometric model’s time trend jointly captures the impact of several agricultural inputs that are omitted from our global panel database. Between 1975 and the mid-2000s, agricultural technology, agricultural management science, pesticide use, and international trade of agricultural commodities (variables missing from our dataset) increased around the world12. That greater technology, better management, and more pesticides increased yield is intuitive.
However, the impact of increasing globalization on yields was important as well. Greater liberalization of agricultural production policies around the world and advancements in shipping technology meant that farmers were able to access international markets at increasingly lower costs13. This increased market access spurred greater investment in farms (e.g., 14). Further, as cropland around the world became scarcer relative to the supply of rural labor, farmers became increasingly motivated to maximize yield rather than economize on labor use (e.g., 15). The time trend crudely accounts for the joint impact of these unobserved factors on yields (including fertilizer use in the long results but not in the wide results, which explicitly include fertilizer use). Our results make it clear that the recent growth in agricultural technology, input use, farm management, globalization, and market liberalization benefited the farmers of more developed nations in the temperate region far more than it did farmers of tropical countries.\n\nWhen using either the wide or long datasets, change in crop mix was the largest net contributor to yield growth in the tropics. The tropical region’s integral from the crop mix counterfactual, where we kept the relative mix of crop hectarage in each country frozen at 1975 levels, makes up 55% or 61% and 58% or 65% of the tropics’ Mg and kcals gaps, respectively. Between 1975 and 2007 oil crops, sugarcane, roots and tubers, and fruit became a larger part of cropped area in the tropical region (Figure 3). According to the econometrically estimated yield models (Supplementary Table 1 and Supplementary Table 2), replacing wheat and other grain production with sugarcane, roots and tubers, and fruit production was particularly important to improving overall crop yield in the tropics.
The gain in yield due to this crop switching can partly be explained by a simple substitution effect: Tropical cropland was increasingly used to grow denser fruits and roots and tubers versus less dense grains. However, this also reflects a comparative advantage effect, as wheat and most grains are most effectively grown in cooler climates while fruits are most cost-effectively grown in the tropics16. In comparison to its impact in the tropics, change in crop mix in the temperate region had little impact on yield when measured in Mg and only slightly improved yield when measured in M kcals.\n\nCropped area by crop type (crop mix) across the globe (A), across countries in the temperate region (B), and across countries in the tropical region (C). These graphs give the weighted average of area planted in each crop group across the globe or region over time. We use cropped hectarage in country c in year t as weights. Red (black) indicates a decrease (increase) in the crop or crop group’s share in the overall mix between 1975 and 2007. The percentage change indicates the change between 1975 and 2007.\n\nThe change in a country’s crop mix from 1975 to the mid-2000s was most likely driven by changes in global demand for various foodstuffs (e.g., 17,18) and the increasing globalization of crop production and trade12. As an example of the former effect, retail sales of foods with high oil and fat content increased dramatically in many countries from 1983 to 2002. Further, the number of calories that the average global person obtained from cereals fell while the number of calories they obtained from fruits and vegetables rose from 1996 to 200219. As an example of the globalization effect, consider that the reduction of several trade barriers in the early 1990s was largely responsible for the doubling of soybean production in Brazil20. Other potential explanations for country-level changes in crop mix include farmers adapting to climate change. 
However, there is little evidence of adaptation being a large driver of crop mix change.\n\nIncreasing fertilizer use across the globe from 1975 to 2002 (Table 3) was the next most important contributor to the steady gains in yield over that time period (only the wide dataset includes fertilizer data). When yield is measured in Mg ha-1, the fertilizer counterfactual makes up 23%, 32%, and 38% of the temperate, global, and tropics Mg gaps, respectively. When yield is measured in M kcals ha-1, fertilizer makes up 12%, 23%, and 42% of the temperate, global, and tropics kcals gaps, respectively. Further, the time trend no longer has a positive effect on the tropical yield when using the wide dataset. In fact, the time counterfactual produces a negative kcal gap in the tropics.\n\nAll averages are weighted by cropped area in each country in each year.\n\nRecent climate change slightly dampened yield growth. Compared to time, crop mix, and fertilizer use, the impact of the other agricultural inputs on recent global and regional yield was much smaller in magnitude. When using the long or wide datasets, recent increases in daytime growing season temperatures (DGSTs; Table 4) negatively affected global and regional yields. When yield is measured in Mg ha-1, the DGST counterfactual makes up –4% or –6% of the global Mg gap (as before, the order is always wide and long results, respectively, unless otherwise stated). When yield is measured in M kcals ha-1, the DGST counterfactual makes up –4% or –5% of the global kcals gap. In the DGST counterfactual we fixed DGSTs around the world at 1975–1977 averages.
The negative impact of increasing DGSTs on global yield was almost entirely explained by its drag on tropical yields; the impact of increasing DGSTs on temperate region yields was almost non-existent.\n\nAll averages are weighted by cropped area in each country in each year.\n\nAll else equal, warm days and cool nights allow for vigorous plant growth during the day and efficient plant respiration at night21–24. In contrast, warmer nighttime temperatures cause more wasteful respiration and leave less energy for growth during the day. Therefore, we were surprised to find that increasing nighttime growing season temperatures (NGSTs) at the global and tropical region scales (Table 4) were associated with a boost in yields. The NGST counterfactual makes up ~10% of the tropics’ Mg and kcal gaps. However, in the temperate region we find evidence of the expected impact of increasing NGSTs on yield: the NGST counterfactual makes up –3% or –4% and –3% or –2% of the temperate region’s Mg and kcal gaps, respectively. Changes in growing season precipitation had no effect on global or regional yields.\n\nRecent change in cropped soil quality and cropland footprint had a negligible effect on yield growth. Recent changes in the quality of cropped land around the world have had a mixed effect on yield growth. One way we measure the change in the quality of land a country crops on is by measuring the change in its cropped soil’s nutrient availability and retention capacity as its cropland footprint shifts across the landscape25. We also measure a country’s extensive change in footprint by tracking its net areal change in cropland over time. The extensive change in cropped area is a catch-all for the change in land quality conditions not measured by the change in the nutrient availability and retention capacity of cropped soils.
We assume that a country’s most productive land has long been used for crops and that net growth in cropland extent since 1975 will have had a negative impact on yield, as only more marginal lands were available for cropping after 1975. For example, most of the globe’s 1975 to mid-2000s growth in cropland extent occurred in the tropics (Table 4). Further, the decline in the overall quality of cropped soil has been more dramatic in the tropics as more and more tropical forest area, with its poor soils, has been used for crops since 197526.\n\nA general worsening in the nutrient availability and retention capacity of cropped soils across the globe was associated with slightly lower yields (Table 1 and Table 2). However, the extent of the loss was very small (the soil quality counterfactual makes up –0.2% to –1.2% of global Mg and kcal gaps). As expected, net growth in cropped area was associated with a decline in global and tropical Mg yields. Again, however, the extent of the negative impact is relatively minor (the area cultivated counterfactual makes up –13% or –2% of global Mg gaps and –7% or –5% of tropical Mg gaps). In contrast, and contrary to expectations, net growth in cropped area was associated with an increase in global and temperate region yields when measured in M kcals ha-1. Again, however, the extent of the gap created by net change in cropped area in these cases is relatively small (the area cultivated counterfactual makes up 5% or 16% of global kcals gaps and 12% or 19% of temperate region kcals gaps).\n\nThe counterintuitive positive relationship between net cropland expansion and higher M kcal ha-1 yield in the temperate region may hold for several reasons. First, it may be that land that was marginal for crops grown earlier in the 20th century became more suitable for the more kcal-dense crop mixes grown over the last 40 years.
Second, land that was marginal given earlier technology and cultivars may have become increasingly productive, especially for kcal-rich crops, with emerging technology. Third, cropland across the world has generally become better connected to transportation infrastructure, thereby encouraging farmers to invest in their operations and potentially more than compensating for their land’s quality shortcomings14,27. Finally, we note that these counterintuitive results are less noticeable when using the wide dataset. In other words, the yield curves estimated with the long dataset may be biased upwards with respect to the area cultivated variable due to the omitted fertilizer variable.\n\nInvestment in land, machinery, and irrigation had little impact on recent yield growth. Surprisingly, investment in irrigation capacity and in land, machinery, and equipment (Table 4) had very little effect on global and regional yields (see the irrigation capability and investment in land and equipment counterfactuals in Table 1 and Table 2). Increases in irrigation capacity had a positive effect on Mg and kcal yield across the globe and in both regions, but no irrigation capacity counterfactual produced an integral larger than 4% of a gap. Further, investment in land and farm machinery and equipment appears to have contributed little to yield growth over time. Investment in land may have had little effect on yield because land development investment per cropped hectare only increased by 10% around the globe between 1975 and 2007 and actually fell over this time period in the tropics (Table 4). However, the lack of investment in land in the tropics was countered by a contemporaneous 60% increase in the value of farm machinery and equipment per cropped hectare in the region.
The large increase in machinery and equipment use in the tropics vis-à-vis the temperate region may explain why the tropical integrals for the investment in land, machinery, and equipment counterfactual are larger than the analogous integrals for the temperate region. The investment in land, machinery, and equipment counterfactual makes up 6% of the tropics’ Mg gap (with both the wide and long model estimates) and 8% or 1% of the tropics’ kcal gap (with the wide and long model estimates, respectively).\n\nBefore we analyzed our two panel datasets with decision trees, we first transformed them into annual change datasets. These annual change datasets begin with each country’s 1975 to 1976 changes and end with each country’s 2001 to 2002 changes (wide dataset) or 2006 to 2007 changes (long dataset). Further, we transformed the continuous distributions of annual change in country-level yields into discrete distributions of three tertiles: low annual change (L), moderate annual change (M), and high annual change (H) (see Table 5 for an exact numerical definition of these categories).\n\nNotes: A high yield change (“H”) in a country is given by a one-year change of (0.158,10.1] Mg ha-1 or (0.354,30.2] M kcals ha-1 with the long dataset and (0.17,7.66] Mg ha-1 or (0.401,30.2] M kcals ha-1 with the wide dataset. A low yield change (“L”) in a country is given by a one-year change of [-10.2,-0.0647] Mg ha-1 or [-30.7,-0.197] M kcals ha-1 with the long dataset and [-10.2,-0.0703] Mg ha-1 or [-30.7,-0.208] M kcals ha-1 with the wide dataset. Input names in black refer to crop mix inputs, names in red refer to growing season weather inputs, and names in blue refer to other input types.\n\nThe decision tree algorithm recursively partitions the dataset, eventually settling on n sets of decision sequences that predict outcomes of L, M, and H (n traversals of a tree, from the “root” that contains all the data to a “leaf” that contains a subset of the data)28–30.
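The annual-change and tertile transformation described above can be sketched as follows. This is a minimal Python illustration with a made-up two-country panel; the actual analysis used the full country-level panel and the empirical cut points reported in Table 5.

```python
import numpy as np

def annual_changes(panel):
    """panel: dict mapping country -> list of yearly yields (e.g., Mg/ha).
    Returns a flat array of year-over-year changes across all countries."""
    deltas = []
    for series in panel.values():
        arr = np.asarray(series, dtype=float)
        deltas.extend(np.diff(arr))  # change from year t to year t+1
    return np.asarray(deltas)

def tertile_labels(deltas):
    """Discretize continuous annual changes into L/M/H tertiles."""
    lo, hi = np.quantile(deltas, [1 / 3, 2 / 3])  # tertile cut points
    return np.where(deltas <= lo, "L", np.where(deltas <= hi, "M", "H"))

# Hypothetical mini-panel for two countries (values are illustrative only).
panel = {"A": [2.0, 2.1, 2.5, 2.4], "B": [1.0, 1.3, 1.2, 1.6]}
d = annual_changes(panel)
labels = tertile_labels(d)
```

With tertile cut points, roughly one third of the country-year changes land in each of the L, M, and H categories, which is what makes the categories comparable across datasets.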
The partitioning of the data can be constrained by one or more pruning rules. We pruned trees to make them easier to interpret and to increase our confidence in their predictive power. Here, we pruned trees by mandating that each leaf node in a tree has at least 50 records that support the decision sequence leading to the leaf node. In other words, sets of country-level year-to-year changes in inputs could not be mapped as a branch unless at least 50 instances of that set were observed in the data. After meeting the pruning rules, the decision tree algorithm produced the sets of annual changes in agricultural inputs that best predicted whether a country had an L, M, or H categorical change in annual yield.\n\nThe unique combinations of yield metric {Mg ha-1, M kcals ha-1}, scale {globe, temperate, tropics}, and dataset {wide dataset, long dataset} mean that we created 12 unique trees of annual yield change predictions (see Supplementary Figure 1–Supplementary Figure 12). We summarize the 12 decision trees in several ways. First, we report on the accuracy and complexity of each tree (Table 5; Dataset 3)10. Second, we list all of the inputs that are found in the first three levels of a tree. We highlight these inputs because they do the most to predict annual change in a country’s yield. Third, we highlight the traversal in each tree with the highest number of records. These traversals indicate the annual changes in agricultural inputs that are most common across space and time. Finally, we indicate the traversals that generate the greatest proportion of high (H) and low (L) annual country-level yield changes in a tree.
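The effect of a minimum-leaf-size rule can be illustrated with a toy, single-feature splitter in plain Python. The study itself used RWeka's J48 implementation of C4.5, and the sugarcane-share values and the minimum of 2 below are hypothetical stand-ins for the paper's 50-record rule.

```python
import numpy as np

def best_split(x, y, min_leaf):
    """Find the threshold on one feature that best separates labels y,
    subject to both sides retaining at least `min_leaf` records (the same
    idea as J48's minimum-instances-per-leaf option)."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = None
    for i in range(min_leaf, len(x) - min_leaf + 1):
        left, right = y[:i], y[i:]
        # impurity: number of records outside each side's majority class
        imp = sum(len(s) - np.max(np.unique(s, return_counts=True)[1])
                  for s in (left, right))
        if best is None or imp < best[0]:
            best = (imp, (x[i - 1] + x[i]) / 2)
    return best  # (misclassified count, threshold), or None if too few rows

# Illustrative data: change in sugarcane share vs. L/H yield-change label.
x = np.array([-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4])
y = np.array(["L", "L", "L", "L", "H", "H", "H", "H"])
imp, thr = best_split(x, y, min_leaf=2)
```

Splits whose left or right side would hold fewer than `min_leaf` records are never considered, which is exactly how a leaf-size rule trades a little accuracy for interpretability.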
These traversals give the ranges in annual input change that, respectively, best predict a high and a low annual yield change in a country.\n\nWe find that the trees constructed from the wide dataset are simpler (fewer traversals) than those constructed from the long dataset, and the trees constructed with the change in Mg ha-1 yield metric are simpler than those constructed with the change in M kcal ha-1 yield metric. (The econometric analysis also indicates that the wide dataset with yield measured in Mg ha-1 fits the yield model better than the other three yield measure–dataset combinations.) In terms of prediction accuracy, the trees constructed over the temperate countries are better than the trees generated over all countries and over tropical countries only, and the trees generated with yield measured in M kcals ha-1 are better than the trees generated with yield measured in Mg ha-1. Therefore, annual yield changes in the temperate countries are explained by a narrower set of annual input changes than annual yield changes in the tropics. To put it another way, explanations of changes in tropical yields are messier.\n\nNext, we describe the inputs found closest to the roots of trees, where the root of a tree contains all the data. We define “close to the root” as the first three levels of a tree from its root (the first three decisions). Changes in a country’s crop mix – change in relative area devoted to sugarcane, roots and tubers, and wheat – appear close to the roots of all 12 trees. In particular, sugarcane is found close to the root of all 12 trees and the roots and tubers crop category is found close to the root of all three trees formed with the long dataset when yield is measured in Mg ha-1. The annual change in DGSTs is close to the root of three of the four trees estimated over the tropical countries. Finally, change in cultivated area is found close to the root of the two trees estimated over the temperate countries when yield is measured in Mg ha-1.
Therefore, the decision trees indicate that recent annual changes in yield across the globe were most associated with changes in crop mix and that each region had idiosyncratic drivers of yield change as well.\n\n(In the decision tree analysis we de-trended the data by using annual changes; in the fixed-effects analysis we de-trended the data by including time as an explanatory variable. This means the decision tree analysis cannot account for the various unobserved inputs that are correlated with time.)\n\nA gain in the proportion of a country’s crop mix devoted to sugarcane is the best predictor of high (H) yield change in five of the six trees created with the wide dataset and four of the six trees created with the long dataset. Prediction of the H category is a bit more complicated in the global trees estimated with the long dataset. According to trees estimated with the long dataset, gains in wheat and roots and tubers in the proportional mix of a country’s crop profile, modest changes in sugarcane’s contribution to the proportional mix, and growing seasons that had cooler daytime temperatures than the previous growing season were most likely to have led to a high annual gain in a country’s yield.\n\nThe best set of predictors for a negative change in annual yield (the L yield category) is a bit more expansive than the sets of best predictors for the H yield category. Not surprisingly, losses in the proportion of a country’s crop mix devoted to sugarcane are found in all tree branches with the highest proportion of L observations. In the tropics, one-year gains in DGST and NGST were also associated with yield losses from one year to the next. Finally, an increase in a country’s cultivated area from one year to the next was associated with a negative change in a temperate country’s Mg ha-1 yield.\n\nWhen we compare the decision trees (Table 5) to the econometrically estimated counterfactual results (Table 1 and Table 2), several similarities and differences emerge.
First, both analyses highlight that changes in crop mix have been one of the most important contributors to the gains in crop yields over the last 40 years. The decision tree analysis also reinforces the econometric evidence that gains in DGSTs dampened gains in yields more in the tropics than in the temperate region. The trees, like the counterfactual analysis, also suggest that investment in irrigation, land, machinery, and equipment and the quality of cropped soil had little effect on yield change. The counterfactual and the decision tree analyses disagree on the importance of fertilizer use in explaining yield gains over the last 40 years, however; the counterfactual analysis deems this input more important than the decision tree analysis does.\n\n\nDiscussion\n\nImprovements in agricultural technology, management, and science, changes in crop mix, and increased fertilizer use were responsible for the lion’s share of yield improvement around the world from 1975 to 2007. The negative yield impacts associated with increases in growing season temperatures were smaller. In some cases, the changes in the quality of land used for crops and cropland footprint were just as detrimental to yields as changes in climate.\n\nThe downward pressure on crop yields due to climate change will worsen in the future (e.g., 31). We see two paths to continued yield improvements despite this growing drag on yields. First, investment in agricultural technology, chemical inputs, management, and science in the tropics is vitally important (the so-called closing of “yield gaps”15). As indicated by the “time” counterfactuals, the tropics have not yet experienced the agricultural science and management revolution that the temperate region has. Second, if each country can increasingly specialize in the crops best suited for its (changing) climate and trade for the rest of its crop needs, then the spatial allocation of crops will become more efficient.
For example, our results suggest that continued divestment from grain production in the tropics and greater investment in grain production in the temperate zone would do much to boost food production in the future. Further, greater fruit and sugarcane production in the tropics relative to the temperate zone would also help accelerate food production32. More trade liberalization and the reduction or even elimination of national crop subsidy programs would make it easier for each country to grow the crops best suited for its soil–climate conditions13.\n\nSeveral suggested paths to greater food production are not supported by our analysis. Cropland extensification contributed little to yield gains in the immediate past and is not likely to do so in the future27. Instead, switching to more climate-appropriate crops, using more fertilizers, chemicals and improved cultivars, and improving the nutrient retention capability of already existing cropland appears to be a more effective strategy for increasing worldwide yields and, ultimately, food production (i.e., land sparing versus land sharing; 33). This strategy would also leave more land for nature in an increasingly populated world. Further, we are also skeptical that an emphasis on investment in infrastructure in and of itself (i.e., machinery and irrigation capacity) will significantly increase yields in the future; these investments did not do much to boost crop production in the recent past. Machinery that is compatible with precision agriculture (i.e., technology) is likely to be more effective than just more tractors and other machinery. Of course, the recommendation on investment in irrigation could change if climate change severely disrupts current rainfall patterns.\n\nThis analysis is limited by several data issues.
First, our treatment of weather data (see Materials and Methods) did not allow us to isolate changes in growing season weather due to spatial reallocation of cropland versus changes in the atmospheric system. Separating these trends would help us better understand the effect of recent climate change on crop yields around the world. Another shortcoming of this analysis is that it does not specifically account for farmer reaction to climate change; this omission could bias our results. For example, if the changes in the spatial pattern of production and crop choice were partially affected by climate change, then we have underestimated the impact of climate change and overestimated the impact of crop choice and cropped-footprint change on recent yield trends. In addition, we are missing data for all countries that were in the Soviet Union and many Warsaw Pact countries (e.g., Poland and Hungary). One of the data sources we used to construct our panel datasets does not contain a consistent set of data back to 1975 for these countries. Most of these countries are in the temperate region. Therefore, our analysis, especially the temperate region analysis, could be biased due to the omission of these countries from the dataset. Further, the source of our gridded crop maps stopped providing annual grid cell maps of global cropland beyond 200734. Thus our dataset ends with 2007 data and cannot be extended into the early 2010s. Finally, to conduct this analysis, we either had to summarize the native grid-level data on cropped soil quality and growing season weather at the country level or we had to decompose the native country-level data on production, crop mix, and investment to the grid-cell level.
We used the former approach.\n\nA limitation of our decision tree analysis is that trees are constructed in a “greedy” fashion, iteratively splitting on the most powerful agricultural inputs (in a predictive sense) as the branches are built; this can lead to suboptimal trees when there are nonlinear interactions among the variables. Quinlan’s C4.5 algorithm28 strives to mitigate the biasing effect of the iterative tree-building approach by repeatedly building a tree with a subset of the data and assessing its quality on the held-out data to find the most robust trees; the RWeka decision-tree package used for this analysis is a slightly updated version of C4.5. Additionally, we could do more to explore the sensitivity of tree results to different transformations of the data, for example, whether the trees would have greater explanatory power if change in yield outcomes were transformed to a discrete distribution of four categories instead of three.\n\n\nMaterials and methods\n\nFirst, we used the method of least squares to estimate a fixed effects model of annual per hectare crop yield at the country level from years ṯ through t̄.\n\n\n\nThe land investment variable in vector Kct measures major improvements in the quantity, quality or productivity of land or prevention of its deterioration. Activities such as land clearance, land contouring, and the creation of wells and watering holes are integral to land improvement. The concept of land improvement includes 1) field improvements undertaken by farmers (e.g., making boundaries, irrigation channels) and 2) other activities undertaken by government and other local bodies, such as irrigation works, soil-conservation works, and flood-control structures.
The machinery and equipment investment variable in vector Kct measures the value of tractors, harvesters and threshers, milking machines and hand tools in a country.\n\nSee the section ‘Creating country-level data for crop yield model and decision tree analysis’ for more information on how we constructed the variables in the vector Zct.\n\nIn the estimate of model (1) using the “long” dataset (Dataset 2)9 Fct is not included and time ṯ equals 1975 and time t̄ equals 2007. In the estimate of model (1) using the “wide” dataset (Dataset 1)8 Fct is included and time ṯ equals 1975 and time t̄ equals 2002. We estimate the long and wide versions of model (1) with all countries, tropical countries only, and temperate countries only. A country’s regional affiliation is defined by the latitude of the country’s capital and the Tropics of Cancer and Capricorn. Model (1) was estimated with the reg command in Stata 12.1. See Supplementary Table 1 and Supplementary Table 2 for estimates of model (1), including estimated standard errors and p-values. Stata code and related databases can be found in Supplementary materials under Stata Files.\n\nWe built expected yield curves for country c, Ŷct for years ṯ through t̄, by running the country’s input data from years ṯ to t̄ through an estimate of model (1),\n\nIn Figure 1, we present the global Ŷrt for years 1975 through 2007 (the long dataset) where yield is measured in Mg ha-1 (black solid curve in Figure 1A) and M kcals ha-1 (black solid curve in Figure 1B).\n\nWe built counterfactual yield curves for country c, Ỹct for years ṯ through t̄, by running the country’s input data from years ṯ to t̄ through an estimate of model (1), holding one or more of c’s inputs fixed at 1975 levels (the exception is a growing season weather counterfactual; in those cases, we fix the appropriate input at the 1975–1977 annual average).
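A stylized, end-to-end version of this counterfactual machinery can be sketched in Python. All coefficients, the covariance matrix, and the input series below are invented for illustration and stand in for an estimate of model (1); the percentile bands mirror the “low”/“high” construction described later in this section.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented linear yield model standing in for an estimate of model (1):
# yield = b0 + b1*(year - 1975) + b2*fertilizer. Not the paper's estimates.
beta_hat = np.array([1.0, 0.02, 3.0])

def predict(b, years, fert):
    """Run a country's inputs through a (toy) estimated yield model."""
    t = years - years[0]
    X = np.column_stack([np.ones_like(t, dtype=float), t, fert])
    return X @ b

years = np.arange(1975, 1980)
fert = np.array([0.10, 0.12, 0.15, 0.18, 0.20])  # input rising over time

# Expected curve vs. a counterfactual with fertilizer frozen at 1975 levels.
y_hat = predict(beta_hat, years, fert)
y_cf = predict(beta_hat, years, np.full_like(fert, fert[0]))

# The "gap" attributed to fertilizer: the area between the two curves.
gap = np.sum(y_hat - y_cf)

# Uncertainty bands: redraw coefficient vectors from the estimated sampling
# distribution (illustrative covariance) and take 25th/75th percentile curves.
cov = np.diag([0.01, 1e-5, 0.04])
draws = rng.multivariate_normal(beta_hat, cov, size=1000)
curves = np.array([predict(b, years, fert) for b in draws])
lo_band = np.percentile(curves, 25, axis=0)
hi_band = np.percentile(curves, 75, axis=0)
```

In the paper's setup the country-level expected and counterfactual curves are additionally averaged across countries, weighted by each country's cropped hectarage, before the gaps are integrated.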
Each country has 84 counterfactual yield curves for the years ṯ through t̄, one for each unique combination of yield measure {Mg ha-1, M kcals ha-1}, scale {globe, appropriate region}, and 10 counterfactuals with the long dataset and 11 counterfactuals with the wide dataset. Using these country-level counterfactual yield curves, we calculated 42 counterfactual global-yield curves, one for each unique combination of yield measure {Mg ha-1, M kcals ha-1} and 10 counterfactuals with the long dataset and 11 counterfactuals with the wide dataset, and 84 counterfactual regional yield curves, one for each unique combination of yield measure {Mg ha-1, M kcals ha-1}, scale {temperate, tropics}, and 10 counterfactuals with the long dataset and 11 counterfactuals with the wide dataset. To construct a global or regional counterfactual yield curve, Ỹrt for years ṯ through t̄, we averaged Ỹct for each year t across all c in r, weighted by each country’s cropped hectarage in year t,\n\nIn the mean columns of Table 1 and Table 2 we present the counterfactual integrals,\n\nThe counterfactual analyses were conducted with MATLAB R2013a. MATLAB code and related databases can be found in Supplementary materials under MATLAB Code for Table 1 and Table 2.\n\nWe generated the “low” and “high” results for each q, m, r, and d counterfactual combination in the following manner (Table 1 and Table 2). First, we created 1000 unique vectors of model (1) coefficients by randomly drawing from the multivariate normal distribution with a mean of [β^0,β^1,β^2,β^3,β^4,β^5,β^6,β^7,β^8] (the estimated vector of beta coefficients) and a covariance matrix of,\n\nSecond, using the 1000 randomly generated β coefficient vectors, we generated 1000 values of Ŷctmd for all c and t for each unique m and d combination and 1000 values of Ỹqctmd for all c and t for each unique q, m, and d combination.
Third, we generated expected 25th and 75th percentile yield curves for each country and each unique m and d combination by selecting the 25th percentile and 75th percentile values of Ŷctmd at each t. Fourth, we generated counterfactual 25th and 75th percentile yield curves for each country and each unique q, m, and d combination by selecting the 25th percentile and 75th percentile values of Ỹqctmd at each t. Fifth, we calculated a region or the globe’s expected percentile yield in year t with,\n\nWe constructed decision trees using the RWeka package in R (RWeka 0.4-24 and RWekajars 3.7.12-1) and J48 classifiers in particular. These are a reimplementation of Quinlan’s C4.5 algorithm28. We evaluated trees for prediction accuracy using a 10-fold cross-validation strategy. Decision trees are given in Supplementary Figure 1–Supplementary Figure 12, and the results are summarized in Table 5. In the analysis reported here, “leaf nodes” (the resulting subsets of the data after the branching of the tree on decision variables) were required to contain at least 50 observations, using the M option to control the minimum number of instances per leaf. This approach was used to yield trees with higher human interpretability as well as higher prediction accuracy. While 50 is somewhat arbitrary, we explored other values and empirically found it to lead to high prediction accuracy and greater interpretability in the resulting trees. (Interestingly, this approach also worked better for this data than using the C option to control the “confidence” in the pruned trees.)\n\nTo create country-level summary statistics of the quality of cropped soil (Sct) and growing season weather over cropland (contained in vector Zct) in each country in each harvest year t we used annual global grid cell maps of cropped land34 along with gridded global maps of soil quality25, monthly weather8, and growing season months9.
(Ramankutty and Foley stopped updating annual global grid-cell maps of cropped land after releasing the 2007 data. Thus, our dataset ends with 2007 data.) By combining the gridded maps on soil, weather, and growing season months with gridded cropland maps we were able to create summary statistics that preserved the observed spatial heterogeneity in agronomic conditions across a country in any given year. For example, consider the landscape in Figure 4. Suppose the square landscape represents a country. Assume the large number in each grid cell in Figure 4A represents the number of cropland hectares in that cell in harvest year t (the small number in the corner of a cell is its ID number). In Figure 4B each cell’s nutrient availability score is given, where a 1 indicates ‘No or slight nutrient constraint’, 2 indicates ‘moderate nutrient constraint’, 3 indicates ‘severe nutrient constraint’, 4 indicates ‘very severe nutrient constraint’, and 5 indicates ‘mainly non-soil’ (in other words, lower scores mean better soil quality; see 25). Nutrient availability (Nct) is decisive for successful low-level-input farming and, in some cases, intermediate-input-level farming. A country’s composite nutrient availability score on cropland in harvest year t is the weighted average of the nutrient availability scores across all cropland area in the country in harvest year t or,\n\nHarvested hectares in each grid cell in an illustrative country (A) where the small numbers in the corner of a grid cell indicate cell ID. Nutrient availability score (Nct) in each grid cell (B) where 1 indicates ‘No or slight nutrient constraint’, 2 indicates ‘moderate nutrient constraint’, 3 indicates ‘severe nutrient constraint’, 4 indicates ‘very severe nutrient constraint’, and 5 indicates ‘mainly non-soil’25.\n\nWe use the same method to calculate a country’s nutrient retention score, given by Uct.
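The hectare-weighted aggregation of grid-cell scores to a country-level composite can be computed as in this short sketch. The 2×2 landscape and its values are hypothetical, not the numbers shown in Figure 4.

```python
import numpy as np

# Hypothetical 2x2 landscape: cropped hectares per grid cell and each cell's
# nutrient-availability score (1 = no/slight constraint ... 5 = mainly non-soil).
hectares = np.array([[100.0, 50.0],
                     [0.0, 25.0]])
scores = np.array([[1, 3],
                   [5, 2]])

# Country-level composite score: hectare-weighted average over cropped cells.
# Cells with no cropland (0 ha) drop out of the calculation automatically.
N_ct = np.average(scores, weights=hectares)
```

Because the weights follow the cropland footprint in each harvest year, the composite score shifts over time as cropland moves onto better or worse soils, which is exactly the soil-quality signal Sct is meant to capture.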
Nutrient retention capacity is of particular importance for the effectiveness of fertilizer applications and is therefore of special relevance for intermediate and high input level cropping conditions. The explanatory soil statistic used in the model, Sct, is the average of Nct and Uct.\n\nThe weather vector Z includes weather statistics that summarize the weather conditions over a country’s cropland during the growing season. We summarized each weather variable at the country level in year t with a procedure very similar to that used to find the country-level cropland soil statistic S. Let DGSTjmt and NGSTjmt indicate the average daytime high and nighttime low temperature in grid cell j in month m of harvest year t (measured in degrees Celsius)8. Let DGSTjt and NGSTjt indicate the average of DGSTjmt and NGSTjmt, respectively, across grid cell j’s growing season months of harvest year t where we use a grid cell’s growing season months for maize to define growing season. Let Pjt be the total precipitation in grid cell j during the cell’s growing season in harvest year t (measured in millimeters). If a crop was harvested in the spring of year t then some of the weather that contributes to DGSTjt, NGSTjt, and Pjt occurred in the final months of year t – 1. Let DGSTct, NGSTct, and Pct measure the average monthly daytime high, monthly nighttime low, and growing season precipitation, respectively, over c’s cropland during the course of growing season t where weather data is weighted by cropland density in grid cell j.\n\n\n\nMATLAB code was used to construct Sct, DGSTct, NGSTct, and Pct. The code and related databases can be found in Supplementary materials under MATLAB Code for creating country-level variables.\n\nMaps of 1975 – 1977 to 2005 – 2007 country-level changes in various model (1) inputs are given in Supplementary Figure 13–Supplementary Figure 21. 
These figures can be found in Supplementary material under the zip file Supplementary Figures.\n\n\nData availability\n\nDataset 1. “Wide” dataset. doi: 10.5256/f1000research.10419.d1463388\n\n1. ID: UNFAO Country Code\n\n2. Year\n\n3. Tropical: a 1 indicates that the country is a tropical country and a 0 indicates that the country is a temperate country\n\n4. tons/ha: a country's crop yield in year t in metric tons/ha (we summed all tons of crops produced in a country and divided by total cropped hectares in a country)\n\n5. million kcals/ha: a country's crop yield in year t in millions of kcals/ha (we summed all kcals of crops produced in a country and divided by total cropped hectares in a country)\n\n6. soilscore: The composite soil quality score of the land that was cropped in year t in country k (on a 1 to 5 scale with lower numbers indicating better soil).\n\n7. ha: total cropped hectares in year t in country k\n\n8. rice: percentage of cropped area in rice in year t in country k\n\n9. wheat: percentage of cropped area in wheat in year t in country k\n\n10. sugar: percentage of cropped area in sugarcane in year t in country k\n\n11. grains: percentage of cropped area in coarse grains in year t in country k\n\n12. oil: percentage of cropped area in oil crops in year t in country k\n\n13. fruits: percentage of cropped area in fruits in year t in country k\n\n14. roots: percentage of cropped area in roots and tubers in year t in country k\n\n15. other: percentage of cropped area in all other crops in year t in country k\n\n16. davg: The composite average daytime temperature over cropped lands during the growing season year t in country k (Celsius)\n\n17. navg: The composite average nighttime temperature over cropped lands during the growing season year t in country k (Celsius)\n\n18. pavg: The total rainfall over cropped lands during the growing season year t in country k (mm)\n\n19.
irr: Fraction of cropped lands that are equipped for irrigation in year t in country k\n\n20. land: total money invested in agricultural land development divided by cropped hectares in year t in country k (2005 constant US $/ha)\n\n21. eqp: total money invested in agricultural equipment divided by cropped hectares in year t in country k (2005 constant US $/ha)\n\n22. fert: kilograms of fertilizer used in the country divided by cropped hectares in year t in country k.\n\nDataset 2. “Long” dataset. doi: 10.5256/f1000research.10419.d1463399\n\n1. ID: UNFAO Country Code\n\n2. Year\n\n3. Tropical: a 1 indicates that the country is a tropical country and a 0 indicates that the country is a temperate country\n\n4. tons/ha: a country's crop yield in year t in metric tons/ha (I summed all tons of crops produced in a country and divided by total cropped hectares in a country)\n\n5. million kcals/ha: a country's crop yield in year t in millions of kcals/ha (I summed all kcals of crops produced in a country and divided by total cropped hectares in a country)\n\n6. soilscore: The composite soil quality score of the land that was cropped in year t in country k (on a 1 to 5 scale with lower numbers indicating better soil).\n\n7. ha: total cropped hectares in year t in country k\n\n8. rice: percentage of cropped area in rice in year t in country k\n\n9. wheat: percentage of cropped area in wheat in year t in country k\n\n10. sugar: percentage of cropped area in sugarcane in year t in country k\n\n11. grains: percentage of cropped area in coarse grains in year t in country k\n\n12. oil: percentage of cropped area in oil crops in year t in country k\n\n13. fruits: percentage of cropped area in fruits in year t in country k\n\n14. roots: percentage of cropped area in roots and tubers in year t in country k\n\n15. other: percentage of cropped area in all other crops in year t in country k\n\n16. 
davg: The composite average daytime temperature over cropped lands during the growing season of year t in country k (Celsius)\n\n17. navg: The composite average nighttime temperature over cropped lands during the growing season of year t in country k (Celsius)\n\n18. pavg: The total rainfall over cropped lands during the growing season of year t in country k (mm)\n\n19. irr: Fraction of cropped lands that are equipped for irrigation in year t in country k\n\n20. land: total money invested in agricultural land development divided by cropped hectares in year t in country k (2005 constant US $/ha)\n\n21. eqp: total money invested in agricultural equipment divided by cropped hectares in year t in country k (2005 constant US $/ha)\n\nDataset 3. Accuracy of decision trees. doi: 10.5256/f1000research.10419.d14634010", "appendix": "Author contributions\n\n\n\nE.J.N. did everything other than construct the decision trees. C.B.C. constructed the decision trees. C.B.C. also wrote and edited portions of the text.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors wish to thank Jae Bradley, Clarissa Hunnewell, and Isabel Schwartz, undergraduates at Bowdoin College, for help with putting datasets together and analyzing data.\n\n\nSupplementary materials\n\nSupplementary Figures 1–21:\n\nClick here to access the data.\n\n(1) Decision tree for globe, yield measured in Mg ha-1, using the “long” dataset.\n\n(2) Decision tree for temperate region, yield measured in Mg ha-1, using the “long” dataset.\n\n(3) Decision tree for tropics, yield measured in Mg ha-1, using the “long” dataset.\n\n(4) Decision tree for globe, yield measured in M kcals ha-1, using the “long” dataset.\n\n(5) Decision tree for temperate region, yield measured in M kcals ha-1, using the “long” dataset.\n\n(6) Decision tree for tropics, yield measured in M kcals ha-1, 
using the “long” dataset.\n\n(7) Decision tree for globe, yield measured in Mg ha-1, using the “wide” dataset.\n\n(8) Decision tree for temperate region, yield measured in Mg ha-1, using the “wide” dataset.\n\n(9) Decision tree for tropics, yield measured in Mg ha-1, using the “wide” dataset.\n\n(10) Decision tree for globe, yield measured in M kcals ha-1, using the “wide” dataset.\n\n(11) Decision tree for temperate region, yield measured in M kcals ha-1, using the “wide” dataset.\n\n(12) Decision tree for tropics, yield measured in M kcals ha-1, using the “wide” dataset.\n\n(13) Percentage change in 1975–1977 to 2005–2007 growing season daytime temperature by country.\n\n(14) Percentage change in 1975–1977 to 2005–2007 growing season nighttime temperature by country.\n\n(15) Percentage change in 1975–1977 to 2005–2007 growing season precipitation by country.\n\n(16) Percentage change in 1975–1977 to 2005–2007 soil score by country.\n\n(17) Percentage change in 1975–1977 to 2005–2007 hectares of irrigation capacity per cropped hectare by country.\n\n(18) Percentage change in 1975–1977 to 2005–2007 equipment investment ($2005) per cropped hectare by country.\n\n(19) Percentage change in 1975–1977 to 2005–2007 land investment ($ 2005) per cropped hectare by country.\n\n(20) Percentage change in 1975–1977 to 2005–2007 all crop M kcals per hectare yield by country.\n\n(21) Percentage change in 1975–1977 to 2005–2007 all crop Mg per hectare yield by country.\n\nSupplementary Table 1: Econometric estimates of fixed effects model (1) with the “long” global, tropics, and temperate datasets. Estimated coefficients with standard errors in parentheses. Standard errors are robust standard errors. ‘***’ indicates statistical significance at p = 0.01, ‘**’ indicates statistical significance at p = 0.05, and ‘*’ indicates statistical significance at p = 0.10. 
Country fixed effect coefficients and SE are available upon request.\n\nClick here to access the data.\n\nSupplementary Table 2: Econometric estimates of fixed effects model (1) with the “wide” global, tropics, and temperate datasets.\n\nClick here to access the data.\n\nSupplementary Methods: Crop groups used to define crop mix.\n\nClick here to access the data.\n\nMATLAB Code for Tables 1 and 2.\n\nClick here to access the data.\n\nMATLAB Code for creating country-level variables.\n\nClick here to access the data.\n\nStata Files.\n\nClick here to access the data.\n\n\nReferences\n\nSchlenker W, Hanemann WM, Fisher AC: The impact of global warming on U.S. agriculture: an econometric analysis of optimal growing conditions. Rev Econ Stat. 2006; 88(1): 113–125. Publisher Full Text\n\nSchlenker W, Roberts MJ: Nonlinear temperature effects indicate severe damages to U.S. crop yields under climate change. Proc Natl Acad Sci U S A. 2009; 106(37): 15594–15598. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAshenfelter O, Storchmann K: Using hedonic models of solar radiation and weather to assess the economic effect of climate change: the case of Mosel valley vineyards. Rev Econ Stat. 2010; 92(2): 333–349. Publisher Full Text\n\nLobell DB, Schlenker W, Costa-Roberts J: Climate trends and global crop production since 1980. Science. 2011; 333(6042): 616–620. PubMed Abstract | Publisher Full Text\n\nTilman D, Balzer C, Hill J, et al.: Global food demand and the sustainable intensification of agriculture. Proc Natl Acad Sci U S A. 2011; 108(50): 20260–20264. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFoley JA, Ramankutty N, Brauman KA, et al.: Solutions for a cultivated planet. Nature. 2011; 478(7369): 337–342. PubMed Abstract | Publisher Full Text\n\nBeddow JM, Pardey PG: Moving matters: the effect of location on crop production. J Econ Hist. 2015; 75(1): 219–249. 
Publisher Full Text\n\nNelson E, Congdon CB: Dataset 1 in: Measuring the relative importance of different agricultural inputs to global and regional crop yield growth since 1975. F1000Research. 2016a. Data Source\n\nNelson E, Congdon CB: Dataset 2 in: Measuring the relative importance of different agricultural inputs to global and regional crop yield growth since 1975. F1000Research. 2016b. Data Source\n\nNelson E, Congdon CB: Dataset 3 in: Measuring the relative importance of different agricultural inputs to global and regional crop yield growth since 1975. F1000Research. 2016c. Data Source\n\nFAOSTAT (Food and Agriculture Organization of the United Nations): FAOStat database. 2011. Reference Source\n\nAlston JM, Pardey PG: Agriculture in the global economy. J Econ Perspect. 2014; 28(1): 121–146. Publisher Full Text\n\nAnderson K: Globalization's effects on world agricultural trade, 1960–2050. Philos Trans R Soc Lond B Biol Sci. 2010; 365(1554): 3007–3021. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVera-Diaz MD, Kaufmann RK, Nepstad DC, et al.: An interdisciplinary model of soybean yield in the Amazon Basin: the climatic, edaphic, and economic determinants. Ecol Econ. 2008; 65(2): 420–431. Publisher Full Text\n\nLobell DB, Cassman KG, Field CB: Crop yield gaps: their importance, magnitudes, and causes. Annu Rev Environ Resour. 2009; 34(1): 179–204. Publisher Full Text\n\nCostinot A, Donaldson D: Ricardo’s theory of comparative advantage: old idea, new evidence. Am Econ Rev. (National Bureau of Economic Research, No. w17969), 2012; 102(3): 453–58. Publisher Full Text\n\nPollack SL: Consumer demand for fruit and vegetables: the U.S. example. Changing Structure of Global Food Consumption and Trade. 2001; 6: 49–54. Reference Source\n\nPingali P: Westernization of Asian diets and the transformation of food systems: implications for research and policy. Food Policy. 2007; 32(3): 281–298. 
Publisher Full Text\n\nRegmi A, Gehlhar M: New Directions in Global Food Markets. AIB-794. USDA/ERS. 2005. Reference Source\n\nSchnepf RD, Dohlman E, Bolling C: Agriculture in Brazil and Argentina: Developments and Prospects for Major Field Crops. Market and Trade Economics Division, Economic Research Service, U.S. Department of Agriculture, Agriculture and Trade Report, WRS-01-03. 2001. Reference Source\n\nPeng S, Huang J, Sheehy JE, et al.: Rice yields decline with higher night temperature from global warming. Proc Natl Acad Sci U S A. 2004; 101(27): 9971–9975. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTao F, Yokozawa M, Xu Y, et al.: Climate changes and trends in phenology and yields of field crops in China, 1981–2000. Agr Forest Meteorol. 2006; 138(1–4): 82–92. Publisher Full Text\n\nThomison P: Can warm nights reduce grain yield in corn? C.O.R.N. Newsletter-Ohio State University. 2010; 22. Reference Source\n\nAnderegg WR, Ballantyne AP, Smith WK, et al.: Tropical nighttime warming as a dominant driver of variability in the terrestrial carbon sink. Proc Natl Acad Sci U S A. 2015; 112(51): 15591–15596. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFischer G, Nachtergaele F, Prieler S, et al.: Global Agro-ecological Zones Assessment for Agriculture (GAEZ 2008). IIASA, Laxenburg, Austria and FAO, Rome, Italy; 2008.\n\nWest PC, Gibbs HK, Monfreda C, et al.: Trading carbon for food: global comparison of carbon stocks vs. crop yields on agricultural land. Proc Natl Acad Sci U S A. 2010; 107(46): 19645–19648. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaurance WF, Sayer J, Cassman KG: Agricultural expansion and its impacts on tropical nature. Trends Ecol Evol. 2014; 29(2): 107–116. PubMed Abstract | Publisher Full Text\n\nQuinlan JR: C4.5: Programs for machine learning. Morgan Kaufmann Publishers, 1993. Reference Source\n\nLoh WY: Classification and regression trees. 
In: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 2011; 1(1): 14–23. Publisher Full Text\n\nVarian HR: Big data: New tricks for econometrics. J Econ Perspect. 2014; 28(2): 3–27. Publisher Full Text\n\nTai AP, Martin MV, Heald CL: Threat to future global food security from climate change and ozone air pollution. Nat Clim Chang. 2014; 4: 817–821. Publisher Full Text\n\nMauser W, Klepper G, Zabel F, et al.: Global biomass production potentials exceed expected future demand without the need for cropland expansion. Nat Commun. 2015; 6: 8946. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBalmford A, Green R, Phalan B: What conservationists need to know about farming. Proc Biol Sci. 2012; 279(1739): 2714–2724. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRamankutty N, Foley J: Estimating historical changes in global land cover: croplands from 1700 to 1992. Global Biogeochem Cy. 1999; 13: 997–1028. Publisher Full Text" }
[ { "id": "19369", "date": "08 Feb 2017", "name": "Timothy S Thomas", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nI found the article by Nelson and Congdon to be quite interesting in what they attempt to do, how they approach the problem, and in the results that they get. Essentially, aggregating FAO production figures for each country from 1975 to the mid-2000s, they run fixed effects regressions to determine the source of productivity growth in agriculture, accounting for weather. One of the most interesting innovations was to run separate analyses for temperate and tropical countries. They find that most growth in temperate countries is due to growth in agricultural technology, along with growth in inputs other than land, fertilizer, irrigation, and farm equipment (e.g., pesticides). However, for tropical countries, they actually find that the growth rate from agricultural technology and inputs excluding the ones mentioned is negative.\nCritique of One of Their Main Findings\nThe fact that there is a difference between temperate and tropical countries is quite interesting and entirely believable, but the fact that for tropical countries the growth from technology and other inputs is negative seems implausible. While publishing such a result would not be improper, it would seem important for the authors to suggest a stronger explanation as to why it might be feasible.\nEven from within the data, they could do more to understand the result. 
For example, if they differentiated between Asia, Africa, and Latin America, could they find whether some have negative growth rates while others have positive growth rates? I have a difficult time thinking of a scenario where this could be true for Asia with the Green Revolution, but could understand where this might be true for Africa, especially with the transition post-independence and the decline of many agriculture-supporting institutions.\nFurthermore, they could try to differentiate by time. Is there a different trend before and after the mid-1980s? They could either do this by dividing the dataset into two groups, or could use a quadratic term for the time variable. Answering the region and time questions could shed much light on the source of the negative technological growth rates in the tropics.\nStatistical Analysis Issues\nI endeavored to reproduce their results in Stata, but was unsuccessful. I tried both reg (with and without country dummies) and xtreg, with and without weights and using different variance matrix specifications. They did not say in their article whether they use weights in their regressions, but it would seem that weighting by harvested area (or some similar variable) would allow for better conclusions to be drawn for the aggregations used in their article.\nIt seemed improper to use total harvested area as an explanatory variable and then interpret the variable as the authors do. That is, large values of harvested area can be thought of as reflecting two components: a high percentage of national land in agriculture, a large country, or both. The authors, however, treat that variable as if it were cropland expansion, which it is not. They should possibly use the proportional change in harvested area from the previous year, or possibly the proportion of cropland in total land for the country.\nIn their regressions, the authors include daytime and nighttime temperatures, along with their squares. 
Unfortunately, the signs they get in their regressions are implausible considering where the inflection points are. For example, they find that yields rise rapidly in the tropics above 8 degrees C during the daytime. While they will rise above 8 degrees C, their regressions suggest that yields continue to rise even above 40 degrees C. The problem is that daytime and nighttime temperatures are very highly correlated, and that it is very difficult to estimate them both in the same regression. Regressions will clearly signal joint significance, but rarely will there be individual significance, and the signs on the parameters are often implausible. The authors should probably elect to focus on just daytime temperatures, so that they can contribute to evaluating the impact of climate change on agricultural productivity.\nOne additional issue related to data: I was unable to find in their article what the source of their climate data was, and it would be important to include that. The same is true for the soil score.\nInterpretation Issues\nThe authors endeavor to explain why countries changed their proportions of crops grown, without actually looking at the data to see if they did change. They know the global, temperate, and tropical aggregates changed, but these could have come about without a single country changing proportions, but rather by countries changing their harvested areas. That is, if the countries with the largest harvested areas in 1975 were different from the countries with large harvested areas in the mid-2000s, and if the large countries in 1975 had vastly different distributions than those in the mid-2000s, then the aggregate ratios would change without a single country changing its proportions. 
I'm not suggesting that the authors are wrong about the proportions changing within countries, but it would be helpful to give an example (perhaps India or China), or some kind of table that shows how grains or some other crop group has changed through time, by country.\nIt is important to point out that the regressions are not entirely proper the way the authors did them. All of the input variables are endogenous, and they did not attempt to use instrumental variables to control for the endogeneity, so the parameters are biased.  This may be acceptable for analyzing historical data for the sake of determining the influence of various variables on yields – much like a hedonic regression – but it is not proper for making policy recommendations for the future. So when the authors make conclusions from estimates based on the endogenous variables (e.g., they suggested that the tropics should reduce grain production and increase fruit and sugarcane production to maximize global calories and yields), one wonders whether they have gone too far in drawing implications from an improperly specified model. The proportions of each crop group in historical data were chosen by individual agents maximizing their utility, taking into consideration market prices along with knowledge of the climate and soils. Implementing policy to change these proportions in order to maximize yields and calories is likely to backfire.\nRecommendation\nThe article clearly makes an important contribution to understanding sources of yield growth globally. 
There are some relatively minor issues, addressed in the preceding sections, which can be dealt with, making the article much better for publication.", "responses": [] }, { "id": "20596", "date": "28 Feb 2017", "name": "Nathaniel D Mueller", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nNelson and Congdon provide an analysis of historical crop yield evolution across the globe. The analysis is conceptually straightforward and provides a useful, high-level perspective. Overall, I think the analysis is a useful contribution to the literature on global crop production trends.\n\nOne of the strengths of the analysis is the fact that the authors focus on production across many crops, which allows the authors to talk about macro-level trends. However, this lumping together of many crops makes their findings heavily dependent upon crop mix trends. This is partly alleviated by putting crops on a ‘common currency’ using kcals. However, I would like to see more discussion about crop weight, water content of harvested products, and calorie content of various crops and how these characteristics drive some of the results. I am thinking specifically of how sugarcane and roots and tubers fall out as very important in the decision-tree analysis.\n\nI find the current format of the paper somewhat disorienting. 
Although the Results section contains information about the analytical methods, data sources are not described in much detail, nor are they very well-described in the “Dataset” boxes later on Page 4. And why should the analytical methods sections be contained within the Results? The Materials and Methods section at the end provides extensive documentation of the equations and statistical approach, but still little information about the data. Nowhere did I see the source of climate data described, the growing season definitions, the soils dataset, kcal conversions, etc … these are essential details to evaluate the quality of the research. The authors will have to decide what re-organization makes sense in the context of F1000 formatting guidelines, but the current orientation and missing details makes the manuscript hard to read front-to-back.\n\nThe authors should be more explicit that they are mixing together variables that directly influence plant growth (e.g. weather, nutrient availability, and fertilizers) and those that proxy for unobserved factors that may influence plant growth (e.g. time and machinery investment).\n\nI take issue with the authors’ statement on Page 13: “Several suggested paths to greater food production are not supported by our analysis. Cropland extensification contributed little to yield gains in the immediate past and are not likely to do so in the future.” This statement about cropland extensification is true, but cropland extensification doesn’t need to boost yields to increase food production … more food production is simply achieved due to greater harvested area. Extensification has many negative environmental impacts, which could be discussed.\n\nThe “Low” and “High” columns in the Table 1 legend and elsewhere are poorly described. You only have one yield observation per country and year, so how are you using an interquartile range of yields? What information should the reader be getting here? 
It would be most interesting to present a confidence interval on the size of the area between the expected yield curve and the counterfactual’s yield curve, through utilizing your distribution of coefficient estimates from the bootstrap. Based on what I think is the relevant Methods text on Page 15, it seems like the authors have done something slightly different. It’s unclear what information we are supposed to be gleaning from their calculation, and there is no consistent directionality to the Low vs High estimates (nor do they usually bracket the mean value). I suggest the authors use their distribution of coefficient estimates to provide a straightforward-to-interpret confidence interval on the counterfactual area calculation itself. Calculate the counterfactual area for each combination of coefficients across all countries, then report percentiles of that counterfactual area distribution.\n\nI agree with the previous reviewer that attempting to use both daytime and nighttime temperatures is likely pushing the data too hard. The strange coefficient estimates certainly seem to imply that to be the case.\n\nPage 2, Introduction, paragraph 1: References 1–3 do not support the statement in the first sentence, as they do not actually analyze the impacts of historical climate trends on historical crop yields. Reference 4 does support the statement.\n\nPage 2, Introduction, paragraph 2: If you introduce the term “cropped footprint” here it needs to be differentiated from “land”.\n\nPage 3: It might be easier for readers to call the two versions of the analysis “with fertilizer” and “without fertilizer” instead of “wide” and “long”.\n\nPage 4: The time trend also captures the diffusion of modern crop varieties (see, for example, Evenson and Gollin 20031).\n\nPage 12: It seems appropriate to reiterate the very high productivity (and weight) of sugarcane, root and tuber crops here, given their strong predictive power in the decision tree analysis.", "responses": [] } ]
1
https://f1000research.com/articles/5-2930
https://f1000research.com/articles/5-2929/v1
29 Dec 16
{ "type": "Research Article", "title": "Expression of lectin-like transcript-1 in human tissues", "authors": [ "Alba Llibre", "Lucy Garner", "Amy Partridge", "Gordon J. Freeman", "Paul Klenerman", "Chris B. Willberg" ], "abstract": "Background: Receptor-ligand pairs of C-type lectin-like proteins have been shown to play an important role in cross talk between lymphocytes, as well as in immune responses within specific tissues and structures, such as the skin or the germinal centres. The CD161-Lectin-like Transcript 1 (LLT1) pair has gained particular attention in recent years, yet a detailed analysis of LLT1 distribution in human tissue is lacking. One reason for this is the limited availability and poor characterisation of anti-LLT1 antibodies. Methods: We assessed the staining capabilities of a novel anti-LLT1 antibody clone (2H7), both by immunohistochemistry and flow cytometry, showing its efficiency at LLT1 recognition in both settings. We then analysed LLT1 expression in a wide variety of human tissues. Results: We found LLT1 expression in circulating B cells and monocytes, but not in lung and liver-resident macrophages. We found strikingly high LLT1 expression in immune-privileged sites, such as the brain, placenta and testes, and confirmed the ability of LLT1 to inhibit NK cell function. Conclusions: Overall, this study contributes to the development of efficient tools for the study of LLT1. 
Moreover, its expression in different healthy human tissues and, particularly, in immune-privileged sites, establishes LLT1 as a good candidate regulator of immune responses.", "keywords": [ "Lectin-Like Transcript 1 (LLT1)", "C-type lectins", "immune-privilege", "human", "distribution", "natural killer cell" ], "content": "Background\n\nReceptor-ligand pairs of C-type lectin-like proteins have been shown to play an important role in cross-talk between lymphocytes and in immune responses within tissues. Three examples have been well characterised in humans. These are the NKp65-Keratinocyte-associated C-type lectin (KACL), the NKp80-Activation-induced C-type lectin (AICL) and the CD161-Lectin-Like Transcript 1 (LLT1), which are involved in skin immunobiology1, cross-talk between Natural Killer (NK) cells and monocytes2 and modulation of T, NK and B cell immune responses3–6, respectively. Amongst these, the CD161-LLT1 pair has been the focus of attention of several recent studies6–10. LLT1 has been described as a multi-functional protein11, and to fully elucidate the functional consequences of its interactions with its receptor, CD161, a comprehensive characterisation of LLT1 distribution is needed. The current published literature presents inconsistencies, which may partially be due to the activation state of the cells tested and the different anti-LLT1 antibodies used. Indeed, LLT1 has been shown to be upregulated upon different forms of activation6,12–15.\n\nTissues within the body display varying antigenic profiles, and the expression of specific molecules is involved in the maintenance of tissue function. Tissue grafts placed in particular anatomical structures can avoid rejection for long periods of time16. This observation led to the notion of immune-privilege, believed to be an evolutionary adaptation to protect essential organs from harmful inflammatory responses. 
At first, it was thought that antigens did not have access to immune-privileged sites, thus avoiding a response. However, more recent evidence suggested that the maintenance of immune-privilege relies on active rather than passive mechanisms17,18. Some examples include: a lack of lymphatic drainage, low expression of MHC class I molecules, local production of immunosuppressive cytokines, as well as enhanced expression of inhibitory surface molecules19. Immunologically privileged sites include the brain, the eyes, the placenta, the fetus and the testes. Although there has been abundant research regarding the mechanisms behind effective suppression of inflammatory responses in immune-privileged structures, further studies are required to fully elucidate and understand them20.\n\nThe main aim of this study was to broadly characterise the expression of LLT1 within the human body. We screened a wide variety of human cell types and tissues using our novel monoclonal antibody, clone 2H7, and described LLT1 expression in circulating B cells and monocytes. The presence of LLT1 could also be observed in B cells in tonsils, as previously described6,9,21, but not in Kupffer cells in the liver or alveolar macrophages in the lung. Furthermore, LLT1 could be detected in several healthy human tissues, but it was remarkably prevalent in immune-privileged sites, such as brain, placenta and testes. We also confirmed the previously described phenomenon that LLT1 inhibits NK cell function4,5,14,22.\n\nOverall, the current study contributes to the development of effective tools for the study of LLT1. We characterised the strong expression of this C-type lectin in B cells, monocytes and immune-privileged tissues; thus, postulating a role for LLT1 in cross talk between lymphocytes and immune tolerance.\n\n\nMaterials and methods\n\nThe 300.19 cell line is an Abelson leukemia virus transformed murine pre-B cell line derived from Swiss Webster mice (H-2d). 
They were maintained in RPMI 1640 (Sigma-Aldrich) supplemented with 10% fetal calf serum (FCS; PAA Laboratories), 1% streptomycin/penicillin (Sigma Aldrich), 1% L-glutamine (Sigma Aldrich), 15mg/ml gentamycin (Sigma Aldrich) and 50 × 10-6 M β-mercaptoethanol (Sigma Aldrich).\n\nThe cell lines 300.19-CD161 and 300.19-LLT1 are 300.19 cells transfected with a vector expressing human CD161/LLT1 cDNA and the puromycin resistance gene. These cell lines were maintained in the same media as 300.19 cells, with the addition of puromycin (5mg/ml; Gibco Life Technologies). All three 300.19 cell lines were kept at 37°C, 5% CO2 and split 1:10 three times a week.\n\nAll three 300.19 cell lines were kindly gifted by Gordon Freeman (Harvard Medical School, Boston, MA, USA).\n\nA series of normal paraffin-embedded human tissues comprising samples of tonsil, liver and lung were obtained from Proteogenix.\n\nTonsils were also obtained following routine tonsillectomy from the ENT Department at the John Radcliffe Hospital, Oxford. Ethical approval was obtained from the John Radcliffe Hospital, and written informed consent was obtained from all subjects.\n\nFormalin-fixed, paraffin-embedded healthy and tumour tissue arrays were obtained from AMS Biotechnology.\n\nPeripheral blood mononuclear cells (PBMCs), obtained from the National Blood Transfusion Service, were isolated on a Lymphoprep gradient (Axis Shield), aliquoted in FCS + 10% dimethyl sulfoxide (Sigma Aldrich) and stored in liquid nitrogen until required.\n\nB cells were isolated by negative magnetic selection using the EasySep™ Human B cell Enrichment Kit (STEMCELL technologies), following the manufacturer’s instructions.\n\nClone: 359.2H7; mIgG2a, kappa; concentration: 1.42 mg/ml. Validated by flow cytometric stain of human LLT1 transfected cells. The purified antibody is dialysed against phosphate buffered saline (PBS), is low in endotoxin (< 2EU/mg), and is sterile filtered. 
This antibody was generated in the laboratory of Gordon Freeman, Harvard Medical School (Boston, MA, USA). For immunohistochemical and flow cytometry staining, the 2H7 antibody was used at 1:500 and 1:50 dilution, respectively, in PBS.\n\nFor external staining, cells from cell lines or PBMCs in PBS were incubated with anti-surface antibodies at room temperature (RT) for 20 min. Live/dead staining was performed using the LIVE/DEAD® Fixable Near-IR Dead Cell Stain Kit (Invitrogen), at 633 or 635 nm excitation.\n\nFor internal staining, cells were fixed with 2% formaldehyde (Sigma-Aldrich) in PBS for 10 min and permeabilized with 1X permeabilization buffer (eBioscience) in water.\n\nThe following antibodies were used: CD3-FITC (BioLegend, Catalog No. 300406, clone UCHT1, Mouse IgG1, k), CD8-PerCP-Cy5.5 (BioLegend, Catalog No. 344710, clone SK1, Mouse IgG1, k), CD38-PerCP-Cy5.5 (BioLegend, Catalog No. 303522, clone HIT2, Mouse IgG1, k), CD56-APC (BioLegend, Catalog No. 318310, clone HCD56, Mouse IgG1, k); CD19-BV421 (BD Bioscience, Catalog No. 562441, clone HIB19, mouse IgG1, k); CD4-VioGreen (Miltenyi Biotec, Catalog No. 130-106-712, clone M-T466, Mouse IgG1, k), CD161-PE (Miltenyi Biotec, Catalog No. 130-092-677, clone 191B8, Mouse IgG2a), IgG2a isotype control (R&D Systems, Catalog No. MAB003, mouse); and 2H7 mAb. When non-conjugated primary antibodies were used, a secondary rat anti-mouse IgG2a-PE (R&D Systems, Catalog No. F0129, clone 344701, IgG1) was used.\n\nFACS analysis was performed on a Miltenyi Biotec MACSQuant cytometer and analyzed with FlowJo Version 9.6.2 software (TreeStar).\n\nTissue deparaffinisation was performed using Histo-Clear (National Diagnostics) and ethanol (Sigma-Aldrich; 100%, 90% and 70%). Heat-mediated antigen retrieval was achieved using Dako target retrieval solution (Dako). Endogenous peroxidase activity was blocked using 3% H2O2 (5 min × 2; Alfa Aesar) and 0.1% sodium azide (15 min; Sigma-Aldrich) in water. 
Non-specific binding was blocked by incubating the sample for 30 min at RT with 0.5% blocking reagent (PerkinElmer) in PBS. The 2H7 mAb or IgG2a isotype control (R&D Systems) (3 μg/ml) were added and incubated overnight at 4°C. The sample was then incubated with a horseradish peroxidase (HRP)-conjugated horse anti-mouse polymer (Vector Laboratories, Catalog No. MP-7402) for 30 min at RT. ImmPACT DAB peroxidase substrate (Vector Laboratories) was added and incubated for 2–10 min. The reaction was stopped with running deionised water. The section was covered with hematoxylin (Vector Laboratories) for 45 seconds and rinsed with deionised water. Samples were then dehydrated by serial passage through 70%, 90% and 100% ethanol followed by Histo-Clear. Samples were allowed to dry and were mounted with VectaMount mounting media (Vector Laboratories). For analysis of immunohistochemical staining, images were acquired on a DSS1 Coolscope Slide Scanner (Nikon).\n\nFor immunofluorescent staining, the following primary antibodies were used: anti-LLT1 (R&D Systems, Catalog No. AF3480, goat polyclonal) and anti-CD68 (DAKO, Catalog No. M0876, clone PG-M1, mouse IgG3, k). They were diluted in blocking buffer and incubated for 30 min at RT. Anti-goat HRP-conjugated polymer was added, followed by a 30 min incubation at RT. Cyanine 5 Amplification Reagent (PerkinElmer) was diluted 1/300 in Tyramide amplification buffer [12 ml 500 mM Tris, 18 ml H2O, 20 mg Imidazole (Sigma-Aldrich), O2], added and incubated in the dark for 15 min. Residual peroxidase activity was blocked by incubating the slides in 3% H2O2 for 5 min and then 0.1% sodium azide for 15 min in water. Anti-mouse HRP-conjugated antibody was added, followed by a 30 min incubation at RT. Fluorescein Amplification Reagent (PerkinElmer) was diluted 1/300 in fresh Tyramide amplification buffer, added and incubated in the dark for 15 min. Slides were mounted with ProLong® Gold Antifade Reagent with DAPI (Invitrogen). 
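The staining protocols above mix fold dilutions (2H7 at 1:500 for immunohistochemistry and 1:50 for flow cytometry; tyramide reagents at 1/300) with an absolute stock concentration (1.42 mg/ml). The arithmetic connecting the two is simple but worth making explicit; the helper names below are our own, not part of any protocol:

```python
# Convert a stock concentration plus a fold dilution into an absolute working
# concentration, and compute the pipetting volumes for a given final volume.
# Values are taken from the Methods: 2H7 stock at 1.42 mg/ml, used at 1:500
# (immunohistochemistry) and 1:50 (flow cytometry); tyramide reagents at 1/300.

def working_concentration_ug_ml(stock_mg_ml, fold):
    """Final antibody concentration (ug/ml) after a 1:fold dilution."""
    return stock_mg_ml * 1000.0 / fold

def dilution_volumes_ul(final_ul, fold):
    """(stock volume, diluent volume) in ul for a 1:fold dilution."""
    stock = final_ul / fold
    return stock, final_ul - stock

print(working_concentration_ug_ml(1.42, 500))  # 2.84 ug/ml for IHC
print(working_concentration_ug_ml(1.42, 50))   # 28.4 ug/ml for flow
print(dilution_volumes_ul(300.0, 300))         # 1 ul stock + 299 ul buffer
```

At 1:500, the IHC staining is therefore done at roughly 2.8 μg/ml, in the same range as the 3 μg/ml used for the 2H7 mAb and isotype control in the blocking step above.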
For immunofluorescent microscopy, images were acquired on an Olympus Fluoview FV1000 microscope (Olympus) and analyzed using Fiji (ImageJ v1.47h; National Institutes of Health, USA).\n\nIn total, 2 × 10^5 PBMCs per well were seeded in 100 μl R10 with IL-15 and IL-2 (1 ng/ml each; PeproTech) and incubated overnight at 37°C with 5% CO2. A total of 4 × 10^4 300.19 or 300.19-LLT1 cells were added, together with the CD107a-PE-Cy7 antibody (1/1000; BioLegend), and incubated for 1h at 37°C. Monensin (1/1000; BioLegend) was added and the cells were incubated for another 4–5h before being stained for FACS.\n\nGraphs and statistical analysis were performed using GraphPad Prism Version 6.0a (GraphPad Software) and Adobe Illustrator CS4 14.0.0.\n\n\nResults\n\nThere are still many inconsistencies in the published data regarding the distribution of LLT1 in human tissues and cell types. Some past studies reported LLT1 expression in resting PBMCs, whereas others could only detect it after activation13–15.\n\nWe assessed the presence of this C-type lectin in resting and activated PBMCs. We analysed different cell subsets (Figure 1A) and detected abundant expression of LLT1 in resting monocytes (60–80%) and B cells (15–30%) (Figure 1B). These results fit with the previously characterised expression of LLT1 in B-cell derived Raji cells5,13,23 and monocyte-derived THP-1s23. Interestingly, the receptor-ligand pair LLT1-CD161 was expressed on PBMCs in a mutually exclusive manner (Figure 1B). While monocytes and B cells expressed LLT1, their CD161 expression was null. Conversely, all other subsets tested expressed CD161 to some extent, although they did not express its ligand, LLT1.\n\nLectin-like transcript 1 (LLT1) (stained using the 2H7 mAb) and CD161 levels were measured by flow cytometry in monocytes (M) and lymphocytes (L). 
(A) The gating strategy, (B) representative and cumulative data for the expression of CD161 (blue) and LLT1 (green) compared to the isotype control (IC, red) (n=4). (C) Representative image of LLT1 staining in human tonsil tissue using the 2H7 anti-LLT1 antibody (10×; n=6), together with a representative FACS plot of LLT1 staining of purified tonsillar B cells with the 2H7 antibody. The GC population is highlighted (rectangle; n=8). (D) Immunofluorescent co-staining of LLT1 (red) and CD68 (green) in lung and liver (scale bar = 100μm). Representative of two independent experiments.\n\nWe next studied LLT1 presence in tissue-resident B cells and macrophages. We and others have shown the expression of LLT1 in tissue resident germinal centre B cells6,9. We demonstrated that the 2H7 mAb recognises LLT1 on germinal centre B cells, both immunohistologically and by flow cytometry (Figure 1C)6. Thus, the 2H7 mAb is a good tool for studying the distribution of LLT1 in tissue through immunohistochemical staining. Expression of this C-type lectin on tissue-resident macrophages had previously only been addressed in the joints of rheumatoid arthritis (RA) patients, which were positive for LLT110. We wanted to assess the expression of LLT1 in macrophages resident in other tissues. In order to do so, we performed immunofluorescent staining of lung and liver sections, using LLT1 and the macrophage marker CD68. CD68 expressing alveolar macrophages could be detected in the lung, as well as CD68 expressing Kupffer cells in the liver. However, both cell types were negative for LLT1 (Figure 1D; Dataset 134), suggesting that terminally differentiated macrophages do not express this C-type lectin. Nonetheless, LLT1+ cells could be detected in both tissues, suggesting that these LLT1+ cells may, for example, be epithelial cells, but further work is needed for their full characterisation.\n\nThe expression of LLT1 on activated PBMCs was assessed. 
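Throughout Figure 1 and Figure 2, expression is reported as the percentage of marker-positive cells relative to an isotype control. The real analysis was performed in FlowJo; purely as an illustration of that gating logic, a sketch on synthetic intensities (the 99th-percentile cutoff is a common convention, not a stated detail of this study, and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log-normal fluorescence intensities standing in for exported
# single-cell FACS data (invented values, not this study's measurements).
isotype = rng.lognormal(mean=1.0, sigma=0.5, size=5000)   # background staining
llt1 = np.concatenate([
    rng.lognormal(mean=1.0, sigma=0.5, size=1500),        # LLT1- cells
    rng.lognormal(mean=3.0, sigma=0.5, size=3500),        # LLT1+ cells
])

# Gate "positive" above the 99th percentile of the isotype control.
threshold = np.percentile(isotype, 99)
percent_positive = 100.0 * np.mean(llt1 > threshold)
print(f"{percent_positive:.1f}% positive")
```

The same threshold-against-isotype logic underlies both the percentage readouts and the gMFI comparisons quoted in the figure legends.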
Stimulation with PMA/ionomycin for 24 and 48h had no significant effect on CD4+ T cells (Figure 2A). Minimal levels of LLT1 were observed on CD8+ T cells after stimulation; this did not reach significance, and the levels were so low as to be of questionable biological relevance (Figure 2C). Similar results were seen using PHA (Figure 2B and D).\n\nPBMCs were stimulated for either 24 or 48h with PMA/ionomycin or PHA. Lectin-like transcript 1 (LLT1) expression on different cell subsets was measured by FACS, and presented as the percentage of LLT1+ cells (left-hand graphs) or the gMFI of LLT1 expression (middle graphs) within the given populations. Representative histograms showing isotype control (after 48h; light grey), or LLT1 expression at 24h (dark grey) and 48h (black) are shown in the right-hand plots. A two-way ANOVA with Bonferroni's multiple-comparisons test was applied (**<0.01, ***<0.001, ****<0.0001). Data from two pooled experiments.\n\nInterestingly, upon stimulation with PMA/ionomycin, the percentage and levels of LLT1 on B cells initially dropped after 24h, but increased after 48h (Figure 2E); a similar trend was observed with PHA (Figure 2F).\n\nMonocytes also lowered LLT1 expression upon activation after both 24 and 48h stimulation with PMA/ionomycin (Figure 2G), but not after PHA stimulation (Figure 2H).\n\nThere have been limited attempts to characterise the distribution of LLT1 within human tissues. In this study, we screened a wide variety of healthy human tissues using the 2H7 antibody clone. A representative stain of each tissue tested is shown in Figure 3. LLT1 could be detected in a wide variety of tissues, such as the gallbladder and the digestive tract (glandular cells), as well as in the kidneys (cells in tubules) or the lung (pneumocytes). We also compared the expression pattern of LLT1 in healthy and tumour human tissues (Supplementary Figure 1). 
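The Figure 2 comparisons above were Bonferroni-corrected. The adjustment itself is simply multiplication of each raw p-value by the number of comparisons, capped at 1; a minimal sketch on invented p-values (not values from this study):

```python
# Bonferroni multiple-comparison adjustment: multiply each raw p-value by the
# number of comparisons made, capping the result at 1. The raw p-values below
# are invented for illustration only.

def bonferroni(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

raw = [0.004, 0.03, 0.20]      # e.g. 24h-vs-48h contrasts in three subsets
adjusted = bonferroni(raw)
print([round(p, 4) for p in adjusted])  # [0.012, 0.09, 0.6]
```

The correction is deliberately conservative: a contrast that is nominally significant at p = 0.03 survives three comparisons but not twenty.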
Although LLT1 upregulation has been shown in glioblastoma and prostate cancer22,24, our results did not support this being a common trend in all cancerous tissues. Most likely, changes in LLT1 expression upon malignant transformation are tissue-dependent.\n\nStaining of human healthy tissue with the 2H7 anti-LLT1 antibody at 1/500. Representative images from three independent experiments. 5×, 10× and 20× magnification.\n\nAlthough LLT1 could be detected in different human tissues (Figure 3), its expression was strikingly high in immune-privileged sites (Figure 4). Cells in the seminiferous ducts within the testes, trophoblastic cells in the placenta and neurons strongly expressed LLT1. Purkinje cells, large neurons that reside in the cerebellum and release the neurotransmitter gamma-aminobutyric acid (GABA), were also found to be positive for LLT1. A key feature of immune privilege is low expression of MHC class I molecules, which protects certain tissues from excessive and damaging inflammatory T cell responses19. However, downregulation of MHC class I molecules results in increased susceptibility to NK cell killing. Therefore, we next tested the effect of LLT1 on NK cell cytotoxic effector functions.\n\nLectin-like transcript 1 (LLT1) and isotype control staining of testes (A), brain (B) and placenta (C) using the anti-LLT1 2H7 mAb (1/500). Representative image of three independent experiments. 20× magnification.\n\nA role for LLT1 in suppression of NK cell function has been described4,5,14,22. Here, we confirmed that the presence of LLT1 reduces NK cell degranulation. NK cell surface expression of CD107a was reduced when NK cells were cultured with target cells, the 300.19 cell line transfected with LLT1, as compared to controls (Figure 5A and B). Figure 5C shows expression levels of LLT1 on target cells, confirming very high levels of this C-type lectin in the transfected 300.19-LLT1 cells, as expected. 
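In the degranulation assay just described, 2 × 10^5 PBMCs were seeded with 4 × 10^4 target cells, so the nominal responder-to-target ratio follows directly. Because NK cells are only a fraction of PBMCs, the effective NK:target ratio is lower; the NK frequencies in the sketch below are typical textbook values, not measurements from this study:

```python
# Nominal PBMC:target ratio in the CD107a degranulation assay
# (cell numbers from the Methods: 2 x 10^5 PBMCs, 4 x 10^4 targets).
pbmcs = 2e5
targets = 4e4
print(f"PBMC:target = {pbmcs / targets:.0f}:1")          # 5:1

# NK cells typically make up ~5-15% of PBMCs (an assumed range, not measured
# here), so the effective NK effector:target ratio is correspondingly lower.
for nk_frac in (0.05, 0.10, 0.15):
    print(f"NK fraction {nk_frac:.0%}: effective E:T ~ {pbmcs * nk_frac / targets:.2f}:1")
```

In other words, each NK cell likely faced one or more targets, a regime in which inhibitory signalling through CD161 would be expected to register in the CD107a readout.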
In summary, our data suggest a plausible role for LLT1 in immune-regulation and, particularly, in negative modulation of NK cell responses in immune-privileged sites.\n\nThe percentage of CD107a-expressing NK cells (gated on live, CD3- CD56+ cells) (A), as well as CD107a geoMFI values (B), with 300.19-lectin-like transcript 1 (LLT1) cells as targets compared to the untransfected 300.19 cells (*<0.05, non-parametric paired t-test; CD107a geoMFI: p value=0.0239; percentage of CD107a-expressing cells: p value=0.0239). Data pooled together from two independent experiments (n=6). (C) Expression of LLT1 on untransfected 300.19 and 300.19-LLT1 cell lines. Representative histogram of three independent FACS staining experiments with the 2H7 anti-LLT1 antibody.\n\n\nDiscussion\n\nIn humans, there are three well-characterised NKC-encoded receptor-ligand pairs: CD161-LLT1, NKp65-KACL and NKp80-AICL. The expression of CD161 has been widely studied. It has been described on the vast majority of NK cells25, on different innate-like T cell subsets, such as NKT cells3, mucosal-associated invariant T (MAIT) cells26, γδ T cells27,28, and in other T cell subgroups, both in the CD4+ and CD8+ compartments. CD161 defines cell populations with shared transcriptional and functional features across different human T cell lineages8. In contrast, the expression and localisation of LLT1 have been much less studied.\n\nLLT1 was first described on NK, T and B cells29, although some subsequent studies showed different results12,13. We showed LLT1 expression on circulating B cells and monocytes, confirming the results obtained in previous research12. 
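The Figure 5 comparison above used a non-parametric paired test in GraphPad Prism (with n = 6, typically a Wilcoxon signed-rank test). A closely related and easily computed alternative is the exact two-sided sign test, sketched here on invented donor values (not the study's data):

```python
from math import comb

def sign_test(pairs):
    """Exact two-sided sign test for paired observations (ties dropped)."""
    diffs = [a - b for a, b in pairs if a != b]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)                 # pairs with condition A > B
    tail = min(k, n - k)
    p_one_sided = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p_one_sided)

# Invented CD107a+ percentages for six donors against the two target lines:
vs_300_19 = [18.0, 25.5, 30.2, 22.1, 27.8, 19.4]   # untransfected targets
vs_llt1   = [12.3, 20.1, 24.0, 16.5, 21.2, 15.0]   # 300.19-LLT1 targets

p = sign_test(list(zip(vs_300_19, vs_llt1)))
print(f"two-sided sign test p = {p}")   # 0.03125 when all six pairs agree
```

With only six pairs, the smallest attainable two-sided p for this test is 2/2^6 = 0.03125, which illustrates why small-n paired assays need essentially every donor to move in the same direction to reach significance.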
It is important to note that the current literature still presents inconsistencies regarding LLT1 distribution in PBMCs, which could be due to the use of different antibodies as well as the differing activation states of the cells tested5,6,12,13.\n\nLLT1 expression has been shown in joint-resident macrophages of RA patients10; however, we could not detect LLT1 on macrophages from the liver or the lung. These results could be explained by the activation state of the macrophages, suggesting that LLT1 expression increases under inflammatory conditions. PMA/ionomycin and mitogen (PHA) stimulation of PBMCs demonstrated that a broad range of cell types could, to a limited degree, express some LLT1, depending on the duration of the stimulus. In particular, B cells showed a bi-phasic expression pattern.\n\nWe showed expression of LLT1 in different healthy human tissues (Figure 3) and, particularly, in immune-privileged sites (Figure 4). It is believed that immune privilege is the result of an evolutionary process that confers special immune tolerance on certain structures19. Organs such as the eye, the brain and the placenta have the exceptional capacity to prevent classical inflammatory responses that could be highly detrimental or even fatal. This singular immune status is linked to low expression levels of MHC class I molecules, which in turn leads to increased susceptibility to killing by NK cells. We and others have shown that the presence of LLT1 results in decreased NK cell function (Figure 5)4,5,14. Therefore, LLT1 could play a prominent role in keeping NK cells under control in immune-privileged sites, thus preventing damage to tissues expressing low levels of MHC class I. This hypothesis is consistent with the role described for the murine homolog of LLT1, mOCIL. 
The distribution of mOCIL differs substantially from that of its human homolog, as it is believed to be expressed almost ubiquitously, similarly to MHC class I molecules30,31.\n\nA high degree of homology has been described for the mouse and human forms of the CLEC2D protein32, suggesting that anti-CLEC2D antibodies could react with both the mouse (Clr-b) and human (LLT1) forms. The distribution of Clr-b has been widely studied, in contrast to that of its human counterpart. However, so far, there is no particular mention of the presence of Clr-b in mouse B cells, although it has been described in nearly all haematopoietic cells and in many mouse tissues, with some exceptions (e.g. the brain)30,31.\n\nWe found LLT1 and CD161 to be expressed in different subgroups of lymphocytes as well as monocytes (Figure 1). We also described expression of LLT1 in various healthy human tissues and particularly in immune-privileged sites (Figure 3 and Figure 4). It is tempting to speculate that this pair of C-type lectins is involved in the cross-talk between distinct LLT1- and CD161-expressing immune cell types, such as B cells and T cells or monocytes and NK cells. The closely related C-type lectin pair NKp80-AICL follows this same pattern: the two are expressed on NK cells and monocytes, respectively, playing a role in reciprocal cell activation2. The other well-described human C-type lectin pair is NKp65-KACL, which is expressed mainly on NK cells and keratinocytes, respectively33. This particular pair illustrates a very different case, as it is involved in the immune surveillance of a specific tissue (i.e. the skin). This framework could also apply to LLT1 and CD161, both in terms of lymphocyte/monocyte interaction and interaction between NK cells and immune-privileged sites, although these hypotheses require further investigation.\n\nOverall, we have contributed to the development and optimisation of tools necessary for the study of LLT1. 
Its striking expression in immune-privileged sites, as well as its presence in different immune cell types, establishes LLT1 as an excellent candidate immune-regulatory molecule. A detailed understanding of LLT1 distribution, regulation and function will provide great insight into how immune privilege works, and will help us to understand tissue-specific immune responses during inflammation.\n\n\nData availability\n\nDataset 1: All staining and flow cytometry experiments undertaken by the present study. DOI: 10.5256/f1000research.10009.d14745934.
10× and 20× magnification.\n\n\nReferences\n\nSpreu J, Kuttruff S, Stejfova V, et al.: Interaction of C-type lectin-like receptors NKp65 and KACL facilitates dedicated immune recognition of human keratinocytes. Proc Natl Acad Sci U S A. 2010; 107(11): 5100–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWelte S, Kuttruff S, Waldhauer I, et al.: Mutual activation of natural killer cells and monocytes mediated by NKp80-AICL interaction. Nat Immunol. 2006; 7(12): 1334–42. PubMed Abstract | Publisher Full Text\n\nExley M, Porcelli S, Furman M, et al.: CD161 (NKR-P1A) costimulation of CD1d-dependent activation of human T cells expressing invariant Vα24JαQ T cell receptor α chains. J Exp Med. 1998; 188(5): 867–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAldemir H, Prod’homme V, Dumaurier MJ, et al.: Cutting edge: lectin-like transcript 1 is a ligand for the CD161 receptor. J Immunol. 2005; 175(12): 7791–5. PubMed Abstract | Publisher Full Text\n\nRosen DB, Cao W, Avery DT, et al.: Functional consequences of interactions between human NKR-P1A and its ligand LLT1 expressed on activated dendritic cells and B cells. J Immunol. 2008; 180(10): 6508–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLlibre A, López-Macías C, Marafioti T, et al.: LLT1 and CD161 Expression in Human Germinal Centers Promotes B Cell Activation and CXCR4 Downregulation. J Immunol. 2016; 196(5): 2085–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUssher JE, Bilton M, Attwod E, et al.: CD161++ CD8+ T cells, including the MAIT cell subset, are specifically activated by IL-12+IL-18 in a TCR-independent manner. Eur J Immunol. 2014; 44(1): 195–203. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFergusson JR, Smith KE, Fleming VM, et al.: CD161 defines a transcriptional and functional phenotype across distinct human T cell lineages. Cell Rep. 2014; 9(3): 1075–88. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGermain C, Guillaudeux T, Galsgaard ED, et al.: Lectin-like transcript 1 is a marker of germinal center-derived B-cell non-Hodgkin’s lymphomas dampening natural killer cell functions. OncoImmunology. 2015; 4(8): e1026503. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChalan P, Bijzet J, Huitema MG, et al.: Expression of Lectin-Like Transcript 1, the Ligand for CD161, in Rheumatoid Arthritis. PLoS One. 2015; 10(7): e0132436. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLlibre A, Klenerman P, Willberg CB: Multi-functional lectin-like transcript-1: A new player in human immune regulation. Immunol Lett. 2016; 177: 62–9. PubMed Abstract | Publisher Full Text\n\nMathew PA, Chuang SS, Vaidya SV, et al.: The LLT1 receptor induces IFN-gamma production by human natural killer cells. Mol Immunol. 2004; 40(16): 1157–63. PubMed Abstract | Publisher Full Text\n\nGermain C, Meier A, Jensen T, et al.: Induction of lectin-like transcript 1 (LLT1) protein cell surface expression by pathogens and interferon-γ contributes to modulate immune responses. J Biol Chem. 2011; 286(44): 37964–75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRosen DB, Bettadapura J, Alsharifi M, et al.: Cutting edge: lectin-like transcript-1 is a ligand for the inhibitory human NKR-P1A receptor. J Immunol. 2005; 175(12): 7796–9. PubMed Abstract | Publisher Full Text\n\nEichler W, Ruschpler P, Wobus M, et al.: Differentially induced expression of C-type lectins in activated lymphocytes. J Cell Biochem Suppl. 2001; 81(Suppl 36): 201–8. PubMed Abstract | Publisher Full Text\n\nBarker CF, Billingham RE: Immunologically Privileged Sites. Adv Immunol. 1978; 25: 1–54. PubMed Abstract | Publisher Full Text\n\nStreilein JW, Takeuchi M, Taylor AW: Immune privilege, T-cell tolerance, and tissue-restricted autoimmunity. Hum Immunol. 1997; 52(2): 138–43. 
PubMed Abstract | Publisher Full Text\n\nFerguson TA, Griffith TS: A vision of cell death: insights into immune privilege. Immunol Rev. 1997; 156(1): 167–84. PubMed Abstract | Publisher Full Text\n\nHong S, Van Kaer L: Immune privilege: keeping an eye on natural killer T cells. J Exp Med. 1999; 190(9): 1197–200. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForrester JV, Xu H, Lambe T, et al.: Immune privilege or privileged immunity? Mucosal Immunol. 2008; 1(5): 372–81. PubMed Abstract | Publisher Full Text\n\nGermain C, Bihl F, Zahn S, et al.: Characterization of alternatively spliced transcript variants of CLEC2D gene. J Biol Chem. 2010; 285(46): 36207–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoth P, Mittelbronn M, Wick W, et al.: Malignant glioma cells counteract antitumor immune responses through expression of lectin-like transcript-1. Cancer Res. 2007; 67(8): 3540–4. PubMed Abstract | Publisher Full Text\n\nGermain C, Bihl F, Zahn S, et al.: Characterization of alternatively spliced transcript variants of CLEC2D gene. J Biol Chem. 2010; 285(46): 36207–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMathew SO, Chaudhary P, Powers SB, et al.: Overexpression of LLT1 (OCIL, CLEC2D) on prostate cancer cells inhibits NK cell-mediated killing through LLT1-NKRP1A (CD161) interaction. Oncotarget. 2016. PubMed Abstract | Publisher Full Text\n\nLanier LL, Chang C, Phillips JH: Human NKR-P1A. A disulfide-linked homodimer of the C-type lectin superfamily expressed by a subset of NK and T lymphocytes. J Immunol. 1994; 153(6): 2417–28. PubMed Abstract\n\nMartin E, Treiner E, Duban L, et al.: Stepwise development of MAIT cells in mouse and human. PLoS Biol. 2009; 7(3): e54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaggi L, Santarlasci V, Capone M, et al.: CD161 is a marker of all human IL-17-producing T-cell subsets and is induced by RORC. Eur J Immunol. 2010; 40(8): 2174–81. 
PubMed Abstract | Publisher Full Text\n\nRajoriya N, Fergusson JR, Leithead JA, et al.: Gamma Delta T-lymphocytes in Hepatitis C and Chronic Liver Disease. Front Immunol. 2014; 5: 400. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoles KS, Barten R, Kumaresan PR, et al.: Cloning of a new lectin-like receptor expressed on human NK cells. Immunogenetics. 1999; 50(1–2): 1–7. PubMed Abstract | Publisher Full Text\n\nPlougastel B, Dubbelde C, Yokoyama WM: Cloning of Clr, a new family of lectin-like genes localized between mouse Nkrp1a and Cd69. Immunogenetics. 2001; 53(3): 209–14. PubMed Abstract | Publisher Full Text\n\nZhang Q, Rahim MM, Allan DS, et al.: Mouse Nkrp1-Clr gene cluster sequence and expression analyses reveal conservation of tissue-specific MHC-independent immunosurveillance. PLoS One. 2012; 7(12): e50561. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHu YS, Zhou H, Myers D, et al.: Isolation of a human homolog of osteoclast inhibitory lectin that inhibits the formation and function of osteoclasts. J Bone Miner Res. 2004; 19(1): 89–99. PubMed Abstract | Publisher Full Text\n\nSpreu J, Kienle EC, Schrage B, et al.: CLEC2A: a novel, alternatively spliced and skin-associated member of the NKC-encoded AICL-CD69-LLT1 family. Immunogenetics. 2007; 59(12): 903–12. PubMed Abstract | Publisher Full Text\n\nLlibre A, Garner L, Partridge A, et al.: Dataset 1 in: Expression of lectin-like transcript-1 in human tissues. F1000Research. 2016. Data Source" }
[ { "id": "18845", "date": "03 Jan 2017", "name": "Lewis Lanier", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a comprehensive analysis of the LLT1 (CLEC2D) protein on human cells and tissues using a newly generated monoclonal antibody, clone 2H7, using flow cytometry and immunohistochemistry. In general, the information will be of interest to the scientific community studying this ligand-receptor (CD161) pair. The following points should be addressed:\nWhen describing the amounts of antibody used to stain cells or tissues, the absolute concentration of the antibody should be stated (e.g. 1 ug/ml, etc.), rather than "1/500", which is meaningless unless the protein concentration of the antibody solution is provided. The Materials section mentions a solution of 1.42 mg/ml, so is this then diluted 1/500? It would be clearer to simply state the concentration used in the staining experiment rather than "1/500".\n\nThe authors show decreased degranulation when NK cells were co-cultured with a cell line transfected with LLT1. However, subclones of cell lines or transfectants can vary in their ability to induce NK cell degranulation that is independent of the transduced gene or cDNA. The authors should provide a control experiment in which the LLT1-induced inhibition is reversed in the presence of a blocking antibody against CD161 and LLT1. Does the 2H7 antibody block the CD161-LLT1 interaction? 
This is important information that should be included in the revised paper.\n\nThe authors should comment on how surface expression of LLT1 relates to expression of LLT1 transcripts in these cell populations. Do they correlate or not?\n\nPMA and ionomycin were used to stimulate the PBMC before analysis for LLT1 expression. It is well known that PMA can induce shedding of many cell surface proteins, for example CD4. Is LLT1 affected by PMA? This should be investigated and reported.\n\nIs the 2H7 antibody available commercially or will the authors deposit the hybridoma in the ATCC so it is available to others in the community? This is important so that other investigators can take advantage of this new information about LLT1 expression.", "responses": [] }, { "id": "20405", "date": "08 Mar 2017", "name": "Ondřej Vaněk", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe present study by A. Llibre et al. describes a novel monoclonal antibody, clone 2H7, that is able to detect the LLT1 protein in various healthy as well as tumour human tissues, as shown by immunohistochemistry, fluorescence microscopy and flow cytometry. LLT1 was shown to have a broad expression pattern, with the highest expression levels detected in circulating B cells and monocytes and, surprisingly, also in immune-privileged sites - brain, placenta, testes. This observation supports its role in inhibition of NK cell cytotoxicity mediated by its interaction with the inhibitory NK cell receptor NKR-P1. 
LLT1 inhibitory properties were confirmed in the present study using an NK cell degranulation assay.\nOverall, this study is of general interest, mainly to scientists directly studying this receptor:ligand interaction pair or other closely related pairs. Given that LLT1 and especially NKR-P1 are linked to multiple human immune pathologies and other diseases, detailed characterization of LLT1 expression adds a valuable piece of information to this field.\nHowever, as already pointed out by Prof. Lanier in his peer review, I would also argue that to maximise the impact of this work and its possible benefit to the scientific community, the availability of this novel antibody should be addressed, as well as whether it blocks the LLT1:NKR-P1 interaction or not. Also, as already mentioned in the preceding review, a blocking antibody (possibly 2H7) should have been included in the degranulation assay, as it is a standard control in such experiments. This would also directly demonstrate the blocking capabilities of the 2H7 clone.", "responses": [] }, { "id": "20406", "date": "20 Mar 2017", "name": "Fiona Culley", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors present data on the distribution of LLT1 in human tissues, obtained by flow cytometric analysis of PBMC and by immunohistochemical analysis of diverse tissues, using a novel monoclonal antibody 2H7. 
The current literature is inconsistent in this regard, so this paper will be a useful reference point for those studying the biology of LLT1.\nCan the authors provide more information on how the antibody and cell lines used to test it were originally generated and selected in the laboratory of Gordon Freeman, or provide a reference?\nIn the results section describing Figure 1, the authors state that 2H7 is a good tool for studying distribution in tissue, but the methods section states that for Fig 1D, where immunofluorescence is shown, the antibody AF3480 was used. Is this correct? Can the authors clarify why a different antibody was used here?\nPlease clarify what the 6 histograms in each part of Figure 2 represent. The columns are labelled 24h and 48h, but this does not appear to correspond to the figure legend.\nThe authors should add scale bars to the images, rather than stating the magnification of the objective lens.\nPlease correct IgG2A to IgG2a throughout.\nThe authors should state the reference number for the ethical permission obtained.", "responses": [] } ]
1
https://f1000research.com/articles/5-2929
https://f1000research.com/articles/5-1542/v1
29 Jun 16
{ "type": "Software Tool Article", "title": "TCGA Workflow: Analyze cancer genomics and epigenomics data using Bioconductor packages", "authors": [ "Tiago C. Silva", "Antonio Colaprico", "Catharina Olsen", "Fulvio D'Angelo", "Gianluca Bontempi", "Michele Ceccarelli", "Houtan Noushmehr" ], "abstract": "Biotechnological advances in sequencing have led to an explosion of publicly available data via large international consortia such as The Cancer Genome Atlas (TCGA), The Encyclopedia of DNA Elements (ENCODE), and The NIH Roadmap Epigenomics Mapping Consortium (Roadmap). These projects have provided unprecedented opportunities to interrogate the epigenome of cultured cancer cell lines as well as normal and tumor tissues with high genomic resolution. The Bioconductor project offers more than 1,000 open-source software and statistical packages to analyze high-throughput genomic data. However, most packages are designed for specific data types (e.g. expression, epigenetics, genomics) and there is no comprehensive tool that provides a complete integrative analysis harnessing the resources and data provided by all three public projects. A need to create an integration of these different analyses was recently proposed. In this workflow, we provide a series of biologically focused integrative downstream analyses of different molecular data. We describe how to download, process and prepare TCGA data and, by harnessing several key Bioconductor packages, we describe how to extract biologically meaningful genomic and epigenomic data; using Roadmap and ENCODE data, we provide a workplan to identify candidate biologically relevant functional epigenomic elements associated with cancer. To illustrate our workflow, we analyzed two types of brain tumors: low-grade glioma (LGG) versus high-grade glioma (glioblastoma multiforme, GBM). 
This workflow introduces the following Bioconductor packages: AnnotationHub, ChIPSeeker, ComplexHeatmap, pathview, ELMER, GAIA, MINET, RTCGAtoolbox, TCGAbiolinks.", "keywords": [ "Epigenomics", "Genomics", "Cancer", "non-coding", "TCGA", "ENCODE", "Roadmap", "Bioinformatics" ], "content": "Introduction\n\nCancer is a complex genetic disease spanning multiple molecular events such as point mutations, structural variations, translocations and the activation of epigenetic and transcriptional signatures and networks. The effects of these events take place at different spatial and temporal scales, with inter-layer communications and feedback mechanisms creating a highly complex dynamic system. To gain insight into the biology of tumors, most research in cancer genomics is aimed at integrating observations at multiple molecular scales and analyzing their interplay. Even though many tumors share similar recurrent genomic events, their relationships with the observed phenotype are often poorly understood. For example, although we know that the majority of the most aggressive forms of brain tumors, such as glioma, harbor a mutation of a single gene (IDH), the mechanistic explanation for the activation of its characteristic epigenetic and transcriptional signatures is still far from well characterized. Moreover, network-based strategies have recently emerged as an effective framework for the discovery of functional disease drivers that act as main regulators of cancer phenotypes. 
Here we describe a comprehensive workflow that integrates many Bioconductor packages in order to analyze and integrate the multiplicity of molecular observation layers in large-scale cancer datasets.\n\nIndeed, recent technological developments have allowed the deposition of large amounts of genomic and epigenomic data, such as gene expression, DNA methylation, and the genomic localization of transcription factors, into freely available public international consortia like The Cancer Genome Atlas (TCGA), The Encyclopedia of DNA Elements (ENCODE), and The NIH Roadmap Epigenomics Mapping Consortium (Roadmap)1. An overview of the three consortia is given below:\n\nThe Cancer Genome Atlas (TCGA): The TCGA consortium, a National Institute of Health (NIH) initiative, makes publicly available molecular and clinical information for more than 30 types of human cancers, including: exome (variant analysis), single nucleotide polymorphism (SNP), DNA methylation, transcriptome (mRNA), microRNA (miRNA), proteome and clinical information. Sample types available in TCGA are: primary solid tumors, recurrent solid tumors, blood-derived normal and tumor, and solid tissue normal2.\n\nThe Encyclopedia of DNA Elements (ENCODE): Founded in 2003 by the National Human Genome Research Institute (NHGRI), the project aims to build a comprehensive list of functional elements that have an active role in the genome, including regulatory elements that govern gene expression. Biosamples include immortalized cell lines, tissues, primary cells and stem cells3.\n\nThe NIH Roadmap Epigenomics Mapping Consortium: This was launched with the goal of producing a public resource of human epigenomic data to support basic biology and disease-oriented research. 
Roadmap maps DNA methylation, histone modifications, chromatin accessibility, and small RNA transcripts in stem cells and primary ex vivo tissues4,5.\n\nBriefly, these three consortia provide large-scale epigenomic data generated on a variety of microarray and next-generation sequencing (NGS) platforms. Each consortium encompasses specific types of biological information on specific tissue or cell types; when analyzed together, they provide an invaluable opportunity for research laboratories to better understand the developmental progression from the normal to the cancer state at the molecular level and, importantly, to correlate these phenotypes with tissue of origin.\n\nAlthough there exists a wealth of possibilities6 for accessing cancer-associated data, Bioconductor represents the most comprehensive set of open-source, updated and integrated professional tools for the statistical analysis of large-scale genomic data. Thus, we propose our workflow within Bioconductor to describe how to download, process, analyze and integrate cancer data to address specific cancer-related questions. However, no single tool comprehensively integrates sequence and mutation information, epigenomic state and gene expression within the context of gene regulatory networks to identify oncogenic drivers and characterize altered pathways during cancer progression. Our workflow presents several Bioconductor packages for working with genomic and epigenomic data.\n\n\nMethods\n\nTCGA data are accessible via the TCGA data portal and the Broad Institute’s GDAC Firehose. The data are provided as different levels or tiers: Level 1 (Raw Data), Level 2 (Processed Data), Level 3 (Segmented or Interpreted Data) and Level 4 (Region of Interest Data). While the TCGA data portal provides level 1 to 3 data, Firehose only provides levels 3 and 4. An explanation of the different levels can be found on the TCGA wiki. 
The data provided by the TCGA data portal can be accessed using the Bioconductor package TCGAbiolinks, while the data provided by Firehose can be accessed with the Bioconductor package RTCGAtoolbox.\n\nThe next steps describe how one could use TCGAbiolinks & RTCGAtoolbox to download clinical, genomic, transcriptomic and epigenomic data, as well as subtype information and GISTIC results (genes identified as targeted by somatic copy-number alterations (SCNAs) that drive cancer growth). Just to reiterate, the data used in this workflow are published data and freely available.\n\nDownloading data from TCGA data portal. The Bioconductor package TCGAbiolinks7 has three main functions, TCGAquery, TCGAdownload and TCGAprepare, which are used in sequence to search, download and load the data as an R object.\n\nTCGAquery searches a pre-processed TCGA database and returns a summary table with the found files, samples, version and other useful information. The most important TCGAquery arguments are tumor, which receives one or multiple tumor types (USC, LGG, SKCM, KICH, CHO, etc); platform, which receives the platform (HumanMethylation27, Genome_Wide_SNP_6, IlluminaHiSeq_RNASeqV2, etc); version, which receives the version of the data to be downloaded if the user wants an older version; and samples, which receives a list of TCGA barcodes (e.g. \"TCGA-CS-4938\") to filter the search results. A complete list of possible entries for these arguments can be found in the TCGAbiolinks vignette. Lines 6 and 13 of Listing 1 show an example of this function.\n\nAfter searching, the user can download the data with TCGAdownload. An important feature of this function is the ability to filter the data using the argument type, if the user wants to specify the file type, and samples, if the user wants to specify samples (a list of TCGA barcodes). For example, lines 15 and 18 of Listing 1 are used to select a specific file type to download and to prepare the data, respectively. 
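Because Listing 1 is provided as a figure, the search-download-prepare sequence can also be sketched directly. This is a minimal sketch assuming the legacy TCGAbiolinks interface described in the text; the exact argument names (e.g. path, dir) should be checked against the package vignette:

```r
library(TCGAbiolinks)

# Search: one tumor type and one platform (legacy interface, as in the text)
query <- TCGAquery(tumor = "LGG", platform = "IlluminaHiSeq_RNASeqV2", level = 3)

# Download: restrict to one file type and, optionally, to specific barcodes
TCGAdownload(query, path = "data",
             type = "rsem.genes.results",
             samples = "TCGA-CS-4938")

# Prepare: load the downloaded files into a summarizedExperiment object
rse <- TCGAprepare(query, dir = "data",
                   type = "rsem.genes.results",
                   summarizedExperiment = TRUE)
```

Note that newer releases of TCGAbiolinks replaced this interface with GDCquery, GDCdownload and GDCprepare; the sketch above follows the legacy interface used throughout this workflow.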
The platforms and their possible inputs for the type argument are shown below:\n\nRNASeqV2: junction_quantification, rsem.genes.results, rsem.isoforms.results, rsem.genes.normalized_results, rsem.isoforms.normalized_results, bt.exon_quantification\n\nRNASeq: exon.quantification, spljxn.quantification, gene.quantification\n\ngenome_wide_snp_6: hg18.seg, hg19.seg, nocnv_hg18.seg, nocnv_hg19.seg\n\nIlluminaHiSeq_miRNASeq: hg19.mirbase20.mirna.quantification, hg19.mirbase20.isoform.quantification, mirna.quantification, isoform.quantification\n\nFinally, TCGAprepare transforms the downloaded data into a summarizedExperiment object or a data frame. If summarizedExperiment is set to TRUE, TCGAbiolinks will add metadata to the object in order to help the user when working with the data. Also, if the user sets the argument add.subtype to TRUE, the summarizedExperiment will receive subtype information defined by The Cancer Genome Atlas (TCGA) Research Network reports (the full list of papers can be seen in the TCGAquery_subtype section of the TCGAbiolinks vignette). Likewise, if the user sets the argument add.clinical to TRUE, the summarizedExperiment will receive clinical information. Lines 8–11 and 18–22 of Listing 1 illustrate this function.\n\n\n\nListing 1. Downloading DNA methylation and gene expression data from TCGA with TCGAbiolinks\n\nIf a summarizedExperiment object was chosen, the data can be accessed with three different accessors: assay for the data itself, rowRanges for the genomic ranges of each row (feature) and colData for the sample information (patient, batch, sample type, etc)8,9. An example is shown in Listing 2.\n\n\n\nListing 2. summarizedExperiment accessors\n\nClinical data can be obtained using the function TCGAquery_clinical, which can be used as described in Listing 3. This function has three arguments: tumor, clinical_data_type and samples. 
The clinical_data_type argument is always required and should be accompanied by at least one of the other two parameters. Examples for the argument clinical_data_type are: “clinical_drug”, “clinical_patient”, and “clinical_radiation” (a complete list and description can be found in the section ‘Working with clinical data.’ of the TCGAbiolinks vignette).\n\nAn important note about the clinical data is that follow-up data for TCGA patients are contained in the ‘clinical_follow_up’ files for each cancer type; to obtain all available disease progression information, users should include all the follow_up files in their analyses, not just the latest version.\n\n\n\nListing 3. Downloading clinical data with TCGAbiolinks\n\nMutation information is stored in Mutation Annotation Format (MAF) files, which contain different mutation types (somatic or germline) and states (validated or putative). A summary of all the MAF files can be accessed at the TCGA wiki. To download these data using TCGAbiolinks, the TCGAquery_maf function is provided. It will download the non-obsolete tables from the TCGA wiki, remove the protected entries and ask the user which file s/he wants to download (see Listing 4). It will then download and return a data frame with the data.\n\n\n\nListing 4. Downloading mutation data with TCGAbiolinks\n\nFinally, the Cancer Genome Atlas (TCGA) Research Network has reported integrated genome-wide studies of various diseases, in what is called ‘PanCan’. The TCGAprepare function can automatically import the subtypes defined by these reports and incorporate them into a summarizedExperiment object. The subtypes can also be accessed using the TCGAquery_subtype function. The subtypes include: LGG10, GBM10, STAD11, BRCA12, READ13, COAD13 and LUAD14.\n\n\n\nListing 5. summarizedExperiment accessors\n\nDownloading data from Broad TCGA GDAC. 
The Bioconductor package RTCGAtoolbox15 provides access to Firehose Level 3 and 4 data through the function getFirehoseData. The following arguments allow users to select the version and tumor type of interest:\n\ndataset - Tumor to download. A complete list of possibilities is given by the getFirehoseDatasets function.\n\nrunDate - Stddata run dates. Dates can be viewed with the getFirehoseRunningDates function.\n\ngistic2_Date - Analysis run dates. Dates can be viewed with the getFirehoseAnalyzeDates function.\n\nThese arguments can be used to select the data type to download: RNAseq_Gene, Clinic, miRNASeq_Gene, ccRNAseq2_Gene_Norm, CNA_SNP, CNV_SNP, CNA_Seq, CNA_CGH, Methylation, Mutation, mRNA_Array, miRNA_Array, and RPPA.\n\nBy default, RTCGAtoolbox allows users to download up to 500 MB worth of data. To increase the size of the download, users are encouraged to use the fileSizeLimit argument. An example is found in Listing 6. The getData function allows users to access the downloaded data (see lines 22–24 of Listing 6) as an S4Vector object.\n\n\n\nListing 6. Downloading TCGA data files with RTCGAtoolbox\n\nFinally, RTCGAtoolbox can access level 4 data, which is handy when the user requires GISTIC results. GISTIC is used to identify genes targeted by somatic copy-number alterations (SCNAs)16 (see Listing 7).\n\n\n\nListing 7. Using RTCGAToolbox to get the GISTIC results\n\nCopy number variations (CNV) have a critical role in cancer development and progression. A chromosomal segment can be deleted or amplified as a result of genomic rearrangements, such as deletions, duplications, insertions and translocations. CNV are genomic regions greater than 1 kb with an alteration of copy number between two conditions, e.g. tumor versus normal.\n\nTCGA collects copy number data and allows the CNV profiling of cancer. Tumor and paired-normal DNA samples were analyzed for CNV detection using microarray- and sequencing-based technologies. 
Level 3 processed data are the aberrant regions along the genome resulting from CNV segmentation, and they are available for all copy number technologies.\n\nIn this section, we will show how to analyze CNV level 3 data from TCGA to identify recurrent alterations in the cancer genome. We analyzed GBM and LGG segmented CNV from SNP arrays (Affymetrix Genome-Wide Human SNP Array 6.0).\n\nPre-Processing Data. The only CNV platform available for both LGG and GBM in TCGA is \"Affymetrix Genome-Wide Human SNP Array 6.0\". Using TCGAbiolinks, we queried for CNV SNP6 level 3 data for primary solid tumor samples. Data for selected samples were downloaded and prepared in two separate rse objects (RangedSummarizedExperiment).\n\n\n\nListing 8. Searching, downloading and preparing CNV data with TCGAbiolinks\n\nIdentification of recurrent CNV in cancer. Cancer-related CNV have to be present in many of the analyzed genomes. The most significant recurrent CNV were identified using GAIA17, an iterative procedure in which a statistical hypothesis framework is extended to take into account within-sample homogeneity. GAIA is based on a conservative permutation test allowing the estimation of the probability distribution of the contemporary mutations expected for non-driver markers. Segmented data retrieved from TCGA were used to generate a matrix including all needed information about the observed aberrant regions. Furthermore, GAIA requires genomic probe metadata (specific to each CNV technology), which can be downloaded from the Broad Institute website.\n\n\n\nListing 9. Recurrent CNV identification in cancer with GAIA\n\nRecurrent amplifications and deletions were identified for both LGG (Figure 1a) and GBM (Figure 1b), and represented in chromosomal overview plots by a statistical score (-log10 corrected p-value for amplifications and log10 corrected p-value for deletions). 
Genomic regions identified as significantly altered in copy number (corrected p-value < 10−4) were then annotated to report amplified and deleted genes potentially related to cancer.\n\nGene annotation of recurrent CNV. The aberrant recurrent genomic regions in cancer, as identified by GAIA, have to be annotated to verify which genes are significantly amplified or deleted. Using biomaRt, we retrieved the genomic ranges of all human genes and compared them with the significant aberrant regions to select full-length genes. An example of the result is shown in Table 1.\n\n\n\nListing 10. Gene annotation of recurrent CNV\n\nVisualizing multiple genomic alteration events. To visualize multiple genomic alteration events we recommend the OncoPrint plot provided by the Bioconductor package ComplexHeatmap18. Listing 11 shows how to download mutation data using TCGAquery_maf (line 4); we then filtered the genes to retain those with mutations in glioma-specific pathways (lines 6–12). The following steps prepare the data as a matrix suitable for the oncoPrint function. We defined SNPs as blue, insertions as green and deletions as red. The upper barplot indicates the number of genetic mutations per patient, while the right barplot shows the number of genetic mutations per gene. It is also possible to add annotations to rows or columns; for columns, an annotation inserted at the top replaces the upper barplot. The final result, with the annotation added at the bottom, is shown in Figure 2.\n\nBlue defines SNPs, green defines insertions and red defines deletions. The upper barplot shows the number of these genetic mutations for each patient, while the right barplot shows the number of genetic mutations for each gene. The bottom bar shows the group of each sample.\n\n\n\nListing 11. 
Oncoprint\n\nOverview of genomic alterations by circos plot\n\nGenomic alterations in cancer, including CNV and mutations, can be represented in an effective overview plot called a circos plot. We used the circlize CRAN package to represent significant CNV (resulting from the GAIA analysis) and recurrent mutations (selecting curated genetic variations retrieved from TCGA that are identified in at least two tumor samples) in LGG (see Listing 12). A circos plot can illustrate molecular alterations genome-wide or only in one or more selected chromosomes. Figure 3 shows the resulting circos plot for all chromosomes, while Figure 4 shows the plot for chromosome 17 only.\n\n\n\nListing 12. Genomic aberration overview by circos plot\n\nPre-Processing Data. The LGG and GBM data used for the following transcriptomic analysis were downloaded using TCGAbiolinks. We downloaded only primary solid tumor (TP) samples, which resulted in 516 LGG samples and 156 GBM samples, then prepared them in two separate rse objects (RangedSummarizedExperiment), saving them as R objects with filenames including both the name of the cancer and the name of the platform used for the gene expression data (see Listing 13).\n\n\n\nListing 13. Searching, downloading and preparing RNA-seq data with TCGAbiolinks\n\nTo pre-process the data, we first searched for possible outliers using the TCGAanalyze_Preprocessing function, which performs an Array-Array Intensity Correlation (AAIC) (lines 14–17 and 26–29 of Listing 14). In this way we defined a square symmetric matrix of Pearson correlations among all samples in each cancer type (LGG or GBM). 
From this matrix, no samples with low correlation (cor.cut = 0.6) were identified as possible outliers.\n\nSecond, using the TCGAanalyze_Normalization function, which encompasses the functions of the EDASeq package, we normalized mRNA transcripts.\n\nThis function implements within-lane normalization procedures to adjust for GC-content effect (or other gene-level effects) on read counts (loess robust local regression, global-scaling, and full-quantile normalization19) and between-lane normalization procedures to adjust for distributional differences between lanes, e.g. sequencing depth (global-scaling and full-quantile normalization20).\n\n\n\nListing 14. Normalizing mRNA transcripts and differential expression analysis with TCGAbiolinks\n\nUsing TCGAanalyze_DEA, we identified 2,901 differentially expressed genes (DEG) (log fold change >= 1 and FDR < 1%) between 515 LGG and 155 GBM samples.\n\nEA: enrichment analysis. To understand the biological processes underlying the DEGs, we performed an enrichment analysis using the TCGAanalyze_EA_complete function (see Listing 15).\n\n\n\nListing 15. Enrichment analysis\n\nTCGAanalyze_EAbarplot outputs a bar chart, as shown in Figure 5, with the number of genes in the main categories of three ontologies (GO:biological process, GO:cellular component, and GO:molecular function).\n\nFigure 5 shows canonical pathways significantly overrepresented (enriched) in the DEGs. The most statistically significant canonical pathways identified in the DEG list are ranked according to their FDR-corrected p-values (-log10; colored bars) and the ratio of list genes found in each pathway over the total number of genes in that pathway (ratio, red line).\n\nPEA: Pathways enrichment analysis. To verify whether the genes found have a specific role in a pathway, the Bioconductor package pathview21 can be used. Listing 16 shows an example of how to use it. 
It can receive, for example, a named vector of genes with their expression levels, the pathway.id, which can be found in the KEGG database, the species ('hsa' for Homo sapiens) and the limits for the gene expression.\n\n\n\nListing 16. Pathways enrichment analysis with the pathview package\n\nThe red genes are up-regulated and the green genes are down-regulated in the LGG samples compared to GBM.\n\nInference of gene regulatory networks. Starting with the set of differentially expressed genes, we infer gene regulatory networks using the following state-of-the-art inference algorithms: ARACNE22, CLR23, MRNET24 and C3NET25. These methods are based on mutual information and use different heuristics to infer the edges in the network. They have been made available via Bioconductor/CRAN packages (MINET26 and c3net25, respectively).\n\nMany gene regulatory interactions have been experimentally validated and published. These ‘known’ interactions can be accessed using different tools and databases such as BioGrid27 or GeneMANIA28. However, this knowledge is far from complete and in most cases only contains a small subset of the real interactome. The quality of the inferred networks can be assessed by comparing the inferred interactions to those that have been validated. This comparison results in a confusion matrix as presented in Table 2. 
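The confusion-matrix comparison of Table 2 can be made concrete with a small base-R sketch; the gene pairs below are hypothetical and stand in for the inferred and BioGrid edge sets:

```r
# Represent each undirected edge by its alphabetically ordered pair,
# so that A-B and B-A count as the same interaction.
canonical <- function(edges) {
  apply(edges, 1, function(e) paste(sort(e), collapse = "|"))
}

inferred <- canonical(rbind(c("TP53", "MDM2"), c("EGFR", "PTEN"), c("IDH1", "ATRX")))
known    <- canonical(rbind(c("TP53", "MDM2"), c("EGFR", "PTEN"), c("CIC", "FUBP1")))

# With n genes there are choose(n, 2) possible undirected edges in total
all_genes  <- c("TP53", "MDM2", "EGFR", "PTEN", "IDH1", "ATRX", "CIC", "FUBP1")
n_possible <- choose(length(all_genes), 2)

TP <- length(intersect(inferred, known))   # inferred and known
FP <- length(setdiff(inferred, known))     # inferred but not known
FN <- length(setdiff(known, inferred))     # known but not inferred
TN <- n_possible - TP - FP - FN            # neither inferred nor known

fpr       <- FP / (FP + TN)   # false positive rate (ROC x-axis)
tpr       <- TP / (TP + FN)   # true positive rate / recall (ROC y-axis)
precision <- TP / (TP + FP)   # precision (PR-curve y-axis)
```

Sweeping a threshold on the inference scores and recomputing these counts at each step is what traces out the ROC and PR curves discussed next.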
Different quality measures can then be computed, such as the false positive rate FP/(FP + TN), the true positive rate (recall) TP/(TP + FN) and the precision TP/(TP + FP).\n\nThe performance of an algorithm can then be summarized using ROC (false positive rate versus true positive rate) or PR (precision versus recall) curves.\n\nA weakness of this type of comparison is that an edge that is absent from the set of known interactions can either mean that an experimental validation has been attempted and did not show any regulatory mechanism or (more likely) that it has not yet been attempted.\n\nIn the following, we ran the inference on the 2,901 differentially expressed genes identified in the section “Transcriptomic analysis”.\n\nRetrieving known interactions\n\nWe obtained a set of known interactions from the BioGrid database.\n\n\n\nThere are 3,941 unique interactions between the 2,901 differentially expressed genes.\n\nUsing differentially expressed genes from the TCGAbiolinks workflow\n\nWe start this analysis by inferring two gene regulatory networks, one for the GBM data set and one for the LGG data set, using one gene set (the corresponding numbers of edges are presented in Table 3).\n\n\n\nIn Figure 7, the obtained ROC curves and the corresponding areas under the curve (AUC) are presented. It can be observed that CLR and MRNET perform best when comparing the inferred network with known interactions from the BioGrid database.\n\nDNA methylation is an important component of numerous cellular processes, such as embryonic development, genomic imprinting, X-chromosome inactivation, and preservation of chromosome stability29.\n\nIn mammals, DNA methylation is found sparsely but globally, distributed in definite CpG sequences throughout the entire genome. There is, however, an exception: CpG islands (CGIs), which are short interspersed DNA sequences that are enriched for GC. 
These CpG islands are normally found at sites of transcription initiation and their methylation can lead to gene silencing30.\n\nThus, the investigation of DNA methylation is crucial to understanding regulatory gene networks in cancer, as DNA methylation represses transcription31. Therefore, the detection of DMRs (Differentially Methylated Regions) can help us investigate regulatory gene networks.\n\nThis section describes the analysis of DNA methylation using the Bioconductor package TCGAbiolinks7. For this analysis, and owing to the time required to perform it, we selected only 10 LGG samples and 10 GBM samples that have both DNA methylation data from the Infinium HumanMethylation450 platform and gene expression data from Illumina HiSeq 2000 RNA Sequencing Version 2 (lines 1–7 of Listing 17 describe the data acquisition). We started by checking the mean DNA methylation of the different groups of samples; then a DMR analysis was performed, in which we searched for regions of possible biological significance, for example, regions that are methylated in one group and unmethylated in the other. After finding these regions, they can be visualized using heatmaps.\n\nVisualizing the mean DNA methylation of each patient. It should be highlighted that some pre-processing of the DNA methylation data was done. The DNA methylation data from the 450k platform have three types of probes: cg (CpG loci), ch (non-CpG loci) and rs (SNP assay). The last type of probe can be used for sample identification and tracking and should be excluded from differential methylation analysis according to the Illumina manual. Therefore, the rs probes were removed (see Listing 17, line 43). Also, probes on chromosomes X and Y were removed to eliminate potential artifacts originating from the presence of a different proportion of males and females32. 
The last pre-processing step was to remove probes with at least one NA value (see Listing 17, line 40).\n\nAfter this pre-processing, using the provided TCGAvisualize_meanMethylation function, we can examine the mean DNA methylation of each patient in each group. It receives as arguments a summarizedExperiment object with the DNA methylation data, and groupCol and subgroupCol, which should be two columns from the sample information matrix of the summarizedExperiment object (accessed by the colData function) (see Listing 17, lines 46–50).\n\n\n\nListing 17. Visualizing the mean DNA methylation of groups\n\nFigure 8 illustrates a mean DNA methylation plot for each sample in the GBM group (140 samples) and for each sample in the LGG group. A genome-wide view of the data highlights a difference between the groups of tumors (p-value = 6.1 × 10−6).\n\nSearching for differentially methylated CpG sites. The next step is to define differentially methylated CpG sites between the two groups. This can be done using the TCGAanalyze_DMR function (see Listing 18). The DNA methylation data (level 3) are presented in the form of beta-values, on a scale ranging from 0.0 (probe completely unmethylated) to 1.0 (probe completely methylated).\n\nTo find these differentially methylated CpG sites, the function first calculates the difference between the mean DNA methylation (mean of the beta-values) of each group for each probe. Second, it tests for differential methylation between the two groups using the Wilcoxon test, adjusting p-values by the Benjamini-Hochberg method. The arguments of TCGAanalyze_DMR were set to require a minimum absolute beta-value difference of 0.25 and an adjusted p-value below 10−2.\n\nAfter these tests, a volcano plot (x-axis: difference of mean DNA methylation, y-axis: statistical significance) is created to help users identify the differentially methylated CpG sites, and the object is returned with the results in its rowRanges. 
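The per-probe test just described (difference of group means, a Wilcoxon test, and Benjamini-Hochberg adjustment) can be sketched in base R on simulated beta-values; the data are synthetic and the internals of TCGAanalyze_DMR may differ in detail:

```r
set.seed(42)
# Simulated beta-values: 50 probes x 10 GBM and 10 LGG samples.
# Probe 1 is constructed to be strongly hypermethylated in GBM.
beta_gbm <- matrix(runif(50 * 10, 0, 0.4), nrow = 50)
beta_lgg <- matrix(runif(50 * 10, 0, 0.4), nrow = 50)
beta_gbm[1, ] <- runif(10, 0.7, 1.0)

# Step 1: difference of group means per probe (GBM minus LGG)
diff_mean <- rowMeans(beta_gbm) - rowMeans(beta_lgg)

# Step 2: Wilcoxon rank-sum test per probe, Benjamini-Hochberg adjustment
p_raw <- sapply(seq_len(nrow(beta_gbm)), function(i)
  wilcox.test(beta_gbm[i, ], beta_lgg[i, ])$p.value)
p_adj <- p.adjust(p_raw, method = "BH")

# Thresholds used in the text: |difference| >= 0.25 and adjusted p < 0.01
dmc <- which(abs(diff_mean) >= 0.25 & p_adj < 0.01)
```

The same two quantities (diff_mean on the x-axis, -log10 of p_adj on the y-axis) are what the volcano plot displays.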
Figure 9 shows the volcano plot produced by Listing 18. This plot aids the user in selecting relevant thresholds as we search for candidate biological DMRs.\n\n\n\nListing 18. Finding differentially methylated CpG sites\n\nTo visualize the level of DNA methylation of these probes across all samples, we use heatmaps, which can be generated with the Bioconductor package ComplexHeatmap18. To create a heatmap with the ComplexHeatmap package, the user should provide at least one matrix with the DNA methylation levels. Also, annotation layers can be added and placed at the bottom, top, left side or right side of the heatmap to provide additional metadata. Listing 19 shows the code to produce the heatmap of the DNA methylation data (Figure 10).\n\n\n\nListing 19. Creating heatmaps for DNA methylation using ComplexHeatmap\n\nRows are probes and columns are samples (patients). The DNA methylation values range from 0.0 (completely unmethylated, blue) to 1.0 (completely methylated, red). The group of each sample is annotated in the top bar and the DNA methylation status of each probe is annotated in the right bar.\n\nMotif analysis. Motif discovery is the attempt to extract small sequence signals hidden within largely non-functional intergenic sequences. These short nucleotide sequences (6–15 bp) can have biological significance, as they may control the expression of genes; such sequences are called regulatory motifs. The Bioconductor package rGADEM33,34 provides an efficient de novo motif discovery algorithm for large-scale genomic sequence data.\n\nThe user may be interested in looking for unique signatures in the regions identified as differentially methylated, to identify candidate transcription factors that could bind to these elements affected by the accumulation or absence of DNA methylation. For this analysis we use a sequence of 100 bases before and after the probe location (see lines 6–8 of Listing 20). 
An object will be returned which contains all relevant information about the motif analysis (sequence consensus, PWM, chromosome, p-value, etc.).\n\nUsing the Bioconductor package motifStack35 it is possible to generate a graphical representation of multiple motifs with different similarity scores (see Figure 11).\n\n\n\nListing 20. rGADEM: de novo motif discovery\n\nAfter rGADEM returns its results, the user can use the MotIV package36–39 to start the motif matching analysis (line 4 of Listing 21). The result is shown in Figure 12.\n\n\n\nListing 21. MotIV: motif matching analysis\n\nRecent studies have shown that a deep integrative analysis can aid researchers in identifying and extracting biological insight from high-throughput data29,40,41. In this section, we introduce a Bioconductor package called ELMER to identify regulatory enhancers using gene expression data, DNA methylation data and motif analysis. In addition, we show how to integrate the results from the previous sections with important epigenomic data derived from both the ENCODE and Roadmap projects.\n\nIntegration of DNA methylation & gene expression data. After finding differentially methylated CpG sites, one possible question is whether nearby genes also undergo a change in their expression, either an increase or a decrease. DNA methylation at gene promoters has been shown to be associated with silencing of the respective gene.\n\nThe starburst plot was proposed to combine the information of two volcano plots, and has been applied to the study of DNA methylation and gene expression42. Although it is desirable that both gene expression and DNA methylation data come from the same samples, the starburst plot can also be applied as a meta-analysis tool, combining data from different samples43.\n\nThe function TCGAvisualize_starburst creates a starburst plot for the comparison of DNA methylation and gene expression.
The log10 (FDR-corrected P value) for DNA methylation is plotted on the x axis, and that for gene expression on the y axis, for each gene. The horizontal black dashed line shows the FDR-adjusted P value of 10−2 for the expression data, and the vertical ones show the FDR-adjusted P value of 10−2 for the DNA methylation data. The starburst plot for Listing 22 is shown in Figure 13. While the arguments met.p.cut and exp.p.cut control the black dashed lines, the arguments diffmean.cut and logFC.cut are used to highlight the genes that respect these thresholds (circled genes in Figure 13). For the example below we set higher p-value cutoffs, aiming to obtain the most significant list of gene/probe pairs; for the next sections we will use exp.p.cut = 0.01 and logFC.cut = 1, as in the previous sections.\n\n\n\nListing 22. Starburst plot for comparison of DNA methylation and gene expression\n\nThe starburst plot highlights nine distinct quadrants. Highlighted genes might have the potential for activation due to epigenetic alterations.\n\nChIP-seq analysis. ChIP-seq is used primarily to determine how transcription factors and other chromatin-associated proteins influence phenotype-affecting mechanisms. Determining how proteins interact with DNA to regulate gene expression is essential for fully understanding many biological processes and disease states. The aim here is to explore significantly overlapping datasets in order to infer co-regulation or transcription factor complexes for further investigation. A summary of the association of each histone mark is shown in Table 4. ChIP-seq data are available in the Roadmap database and can be obtained through the AnnotationHub package44 or from the Roadmap web portal. Table 5 shows the description of all the Roadmap files that are available through AnnotationHub.\n\nAfter obtaining the ChIP-seq data, we can then identify regions overlapping those identified in the starburst plot.
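Such an overlap can be computed with the GenomicRanges package; the sketch below uses made-up coordinates purely to show the idea.

```r
library(GenomicRanges)

# Hypothetical differentially methylated regions and ChIP-seq peaks
dmr   <- GRanges("chr1", IRanges(start = c(100, 900), width = 10))
peaks <- GRanges("chr1", IRanges(start = c(95, 5000), width = 200))

hits <- findOverlaps(dmr, peaks)             # index pairs of overlapping ranges
dmr.in.peaks <- subsetByOverlaps(dmr, peaks) # DMRs that fall within a peak
```

In the workflow, dmr would hold the probes selected from the starburst plot and peaks the narrowPeak regions downloaded from Roadmap.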
The narrowPeak files are the ones selected for this step.\n\nFor a complete pipeline with ChIP-seq data, Bioconductor provides excellent tutorials for working with ChIP-seq, and we encourage our readers to review the following article53.\n\nThe first step shown in Listing 23 is to download the ChIP-seq data. The function query receives as arguments the AnnotationHub database (ah) and a list of keywords to be used for searching the data: EpigenomeRoadmap selects the Roadmap database, consolidated selects only the consolidated epigenomes, brain selects the brain samples, E068 is one of the epigenomes for brain (a table with the list is found in this summary table)54, and narrowPeak selects the type of file. The downloaded data are processed data from an integrative analysis of 111 reference human epigenomes54.\n\n\n\nListing 23. Downloading ChIP-seq data\n\nThe ChIPseeker package55 implements functions that use ChIP-seq data to retrieve the nearest genes around a peak and to annotate the genomic region of a peak, among others. It also provides several visualization functions to summarize the coverage of the peaks, the average profile and heatmap of peaks binding to TSS regions, genomic annotation, distance to the TSS, and the overlap of peaks or genes.\n\nAfter downloading the histone marks (see Listing 23), it is useful to verify the average profile of peaks binding to hypomethylated and hypermethylated regions, which will help the user better understand the regions found. Listing 24 shows example code to plot the average profile, and Figure 14 shows the result.\n\nThe figure indicates that the differentially methylated regions overlap regions of enhancers, promoters and increased activation of genomic elements.\n\nTo help the user better understand the regions found in the DMR analysis, we downloaded histone marks specific for brain tissue using the AnnotationHub package, which can access the Roadmap database (Listing 23).
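The AnnotationHub query described above might look like the sketch below. The keyword spelling follows the description of Listing 23 and may need adjusting to the current AnnotationHub metadata; an internet connection is required.

```r
library(AnnotationHub)

ah <- AnnotationHub()
# Keywords as described for Listing 23: Roadmap database, consolidated
# epigenomes, brain samples, epigenome E068, narrowPeak files
res <- query(ah, c("EpigenomeRoadmap", "consolidated",
                   "brain", "E068", "narrowPeak"))
# Individual records can then be downloaded, e.g. peaks <- res[[1]]
```

query() only searches the hub's metadata; the actual GRanges of peaks is fetched lazily when a record is subset with [[.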
ChIPseeker was then used to visualize how histone modifications are enriched at hypomethylated and hypermethylated regions (Listing 24). The enrichment heatmap and the average profile of peaks binding to those regions are shown in Figure 14 and Figure 15, respectively.\n\nThe figure indicates that most of the peaks overlapping the probes are not brain specific.\n\nThe hypomethylated and hypermethylated regions are enriched for H3K4me3, H3K9ac, H3K27ac and H3K4me1, which indicates regions of enhancers, promoters and increased activation of genomic elements. However, these regions are associated neither with transcribed regions nor with Polycomb repression, as the H3K36me3 and H3K27me3 heatmaps do not show an enrichment near position 0, and the average profiles also do not show a peak at position 0.\n\n\n\nListing 24. Average profile plot\n\nTo annotate the location of a given peak in terms of genomic features, annotatePeak assigns peaks to genomic annotations in the “annotation” column of its output, which indicates whether a peak is in the TSS, an exon, the 5’ UTR, the 3’ UTR, or an intronic or intergenic region.\n\n\n\nListing 25. Annotating the location of a given peak in terms of genomic features\n\nIdentification of regulatory enhancers. Recently, many studies have suggested that enhancers play a major role as regulators of cell-specific phenotypes, leading to alterations in transcriptomes related to diseases56–59. In order to investigate regulatory enhancers that can be located at long distances upstream or downstream of target genes, Bioconductor offers the Enhancer Linking by Methylation/Expression Relationship (ELMER) package. This package is designed to combine DNA methylation and gene expression data from human tissues to infer multi-level cis-regulatory networks. It uses DNA methylation to identify enhancers and correlates their state with the expression of nearby genes to identify one or more transcriptional targets.
Transcription factor (TF) binding site analysis of enhancers is coupled with expression analysis of all TFs to infer upstream regulators. This package can easily be applied to publicly available TCGA cancer data sets as well as to custom DNA methylation and gene expression data sets60.\n\nELMER analysis has five main steps:\n\n1. Identify distal enhancer probes on the HM450K platform.\n\n2. Identify distal enhancer probes with significantly different DNA methylation levels between the control group and the experiment group.\n\n3. Identify putative target genes for the differentially methylated distal enhancer probes.\n\n4. Identify enriched motifs for the distal enhancer probes that are significantly differentially methylated and linked to a putative target gene.\n\n5. Identify regulatory TFs whose expression is associated with DNA methylation at the motifs.\n\nThis section shows how to use ELMER to analyze TCGA data, using LGG and GBM samples as an example.\n\nPreparing the data for the ELMER package. After downloading the data with the TCGAbiolinks package, some steps are still required to use TCGA data with ELMER. These steps can be done with the function TCGAprepare_elmer. For the DNA methylation data, this function removes probes with NA values in more than 20% of the samples and removes the annotation data; for RNA expression data it takes log2(expression + 1) of the expression matrix in order to linearize the relation between DNA methylation and expression. It also prepares the row names of the matrix as required by the package.\n\nListing 26 shows how to use TCGAbiolinks7 to search, download and prepare the data for the ELMER package. Due to time and memory constraints, in this example we will use only data from 10 LGG patients and 10 GBM patients that have both DNA methylation and gene expression data. These samples are the same used in the previous steps.\n\n\n\nListing 26.
Preparing TCGA data for ELMER’s mee object\n\nFinally, the ELMER input is a mee object that contains a DNA methylation matrix, a gene expression matrix, a probe information GRanges, a gene information GRanges, and a data frame summarizing the data. It should be highlighted that samples lacking either the gene expression or the DNA methylation data will be removed from the mee object.\n\nBy default the function fetch.mee, which is used to create the mee object, separates the samples into two groups, the control group (normal samples) and the experiment group (tumor samples), but the user can relabel the samples to compare different groups. For the next sections, we will work with two groups: the experiment group (LGG samples) and the control group (GBM samples).\n\n\n\nListing 27. Creating the mee object with TCGA data to be used in ELMER\n\nELMER analysis. After preparing the data into a mee object, we executed the five ELMER steps in both the hypo (distal enhancer probes hypomethylated in the LGG group) and hyper (distal enhancer probes hypermethylated in the LGG group) directions. The code is shown below. A description of how these distal enhancer probes are identified can be found in the ELMER.data vignette.\n\n\n\nListing 28. Running the ELMER analysis\n\nWhen ELMER identifies the enriched motifs for the distal enhancer probes that are significantly differentially methylated and linked to a putative target gene, it plots the odds ratio (x axis) for each motif found.\n\nThe list of motifs found for the hyper direction (probes hypermethylated in the LGG group compared to the GBM group) is shown in Figure 17. We selected the motifs that had a minimum incidence of 10 in the given probe set and a lower boundary of the 95% confidence interval for the odds ratio of at least 1.1.
These values are the defaults of the ELMER package.\n\nThe range shows the 95% confidence interval for each odds ratio.\n\nThe analysis found 14 enriched motifs for the hyper direction and no enriched motifs for the hypo direction.\n\nAfter finding this list of enriched motifs, ELMER identifies regulatory TFs whose expression is associated with DNA methylation at the motifs, and for each enriched motif a TF ranking plot is created automatically by ELMER. This plot shows the TF ranking based on the score (−log(P value)) of the association between TF expression and DNA methylation of the motif. We can see in Figure 18 that the top three TFs associated with the AP1 motif are POLR3K, DLX3 and NEUROD2.\n\nThe dashed line indicates the boundary of the top 5% of association scores, and the TFs within this boundary were considered candidate upstream regulators. The top 3 associated TFs and the TF family members (dots in red) that are associated with that specific motif are labeled in the plot.\n\nThe output of this step is a data frame with the following columns:\n\n1. motif: the name of the motif.\n\n2. top.potential.TF: the highest-ranking upstream TFs known to recognize the motif.\n\n3. potential.TFs: TFs which are within the top 5% list and are known to recognize the motif.\n\n4. top5percent: all TFs within the top 5% list, considered candidate upstream regulators.\n\nAlso, for each motif we can take a look at the three most relevant transcription factors. For example, for the AP1 motif, the average DNA methylation level of sites with the AP1 motif plotted against the expression of the transcription factors WT1, ZNF208, ATF4 and DDX5 is shown in Figure 19. We can see that the experiment group (GBM samples) has a lower average methylation level of sites with the AP1 motif and a higher expression of these transcription factors.\n\n\n\nListing 29.
Visualizing the average DNA methylation level of sites with a chosen motif vs. TF expression\n\nFor each relevant TF, we then use the clinical data to compare the survival curves of the 30% of patients with the highest expression of that transcription factor versus the 30% with the lowest expression. The code below shows that analysis.\n\n\n\nListing 30. Survival analysis for samples with lower and higher expression of a regulatory TF\n\nFigure 20 shows that the samples with lower expression of these TFs have better survival than those with higher expression.\n\nA) Survival plot for the 30% of patients with high expression and low expression of the FOXP4 TF. B) Survival plot for the 30% of patients with high expression and low expression of the FOXE3 TF.\n\n\nConclusion\n\nThis workflow outlines how one can use specific Bioconductor packages for the analysis of cancer genomics and epigenomics data derived from TCGA. In addition, we highlight the importance of using ENCODE and Roadmap data to inform on the biology of the non-coding elements defined by functional roles in gene regulation. We introduced the TCGAbiolinks and RTCGAToolbox Bioconductor packages in order to illustrate how one can acquire TCGA-specific data, followed by key steps of genomic analysis using the GAIA package, transcriptomic analysis using the TCGAbiolinks, dnet and pathview packages, and DNA methylation analysis using the TCGAbiolinks package. Inference of gene regulatory networks was also introduced via the MINET package. Finally, we introduced the Bioconductor packages AnnotationHub, ChIPseeker, ComplexHeatmap and ELMER to illustrate how one can acquire ENCODE/Roadmap data and integrate it with the results obtained from analyzing TCGA data in order to identify and characterize candidate regulatory enhancers associated with cancer.\n\n\nData and software availability\n\nThis workflow depends on various packages from version 3.2 of the Bioconductor project, running on R version 3.2.2 or higher.
It requires a number of software packages, including AnnotationHub, ChIPseeker, ELMER, ComplexHeatmap, GAIA, rGADEM, MotIV, MINET, RTCGAToolbox and TCGAbiolinks.\n\nVersion numbers for all packages used are given in the section "Session information". Listing 31 shows how to install all the required packages.\n\n\n\nListing 31. Installing packages\n\nAll data used in this workflow are freely available and can be accessed using an R/Bioconductor package. There are two main sources of data: The Cancer Genome Atlas (TCGA) and a supplementary data repository with processed datasets from the Roadmap Epigenomics Project and from the Encyclopedia of DNA Elements (ENCODE) project54. For the first, a summary of the available data can be seen at https://tcga-data.nci.nih.gov/tcga/ and the data can be accessed using the R/Bioconductor TCGAbiolinks package. For the second, a summary of the available data can be found in this spreadsheet, and the data can be accessed using the R/Bioconductor AnnotationHub package.\n\n\nSession information\n\n
and 2015/07925-5 to H.N.), the BridgeIRIS project, funded by INNOVIRIS, Region de Bruxelles Capitale, Brussels, Belgium, and by GENomic profiling of Gastrointestinal Inflammatory-Sensitive CANcers (GENGISCAN), Belgian FNRS PDR (T100914F to G.B.). Funding for open access charge: São Paulo Research Foundation (FAPESP) (2015/07925-5).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nHawkins RD, Hon GC, Ren B: Next-generation genomics: an integrative approach. Nat Rev Genet. 2010; 11(7): 476–486. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Research Network, Weinstein JN, Collisson EA, et al.: The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013; 45(10): 1113–1120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nENCODE Project Consortium: A user’s guide to the encyclopedia of DNA elements (ENCODE). PLoS Biol. 2011; 9(4): e1001046. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFingerman IM, McDaniel L, Zhang X, et al.: NCBI Epigenomics: a new public resource for exploring epigenomic data sets. Nucleic Acids Res. 2011; 39(Database issue): D908–912. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBernstein BE, Stamatoyannopoulos JA, Costello JF, et al.: The NIH Roadmap Epigenomics Mapping Consortium. Nat Biotechnol. 2010; 28(10): 1045–1048. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKannan L, Ramos M, Re A, et al.: Public data and open source tools for multi-assay genomic investigation of disease. Brief Bioinform. 2015; pii: bbv080. PubMed Abstract | Publisher Full Text\n\nColaprico A, Silva TC, Olsen C, et al.: TCGAbiolinks: an R/Bioconductor package for integrative analysis of TCGA data. Nucleic Acids Res. 2016; 44(8): e71.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorgan M, Obenchain V, Hester J, Pagès H: SummarizedExperiment: SummarizedExperiment container. R package version 1.1.0. Reference Source\n\nCeccarelli M, Barthel FP, Malta TM, et al.: Molecular Profiling Reveals Biologically Discrete Subsets and Pathways of Progression in Diffuse Glioma. Cell. 2016; 164(3): 550–563. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Research Network: Comprehensive molecular characterization of gastric adenocarcinoma. Nature. 2014; 513(7517): 202–209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Network: Comprehensive molecular portraits of human breast tumours. Nature. 2012; 490(7418): 61–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Network: Comprehensive molecular characterization of human colon and rectal cancer. Nature. 2012; 487(7407): 330–337. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Research Network: Comprehensive molecular profiling of lung adenocarcinoma. Nature. 2014; 511(7511): 543–550. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSamur MK: RTCGAToolbox: a new tool for exporting TCGA Firehose data. PLoS One. 2014; 9(9): e106397. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMermel CH, Schumacher SE, Hill B, et al.: GISTIC2.0 facilitates sensitive and confident localization of the targets of focal somatic copy-number alteration in human cancers. Genome Biol. 2011; 12(4): R41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorganella S, Pagnotta SM, Ceccarelli M: GAIA: genomic analysis of important aberrations. Reference Source\n\nGu Z: ComplexHeatmap: making complex heatmaps. R package version 1.7.1.
Reference Source\n\nRisso D, Schwartz K, Sherlock G, et al.: GC-content normalization for RNA-Seq data. BMC Bioinformatics. 2011; 12(1): 480. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBullard JH, Purdom E, Hansen KD, et al.: Evaluation of statistical methods for normalization and differential expression in mRNA-seq experiments. BMC Bioinformatics. 2010; 11(1): 94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLuo W, Brouwer C: Pathview: an R/Bioconductor package for pathway-based data integration and visualization. Bioinformatics. 2013; 29(14): 1830–1831. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMargolin AA, Nemenman I, Basso K, et al.: ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006; 7(Suppl 1): S7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFaith JJ, Hayete B, Thaden JT, et al.: Large-scale mapping and validation of Escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biol. 2007; 5(1): e8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeyer PE, Kontos K, Lafitte F, et al.: Information-theoretic inference of large transcriptional regulatory networks. EURASIP J Bioinform Syst Biol. 2007; 2007(1): 79879. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltay G, Emmert-Streib F: Inferring the conservative causal core of gene regulatory networks. BMC Syst Biol. 2010; 4(1): 132. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeyer PE, Lafitte F, Bontempi G: minet: A R/Bioconductor package for inferring large transcriptional networks using mutual information. BMC Bioinformatics. 2008; 9(1): 461. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStark C, Breitkreutz BJ, Reguly T, et al.: BioGRID: a general repository for interaction datasets. Nucleic Acids Res. 2006; 34(Database issue): D535–D539.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMontojo J, Zuberi K, Rodriguez H, et al.: GeneMANIA Cytoscape plugin: fast gene function predictions on the desktop. Bioinformatics. 2010; 26(22): 2927–2928. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPhillips T: The role of methylation in gene expression. Nature Education. 2008; 1(1): 116. Reference Source\n\nDeaton AM, Bird A: CpG islands and the regulation of transcription. Genes Dev. 2011; 25(10): 1010–1022. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobertson KD: DNA methylation and human disease. Nat Rev Genet. 2005; 6(8): 597–610. PubMed Abstract | Publisher Full Text\n\nMarabita F, Almgren M, Lindholm ME, et al.: An evaluation of analysis pipelines for DNA methylation profiling using the Illumina HumanMethylation450 BeadChip platform. Epigenetics. 2013; 8(3): 333–346. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDroit A, Gottardo R, Robertson G, et al.: rGADEM: de novo motif discovery. 2015. Reference Source\n\nLi L: GADEM: a genetic algorithm guided formation of spaced dyads coupled with an EM algorithm for motif discovery. J Comput Biol. 2009; 16(2): 317–329. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOu J, Brodsky M, Wolfe S, et al.: motifStack: plot stacked logos for single or multiple DNA, RNA and amino acid sequences. 2013. Reference Source\n\nMercier E, Gottardo R: MotIV: motif identification and validation. 2014. Reference Source\n\nMahony S, Auron PE, Benos PV: DNA familial binding profiles made easy: comparison of various motif alignment and clustering strategies. PLoS Comput Biol. 2007; 3(3): e61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahony S, Benos PV: STAMP: a web tool for exploring DNA-binding motif similarities. Nucleic Acids Res. 2007; 35(Web Server issue): W253–W258.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMercier E, Droit A, Li L, et al.: An integrated pipeline for the genome-wide analysis of transcription factor binding sites from ChIP-Seq. PLoS One. 2011; 6(2): e16432. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShi X, Liu J, Huang J, et al.: Integrative analysis of high-throughput cancer studies with contrasted penalization. Genet Epidemiol. 2014; 38(2): 144–151. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRhodes DR, Chinnaiyan AM: Integrative analysis of the cancer transcriptome. Nat Genet. 2005; 37(Suppl): S31–S37. PubMed Abstract | Publisher Full Text\n\nNoushmehr H, Weisenberger DJ, Diefes K, et al.: Identification of a CpG island methylator phenotype that defines a distinct subgroup of glioma. Cancer Cell. 2010; 17(5): 510–522. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSiegmund KD: Statistical approaches for the analysis of DNA methylation microarray data. Hum Genet. 2011; 129(6): 585–595. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTenenbaum D, Morgan M, Carlson M, et al.: AnnotationHub: client to access AnnotationHub resources. R package version 2.2.2. Reference Source\n\nHeintzman ND, Stuart RK, Hon G, et al.: Distinct and predictive chromatin signatures of transcriptional promoters and enhancers in the human genome. Nat Genet. 2007; 39(3): 311–318. PubMed Abstract | Publisher Full Text\n\nBernstein BE, Kamal M, Lindblad-Toh K, et al.: Genomic maps and comparative analysis of histone modifications in human and mouse. Cell. 2005; 120(2): 169–181. PubMed Abstract | Publisher Full Text\n\nBonasio R, Tu S, Reinberg D: Molecular signals of epigenetic states. Science. 2010; 330(6004): 612–616. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeters AH, Kubicek S, Mechtler K, et al.: Partitioning and plasticity of repressive histone methylation states in mammalian chromatin. Mol Cell. 2003; 12(6): 1577–1589.
PubMed Abstract | Publisher Full Text\n\nHeintzman ND, Hon GC, Hawkins RD, et al.: Histone modifications at human enhancers reflect global cell-type-specific gene expression. Nature. 2009; 459(7243): 108–112. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRada-Iglesias A, Bajpai R, Swigut T, et al.: A unique chromatin signature uncovers early developmental enhancers in humans. Nature. 2011; 470(7333): 279–283. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCreyghton MP, Cheng AW, Welstead GG, et al.: Histone H3K27ac separates active from poised enhancers and predicts developmental state. Proc Natl Acad Sci U S A. 2010; 107(50): 21931–21936. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNishida H, Suzuki T, Kondo S, et al.: Histone H3 acetylated at lysine 9 in promoter is associated with low nucleosome density in the vicinity of transcription start site in human cell. Chromosome Res. 2006; 14(2): 203–211. PubMed Abstract | Publisher Full Text\n\nPekowska A, Anders S: ChIP-seq analysis basics. 2015. Reference Source\n\nRoadmap Epigenomics Consortium, Kundaje A, Meuleman W, et al.: Integrative analysis of 111 reference human epigenomes. Nature. 2015; 518(7539): 317–330. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu G, Wang LG, He QY, et al.: ChIPseeker: an R/Bioconductor package for ChIP peak annotation, comparison and visualization. Bioinformatics. 2015; 31(14): 2382–2383. PubMed Abstract | Publisher Full Text\n\nGiorgio E, Robyr D, Spielmann M, et al.: A large genomic deletion leads to enhancer adoption by the lamin B1 gene: a second path to autosomal dominant leukodystrophy (ADLD). Hum Mol Genet. 2015; ddv065.\n\nGröschel S, Sanders MA, Hoogenboezem R, et al.: A single oncogenic enhancer rearrangement causes concomitant EVI1 and GATA2 deregulation in leukemia. Cell. 2014; 157(2): 369–381.
PubMed Abstract | Publisher Full Text\n\nSur IK, Hallikas O, Vähärautio A, et al.: Mice lacking a Myc enhancer that includes human SNP rs6983267 are resistant to intestinal tumors. Science. 2012; 338(6112): 1360–1363. PubMed Abstract | Publisher Full Text\n\nYao L, Berman BP, Farnham PJ: Demystifying the secret mission of enhancers: linking distal regulatory elements to target genes. Crit Rev Biochem Mol Biol. 2015; 50(6): 550–573. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYao L, Shen H, Laird PW, et al.: Inferring regulatory element landscapes and transcription factor networks from cancer methylomes. Genome Biol. 2015; 16(1): 105. PubMed Abstract | Publisher Full Text | Free Full Text
[ { "id": "14695", "date": "18 Jul 2016", "name": "Kyle Ellrott", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review comes at a very inopportune moment. The entire software pipeline is based on the TCGAbiolinks tool kit, which downloads files from the TCGA DCC service. Unfortunately, just as this paper was being sent for review, the NCI began its migration to the GDC service. This means that all of the data services at the TCGA DCC data portal (https://tcga-data.nci.nih.gov) are no longer active, and users are being directed to the GDC NCI data portal (https://gdc.nci.nih.gov/). This means that the TCGAbiolinks tool kit is broken. My attempts to run some of the examples listed in the paper were stopped by this issue. Many parts of TCGAbiolinks will have to be fixed and/or re-written to deal with this change. This isn’t the fault of the authors, but it does render the examples and pipelines described in the paper inoperable.\nThere are a few technical issues that need to be addressed. The package ‘RTCGAtoolbox’ is mentioned, but actually its name is ‘RTCGAToolbox’ (lower-case T to upper-case), and because the Bioconductor website is case-sensitive the provided URL ( http://bioconductor.org/packages/RTCGAtoolbox/ ) doesn’t work. On page 3, ‘TCGA Wikipedia’ should be ‘TCGA Wiki’.
In the methods section on page 3, where the various data levels of TCGA are explained, one of the most important aspects of the different levels is access requirements. Data levels 1 and 2 cannot be accessed without dbGaP authorization, while levels 3 and 4 are generally available for public access.\n\nThere is also the question of whether or not this paper fits the criteria of the Software Tool Article guidelines. The authors have presented a loose set of examples that utilize various existing, and previously published, tools. In the guidelines of F1000Research's Software Tool Articles, the criterion for a paper is “We welcome the description of new software tools. A Software Tool Article should include the rationale for the development of the tool and details of the code used for its construction.”  The authors refer to the code included in the paper as a workflow, but it reads more like a series of point examples that the reader can copy and alter for their own research. In introducing a new piece of software, one expects that the authors of the article are responsible for the software being presented. And while this article mentioned an extensive number of R packages, I believe only one of them, TCGAbiolinks, was written by the authors, and it was published last year. While the analysis they present is very detailed and covers an expansive number of topics, I’m not sure if this should be classified as a ‘Software Tool Article’. On my first read-through, my assumption was that the authors were responsible for all of the tools mentioned in the abstract, and this wasn’t clarified in the text. For example, in the conclusion section on page 43, they include the phrase ‘We introduced TCGAbiolinks and RTCGAtoolbox bioconductor packages in order to illustrate how one can acquire TCGA specific data’; it wasn’t until later that I realized that the author of the RTCGAToolbox wasn’t in the author list of the paper.\n\nOf course, this is at the discretion of the editors.
I feel that this material would be better presented as a review article covering the different methods of TCGA data analysis, with better notation and attribution of the source of the different pieces of software.", "responses": [ { "c_id": "2391", "date": "29 Dec 2016", "name": "Tiago Chedraoui Silva", "role": "Author Response", "response": "Dear Kyle Ellrott, Thank you for your comments and suggestions. We made several changes in version 2 of the workflow; some of the changes and answers to your points are below: The TCGAbiolinks package was entirely redesigned to query, download and prepare data from the GDC NCI data portal (https://gdc.nci.nih.gov/). All the code is working and we will soon submit the workflow to Bioconductor. For the moment, the RMarkdown can be found in our Github repository (https://github.com/BioinformaticsFMRP/f1000_TCGA_Workflow/blob/master/f1000.Rmd).   We also highlighted the difference between open (TCGA level 3 and 4 data) and controlled data (TCGA level 1 and 2 data) and we added some useful sources that can help the user request access to the controlled data.   Despite the description of a software article, we chose this article type simply because other workflows had already made that choice and we did not find any other category that better described this type of article. In addition, our main focus was on using the tools (in a reasonable time) rather than analyzing the results. The analysis itself can be verified in some articles from our group; for example, the DNA methylation analysis was described in Ceccarelli, Michele, et al. \"Molecular profiling reveals biologically discrete subsets and pathways of progression in diffuse glioma.\" Cell 164.3 (2016): 550-563, as well as the analysis performed by the ELMER tool, which is described in Yao, Lijing, et al. \"Inferring regulatory element landscapes and transcription factor networks from cancer methylomes.\" Genome biology 16.1 (2015): 1.   
We added to \"Author contributions\" that we are the authors from the R/Bioconductor packages TCGAbiolinks and GAIA. Also, I'm working on a new version of the ELMER package.   We corrected links and references such as RTCGAToolbox." } ] }, { "id": "15858", "date": "21 Sep 2016", "name": "Elena Papaleo", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAs it has been pointed out already by the first reviewer, it is important to verify that the pipeline is updated according to the data migration to the GCD server.\nApart from this, I find this manuscript very nice and the workflow an important contribution in the field, allowing to make accessible large datasets with genomics profiling of cancer patients to the community and even for not advanced R-users. I would like to compliment with the authors to make such a comprehensive workflow available.\nSome minor points before final publication is indexed:\nThe main issue with this manuscript is that it will need an extensive revision by a native English speaker since some sentences are hard to read and very often they risk to convey the wrong message or to be misinterpreted. One example from the abstract, \"we provide a series of biologically focused integrative downstream analyses of different molecular data\" includes to many adjectives. 
A few lines below, \"we provide a workplan to identify candidate biologically relevant functional epigenomic elements associated with cancer\"; this sentence does not sound right in English.\n\nAlso be careful with the style used to refer to the software or Bioconductor packages in the text, since sometimes they are in italics and other times they are not.\n\nThere is also a lot of redundancy in some of the statements in the introduction that, if removed, will improve the readability. For example, the sentence at the end of the first paragraph of the introduction starting with \"Here we describe a comprehensive workflow that integrates many bioconductor packages...\" is probably not needed since the same concept is provided at the end of the introduction on the same page.\n\nIn the Introduction, when the TCGA data are illustrated, I would also mention metastatic samples (they are there, aren't they? or maybe I don't recall correctly...)\n\nI suggest changing the title of the first section of the methods from 'Experimental data' to 'Access to the data' or something similar so that it will better reflect the content of the paragraph.\n\nI felt that sometimes the manuscript sounds more like a user guide than an article, so I would suggest taking as an example the section on methylation to improve the discussion of the analysis aims and outcomes in the other paragraphs.\n\nAlso please be careful to make things accessible to all readers; for example, at page 4, the authors introduce the GISTIC data without explaining them in detail.\n\nThe sentence at page 4, \"the data used in this workflow are published data and freely available\", might be redundant, so the authors could remove it.\n\nPage 4: the summarizedExperiment object is missing the reference to the original paper about the summarizedExperiment format.\n\nPage 5: in the text the authors illustrate assay, rowRanges and colData but in the example the order is different, i.e. 
assay, colData, rowRanges - I would suggest keeping the order consistent between the main text and the examples (this also holds for other places in the paper).\n\nI am not sure I can understand the sentence at page 5, \"the users should use all the follow up files in your analyses, not just the latest version\".\n\nPage 6: the sentence beginning with \"Finally, the Cancer Genome Atlas (TCGA) Research Network has reported integrated genome-wide studies of various diseases, in what is called 'Pan Cancer'...\" I am not sure fits within the paragraph, where the main focus is on subtypes. It sounds a bit confusing for the reader.\n\nNot sure that Listing 5 fits there either... why not mention it before, together with the other summ.exp. accessors.\n\nIn the writing it is important that the authors really pay attention to conveying the message that this is an integrative approach... at the very first reading of the manuscript I felt that it was more a list of tools, and the integrative part of it was missing\n\nPage 18: a reference is missing for the AAIC method and an explanation to the reader about the choice of the 0.6 correlation cutoff.\n\nIn the list at page 18 with the normalization, I believe that there are some repetitions (i.e. global scaling and full-quantile appear twice; there are also some typos).\n\nPage 19, I might have missed something but I cannot find in the main text any explanation of the DEA pipeline, and more comments would be needed there for a new user to really appreciate it and its importance.\n\nPage 20 on the second line I believe a ). 
is missing.\n\nOn PEA, I was wondering why, for example, the R package for Reactome was not integrated here, and if the authors could comment on this.\n\nPage 23, why for protein-protein interactions is only BioGrid used and not, for example, a more comprehensive resource such as I2D (Interologous Interaction Database)?", "responses": [ { "c_id": "2392", "date": "29 Dec 2016", "name": "Tiago Chedraoui Silva", "role": "Author Response", "response": "Dear Elena Papaleo,   First, we would like to thank you for your review and for providing detailed feedback on our workflow. We have made several changes in this new version (2) of the workflow. The changes and responses of all authors to these points are described below: \"As it has been pointed out already by the first reviewer, it is important to verify that the pipeline is updated according to the data migration to the GDC server.\" The TCGAbiolinks package was entirely redesigned to query, download and prepare data from the GDC NCI data portal (https://gdc.nci.nih.gov/). All the code is working and we will soon submit the workflow to Bioconductor. For the moment, the RMarkdown can be found in our Github repository (https://github.com/BioinformaticsFMRP/f1000_TCGA_Workflow/blob/master/f1000.Rmd).   \"Also be careful with the style used to refer to the software or Bioconductor packages in the text, since sometimes they are in italics and other times they are not.\" We applied the following pattern: package names in normal type with a link to the package; function and object names in italics.   \"I felt that sometimes the manuscript sounds more like a user guide than an article, so I would suggest taking as an example the section on methylation to improve the discussion of the analysis aims and outcomes in the other paragraphs.\" Our main focus was on using the tools (in a reasonable time) rather than analyzing the results. 
The analysis itself can be verified in some articles from our group; for example, the DNA methylation analysis was described in Ceccarelli, Michele, et al. \"Molecular profiling reveals biologically discrete subsets and pathways of progression in diffuse glioma.\" Cell 164.3 (2016): 550-563, as well as the analysis performed by the ELMER tool, which is described in Yao, Lijing, et al. \"Inferring regulatory element landscapes and transcription factor networks from cancer methylomes.\" Genome biology 16.1 (2015): 1.   \"In the writing it is important that the authors really pay attention to conveying the message that this is an integrative approach... at the very first reading of the manuscript I felt that it was more a list of tools, and the integrative part of it was missing\" We agree with you. The article starts with specific analyses, which might cover 65% of the content, and unfortunately this might lead to the feeling of a missing integrative approach. Also, we decided not to go much deeper into the analyses because they already exist in the referenced articles, and we focused more on the execution of the analyses with the tools of the Bioconductor project, something that was not covered by the cited articles, which might also give the feeling of a \"list of tools\". But the integrative analysis is shown strongly in the last two sections. We may highlight that ELMER, for example, uses DNA methylation, gene expression, histone marks, motif information and clinical data for its analysis. Also, after the DNA methylation analysis available in the TCGAbiolinks package, we do a motif analysis on those regions and integrate the Roadmap histone marks data to evaluate them. Finally, another integrative analysis presented in the TCGAbiolinks package is the starburst plot, which integrates the differential expression analysis results with the DNA methylation results, and we try to identify whether the nearest gene might have been affected by the change in DNA methylation.   
\"Page 18: a reference is missing for the AAIC method and an explanation to the reader about the choice of the 0.6 correlation cutoff.\" Array-array intensity correlation plot (AAIC) is a re-adaptation of a affyQCReport::correlationPlot working with data from gene expression of class RangedSummarizedExperiment as output of GDCprepare.Reference from https://www.bioconductor.org/packages/devel/bioc/vignettes/affyQCReport/inst/doc/affyQCReport.pdf. AAIC shows an heat map of the array-array Spearman / Pearson rank correlation coefficients. The arrays are ordered using the phenotypic data (if available) in order to place arrays with similar samples adjacent to each other. Self-self correlations are on the diagonal and by definition have a correlation coefficient of 1.0. Data from similar tissues or treatments will tend to have higher coefficients. This plot is useful for detecting outliers, failed hybridizations, or mistracked samples. The 0.6 threshold came out from unsupervised hierarchical clustering with ward methodology that obtained distinct groups of samples and first one had pearson correlation lower than 0.6 considering them as possible outliers.   \"Page 19, I might have missed something but I cannot find in maintext any explanation of the DEA pipeline and more comments would be needed there for a new user to really appreciate it and its importance.\" For DEA pipeline we used the TCGAanalyze_DEA function from our package TCGAbiolinks, that allows user to perform Differentially expression analysis (DEA), using edgeR package to identify differentially expressed genes (DEGs). It is possible to do a two-class analysis. TCGAanalyze_DEA performs DEA using following functions from edgeR: edgeR::DGEList converts the count matrix into an edgeR object. edgeR::estimateCommonDisp each gene gets assigned the same dispersion estimate. edgeR::exactTest performs pair-wise tests for differential expression between two groups. 
edgeR::topTags takes the output from exactTest(), adjusts the raw p-values using the False Discovery Rate (FDR) correction, and returns the top differentially expressed genes. \"On PEA, I was wondering why, for example, the R package for Reactome was not integrated here, and if the authors could comment on this.\" Thank you for this suggestion; we are working to integrate Reactome as a source of genes annotated within pathways. Our PEA provides only one plot showing Gene Ontology terms and pathways enriched by a list of genes, to give an overview of the top biological functions and pathways altered by the molecules inside the gene signature. We focused mainly on Bioconductor packages, but in particular we are interested in adding wrappers for some functionalities of the ReactomePA package.   \"Page 23, why for protein-protein interactions only BioGrid is used and not for example a more comprehensive resource such as I2D (Interologous Interaction Database).\" Different sources for protein-protein interactions are available. We used BioGrid as an example, but the users can choose their preferred database, such as I2D. Regarding the other points: We revised the text to correct links and references and to remove redundancies. We added metastatic samples as one of the available sample types. We changed the name of the section 'Experimental data' to 'Access to the data'. We included a paragraph introducing GISTIC data." } ] } ]
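The FDR step described in the author response above (edgeR::topTags adjusting raw p-values with the False Discovery Rate correction) is the Benjamini-Hochberg procedure. As a minimal, self-contained sketch of that adjustment, written in Python rather than R purely for illustration, with a hypothetical function name `bh_adjust` and made-up p-values:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment, the correction that
    edgeR::topTags applies to raw p-values (illustrative sketch)."""
    n = len(pvals)
    # indices of the p-values in ascending order
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    # of the adjusted values and capping them at 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

# e.g. bh_adjust([0.005, 0.1]) -> [0.01, 0.1]
```

In R the equivalent is p.adjust(pvals, method = "BH"), which is what edgeR uses under the hood.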
1
https://f1000research.com/articles/5-1542
https://f1000research.com/articles/5-2925/v1
28 Dec 16
{ "type": "Research Article", "title": "Effects of sub-lethal teratogen exposure during larval development on egg laying and egg quality in adult Caenorhabditis elegans", "authors": [ "Alexis Killeen", "Caralina Marin de Evsikova", "Alexis Killeen" ], "abstract": "Background: Acute high dose exposure to teratogenic chemicals alters the proper development of an embryo leading to infertility, impaired fecundity, and few viable offspring. However, chronic exposure to sub-toxic doses of teratogens during early development may also have long-term impacts on egg quality and embryo viability. Methods: To test the hypothesis that low dose exposure during early development can impact long-term reproductive health, Caenorhabditis elegans larvae were exposed to 10 teratogens during larval development, and subsequently were examined for the pattern of egg-laying and egg quality (hatched larvae and embryo viability) as gravid adults. After the exposure, adult gravid worms were transferred to untreated plates and the numbers of eggs laid were recorded every 3 hours, and the day following exposure the numbers of hatched larvae were counted. Results: While fecundity and fertility were typically impaired by teratogens, unexpectedly, many teratogens initially increased egg-laying at the earliest interval compared to control but not at later intervals. However, egg quality, as assessed by embryo viability, remained the same because many of the eggs (<50%) did not hatch. 
Conclusions: Chronic, low dose exposures to teratogens during early larval development have subtle, long-term effects on egg laying and egg quality.", "keywords": [ "Arsenic", "Benzo-α-pyrene", "Biocides", "Bisphenol A", "Cadmium", "Cigarette smoke", "Combustion Pollutants", "Diethylstilbestrol", "Egg Laying", "Egg Hatching", "Egg Viability", "Endocrine Disruptors", "Fenthion", "Nicotine", "Tributyltin", "Triclosan" ], "content": "Introduction\n\nTeratogens are agents that negatively impact reproduction and embryonic development and include radiation, maternal infections, pharmaceuticals, and chemicals (Wilson, 1973). Numerous chemicals act as teratogens that adversely affect human health, with the time period of exposure as a critical factor determining teratogen susceptibility. For instance, maternal and prenatal teratogen exposure is associated with birth defects, spontaneous abortion, and stillbirth, and sometimes cancer in the reproductive tract of progeny (Reed et al., 2013; Sanders et al., 2014; Wigle et al., 2008). Previous studies demonstrated that acute, high-dose teratogen exposure causes reproductive decline, but the long-term ramifications later in life of low dose teratogen exposure during early development remain unknown (Allard & Colaiácovo, 2010; Parodi et al., 2015). In this study, we used egg-laying, hatching, and offspring viability assays as a phenotyping screen after exposing Caenorhabditis elegans throughout early development to sub-lethal doses of three classes of teratogens, including biocides, endocrine disruptors, and combustion pollutants, to detect impacts on reproductive phenotype in adults.\n\nIn addition, identification of the time frame yielding maximal egg-laying after teratogen exposure is critical for behavioral and developmental experiments requiring an age-matched offspring population. 
The results of our study guide experimental procedures, employing a 3-hour egg-laying time window, to obtain a developmentally synchronous, age-matched offspring population of the requisite size for subsequent experiments assessing long-term effects.\n\n\nMethods\n\nThe N2 ancestral Bristol strain of C. elegans (CGC, Minneapolis, MN, USA) was maintained on NGM lite plates (N1005, US Biologicals, Salem, MA, USA) with bacterial lawns at 25±0.5°C for all experiments. Bacterial lawns were grown on Petri dishes (10, 60, 100 mm Corning, USA) overnight using 10, 20 or 60 µl of 1X E. coli OP50 (CGC, Minneapolis, MN, USA; 1X = OD600 = 8.0×10^8 cells/mL).\n\nTeratogens, except for cigarette smoke extract, were purchased from Sigma Aldrich (St. Louis, MO, USA): tributyltin-chloride (0.1 µM, T50202), cadmium-chloride (0.5 µM, C2554), benzo-α-pyrene (0.5 µM, B1760), nicotine (0.5 µM, N-3876), bisphenol-A (10 µM, 239658), diethylstilbestrol (10 µM, D4628), arsenic(III) oxide (0.5 µM, A1010), triclosan (0.1 µM, PHR1338), fenthion (0.1, 1 µM, 36552). Cigarette smoke extract (0.1 µM) was purchased from Murty Pharmaceuticals (Lexington, KY, USA). Stock solutions of teratogens were made in DMSO or water, and final concentrations were adjusted to contain 0.01% DMSO (ACS grade, Fisher Scientific, USA). Sub-lethal doses were chosen based on preliminary studies (unpublished McIntyre, Killeen & Marín de Evsikova).\n\nThe experimental design for all studies is depicted in Figure 1. A mixed population of worms was used to prepare a developmentally synchronized population at the L1 larval stage (Stiernagle, 2006). All plates were coded to prevent experimenter bias. Larvae were transferred onto vehicle or teratogen exposure plates (with bacterial lawns) and cultivated for 48 hours at 25°C (Figure 1). Adult worms from each group were washed 3 times in M9 buffer and transferred into a well in the first row of a 24-well NGM plate. 
These worms were moved to an adjacent well every three hours for 12 hours. Laid eggs and, the following day, hatched embryos were counted. Embryo viability was determined as the ratio of hatched larvae to the number of eggs. Experiments were repeated six times (n=60 worms/group, yielding N=660 worms), except for fenthion, which was repeated twice at 1 µM (n=20); after unexpectedly curtailed egg-laying, the dose was decreased to 0.1 µM (n=40). Statistically significant differences were determined, after outlier analysis, using SPSS v. 23 software (IBM, USA) by ANOVA followed by post hoc tests, or by nonparametric tests (Χ2 or Fisher’s Exact Test), for fecundity, fertility, eggs laid, hatched larvae, and embryo viability among all groups, with significance set at p<0.05.\n\nAn age-synchronized population of C. elegans L1 larvae was exposed to 10 known teratogens at sub-lethal, micromolar concentrations for 48 hours. Gravid adult worms were transferred to pure NGM with bacterial lawns and the number of eggs laid was recorded every 3 hours up to 12 hours. Hatched larvae were counted the following day.\n\n\nResults\n\nFecundity and fertility, measures of the reproductive fitness of the organism, were decreased after worms were exposed to the biocides triclosan and fenthion, and the combustion pollutant benzo-α-pyrene (BAP; Χ2= 11.27, 5.21, and 83.1, p<0.05, respectively, Figure 2A & B). Despite the overall detrimental effect on fecundity and fertility, the initial temporal pattern of egg laying was increased by some teratogens. The cumulative number of eggs laid, irrespective of teratogen or vehicle, increased over time except for the 1 µM dose of fenthion (Figure 3A, F(3,162)=119, p<0.05). Furthermore, chronic larval exposure to some low dose teratogens, such as nicotine, cadmium, and tributyltin, increased egg laying and cumulatively produced more eggs than vehicle control (Figure 3A). 
Unexpectedly, many teratogens, except for arsenic and benzo-α-pyrene, produced more eggs during the 0–3 hour interval compared to vehicle (Figure 3B), although this interval had the overall lowest yield of eggs. While the greatest amount of egg-laying occurred during the 9–12 hour interval compared to the 0–3 hour interval, no differences in the number of eggs laid were detected among the 3–6 hour, 6–9 hour, and 9–12 hour intervals (p>0.05). Hatched larvae, unlike eggs, did not increase at any interval after teratogen exposure (Figure 4B, F(33,162)=1.051), but total hatched larvae increased cumulatively by 12 hours (F(3,162)=24.8, p<0.05). Despite these increases, egg quality did not improve after teratogen exposure, as overall embryo viability was similar across groups and time points (Figure 5A, F(33,162)=1.015, p>0.4), with the lowest viability at the first interval (Figure 5B), albeit not significant.\n\nEffects of teratogens on fecundity (A) and fertility (B). The overall percentage of worms that laid eggs is shown as fecundity and the overall percentage of viable and dead eggs as a measure of fertility. Asterisks indicate p<0.05.\n\nTemporal pattern of (A) cumulative eggs laid at 3, 6, 9 and 12 hours post-exposure, and (B) rate of egg laying in 3-hour intervals (mean ± sem). Asterisks indicate p<0.05.\n\nTemporal pattern of (A) cumulative hatched larvae from eggs laid at 3, 6, 9 and 12 hours post-exposure to teratogens and (B) number of hatched larvae per teratogen group or control in 3-hour intervals (mean ± sem). Asterisks indicate p<0.05.\n\nTemporal pattern of (A) overall embryo viability at 3, 6, 9 and 12 hours post-exposure to teratogens and (B) embryo viability in 3-hour intervals. Embryo viability was calculated as the ratio of hatched larvae/eggs laid × 100% (mean ± sem). 
Asterisks indicate p<0.05.\n\n\nDiscussion\n\nEarly larval exposure to low dose teratogens alters egg-laying without improving embryo viability at later time points, which results in some overall detrimental impacts on fecundity and fertility. An improvement in embryonic viability at the later time intervals was expected because adult hermaphrodites replenish their entire gonad every 6.5 hours (Hirsh et al., 1976), which implies that eggs laid during or after the 6–9 hour interval would not have been exposed to teratogens, and that egg-laying and egg quality should have increased at the latter two intervals compared to the earlier intervals of 0–3 and 3–6 hours. While egg-laying improved during the 9–12 hour interval, egg quality, as measured by embryo viability, did not improve compared to either the 0–3 or 3–6 hour interval. Due to the specifics of gametogenesis in C. elegans, it is possible that teratogens may affect egg quality by altering sperm cell quality (Hirsh et al., 1976). It is also possible that chronic exposure to teratogens may be exacerbated by their extended bioactivity, or in some cases, through the actions of their metabolites. These results indicate that greater egg harvests are necessary to obtain an offspring population to further assess long-term teratogen effects on offspring.\n\nThis procedure using C. elegans not only identifies sub-lethal teratogen concentrations with potential long-term effects upon egg laying and quality, it provides the experimental platform to expand knowledge to prevent birth defects by developing interventions to ameliorate chronic subtoxic, teratogenic exposures upon the embryo, and is an initial step towards designing and testing safe therapeutics to be used before and during gestation. Developing phenotyping screens with simple organisms, such as C. 
elegans, is an efficient way to identify putative teratogens with potential long-term effects to further knowledge on the underlying developmental processes susceptible to environmental insults, and reveals the basis of how environment affects and shapes development.\n\n\nData availability\n\nF1000Research: Dataset1. Dataset of screening sub-lethal teratogens on egg laying, hatching and viability in adult C. elegans during larval development, 10.5256/f1000research.8934.d124253 (Killeen & Marín de Evsikova, 2016).", "appendix": "Author contributions\n\n\n\nAK and CM conceived the study and designed the experiments. AK executed and analyzed the experiments. AK and CM drafted the manuscript. Both authors were involved in the revision of the draft and agreed to the final manuscript.\n\n\nCompeting interests\n\n\n\nThe authors have no competing interests to disclose.\n\n\nGrant information\n\nC. elegans N2 ancestral strain and E. coli OP50 strain was provided by the CGC, which is funded by NIH Office of Research Infrastructure Programs (P40 OD010440).\n\n\nAcknowledgements\n\nThe authors thank Shpetim Karandrea for coding the groups to collect the data blind with respect to teratogen or control exposure.\n\n\nReferences\n\nAllard P, Colaiácovo MP: Bisphenol A impairs the double-strand break repair machinery in the germline and causes chromosome abnormalities. Proc Natl Acad Sci U S A. 2010; 107(47): 20405–20410. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHirsh D, Oppenheim D, Klass M: Development of the reproductive system of Caenorhabditis elegans. Dev Biol. 1976; 49(1): 200–219. PubMed Abstract | Publisher Full Text\n\nKilleen A, Marín de Evsikova C: Dataset 1 in: Screening sub-lethal teratogens during larval development for long-term effects on egg laying, hatching, and viability in adult Caenorhabditis elegans. F1000Research. 2016. 
Data Source\n\nParodi DA, Sjarif J, Chen Y, et al.: Reproductive toxicity and meiotic dysfunction following exposure to the pesticides Maneb, Diazinon and Fenarimol. Toxicol Res (Camb). 2015; 4(3): 645–654. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReed CE, Fenton SE: Exposure to Diethylstilbestrol during Sensitive Life Stages: A legacy of Heritable Health Effects. Birth Defects Res C Embryo Today. 2013; 99(2): 134–146. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSanders AP, Desrosiers TA, Warren JL, et al.: Association between arsenic, cadmium, manganese, and lead levels in private wells and birth defects prevalence in North Carolina: a semi-ecologic study. BMC Public Health. 2014; 14(14): 955. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStiernagle T: Maintenance of C.elegans. WormBook. ed. The C.elegans Research Community, WormBook, 2006; 1–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWigle DT, Arbuckle TE, Turner MC, et al.: Epidemiologic Evidence of Relationships Between Reproductive and Child Health Outcomes and Environmental Chemical Contaminants. J Toxicol Environ Health B Crit Rev. 2008; 11(5–6): 373–517. PubMed Abstract | Publisher Full Text\n\nWilson JG: Environment and Birth Defects (Environmental Science Series). London: Academic Press; 1973." }
[ { "id": "19052", "date": "06 Jan 2017", "name": "Keith T. Jones", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well conducted experiment examining the influence of various teratogens on the egg laying rate and embryo quality, and progeny, in C. elegans. There is an expected fall off in the fertility of worms exposed, however the actual rate of egg laying on initial exposure was raised.\nMy only comment would be that egg laying in worms is associated with/ needs worm movement. Thus you can get a ‘bag of worms’ phenotype if all movement is blocked. Hypothetically worm irritancy by noxious exposure in itself may promote movement, which then raises egg laying rate. It may be worthwhile commenting on this as a possibly explanation.", "responses": [] }, { "id": "18823", "date": "16 Jan 2017", "name": "Keith P. Choe", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a study providing proof of principle for an approach to screening potential teratogens using the nematode model C. elegans.  
The small size, simple culturing methods, and genetic tractability of C. elegans make it useful for rapid screening of compounds for biological activity and for identifying genetic factors that contribute to compound mode of action.\nThe results demonstrate the ability of the assay to detect compound effects on egg production and embryo viability. The authors might comment on embryo viability; the percentage of eggs hatching is pretty low (generally under 50%, even with vehicle control). Embryo viability on agar plates is usually far higher.\nA useful modification of the assay would be to extend monitoring over the whole egg-laying period of 3-4 days, to measure total eggs laid and the latency of recovery of egg production and embryo viability.", "responses": [] } ]
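The assay in the record above scores embryo viability as hatched larvae / eggs laid × 100% and compares groups with nonparametric tests such as Fisher's exact test (the authors used SPSS). As a minimal, self-contained sketch of the same computations, with hypothetical function names and made-up counts (not the paper's data):

```python
from math import comb

def viability(hatched, laid):
    """Embryo viability as defined in the paper: hatched larvae / eggs laid x 100%."""
    return 100.0 * hatched / laid

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]
    (e.g. hatched/unhatched counts in control vs. treated groups):
    sum the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed the observed table's."""
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def prob(x):  # P(first cell == x) under fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance guards against float round-off when comparing probabilities
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

For example, viability(25, 60) is roughly 41.7%. In practice scipy.stats.fisher_exact computes the same two-sided p-value; the hand-rolled version here only avoids the dependency.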
1
https://f1000research.com/articles/5-2925
https://f1000research.com/articles/5-2911/v1
23 Dec 16
{ "type": "Opinion Article", "title": "The novel POSEIDON stratification of ‘Low prognosis patients in Assisted Reproductive Technology’ and its proposed marker of successful outcome", "authors": [ "Peter Humaidan", "Carlo Alviggi", "Robert Fischer", "Sandro C. Esteves", "Peter Humaidan", "Carlo Alviggi", "Robert Fischer" ], "abstract": "In reproductive medicine little progress has been achieved regarding the clinical management of patients with a reduced ovarian reserve or poor ovarian response (POR) to stimulation with exogenous gonadotropins -a frustrating experience for clinicians as well as patients. Despite the efforts to optimize the definition of this subgroup of patients, the existing POR criteria unfortunately comprise a heterogeneous population and, importantly, do not offer any recommendations for clinical handling. Recently, the POSEIDON group (Patient-Oriented Strategies Encompassing IndividualizeD Oocyte Number) proposed a new stratification of assisted reproductive technology (ART) in patients with a reduced ovarian reserve or unexpected inappropriate ovarian response to exogenous gonadotropins. In brief, four subgroups have been suggested based on quantitative and qualitative parameters, namely, i. Age and the expected aneuploidy rate; ii. Ovarian biomarkers (i.e. antral follicle count [AFC] and anti-Müllerian hormone [AMH]), and iii. Ovarian response - provided a previous stimulation cycle was performed. The new classification introduces a more nuanced picture of the “low prognosis patient” in ART, using clinically relevant criteria to guide the physician to most optimally manage this group of patients. The POSEIDON group also introduced a new measure for successful ART treatment, namely, the ability to retrieve the number of oocytes needed for the specific patient to obtain at least one euploid embryo for transfer. 
This feature represents a pragmatic endpoint to clinicians and enables the development of prediction models aiming to reduce the time-to-pregnancy (TTP). Consequently, the POSEIDON stratification should not be applied for retrospective analyses having live birth rate (LBR) as endpoint. Such an approach would fail as the attribution of patients to each Poseidon group is related to specific requirements and could only be made prospectively. On the other hand, any prospective approach (i.e. RCT) should be performed separately in each specific group.", "keywords": [ "Assisted Reproductive Technology", "Diagnosis", "Gonadotropins", "Group POSEIDON", "Ovarian stimulation", "Embryo aneuploidy", "Poor ovarian response", "Prognosis." ], "content": "\n\nThe management of patients with an impaired ovarian reserve or poor ovarian response (POR) to exogenous gonadotropin stimulation has challenged reproductive specialists for decades. Apart from limited understanding of the pathophysiology, wide heterogeneity exists in the definition of the poor responder patient as well as overall disappointing outcomes in assisted reproductive technology (ART) (Papathanasiou et al., 2016).\n\nThe POSEIDON group (Patient-Oriented Strategies Encompassing IndividualizeD Oocyte Number) was recently established to focus specifically on the diagnosis and management of low prognosis patients (Poseidon Group, 2016). Composed of reproductive endocrinologists and reproductive medicine specialists from 7 countries [Carlo Alviggi (Italy), Claus Y. Andersen (Denmark), Klaus Buhler (Germany), Alessandro Conforti (Italy), Giuseppe de Placido (Italy), Sandro C. Esteves (Brazil), Robert Fischer (Germany), Daniela Galliano (Spain), Nikolaos P. Polyzos (Belgium), Sesh K. Sunkara (United Kingdom), Filippo M. 
Ubaldi (Italy), and Peter Humaidan (Denmark)] with long-standing clinical and/or research experience, the POSEIDON group in an opening paper proposed a new stratification to classify infertility patients with a reduced ovarian reserve or unexpected inappropriate ovarian response to exogenous gonadotropins (Poseidon Group, 2016). In brief, four subgroups have been suggested based on quantitative and qualitative parameters, namely, i. Age and the expected aneuploidy rate; ii. Ovarian biomarkers (i.e. antral follicle count [AFC] and anti-Müllerian hormone [AMH]), and iii. Ovarian response - provided a previous stimulation cycle was performed (Figure 1). The POSEIDON group also introduced a new measure for successful ART treatment, namely, the ability to retrieve the number of oocytes needed for the specific patient to obtain at least one euploid embryo for transfer.\n\nAFC: antral follicle count; AMH: anti-Müllerian hormone. Adapted with permission from Elsevier; Poseidon Group (Patient-Oriented Strategies Encompassing IndividualizeD Oocyte Number)., Alviggi C, Andersen CY, Buehler K, Conforti A, De Placido G, Esteves SC, Fischer R, Galliano D, Polyzos NP, Sunkara SK, Ubaldi FM, Humaidan P. A new more detailed stratification of low responders to ovarian stimulation: from a poor ovarian response to a low prognosis concept. Fertil Steril. 2016 Jun;105(6):1452–3.\n\nFollowing its publication earlier this year (Poseidon Group, 2016), the POSEIDON stratification system has sparked interest among infertility practitioners. Here, we expand the discussion as to why the new concept has been proposed, providing new and important information as below.\n\nFirst, it is clear that the major players involved in the complex POR equation are not fully satisfied with the existing classification criteria. 
Taking the scholarly perspective, for instance, until now more than 70 randomized controlled trials (RCTs) have compared interventions in poor responders using a wide range of definitions, including the most recent Bologna criteria (Ferraretti et al., 2011; Papathanasiou et al., 2016). Among the trials registered in www.clinicaltrials.gov until November 2016, 44 were specific to POR. However, analyzing the results of completed trials and the published literature, the overall conclusion is that there is insufficient evidence to support the routine use of any particular intervention for POR. Thus, data indicate that the current classification criteria have been unable to discriminate patient subsets within the POR population who could benefit from specific interventions (Nagels et al., 2015; Pandian et al., 2010; Papathanasiou et al., 2016). A possible explanation is that the analysis of whole populations of POR with different baseline characteristics and, therefore, different prognosis in a given RCT may dilute the effect size.\n\nAlong the same lines, but taking the perspective of the clinician, a recent international survey showed that the most frequently used criterion to identify POR was the “number of follicles produced”, which surprisingly has been rarely included in the scholarly definition of POR (Patrizio et al., 2015). Moreover, due to the absence of efficient remedies, most practices do not use an evidence-based treatment for this category of patients (Patrizio et al., 2015). Lastly, according to the standpoint of the patient, RESOLVE (www.RESOLVE.org), a not-for-profit patient organization dedicated to providing education to couples suffering from infertility, classifies POR as women who require large doses of medication and who produce less than an optimal number of oocytes. 
This indicates that patients themselves have introduced a new element into the already complex POR equation, namely, suboptimal response to ovarian stimulation.\n\nSecondly, it is important to further discuss the issue of quantity versus quality regarding oocytes. Counting the number of oocytes retrieved, or estimating their number using ovarian biomarkers, is not by itself sufficient for clinical management. Equally important is the age-related decrease in oocyte quality, which largely depends on chromosomal abnormalities occurring prior to meiosis II (Sakakibara et al., 2015). Despite recognizing that other biochemical processes are also relevant to oocyte quality, the genetic competence of the oocyte is paramount as it affects the implantation potential of the resulting embryo. For instance, blastocyst euploidy rates of about 60% are observed in younger women (<35 years of age) undergoing ART whereas these numbers fall to 30% or lower in patients aged 40–42 (Ata et al., 2012). As a result, the age-related embryo aneuploidy rate dramatically changes the prognosis of women with the same oocyte yield as well as those with different oocyte yields.\n\nLastly, and most importantly, we wish to stress the new POSEIDON marker of successful outcome, i.e., the ability to retrieve the number of oocytes necessary to achieve at least one euploid embryo for transfer in each patient. We strongly believe this represents a more pragmatic endpoint for clinicians providing care to infertility patients. Furthermore, it opens the possibility of developing prediction models to help clinicians counsel and set patient expectations and establish a working plan to reduce the time-to-pregnancy (TTP). 
This is essential to avoid any misunderstanding regarding the POSEIDON concept, as the intention of the concept is to help guide clinicians through the medical management, and as such it should not be used in retrospective analyses having live birth rate (LBR) as an endpoint.\n\nWhile LBR is more appropriate for counseling purposes and designing RCTs, the POSEIDON concept is based on (i) a better stratification of women with \"low prognosis\" in ART, and (ii) individualized therapeutic approaches in each group, having as endpoint the number of oocytes required to have at least one euploid embryo for transfer in each patient. Essentially, the POSEIDON concept was designed to offer a practical endpoint to clinicians as it may help set a clear goal for management.\n\nObviously, retrospective analyses of previously structured databases can match patients to fit into POSEIDON subgroups. As an example, from an existing database (pre-POSEIDON) one might analyze the LBR of women >=35 years with low ovarian reserve (i.e., POSEIDON group 4). However, assuming commonly reported metaphase II rates (e.g. 75%), 2PN fertilization rates (e.g. 70%), blastulation rates (e.g. 45%), and blastocyst euploidy rates (e.g. 50%), approximately 12 oocytes are needed to obtain at least one euploid blastocyst for transfer in a given 36-year-old patient. Nevertheless, it is unlikely that this hypothetical patient was treated according to the POSEIDON concept, using an individualized therapeutic plan, based on the number of oocytes to obtain at least one euploid blastocyst. 
Hence, any analysis using LBR as an endpoint is valid only if patients were prospectively stratified as per POSEIDON groups and treated with the mindset of achieving the proposed POSEIDON marker of success.\n\nIn conclusion, in comparison with previously suggested models that define POR patients from a rigid standpoint and without any clinical guidance, the POSEIDON concept contemplates clinical recommendations with a new pragmatic endpoint, the number of oocytes needed to obtain one euploid embryo for transfer in each patient. We see this novel initiative as an important working and counseling tool for the ART specialist who handles the low prognosis patient.", "appendix": "Author contributions\n\n\n\nAll authors contributed equally to the preparation and revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nPH, CA, RF, and SE are members of the POSEIDON group.\n\n\nGrant information\n\nThis work is supported by institutional funding of Androfert Center to the Poseidon group.\n\n\nReferences\n\nAta B, Kaplan B, Danzer H, et al.: Array CGH analysis shows that aneuploidy is not related to the number of embryos generated. Reprod Biomed Online. 2012; 24(6): 614–620. PubMed Abstract | Publisher Full Text\n\nFerraretti AP, La Marca A, Fauser BC, et al.: ESHRE consensus on the definition of 'poor response' to ovarian stimulation for in vitro fertilization: the Bologna criteria. Hum Reprod. 2011; 26(7): 1616–1624. PubMed Abstract | Publisher Full Text\n\nNagels HE, Rishworth JR, Siristatidis CS, et al.: Androgens (dehydroepiandrosterone or testosterone) for women undergoing assisted reproduction. Cochrane Database Syst Rev. 2015; 26(11): CD009749. PubMed Abstract | Publisher Full Text\n\nPandian Z, McTavish AR, Aucott L, et al.: Interventions for 'poor responders' to controlled ovarian hyper stimulation (COH) in in-vitro fertilisation (IVF). Cochrane Database Syst Rev. 2010; 20(1): CD004379. 
PubMed Abstract | Publisher Full Text\n\nPapathanasiou A, Searle BJ, King NM, et al.: Trends in 'poor responder' research: lessons learned from RCTs in assisted conception. Hum Reprod Update. 2016; 22(3): pii: dmw001. PubMed Abstract | Publisher Full Text\n\nPatrizio P, Vaiarelli A, Setti L, et al.: How to define, diagnose and treat poor responders? Responses from a worldwide survey of IVF clinics. Reprod Biomed Online. 2015; 30(6): 581–592. PubMed Abstract | Publisher Full Text\n\nPoseidon Group (Patient-Oriented Strategies Encompassing IndividualizeD Oocyte Number), Alviggi C, Andersen CY, et al.: A new more detailed stratification of low responders to ovarian stimulation: from a poor ovarian response to a low prognosis concept. Fertil Steril. 2016; 105(6): 1452–3. PubMed Abstract | Publisher Full Text\n\nSakakibara Y, Hashimoto S, Nakaoka Y, et al.: Bivalent separation into univalents precedes age-related meiosis I errors in oocytes. Nat Commun. 2015; 6: 7550. PubMed Abstract | Publisher Full Text | Free Full Text" }
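The article's back-of-the-envelope oocyte calculation (75% metaphase II × 70% 2PN fertilization × 45% blastulation × 50% euploidy) can be made explicit. Below is a minimal sketch, assuming each retrieved oocyte independently yields a euploid blastocyst with the product of these stage rates; the function names and the 75% confidence threshold are illustrative assumptions, not part of the POSEIDON proposal:

```python
import math

def euploid_blast_per_oocyte(mii=0.75, fert_2pn=0.70, blastulation=0.45, euploidy=0.50):
    """Chance that a single retrieved oocyte ends up as a euploid blastocyst,
    assuming the four stage-specific rates quoted in the text are independent."""
    return mii * fert_2pn * blastulation * euploidy

def oocytes_needed(p, confidence=0.75):
    """Smallest n such that P(at least one euploid blastocyst) >= confidence,
    treating each oocyte as an independent Bernoulli trial with success rate p."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

p = euploid_blast_per_oocyte()   # 0.118125 per oocyte with the quoted rates
n = oocytes_needed(p)            # 12
```

With these rates, 12 oocytes give roughly a 78% chance of at least one euploid blastocyst, consistent with the "approximately 12 oocytes" figure in the text; a simple expected-value argument (1/0.118 ≈ 9 oocytes) would understate the number needed for reasonable confidence.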
[ { "id": "19116", "date": "23 Jan 2017", "name": "Giuliano Bedoschi", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nManuscript Review\n\nThe novel POSEIDON stratification of ‘Low prognosis patients in Assisted Reproductive Technology’ and its proposed marker of successful outcome\n\nF1000 Research\n\nThe premise of this opinion article was to expand the discussion on the POSEIDON stratification system, including new and important information about this new concept.\n\nFirst, the authors demonstrated the researchers' dissatisfaction with the existing criteria for the definition of poor ovarian response (POR). As discussed in the article, dozens of randomized clinical trials have been published using different criteria for the diagnosis of POR. This demonstrates the importance of elaborating new stratification criteria that takes into account relevant prognosis information for POR patients. As discussed by the authors, the fact that the different strategies proposed by the clinical trials do not show improvement of in vitro fertilization cycles outcomes could be related to the dilution of the effect size when using a broad POR classification. Strategies developed for more specific groups of patients may demonstrate different reproductive outcomes. This information is of utmost relevance, especially in patients diagnosed with POR. 
The division of patients into subgroups would allow a more refined and individualized strategy for specific groups of patients.\n\nSecondly, the age criterion is essential to estimate the prognosis of in vitro fertilization cycles, due to the increase in the rates of aneuploidy as the maternal age advances. The inclusion of this criterion for dividing patients into subgroups is of great value.\n\nThe POSEIDON marker of successful outcome, based on the number of oocytes necessary to achieve euploid embryo transfer, may assist reproductive specialists for counseling purposes.\n\nIn conclusion, the POSEIDON stratification concept presents several advantages when compared to previously described models. This facilitates the evaluation of strategies that could result in increased success of in vitro fertilization cycles for specific subgroups of patients. In addition, reproductive specialists would be able to better advise patients about their treatment prognosis.", "responses": [] }, { "id": "19119", "date": "30 Jan 2017", "name": "Ariel Weissman", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nFive years have passed since the publication of the Bologna criteria for the definition of poor ovarian response (POR) 1 and evidence is continuing to accumulate to assess their employment as well as limitations in the ART setting. This has been recently encapsulated in a contemporary critical appraisal 2. 
The POSEIDON group is making a sincere effort to introduce a more specific and higher resolution set of criteria in an attempt to guide the physician to most optimally manage this group of patients (3 and current F1000Research paper). There are, however, several reservations regarding the proposed stratification system and some clarifications that need to be made. Since age is perhaps the most important criterion for oocyte quality, embryo ploidy and therefore also prognosis, it is now used as a major criterion in the POSEIDON stratification, with 35 years being the age cut-off used. Thirty-five years is indeed the age where both aneuploidy rates begin to rise, and pregnancy rates begin to decline in many large ART data sets. The choice of 35 years as a cut-off should be briefly explained and backed up by references. A large amount of research has established antral follicle count (AFC) and anti-Müllerian hormone (AMH) as the most reliable and accurate ovarian reserve tests (ORT) in predicting ovarian response. Specific cut-off values of <1.2 ng/mL and <5 antral follicles have been selected for AMH and AFC, respectively, in contrast to the less clearly defined values in the Bologna criteria, ≤5-7 follicles for AFC and ≤0.5-1.1 ng/mL for AMH. How were the cutoffs of AMH and AFC reached/decided?\n\nThe standardization of measurement of both markers is still underway. AMH assays are in a constant process of precision improvement and automation. It is unclear for AMH whether \"one size fits all\"; for example, should two patients, one with AMH levels of 1 ng/mL and the other with undetectable AMH levels, be counseled and managed the same way? Furthermore, currently, a 36-year-old patient with regular cycles and POR may have the same undetectable AMH levels as a 50-year-old menopausal woman. Ultra-sensitive AMH assays are currently being developed, and whether they will have a role in better discrimination and prognostication of POR patients remains to be seen. 
Similarly, there is yet a lack of standardization of AFC measurements. The performance and resolution of ultrasound machines are being constantly improved, and standards should be set for optimal imaging requirements. The fact that the techniques for measurement of both AMH and AFC are still under development should be mentioned. The POSEIDON group classifies the retrieval of 4-9 oocytes as a suboptimal response. For many, including ourselves, 8-10 oocytes is considered a goal and a successful outcome of COS, in terms of both safety and efficacy, especially in young patients. Furthermore, Cai et al. 4 have recently demonstrated that similarly aged patients have similar pregnancy prospects after fresh embryo transfer when the same number and quality of embryos are replaced, irrespective of their number of oocytes retrieved. It is therefore questionable whether the range of 4-9 oocytes is again not too broad in terms of prognostication of outcome.\n\nThe authors claim that the Bologna classification criteria have been unable to discriminate patient subsets within the POR population who could benefit from specific interventions. It is still premature to decide this, as there are many studies based on the Bologna criteria that are still underway. Furthermore, a recent well-performed multi-center study has shown that in normal responders individualized treatment has similar efficacy to conventional ovarian stimulation 5. Whether the same is true in the POR population employing the Bologna or other criteria has yet to be explored.\n\nThe POSEIDON group also suggests that stimulation should be tailored according to \"the age-related embryo/blastocyst aneuploidy rate\" with the intention \"to retrieve the number of oocytes necessary to obtain at least one euploid embryo for transfer in each patient\". 
The corresponding references should be provided and nomograms constructed in order for the stratification to become practical and for clinicians to be able to proceed with its use.\n\nHaving a euploid blastocyst as a goal may be too simplistic, since many (in some publications >50%) euploid blastocysts never implant. Do the authors suggest extending embryo culture to the blastocyst stage, or just using one blastocyst as an end-point for calculation of the desired oocyte number per age group? It is questionable whether embryos in POR patients should be cultured at all to the blastocyst stage, or transferred as early as possible in the cleavage stage 6.\n\nThe POSEIDON stratification aims not only to define poor ovarian response but also to establish the prognosis for patients. It is likely that by using these criteria more homogenous populations can be established for clinical trials. However, it remains to be seen whether the management and outcome of POR patients undergoing ART can also be improved by their incorporation. It is possible that unless we witness a major breakthrough in the management of POR patients, the lay definition \"women who require large doses of medication and who produce less than an optimal number of oocytes\" used by patient organization Resolve (www.resolve.org) will be as good as any stratification system.\nIn summary, it is clear that the POSEIDON group offers an improved stratification for POR patients, which has the potential to improve study designs and fine tune prognostication. It remains to be seen whether the reproductive outcome of POR patients will also be improved following the new classification system. 
Perhaps it is time to set an international expert meeting in order to revise the Bologna criteria and establish new consensus criteria that would enhance both diagnosis and prognosis.", "responses": [ { "c_id": "2575", "date": "22 Mar 2017", "name": "Sandro C Esteves", "role": "Author Response", "response": "The Poseidon stratification of ’Low Prognosis patients’ in Assisted Reproductive Technology is on the right track. Response to: Weissman A and Younis JS. Referee Report For: The novel POSEIDON stratification of ‘Low prognosis patients in Assisted Reproductive Technology’ and its proposed marker of successful outcome [version 1; referees: 2 approved, 1 approved with reservations]. F1000Research 2016, 5:2911 (doi: 10.5256/f1000research.11186.r19119) Dear Editors, We thank doctors Weissman and Younis for their insightful commentary on our most recent publication in F1000 Research (Humaidan et al. 2016). First of all, we are pleased to see that our colleagues agree in the fact that the novel proposed Poseidon stratification of ‘Low prognosis patients in Assisted Reproductive Technology (ART)’ is a “sincere effort to introduce a more specific and higher resolution set of criteria” and that in comparison with the Bologna criteria this new classification system offers “an improved stratification for poor ovarian response (POR) patients which has the potential to improve study designs and fine-tune prognostication”. Moreover, that this stratification system also has the potential to help guide the clinician to most optimally manage a heterogeneous and challenging group of ART patients. Importantly, the Poseidon stratification is based on the concept that although POR is very relevant, it is not the only variable for defining prognosis. In fact, the Poseidon group’s proposal is to move from a strict view of POR to the “low prognosis” concept. 
In particular, two new prognostic indicators have been introduced: i) the hypo-sensitivity to standard doses of gonadotrophins, and ii) the ovarian “quality”. The former is related to Poseidon’s groups 1 and 2 and is based on the principle of the ‘Follicle Output RaTE’ (FORT). In brief, patients classified as Poseidon’s groups 1 and 2 have an oocyte yield lower than expected and can be probably retreated with different OS regimens. The latter is based on the age-related aneuploidy rate, which might offer the possibility of exploring different stimulation strategies and treatments, including an oocyte/blastocyst accumulation program. As regards the clarifications requested by Weissman and Younis, we are delighted to have the opportunity to provide the answers to help the authors as well as clinicians with interest in the matter concerned to view the Poseidon concept in the correct perspective. Firstly, the cut-off of 35 years of maternal age is a generally accepted and well-recognized limit to distinguish the young and the ageing patient, as it overall determines the initiation of age-related changes in not only oocyte quantity (Ferraretti et al., 2011; Ata et al. 2012) but also oocyte quality (Ben-Meir et al. 2015; Weall et al. 2015). As for the specific cut-off values of AFC and AMH set at < 5 antral follicles and < 1.2 ng/ml, we certainly agree that these limits are more clearly defined than those suggested by the Bologna criteria (≤ 5-7 and ≤ 0.5 – 1.1 ng/ml, respectively). For AMH, the best cut-off values reported in the literature are in the range from 0.5 to 1.1 ng/ml, whereas for AFC the values range from less than 5 to less than 7 (Broekmans et al., 2006; Broer et al., 2010; La Marca et al., 2010). Thus, the cut-off levels used for the Poseidon stratification are well within the accepted criteria to define POR, but more clearly defined to make them applicable to daily clinical practice as well as research. 
Secondly, the Poseidon stratification classifies 4-9 oocytes as a suboptimal response, based on the results of the largest analysis until now including a total of 400 135 IVF cycles showing that live birth rates (LBR) within this population were ∼20–30% lower compared with women of the same age with 10–15 oocytes retrieved (Sunkara et al., 2011). These observations were recently confirmed by Drakopoulos et al. (2016) in a retrospective cohort study involving 1099 women 18-40 years old subjected to IVF/ICSI. The cumulative LBR, i.e. the sum of all live births obtained in the first fresh IVF/ICSI including those achieved by utilization of all cryopreserved embryos available, varied as a function of the number of oocytes retrieved, being 21.7% among patients with 1-3 oocytes, 39.7% in those with 4-9 oocytes, 50.5% in the group with 10-15 oocytes, and 61.5% among the patients with greater than 15 oocytes. In particular, suboptimal responders (4-9 oocytes) had a significantly lower cumulative LBR (P=0.02) than normal (10-15 oocytes) responders (Drakopoulos et al. 2016). Whether or not this range is too broad needs to be determined in future prospective trials applying the Poseidon criteria. Thirdly, as regards tailoring of ovarian stimulation to obtain at least one euploid blastocyst for transfer in each individual patient – the new proposed measure of a successful ART treatment by the Poseidon group (Humaidan et al. 2016; Alviggi et al. 2015)- and the corresponding nomograms and references, we are happy to announce that these are on their way, and that a “Poseidon Calculator” is currently being developed, using mathematical and statistical models, to provide the clinician a useful tool to calculate with a few clicks the number of oocytes needed for each specific patient – also taking the results of the individual ART laboratory into account. 
Although it might still be debated whether an embryo should be transferred at the cleavage or blastocyst stages in the ‘Low prognosis’ ART patient, we are convinced that in a well-functioning ART laboratory blastocyst transfer is the correct way to go. Lastly, we concur with Weissman and Younis that like all other criteria set in medicine prospective trials are needed to explore the efficacy of the Poseidon criteria in each specific sub-group to evaluate whether the incorporation of the new stratification improves the management and outcome of low responder patients. Along these lines, we would certainly welcome an initiative to set a larger international board to further improve the diagnosis and prognosis of the ‘Low prognosis patient’. The response to Weissman and Younis commentary as above is authored by Humaidan P, Esteves SC, Fischer R and Alviggi C.   References   Humaidan P, Alviggi C, Fischer R, et al.: The novel POSEIDON stratification of 'Low prognosis patients in Assisted Reproductive Technology' and its proposed marker of successful outcome. F1000Res. 2016; 5: 2911.   Ferraretti AP, La Marca A, Fauser BC, et al.: ESHRE consensus on the definition of ‘‘poor response’’ to ovarian stimulation for in vitro fertilization: The Bologna criteria. Hum Reprod. 2011; 26:1616–1624.   Ata B, Kaplan B, Danzer H, et al.: Array CGH analysis shows that aneuploidy is not related to the number of embryos generated. Reprod Biomed Online. 2012; 24: 614-620. Ben-Meir A, Burstein E, Borrego-Alvarez A, et al.: Coenzyme Q10 restores oocyte mitochondrial function and fertility during reproductive aging. Aging Cell 2015; 14: 887-895. Weall BM, Al-Samerria S, Conceicao J, et al.: A direct action for GH in improvement of oocyte quality in poor-responder patients. Reproduction 2015; 149: 147-154. Broekmans FJ, Kwee J, Hendriks DJ, et al.: A systematic review of tests predicting ovarian reserve and IVF outcome. Hum Reprod Update 2006; 12: 685–718.   
Broer SL, Mol B, Dölleman M, et al.: The role of anti-Müllerian hormone assessment in assisted reproductive technology outcome. Curr Opin Obstet Gynecol. 2010; 22: 193–201.   La Marca A, Sighinolfi G, Radi D, Argento C, Baraldi E, Artenisio AC, et al. Anti-Mullerian hormone (AMH) as a predictive marker in assisted reproductive technology (ART). Hum Reprod Update. 2010; 16:113–130.   Sunkara SK, Rittenberg V, Raine-Fenning N, et al.: Association between the number of eggs and live birth in IVF treatment: an analysis of 400 135 treatment cycles. Hum Reprod. 2011; 26: 1768–1774.   Drakopoulos P, Blockeel C, Stoop D, et al.: Conventional ovarian stimulation and single embryo transfer for IVF/ICSI. How many oocytes do we need to maximize cumulative live birth rates after utilization of all fresh and frozen embryos? Hum Reprod. 2016; 31: 370-376. Poseidon Group (Patient-Oriented Strategies Encompassing IndividualizeD Oocyte Number), Alviggi C, Andersen CY, Buehler K, et al. A new more detailed stratification of low responders to ovarian stimulation: from a poor ovarian response to a low prognosis concept. Fertil Steril. 2016; 105(6): 1452-1453." } ] }, { "id": "18761", "date": "08 Feb 2017", "name": "Colin M. 
Howles", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis initiative to provide further clarity by stratifying 'Low Prognosis' ART patients into 4 main groupings is a valuable step forward in assisting clinicians and organisations considering clinical trials in such individuals.\nThere is one point that I would urge the authors to consider in their future deliberations on this topic. If a patient in Poseidon Group 1 or 2 had a suboptimal response, following an individualised FSH dosing scheme based upon for example AMH, should this be checked again in a subsequent stimulation cycle where a higher incremental FSH starting dose is used, prior to categorising the patient in these groupings? R. Fleming has recently referred to the existence of an 'iatrogenic poor response' (ESHRE Annual meeting 2016 Helsinki, sponsored symposium). In his dataset, a higher FSH starting dose given in a subsequent cycle (within 9 months of the poor response cycle) increased the oocyte yield. On the contrary, ART patients already identified by low AMH to be 'poor responders' did not benefit from an increase in the FSH dose.", "responses": [] } ]
1
https://f1000research.com/articles/5-2911
https://f1000research.com/articles/5-2789/v1
29 Nov 16
{ "type": "Research Note", "title": "Recapitulating phylogenies using k-mers: from trees to networks", "authors": [ "Guillaume Bernard", "Mark A. Ragan", "Cheong Xin Chan" ], "abstract": "Ernst Haeckel based his landmark Tree of Life on the supposed ontogenic recapitulation of phylogeny, i.e. that successive embryonic stages during the development of an organism re-trace the morphological forms of its ancestors over the course of evolution. Much of this idea has since been discredited. Today, phylogenies are often based on molecular sequences. A typical phylogenetic inference aims to capture and represent, in the form of a tree, the evolutionary history of a family of molecular sequences. The standard approach starts with a multiple sequence alignment, in which the sequences are arranged relative to each other in a way that maximises a measure of similarity position-by-position along their entire length. However, this approach ignores important evolutionary processes that are known to shape the genomes of microbes (bacteria, archaea and some morphologically simple eukaryotes). Recombination, genome rearrangement and lateral genetic transfer undermine the assumptions that underlie multiple sequence alignment, and imply that a tree-like structure may be too simplistic. Here, using the sequences of 143 bacterial and archaeal genomes, we construct a network of phylogenetic relatedness based on the number of shared k-mers (subsequences of fixed length k). Our findings suggest that the network captures not only key aspects of microbial genome evolution as inferred from a tree, but also features that are not tree-like. The method is highly scalable, allowing for investigation of genome evolution across a large number of genomes. Instead of using specific regions or sequences from genome sequences, or indeed Haeckel’s idea of ontogeny, we argue that genome phylogenies can be inferred using k-mers from whole-genome sequences. 
Representing these networks dynamically allows biological questions of interest to be formulated and addressed quickly and in a visually intuitive manner.", "keywords": [ "phylogenies", "phylogenetic trees", "phylogenetic networks", "k-mers" ], "content": "Introduction\n\nErnst Haeckel coined the term Phylogenie to describe the series of morphological stages in the evolutionary history of an organism or group of organisms1. In his Tree of Life published 150 years ago2, Haeckel postulated that living organisms trace their evolutionary origin(s) along three distinct lineages (Plantae, Protista and Animalia) to a “common Moneran root of autogonous organisms”. In some (but not all) later works (e.g. in 18683) he allowed that different Monera may have arisen independently by spontaneous generation. Either way, these views accord with the Lamarckian notion of a built-in direction of evolution from morphologically simple “lower” organisms to more-complex “higher” forms4.\n\nHaeckel, through his “Biogenetic Law”, advocated that “ontogeny recapitulates phylogeny”2: that the embryonic series of an organism is a record of its evolutionary history. Under this view, morphologies observed at different developmental stages of an organism resemble and represent the successive stages (including adult stages) of its ancestors over the course of evolution. Of course, he worked before the advent of genetics and the modern synthesis, and before it was appreciated that information on heredity is carried by DNA and can be recovered by sequencing and statistical analysis. He could not have foreseen that these DNA sequences code for other biomolecules and control life processes, including his beloved developmental series and organismal phenotype, through vastly complex molecular webs of interactions. 
Nor could Haeckel have envisaged the scale of phylogenetic analysis that can be carried out today using these DNA sequences across multiple genomes, made possible by the advent of high-throughput sequencing and computing technologies.\n\nFast-forwarding 150 years, phylogenetic inference based on comparative analysis of biological sequences is now a common practice. The similarity among sequences is commonly interpreted as evidence of homology5,6, i.e. that they share a common ancestry. From the earliest days of molecular phylogenetics, multiple sequences have been aligned7,8 to display this homology position-by-position along the length of the sequences. That is, the residues are arranged relative to each other such that the best available hypothesis of homology is achieved at every position (column) of the alignment. By default, it is assumed that the best alignment can be achieved simply by displaying the sequences in the same direction, and inserting gaps where needed (to represent insertions and deletions). This assumption is largely valid when working with exons or proteins of morphologically complex eukaryotes. However, in microbes this assumption is violated by commonplace evolutionary processes including genome rearrangement, genetic recombination and lateral genetic transfer9–14. These scenarios cannot be captured simply in a tree or tree-like representation of evolutionary relationships. As Haeckel observed when he drew his Tree2, biological evolution can be anything but straightforward, and these complications have become ever more-complicated15,16.\n\nAlternative approaches for inferring and representing phylogenies are available. An attractive strategy that addresses the issue of full-length alignability is to compute relatedness among a set of sequences based on the number or extent of k-mers (short sub-sequences of a fixed length k) that they share. Such approaches avoid multiple sequence alignment, and for this reason are termed alignment-free. 
As opposed to heuristics in multiple sequence alignment, these methods provide exact solutions. Various modifications are available, e.g. the use of degenerate k-mers, scoring match lengths rather than k-mer composition, and grammar-based techniques; see recent reviews17,18 for more detail. Importantly, evolutionary relationships can also be depicted as a network, with taxa and relationships represented respectively as nodes and edges19–21, rather than as a strictly bifurcating tree. Using simulated and empirical sequence data, we recently demonstrated that alignment-free approaches can yield phylogenetic trees that are biologically meaningful22–24. We find that these approaches are more robust to genome rearrangement and lateral genetic transfer, and are highly scalable22,23, a much-desired feature given the current deluge of sequence data facing the research community25. Here we apply alignment-free phylogenetic approaches to 143 bacterial and archaeal genomes to generate a network of phylogenetic relatedness, and assess the biological implications of this network relative to the phylogenetic tree.\n\n\nMethods\n\nUsing 143 complete genomes of Bacteria and Archaea22, we inferred the relatedness of these genome sequences using an alignment-free method based on the D2S statistic26,27. We computed a D2S distance, d, for each possible pair of the 143 genomes based on the presence of shared 25-mers, using jD2Stat version 1.0 (http://bioinformatics.org.au/tools/jD2Stat/)23 and following Bernard et al.22. Here the distance d is normalised based on genome sizes and the probabilities that corresponding k-mers occur in the compared sequences26,27; d ranges between 0.0 (i.e. two genomes are identical) and 15.5 (< 0.0001% of 25-mers are shared between the two genomes). For a pair of genomes a and b, we transformed dab into a similarity measure Sab, in which Sab = 10 – dab. We ignore instances of d > 10, as these pairs of sequences share ≤ 0.01% of 25-mers (i.e. 
there is little evidence of homology). To visualise the phylogenetic relatedness of these genomes, we adopted the D3 JavaScript library for data-driven documents (https://d3js.org/). In this network, each node represents a genome, and an edge connecting two nodes represents qualitative evidence of shared k-mers between them. We set a threshold function t for which only edges with S ≥ t are displayed on the screen. Changing t dynamically changes the network structure. The resulting dynamic network is available at http://bioinformatics.org.au/tools/AFnetwork/.\n\n\nResults and discussion\n\nFigure 1 shows the phylogenetic tree of the 143 Bacteria and Archaea genomes that we previously inferred using an alignment-free method based on the D2S statistic26,27. In an earlier study9, a supertree was generated for these genomes, summarising 22,432 protein phylogenies. Incongruence between the two trees was observed in 42% of the bipartitions, most of which are at terminal branches22. The alignment-free tree (Figure 1) recovers 13 out of the 15 “backbone” nodes9, distinct clades of Archaea and Bacteria, a monophyletic clade of Proteobacteria, and the lack of resolution between gamma- and beta-Proteobacteria, in agreement with previously published studies; as such, this tree captures most of the major biological groupings of Bacteria and Archaea as presently understood.\n\nEach phylum is represented in a distinct colour, and the backbones identified in Beiko et al.9 are shown on the internal nodes with black filled circles. The association of Coxiella burnetii and Nitrosomonas europaea is marked with an asterisk.\n\nFigure 2 shows the network of phylogenetic relatedness of the same 143 genomes; a dynamic view of this network is available at http://bioinformatics.org.au/tools/AFnetwork/. 
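The Methods above reduce to three steps: collect k-mers per genome, turn a pairwise distance into a similarity S = 10 – d, and keep only edges with S ≥ t. A minimal Python sketch follows; note that a rescaled Jaccard distance on shared k-mers stands in for the full D2S statistic (which additionally normalises by genome size and k-mer occurrence probabilities), and the toy genome strings are illustrative, not the 143-genome dataset:

```python
# Sketch of the shared-k-mer similarity and edge-threshold rule.
# Simplification: a rescaled Jaccard distance stands in for D2S.

def kmers(seq, k=25):
    """All k-mers (subsequences of fixed length k) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=25, d_max=10.0):
    """Toy distance d on [0, d_max] from the shared-k-mer fraction,
    transformed to a similarity S = d_max - d as in the paper."""
    ka, kb = kmers(a, k), kmers(b, k)
    d = d_max * (1.0 - len(ka & kb) / len(ka | kb))
    return d_max - d  # identical genomes give S = 10

def network_edges(genomes, t, k=25):
    """Edges of the dynamic network: keep a pair only when S >= t."""
    names = list(genomes)
    edges = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            s = similarity(genomes[x], genomes[y], k)
            if s >= t:
                edges.append((x, y, s))
    return edges
```

Raising t prunes the weakest links first, so distantly related genomes disconnect before close relatives, as with the E. coli/Shigella clique that persists up to t = 8.5.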
As in our tree (Figure 1), Archaea and Bacteria form two separate paracliques; even at t = 0, we found only one archaean isolate (the euryarchaeote Methanocaldococcus jannaschii DSM 2661) linked to the bacterial groups Thermotogales and Aquificales22. Upon reaching t = 3, most of the 14 phyla have formed distinct densely connected subgraphs in our network, e.g. Cyanobacteria and Chlamydiales form cliques at t = 1.5 and all subgroups of Proteobacteria form a large paraclique with the Firmicutes at t = 2. Four Escherichia coli and two Shigella isolates, known to be closely related, form a clique up to t = 8.5. Interestingly, this network also showcases the extent to which genomic regions are shared among diverse phyla, e.g. the high extent of genetic similarity among Proteobacteria versus the low extent between Chlamydiales and Cyanobacteria. Our observations largely agree with published studies9,22, but also highlight the inadequacy of representing microbial phylogeny as a tree. For instance, in the tree Coxiella burnetii, a member of the gamma-Proteobacteria, is grouped with Nitrosomonas europaea of the beta-Proteobacteria (marked with an asterisk in Figure 1); in the network, the strongest connection of C. burnetii is with Wigglesworthia glossinidia, a member of the gamma-Proteobacteria (marked with an asterisk in Figure 2) at t = 2. Both W. glossinidia and C. burnetii are parasites; the W. glossinidia genome (0.7 Mbp) is highly reduced28 and the C. burnetii genome (2 Mbp) is proposed to be undergoing reduction29. As both the tree (Figure 1) and network presented here were generated using the same alignment-free method, the contradictory position of C. burnetii is likely caused by the neighbour-joining algorithm used for tree inference22. In this scenario, the C. burnetii genome connects with N. europaea because it shares high similarity with N. europaea and Neisseria genomes of the beta-Proteobacteria (S between 1.43 and 1.68), second only to W. 
glossinidia (S = 2.05), and because it shares little or no similarity with other genomes of gamma-Proteobacteria that are closely related to W. glossinidia, i.e. Buchnera aphidicola isolates (average S = 0.63) and “Candidatus Blochmannia floridanus” (S = 0).\n\nEach phylum is represented in a distinct colour, each node represents a genome and an edge represents qualitative evidence of shared 25-mers between two genomes. The association between Coxiella burnetii and Wigglesworthia glossinidia is marked with an asterisk.\n\nBy changing the threshold t, we can dynamically visualise changes in the network structure. These changes are not random, but appear to correlate with the evolutionary history of the species. At t = 0, Archaea and Bacteria form two distinct paracliques, linked only by two edges, and the Planctomycetes isolate forms a singleton. When we increase t from 1 to 2, the Archaea and Bacteria paracliques quickly dissociate from each other; within the Bacteria, cliques of Chlamydiales and Cyanobacteria are formed and the Spirochaetales become isolated. Going from t = 2 to t = 3 we observe a scission between Firmicutes and Proteobacteria, and at t > 3 all classes of Proteobacteria start to form respective paracliques. The separation (as t is incremented) of a densely connected subgraph involving all representatives of a phylum from the rest of the network mimics the divergence of this phylum from a common ancestor. Because the similarity measures do not have a unit (such as number of substitutions per site), it is not straightforward to interpret S as an evolutionary rate or divergence time. 
However, our findings suggest that our alignment-free network yields snapshots of biologically meaningful evolutionary relationships among these genomes, and that increasing the threshold based on the proportion of shared k-mers recapitulates the progressive separation of genomic lineages in evolution.\n\nThe alignment-free network reconstructed using whole-genome sequences thus recovers phylogenetic signals that cannot be captured in a binary tree. Using this approach, we generated the network in < 30 minutes; a whole-genome alignment of 143 sequences would have taken days, and even then, the alignment would be difficult to interpret given the genome dynamics in Bacteria and Archaea9–14. One can imagine inferring a network of thousands of microbial genomes in a few hours using distributed computing. More importantly, the network can be visualised dynamically, explored interactively and shared.\n\nOther biological questions could be addressed by linking the k-mers to their genomic locations and annotated genome features, e.g. in a relational database30. For instance, we could use such a database to compare thousands of isolates and identify core gene functions for a specific phylum or genus, or exclusive versus non-exclusive functions in bacterial pathogens, in a matter of seconds. We can also use k-mers to quickly search for biological information, e.g. functions relevant to lateral genetic transfer, recombination or duplications.\n\nIn contrast to Haeckel’s “Biogenetic Law”, k-mers used in this way recapitulate phylogenetic signal, not ontogeny. Alignment-free approaches generate a biologically meaningful phylogenetic inference, and are highly scalable. More importantly, representing alignment-free phylogenetic relationships using a network captures aspects of evolutionary history that cannot be represented in a tree. 
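The threshold behaviour described in the Results — the network splitting into phylum-like groups as t is incremented — can be sketched by recomputing connected components of the thresholded graph at each value of t. The node names and similarity values below are illustrative toys, not the published S values:

```python
# Sketch: connected components of the thresholded network as t rises,
# mimicking the progressive separation of genomic lineages.

def components(nodes, sims, t):
    """Connected components of the graph that keeps edges with S >= t."""
    adj = {n: set() for n in nodes}
    for (a, b), s in sims.items():
        if s >= t:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:  # depth-first traversal of one component
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

# Toy data: two archaea, three bacteria, one weak inter-domain link.
nodes = ["archA", "archB", "bacA", "bacB", "bacC"]
sims = {("archA", "archB"): 6.0,
        ("bacA", "bacB"): 8.5,   # a tight pair, cf. E. coli/Shigella
        ("bacB", "bacC"): 4.0,
        ("archB", "bacA"): 0.5}  # weak Archaea-Bacteria connection
```

With these toy values, everything is one component at t = 0; by t = 1 the two domains separate; by t = 5 the loosely attached bacC is a singleton; and by t = 9 every node stands alone — a qualitative analogue of lineages peeling apart as the threshold rises.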
As more genome data become available, Haeckel’s goal of depicting the History of Life is closer to reality.\n\n\nData availability\n\nThe 143 Bacteria and Archaea genomes used in this work are the same dataset used in an earlier study22, available at http://dx.doi.org/10.14264/uql.2016.90831. The dynamic phylogenetic network of these genomes is available at http://bioinformatics.org.au/tools/AFnetwork, with the source code available at http://dx.doi.org/10.14264/uql.2016.95232", "appendix": "Author contributions\n\n\n\nGB, MAR and CXC conceived the study and designed the experiments. GB carried out the experiments. GB and CXC prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nWe thank funding support from the Australian Research Council (DP150101875) awarded to MAR and CXC, and a James S. McDonnell Foundation grant awarded to MAR.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nDayrat B: The roots of phylogeny: how did Haeckel build his trees? Syst Biol. 2003; 52(4): 515–27. PubMed Abstract | Publisher Full Text\n\nHaeckel E: Generelle Morphologie der Organismen. Allgemeine Grundzüge der organischen Formen-Wissenschaft, mechanisch begründet durch die von Charles Darwin reformirte Descendenztheorie. Bd. 1 und 2. Berlin: Reimer; 1866. Publisher Full Text\n\nHaeckel E: Natürliche Schöpfungsgeschichte.. Berlin: Reimer; 1868. Reference Source\n\nBurkhardt RW Jr: Lamarck, evolution, and the inheritance of acquired characters. Genetics. 2013; 194(4): 793–805. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFitch WM: Homology: a personal view on some of the problems. Trends Genet. 2000; 16(5): 227–31. 
PubMed Abstract | Publisher Full Text\n\nHall BK: Homology: the hierarchical basis of comparative biology. San Diego: Academic Press; 1994. Reference Source\n\nNotredame C: Recent progress in multiple sequence alignment: a survey. Pharmacogenomics. 2002; 3(1): 131–44. PubMed Abstract | Publisher Full Text\n\nNotredame C: Recent evolutions of multiple sequence alignment algorithms. PLoS Comput Biol. 2007; 3(8): e123. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBeiko RG, Harlow TJ, Ragan MA: Highways of gene sharing in prokaryotes. Proc Natl Acad Sci U S A. 2005; 102(40): 14332–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDagan T, Martin W: The tree of one percent. Genome Biol. 2006; 7(10): 118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDarling AE, Miklós I, Ragan MA: Dynamics of genome rearrangement in bacterial populations. PLoS Genet. 2008; 4(7): e1000128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDoolittle WF: Phylogenetic classification and the universal tree. Science. 1999; 284(5423): 2124–9. PubMed Abstract | Publisher Full Text\n\nKoonin EV: Horizontal gene transfer: essentiality and evolvability in prokaryotes, and roles in evolutionary transitions [version 1; referees: 2 approved]. F1000Res. 2016; 5: pii: F1000 Faculty Rev-1805. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPuigbò P, Lobkovsky AE, Kristensen DM, et al.: Genomes in turmoil: quantification of genome dynamics in prokaryote supergenomes. BMC Biol. 2014; 12: 66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdl SM, Simpson AG, Lane CE, et al.: The revised classification of eukaryotes. J Eukaryot Microbiol. 2012; 59(5): 429–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSpang A, Saw JH, Jørgensen SL, et al.: Complex archaea that bridge the gap between prokaryotes and eukaryotes. Nature. 2015; 521(7551): 173–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBonham-Carter O, Steele J, Bastola D: Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis. Brief Bioinform. 2014; 15(6): 890–905. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaubold B: Alignment-free phylogenetics and population genetics. Brief Bioinform. 2014; 15(3): 407–18. PubMed Abstract | Publisher Full Text\n\nCorel E, Lopez P, Méheust R, et al.: Network-thinking: graphs to analyze microbial complexity and evolution. Trends Microbiol. 2016; 24(3): 224–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDagan T: Phylogenomic networks. Trends Microbiol. 2011; 19(10): 483–91. PubMed Abstract | Publisher Full Text\n\nHuson DH, Scornavacca C: A survey of combinatorial methods for phylogenetic networks. Genome Biol Evol. 2011; 3: 23–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBernard G, Chan CX, Ragan MA: Alignment-free microbial phylogenomics under scenarios of sequence divergence, genome rearrangement and lateral genetic transfer. Sci Rep. 2016; 6: 28970. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChan CX, Bernard G, Poirion O, et al.: Inferring phylogenies of evolving sequences without multiple sequence alignment. Sci Rep. 2014; 4: 6504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRagan MA, Bernard G, Chan CX: Molecular phylogenetics before sequences: oligonucleotide catalogs as k-mer spectra. RNA Biol. 2014; 11(3): 176–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChan CX, Ragan MA: Next-generation phylogenomics. Biol Direct. 2013; 8: 3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReinert G, Chew D, Sun F, et al.: Alignment-free sequence comparison (I): statistics and power. J Comput Biol. 2009; 16(12): 1615–34. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWan L, Reinert G, Sun F, et al.: Alignment-free sequence comparison (II): theoretical power of comparison statistics. J Comput Biol. 2010; 17(11): 1467–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAkman L, Yamashita A, Watanabe H, et al.: Genome sequence of the endocellular obligate symbiont of tsetse flies, Wigglesworthia glossinidia. Nat Genet. 2002; 32(3): 402–7. PubMed Abstract | Publisher Full Text\n\nSeshadri R, Paulsen IT, Eisen JA, et al.: Complete genome sequence of the Q-fever pathogen Coxiella burnetii. Proc Natl Acad Sci U S A. 2003; 100(9): 5455–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGreenfield P, Roehm U: Answering biological questions by querying k-mer databases. Concurr Comput Pract Exper. 2013; 25(4): 497–509. Publisher Full Text\n\nBernard G, Chan CX, Ragan MA: 143 Prokaryote genomes. Dataset. 2016. Data Source\n\nBernard G, Chan CX, Ragan MA: Alignment-free network of 143 prokaryote genomes. Dataset. 2016. Data Source" }
[ { "id": "18402", "date": "12 Dec 2016", "name": "Bernhard Haubold", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPhylogeny reconstruction is a classical research topic in bioinformatics. In this context the standard trade-off between speed and accuracy becomes a choice between slow but accurate sequence alignment on the one hand and fast but less accurate alignment-free methods on the other. Bernard et al. aim for speed and use an established alignment-free measure, D_2, to reconstruct the phylogeny of 143 Bacteria and Archaea from full genome sequences. D_2 is based on the number of shared k-mers, and the main contribution of the paper is the visualization of the D_2 distance matrix of the 143 taxa as a network rather than the traditional bifurcating tree. This visualization is dynamic in the sense that the user can choose a similarity threshold between 0 and 10, and watch as the taxa disintegrate from initially two clusters to essentially every taxon on its own. This is an innovative way of presenting large-scale evolutionary relationships, and the tool is fun to use. As the authors remark, it is unclear how the D_2 metric scales with more familiar measures of evolutionary time such as substitutions per site. It would thus be interesting to explored this in future work; for example by supplying a version of the visualization tool that allows users to upload their own sequences. I was also wondering how the networks generated by Bernard et al. 
compare to established methods of network-based evolutionary analysis such as SplitsTree and minimum spanning trees. I realize that these are both usually based on alignments, but it is always possible to analyze a given alignment using D_2, thereby allowing a direct assessment of the accuracy lost (if any) for the speed gained.", "responses": [ { "c_id": "2371", "date": "19 Dec 2016", "name": "Cheong Xin Chan", "role": "Author Response", "response": "Thank you for these comments. Indeed, the correlation between D2 metrics and evolutionary distances is an interesting area, and a tool that allows users to upload their own datasets would be useful. A comparative analysis between a k-mer-based network and a phylogenetic network based on multiple sequence alignment, although doable, is not straightforward. We believe the adoption of alignment-free methods in phylogenetic inference is still in its infancy, and we hope that this work will inspire and encourage other researchers to pursue this approach." } ] }, { "id": "18060", "date": "13 Dec 2016", "name": "Weilong Hao", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe manuscript uses k-mers from whole-genome sequences to recapitulate phylogenetic relationships from trees to networks. The analyses seemed to be convincing, and of general interest. 
I just have some comments on the manuscript structure and some other minor suggestions.\nThe authors used Ernst Haeckel’s phylogeny and Biogenetic Law to start their manuscript. Although it is fun to read all these historical pieces, the link between Haeckel’s ideas and the construction of networks using k-mers was not made strong in the current version of the manuscript.\nThe authors compared alignment-free data against sequence alignments, and stated that the sequence alignment approach “ignores important evolutionary processes that are known to shape the genomes of microbes” followed by mentioning recombination, genome rearrangement, and lateral gene transfer. This is not accurate, as sequence alignments can also be used to reconstruct web-like phylogenetic relationships, which are sometimes called phylogenetic networks (e.g., Huson and Bryant 2006). I think it is important to carefully define and compare the networks mentioned in this manuscript and the phylogenetic networks mentioned by Huson and Bryant. Along this line, approaches based on sequence alignments might not all assume tree-like relationship. Furthermore, the authors mentioned evolutionary events, such as recombination, genome rearrangement, and lateral gene transfer, that are difficult to study using sequence alignments, but did not provide detailed evidence on whether k-mers can tackle them all. I suggest the authors to rather stay closer to their data and make more specific statements.\nIn the third introduction paragraph, “By default, it is assumed that the best alignment can be achieved simply by displaying the sequences in the same direction and inserting gaps where needed. This assumption is largely valid when working with exons or proteins of morphologically complex eukaryotes. However, in microbes this assumption is violated...” I feel the meaning of “assumption” in each of these sentences is a moving target. 
If they are talking about orthologous sequences, the analysis of orthologs should hold for both eukaryotes and prokaryotes. The key here, I guess, is the comparison of orthologs versus the comparison of xenologs or even non-homologs. Another minor point is the use of “microbes”, which can mean bacteria, archaea, and small eukaryotes. I don’t think it is a good word to use here.\nThe authors did not justify the use of the 143 genomes. It seemed that they were inherited from their previous study conducted some time ago, and likely skewed in terms of taxon-sampling. Since taxon-sampling is important for tree-like phylogenetic analysis, it would be nice to address how the improved (or more balanced) taxon-sampling can benefit the network analyses.\nThe authors wrote “... in agreement with previously published studies; as such, this tree represents reality as presently understood, i.e., is biologically correct”. The use of words such as reality, biologically correct here, is inappropriate.\nThe data of Wigglesworthia, Coxiella and others are of potential interest. The readers would definitely appreciate some real data analyses to address them, which are currently lacking.\nThe cited references are relatively recent and skewed. Some of the older and more influential papers need to be added (for both networks and alignment free).", "responses": [ { "c_id": "2372", "date": "23 Dec 2016", "name": "Cheong Xin Chan", "role": "Author Response", "response": "Thank you for these comments. The link between Haeckel’s ideas and the construction of networks using k-mers was not made strong in the current version of the manuscript. The work we present here is a proof-of-concept for a biologically informative network based on k-mers extracted from whole-genome sequences. We hope to convince readers that dynamic visualization of such a network is intuitive for exploring and addressing biological questions, aiding discovery. 
The paper is part of a special collection of F1000Research articles in phylogenetics, commemorating the 150th anniversary of Ernst Haeckel’s Tree of Life published in 1866. Here we argue that by using k-mers we can recapitulate phylogenetic signal, somewhat in the same spirit as Haeckel famously argued that “ontogeny recapitulates phylogeny”. More precisely, our claim is that “increasing the threshold based on the proportion of shared k-mers recapitulates the progressive separation of genomic lineages in evolution”. Full consideration of Haeckel’s work in the context of Darwinian evolution then and today is well beyond the scope of our brief paper, although we cite some key references. The authors … stated that the sequence alignment approach “ignores important evolutionary processes that are known to shape the genomes of microbes” followed by mentioning recombination, genome rearrangement, and lateral gene transfer. This is not accurate, as sequence alignments can also be used to reconstruct web-like phylogenetic relationships, which are sometimes called phylogenetic networks (e.g., Huson and Bryant 2006). I think it is important to carefully define and compare the networks mentioned in this manuscript and the phylogenetic networks mentioned by Huson and Bryant. Along this line, approaches based on sequence alignments might not all assume tree-like relationship. We agree and have now rewritten part of the Abstract to stage our argument more clearly: genomic processes in microbes can undermine the assumptions that underlie multiple sequence alignment, hence phylogenetic inference as usually practiced. We have now cited other articles on phylogenetic networks in the text where appropriate, specifically Huson and Bryant1 and Kunin et al.2. Comprehensive comparison of k-mer-based and (alignment-based) phylogenetic networks is important but, due to its complexity, beyond the scope of this paper; we have now clarified this in the revised text. 
The authors mentioned evolutionary events, such as recombination, genome rearrangement, and lateral gene transfer … but did not provide detailed evidence on whether k-mers can tackle them all. I suggest the authors to rather stay closer to their data and make more specific statements. In Chan et al.3 and Bernard et al.4 we provided detailed evidence that alignment-free approaches based on k-mers, at multi-genome scale, can be robust to insertions/deletions, genome rearrangement and lateral genetic transfer; these articles are cited where appropriate. In the third introduction paragraph, “By default, it is assumed that the best alignment can be achieved simply by displaying the sequences in the same direction and inserting gaps where needed. This assumption is largely valid when working with exons or proteins of morphologically complex eukaryotes. However, in microbes this assumption is violated...” I feel the meaning of “assumption” in each of these sentences is a moving target. If they are talking about orthologous sequences, the analysis of orthologs should hold for both eukaryotes and prokaryotes. We have now revised the text to make it clear that the main assumption underlying multiple sequence alignment, i.e. that the alignment columns display homology position-by-position along the length of the sequences, is largely valid when working with highly conserved orthologs of any source; and that the validity of this assumption is often undermined in the case of microbial genome sequences, due to recombination and rearrangement. Another minor point is the use of “microbes”, which can mean, bacteria, archaea, and small-eukaryotes. I don’t think it is a good word to use here. We used the word “microbes” here specifically to include archaea, bacteria and microbial eukaryotes. Genomes of many microbial eukaryotes are known to be impacted by lateral genetic transfer, at frequencies sometimes nearly as large as in bacteria and archaea. 
The authors did not justify the use of the 143 genomes. … Since taxon-sampling is important for tree-like phylogenetic analysis, it would be nice to address how the improved (or more balanced) taxon-sampling can benefit the network analyses. Here we used the 143-genome dataset because the phylogenetic relationships among these genomes have been studied using careful alignment-based methods5 and by alignment-free approaches4; it thus provides a good reference for comparison. We have now clarified this in the text. In our alignment-free network, each edge represents the qualitative evidence of k-mers shared pairwise between two genomes. This evidence is not affected by other genomes present in (or absent from) the dataset. Therefore, our networks are not affected by taxon-sampling biases of the sort encountered in tree inference. Of course, the presence or absence of a critical node (genome) might affect the biological conclusion we draw from a network, but the same is true for any scientific analysis. We considered the effect of phyletic balance on the inference of lateral genetic transfer networks in another context6.   The authors wrote “... in agreement with previously published studies; as such, this tree represents reality as presently understood, i.e., is biologically correct”. The use of words such as reality, biologically correct here, is inappropriate. We agree and now state that “as such, this tree captures most of the major biological groupings of Bacteria and Archaea as presently understood”. The data of Wigglesworthia, Coxiella and others are of potential interest. The readers would definitely appreciate some real data analyses to address them, which are currently lacking. A follow-up analysis between Wigglesworthia and Coxiella would indeed be interesting, but is beyond the scope of this Research Note, the aim of which is to present limited findings in hopes of inspiring and encouraging others to explore this research area. 
Some of the older and more influential papers need to be added (for both networks and alignment free). We have now cited older, relevant references in the text for both networks1, 2 and alignment-free methods7. References Huson DH, Bryant D: Application of phylogenetic networks in evolutionary studies. Mol Biol Evol. 2006; 23(2): 254-67. Kunin V, Goldovsky L, Darzentas N, et al.: The net of life: reconstructing the microbial phylogenetic network. Genome Res. 2005; 15(7): 954-9. Chan CX, Bernard G, Poirion O, et al.: Inferring phylogenies of evolving sequences without multiple sequence alignment. Sci Rep. 2014; 4: 6504. Bernard G, Chan CX, Ragan MA: Alignment-free microbial phylogenomics under scenarios of sequence divergence, genome rearrangement and lateral genetic transfer. Sci Rep. 2016; 6: 28970. Beiko RG, Harlow TJ, Ragan MA: Highways of gene sharing in prokaryotes. Proc Natl Acad Sci U S A. 2005; 102(40): 14332-7. Cong Y, Chan YB, Ragan MA: Exploring lateral genetic transfer among microbial genomes using TF-IDF. Sci Rep. 2016; 6: 29319. Domazet-Lošo M, Haubold B: Alignment-free detection of local similarity among viral and bacterial genomes. Bioinformatics. 2011; 27(11): 1466-72." } ] } ]
1
https://f1000research.com/articles/5-2789
https://f1000research.com/articles/5-2807/v1
01 Dec 16
{ "type": "Research Note", "title": "Evolutionary relations and population differentiation of Acipenser gueldenstaedtii Brandt, Acipenser persicus Borodin, and Acipenser baerii Brandt", "authors": [ "Alexey A. Sergeev" ], "abstract": "Russian (Acipenser gueldenstaedtii), Persian (A. persicus) and Siberian (A. baerii) sturgeons are closely related ‘Ponto-Caspian’ species. Investigation of their population structure is an important problem, the solution of which determines measures for conservation of these species. According to previous studies, ‘baerii-like’ mitotypes were found in the Caspian Sea among 35% of Russian sturgeon specimens, but were not found in Persian sturgeons. This confirms genetic isolation of the Persian sturgeon from the Russian sturgeon in the Caspian Sea. However, in order to clarify the relationships of these species it is necessary to analyze nuclear DNA markers. The amplified fragment length polymorphism (method) allows estimating interpopulation and interspecific genetic distances using nuclear DNA markers. In the present study, four samples were compared: Persian sturgeons from the South Caspian Sea, Russian sturgeons from the Caspian Sea and the Sea of Azov, and Siberian sturgeons from the Ob’ River, which are close to these two species, but are also clearly morphologically and genetically distinct from them. For the AFLP method, eight pairs of selective primers were used. The analysis revealed that the Siberian sturgeon has formed a separate branch from the overall Persian-Russian sturgeons cluster, which was an expected result. In addition, the results showed that the Caspian Russian sturgeon is closer to the Persian sturgeon from the Caspian Sea than to the Russian Sturgeon from the Sea of Azov. 
The present DNA marker data confirm that despite the genetic isolation of the Persian sturgeon from the Russian sturgeon in the Caspian Sea, the Persian sturgeon is a young species.", "keywords": [ "Russian Sturgeon", "Persian Sturgeon", "Siberian sturgeon", "AFLP" ], "content": "Introduction\n\nThree closely related species, the Russian (Acipenser gueldenstaedtii), Persian (A. persicus), and Siberian (A. baerii) sturgeons belong to a polychromosomal group of sturgeon species (2n = 240–260; Vasil’ev, 1985). They form the Ponto–Caspian clade of sturgeons (Birstein & DeSalle, 1998). A. persicus inhabits the Caspian Sea, and A. gueldenstaedtii inhabits the Caspian Sea and the Azov Sea (Berg, 1961). A. baerii is geographically isolated from the other two species, and it inhabits Siberian rivers. Presumably, its Ponto-Caspian ancestors migrated to Siberia (Birstein & DeSalle, 1998).\n\nThese species are closely related, which has caused some difficulties with their molecular genetic identification and clarification of their phylogenetic relations. A. persicus was described as a species by Borodin in 1897 (Borodin, 1897). Later, Berg called it a morphologically distinguishable subspecies of A. gueldenstaedtii (Berg, 1961). Following Berg, researchers considered the Persian sturgeon a subspecies of the Russian sturgeon, Acipenser gueldaenstadti persicus (Legeza, 1975) or Acipenser gueldaenstadti persicus natio kurensis (Abdurakhmanov, 1962; Legeza & Voinova, 1967). Research of the antigenic components of sturgeon blood serum proteins, carried out in 1974, revealed that the Persian sturgeon is a valid sympatric species (Lukyanenko et al., 1974a; Lukyanenko et al., 1974b).\n\nThe taxonomic rank of A. persicus is still disputed. Some researchers point to distinct morphological differences between Russian and Persian sturgeons (Artyukhin, 2008; Vasil'eva, 2004). 
Others find these differences weak and point to the impossibility of distinguishing single specimens of Russian, Persian and Adriatic (A. naccarii) sturgeons by mitochondrial DNA markers (Birstein et al., 2005; Ruban et al., 2008).\n\nThe Siberian sturgeon is geographically isolated from the Russian and Persian sturgeons and is easily distinguishable from them morphologically. However, approximately 30% of the Russian sturgeon specimens from the Caspian Sea have mitochondrial DNA that is similar to mitochondrial DNA of A. baerii (Jenneckens et al., 2000). It was shown that a ‘baerii-like’ mitotype of A. gueldenstaedtii is similar, but not identical, to mitochondrial DNA of A. baerii (Muge et al., 2008). In total, 2% of Russian sturgeons in the Azov Sea also have a ‘baerii-like’ mitotype (Timoshkina et al., 2009), whereas this has not been found in Persian sturgeons (Muge et al., 2008). It is assumed that the ‘baerii-like’ mitochondrial DNA found in some Russian sturgeons from the Caspian Sea is a result of an introgression event during the Pleistocene glaciation (Muge et al., 2008; Rastorguev et al., 2013).\n\nIn order to clarify the phylogenetic relations and population structure of the species within the Ponto-Caspian sturgeon clade, some authors point out the necessity of exploring nuclear DNA markers (Krieger et al., 2008; Muge et al., 2008). 
It should be noted that currently researchers have the opportunity to work with single nucleotide polymorphism (SNP) markers, which have been discovered for Ponto-Caspian sturgeons (Ogden et al., 2013; Rastorguev et al., 2013).\n\nMoreover, to estimate genetic distances within the Ponto-Caspian sturgeon species group, the amplified fragment length polymorphism (AFLP) method is also applicable, as the AFLP technique makes it possible to obtain a high number of dominant nuclear DNA markers (Congiu et al., 2002).\n\nThis report presents the results of a molecular genetic study of interpopulation and interspecific genetic distances of the Ponto-Caspian sturgeon clade carried out with the AFLP method.\n\n\nMaterials and methods\n\nFor this research, sturgeon tissue samples (ethanol-fixed fin fragments) were obtained from the Russian Federal Reference Collection of Genetic Materials (maintained by the Russian Federal Research Institute of Fisheries and Oceanography, Moscow, Russia). The sample included 24 specimens of A. gueldenstaedtii from the Azov Sea (catalog number GUE2906, 2908-2930), 24 specimens of A. gueldenstaedtii from the Caspian Sea (catalog number GUE2812-2835), 24 specimens of A. persicus from the Southern Caspian Sea (catalog number PER0120-143) and 24 specimens of A. baerii from the Ob’ River (catalog number BAE0325-348).\n\nDNA was extracted and purified with the Wizard SV Genomic DNA Purification System (Promega). For genetic analysis, the AFLP method was used (Vos et al., 1995). Briefly, genomic DNA was incubated with the MspI and EcoRI enzyme combination (Fermentas). 
Next, DNA fragments were ligated with oligonucleotide adapters and used for pre-selective and selective PCR with eight combinations of fluorescent primers (Table 1):\n\n1) EcoFAM_AAG - Msp_pr_AAC, 2) EcoFAM_ATT - Msp_pr_AAG, 3) EcoFAM_ACA - Msp_pr_AAT, 4) EcoFAM_AAG - Msp_pr_ACA, 5) EcoFAM_ACA - Msp_pr_ACC, 6) EcoFAM_ATT - Msp_pr_ACC, 7) EcoFAM_AAG - Msp_pr_ACT, 8) Eco-FAM_AAG - Msp_pr_ATC.\n\nPre-selective PCR was performed for 20 cycles with the following cycle profile: a 30 sec DNA denaturation step at 94°C, a 1 min annealing step at 56°C, and a 1 min extension step at 72°C. Selective PCR was performed for 36 cycles with the following cycle profile: a 30 sec DNA denaturation step at 94°C, a 30 sec annealing step, and a 1 min extension step at 72°C. The annealing temperature was 65°C in the first cycle, was subsequently reduced by 0.7°C per cycle for the next 12 cycles, and was held at 56°C for the remaining 23 cycles. All steps were carried out with the PTC-225 Peltier Thermal Cycler (MJ Research).\n\nCapillary electrophoresis was carried out with the ABI Prism Genetic Analyzer 3100 (Applied Biosystems).\n\nAnalysis of the obtained AFLP profiles was performed using Phoretix 1D Advanced v. 5.20 software (Nonlinear Dynamics). The resulting binary matrix was created for further statistical analysis with the program Tools for Population Genetic Analysis v 1.3 (TFPGA). To estimate the allele frequencies of the dominant markers, we used the approach of Lynch & Milligan (1994), which allows work with tetraploid species (Rodzen & May, 2002). Using TFPGA and the unweighted pair group method with arithmetic mean (UPGMA), we obtained the matrix of genetic distances (Nei, 1978) between the investigated samples and constructed a dendrogram.\n\n\nResults\n\nUsing eight combinations of primers, we obtained AFLP profiles (Figure 1) with 588 markers (molecular lengths from 100 to 380 bp). In total, 79.59% of the loci were polymorphic. 
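The touchdown profile of the selective PCR (65°C in the first cycle, dropping 0.7°C per cycle for twelve cycles, then 56°C for the remaining twenty-three, 36 cycles in all) can be laid out as a quick arithmetic check. This is an illustrative sketch only; the function name and defaults are ours, not part of the published protocol:

```python
def touchdown_schedule(start=65.0, step=0.7, n_touchdown=12,
                       final=56.0, n_final=23):
    """Annealing temperature per cycle for a touchdown PCR:
    one cycle at `start`, then `n_touchdown` cycles each `step`
    degrees cooler, then `n_final` cycles held at `final`."""
    temps = [start]
    temps += [round(start - step * i, 1) for i in range(1, n_touchdown + 1)]
    temps += [final] * n_final
    return temps
```

With the defaults taken from the text, the schedule runs 36 cycles, ending the touchdown phase at 56.6°C before holding at 56°C.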
A total of 4 loci were species-specific and monomorphic in the AFLP profiles of A. baerii. The differentiation between Russian and Persian sturgeons was observed only in the marker frequencies.\n\n(1–8) A. gueldenstaedtii from the Caspian Sea; (9–16) A. gueldenstaedtii from the Azov Sea; (17–24) A. baerii from the Ob’ River; and (25–32) A. persicus from the Southern Caspian Sea.\n\nUsing the TFPGA software, genetic distances (Nei, 1978) were estimated between four sturgeon samples: (1) A. gueldenstaedtii from the Caspian Sea; (2) A. gueldenstaedtii from the Azov Sea; (3) A. baerii from the Ob’ River; and (4) A. persicus from the Southern Caspian Sea (Table 2). We took into account the sample size and the number of markers obtained, and used unbiased statistical estimation. The UPGMA dendrogram was constructed with bootstrap support (1000 permutations) for each node to validate the resulting topology (Figure 2).\n\n(1) A. gueldenstaedtii from the Caspian Sea; (2) A. gueldenstaedtii from the Azov Sea; (3) A. baerii from the Ob’ River; and (4) A. persicus from the Southern Caspian Sea.\n\n(1) A. gueldenstaedtii from the Caspian Sea; (2) A. gueldenstaedtii from the Azov Sea; (3) A. baerii from the Ob’ River; and (4) A. persicus from the Southern Caspian Sea. Similarities were estimated based on the UPGMA method. The values shown are bootstrap values greater than 0.7.\n\n\nDiscussion\n\nThe AFLP analysis conducted in the present study revealed that the Siberian sturgeon forms a branch that is separate from the overall Persian-Russian sturgeon cluster. The Siberian sturgeon is geographically isolated from Persian and Russian sturgeons and is morphologically easily distinguishable from them. According to the results obtained, the Caspian Russian sturgeon is closer to the Persian sturgeon from the Caspian Sea than to the Russian sturgeon from the Sea of Azov.\n\nThe DNA marker data confirm that, despite the genetic isolation, the Persian sturgeon is a young species. 
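For readers unfamiliar with the distance measure behind Table 2, Nei's genetic distance can be sketched as follows. This shows the simple 1972 form of the statistic; the 1978 unbiased estimator actually applied by TFPGA adds a sample-size correction, and the per-locus allele-frequency lists in the usage example are hypothetical:

```python
import math

def nei_distance(pops_x, pops_y):
    """Nei's standard genetic distance between two populations.

    `pops_x` and `pops_y` are lists of per-locus allele-frequency
    lists. D = -ln( Jxy / sqrt(Jx * Jy) ), where Jx, Jy and Jxy are
    the mean homozygosities and mean cross-population identity.
    (1972 form; Nei's 1978 unbiased estimator corrects for sample size.)
    """
    jx = jy = jxy = 0.0
    for fx, fy in zip(pops_x, pops_y):
        jx += sum(p * p for p in fx)
        jy += sum(q * q for q in fy)
        jxy += sum(p * q for p, q in zip(fx, fy))
    n = len(pops_x)
    jx, jy, jxy = jx / n, jy / n, jxy / n
    return -math.log(jxy / math.sqrt(jx * jy))
```

Identical allele frequencies give a distance of zero, and the distance grows as the frequency profiles diverge, which is why the unusually small values reported here point to slow molecular evolution rather than recent separation alone.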
Presumably, the reproductive isolation of the Persian sturgeon appeared later than the event of geographic isolation of the Black Sea-Azov and the Caspian basins. Perhaps there is gene flow between populations of Persian and Russian sturgeons in the Caspian Sea, as natural interspecific hybridization is typical for sturgeons. In this case, it should be mentioned that there is no gene flow from the Russian sturgeon to the Persian sturgeon, as the Persian sturgeon is completely free of the ‘baerii-like’ mitotype typical for the Russian sturgeon in the Caspian Sea (Muge et al., 2008).\n\nThe results of this study show the special status of the Russian sturgeon of the Azov Sea, which is geographically and genetically isolated from the Russian sturgeon of the Caspian Sea. This differentiation was shown in previous studies with morphology, mtDNA and STR markers of the Russian sturgeon from the Black Sea-Azov and the Caspian basins (Timoshkina et al., 2009). The present study has now confirmed these results using the AFLP method.\n\nOn the dendrogram, we can observe high bootstrap support values (Salemi & Vandamme, 2003). However, the obtained genetic distances are unusually small for river-spawning species. This can be explained by the slower molecular evolution rate of sturgeons (Krieger & Fuerst, 2002). Further studies applying SNP and microsatellite analysis approaches are needed in order to confirm the results of this study.\n\n\nData availability\n\nThe raw data are available from Zenodo (https://zenodo.org/record/167463#.WC8wTtWLTcs), DOI: 10.5281/zenodo.167463 (Sergeev, 2016).\n\nDataset 1 includes AFLP chromatograms (ABI Prism Genetic Analyzer 3100, Applied Biosystems). Dataset 2 includes AFLP profiles for Phoretix 1D Advanced v. 5.20 software (Nonlinear Dynamics). 
Dataset 3 includes TFPGA files (Tools for Population Genetic Analysis v 1.3) with genetic distances and trees.", "appendix": "Author contributions\n\n\n\nAS carried out all work relating to this study.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe author would like to thank Alexander A. Volkov, Sergey M. Rastorguev, and Nikolai S. Mugue for their valuable guidance and very useful methodical comments and discussion.\n\n\nReferences\n\nAbdurakhmanov YA: Fishes of Fresh Waters of Azerbaidjan. [in Russian]. 1962.\n\nArtyukhin EN: Sturgeons. Ecology, geographic distribution and phylogeny. St. Petersburg State University, [in Russian]. 2008; 136. Reference Source\n\nBerg LS: Selected Papers. Ichthyology. Moscow-Leningrad. Academy of Sciences of the USSR. [in Russian]; 1961; 4.\n\nBirstein VJ, DeSalle R: Molecular phylogeny of Acipenserinae. Mol Phylogenet Evol. 1998; 9(1): 141–155. PubMed Abstract | Publisher Full Text\n\nBirstein VJ, Ruban G, Ludwig A, et al.: The enigmatic Caspian Sea Russian sturgeon: how many cryptic forms does it contain? Syst Biodivers. 2005; 3(2): 203–218. Publisher Full Text\n\nBorodin NA: Report on excursion with the zoological purpose in the summer 1895 on the cruiser \"Uralets\" in the northern part of the Caspian Sea. Bulletin of Fishing industry. [in Russian]. 1897; 12(1): 1–31.\n\nCongiu L, Fontana F, Patarnello T, et al.: The use of AFLP in sturgeon identification. J Appl Ichthyol. 2002; 18(4–6): 286–289. Publisher Full Text\n\nJenneckens I, Meyer JN, Debus L, et al.: Evidence of mitochondrial DNA clones of Siberian sturgeon, Acipenser baerii, within Russian sturgeon, Acipenser gueldenstaedtii, caught in the River Volga. Ecol Lett. 2000; 3(6): 503–508. Publisher Full Text\n\nKrieger J, Hett AK, Fuerst PA, et al.: The molecular phylogeny of the order Acipenseriformes revisited. 
J Appl Ichthyol. 2008; 24(S1): 36–45. Publisher Full Text\n\nKrieger J, Fuerst PA: Evidence for a slowed rate of molecular evolution in the order Acipenseriformes. Mol Biol Evol. 2002; 19(6): 891–897. PubMed Abstract | Publisher Full Text\n\nLegeza MI: Sturgeon Distribution in the Caspian Sea. Tr VNIRO Moscow. [in Russian]. 1975; 108: 121–133.\n\nLegeza MI, Voinova IA: Contemporary Status of Sturgeon in Kura-Caspian Region. Tr TSNIORKh Moscow. [in Russian]. 1967; (1): 26–33.\n\nLynch M, Milligan BG: Analysis of population genetic structure with RAPD markers. Mol Ecol. 1994; 3(2): 91–99. PubMed Abstract | Publisher Full Text\n\nLukyanenko VI, Dubinin VN, Karataeva BB, et al.: On Species Status of So-Called Late Spring or Summer Spawning Sturgeon in the Volga River. Abstracts of the Account Session of TSNIORKh. Astrakhan, 1974a; 92–94.\n\nLukyanenko VI, Umerov ZG, Karataeva BB: South-Caspian Sturgeon—a Distinct Species of Genus Acipenser. Izv Akad Nauk SSSR Ser Biol. 1974b; (5): 736–738.\n\nMayr E: Principles of systematic zoology. New York–Sydney: McGraw-Hill Book Company; 1969. Reference Source\n\nMuge NS, Barmintseva AE, Rastorguev SM, et al.: Polymorphism of the mitochondrial DNA control region in eight sturgeon species and development of a system for DNA-based species identification. Russ J Genet. 2008; 44(7): 793–798. PubMed Abstract | Publisher Full Text\n\nNei M: Estimation of average heterozygosity and genetic distance from a small number of individuals. Genetics. 1978; 89(3): 583–590. PubMed Abstract | Free Full Text\n\nOgden R, Gharbi K, Mugue N, et al.: Sturgeon conservation genomics: SNP discovery and validation using RAD sequencing. Mol Ecol. 2013; 22(11): 3112–3123. PubMed Abstract | Publisher Full Text\n\nRastorguev SM, Nedoluzhko AV, Mazur AM, et al.: High-throughput SNP-genotyping analysis of the relationships among Ponto-Caspian sturgeon species. Ecol Evol. 2013; 3(8): 2612–2618. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRodzen JA, May B: Inheritance of microsatellite loci in the white sturgeon (Acipenser transmontanus). Genome. 2002; 45(6): 1064–1076. PubMed Abstract | Publisher Full Text\n\nRuban GI, Kholodova MV, Kalmykov VA, et al.: Morphological and molecular genetic study of the Persian sturgeon Acipenser persicus Borodin (Acipenseridae) taxonomic status. J Ichthyol. 2008; 48(10): 891–903. Publisher Full Text\n\nSalemi M, Vandamme AM: The Phylogenetic Handbook: A Practical Approach to DNA and Protein Phylogeny. Cambridge University Press, 2003; 100–119. Reference Source\n\nSergeev A: Evolutionary relations and population differentiation of Acipenser gueldenstaedtii Brandt, Acipenser persicus Borodin and Acipenser baerii Brandt. Zenodo. 2016. Data Source\n\nTimoshkina NN, Barmintseva AE, Usatov AV, et al.: [Intraspecific genetic polymorphism of Russian sturgeon Acipencer gueldenstaedtii]. Genetika. [in Russian]. 2009; 45(9): 1250–1259. PubMed Abstract | Publisher Full Text\n\nVasil’ev VP: Evolutionary Karyology of Fishes. Nauka, Moscow, [in Russian]. 1985; 1–300.\n\nVasil’eva ED: Morphological data corroborating the assumption of independent origins within octoploid sturgeon species. J Ichthyology. 2004; 44(Suppl 1): 63–72.\n\nVos P, Hogers R, Bleeker M, et al.: AFLP: a new technique for DNA fingerprinting. Nucleic Acids Res. 1995; 23(21): 4407–4414. PubMed Abstract | Free Full Text" }
[ { "id": "18201", "date": "06 Dec 2016", "name": "Ekaterina V. Ponomareva", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this article, Sergeev addressed the problem having long and complex history of discussion 1-3. Molecular genetic methods (namely, AFLP in Sergeev's study) open a possibility to solve some principal and controversial questions of sturgeon species systematic relationships.\nIt is of particular interest also, because the problem has strong practical (and international) dimension, connected with commercial use, conservation and restoration of sturgeons. In spite of significant amount of literature on the subject, there is still a deficit of studies using genomic markers.\nIt should be specially mentioned, that Sergeev's study supports differentiation between Russian sturgeons (Acipenser gueldenstaedtii) from the Caspian Sea and the Sea of Azov, previously revealed by using other methods4.\nI think results obtained by Sergeev are important for resolving relationships between Acipenser spp.", "responses": [ { "c_id": "2370", "date": "15 Dec 2016", "name": "Alexey Sergeev", "role": "Author Response", "response": "We appreciate the time and effort of Dr. Ponomareva and would like to thank her for detailed analysis of the investigated theme and high appreciation of this work." } ] }, { "id": "18153", "date": "08 Dec 2016", "name": "Dmitri D. 
Pervouchine", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript entitled “Evolutionary relations and population differentiation of Acipenser gueldenstaedtii Brandt, Acipenser persicus Borodin, and Acipenser baerii Brandt” by Alexey Sergeev describes the relationship between mitochondrial genotypes of four closely related sturgeon species by using the amplified fragment length polymorphism (AFPL) method. As a result, an analog of a phylogenetic tree is constructed, one in which the species paradoxically cluster by the habitat (A. gueldenstaedtii from the Caspian Sea is more closely related to A. persicus from the Southern Caspian Sea rather than to A. gueldenstaedtii from the Azov Sea).\nGenerally, the experiment is carried out at a good level and the findings are sufficiently novel. My main criticism is that it is not quite clear to what extent the AFPL method reflects the actual phylogenetic distance between species and that only the mitochondrial genome was interrogated. Generally, the reader has to get an idea of the evolutionary similarity between species by looking at the similarity of patterns in the AFPL profiles. I think the paper can be accepted for indexing if the author explains the caveats and limitations related to this method somewhere in the introduction. 
Otherwise the readership might be misled by the dendrogram and the phylogeny reported by the author.\nCurrently I have only very cosmetic comments, which are outlined below.\nAbstract:\nAccording to previous studies, ‘baerii-like’ mitotypes => mitochondrial genotypes -- the readership might be unfamiliar with this term\nThe amplified fragment length polymorphism (method) => The amplified fragment length polymorphism method (AFLP) -- the abbreviation must be cited in the abstract next to the spelled-out term, since it appears later without reference\nPersian sturgeons from the South Caspian Sea, Russian sturgeons from the Caspian Sea and the Sea of Azov, and Siberian sturgeons from the Ob’ River, which are close to these - which these? Change 'these' to 'the latter two' - two species, but are also clearly morphologically and genetically distinct from them.\nMain text has to be checked carefully with a native English speaker for word usage.", "responses": [ { "c_id": "2369", "date": "15 Dec 2016", "name": "Alexey Sergeev", "role": "Author Response", "response": "We greatly appreciate the detailed analysis of the article and the very useful comments of Dr. Pervouchine. All suggestions will be applied to the edited version of the article. Some clarifications are needed. In this study the mitochondrial genome was not investigated at all. This will be mentioned in the article. In fact, mitochondrial DNA of A. gueldenstaedtii has only two restriction sites recognized by the EcoRI endonuclease and about forty sites matching MspI. In our research we analyzed those fragments of the AFLP pattern which meet the following conditions: 1) the AFLP band must have a length between 100–400 bp, 2) the amplified fragment must be produced with the EcoRI selective primer (since only the EcoRI primer carries the fluorescent dye), 3) the bases that follow the restriction site should be complementary to the selective bases of the primers at the final step of amplification. 
This mtDNA has only two restriction fragments which could be produced, but they do not meet these conditions. Previous studies showed the weak ability of mitochondrial DNA markers to perform exact species identification of individuals from this sturgeon species group and to clarify their phylogenetic relations. The main goal of this work was to analyze nuclear markers as much more informative ones. The AFLP method allows obtaining a large set of anonymous nuclear markers of the genome. AFLP profiles reveal patterns of nuclear DNA markers obtained from the whole genome and reflect its polymorphism. Interrogating these profiles, we can estimate the similarity of the sample genomes and statistically verify the significance of AFLP pattern differences, which reflect the similarity of nuclear genomes. By examining the differences between the populations, computing genetic distances, and taking into account former geological events, we can make suggestions on the approximate time of population and species separation (Nei, 1972). Obviously, this method has some limitations. Dominant markers are applicable for polyploid genome study but are less informative than co-dominant markers (Guillot and Carpentier-Skandalis, 2010). We obtained a large marker set from nuclear DNA, but these markers are anonymous. We cannot distinguish which of them are selectively neutral and more informative. We work with them in aggregate, which can somewhat distort the whole picture. Therefore, it is not correct to draw ultimate phylogenetic conclusions based only on these data. However, these results can be very useful in comparison with data obtained from other approaches. This will also be mentioned in the article. Many thanks for the very useful comments. They will be very helpful for improving the manuscript. 1. Guillot G and Carpentier-Skandalis A. 2010. On the Informativeness of Dominant and Co-Dominant Genetic Markers for Bayesian Supervised Clustering. 
The Open Statistics & Probability Journal, 3(1), December 2011. DOI: 10.2174/1876527001103010007 2. Nei, M. (1972). \"Genetic distance between populations\". Am. Nat. 106: 283–292. doi:10.1086/282771" } ] } ]
1
https://f1000research.com/articles/5-2807
https://f1000research.com/articles/5-2906/v1
22 Dec 16
{ "type": "Clinical Practice Article", "title": "Atypical presentations and treatment variations of pelvic congestion syndrome: A four patient case series", "authors": [ "David Greuner", "Andrew Amorosso", "Arno Rotgans", "Chris Hollingsworth", "Adam Tonis", "David Greuner", "Arno Rotgans", "Chris Hollingsworth", "Adam Tonis" ], "abstract": "This is a retrospective case review of four patients randomly selected from a pool of 43 patients who presented to our practice with historically atypical symptoms for pelvic congestion syndrome, and the treatment they received. These 43 patients were treated between June and December of 2016. Each patient presented with various atypical symptoms including chronic lower back pain, urinary frequency and incontinence, hip pain, tenesmus, and uncontrollable flatulence. Diagnostic abdominal and pelvic duplex ultrasound and fluoroscopic venography was performed on all patients with informed consent. The four selected patients for this study were all positive for pelvic venous reflux, pelvic venous insufficiency and ovarian/gonadal vein reflux and varicosities. All four of the patients selected in this retrospective study were examined at 1 week from date of intervention and again at 1 month from date of intervention. At the 1 week postoperative exam all four patients had experienced significant resolution of their symptoms, although all had residual congestion present on their right side. After re-intervention to treat right sided congestion via the right gonadal vein, at the 1 month postoperative exam all four patients had experienced an almost complete resolution of symptoms.", "keywords": [ "Pelvic Congestion Syndrome", "venous reflux", "pelvic reflux", "chronic venous insufficiency", "PCS" ], "content": "Introduction\n\nPelvic congestion syndrome (PCS) presents as a constellation of symptoms all caused by the development of varicosities in veins typically drained by either the ovarian or internal iliac veins. 
Its typical presentation is one of noncyclical dull, aching pelvic pain, typically unilateral, and persisting for more than 6 months. It was present in all patients in our series. Other symptoms that are commonly associated with the condition include dysmenorrhea and dyspareunia, seen respectively in cases 1 and 4, and to some degree in all patients in our series.\n\nOur experience has shown us that many authentic pelvic congestion syndrome diagnoses are being lost in a newfound reliance on cross-sectional imaging as a diagnostic measure. We have found that to truly diagnose active pelvic congestion syndrome and treat the patient’s correlating symptomatic disease, the gold standard diagnostic imaging modality is fluoroscopic venography. The following four cases describe patients who presented to our practice with historically atypical symptoms for pelvic congestion. After our suspicions were confirmed with duplex sonography revealing ovarian vein reflux and/or pelvic varicosities, each patient underwent a fluoroscopic venogram and the appropriate image-guided treatment. Although all of these patients presented with symptoms not typically indicative of pelvic congestion, all did indeed have advanced pelvic congestion. When each patient was treated for their specific disease state in the pelvic congestion syndrome treatment spectrum, all symptoms resolved. Our intention is that documenting this experience will help other clinicians remain vigilant in ruling out pelvic congestion syndrome with fluoroscopic venography before starting patients on a long road to a differential diagnosis.\n\nA 61-year-old female with two children presented to our clinic with a chief complaint of left hip pain for 5 years, as well as pain in her lower back and pelvis, dysmenorrhea, and urinary frequency present for 2 years. Her hip pain was dull and aching in quality, though severe enough to interfere with exercise. 
She denied paresthesia radiating from her back down into either thigh, and she denied any history of prior trauma to the area. The patient had been seen both by orthopedists and pain management specialists, and both an MRI and a bone scan failed to pinpoint an etiology. She had taken non-steroidal anti-inflammatory drug (NSAID) regimens, systemic corticosteroids, and ultrasound-guided steroid injections without any improvement in her pain.\n\nThe patient’s past medical history was also notable for chronic venous insufficiency refractory to conservative and operative management, with endovenous thermal ablation therapy for the great saphenous veins in both legs. Three months after operative treatment, the patient reported continued pain, heaviness, fatigue, throbbing, cramping, and edema in both legs, aggravated by prolonged standing, and consistent with persistent chronic venous insufficiency.\n\nOn physical examination, the lower extremities were remarkable for spider veins, reticular veins, varicose veins, and mild pigmentation in her ankles and calves. Infragluteal varices were present. There was some mild pitting edema present, around the ankle and calf bilaterally.\n\nA duplex sonogram was performed on both the pelvis and lower extremities. On sonographic examination of her pelvis, reflux was noted in her pelvic veins. In the legs, persistent reflux was noted in superficial veins in both legs.\n\nA fluoroscopic venogram was performed on the patient, with access obtained at the right common femoral vein. The study noted significant reflux in both internal iliac veins and reflux and stenosis in the left common iliac vein. Delayed washout of contrast was seen in the cross-pelvic collaterals of the tributaries off the internal iliac veins. Most notably, significant varicosities with delayed contrast washout were seen extending off the left ovarian vein (Figure 1) and communicating with the internal iliac veins. 
The varicose circulation from the refluxing left gonadal vein included prominent lateral extension towards the left side of the patient. Embolization of the left ovarian vein and collaterals from both internal iliac veins was performed. Post-embolization venography performed immediately afterward demonstrated successful closure of these refluxing venous branches.\n\nThe patient’s post-procedure course was uneventful. When seen again in our clinic 8 days later, her symptoms were significantly improved. Her hip pain had almost completely resolved, and at her 4-week post-operative checkup the patient reported she was again able to exercise without any impediment.\n\nCase 2\n\nA 34-year-old female with 3 children by Cesarean section presented with an 8-year history of back pain. The pain was described as a dull ache, which increased throughout the day and improved after long periods in a recumbent position. She denied any radiation of her pain down her thighs. She denied any history of trauma. She did, however, note some increase in the severity of her pain with increases in temperature and around the time of her menses. She had been on NSAIDs and had undergone multiple spinal manipulations with a chiropractor with no improvement in her symptoms. An MRI of the lumbar spine showed only a mild L4 disc protrusion, and a nerve conduction study (NCS) showed no evidence of lumbosacral radiculopathy.\n\nUpon further questioning, the patient noted a dull ache on the left side of her pelvis, and pain, cramping, and edema in both legs. As with her back pain, these symptoms would worsen with both daily activity and prolonged standing or sitting, and were worse on the left side. Her leg symptoms would improve slightly with leg elevation.\n\nOn physical examination, the lower extremities were remarkable for varicose veins and mild pitting edema noted around the ankle and calf bilaterally.\n\nA duplex ultrasound examination was performed on the pelvis. 
A dilated left ovarian vein was visualized with 4.2 seconds of reflux.\n\nAn initial fluoroscopic venogram was performed that demonstrated severe reflux in her internal iliac veins bilaterally, with cross-pelvic collateral circulation lying near the sacrum, and also showed significant dilatation and reflux in both a left primary and an accessory ovarian vein, both terminating in large and dilated pelvic varicosities that communicated with the left internal iliac vein and the right internal iliac vein via cross-pelvic collaterals. An intravascular ultrasound (IVUS) catheter noted dilation of the right ovarian vein as well. Embolization of both left ovarian veins was performed using a polidocanol and gelfoam slurry. The patient was brought back to the radiology suite 2 weeks later for a repeat venogram. During this procedure, the pelvic varicosities extending off branches of the left internal iliac vein approaching the sacrum were embolized with gelfoam slurry and detachable coils. Closure of the left ovarian vein from the prior procedure was confirmed with a left renal venogram, and a post-embolization venogram confirmed closure of the internal iliac varicosities.\n\nRoughly 7 weeks after her two embolizations, the patient reported vast improvement in her symptoms on the left, both in her pelvis and in her back. She did, however, have persisting symptoms on her right side, and in addition noted post-coital tenderness that she was now noticing as some of her other discomfort was resolving. A follow-up sonogram demonstrated resolution of the left ovarian vein reflux, but significant reflux and varicosities on the right side of her pelvis consistent with right-sided pelvic congestion. 
At the 10-week mark from her initial treatment she returned to the procedure suite for a third venogram with embolization of the right gonadal vein (Figure 2), and at the 12-week post-operative point she had full resolution of her symptoms.\n\nCase 3\n\nA 42-year-old multiparous woman presented to our clinic with an 8-year history of tenesmus, fecal incontinence, and uncontrollable flatulence. These symptoms began shortly after the birth of her first child. The pregnancy was notable for a significant amount of pain and swelling in her legs and pelvis, and the vaginal delivery was assisted with an episiotomy. Although she had some improvement in her symptoms in the months after that pregnancy, she has since had 4 more pregnancies. Despite having Cesarean sections for the subsequent deliveries, each pregnancy worsened her incontinence and overall bowel control. She had been evaluated by gastroenterologists for these problems, as well as for gas and bloating in her abdomen. She had undergone endoscopies and multiple imaging studies, and was diagnosed with irritable bowel syndrome. Lastly, the patient had a history of hemorrhoids, which had bled on at least one occasion.\n\nIn addition to her bowel symptoms, the patient reported urinary frequency, dyspareunia, pelvic heaviness, and ovarian pain. Her pain would worsen around the time of her menses. She had been followed by a gynecologist for these issues, which were attributed to endometriosis. Her obstetrician had also attributed her incontinence issues to the episiotomy and postpartum pelvic floor dysfunction, and had suggested a potential sacral nerve stimulator implant to help her.\n\nThe patient also noted some cramping, heaviness, and swelling in both legs. These symptoms would worsen with daily activity, as well as with prolonged standing or sitting. 
She has had hemorrhoids and varicose veins over her perineum, and has had intermittent bleeding from both.\n\nDespite her extensive workup by both gastroenterologists and gynecologists, the patient had been unable to find any relief for her numerous problems.\n\nOn physical examination, the lower extremities were remarkable for spider veins, varicose veins, prominent perineal veins, and vulvar varicosities. Mild pitting edema was noted as well. The abdomen was soft, with some ovarian tenderness on palpation. Duplex examination of the pelvis was performed. A collection of pelvic varicosities was seen surrounding the rectum.\n\nA diagnostic fluoroscopic venogram was performed using a right common femoral vein approach. Moderate reflux was noted in the common iliac veins and significant reflux in the internal iliac veins, with delayed contrast washout noted in several varices and cross-pelvic collaterals. The exam, however, was most notable for an immensely dilated left ovarian vein and accessory left ovarian vein. They were roughly 28–30 mm in diameter, showing delayed contrast washout and collateralization with the internal iliac veins. A large and aneurysmal terminal branch of the vessels was noted to lie on the sigmoid colon. The left ovarian vein, its accessory, and its branches were embolized with a combination of gelfoam slurry and a 20 mm detachable coil. Embolization of the internal iliac branches bilaterally was performed with gelfoam. Post-embolization venography performed immediately after embolization demonstrated successful closure of all concerning branches.\n\nAt 1 week out from her embolization, the patient had almost complete resolution of both her flatulence and her tenesmus. At the 4-week post-operative mark she had improvement in her bladder control and dyspareunia as well. 
She had some residual pelvic pain, and was scheduled to return for another diagnostic venogram with intravascular ultrasound (IVUS) to assess the reflux in her common iliac veins.\n\nCase 4\n\nA 36-year-old multiparous woman presented to our clinic for pain, cramping, and edema in both legs of roughly 5 years’ duration. Of note, she was having significant back pain as well, which she chose to disclose only after experiencing relief following venogram with successful embolization of the left gonadal vein and collaterals from the refluxing internal iliac veins.\n\nIn regard to her presenting complaint, her symptoms worsened over the course of the day and improved with leg elevation. They had shown no improvement with compression therapy and daily NSAIDs. Duplex sonography showed superficial venous reflux, which was ablated. Postoperative sonograms showed successful closure of each vein.\n\nFour months later, the patient experienced a relapse of her leg pain and cramping. A repeat sonogram was performed, which showed reopening of all previously ablated veins. A sonogram was performed on the pelvis, which showed 2.9 seconds of reflux in the left ovarian vein. On further questioning and examination, the patient was found to have pelvic pain and varicosities in her inguinal area, extending to the vulva, most prominently on the left. These varicosities had been a cause of dyspareunia for the patient for several years, beginning shortly after the birth of her third child.\n\nA diagnostic fluoroscopic venogram was performed from a right common femoral vein approach. Severe reflux was seen in the left internal iliac system, with cross-pelvic flow to the right side as well as retrograde contrast filling of the left ovarian vein. The left ovarian vein was almost as large as an iliac vein, roughly 14 to 16 mm in diameter. The left ovarian vein was embolized with a combination of polidocanol and gelfoam slurry. 
A post-procedure venogram performed immediately after embolization, however, showed only moderate closure of the vein, and intervention in the internal iliac system was prevented by vasospasm.\n\nA repeat venogram was performed 10 days later, again with a right common femoral vein approach. Reflux was still seen in the branches off both internal iliac veins. The left ovarian vein remained open and dilated, with delayed contrast washout. Embolization was again performed, this time on the left ovarian vein and on the collaterals extending from both internal iliac veins, using a combination of gelfoam slurry and a 13 mm detachable coil. Post-embolization venography immediately after coil placement showed successful closure of the refluxing veins. Visualization of the right iliac veins was performed using an IVUS catheter and showed no evidence of stenosis.\n\nAt 8 weeks out from our first intervention, the patient had significant reduction in her pelvic pain. She also reported a reduction in her back pain, which she had first disclosed after experiencing relief from the venogram and embolization procedures.\n\n\nDiscussion\n\nPelvic congestion syndrome (PCS) presents as a constellation of symptoms, all caused by the development of varicosities in veins typically drained by either the ovarian or internal iliac veins. Its typical presentation is one of noncyclical dull, aching pelvic pain, typically unilateral, and persisting for more than 6 months. It was present in all patients in our series. Other symptoms commonly associated with the condition include dysmenorrhea and dyspareunia, seen respectively in cases 1 and 4, and to some degree in all patients in our series.\n\nPCS is also seen in association with lower extremity varicose veins. 
As many as 15–20% of patients with lower limb varicosities have demonstrated pelvic venous reflux on venogram or duplex ultrasound, with a 30% incidence among those patients with varicosities that have recurred after prior treatment1. On physical examination, these patients may often have varicosities on the vulva, on or just beneath the buttocks, and on the upper thighs, particularly near the groin, owing to reflux from the pelvic veins communicating to the legs via the inferior gluteal and internal/external pudendal veins2. In our series, cases 1 and 4 had reopening of veins previously closed by endovenous thermal ablations. Cases 1, 3, and 4 presented with either vulvar or infragluteal varices.\n\nIt is important to remember that PCS causes symptoms through local compression, and consequent irritation and inflammation, of organs in proximity to the engorged and swollen plexus of pelvic veins. Since these veins drain the bladder, vagina, uterus, rectum, and sacrum, PCS can be responsible for several atypical symptoms, depending on the organ compromised. Some form of bladder instability is not uncommon, and was seen in cases 1 and 3. Irritation of the lumbosacral nerves, although uncommon, can result in either back or hip pain. In our series, back pain was seen in cases 1 and 2. Hip pain was seen in case 1. The finding of hip pain is particularly rare, with only 2 other cases reported in the literature3. Pressure of these veins on the rectum, or perhaps reflux through collaterals feeding off a refluxing ovarian vein, can result in hemorrhoids, a finding seen in case 3. The finding of a dilated pelvic varicosity compressing the sigmoid colon and causing both tenesmus and uncontrollable flatulence in case 3 has never before been reported.\n\nAs shown in our first three cases, this atypical presentation of PCS can be the predominant issue impacting the patient’s quality of life. 
These atypical presentations are frequently underdiagnosed, as seen by the extensive and ultimately unfruitful workups experienced by 3 of the 4 patients in our series. It is therefore important to maintain a low threshold of suspicion for PCS in any patient with symptoms that can be attributed to an organ in the pelvis.\n\nOur workup for suspected PCS begins with pelvic sonography. Although there has been literature championing the efficacy of either CT venography (CTV) or, more often, magnetic resonance venography (MRV), in our experience both studies have had a lower diagnostic yield4–6. Both imaging modalities require proper timing to catch the contrast in the pelvis during the venous phase, and are therefore very operator dependent. The studies also require the subject to be supine, a position that can compress ordinarily dilated pelvic veins. They are also expensive. In contrast, pelvic sonography is inexpensive, and can be performed with the patient upright, as the examiner looks for both dilated pelvic veins and flow reversal during Valsalva maneuvers7. The operator-dependent nature of sonography demands technicians with high levels of training and experience in finding and reporting pelvic reflux and varicosities.\n\nAs with CTV and MRV, the sonogram is also capable of ruling out other pelvic etiologies. Findings suggestive of PCS on ultrasound include ovarian veins >6 mm in diameter, the presence of dilated (>5 mm) arcuate veins crossing the uterine myometrium, and slow (<3 cm/sec) or reversed (retrograde) blood flow2,6. Polycystic changes in the ovaries have also been seen in about 50%4,6 of cases.\n\nDiagnostic venograms remain the gold standard in identifying PCS, and are indicated for all suspicious presentations, even, in the appropriate clinical setting, if the other studies are negative. It is a low-risk procedure, and it permits treatment, and potential cure of the patient’s symptoms, at the same sitting. 
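The sonographic thresholds cited above can be collected into a simple screening rule. The sketch below is purely illustrative — the function and field names are our own, and it is not a validated clinical decision tool; it only restates the numeric criteria from the text (ovarian veins >6 mm, arcuate veins >5 mm, flow <3 cm/sec or reversed).

```python
def pcs_ultrasound_findings(ovarian_vein_mm, arcuate_vein_mm,
                            flow_cm_per_s, flow_reversed):
    """Return the list of ultrasound findings suggestive of PCS.

    Illustrative only; thresholds are those quoted in the text
    (references 2 and 6), not an endorsed diagnostic algorithm.
    """
    findings = []
    if ovarian_vein_mm > 6:
        findings.append("dilated ovarian vein (>6 mm)")
    if arcuate_vein_mm > 5:
        findings.append("dilated arcuate veins crossing the myometrium (>5 mm)")
    if flow_reversed or flow_cm_per_s < 3:
        findings.append("slow (<3 cm/sec) or reversed flow")
    return findings
```

For example, a 7 mm ovarian vein with normal arcuate veins but reversed flow would yield two suggestive findings, whereas a study entirely within these limits would yield none.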
Evaluation of the left renal, bilateral ovarian, iliac, and internal iliac veins is performed, looking for dilatation and reflux. Venography findings include an ovarian vein diameter >5 mm, ovarian vein reflux, stagnation of contrast in the pelvic veins, contralateral reflux across the midline, and filling of vulvoperineal or thigh varices2,4. If there is a question of a stenotic lesion in either the renal or iliac vein, pressures across the area in question can be measured, or an intravascular ultrasound catheter can be placed. In all of our patients, pelvic venograms successfully identified the condition.\n\nIn our clinic, we perform diagnostic venography with the patient awake. We begin our assessment at the most common site of pelvic reflux, the left ovarian vein. If reflux is present, we attempt to reproduce the patient’s symptoms using high-pressure contrast flushes through a 5-French sheath. If the venous distension created by the flush reproduces symptoms, and the anatomic distribution of the distended vessels corresponds to those symptoms, we embolize the vessel. Our preference in the ovarian vein is to use gelfoam or polidocanol for the terminal branches below the pelvic brim and an oversized (at least 150% of the diameter of the ovarian vein) coil in the ovarian vein, preferably near the ostia of any visualized ovarian collaterals, and no less than 4 mm from its confluence with the renal vein.\n\nAfter the treatment, we revisit the patient in roughly 1 week. If the patient is showing improvement, we repeat the pelvic sonogram to determine whether any reflux remains. If reflux is noted, we perform a second venogram and embolization of pelvic varices fed by the refluxing internal iliac veins, and potentially the right ovarian vein if the patient’s symptoms are significantly greater on the right. Our preference is to treat the internal iliac branches with either sclerotherapy or gelfoam alone, given the higher risk of coil migration noted in this region. 
If the right ovarian vein is to be targeted, we consider a right internal jugular access to allow for easier cannulation.\n\nDuring this second procedure, we also employ an IVUS catheter to look for stenotic lesions. Pelvic reflux can frequently be secondary to anatomic anomalies resulting in downstream obstruction, such as compression of the left renal vein by the superior mesenteric artery (Nutcracker syndrome)1,2,7, or compression of the left common iliac vein by the right common iliac artery (May-Thurner syndrome)8. A May-Thurner phenomenon can be present in as many as two-thirds of the general population9. If any such lesions are identified, a full discussion with the patient ensues regarding the anatomy affected by the stenosis, and the benefits and drawbacks of angioplasty alone versus stenting.\n\nIf the patient wishes to proceed, we return for a third venogram to treat the stenotic segments. For milder lesions (≤ 50% reduction in cross-sectional area), or for focal, short lesions caused by a localized vascular band or web, we attempt angioplasty alone. For a longer and more severe stenotic segment (> 50%), we prefer both angioplasty and stenting. As with our coils, our preference is to oversize our stents (roughly 4 mm greater than the average diameter of the non-stenotic venous segment). We prefer to avoid stenting for as long as possible in women who intend to become pregnant and in patients with clotting disorders or prior DVTs, given the higher incidence of post-stent complications in these patient populations.\n\nTranscatheter embolization is currently regarded as the least invasive and most efficacious management option for PCS, with complete or partial symptom relief in 68.2–100% of patients2,10. Apart from our preferences, a number of different approaches have been reported to effect closure of the refluxing veins, from simple coil embolization, to glue embolization, to combinations of sclerotherapy and coils. 
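The sizing and lesion-selection preferences described above reduce to simple arithmetic. The sketch below is an illustrative summary of those stated preferences only — function names are our own, and it is not clinical guidance: coils oversized to at least 150% of the ovarian vein diameter, stents roughly 4 mm over the non-stenotic segment, and angioplasty alone for mild (≤50%) or focal lesions versus angioplasty plus stenting for longer, more severe (>50%) stenoses.

```python
def minimum_coil_diameter_mm(ovarian_vein_mm):
    # Coil oversized to at least 150% of the ovarian vein diameter
    return 1.5 * ovarian_vein_mm

def stent_diameter_mm(normal_segment_mm):
    # Stent roughly 4 mm greater than the average diameter
    # of the non-stenotic venous segment
    return normal_segment_mm + 4.0

def lesion_plan(area_reduction_pct, focal_band_or_web):
    # <=50% cross-sectional area reduction, or a focal band/web:
    # angioplasty alone; longer, >50% lesions: angioplasty + stent
    if area_reduction_pct <= 50 or focal_band_or_web:
        return "angioplasty alone"
    return "angioplasty + stent"
```

For instance, a 10 mm ovarian vein would call for a coil of at least 15 mm, and a 60% non-focal stenosis next to a 12 mm normal segment would be planned for angioplasty plus a roughly 16 mm stent.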
In studies using visual analog scale (VAS) pain scores to measure the extent of symptom relief, vast improvement was shown consistently, with mean scores of 7.3–7.6 decreasing to scores of 0.5–3.2 postoperatively. Depending on the initial severity, the symptoms can take as long as 9–13 months after therapy to resolve10. Complications from the procedure are rare, being reported in 3.4–4.4% of patients, and consist of coil migration, vein perforation, local phlebitis, deep venous thrombosis, and contrast reactions10. Patients may suffer from a brief period of flu-like symptoms within 72 hours of each embolization, but this consistently passes within 48 hours without need for intervention. Studies have also shown embolization to have no effect on the menstrual cycle or fertility4. This is an important consideration, given that the predominance of PCS is in premenopausal women.\n\nWith respect to the management of PCS secondary to stenotic vein lesions, endovenous stent placement is also safe and effective. In a review of multiple studies encompassing 1500 patients being treated for chronic iliac vein stenosis, stenting had a 90–100% patency rate for non-thrombotic disease and 74–89% patency rate for thrombotic disease at 3 to 5 years. Symptom relief was achieved in 86–94% of patients for pain, 66–89% for swelling, and 58–89% for healing of ulcers in the leg11. Although these studies focused on the outcomes of patients suffering from venous insufficiency in the legs, it is not unreasonable to assume that treatment of the stenosis would similarly benefit patients with PCS, given that both ailments are due to venous obstruction causing congestion in communicating upstream veins. 
Among the 1500 patients reviewed, no deaths or pulmonary emboli were reported, and access site complications and significant bleeding, despite the larger sheaths required for the stents, occurred in only 0.03–1%11.\n\nGiven the effectiveness and safety of these procedures, our practice recommends aggressive management of patients whose symptoms and venogram findings suggest the presence of PCS.\n\n\nConclusion\n\nPelvic congestion syndrome is an underdiagnosed condition in premenopausal women that presents with a broad spectrum of symptoms that are often attributed to other pathologies. It is important to maintain a high level of suspicion in parous women whether the presenting symptoms are typical or atypical in nature. The complexity of the abdominal and pelvic venous vasculature in the premenopausal patient can distort imaging accuracy and cloud the patient’s ability to properly articulate her symptoms. With modern advancements in diagnostic imaging, many clinicians feel MRV to be definitive in diagnosing pelvic congestion, but our experience has proven otherwise. Cross-sectional imaging relies heavily on timing, patient immobility, and the technical skill of the performing technologist, and even then can still produce false-positive results2,4,6. Our experience has shown us that fluoroscopic venography is the gold-standard diagnostic imaging modality, providing anatomical correlation as well as physiological confirmation of our suspicions. In this specific patient population, a diagnostic venogram will go far in preventing patients with PCS from missing out on effective and low-risk treatment.\n\n\nConsent\n\nWritten informed consent for publication of their clinical details was obtained from each patient in this paper.
All four authors were substantially involved in drafting the intellectual content and in subsequent revisions of this paper. All four authors approved the final version of this paper for publication and agree to be accountable for all aspects of the work therein. All authors ensure that any questions related to the accuracy or integrity of any part of this work will be appropriately investigated and resolved. AT served as a scientific advisor for the entire duration of the data gathering phase of this research. AT also contributed by critically evaluating all drafts of this paper in the editing process.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nLopez AJ: Female Pelvic Vein Embolization: Indications, Techniques, and Outcomes. Cardiovasc Intervent Radiol. 2015; 38(4): 806–820. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurham JD, Machan L: Pelvic congestion syndrome. Semin Intervent Radiol. 2013; 30(4): 372–380. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShelkey J, Huang C, Karpa K, et al.: Case report: pelvic congestion syndrome as an unusual etiology for chronic hip pain in 2 active, middle-age women. Sports Health. 2014; 6(2): 145–148. PubMed Abstract | Free Full Text\n\nBittles MA, Hoffer EK: Gonadal vein embolization: treatment of varicocele and pelvic congestion syndrome. Semin Intervent Radiol. 2008; 25(3): 261–270. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGreiner M, Dadon M, Lemasle P, et al.: How does the patho-physiology influence the treatment of pelvic congestion syndrome and is the result long-lasting? Phlebology. 2012; 27(Suppl 1): 58–64. PubMed Abstract | Publisher Full Text\n\nIgnacio EA, Dua R, Sarin S, et al.: Pelvic congestion syndrome: diagnosis and treatment. Semin Intervent Radiol. 2008; 25(4): 361–368. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiddle AD, Davies AH: Pelvic congestion syndrome: chronic pelvic pain caused by ovarian and internal iliac varices. Phlebology. 2007; 22(3): 100–104. PubMed Abstract | Publisher Full Text\n\nMarsh P, Holdstock J, Harrison C, et al.: Pelvic vein reflux in female patients with varicose veins: comparison of incidence between a specialist private vein clinic and the vascular department of a National Health Service District General Hospital. Phlebology. 2009; 24(3): 108–113. PubMed Abstract | Publisher Full Text\n\nRaju S, Darcey R, Neglén P: Unexpected major role for venous stenting in deep reflux disease. J Vasc Surg. 2010; 51(2): 401–408; discussion 408. PubMed Abstract | Publisher Full Text\n\nMeissner MH, Gibson K: Clinical outcome after treatment of pelvic congestion syndrome: sense and nonsense. Phlebology. 2015; 30(1 Suppl): 73–80. PubMed Abstract\n\nRaju S: Best management options for chronic iliac vein stenosis and occlusion. J Vasc Surg. 2013; 57(4): 1163–1169. PubMed Abstract | Publisher Full Text" }
[ { "id": "19932", "date": "20 Feb 2017", "name": "Mark S. Whiteley", "expertise": [], "suggestion": "Not Approved", "report": "I was very interested to read this article as the investigation and treatment of pelvic congestion syndrome (PCS) and pelvic venous reflux (PVR) is very topical at the moment and is a major area of interest for me. As clinicians seeing these patients, we need good quality research to develop our understanding of this complex collection of signs and symptoms, which may have underlying causes of both venous reflux and/or venous obstruction. I congratulate the authors on their attempt to define their rationale for their approach to this complex condition. However I do have some concerns about version 1 of this paper as it appears at present.\nI will detail these concerns below but before getting into the detail, there are two main points that need to be addressed. The first is why only 4 cases were presented when the authors state that they have seen 43 such atypical presentations. The second is why the paper concludes that fluoroscopic venography is a gold standard investigation for PCS when this is essentially a case series concentrating on atypical presentations of PCS and is not a study comparing fluoroscopic venography with any other method of diagnosis. As discussed in the review of the cases that they have presented, an alternative strategy might have been more beneficial to patients and reduced the need for invasive venography. 
To address the points in order as they appear within the version 1 paper.\nAbstract: The first and most obvious question, as outlined above, is why the authors would choose 4 cases “at random” from 43 who presented with atypical symptoms for PCS. As we are at the stage of trying to understand PCS, and the symptoms that PCS may cause1, the authors are missing a valuable opportunity to write up a case series of 43 atypical cases. To write up all 43 and to show trends and common features in these would represent a large series of atypical presentations of PCS and would be a very useful contribution to the literature. I cannot understand the rationale to present only 4 at random of 43 cases, as such a small sample of the cases (<10% of the total) could potentially miss some vital associations in such cases.\n\nThe authors do not make it clear where the venous reflux/insufficiency was found. They state: “The four selected patients for this study were all positive for pelvic venous reflux, pelvic venous insufficiency and ovarian/gonadal vein reflux and varicosities.” However I am unclear as to what they mean by this as some of the terms overlap. As written it appears to say that these patients have ovarian/gonadal vein reflux and varicosities, but also says they have “pelvic venous insufficiency” and also “pelvic venous reflux.” These terms are generally used to mean the same thing and, if the authors aren’t using these terms in this way, do they mean other pelvic veins? This needs to be made clearer.\nThe conclusion of the abstract does not fit with the conclusion of the paper. This is addressed in the conclusion section.\n\nIntroduction:\nThe authors give their opinion as to the validity of fluoroscopic venography without supporting evidence. 
They state “We have found that to truly diagnose active pelvic congestion syndrome and treat the patient’s correlating symptomatic disease, the gold standard diagnostic imaging modality is fluoroscopic venography.” The authors should reference where they have published this finding from studying it appropriately or, if they have not done such a study, they should reference the study they use that does justify this claim. As PCS is difficult to diagnose, for the authors to state that they can “truly diagnose” the condition with venography, there needs to be very good evidence supporting this. If evidence cannot be produced, it should be made clear that this is the authors’ opinion, and even that it may be a widely held opinion, especially in the light of our work with transvaginal duplex ultrasound scanning (TVS)2.\n\nOur own work suggests that transvaginal duplex ultrasound (TVS) performed with our specific protocol is the gold-standard way of assessing pelvic venous reflux.\nThe authors do not state whether their duplex ultrasound examination was external (transabdominal) or transvaginal: “After our suspicions were confirmed with duplex sonography revealing ovarian vein reflux and/or pelvic varicosities … ” In the discussion, they do say it can be performed standing and during Valsalva, but as they don’t comment on the imaging of internal iliac veins and their territories, it appears that it is an external trans-abdominal scan.\nTrans-abdominal duplex ultrasound is not able to see reflux in distal ovarian veins, internal iliac vein tributaries, nor deep varicosities in most normal patients – hence the superiority of TVS over trans-abdominal scanning. 
As our published research shows that most pelvic vein reflux is in the internal iliac vein tributaries2,3, the authors should note that if their criteria for proceeding to venography are as stated: “suspicions were confirmed with duplex sonography revealing ovarian vein reflux and/or pelvic varicosities …” then they are potentially missing some cases too – something they have criticised others for doing by relying on cross-sectional imaging.\nThe authors state that all 4 patients: “did indeed have advanced pelvic congestion”. How was this confirmed to be “advanced”? Do they mean there was massive reflux in the relevant pelvic vein? Or that more than one pelvic vein was involved? Or that there were large varicosities? As there is no proven classification for PCS yet, the authors would be better to describe what the hemodynamic abnormalities were, rather than claim that these were “advanced pelvic congestion”.\nThe authors cannot make the claim that they do in their statement: “When each patient was treated for their specific disease state in the pelvic congestion syndrome treatment spectrum all symptoms resolved” as in 2 of the 4 cases they present, there were residual symptoms noted at the end of the reported follow-up. 
(Case 3 and case 4).\nIn view of the shortcomings noted above and also some more to come below, the authors' stated aim of: “Our intention is that the documentation of this experience will help other clinicians in their vigilance to rule out pelvic congestion syndrome with fluoroscopic venography before starting patients on a long road to a differential diagnosis” might not be the optimal advice to give to other clinicians.\nDespite the current conclusion in version 1 of the paper and the authors' stated view that fluoroscopic venography is the gold standard for PCS, the cases as presented do not support this protocol when compared to a protocol starting with diagnostic TVS.\nCase 1:\nThis patient had reflux in left ovarian and bilateral internal iliac veins which is the commonest pattern of reflux2,3. This diagnosis can be made with one TVS without the need for trans-abdominal duplex followed by invasive diagnostic venography. Even if the case should be made that the diagnostic venogram is also a therapeutic intervention, the protocol and patient pathway as outlined in this report result in neither the doctor nor patient knowing the extent of the intervention (i.e. how many veins need coil embolization) before the procedure. Also, as the authors have noted that they use the femoral approach for their initial venogram, but a right jugular approach if they subsequently find the right ovarian vein requires treatment, a TVS would give them the information that they need to plan the therapeutic approach, something that their current protocol does not allow for.\n\nCase 2:\nThis patient had all 4 territories involved - bilateral ovarian vein (with accessory vein on the left) and bilateral internal iliac vein reflux.
The patient had transabdominal duplex (as in the introduction to suggest PCS), 2 venograms to attempt to treat the left ovarian and bilateral internal iliac veins reflux, IVUS to diagnose the right ovarian vein reflux – leading to a third venogram and embolization. If TVS had been used to diagnose the reflux, all 4 veins would have been found to be refluxing on the first diagnostic test and all 4 would have been targeted in the therapeutic venogram for embolization.\n\nCase 3:\nAgain, this patient had the most common pattern we have reported of left ovarian vein and bilateral internal iliac veins refluxing2,3 albeit with an accessory left ovarian vein in this case. This patient was somewhat successfully treated with one diagnostic/therapeutic venogram and so there would not be a clear advantage to state a TVS would have changed the course of this treatment. However, it needs to be noted that the symptoms are not fully resolved and the patient is due an IVUS to investigate this residual pain.\nCase 4:\nOnce again the commonest pattern of left ovarian and bilateral internal iliac vein reflux2,3 where the patient ended up having 2 venograms and a negative IVUS. They would probably have had only one therapeutic venogram with embolization if diagnosis by TVS had been the primary diagnostic test.\nDiscussion:\nThe authors state that the symptoms of PCS were found in all patients in their series. Do they mean the 4 presented or all 43 that had atypical symptoms? This is the sort of data that reporting all 43 would have elucidated.\nThe authors quote the incidence figures from a review article by Lopez4 rather than going to the source data that Lopez quotes. By doing so, they have repeated the inaccuracies that Lopez made in his article.
They state: “As many as 15–20% of patients with lower limb varicosities have demonstrated pelvic venous reflux on venogram or duplex ultrasound, with a 30% incidence among those patients with varicosities that have recurred after prior treatment”.\nHowever, the sources that Lopez quotes don’t actually give those figures. The first figure given for “patients with lower limb varicosities” was from Marsh’s paper in 20093 which only included females and actually gave the figures of approximately 20% of patients having non-saphenous lower leg venous reflux arising from the pelvis and approximately 16% having truncal pelvic vein reflux. Indeed, in men, our recent presentation at the American Venous Forum has shown it is only 3%5, and so when copying the figures from Lopez, the wrong impression of prevalence is given as there is no mention of sex.\nWith respect to the quoted “30% incidence among those patients with varicosities that have recurred after prior treatment”, once again this is taken from Lopez where he has not appreciated the data in the source paper6. In fact the 30% incidence is related to legs with recurrent varicose veins rather than patients, and this rate was only found in females who have had children. When a smaller number of men were added (group 1 in the study F:M 97:12) this dropped to 25%, showing that this is predominantly a female problem, but not exclusively5.\nThe authors state as fact what most of us believe, that “It is important to remember that PCS affects its patients due to the local compression and consequent irritation and inflammation of organs in proximity to the engorged and swollen plexus of pelvic veins.” However, to quote this as a fact requires a reference showing that inflammation has been measured in the veins or surrounding tissues, related to the venous dilation.
This becomes very important when considering PCS from obstructive causes rather than reflux, as well as when considering other possible treatment strategies such as anti-inflammatory medication.\nThe authors state that “Findings suggestive of PCS on ultrasound include ovarian veins >6 mm in diameter” and later in the same section “Venography findings include an ovarian vein diameter >5 mm.”  However ovarian vein diameter has been shown to be irrelevant in ovarian vein reflux7. Therefore the authors should state that this was their protocol and should note that there is now some doubt as to the validity of this measurement.\nOnce again the authors state that “Diagnostic venograms remain the gold standard in identifying PCS,” which requires evidence or references to good evidence. Unless there is clear proof, then this should be changed to a more accurate statement such as “Diagnostic venograms are believed by many to be the gold standard in identifying PCS,” and the authors should quote the TVS work to inform readers that this might not be the case anymore2.\nThe authors also claim that “In all of our patients, pelvic venograms successfully identified the condition.” This is not correct according to their own text. Case 2 required IVUS to diagnose the right ovarian pathology.\nThe authors have written “ … or compression of a left common iliac vein by a right common iliac artery (May-Thurner syndrome)8.” This reference is not correct. Reference 8 corresponds to the Marsh paper which looks at prevalence of non-saphenous leg vein reflux and the association with pelvic venous reflux – not May-Thurner syndrome.\nI am concerned by the statement that “Given the effectiveness and safety of these procedures, our practice recommends aggressive management of patients whose symptoms and venogram findings suggest the presence of PCS.” in that the authors had been including stents as well as coils in the preceding text. 
In view of some reports now being presented at academic meetings relating to long-term complications of venous stents, principally occlusion by intimal hyperplasia/fibrous tissue and the subsequent need for removal, reconstruction or bypass, the authors may wish to consider this statement carefully. There is good reason to add a cautionary note to this paragraph, particularly as these patients tend to be younger, being pre-menopausal women (as noted by the authors earlier in the discussion).\nConclusion:\nVersion 1 concludes: “Our experience has shown us that fluoroscopic venography is the gold standard diagnostic imaging modality providing us with anatomical correlation as well as physiological confirmation of our suspicions. In this specific patient population, a diagnostic venogram will go far in preventing patients with PCS from missing out on effective and low-risk treatment.”\nThe conclusion, as stated currently, does not arise from the information given within this paper. The paper is a case series of four patients, according to the authors, chosen at random from 43 possible cases. They have not shown that fluoroscopic venography is a gold standard within this paper. Indeed patients have had to have more than one venogram and, in one of the four cases, required IVUS to complete the diagnosis. As was shown multiple times at the end of the last century when colour flow duplex of the leg veins was being compared with venography, reflux seen on venography is not necessarily physiological. This is due to the contrast having different physical characteristics than blood and also being injected under pressure, hence possibly changing normal flow patterns.
Duplex ultrasound, by its very nature, allows observation of flow by reflection of ultrasound from the blood cells, allowing true physiological flow to be seen in real time.\nIt is also perplexing that the final sentence of the abstract, usually regarded as the conclusion statement in the scientific method, concentrates on the resolution of the atypical symptoms after treatment whilst the conclusion of the actual body of the paper tries to persuade the reader that fluoroscopic venography is a gold standard.\nIn reality, the only conclusion that can be gleaned from this very small subset (less than 10%) of the atypical presentations that this group have stated that they have seen, is that the atypical symptoms do appear to be related to PCS, as they largely resolved after correction of the reflux identified on transabdominal ultrasound and then fluoroscopic venography, and in one case IVUS. There is no evidence given to support the claims regarding fluoroscopic venography being a gold standard.", "responses": [ { "c_id": "2613", "date": "31 Mar 2017", "name": "Andrew Amorosso", "role": "Author Response", "response": "Dr. Whiteley, thank you for taking the time to give such a thorough review of our work. I want to let you know that in addition to the initial response I had sent you, your points were so glaringly clear and relevant that I am currently restructuring and rewriting this paper to be more inclusive of all of our valuable work. There is no reason why we shouldn't share all of our experience. I can tell you that I wanted to put this paper out in 2016 and that is why I put it out as a 4 patient case series. I take your suggestions to heart and hope you find the re-write I send in shortly to be more reflective of the type of work we do at our practice. Your point on the conclusion is absolutely spot on and I guess all I can say is I got lost and didn't circle back to check the two out at the end. I am focused on that as I piece together version 2.
We will have that loaded in the next week or so and I will then eagerly await your input as it is very valuable to us and the patients we treat. Thank you so much for your help. Kind Regards, Andrew" } ] } ]
1
https://f1000research.com/articles/5-2906
https://f1000research.com/articles/5-2905/v1
22 Dec 16
{ "type": "Case Report", "title": "Case Report: A giant myopericytoma involving the occipital region of the scalp - a rare entity", "authors": [ "Sunil Munakomi", "Pramod Chaudhary" ], "abstract": "Herein we report a rare case of a giant myopericytoma presenting in a 16-year-old girl as a slowly progressive swelling involving the scalp in the occipital region. It was managed by complete excision. Histological examination of the lesion revealed spindle-shaped cells forming characteristic rosettes around the blood vessels, and positive staining with smooth muscle actin.", "keywords": [ "myopericytoma", "subcutaneous lesion", "scalp" ], "content": "Introduction\n\nMyopericytoma is a rare entity. It mostly involves the skin and subcutaneous tissue of the distal extremities, torso, head and neck regions1–3. Rarely does it involve the visceral sites4,5. The spindle shaped cells of a myopericytoma show characteristic perivascular rosettes6,7. Though mostly benign, rare cases of its malignant counterpart have been described8. We report a case of a giant myopericytoma involving the occipital region of the scalp of a young female, with good post-operative outcome following its complete excision. We believe this is the first case report of a giant myopericytoma involving this region.\n\n\nCase report\n\nA 16-year-old female from Butwal, Nepal presented to our outpatient clinic with a chief complaint of slowly progressive swelling in the occipital region of the scalp, which she had been experiencing for the last 2 years. There was no history of trauma, pain, tinnitus, dizziness or discharge associated with the lesion, and no significant previous medical or surgical illnesses had been reported. Local examination revealed a soft to firm subcutaneous lesion measuring 9 × 8 cm, with no bruit within the lesion and normal overlying skin.
There was no transmitted pulsation or cough impulse, and there were no palpable bony defects felt around the margins of the lesion. Lower cranial nerve examination was normal and cerebellar signs were negative. CT findings showed a homogeneously enhanced subcutaneous lesion (Figure 1), but with no intracranial extension (Figure 2).\n\nAfter thorough counselling and consent, the patient was booked in for excision of the lesion. Adequate blood for transfusion was supplied because of the vascularity of the scalp and the giant size of the lesion. A midline incision was given, with the patient in the prone position. The edges of the lesion were vascular, with major pedicles from bilateral occipital arteries. Complete excision was undertaken (Figure 3). Intra-operatively, the patient was transfused two pints of blood. Post-operative recovery was uneventful and she was discharged on the third day. Histological examination of the lesion revealed the presence of spindle-shaped cells, forming characteristic rosettes around the blood vessels. Positive staining for smooth muscle actin (SMA) was highly suggestive of myopericytoma (Figure 4), and the lack of mitotic cells or tissue necrosis confirmed its benign nature. Patient follow-up took place 2 weeks later, with no symptoms and a well-healed wound. She was advised to come for periodic follow-ups every month.\n\n\nDiscussion\n\nMyopericytoma has been described as a type of perivascular tumor in the latest edition of World Health Organization classification of tumors of soft tissue and bone6. Histologically it is characterized by spindle cells forming perivascular rosettes and staining positive for SMA and negative for Desmin, Bcl2 and CD349. Though usually the size of a myopericytoma is less than 2 cm in superficial soft tissue, larger tumor size has been reported in the visceral locations10,11. Some cases of the malignant form showing invasion, mitotic figures and necrosis have been described6.
These malignant forms also show a high Ki-67 proliferative index, contrary to benign forms with low Ki-67 index12.\n\nPrior to diagnosing the myopericytoma, the initial major differential diagnosis was a giant diffuse lipoma. Other differential diagnoses included other mesenchymal lesions, like desmin positive angioleiomyomas, glomus tumors in which epithelioid cells form rosettes, and solitary fibrous tumors, which do not form visible perivascular rosettes13. These can be differentiated on the basis of their characteristic immunohistological reactivity patterns, such as positive staining with SMA and often also with h-Caldesmon9.\n\nRecurrence of the tumor can occur, even in benign cases, so complete excision should be the goal13. Following complete excision, patients should return for periodic follow-ups despite the benign nature of the tumor.\n\n\nConclusion\n\nThough a rarity, myopericytoma should be ruled out prior to surgical management of subcutaneous lesions, because sometimes the highly vascular nature of the lesion may impose difficulties during its excision and pose a risk to the patient’s life if adequate arrangements for blood transfusions have not been made.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s details and their images was obtained from the guardian of the patient.", "appendix": "Author contributions\n\n\n\nBoth authors contributed equally to the acquisition of data, literature review and preparation of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nCalonje E, Fletcher CD: Vascular Tumors. Philadelphia: Churchill Livingstone, Elsevier; 2013.\n\nDíaz-Flores L, Gutiérrez R, García MP, et al.: Ultrastructure of myopericytoma: a continuum of transitional phenotypes of myopericytes. Ultrastruct Pathol. 2012; 36(3): 189–194.
PubMed Abstract | Publisher Full Text\n\nMentzel T, Dei Tos AP, Sapi Z, et al.: Myopericytoma of skin and soft tissues: clinicopathologic and immunohistochemical study of 54 cases. Am J Surg Pathol. 2006; 30(1): 104–113. PubMed Abstract\n\nAkbulut S, Berk D, Demir MG, et al.: Myopericytoma of the tongue: a case report. Acta Medica (Hradec Kralove). 2013; 56(3): 124–125. PubMed Abstract | Publisher Full Text\n\nNumata I, Nakagawa S, Hasegawa S, et al.: A myopericytoma of the nose. Acta Derm Venereol. 2010; 90(2): 192–193. PubMed Abstract\n\nFletcher CD, Bridge JA, Hogendoorn PC, et al.: World Health Organization Classification of Tumours of Soft Tissue and Bone. Lyon: IARC Press; 2013. Reference Source\n\nFisher C: Unusual myoid, perivascular, and postradiation lesions, with emphasis on atypical vascular lesion, postradiation cutaneous angiosarcoma, myoepithelial tumors, myopericytoma, and perivascular epithelioid cell tumor. Semin Diagn Pathol. 2013; 30(1): 73–84. PubMed Abstract | Publisher Full Text\n\nMcMenamin ME, Fletcher CD: Malignant myopericytoma: expanding the spectrum of tumours with myopericytic differentiation. Histopathology. 2002; 41(5): 450–460. PubMed Abstract | Publisher Full Text\n\nDray MS, McCarthy SW, Palmer AA, et al.: Myopericytoma: a unifying term for a spectrum of tumours that show overlapping features with myofibroma. A review of 14 cases. J Clin Pathol. 2006; 59(1): 67–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLau SK, Klein R, Jiang Z, et al.: Myopericytoma of the kidney. Hum Pathol. 2010; 41(10): 1500–1504. PubMed Abstract | Publisher Full Text\n\nDhingra S, Ayala A, Chai H, et al.: Renal myopericytoma: case report and review of literature. Arch Pathol Lab Med. 2012; 136(5): 563–566. PubMed Abstract | Publisher Full Text\n\nTerada T: Minute myopericytoma of the neck: a case report with literature review and differential diagnosis. Pathol Oncol Res. 2010; 16(4): 613–616.
PubMed Abstract | Publisher Full Text\n\nAung PP, Goldberg LJ, Mahalingam M, et al.: Cutaneous Myopericytoma: A Report of 3 Cases and Review of the Literature. Dermatopathology (Basel). 2015; 2(1): 9–14. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "19555", "date": "23 Jan 2017", "name": "Ravi Dadlani", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI congratulate Dr Munakomi et al for an interesting article.\nThe following are my comments:\nThe article is well written and highlights a rare pathological entity which may be thought of in the differential diagnosis of large scalp lesions.\nA clinical photograph outlining the lesion would be interesting.\n\nThere are no MRI images. Any large scalp tumor should have a pre-op MRI. If MRI images are available kindly upload those.\n\nEnglish language editing is recommended e.g.
instead of using the word ‘pint’ the word ‘unit of blood’ seems more appropriate.\n\nThere is no discussion on the radiological characteristics of the tumor and its differential diagnosis.\n\nIt would be interesting to have a tabulated review of literature of myopericytomas of the scalp.\n\nI would also recommend the authors provide a table of differential diagnosis of various pathologies and their immunohistochemical characterizations.\n\nAlthough the take home message appears to be a high clinical suspicion in order to prevent excess blood loss intra-operatively, the authors do not specifically mention any particular measures, if any, they took to minimize bleeding intra-operatively.\n\nA two-week follow-up is too short for any tumor, and some discussion is needed on how frequent (%) recurrence is after complete excision and what the treatment strategy for recurrences is.\n\nFinal Verdict: Paper may be accepted for indexing with the relevant changes.", "responses": [] }, { "id": "19917", "date": "07 Feb 2017", "name": "Sanela Zukić", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis case is very interesting, and describes a rare entity which will be useful in medical practice.\nThe title is appropriate for the content of the article and the abstract represents a suitable summary of the work.
The design, methods and analysis of the results from the study have been explained and are appropriate for the topic being studied.\nAlso, the conclusions are sensible, balanced and justified on the basis of the results of the study.", "responses": [] }, { "id": "19615", "date": "08 Feb 2017", "name": "Umit Eroglu", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nIt is a well written article and contains novel knowledge. It is an original article but provides minimally sufficient details for practitioners. It includes a background of the case's history and progression and provides details of any physical examination and diagnostic tests, treatment given and outcomes. It includes a discussion of the importance of the findings that also describes their relevance to future understanding of disease processes, diagnosis or treatment.", "responses": [] } ]
1
https://f1000research.com/articles/5-2905
https://f1000research.com/articles/5-2903/v1
22 Dec 16
{ "type": "Software Tool Article", "title": "Phylommand - a command line software package for phylogenetics", "authors": [ "Martin Ryberg" ], "abstract": "Phylogenetics is an intrinsic part of many analyses in evolutionary biology and ecology, and as the amount of data available for these analyses is increasing rapidly the need for automated pipelines to deal with the data also increases. Phylommand is a package of four programs to create, manipulate, and/or analyze phylogenetic trees or pairwise alignments. It is built to be easily implemented in software workflows, both directly on the command prompt, and executed using scripts. Inputs can be taken from standard input or a file, and the behavior of the programs can be changed through switches. By using standard file formats for phylogenetic analyses, such as newick, nexus, phylip, and fasta, phylommand is widely compatible with other software.", "keywords": [ "Phylogeny", "Work-flow", "Pipeline", "Supermatrix", "Supertree" ], "content": "Introduction\n\nThe improvement of high throughput sequencing methods, and the ability to produce more and/or longer reads at a cheaper price, have allowed the use of these data in new areas of research (Ellison et al., 2011; Jumpponen & Jones, 2009; Lemmon et al., 2012). More or less automated software pipelines are often an intrinsic part of the development of ways to process and analyze data for new types of research questions.\n\nPhylogenetic analyses are an integral part of many biological studies. Phylommand is a package of four programs - treebender, treeator, contree, and pairalign - with capabilities for the manipulation and analysis of phylogenetic trees. The functions include rooting, splitting, and comparing trees, calculating parsimony and likelihood scores, as well as performing parsimonious stepwise addition of taxa, nearest neighbour interchange branch swapping, and construction of neighbour joining trees.
In addition there are functions, such as calculating decisiveness (Sanderson et al., 2010), MAD scores (Smith et al., 2009), and matrix representation of trees, for use in supermatrix and supertree pipelines. Similar to the Newick utilities software suite (Junier & Zdobnov, 2010), which also acts on phylogenetic trees, the phylommand programs are designed to be used on the command line, without the overhead from a graphical interface. Phylommand also accepts inputs from file or standard input, outputs the results to standard output (i.e. usually screen if not redirected), and works without any configuration files or user input (after execution). It is thus made to work in software pipelines.\n\nBoth phylommand and Newick utilities can use the newick file format and are therefore compatible, and they complement each other as most of the functions in phylommand are not included in the Newick utilities. Even if phylommand is made for pipelines, each program works independently of any other (given appropriate input).\n\n\nMethods\n\nPhylommand is written in the C++ programming language and is primarily distributed as source code. Its core utilities only depend on standard libraries to facilitate ease of compilation and use on multiple platforms. However, in addition to standard libraries, treeator may be compiled with the option to optimize model parameters under the maximum likelihood criterion using the NLopt library (Steven G. Johnson, The NLopt nonlinear-optimization package, http://ab-initio.mit.edu/nlopt); pairalign can be compiled in a parallelized version using pthreads, which may be useful to decrease runtime when doing pairwise alignment.
In addition to the four core programs, rudisvg, a rudimentary svg viewer that depends on the X11 library and an X11 server, is included with phylommand.\n\nThe core programs of phylommand, in their basic version, have been successfully compiled and used on OS X 10.10 and Ubuntu Linux 16.04 (including Ubuntu 14.04 on Windows [anniversary edition of Windows 10]) using GNU g++ and make, and on Windows 10 using the same tools in MinGW. The svg viewer rudisvg has been successfully compiled and used on OS X 10.10 and Ubuntu 16.04 (Gnu/Linux; including Ubuntu on Windows). The behavior of the programs in phylommand is controlled by switches, and all programs have extensive help documentation that can be accessed by the switch --help. Phylogenies can be given in either newick or nexus format. Pairalign reads DNA sequences in fasta format, and treeator reads character matrices/sequences in fasta, phylip, or nexus format (as outputted from for example AliView 1.19; Larsson, 2014) or works on distance matrices. The output formats for phylogenies are newick and nexus, but treebender can also output trees as scalable vector graphics (svg). The svg output from treebender can be displayed by rudisvg. Treebender is faster at rooting trees than nw_reroot from Newick utilities (Table 1), but the difference is small compared to how much faster nw_reroot is for the same task compared with packages in interpreted languages such as BioPerl (Perl), APE (R), and ETE (python; Junier & Zdobnov, 2010).\n\n1Mean and standard deviation in seconds based on 10 separate runs on a workstation with an Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 24Gb RAM, running Ubuntu 16.04.1, with Linux kernel 4.4.0-34.\n\n\nUse cases\n\nIn the examples, alignment_file.fst can be replaced by any fasta formatted alignment file, and tree_file.tree (tree_file.trees if including more than one tree) can be replaced by any newick or nexus formatted file.
Treebender can easily be used to create an svg image that can be piped into a file:\n\ntreebender --output svg tree_file.tree > tree_file.svg\n\nTreebender can also be used to create monophyletic operational taxonomic units in a tree based on branch lengths (cf. virtual taxa sensu Öpik et al., 2009):\n\ntreebender --cluster branch_length \\ --cut_off 0.03 tree_file.tree\n\nTo view a neighbour joining tree representation of a set of DNA sequences it is possible to use:\n\npairalign -j -n -m -v alignment_file.fst | \\ treeator -n | treebender --output svg | rudisvg\n\nThe parsimony score and trees of all nearest neighbour interchange swaps from a topology can be given by:\n\ntreebender --nni all tree_file.tree \\ -f alignment_file.fst -p\n\nPairalign can be used to determine which taxa can be aligned confidently according to MAD scores:\n\npairalign --group alignment_groups \\ -v alignment_file_with_taxon_string.fst\n\nContree can be used to draw the support values (e.g. bootstrap support) from a set of trees on a given topology:\n\ncontree -d tree_file.trees -a tree_file.tree\n\nContree can also be used to get supported conflicts between trees. The output can be either text or svg formatted as html (Figure 1):\n\nPart of output from contree -c 70 --html (re-formatted so trees are put next to each other and text removed). Tips in a clade with more than 70 in support in the tree to the left that is in conflict with the tree to the right with more than 70 in support are colored green.
The tips that cause the conflict in the tree to the right are colored red.\n\ncontree -c 70 --html tree_file.trees\n\nThese and further examples, and an example of a bash script to do a search for the most parsimonious tree and a Perl script to find groups that are alignable according to MAD scores without a predefined taxonomy, are distributed with the source code.\n\n\nSummary\n\nPhylommand offers an efficient way of manipulating and analyzing phylogenetic trees without the overhead of a graphical interface or specialized command line interpreter. It can be used in both automated (through scripts) and manual work-flows. Since it is made to be compilable with minimal reliance on non-standard libraries it is possible to use it on most operating systems, including UNIX-like systems such as OS X and Linux, as well as Windows. This increases its utility for pipelines that will need to work on different platforms.\n\n\nSoftware availability\n\n\n\n1. Latest source code available at: https://github.com/mr-y/phylommand\n\n2. Archived source code at the time of publication: http://doi.org/10.5281/zenodo.200397 (Ryberg, 2016)\n\n3. License: GPL3", "appendix": "Author contributions\n\n\n\nThe author coded the software and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nI am grateful to Ding He, Anders Larsson, John Pettersson, and Marisol Sanchez-Garcia for testing phylommand and giving comments on earlier versions of this manuscript, and Anders Larsson for contributions to phylommand’s documentation.\n\n\nReferences\n\nEllison CE, Hall C, Kowbel D, et al.: Population genomics and local adaptation in wild isolates of a model microbial eukaryote. Proc Natl Acad Sci U S A. 2011; 108(7): 2831–2836.
Jumpponen A, Jones KL: Massively parallel 454 sequencing indicates hyperdiverse fungal communities in temperate Quercus macrocarpa phyllosphere. New Phytol. 2009; 184(2): 438–448.\n\nJunier T, Zdobnov EM: The Newick utilities: high-throughput phylogenetic tree processing in the UNIX shell. Bioinformatics. 2010; 26(13): 1669–1670.\n\nLarsson A: Aliview: a fast and lightweight alignment viewer and editor for large datasets. Bioinformatics. 2014; 30(22): 3276–3278.\n\nLemmon AR, Emme SA, Lemmon EM: Anchored hybrid enrichment for massively high-throughput phylogenomics. Syst Biol. 2012; 61(5): 727–744.\n\nÖpik M, Metsis M, Daniell TJ, et al.: Large-scale parallel 454 sequencing reveals host ecological group specificity of arbuscular mycorrhizal fungi in a boreonemoral forest. New Phytol. 2009; 184(2): 424–437.\n\nRyberg M: Phylommand - a command line software package for phylogenetics. Zenodo. 2016.\n\nSanderson MJ, McMahon MM, Steel M: Phylogenomics with incomplete taxon coverage: the limits to inference. BMC Evol Biol. 2010; 10: 155.\n\nSmith SA, Beaulieu JM, Donoghue MJ: Mega-phylogeny approach for comparative biology: an alternative to supertree and supermatrix approaches. BMC Evol Biol. 2009; 9: 37." }
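The command-line examples above are intended to compose in scripted workflows. As a minimal illustrative sketch (not part of phylommand; `build_pipeline` and `run_pipeline` are hypothetical helpers, and actually running the pipeline assumes the phylommand binaries are on PATH), such a pipeline could be assembled from Python:

```python
import shlex
import subprocess

def build_pipeline(*commands):
    """Compose argument lists into a single shell pipeline string,
    quoting each argument so filenames with spaces stay intact."""
    return " | ".join(" ".join(shlex.quote(arg) for arg in cmd) for cmd in commands)

# The neighbour-joining example from the text: alignment -> tree -> SVG
pipeline = build_pipeline(
    ["pairalign", "-j", "-n", "-m", "-v", "alignment_file.fst"],
    ["treeator", "-n"],
    ["treebender", "--output", "svg"],
)

def run_pipeline(pipeline_str):
    """Run the composed pipeline through the shell and return its stdout."""
    result = subprocess.run(pipeline_str, shell=True, capture_output=True, text=True)
    return result.stdout
```

The same helper covers the other examples (for instance appending a `rudisvg` stage to view the result), keeping the pipe layout of the text in one place.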
[ { "id": "19805", "date": "06 Feb 2017", "name": "Thomas S. B. Schmidt", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe present article is a short application note on an open-source phylogenetic software tool, phylommand. The article is very concise and can serve well as a reference for a tool under ongoing development. However, I believe that in its present form, it does not contain all necessary information to (i) fully describe the presented software and therefore to (ii) allow an informed evaluation of the tool.\nFrom the article itself, it remains relatively unclear what the individual programs in phylommand actually do. The pointer to the GitHub page, and the documentation included therein, are quite helpful. As a standalone, however, the article is quite generic and does not provide sufficient answers to the basic questions which (I think) should be addressed in a tool description:\n(i) what does the tool do? (ii) how does it do it (implementation details)? (iii) why is this important? (iv) how does it compare to other solutions?\nWith regard to the first point, the second paragraph of the article does contain a list of things that phylommand can do, but it does not provide context. 
I believe that even including a freeze of the GitHub wiki and tool documentation as a Supplement would help the reader to make more sense of the software's functionalities.\nImplementation details are provided; although they are quite generic, the necessary dependencies for compilation and additional functionalities (parallelization and an SVG viewer) are pointed out.\nThe article does not really provide information on why phylommand is an important addition to the list of existing tools, or how it fills an (important) gap in the present toolbox, other than stating that it can be integrated into software pipelines. In particular, it remains unclear why the implementation of exactly these functions (out of the wealth of possible operations on phylogenies) provides an added value over existing solutions. Clearly, phylommand is not a full-fledged standalone phylogenetic software suite, but it provides (efficient) solutions to specific problems; this could be carved out more clearly in the text.\nRelating to this, the article in general does not compare phylommand to existing tools, neither conceptually, nor in terms of performance. Newick utilities are mentioned several times, but there is such a wealth of phylogenetic software available, and it remains unclear how phylommand positions itself in that list. Perl-based, R-based and Python-based solutions are mentioned in a half-sentence, but from the article alone, it remains unclear what the \"niche\" of phylommand among the many existing tools can be. Likewise, I believe that a full assessment of the tool's performance requires more benchmarks than a speed test of one individual function (tree rooting) relative to one competitor. For the core implemented functions, it would be great to get at least some speed and memory benchmarks (relative to a limited set of competing tools).\nI was able to download and run phylommand on my OS X system. 
In general, I commend the author for developing and providing a potentially very useful tool for the community. However, as detailed above, I believe that the present article needs more information to act as a stand-alone reference.", "responses": [] }, { "id": "21590", "date": "19 Apr 2017", "name": "Siavash Mirarab", "expertise": [ "Reviewer Expertise Phylogenetics" ], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe article describes the author’s open-source toolkit, “Phylommand,” which was designed to perform basic operations on phylogenetic trees from the command line. Phylommand was designed with an emphasis on portability/compatibility (by using standard libraries that enable easy cross-platform compilation), efficiency (by writing in C++ as opposed to higher-level scripting languages), and usability (by providing clean and extensive documentation, both online as well as in the tools themselves).\n\nThe article provides a nice summary of what the tools in Phylommand do. However, the reader is referred to the manuals to learn about the exact features implemented in the toolkit; in the paper, only general descriptions of the functions included in each tool are included. Moreover, some of the operations (e.g. mid-point rooting) require non-trivial algorithms, and how the tool approaches these steps is unclear.\n\nAppropriately, the authors make multiple references to their main competitor toolkit, namely Newick Utilities. 
There is one study comparing the two tools in terms of runtime, but this comparison is limited to a single task (rooting trees). Further, even though Newick Utilities is (to our knowledge) the only command line alternative, there are solutions to many of these problems outside of Newick Utilities in scripting languages (e.g. Dendropy for Python and APE in R). It can be argued that a command line tool has a different type of utility and is perhaps more usable. Nevertheless, since scripting languages are also relatively easy to use, comparisons to these platforms in terms of usability and speed would be informative. At a minimum, a mention would be needed.\n\nThe code is in C++ using standard libraries, making it simple to compile and run. We were able to compile it out-of-the-box with the “make” command on a Mac OS X laptop as well as on a CentOS and an Ubuntu server.\n\nRegarding the algorithmic aspects of the toolkit, we have concerns about efficiency, mainly driven by our examination of the code related to the midpoint rooting (note that other potentially non-trivial functions may also have issues, but we only explored midpoint rooting). As mentioned before, the algorithmic details are not provided in the paper. After digging a bit into the code, we were able to understand the midpoint rooting function (tree.cpp lines 528 through 577), and it appears a heuristic is performed: the algorithm starts with the initial rooting of the tree, and it then iteratively shifts the root to the left or right child to minimize the imbalance between the maximum distance to a left descendant leaf vs. a right descendant leaf. Once the imbalance does not improve beyond a hardcoded threshold (average of the maximum left tip distance and the maximum right tip distance, divided by 10,000), the iterative search terminates and the left and right branches of the “optimal” root are adjusted.\nThe main problem is the following. 
This “heuristic” solution is designed for a problem that has a trivial exact linear-time solution. Thus, the heuristic is unnecessary. With one bottom-up traversal and one top-down traversal of the tree, one can find the exact correct midpoint root in linear time. A linear time implementation of the midpoint rooting algorithm can be found here: https://github.com/uym2/MinVar-Rooting (runs in under a minute for 200,000 leaves).\nThe midpoint rooting code raises some questions. First, why is the number 10,000 hardcoded here? There is no restriction in the unit of distance used by trees that can be passed to this tool, so what if the input tree used a unit that resulted in huge integers for branch lengths? It seems as though hard-coding 10,000 could potentially cause issues. Second, this heuristic solution is quadratic-time in the worst case, but as mentioned earlier, an exact solution can be trivially obtained in linear time. Third, when given a tree rooted at a leaf, the algorithm fails (and the tool segfaults); why? This is a valid edge case. Below is the tree we tested: ((B:2000000000,(C:3000000000,D:4000000000)E:5000000000)F:1000000000)A;\n\nWe did not investigate any other functions but given our reservations about the correctness of the midpoint rooting function, we have reservations about the correctness of other non-trivial functions performed by Phylommand. Explanations of the algorithms behind the non-trivial functions seem essential to the article.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly", "responses": [] } ]
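The exact linear-time midpoint rooting that the review alludes to is straightforward to sketch. The following is an illustrative Python implementation (not phylommand's code): the tree is an unrooted adjacency map from node to (neighbour, branch length) pairs, two sweeps find the endpoints of the longest leaf-to-leaf path, and a walk back along the parent map locates the exact midpoint:

```python
def farthest(adj, start):
    """Return (farthest node, its distance, parent map) from `start`,
    via a single traversal of the tree."""
    dist, parent, stack = {start: 0.0}, {start: None}, [start]
    while stack:
        v = stack.pop()
        for w, length in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + length
                parent[w] = v
                stack.append(w)
    far = max(dist, key=dist.get)
    return far, dist[far], parent

def midpoint(adj):
    """Locate the exact midpoint root: two sweeps find the endpoints (a, b)
    of the longest path; the root sits half-way along it. Returns
    (node, neighbour, offset from node towards neighbour)."""
    a, _, _ = farthest(adj, next(iter(adj)))        # first sweep: one endpoint
    b, diameter, parent = farthest(adj, a)          # second sweep: other endpoint
    half, node, acc = diameter / 2.0, b, 0.0
    while node != a:
        up = parent[node]
        length = next(l for w, l in adj[node] if w == up)
        if acc + length >= half:
            return node, up, half - acc             # midpoint lies on edge node-up
        acc += length
        node = up
    raise ValueError("input is not a tree with positive diameter")
```

Because the representation is inherently unrooted, degenerate inputs such as the reviewers' tree rooted at a leaf pose no special case here.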
1
https://f1000research.com/articles/5-2903
https://f1000research.com/articles/5-2644/v1
07 Nov 16
{ "type": "Data Note", "title": "Whole genome resequencing of a laboratory-adapted Drosophila melanogaster population sample", "authors": [ "William P. Gilks", "Tanya M. Pennell", "Ilona Flis", "Matthew T. Webster", "Edward H. Morrow" ], "abstract": "As part of a study into the molecular genetics of sexually dimorphic complex traits, we used next-generation sequencing to obtain data on genomic variation in an outbred laboratory-adapted fruit fly (Drosophila melanogaster) population. We successfully resequenced the whole genome of 220 hemiclonal females that were heterozygous for the same Berkeley reference line genome (BDGP6/dm6), and a unique haplotype from the outbred base population (LHM). The use of a static and known genetic background enabled us to obtain sequences from whole genome phased haplotypes. We used a BWA-Picard-GATK pipeline for mapping sequence reads to the dm6 reference genome assembly, at a median depth of coverage of 31X, and have made the resulting data publicly-available in the NCBI Short Read Archive (Accession number SRP058502). We used Haplotype Caller to discover and genotype 1,726,931 small genomic variants (SNPs and indels, <200bp). Additionally we detected and genotyped 167 large structural variants (1-100Kb in size) using GenomeStrip/2.0. Sequence and genotype data are publicly-available at the corresponding NCBI databases: Short Read Archive, dbSNP and dbVar (BioProject PRJNA282591). 
We have also released the unfiltered genotype data, and the code and logs for data processing and summary statistics (https://zenodo.org/communities/sussex_drosophila_sequencing/).", "keywords": [ "drosophila", "sequencing", "data", "dimorphic traits" ], "content": "Introduction\n\nAs part of a study on the molecular genetics of sexually dimorphic complex traits, we used hemiclonal analysis in conjunction with next-generation sequencing to characterise molecular genetic variation across the genome, from an outbred laboratory-adapted population of Drosophila melanogaster, LHM1,2. The hemiclone experimental design allows the repeated phenotyping of multiple individuals, each with the same unrecombined haplotype on a different random genetic background. This method has been used to investigate standing genetic variation and intersexual genetic correlations for quantitative traits1 and gene expression3, but it has not yet been used to obtain genomic data.\n\nThe 220 hemiclone females that were sequenced in the present study have a maternal haplotype, from the dm6 reference assembly strain (BDGP6+ISO1 mito/dm6, Bloomington Drosophila Stock Center no. 2057)4,5, and have a different paternal genome each, sampled using cytogenetic cloning from the LHM base population. All non-reference genotypes in the sequenced LHM hemiclones were expected to be heterozygous and in-phase, except in rare instances where the in-house dm6 reference strain also had the same non-reference allele.\n\nPrevious studies indicate that the limits for DNA quantity in next-generation sequencing are 50–500ng6. We sequenced individual D. melanogaster, rather than pools of clones, because more biological information can be obtained, and because modern transposon-based library preparation allows accurate sequencing at low concentrations of DNA. D. 
melanogaster is a small insect, yielding little DNA (∼1μg), although this problem is off-set by its reduced proportion of repetitive intergenic sequence and small genome size relative to other insects (170Mb versus ∼500Mb)6.\n\nWe mapped reads to the D. melanogaster dm6 reference assembly using a BWA-Picard-GATK pipeline, and called nucleotide variants using both HaplotypeCaller and Genomestrip, the latter of which detects copy-number variation up to 1Mb in length. We have made the mapped sequencing data, and genotype data publicly-available on NCBI, and additionally have made the metadata, analysis code and logs publicly-available on Zenodo. This is the first report of a study which uses methods for detecting SNPs, indels and structural variants (deletions and duplications >1Kb in length), genome-wide in next-generation sequencing data, and the first report of whole genome resequencing in hemiclonal individuals.\n\n\nMaterials and methods\n\nThe base population (LHM) was originally established from a set of 400 inseminated females, trapped by Larry Harshman in a citrus orchard near Escalon, California in 19912. It was initially kept at a large size (more than 1,800 reproducing adults) in the lab of William Rice (University of California, Santa Barbara, USA). In 1995 (approximately 100 generations since establishment) the rearing protocol was changed to include non-overlapping generations and a moderate rearing density with 16 adult pairs per vial (56 vials in total) during 2 days of adult competition, and 150–200 larvae during the larval competition stage2. In 2005, a copy of the LHM population sample was transferred to Uppsala University, Sweden (approximately 370 generations since establishment), and in 2012, to the University of Sussex (UK), when the current set of 223 haplotypes was sampled. 
At the point of sampling we estimate that the population had undergone 545 generations under laboratory conditions, 445 of which had been using the same rearing protocol.\n\nHemiclonal lines were established by mating groups of five clone-generator females (C(1)DX,y,f ; T(2;3) rdgC st in ri pP bwD) with 230 individual males sampled from the LHM base population (see 1). A single male from each cross was then mated again to a group of five clone-generator females in order to amplify the number of individuals harbouring the sampled haplotype. Seven lines failed to become established at this point. The remaining 223 lines were maintained in groups of up to sixteen stock hemiclonal males in two vials that were transferred to fresh vials each week. Stock hemiclonal males were replenished every six weeks by mating with groups of clone-generator females. A stock of reference genome flies (Bloomington Drosophila Stock Center no. 2057) was established and maintained initially using five rounds of sib-sib matings before expansion. 223 virgin reference genome females were then collected and mated to a single male from each of 223 hemiclonal lines. Female offspring from this cross therefore have one copy of the reference genome and one copy of the hemiclonal haplotype. Groups of these hemiclonal females were collected as virgins, placed in 99% ethanol and stored at -20°C prior to DNA extraction.\n\nOne virgin female per hemiclonal line was homogenised with a microtube pestle, followed by 30-minute mild-shaking incubation in proteinase K. DNA was purified using the DNeasy Blood and Tissue Kit (Qiagen, Valencia, CA), according to manufacturer’s instructions. Volumes were scaled-down according to mass of input material. Barrier pipette tips were used throughout, in order to minimise cross-contamination of DNA. 
Template assessment using the Qubit BR assay (Thermo Fisher, NY, USA) indicated double-stranded DNA, 10.4Kb in length at concentrations of 2–4ng/μl (total quantity 50–100ng).\n\nSequencing was performed under contract by Exeter Sequencing service, University of Exeter, UK. The sonication protocol for shearing of the DNA was optimised for low concentrations to generate fragments 200–500bp in length. Libraries were prepared and indexed using the Nextera Library Prep Kit (Illumina, San Diego, USA). All samples were sequenced on a HiSeq 2500 (Illumina), with five individuals per lane. We also sequenced DNA from two individuals from the in-house reference line (Bloomington Drosophila Stock Centre no. 2057). One was prepared as the hemiclones, using the Illumina Nextera library (sample RGil), and the other using an older, Illumina Nextflex method (sample RGfi). The median number of read pairs across all samples was 29.23×106 (IQR 14.07×106). Quality metrics for the sequencing data were generated with FastQC v0.10.0 by Exeter Biosciences, and used to determine whether results were suitable for further analyses. For twelve samples with less than 8×106 reads, sequencing was repeated successfully (H006, H041, H061, H084, H086, H087, H092, H098, H105), with a further three samples omitted entirely (H015, H016, H136), leaving 220 hemiclonal samples in total. As shown in Figure 1A & B, the read quality score and quality-per-base for the samples taken forward for genotyping in this study were well within acceptable standards, and similar across all samples.\n\nRaw data (fastq files) were stored and processed in the Linux Sun Grid Engine in the High-Performance Computing facility, University of Sussex. Adaptor sequences (Illumina Nextera N501-N508 and N701-N712), poor quality reads (Phred score <7) and short reads were removed using Fastq-mcf (ea-utils v.1.1.2). 
Settings were: log-adapter minimum-length-match: 2.2, occurrence threshold before adapter clipping: 0.25, maximum adapter difference: 10%, minimum remaining length: 19, skew percentage-less-than causing cycle removal: 2, bad reads causing cycle removal: 20%, quality threshold causing base removal: 10, window-size for quality trimming: 1, number of reads to use for sub-sampling: 3×105.\n\nCleaned sequence reads were mapped to the D. melanogaster genome assembly, release 6.0 (Assembly Accession GCA_000001215.45) using Burrows-Wheeler Aligner mem (version 0.7.7-r441)7, with a mapping quality score threshold of 20. Fine mapping was performed with both Stampy v1.0.248 and the Genome Analysis Tool-Kit (GATK) v3.2.29 (following10). Removal of duplicate reads, indexing and sorting was performed with Picard-Tools v1.77 and SamTools v1.0. The median depth of coverage across all samples used for genotyping was 31X (IQR 14, see Figure 1C). As shown in Figure 1D, the mean nucleotide mis-match rate to the dm6 reference assembly for the LHM hemiclones was 3.27×10-3 per PCR cycle (IQR 0.2×10-3), contrasting with the two reference line samples for which the mis-match rate was 0.89 – 1.10×10-3 per cycle. We observed spikes of nucleotide mis-matches in some PCR cycles for some samples, which are likely to be errors rather than true sequence variation.\n\nA: Sequence read quality for each sample sequenced. Y-axis scale is logarithmic. B: Quality of sequences by nucleotide base position for each sample. C: Read depth of coverage distribution across each sample. Colouring corresponds to the order in which the samples were originally sequenced. D: Mis-matches to the dm6 reference genome assembly, by PCR cycle-number. Colouring is by sample as in plot C. The two red lines with visibly-lower mismatch rates than the others correspond to the two in-house BDGP/dm6 reference lines that were sequenced. 
Data and code for this figure is located at https://doi.org/10.5281/zenodo.159282.\n\nSingle-nucleotide polymorphisms (SNPs) and insertion/deletions (indels) <200bp in length, were detected and genotyped relative to the BDGP+ISO1/dm6 assembly, on chromosomes 2,3,4,X, and the mitochondrial genome using Haplotype Caller (GATK v3.4-0)11. Individual bam files were genotyped, omitting reads with a mapping quality under 20, stand call and emit confidence thresholds of 31, then combined and genotyped again. 143,726,002 bases of genomic sequence were analysed from which 1,996,556 variant loci were identified consisting of 1,581,341 SNPs, 196,582 deletions, and 218,633 insertions. Functional annotation was added using SNPeff v4.112.\n\nWe used hard-filtering to remove variants generated by error, because the alternative ’variant recalibration’ requires prior information on variant positions from a similar population or parents. Quality filtering thresholds were decided following inspection of the various sequencing metrics associated with each variant locus, and by software developers’ recommendations11. The filtering thresholds were: Quality-by-depth >2, strand bias (-log10.pFisher) <50, mapping quality >58, mapping quality rank sum >-7.0, read position rank sum >-5.0, combined read depth <15000, and call rate >90%. This filtering removed 167,319 variants (8.3%), leaving 1,829,237. Summary values for the variant quality metrics are shown in Table 1. Distributions of quality metrics for Haplotype Caller variants are shown in Supplementary Figure 2. The density of sequence variants, measured as the median for windows of 10Kb in length across the genome, was 75 for biallelic SNPs, 1 for multi-allelic SNPs, 6 for biallelic indels, and 3 for multi-allelic indels (see Figure 2A). Mean separation between variants of any type or allele frequency was 78bp. 
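The hard-filtering step described above reduces to a conjunction of threshold tests on per-variant annotations. As an illustrative re-statement (not the study's actual code, which is archived on Zenodo), using the standard GATK INFO keys for the metrics named in the text (QD, FS, MQ, MQRankSum, ReadPosRankSum, DP):

```python
def passes_hard_filter(info, call_rate):
    """Keep a variant only if it meets the thresholds stated in the text:
    quality-by-depth > 2, strand bias (FS) < 50, mapping quality > 58,
    rank-sum tests above -7.0 / -5.0, combined depth < 15000, call rate > 90%."""
    return (info["QD"] > 2.0 and
            info["FS"] < 50.0 and
            info["MQ"] > 58.0 and
            info["MQRankSum"] > -7.0 and
            info["ReadPosRankSum"] > -5.0 and
            info["DP"] < 15000 and
            call_rate > 0.90)
```

A predicate like this would be applied per record of the combined VCF, discarding variants (rather than genotypes) that fail any single test.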
As shown in Figure 2B, the allele frequency distribution for bi-allelic SNPs and indels was similar, and broadly within expectations for an out-bred diploid population sample. The two in-house reference line individuals had 515 homozygous and 3171 heterozygous mutations from the reference assembly. The median genotype counts for the 220 LHM hemiclone individuals were 585 homozygous, 728,214 heterozygous and 4963 no-call (IQR 400, 36707 and 7876). Genotype counts for each individual are shown in Figure 2C.\n\nValues show the total number of variants, median (and IQR) for each metric. Data generated from vcf file using GATK VariantsToTable, on the quality-filtered data. Code and data used to generate this table located at https://doi.org/10.5281/zenodo.159282.\n\nA: Density of common variants across the genome (MAF>0.05). Variants from the in-house reference line are included but account for fewer than 3,686 of the 1,825,917 common variants plotted (<0.2%). B: Allele frequency distribution by variant type. *MAF values were calculated from the count of heterozygous calls, and so for multi-allelic variants, the MAF is derived from the combined count of both alternate alleles. C: Genotype counts per individual genotyped. Data generated using GATK/3.4 VariantEvaluation function. Data and code for this figure is located at https://doi.org/10.5281/zenodo.159282.\n\nFor data submission to dbSNP, we removed 44,644 indels that were multi-allelic or greater than 50bp in length, and a further 57,662 variants that had null alternate alleles (likely due to being situated within a deletion). The genotype data submitted to dbSNP consists of 1,726,931 quality-filtered, functionally-annotated variant records (1,423,039 SNPs and 303,892 short, biallelic insertion and deletion variants) corresponding to 383,378,682 individual genotype calls.\n\nLarge genomic variants – deletions and duplications, between 1Kb and 100Kb in length – were detected and genotyped using GenomeStrip v2.013. 
One of the reference strain individuals (sample RGfi) was omitted from this analysis because a different sequencing library preparation method was used from the other samples (see above). We included the following settings (according to developers’ guidelines): Sex-chromosome and k-mer masking when estimating sequencing depth, computation of GC-profiles and read counts, and reduced insert size distributions. Large variant discovery and genotyping was performed only on chromosomes 2, 3, 4 and X, omitting the mitochondrial genome and unmapped scaffolds.\n\nWe used the Genomestrip CNV Discovery pipeline with the settings: minimum refined length 500, tiling window size 1000, tiling window overlap 500, maximum reference gap length 1000, boundary precision 100, and genotyped the results with the GenerateHaploidGenotypes R script (genotype likelihood threshold 0.001, R version 3.0.2). Following visualisation of the genotype results and comparison with the bam sequence alignment files using the Integrated Genomics Viewer (IGV) v2.3.7214, we excluded telomeric and centromeric regions where the sequencing coverage was fragmented, and six regions of multi-allelic gains of copy-number with dispersed break-points, previously reported to undergo mosaic in vivo amplification prior to oviposition15 (see Supplementary Table 1 for genomic positions, and Supplementary Figure 3 for visualisation of in vivo amplification in a sequence alignment file). We excluded 6 samples (H082, H083, H090, H097, H098, H153) for which 80–90% of the genome was reported by Genomestrip to contain structural variation, which we regarded as error. Most of these samples were grouped by the order in which they were processed for DNA extraction and sequencing, so this may have been caused partly by a batch-effect leading to differences in read pair separation, depth-of-coverage, and response to normal fluctuations in GC-content. 
Following removal of these samples, there were 2897 CNVs (1687 deletions, 877 duplications, and 333 of the ‘mixed’ type), ranging in size from 1000bp to 217,707bp. We observed eight regions, for which Genomestrip identified multiple adjacent CNVs in single individuals, but which are likely single CNVs, 100Kb to 1.3Mb in length (Supplementary Table 2).\n\nEach row corresponds to an individual sequenced (in order originally sequenced from top to bottom, with the reference line at the bottom). Image generated using R/3.3.1 (package ggplot v2.1.0) with data generated by GATK VariantsToTable with individual genotypes as copy-numbers. Data and code for this figure is located at https://doi.org/10.5281/zenodo.159282.\n\nQuality-filtering for structural variants detected by Genomestrip analysis of whole-genome resequencing data is not thoroughly established. Using the Integrated Genomics Viewer14, we visually inspected in the bam read alignment files those reported structural variants which were most likely to be artefacts. Specifically these were variants with: i) Extreme values for quality-score, GC-content or cluster separation, ii) Any homozygous non-reference genotypes (not expected with our breeding design), iii) Type ‘mixed’. Following this, we used the following criteria for quality filtering: Quality score >15, cluster separation <17, GC-fraction >0.33, no mixed types (deletions and duplications only), no homozygous non-reference genotypes, and heterozygous genotype count <200. Summaries of the quality metrics for quality-filtered data are shown in Table 2 and Supplementary Figure 2. We applied an upper limit to the cluster separation to remove groups of outliers in the upper end of the distribution, although this may have excluded many true, low-frequency variants. However, data on rare variants are not directly useful for our further investigations.\n\nValues show the total number of variants, median (and IQR) for each metric. 
Data generated from vcf file using GATK VariantsToTable, on the quality-filtered data. *No CNVs in the quality-filtered samples had a ‘no-call’ or homozygous non-reference genotype.\n\nAfter filtering, 167 CNVs remained (78 deletions and 89 duplications, size range 1Kb-26.6Kb). The positions and genotypes of these CNVs for each individual are shown in Figure 3. The genotype data for quality-filtered CNVs were combined with the data from 2252 indels >50bp from the Haplotype Caller pipeline, and a total of 2419 variants were uploaded to the public database on structural variation, NCBI dbVar. Although we have used methods for detecting SNPs, indels and CNVs, variants between 200bp and 1Kb are not reported by either HaplotypeCaller or Genomestrip. Additionally, sequence inversions are not detected by these methods and the upper limits to CNV detection using Genomestrip, based on the parameters and results of this study, are 100Kb–1Mb.\n\n\nDataset validation\n\nInitial validation of our methods can be seen by the lack of variants in the two reference line individuals compared with the LHM hemiclones (3,686 versus a median of 728,799 per sample). For a more thorough test of the genotyping and hemiclone method reproducibility, we sequenced an additional hemiclone individual from three of the LHM lines, and mapped the reads to the reference genome assembly as before. For HaplotypeCaller, we generated ‘g.vcf’ files for each sample, and then performed genotyping and quality-filtering as described above, except that the original three samples were replaced with the replication test samples. Similarly, for Genomestrip, we performed structural variant discovery and genotyping on all of the same samples as before, replacing three original samples with the replication test samples. We then used the GATK Genotype Concordance function to generate counts of genotype differences between the three pairs of samples. Overall results are presented in Table 3. 
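The concordance comparison between original and replicate hemiclones can be illustrated with a simplified stand-in for GATK's Genotype Concordance (not the pipeline's actual code): overall concordance is the fraction of sites called in both samples on which the two genotypes agree:

```python
def overall_concordance(calls_a, calls_b):
    """Fraction of variant sites, called in both samples, with identical
    genotypes. `calls_*` map a site key (e.g. "2L:12345") to a genotype
    string such as "0/1"; no-call sites should simply be absent."""
    shared = set(calls_a) & set(calls_b)
    if not shared:
        return 0.0
    matches = sum(calls_a[site] == calls_b[site] for site in shared)
    return matches / len(shared)
```

On real data the genotype maps would be read from the paired VCFs; sites called in only one sample contribute to drop-out rather than discordance under this definition.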
Genotype reproducibility for quality-filtered biallelic SNPs was 98.5–99.5%, going down to 89.1–93.2% for filtered multi-allelic indels. Reproducibility of structural variant genotype calls was 95.6–100.0%, although we noted that for one individual (H119) filtering actually reduced the reproducibility rate from 99.7% to 95.6%. Full code, logs and numerical results can be found at http://doi.org/10.5281/zenodo.160539.\n\n*Presented values are the overall genotype concordance, as generated using GATK/3.4 Genotype Concordance function. Code, logs and output data are available at http://doi.org/10.5281/zenodo.160539.\n\nAlthough these results indicate that our genotype accuracy is very good, there are several caveats to consider. In the quality-filtered small-variant data, seven samples (H034, H035, H040, H038, H039, H188, H174) had prominently higher genotype drop-out rates than the others (of 2–7%), as well as a higher proportion of homozygous non-reference genotypes (2–4%; see Figure 2C). Additionally, two samples had prominently more heterozygous variants (H072: 885,551 and H093: 955,148 versus the other LHM hemiclones: mean 710,934).\n\nAlthough the genotype replication rate for the structural variants was also very high, we cannot exclude the possibility that, due to incomplete masking of hard-to-sequence regions of the reference assembly, variants which are artefacts reported in the original genotype data, may also be present in the replication genotype data.\n\n\nData availability\n\nAll publicly-available records are for 220 LHM hemiclone individuals and 2 in-house reference line individuals, with the exception of the large-variant data for which one in-house reference line sample and six LHM hemiclones were omitted. The NCBI BioProject identifier is PRJNA282591. Code, logs and quality control data for each dataset, and for generating the figures and tables in this manuscript are publicly-available at https://zenodo.org/communities/sussex_drosophila_sequencing/. 
Use of the files uploaded to Zenodo is under a Creative Commons 4.0 license.\n\nRaw fastq sequence reads, and bam alignment files for D. melanogaster, are publicly-available at the NCBI Sequence Read Archive, accession number SRP058502 (https://www.ncbi.nlm.nih.gov/sra/?term=SRP058502). The code for read-mapping, alongside the run logs and quality-control data, is available at https://doi.org/10.5281/zenodo.159251. Additionally, the sequence alignment files for the corresponding Wolbachia have accession number SRP091004 (https://www.ncbi.nlm.nih.gov/sra/?term=SRP091004), with further supporting files at https://doi.org/10.5281/zenodo.159784.\n\nRecords of quality-filtered sequence variants identified by GATK HaplotypeCaller in the LHM hemiclones, and in the in-house reference line, are available from NCBI dbSNP, https://www.ncbi.nlm.nih.gov/projects/SNP/snp_viewBatch.cgi?sbid=1062461, handle: MORROW_EBE_SUSSEX. In compliance with NCBI dbSNP criteria, variants >50bp in length, multi-allelic indels, and variants with a null alternate allele are excluded. More extensive genotype data (unfiltered, quality-filtered, and formatted for NCBI dbSNP) are available at https://doi.org/10.5281/zenodo.159272. Also included is the code used for variant discovery and genotyping, quality-filtering and formatting, alongside run logs and quality-control data. Further filtering of this dataset may be necessary to remove localised areas of artefact SNPs in single samples.\n\nRecords of quality-filtered variants detected by GenomeStrip, and variants >50bp detected by HaplotypeCaller, are publicly-available at NCBI dbVar, accession number nstd134, http://www.ncbi.nlm.nih.gov/dbVar/nstd134. 
Unfiltered and filtered genotype data, code for CNV discovery and genotyping using Genomestrip/2.0, run logs, and summary data are publicly-available at https://doi.org/10.5281/zenodo.159472.\n\nRun code and logs for performing the genotyping using HaplotypeCaller and Genomestrip when three samples are replaced by hemiclones from the same line, code for comparing the genotype calls between pairs of hemiclones, and results tables are located at https://doi.org/10.5281/zenodo.160539.\n\nInput data, code and logs for generating the figures and tables used in this manuscript are located at https://doi.org/10.5281/zenodo.159282. Code and logs for the generation of the input data are provided in the data releases pertaining to each process.", "appendix": "Author contributions\n\n\n\nEM conceived and supervised the experiment. EM, TP, IF, MW and WG designed the experiment. TP and IF established and maintained the lines, and carried out the DNA extractions. WG analysed the sequencing and genotype data. WG and MW developed the read-mapping and variant-calling procedures. WG and EM wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding was provided to EM by a Royal Society University Research Fellowship, the Swedish Research Council (No. 2011-3701), and by the European Research Council (No. 280632).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nSequencing was performed under contract by Exeter University, DNA Sequencing Service (UK), who also provided analysis advice, http://www.exeter.ac.uk/business/facilities/sequencing/. Crucial computational support was provided by Jeremy Maris at the Centre for High-Performance Computing, University of Sussex, http://www.sussex.ac.uk/its/services/research/highperformance. 
Bob Handsaker (Harvard Medical School, USA) provided analysis advice for use of Genomestrip for structural variant detection.\n\n\nSupplementary material\n\nSupplementary information for: \"Whole genome resequencing of a laboratory-adapted Drosophila melanogaster population\".\n\n\nReferences\n\nAbbott JK, Morrow EH: Obtaining snapshots of genetic variation using hemiclonal analysis. Trends Ecol Evol. 2011; 26(7): 359–368.\n\nRice WR, Linder JE, Friberg U, et al.: Inter-locus antagonistic coevolution as an engine of speciation: assessment with hemiclonal analysis. Proc Natl Acad Sci U S A. 2005; 102(Suppl 1): 6527–6534.\n\nInnocenti P, Morrow EH: The sexually antagonistic genes of Drosophila melanogaster. PLoS Biol. 2010; 8(3): e1000335.\n\nAdams MD, Celniker SE, Holt RA, et al.: The genome sequence of Drosophila melanogaster. Science. 2000; 287(5461): 2185–2195.\n\nHoskins RA, Carlson JW, Wan KH, et al.: The Release 6 reference sequence of the Drosophila melanogaster genome. Genome Res. 2015; 25(3): 445–458.\n\nRichards S, Murali SC: Best Practices in Insect Genome Sequencing: What Works and What Doesn’t. Curr Opin Insect Sci. 2015; 7: 1–7.\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–2079.\n\nLunter G, Goodson M: Stampy: a statistical algorithm for sensitive and fast mapping of Illumina sequence reads. Genome Res. 2011; 21(6): 936–939. 
DePristo MA, Banks E, Poplin R, et al.: A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat Genet. 2011; 43(5): 491–498.\n\nLack JB, Cardeno CM, Crepeau MW, et al.: The Drosophila genome nexus: a population genomic resource of 623 Drosophila melanogaster genomes, including 197 from a single ancestral range population. Genetics. 2015; 199(4): 1229–1241.\n\nVan der Auwera GA, Carneiro MO, Hartl C, et al.: From FastQ data to high confidence variant calls: the Genome Analysis Toolkit best practices pipeline. Curr Protoc Bioinformatics. 2013; 11(1110): 11.10.1–11.10.33.\n\nCingolani P, Platts A, Wang le L, et al.: A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly (Austin). 2012; 6(2): 80–92.\n\nHandsaker RE, Van Doren V, Berman JR, et al.: Large multiallelic copy number variations in humans. Nat Genet. 2015; 47(3): 296–303.\n\nRobinson JT, Thorvaldsdóttir H, Winckler W, et al.: Integrative genomics viewer. Nat Biotechnol. 2011; 29(1): 24–26.\n\nSpradling AC, Mahowald AP: Amplification of genes for chorion proteins during oogenesis in Drosophila melanogaster. Proc Natl Acad Sci U S A. 1980; 77(2): 1096–1100.\n\nGilks W: Read-mapping for next-generation sequencing data (Wolbachia) [Data set]. Zenodo. 2016.\n\nGilks W: SNP and indel discovery and genotyping in next-generation sequencing data [Data set]. Zenodo. 2016. 
Data Source\n\nGilks W: Genotype reproducibility testing in next-generation sequencing data [Data set]. Zenodo. 2016. Data Source\n\nGilks W: Graphing and tabulating next-generation sequencing and genotyping data [Data set]. Zenodo. 2016. Data Source" }
[ { "id": "17453", "date": "08 Nov 2016", "name": "Stephen Richards", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAs a data note I think this is an excellent, very detailed and comprehensive description of a dataset. I can find all the data in the public databases as described.\nI have only the most minor of quibbles:\nThe breeding of the hemiclonal lines always confuses me, and I think it would help the reader if there were a figure describing this with different colored chromosomes showing what is happening as you go through the crosses.\n\nIf I wanted a vcf file (or ideally a gvcf) for the project, is there one available for download?\n\nMaybe stick the data in fly-var? http://www.iipl.fudan.edu.cn/FlyVar/", "responses": [ { "c_id": "2365", "date": "20 Dec 2016", "name": "William Gilks", "role": "Author Response", "response": "Dear Dr Richards, We thank you for reviewing our manuscript, and consider that your suggestions improve the quality of the manuscript, and the public availability of data. Following your suggestions, we have made the following changes: As requested, we have included a figure describing the breeding design of the hemiclonal lines, which hopefully clarifies the transmission of the different chromosomes in each cross (Figure 1).   You suggested releasing a ‘gvcf’ file for public use, which contains a record of the genotype information at all sites in the D. melanogaster genome in the LHM population study sample. 
We had previously deleted this file after the smaller vcf file was generated. In response to the suggestion we have re-generated a gvcf, and deposited it in Zenodo (https://doi.org/10.5281/zenodo.198880), alongside code and run-logs, and made a note of this in the manuscript text under ‘Data Availability; Small variant data’. This gvcf differs from the original gvcf only in that: i) an updated version of GATK was used (3.4 compared to 3.2), and ii) all scaffolds of the dm6 assembly were analysed, including those which have not been mapped to specific chromosomal positions.   You suggested that we deposit the genotype data in the ‘Fly-var’ database (http://www.iipl.fudan.edu.cn/FlyVar/). Following communication with the curators of Fly-var, we understand that the current method of data submission is by e-mail, which is unsuitable for our large dataset. We have given the URLs for our data to the Fly-var curators, ready for upload when procedures exist. We hope that these alterations meet your approval, and would be happy to make any further changes that may be required. Sincerely, William Gilks Tanya Pennell Ilona Flis Matthew Webster Ted Morrow" } ] }, { "id": "17452", "date": "16 Nov 2016", "name": "Geraldine A. Van der Auwera", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nOverall this is a very solid technical note. The methods seem sound, the descriptions are fairly straightforward, and caveats are properly acknowledged. 
The authors have provided detailed descriptions of what was done (including software tool versions) and made data and code available to reproduce not only the dataset but also all figures in the paper itself.\n\nRegarding the experimental design, I think it's great that Gilks et al. chose to perform sequencing on individual flies rather than pooled samples. It takes some extra effort to deal with the small amounts of starting material involved, but the resulting dataset is that much more valuable.\n\nIt's also nice to see a study looking at CNVs and short variants together. As tools in this space improve and enable greater integration, I look forward to seeing more analysis of how the different variant types relate to each other (e.g. looking at which short variants might be amplified or suppressed by CNV events).\n\nRequest for additional figures\nI would recommend including diagrams of the hemiclonal experimental design and of the analysis workflows to maximize clarity. In particular, I think it could be made more obvious that the HaplotypeCaller workflow was run using the GVCF pathway for joint analysis.\n\nMinor comments\nI prefer \"high-throughput sequencing\" to \"next generation sequencing\" (this technology was \"next-gen\" ten years ago, now it's just the current standard).\n\nOn page 3, does \"Fine mapping\" refer to realignment around indels or equivalent processes?\n\nOn page 4, I would express \"strand bias\" as \"FisherStrand estimation of strand bias\" to avoid ambiguity with other estimators (like Strand Odds Ratio, SOR).\nOn page 4, does \"null alternate alleles\" refer to the GATK convention of emitting \"*\" to record sites with spanning deletions (as documented at https://software.broadinstitute.org/gatk/guide/article?id=6926)?\nTypos (Page/paragraph)\nP2p3 - \"detections\" -> \"detects\"\nP2p6 - \"off-spring\" -> \"offspring\"\nP3p1 - \"were were\" -> \"were\"\nP4p1 - \"mis-matches\" -> \"mismatches\"\nP6p2 - \"g.vcf\" -> \"GVCF\" or \"gVCF\"
"responses": [ { "c_id": "2366", "date": "20 Dec 2016", "name": "William Gilks", "role": "Author Response", "response": "Dear Dr Van der Auwera, We thank you for reviewing our manuscript, and consider that your suggestions improve the clarity and quality of the manuscript, particularly for technical accuracy, and overall communication. Following these suggestions, we describe the changes that we have made: As requested, we have included a figure describing the breeding of the hemiclonal lines and indicating what is happening to the chromosomes (Figure 1), including a reference to the figure in the main text.   As requested, we have included a figure that summarises the analysis pipeline for the next-generation sequencing data and genotyping procedures (Figure 2).   In accordance with the suggestion, we have changed the term “next-generation sequencing” to \"high-throughput sequencing\". We have also added a reference for the sequencing method (Introduction, first sentence: Bentley et al. 2008, Nature, PMID: 18987734).   We agree that our description of the sequence-read mapping procedures was generally vague and inaccurate (Page 3, Read Mapping methods, 2nd paragraph). Our original sentences were: “Fine mapping was performed with both Stampy v1.0.248 and the Genome Analysis Tool-Kit (GATK) v3.2.29 (following10). Removal of duplicate reads, indexing and sorting was performed with Picard-Tools v1.77 and SamTools v1.0.” We have changed this to: “Remaining reads were re-mapped using Stampy v1.0.24, which is slower but more precisely maps reads which are divergent from the reference genome assembly9. This method was used previously for the Drosophila Genome Nexus10. Removal of duplicate reads, indexing and sorting was performed with Picard-Tools v1.77. 
Re-mapping of sequence reads around insertion-deletion polymorphisms was performed using Genome Analysis Tool-Kit (GATK) v3.2.2, as a recommended standard practice11.” The updated text provides more information on the properties of secondary mapping using Stampy, and how it has been used previously for the Drosophila Genome Nexus. Furthermore, the new text distinguishes the process of fine-mapping of reads around insertion-deletion polymorphisms using GATK.   You suggested, for clarification, changing \"strand bias\" to \"FisherStrand estimation of strand bias\" in order to avoid ambiguity with other estimators (page 4, Small variant detection methods, 2nd paragraph, 3rd sentence).  We have added in brackets a definition for strand bias, in this case as ‘Phred-scaled p-value from Fisher’s Exact test’. We have also added this information to Table 1, on HaplotypeCaller variant metrics.   On page 4 (Small variant detection methods, last paragraph, preparation of data for NCBI dbSNP), you queried whether our use of the term ‘null alternate allele’ refers to the GATK convention of using an asterisk symbol for an alternate allele which is located in a spanning deletion. We have removed the term ‘null alternate allele’, and merely stated that variants located within deletions were excluded. The original sentence was: “For data submission to dbSNP, we removed 44,644 indels that were multi-allelic or greater than 50bp in length, and a further 57,662 variants that had null alternate alleles (likely due to being situated within a deletion).\" The new sentence is: “For data submission to NCBI dbSNP, we were obliged to exclude 44,644 indels that were multi-allelic or greater than 50bp in length, and a further 57,662 SNPs and indels situated within deletions.”   We have made the grammatical and spelling corrections as suggested. We hope that these alterations meet your approval, and would be happy to make any further changes that may be required. 
Sincerely, William Gilks Tanya Pennell Ilona Flis Matthew Webster Edward Morrow" } ] } ]
1
https://f1000research.com/articles/5-2644
https://f1000research.com/articles/5-2893/v1
21 Dec 16
{ "type": "Research Article", "title": "Impact of antiretroviral therapy on clinical outcomes in HIV+ kidney transplant recipients: Review of 58 cases", "authors": [ "Rossana Rosa", "Jose F. Suarez", "Marco A. Lorio", "Michele I. Morris", "Lilian M. Abbo", "Jacques Simkins", "Giselle Guerra", "David Roth", "Warren L. Kupin", "Adela Mattiazzi", "Gaetano Ciancio", "Linda J. Chen", "George W. Burke", "Jose M. Figueiro", "Phillip Ruiz", "Jose F. Camargo" ], "abstract": "Background: Antiretroviral therapy (ART) poses challenging drug-drug interactions with immunosuppressant agents in transplant recipients.  We aimed to determine the impact of specific antiretroviral regimens on clinical outcomes of HIV+ kidney transplant recipients. Methods: A single-center, retrospective cohort study was conducted at a large academic center. Subjects included 58 HIV- to HIV+ adult, first-time kidney transplant patients. The main intervention was the ART regimen used after transplantation.  The main outcomes assessed at one and three years were: patient survival, death-censored graft survival, and biopsy-proven acute rejection; we also assessed serious infections within the first six months post-transplant. Results: Patient and graft survival at three years were both 90% for the entire cohort. Patients receiving protease inhibitor (PI)-containing regimens had lower patient survival at one and three years than patients receiving PI-sparing regimens: 85% vs. 100% (p=0.06) and 82% vs. 100% (p=0.03), respectively. Patients who received PI-containing regimens had twelve times higher odds of death at 3 years compared to patients who were not exposed to PIs (odds ratio, 12.05; 95% confidence interval, 1.31-1602; p=0.02).  
Three-year death-censored graft survival was lower in patients receiving PI vs. patients on PI-sparing regimens (82 vs 100%, p=0.03). Patients receiving integrase strand transfer inhibitors-containing regimens had higher 3-year graft survival. There were no differences in the incidence of acute rejection by ART regimen. Individuals receiving PIs had a higher incidence of serious infections compared to those on PI-sparing regimens (39 vs. 8%, p=0.01). Conclusions: PI-containing ART regimens are associated with adverse outcomes in HIV+ kidney transplant recipients.", "keywords": [ "HIV", "kidney transplant", "protease inhibitor", "antiretroviral therapy", "infection" ], "content": "Introduction\n\nMore than 500 kidney transplants in human immunodeficiency virus–infected (HIV+) recipients have been performed in the United States with acceptable outcomes1–5. HIV infection is associated with a two- to three-fold increase in the risk of rejection3. Reduced exposure to immunosuppressive agents is considered the main mechanism for increased predisposition to rejection3,6,7.\n\nDrug-drug interactions between antiretroviral therapy (ART) and calcineurin inhibitors (CNI), such as tacrolimus, pose a significant clinical challenge. Protease inhibitors (PI) and cobicistat increase the levels of CNI, whereas nonnucleoside reverse transcriptase inhibitors (NNRTI) reduce the levels of these agents. 
In contrast to PI and NNRTI, integrase strand transfer inhibitors (INSTI), which are not substrates of CYP450, have become the preferred antiretrovirals in many centers to overcome the problematic pharmacokinetic interactions6–9.\n\nAlthough tenofovir disoproxil fumarate (TDF) has a good safety profile and is recommended as a first-line agent10, it can cause renal tubular dysfunction in HIV+ individuals11, and tenofovir-related nephrotoxicity is always a concern in kidney transplant recipients.\n\nData on the impact of specific ART regimens on the clinical outcomes of HIV+ kidney transplant recipients are scarce. In the present study, we compared post-transplant outcomes by ART regimen in a group of 58 HIV+ kidney recipients transplanted at our institution over a 9-year period.\n\n\nMethods\n\nWe conducted a single-center, retrospective cohort study of 58 consecutive HIV- to HIV+ adult, first-time kidney transplants performed at the Miami Transplant Institute, affiliated with Jackson Memorial Hospital, a 1,550-bed academic medical center, between October 2006 and October 2015. All HIV+ recipients had an undetectable viral load, and all but one (a kidney-liver recipient) had a CD4 count > 200 cells/mm3 at the time of the transplant. The study was approved by the University of Miami institutional review board (#20150614). 
Written consent was waived by the institutional review board due to the retrospective observational nature of the study.\n\nImmunosuppression and antimicrobial prophylaxis protocols at our center have been previously described4,5.\n\nThe one- and three-year outcomes assessed were: patient survival, death-censored graft survival, and biopsy-proven acute rejection; we also assessed serious infections within the first six months post-transplant, defined as infections requiring admission to the intensive care unit during the initial transplant hospitalization or re-admission to the hospital after discharge4.\n\nThe Fisher exact test and Wilcoxon Mann–Whitney U test were used where appropriate. Univariate analyses were performed using logistic regression with penalized likelihood estimation. Multivariable models were not pursued due to the small number of events. The log-rank test was used to assess differences in time-to-event. Statistical analyses were performed using SAS University Edition (SAS Institute Inc., Cary, NC, USA).\n\n\nResults\n\nA total of 58 HIV+ adult kidney allograft recipients were studied (Table 1). In total, 51 subjects had at least one HIV viral load measurement during the first year post-transplant, and except for six patients who had transient “blips” in viremia (median peak viremia, 130 copies/mL [IQR, 114–193]), all the patients had sustained ART-induced HIV viral load suppression (<50 copies/mL) post-transplant.\n\nBMI, body mass index; CMV, cytomegalovirus; HIV, human immunodeficiency virus; HIVAN, HIV-associated nephropathy; IQR, interquartile range; IVIG, Intravenous immunoglobulin; MMF, mycophenolate mofetil; PRA, panel reactive antibody; PI, protease inhibitor.\n\n*Data presented as absolute number (percentage), unless specified otherwise. The p-value corresponds to comparison of PI-containing and PI-sparing groups using the Fisher exact test. 
The Wilcoxon Mann–Whitney test was used for variables presented as median and IQR.\n\n†All of the patients received anti–thymocyte globulin, basiliximab and methylprednisolone for induction.\n\nϮCold ischemia time and HLA-mismatch data available for 47 and 54 patients, respectively.\n\nϮDuring the first year post-transplant.\n\nThere were no ART restrictions in transplant eligibility for HIV+ candidates during the study period. The three most common regimens post-transplant were nucleoside reverse transcriptase inhibitors (NRTI) plus PI, NRTI plus INSTI, and NRTI plus NNRTI (Table 2).\n\nART, antiretroviral therapy; INSTI, integrase strand transfer inhibitors; NRTI, nucleoside reverse transcriptase inhibitors; NNRTI, nonnucleoside reverse transcriptase inhibitors; PI, protease inhibitors.\n\n†Refers to the ART regimen the patient was discharged home with after the initial transplant hospitalization.\n\nºData only available for 41 patients (due to death, loss of follow-up, or insufficient documentation in the medical record).\n\n*Individual percentage values are rounded and might not total 100%.\n\nA total of 30 (52%) patients underwent ART modifications after transplant; 22 (38%) of them prior to discharge, and an additional 8 (14%) during the first year post-transplant. Adjustments in ART were primarily done to avoid drug-drug interactions or added nephrotoxicity. There was a significant increase in the proportion of patients receiving INSTI at the time of discharge and at 12 months post-transplant compared to the pre-transplant period: 41% (p<0.01) and 51% (p<0.0005) vs. 17%, respectively (Table 2 and Figure 1).\n\nThere was a significant increase in the proportion of patients receiving INSTI-containing regimens at time of discharge and 12 months post-transplant. 
ART, antiretroviral therapy; INSTI, integrase strand transfer inhibitors; NRTI, nucleoside reverse transcriptase inhibitors; NNRTI, nonnucleoside reverse transcriptase inhibitors; PI, protease inhibitors.\n\nThe patient and graft survival at three years were both 90% for the entire cohort. Transplant outcomes varied by ART regimen at the time of discharge after the initial transplant hospitalization. Patients receiving PI-containing regimens had lower patient survival at one and three years than patients receiving PI-sparing regimens: 85% vs. 100% (p=0.06) and 82% vs. 100% (p=0.03), respectively (Table 3 and Figure 2). Patients who received PI-containing regimens had twelve times higher odds of death at three years compared to patients who were not exposed to PIs (odds ratio [OR] 12.05; 95% confidence interval [CI] 1.31-1602; p=0.02). Hepatitis C and delayed graft function also increased the odds of death, but this finding did not reach statistical significance (Table 4). Three-year death-censored graft survival was lower in patients receiving PI vs. patients on PI-sparing regimens (82 vs 100%, p=0.03; Table 3 and Figure 2). By contrast, patients receiving INSTI-containing regimens had higher three-year graft survival rates (100 vs. 82%, p=0.04; Table 3).\n\nART, antiretroviral therapy; INSTI, integrase strand transfer inhibitors; NRTI, nucleoside reverse transcriptase inhibitors; NNRTI, non-nucleoside reverse transcriptase inhibitors; PI, protease inhibitors; TDF, tenofovir disoproxil fumarate.\n\nºRefers to the ART regimen the patient was discharged home with after the initial transplant hospitalization.\n\nP values correspond to Fisher's exact test. Numbers in bold represent statistical significance.\n\n†As defined previously4. 
See main text for details.\n\nˆRegimens listed here were the three most common ART regimens post-transplant in this cohort.\n\n*Includes NRTI + INSTI and NRTI + NNRTI.\n\nKaplan–Meier curves show the (A) 3-year patient survival, (B) 3-year graft survival, (C) 3-year rejection-free survival, and (D) 200-day infection-free survival in PI-sparing (blue) and PI-containing (red) groups. The number of patients in each group is shown at the bottom of each panel.\n\nϮData presented as absolute number (percentage), unless specified otherwise.\n\n*P-value calculated using logistic regression with penalized likelihood estimation (null hypothesis of beta=0).\n\nHIV, human immunodeficiency virus; IQR, interquartile range.\n\nWe next assessed transplant outcomes in patients receiving an NRTI “backbone” combined with either NNRTI, PI or INSTI as a second drug class. Compared to the group of patients receiving NRTI plus INSTI or NRTI plus NNRTI, the 3-year patient and graft survival were lower in patients receiving NRTI plus PI (78 vs. 100%; p=0.05, Table 3 and Figure 3).\n\nKaplan–Meier curves show the (A) 3-year patient survival, (B) 3-year graft survival, (C) 3-year rejection-free survival, and (D) 200-day infection-free survival in NRTI + INSTI (blue), NRTI + NNRTI (red) and NRTI + PI (green) groups. The number of patients in each group is shown at the bottom of each panel. ART, antiretroviral therapy; INSTI, integrase strand transfer inhibitors; NRTI, nucleoside reverse transcriptase inhibitors; NNRTI, nonnucleoside reverse transcriptase inhibitors; PI, protease inhibitors.\n\nCauses of graft loss among patients on PI-containing regimens were acute rejection in two (33%), thrombosis/hemorrhagic complications in two (33%), CNI toxicity in one (17%), and unidentified in one (17%). The cumulative incidence of biopsy-proven acute rejection was 14 and 17% at one and three years post-transplant, respectively. 
There were no significant differences in rejection rates by ART (Figure 2 and Figure 3; Table 3).\n\nSerious non-opportunistic infections within six months post-transplant occurred in 15 (26%) patients. The etiology of such infections, mainly bacterial and fungal in nature, has been reported previously4. In total, 13 (87%) of these patients were on PI-containing regimens. Individuals receiving PI had a higher incidence of serious infections compared to those on PI-sparing regimens (39 vs. 8%, p=0.01; Figure 2). This association remained significant in analyses restricted to patients on an NRTI “backbone”: 39 vs. 10% for patients receiving NRTI + PI compared to those receiving NRTI + INSTI or NNRTI, respectively (p=0.04; Table 3 and Figure 3).\n\nTacrolimus levels at 4, 12, 26 and 52 weeks post-transplant were within therapeutic range for most patient groups (Table 5). Although we did not observe differences in tacrolimus levels by ART at these specific time points, out of 11 patients with tacrolimus levels available at the time of infection, six (54%) had supra-therapeutic levels (median 9.2; IQR, 5.5–10.1).\n\nART, antiretroviral therapy; INSTI, integrase strand transfer inhibitors; NRTI, nucleoside reverse transcriptase inhibitors; NNRTI, non-nucleoside reverse transcriptase inhibitors; PI, protease inhibitors; TDF, tenofovir disoproxil fumarate.\n\n*Only includes patients on NRTI other than TDF.\n\nThe p-value corresponds to comparison of PI-containing and PI-sparing groups using the Fisher exact test.\n\nTacrolimus target levels at our center are 6–8 ng/mL during the first three months and 5–7 ng/mL after three months post-transplant. Higher levels are targeted for highly sensitized patients.\n\n\nDiscussion\n\nConsistent with previous studies of kidney transplantation in HIV1–5, we observed excellent transplant outcomes without evidence of HIV disease progression. 
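The two-group comparisons above (e.g. serious infections, 39% vs. 8%) rest on Fisher's exact test applied to a 2×2 table. Below is a minimal, stdlib-only sketch of the two-sided test; the counts (13/33 on PI-containing vs. 2/25 on PI-sparing) are reconstructed from the reported percentages and cohort size, and the paper's actual analyses were run in SAS.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test p-value for a 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed table.
    """
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # P(X = x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Small tolerance guards against float ties when comparing probabilities.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Serious infections: 13 of 33 PI-containing (39%) vs. 2 of 25 PI-sparing (8%).
p = fisher_exact_two_sided(13, 20, 2, 23)
print(round(p, 3))
```

The resulting p-value should be broadly consistent with the reported p=0.01, up to rounding of the reconstructed counts; a production analysis would use a vetted implementation (e.g. SAS, R, or scipy.stats.fisher_exact).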
The most important finding of the present study is the association between PI use and adverse outcomes, namely reduced three-year patient and graft survival, and increased risk of serious non-opportunistic infections. These observations remained true in analyses restricted to patients receiving NRTI “backbone”; thus, even after excluding the potential influence of other agents included in the ART regimen, PI continued to be associated with poor outcomes. The immunosuppression protocol at our institution remained constant during the study period, and the proportion of patients transplanted in the 2006–2010 (and consequently the 2011–2015) eras was similar between PI and non-PI groups, suggesting that this observation was also independent of variation in transplant practices over time that might have impacted outcomes.\n\nBiopsy-proven acute rejection and CNI toxicity accounted for half of the cases of graft loss in patients taking PI in the present study. Increased risk of allograft rejection in HIV+ individuals has been largely attributed to reduced exposure to immunosuppressive agents, due to drug-drug interactions with ART3,6,7. However, in this small cohort, we did not observe an association between ART regimens and the incidence of rejection. CNI levels at 4, 12, 26 and 52 weeks were comparable across ART groups. Other factors, such as infection of the allograft, previous alloimmunization and immune activation, might also play a role in predisposition to rejection3,5,6.\n\nNon-opportunistic infections within six months post-transplant are common in HIV+ kidney recipients3, especially those with marginal pre-transplant CD4 counts4. 
Notably, the occurrence of serious infections in this cohort was almost five-fold higher in patients receiving PI.\n\nThis might be due to the effects of PI on tacrolimus levels, considering that the overwhelming majority of these patients were on a PI-containing regimen and more than half had tacrolimus levels above target at the time of infection. PI could also influence the net state of immunosuppression by increasing the level or effect of other immunosuppressants, such as prednisone and mycophenolate.\n\nContrary to our expectations, the use of NNRTI or TDF did not influence kidney allograft survival. Tenofovir alafenamide (TAF) is a new formulation of tenofovir associated with less kidney (and bone) toxicity12. Whether there is added clinical benefit of TAF over TDF in kidney transplant recipients remains to be established.\n\nConsistent with recent reports7–9, patients receiving INSTI-containing regimens had excellent patient survival (96%) and graft survival (100%) at three years, and the lowest rejection rates in this cohort (8%). Current guidelines recommend the use of NRTI plus INSTI as a first-line therapy for HIV10. INSTIs pose no interactions with CNI or mTOR inhibitors. In addition, INSTIs have no interactions with direct-acting antivirals, which is important in the setting of hepatitis C co-infection, as HCV co-infection has been associated with poor outcomes2,3. Thus, it has become our practice to preemptively switch HIV+ candidates pre-transplant or in the immediate post-transplant period to PI-sparing, preferably INSTI-based, ART regimens.\n\nAlthough none of the patients studied here was on cobicistat, it is important to highlight that this pharmacokinetic enhancer, contained in several combination pills, can increase the levels of CNI13.
HIV+ recipients and their community HIV providers should be educated about which ART medications to avoid and, when avoidance is not possible, how to adjust CNI doses and monitor levels accordingly.\n\nOur study is limited by the small number of patients and retrospective design; serum levels for other immunosuppressants, such as mycophenolate, were not available. The association found in the present study between PI-containing ART regimens and adverse outcomes needs to be confirmed in larger studies. Until more data become available, the use of PI-sparing regimens in HIV+ kidney recipients seems to be the most prudent approach.\n\n\nData availability\n\nDataset 1: Rosa et al. Impact of ART in KT outcomes in HIV recipients: Raw data. doi, 10.5256/f1000research.10414.d14671714.", "appendix": "Author contributions\n\n\n\nJFC conceived the study; JFC and RR designed the study; JFS, MAL, MIM, LMA, JS, GG, DR, WLK, AM, GC, LJC, GWB, JMF, and PR acquired the data; JFC and RR analyzed the data; JFC and JFS prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported in part by a Miami Center for AIDS Research (CFAR) pilot award to JFC, funded by a grant (P30AI073961) from the National Institutes of Health (NIH). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe thank Analucía Schneégans for technical assistance. We are indebted to all the patients who participated in the present study.\n\n\nReferences\n\nLocke JE, Mehta S, Reed RD, et al.: A National Study of Outcomes among HIV-Infected Kidney Transplant Recipients. J Am Soc Nephrol. 2015; 26(9): 2222–2229.
Sawinski D, Forde KA, Eddinger K, et al.: Superior outcomes in HIV-positive kidney transplant patients compared with HCV-infected or HIV/HCV-coinfected recipients. Kidney Int. 2015; 88(2): 341–349.\n\nStock PG, Barin B, Murphy B, et al.: Outcomes of Kidney Transplantation in HIV-Infected Recipients. N Engl J Med. 2010; 363(21): 2004–2014.\n\nSuarez JF, Rosa R, Lorio MA, et al.: Pretransplant CD4 Count Influences Immune Reconstitution and Risk of Infectious Complications in Human Immunodeficiency Virus-Infected Kidney Allograft Recipients. Am J Transplant. 2016; 16(8): 2463–72.\n\nLorio MA, Rosa R, Suarez JF, et al.: Influence of immune activation on the risk of allograft rejection in human immunodeficiency virus-infected kidney transplant recipients. Transpl Immunol. 2016; 38: 40–43.\n\nStock PG: Kidney infection with HIV-1 following kidney transplantation. J Am Soc Nephrol. 2014; 25(2): 212–215.\n\nTricot L, Teicher E, Peytavin G, et al.: Safety and efficacy of raltegravir in HIV-infected transplant patients cotreated with immunosuppressive drugs. Am J Transplant. 2009; 9(8): 1946–1952.\n\nAzar MM, Malinis MF, Moss J, et al.: Integrase strand transferase inhibitors: the preferred antiretroviral regimen in HIV-positive renal transplantation. Int J STD AIDS. 2016; pii: 0956462416651528, [in press].\n\nKershaw C, Rogers C, Pavlakis M, et al.: Impact of Integrase Inhibitor-Based Antiretroviral Regimen on Outcomes in HIV+ Renal Transplant Recipients. In: 2015 American Transplant Congress, Philadelphia, Pennsylvania. Am J Transplant. 2015; 15(Suppl 3).\n\nPanel on Antiretroviral Guidelines for Adults and Adolescents: Guidelines for the use of antiretroviral agents in HIV-1-infected adults and adolescents. Washington, DC: Department of Health and Human Services, 2016.\n\nKarras A, Lafaurie M, Furco A, et al.: Tenofovir-related nephrotoxicity in human immunodeficiency virus-infected patients: three cases of renal failure, Fanconi syndrome, and nephrogenic diabetes insipidus. Clin Infect Dis. 2003; 36(8): 1070–3.\n\nWang H, Lu X, Yang X, et al.: The efficacy and safety of tenofovir alafenamide versus tenofovir disoproxil fumarate in antiretroviral regimens for HIV-1 therapy: Meta-analysis. Medicine (Baltimore). 2016; 95(41): e5146.\n\nHan Z, Kane BM, Petty LA, et al.: Cobicistat Significantly Increases Tacrolimus Serum Concentrations in a Renal Transplant Recipient with Human Immunodeficiency Virus Infection. Pharmacotherapy. 2016; 36(6): e50–e53.\n\nRossana R, Suarez JF, Lorio MA, et al.: Dataset 1 in: Impact of antiretroviral therapy on clinical outcomes in HIV+ kidney transplant recipients: Review of 58 cases. F1000Research. 2016. Data Source" }
[ { "id": "19166", "date": "26 Jan 2017", "name": "Merceditas Villanueva", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe paper entitled \"Impact of antiretroviral therapy on clinical outcomes in HIV+ kidney transplant recipients: Review of 58 cases\" by Camargo et al. is a well-done study with contributions from a multi-disciplinary group of medical providers (Internal Medicine, Infectious Disease, Nephrology, Surgery). The findings add to a growing experience on the optimal management of HIV+ renal transplants including use of ART. Specifically, it provides statistical analysis demonstrating that PI-containing regimens are significantly associated with worse 3-year survival, 3-year graft function and severe infection rates at 6 months in a relatively large group of patients at a single transplant center. The results add further weight to our own findings at our center1 about the preferential use of INSTI regimens in this population, pre-emptively before transplant.\nThe study design is well explained and tables and figures are clear. The conclusions are balanced and justified on the basis of the data.\nMy main question is that the outcomes are based on the ART regimen in the immediate post-transplant period (there were 33 patients on PI-containing regimen). However, the authors state that at 12 months post-transplant, of their available data, only 21 were on PI-containing regimen, due to switches made, in part due to concerns for drug-drug interactions.
So the conclusions regarding 3-year patient and graft survival may not reflect an on-treatment analysis. In other words, at 3 years, how many patients were still actually taking PIs?\nA few other questions to clarify the study would be helpful: 1) There were 6 deaths at 3 years: what were the causes of death? Infections? How many were on PIs at time of death? 2) What were the specific antiretrovirals used? How was dosing altered post-transplant? 3) Can the authors speculate on what other mechanisms might explain why PI use (aside from effect on CNI or mTOR levels) could affect patient and graft survival?", "responses": [ { "c_id": "2529", "date": "03 Mar 2017", "name": "Jose Camargo", "role": "Author Response", "response": "Thank you for your expert opinion and insightful comments on this study. Referee’s comment: So the conclusions regarding 3-year patient and graft survival may not reflect an on-treatment analysis. In other words, at 3 years, how many patients were still actually taking PIs?   Author’s response: Our data indicate that administration of PI-containing regimens at the time of discharge after initial hospitalization for kidney transplant is associated with worse 3-year patient and graft survival. Although it is true that the number of patients on PI-containing regimens decreased over time (in part due to ART switches made to avoid drug-drug interactions, and in part due to increased mortality in this group), and these data may not strictly reflect an on-treatment analysis, our observations remain valid, as the time-to-event analyses showed that most graft failures (4 of 6; 67%) and most deaths (5 of 6; 83%) at 3 years in this group actually occurred within the first year post-transplant, while patients were still on PIs. Moreover, our observations are consistent with the notion that events occurring in the early period following renal transplantation are associated with long-term graft outcome and patient survival (1).
(1) Woo YM, Jardine AG, Clark AF, et al. Early graft function and patient survival following cadaveric renal transplantation. Kidney Int. 1999;55(2):692-9.   Referee’s comment: There were 6 deaths at 3 years: what were the causes of death? Infections? How many were on PIs at time of death?   Author’s response: Two patients died from septic shock, two from sudden cardiac arrest, and two died at outside facilities, so we were unable to determine the cause of death. All of these 6 patients were on PIs at the time of death.   Referee’s comment: What were the specific antiretrovirals used? How was dosing altered post-transplant?   Author’s response: At our institution we resume ART early in the post-transplant period (i.e., as soon as patients are tolerating oral intake, typically on post-operative day 1 or 2), and all ART agents are adjusted to renal function in coordination with HIV and transplant pharmacists. Except for TDF, all the analyses in this study were performed by ART class and we strongly feel that the association observed for PIs and poor outcomes is a class effect. Due to the size of our cohort we were unable to draw conclusions about specific ART agents within a given class. However, for transparency, these were the specific agents identified within each class in this cohort: NRTI (zidovudine, stavudine, didanosine, emtricitabine, tenofovir DF, lamivudine, abacavir); NNRTI (nevirapine, efavirenz, etravirine, rilpivirine); PI (fosamprenavir, lopinavir, darunavir, atazanavir, ritonavir); INSTI (raltegravir, dolutegravir).   Referee’s comment: Can the authors speculate on what other mechanisms might explain why PI use (aside from effect on CNI or mTOR levels) could affect patient and graft survival?   Author’s response: Biopsy-proven acute rejection and CNI toxicity accounted for half of the cases of graft loss in patients taking PI in the present study.
Cases of acute tubular injury and renal toxicity with PI administration have been described previously (2, 3). Another potential mechanism for the negative impact of PI administration on patient and graft survival is the inhibition of P-gp-mediated NRTI extrusion, resulting in an accumulation of NRTI drugs in the cell and mitochondrial toxicity, leading to the release of apoptogenic factors, DNA fragmentation and apoptosis (4). Some studies have also suggested a link between PI and increased cardiovascular risk, which could potentially account for the two cases of sudden cardiac death in this cohort (5, 6).   (2) Röling J, Schmid H, Fischereder M, et al. HIV-associated renal diseases and highly active antiretroviral therapy-induced nephropathy. Clin Infect Dis. 2006;42(10):1488-95. (3) Shafi T, Choi MJ, Racusen LC, et al. Ritonavir-induced acute kidney injury: kidney biopsy findings and review of literature. Clin Nephrol. 2011;75 Suppl 1:60-4. (4) Petit F, Fromenty B, Owen A, Estaquier J. Mitochondria are sensors for HIV drugs. Trends Pharmacol Sci. 2005;26(5):258-64. (5) Iloeje UH, Yuan Y, L'italien G, et al. Protease inhibitor exposure and increased risk of cardiovascular disease in HIV-infected patients. HIV Med. 2005;6(1):37-44. (6) Rhew DC, Bernal M, Aguilar D, et al. Association between protease inhibitor use and increased cardiovascular risk in patients infected with human immunodeficiency virus: a systematic review. Clin Infect Dis. 2003;37(7):959-72."
} ] }, { "id": "20186", "date": "14 Feb 2017", "name": "Kalathil K Sureshkumar", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis manuscript is a retrospective single-center analysis of the outcomes of kidney transplantation in HIV-infected recipients stratified by anti-retroviral therapy (ART) used. The authors found inferior 3-year graft and patient survivals in patients on protease inhibitor (PI) containing regimens. Six-month opportunistic infections were also high in patients on PI-containing regimens. ART with integrase inhibitors was associated with better outcomes. Integrase inhibitors such as raltegravir have minimal drug-drug interactions and are being increasingly used in ART regimens.\n\nOverall, the manuscript is written well and conclusions are supported by the study findings. I have the following comments:\nWas there any difference in outcomes based on induction agents used? Any postulations as to why the observed inferior graft and patient outcomes in the PI group other than drug-drug interactions?", "responses": [ { "c_id": "2530", "date": "03 Mar 2017", "name": "Jose Camargo", "role": "Author Response", "response": "Thank you for your expert opinion and insightful comments on this study. Referee’s comment: Was there any difference in outcomes based on induction agents used?
Author’s response: All of the patients received the same induction immunosuppression regimen (consisting of anti–thymocyte globulin, anti-CD25 monoclonal antibody and methylprednisolone), precluding this type of analysis in this single-center cohort study. Your point is an interesting one, and other groups have addressed this question (1-3).   (1) Kucirka LM, Durand CM, Bae S, et al. Induction Immunosuppression and Clinical Outcomes in Kidney Transplant Recipients Infected With Human Immunodeficiency Virus. Am J Transplant. 2016;16(8):2368-76. (2) Vivanco M, Friedmann P, Xia Y, et al. Campath induction in HCV and HCV/HIV-seropositive kidney transplant recipients. Transpl Int. 2013 Oct;26(10):1016-26. (3) Locke JE, James NT, Mannon RB, et al. Immunosuppression regimen and the risk of acute rejection in HIV-infected kidney transplant recipients. Transplantation. 2014;97(4):446-50.   Referee’s comment: Any postulations as to why the observed inferior graft and patient outcomes in the PI group other than drug-drug interactions?   Author’s response: Please see our detailed response to a similar question by Dr. Villanueva. We speculate that PI-induced acute tubular injury, increased cardiovascular risk and mitochondrial toxicity are other potential mechanisms affecting outcomes in HIV+ kidney transplant recipients exposed to PIs. Elucidation of the specific mechanism(s) underlying this association requires further research." } ] } ]
1
https://f1000research.com/articles/5-2893
https://f1000research.com/articles/5-2884/v1
20 Dec 16
{ "type": "Research Note", "title": "Meta-analysis of crowdsourced data compendia suggests pan-disease transcriptional signatures of autoimmunity", "authors": [ "William W. Lau", "Rachel Sparks", "OMiCC Jamboree Working Group", "John S. Tsang", "William W. Lau", "Rachel Sparks" ], "abstract": "Background: The proliferation of publicly accessible large-scale biological data together with increasing availability of bioinformatics tools have the potential to transform biomedical research. Here we report a crowdsourcing Jamboree that explored whether a team of volunteer biologists without formal bioinformatics training could use OMiCC, a crowdsourcing web platform that facilitates the reuse and (meta-) analysis of public gene expression data, to compile and annotate gene expression data, and design comparisons between disease and control sample groups.\nMethods: The Jamboree focused on several common human autoimmune diseases, including systemic lupus erythematosus (SLE), multiple sclerosis (MS), type I diabetes (DM1), and rheumatoid arthritis (RA), and the corresponding mouse models. Meta-analyses were performed in OMiCC using comparisons constructed by the participants to identify 1) gene expression signatures for each disease (disease versus healthy controls at the gene expression and biological pathway levels), 2) conserved signatures across all diseases within each species (pan-disease signatures), and 3) conserved signatures between species for each disease and across all diseases (cross-species signatures).\nResults: A large number of differentially expressed genes were identified for each disease based on meta-analysis, with observed overlap among diseases both within and across species. 
Gene set/pathway enrichment of upregulated genes suggested conserved signatures (e.g., interferon) across all human and mouse conditions.\nConclusions: Our Jamboree exercise provides evidence that when enabled by appropriate tools, a \"crowd\" of biologists can work together to accelerate the pace by which the increasingly large amounts of public data can be reused and meta-analyzed for generating and testing hypotheses. Our encouraging experience suggests that a similar crowdsourcing approach can be used to explore other biological questions.", "keywords": [ "meta-analysis", "gene expression", "public data", "autoimmunity", "mouse models of disease", "crowdsourcing", "human and mouse comparison" ], "content": "Introduction\n\nThe volume of large-scale biological data in the public domain is increasing at an unprecedented rate; as a result, data reuse is becoming an increasingly viable means to generate and test hypotheses1 (Figure 1). The reusability of public data, however, depends on the quality and availability of the associated meta-data and annotations. Given a research goal, for example, to generate gene expression signatures for a biological phenotype, one has to first identify and annotate relevant public data, followed by the construction of comparison group pairs (or CGP - see Figure 1 - e.g., a group of samples corresponding to the phenotype of interest versus a group of control samples) and subsequent bioinformatics analyses. Bench scientists are uniquely empowered with biological knowledge to identify and annotate relevant public data and form proper comparisons. Recently, there have also been a variety of crowdsourcing efforts, including hackathons, datathons and open challenges, in which diverse groups of individuals work together to accelerate the pace of pursuing common goals2,3. 
Thus, we were interested in assessing what could be accomplished by harnessing the collective biological knowledge of a group of biologists to explore, identify, and annotate public datasets when empowered with a user-friendly web platform and a shared scientific goal; would this approach accelerate the pace by which useful biological comparison groups could be constructed and utilized? What would be the specific strengths and hurdles, from both a social and scientific perspective? Towards addressing these questions, we conducted a crowdsourcing “Jamboree” exercise within the NIH immunological community to test the hypothesis that the use of OMiCC4 (https://omicc.niaid.nih.gov), an open, programming-free web platform that enables a crowdsourcing approach to public gene expression data reuse, can facilitate the rapid assembly of a large data compendium followed by bioinformatics analyses to generate biological hypotheses. Select aspects of this exercise, particularly on how it provides evidence that a tool such as OMiCC can enable biologists without bioinformatics training to directly explore public data, have been highlighted elsewhere5 and for which this work serves as a companion (also see supplemental website to ref. 5 - https://omicc.niaid.nih.gov/2016-nih-jamboree-analysis/report.html); here we focus on the post-Jamboree data quality control, analysis, and observations, as well as discussing the utility and caveats of this approach.\n\nIncreasing availability of public data opens new opportunities for biologists to generate hypotheses. The NIH OMiCC Jamboree was a social experiment to assess whether a group of biologists without computational experience can identify and annotate public datasets and construct CGPs using the OMiCC tool. 
This paper describes the data analysis, including meta-analysis and gene set enrichment analysis, to derive gene expression signatures across human and mouse.\n\nFor this crowdsourcing experiment, we focused on assessing the gene expression patterns of and shared signatures among several common human autoimmune and inflammatory diseases and the corresponding mouse models. Mouse models of human diseases can be informative for studying disease mechanisms, but may not accurately reflect the underlying biology in humans6,7. We were particularly interested in determining whether we could detect shared gene expression signatures among diseases (pan-disease signatures), including type I diabetes (DM1), multiple sclerosis (MS), rheumatoid arthritis (RA), sarcoidosis (sarcoid), Sjögren’s syndrome (SS), and systemic lupus erythematosus (SLE), as well as among their mouse models. We chose these diseases because they have reasonably well-established mouse models and both human and mouse gene expression data are available publicly. A prior study has also evaluated pan-disease transcriptional signatures and found conserved signals across RA, SLE and SS8. Here we are including more diseases and are additionally interested in assessing whether human and mouse have shared pan-disease signatures. Given that data from mouse are often generated from non-blood tissues while those from humans usually come from blood, such cross-species comparisons could also point to potential links between blood and non-blood tissues. Cross-species comparisons of gene expression signatures have been performed previously in sepsis, for example, where both conserved and divergent signals have been detected6,9,10. 
While our analyses are motivated by these questions, our primary goal here is not to validate previous findings or to generate new biological knowledge per se, but to use this exercise as a proof-of-concept to illustrate the potential utility of data reuse with crowdsourcing.\n\n\nMethods\n\nThe Jamboree was advertised on the NIH Immunology Listserv, which is primarily subscribed by local researchers to disseminate and share immunology-focused information. No inclusion or exclusion criteria were applied to the identification of the participants. The Jamboree involved a half-day group training session using the OMiCC platform followed by a day-long Jamboree, during which 29 volunteer biologists were separated into ten 2- or 3-member teams to search OMiCC for public gene expression datasets of DM1, MS, RA, sarcoid, and SLE (Figure 2). The assignments of teams and topics were based on the participants’ self-declared research backgrounds; additionally, each group had at least one participant who felt proficient using OMiCC after the half-day orientation. Half of the groups were assigned to focus on humans, with one group per disease; similarly, the other half of the groups were assigned to the corresponding mouse models. The participants were asked to use OMiCC (https://omicc.niaid.nih.gov) to annotate sample groups and create CGPs between disease and control samples in the studies they identified. They were also encouraged to consult the primary publications describing the studies to help ensure the accuracy of their annotations. Although Sjögren’s syndrome was not originally assigned to any group, the sarcoidosis groups were not able to find sufficient studies from which to construct CGPs and were thus subsequently reassigned to focus on Sjögren’s syndrome.
Compendia of CGPs created by the teams can be accessed and reused within OMiCC (see Data and Software Availability).\n\nWorkflow of the NIH Jamboree detailing steps taken prior to, during, and after the actual Jamboree event.\n\nA total of 86 human CGPs were collected from the Jamboree, spread across the six diseases. Participants were instructed to identify public microarray datasets in OMiCC that contained data derived from whole blood (WB) or peripheral blood mononuclear cells (PBMCs) of both healthy controls and affected patients; they were asked to avoid studies of stimulated cells. Post-Jamboree CGP QC was required in order to correct misplaced annotations or to standardize annotations created with free text. Only 54 of the 86 CGPs were created with samples annotated as PBMC or WB. We removed an additional 15 CGPs for the following reasons: 1) incorrect sample annotations; 2) the CGP did not contain sample groups from both cases and controls; and 3) the samples in the CGP significantly overlapped with those in another CGP (Jaccard index > 66%). As a result, 39 human CGPs representing five diseases (note that no WB or PBMC samples passed QC for Sjögren’s syndrome) were included in the downstream analyses (Table 1).\n\nEach dataset comprises a set of comparison group pairs (CGPs), which in turn contain a number of case and control microarray samples. Since the same sample may be selected in more than one CGP, the number of unique samples in each group is listed. Common genes are those measured across all platforms in a dataset. These genes were considered in the rank-based meta-analyses, some of which were identified as having significantly (PFP <= 0.05) increased (UP) or decreased (DOWN) expression. Genes in both UP and DOWN lists were removed.
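The sample-overlap rule used in the QC above (a CGP is dropped when its samples overlap another CGP's with a Jaccard index above 66%) reduces to simple set arithmetic. This is an illustrative sketch, not the actual curation script, and the sample IDs are hypothetical.

```python
def jaccard(a, b):
    """Jaccard index of two sample-ID collections: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def deduplicate_cgps(cgps, threshold=0.66):
    """Keep each CGP (a set of sample IDs) only if it does not overlap
    an already-kept CGP by more than the threshold."""
    kept = []
    for samples in cgps:
        if all(jaccard(samples, other) <= threshold for other in kept):
            kept.append(set(samples))
    return kept
```

For example, a CGP sharing three of four samples with an earlier one (Jaccard index 0.75) would be flagged as redundant and dropped.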
The datasets ‘human_pan-disease’ and ‘mouse_pan-disease’ were created by combining all CGPs constructed for each species.\n\nParticipants of the mouse teams created a total of 94 CGPs from mouse models of the aforementioned six diseases. Participants were instructed to identify public microarray datasets in OMiCC that contained data derived from non-blood tissues, WB, or PBMCs of both healthy and diseased mice; they were asked to avoid studies of stimulated cells. Due to the complexities of the mouse models and studies, the overall quality of the CGPs was comparatively lower than that of the human CGPs. For example, a substantial fraction of CGPs contained data from stimulated cells despite our explicit call for avoiding such studies; these CGPs were excluded. Four CGPs were excluded because they were duplicates of other CGPs. Some CGPs had young, clinically unaffected mice as controls and older, clinically ill mice as cases (e.g., age-related disease progression models), while others were obtained from purified cell subsets (e.g., CD4+ T cells and B cells). We still included these CGPs in our final set with the goal of identifying conserved signals through meta-analysis. After this curation process, 34 CGPs remained across four diseases because no samples from sarcoidosis or Sjögren’s syndrome passed QC (Table 1).\n\nIn addition to the individual disease datasets (i.e., a collection of CGPs), all the CGPs for each species were combined to create a pan-disease compendium—one for human and one for mouse.\n\nMeta-analysis was conducted in OMiCC to derive differential expression signatures for each dataset (note that OMiCC uses a rank-based meta-analysis R package called RankProd11, version 2.36.0). The results were reported at the gene level, based on internal OMiCC mappings between platform-specific probe identifiers and standard HUGO gene names. 
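The rank product statistic at the core of RankProd can be sketched in a few lines: rank genes by fold change within each CGP, then combine the ranks geometrically across CGPs. This simplified illustration assumes one log fold change per gene per CGP and omits the permutation step RankProd uses to estimate significance.

```python
def rank_products(fold_changes):
    """fold_changes: dict mapping gene -> list of per-CGP log fold changes
    (one value per CGP, same length for every gene). Returns gene ->
    geometric mean of its up-regulation ranks (rank 1 = most up-regulated);
    small values indicate consistent up-regulation across CGPs."""
    genes = list(fold_changes)
    n_cgps = len(next(iter(fold_changes.values())))
    product = {g: 1.0 for g in genes}
    for i in range(n_cgps):
        # rank genes by descending fold change within CGP i
        ordered = sorted(genes, key=lambda g: -fold_changes[g][i])
        for rank, gene in enumerate(ordered, start=1):
            product[gene] *= rank
    return {g: product[g] ** (1.0 / n_cgps) for g in genes}
```

RankProd then compares each gene's rank product to a permutation-based null to assign it a PFP value.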
For each gene, this method reports the false prediction rate (PFP - similar to false discovery rate (FDR)) for both increased and decreased expression (herein referred to as the UP and DOWN genes, or differentially expressed (DE) genes when they are combined). Using PFP <= 0.05 as a threshold, we identified UP and DOWN genes for each disease (meta-analysis per species) and for each species (meta-analysis across all CGPs within a species to derive pan-disease gene signatures). Genes with conflicting indications (which is possible with the RankProd method used by OMiCC), i.e. those suggested to have increased and decreased expression for the same disease, were removed. The resulting gene lists and meta-analysis output were exported as text files for further processing. Prior to any downstream analyses, mouse genes were mapped to human genes using NCBI’s homology maps (ftp://ftp.ncbi.nlm.nih.gov/pub/homology_maps/human/, version 12/27/15) and those with either no or non-unique mappings were discarded. The robustness of the RankProd (rank-based) results was evaluated using another effect-size metric called Cohen’s d, which was calculated in R as\n\nd = t √[(nD + nC)/(nD nC)] · √[(nD + nC)/(nD + nC − 2)],\n\nwhere t is the t statistic reported by OMiCC, and nD and nC are the number of samples in the disease and control groups, respectively.\n\nGene set-based enrichment (or over-representation) analyses were carried out separately for the UP and DOWN genes from each of the four diseases in human and mouse (i.e., DM1, MS, RA, and SLE) against terms in KEGG (http://www.genome.jp/kegg/) or Reactome (http://www.reactome.org/) containing 3 to 500 genes, using the R clusterProfiler12 (version 3.0.5) and ReactomePA13 (version 1.16.2) packages, respectively.
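The t-to-Cohen's-d conversion described above is equally direct in code. This sketch reads the formula as d = t(nD + nC)/√(nD·nC·(nD + nC − 2)), the standard conversion from a two-sample t statistic; the radical placement is one reading of the typeset equation.

```python
from math import sqrt

def cohens_d_from_t(t, n_d, n_c):
    """Cohen's d recovered from a two-sample t statistic and the
    disease (n_d) and control (n_c) group sizes."""
    return t * (n_d + n_c) / sqrt(n_d * n_c * (n_d + n_c - 2))
```

For equal group sizes this reduces to the familiar d = 2t/√df; for example, t = 2 with 10 samples per group gives d ≈ 0.94.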
In addition, to illustrate how similar analyses can be performed without any programming, enrichment analyses were also carried out using the web-based Toppgene tool14 (https://toppgene.cchmc.org/enrichment.jsp; using default settings and discarding any input gene that mapped to multiple entries). Pan-disease signatures were generated by meta-analyzing each of the two pan-disease compendia (one for human and one for mouse)—a pan-disease compendium contains the CGPs from all diseases within a species. The method implemented by the above software determines enrichment by evaluating the statistical significance of the overlap between the input DE gene list and target gene sets using the hypergeometric test, and we considered gene sets and pathways with an adjusted p-value of <=0.05 to be significantly enriched. Conserved signatures between human and mouse were determined simply by finding the gene sets and pathways that were significantly enriched in both human and mouse.\n\nThis work did not require ethics approval, as per NIH guidelines.\n\n\nResults\n\nUsing the 39 human and 34 mouse CGPs created by the Jamboree participants (after QC), we ran a meta-analysis within OMiCC across the CGPs for each disease. The number of DE genes varies substantially across diseases, possibly driven in part by differences in sample sizes and in the number of common genes shared among profiling platforms in each disease/CGP collection (Table 1 and Figure 3A; a list of DE genes for each disease is in Table S1). Comparison of the DE gene sets among diseases, separately for UP and DOWN genes, reveals strong signature overlaps among some diseases. Figures 3B–C show the odds ratios (OR) between pairs of diseases; pairs with OR > 1 have a higher than expected number of overlapping genes.
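The odds ratios shown in Figure 3B–C quantify how strongly two DE gene lists overlap relative to chance. Given a shared background of measured genes, each one comes from the 2×2 table of membership in the two lists; this is an illustrative sketch, and the gene names used in the example are hypothetical.

```python
def overlap_odds_ratio(genes_a, genes_b, background):
    """Sample odds ratio (a*d)/(b*c) for the overlap of two gene lists
    drawn from a common background of measured genes; values above 1
    mean more shared genes than expected by chance."""
    a_set, b_set, bg = set(genes_a), set(genes_b), set(background)
    a = len(a_set & b_set)        # in both lists
    b = len(a_set - b_set)        # only in list A
    c = len(b_set - a_set)        # only in list B
    d = len(bg - a_set - b_set)   # in neither
    return (a * d) / (b * c)
```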
Interestingly, there tended to be stronger overlap between pairs of diseases within a species than between the same disease across human and mouse.\n\n(A) Number of differentially expressed genes and (B–C) the proportion of genes that overlap (i.e. Jaccard index) between UP and DOWN genes (PFP <= 0.05) for pairs of diseases, as indicated by the size and color intensity of the circles. The number in each cell denotes the odds ratio, which is a measure of statistical association between the two groups based on the degree of gene overlap. An odds ratio of 1 suggests no association. Hs = human; Mm = mouse.\n\nGiven that meta-analysis results can be method dependent15, we next assessed the robustness of the rank-based meta-analysis method used by OMiCC by an independent analysis using a standardized effect-size metric known as Cohen’s d, which is the mean difference of expression values between the case and control groups normalized by the joint standard deviation. For each CGP, we ranked the genes according to their Cohen’s d value. Then, for each collection of CGPs on which an OMiCC meta-analysis was performed (e.g., RA in humans), we calculated the median rank of each gene among the CGPs. The genes with large effect sizes according to Cohen’s d should be enriched for those identified as having increased expression by the rank-based method in OMiCC, and conversely for the decreased-expression genes. The comparison indicates that for most diseases, the OMiCC rank-based results are largely consistent with the effect-size approach, although there were a number of genes discordant between the two methods (Figure S1).\n\nTo gain higher-level insights (e.g., pathways and biological processes) into the gene signatures identified, we assessed whether the UP and DOWN genes identified in the previous steps (Figure S2 and Table S2) are enriched for gene sets and pathways annotated in KEGG and Reactome. 
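An odds ratio for the overlap of two DE gene lists, as reported in Figure 3B–C, can be computed from the 2x2 table of gene membership; a sketch in Python (the 0.5 continuity correction is our choice to avoid division by zero and may differ from the paper's exact computation):

```python
def overlap_odds_ratio(genes_a, genes_b, background):
    """Odds ratio for the overlap of two gene sets over a common
    background; OR > 1 indicates more shared genes than expected
    by chance. A Haldane-style 0.5 correction handles empty cells."""
    a = len(genes_a & genes_b)               # DE in both lists
    b = len(genes_a - genes_b)               # only in A
    c = len(genes_b - genes_a)               # only in B
    d = len(background - genes_a - genes_b)  # in neither
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
```

For example, two 4-gene lists sharing 2 genes over a 10-gene background give an OR of 1.8, i.e. modestly more overlap than expected.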
The analyses were conducted in R (version 3.3.1) and also with ToppGene (a web-based tool). Note that the differences between the R and ToppGene analyses can be partially explained by the fact that ToppGene assumes that all genes in the genome have been measured (i.e., the “background” set), which is not true in this analysis because we only assessed genes common among gene-expression profiling platforms used to generate the data in the compendium (Table 1).\n\nTo generate pan-disease signatures, we next attempted to extract common enriched pathways across all diseases within each species. One simple approach is to identify overlapping signatures from the significantly enriched pathways of individual diseases, but its statistical power could be limited. Indeed, using this strategy the only globally enriched pathway is the Reactome term “Chemokine receptors bind chemokines” from the UP genes of the mouse datasets. Thus, we also tested an alternative approach where all CGPs from each species across diseases were pooled together to form a single OMiCC compendium for meta-analysis (i.e., “human_pan-disease” and “mouse_pan-disease”; Figure 4). In this manner, the large number of samples increased the statistical power of the meta-analysis, thus resulting in a larger number of pan-disease enrichment signatures, including those reflecting broad immune activation and the well-appreciated interferon signature in humans8 (Figure 4). However, this approach can potentially be confounded by variation in sample sizes across diseases, e.g., diseases with larger numbers of samples may dominate the signal.\n\nOver-representation analyses of the (A) UP genes and (B) DOWN genes (PFP <= 0.05) identified by using all CGPs from each species in the meta-analysis. The analyses were performed in both R and ToppGene; the top 20 enriched terms identified in R are shown. Terms found also in ToppGene are indicated by an asterisk (*). 
P-values are adjusted by Benjamini and Hochberg (BH) FDR correction (shown as 'adj.p'). Counts (indicated by circle size) and gene ratios (x-axis) respectively denote the number and proportion of genes in the UP or DOWN signature that also appear in the target gene set.\n\nWe next used a conservative approach to assess shared gene set/pathway signatures between human and mouse by requiring that enriched terms be statistically significant in both human and mouse (after multiple-testing correction). Interestingly, using this criterion, all pan-disease enrichments conserved between human and mouse were derived from the UP genes (Figure 5), which may partially reflect increases in immune cell frequencies (e.g., increases in monocytes in blood and/or tissues) as potential underlying drivers of these species-conserved, pan-disease signatures.\n\nOver-representation analyses of the (A) UP genes and (B) DOWN genes (PFP <= 0.05) identified within OMiCC were carried out against KEGG and Reactome terms (see also Figure 4, Figure S2, and Table S2). For each individual CGP compendium (disease or pan-disease), gene sets or terms with adjusted p-value <= 0.05, as defined by the hypergeometric test after adjustment by the Benjamini and Hochberg (BH) FDR procedure, in both human and mouse are listed. These overlapping terms highlight signatures conserved between human and mouse. Gene ratios (x-axis) denote the fraction of genes in the respective signature (human and mouse, as denoted by blue and red, respectively) that are in the target gene set (y-axis).\n\n\nDiscussion\n\nOur crowdsourcing exercise illustrates that a group of biologists without formal bioinformatics training can use OMiCC, a programming-free web-based platform, to generate a sizable number of CGPs during a day-long group exercise with a shared scientific goal. 
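The Benjamini and Hochberg (BH) step-up adjustment used for the enrichment p-values can be sketched as follows (equivalent in intent to R's p.adjust(method = "BH"); this implementation is ours):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure),
    returned in the original input order. Each adjusted value is
    p * m / rank, made monotone from the largest rank downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adj = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # walk from largest to smallest p
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adj[i] = running_min
    return adj
```

For example, bh_adjust([0.01, 0.04, 0.03, 0.005]) returns [0.02, 0.04, 0.04, 0.02], so all four tests would pass the adjusted 0.05 threshold.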
This is encouraging because CGP construction can be time consuming, requires biological expertise, and is often required for public data reuse and meta-analysis. Our observation suggests that other groups should be able to replicate our experience in their own institutions to pursue other scientific questions. However, there are some caveats: substantial QC was required to remove improperly constructed CGPs, such as those created from data obtained using stimulated cells (which was an exclusion criterion we specified, but nonetheless, compliance was less than perfect). Additionally, CGPs were more difficult to construct for the biologically more complex mouse models, and thus more were removed in the QC process. It is likely that early participant feedback on CGP quality during the Jamboree would help ensure higher-quality CGPs, thereby reducing some of the required post-Jamboree QC. This also suggests that extending the Jamboree to two days, for example, with another day for the participants to review and QC the CGPs, could be valuable.\n\nFollowing QC, meta-analysis performed within OMiCC led to several interesting observations: firstly, evaluation of DE genes showed substantial signature overlaps among diseases within species, and to a lesser extent, between the two species. Secondly, these findings were largely consistent when evaluated using an effect-size-based approach. However, caution needs to be exercised in interpreting the results, as the identification of DE genes can be influenced by a number of variables that cannot be controlled in this type of analysis. For example, as more CGPs from independent studies using different platforms are included in the analysis, the number of common genes among the platforms typically decreases, thus reducing the number of genes for which differential expression can be evaluated. 
Meta-analysis of CGPs containing overlapping samples can also give a false sense of robustness because the true PFP (or FDR) can be higher than what is reported. Other potential confounding factors include unequal distributions of age and race (or strain for mice) between sample groups within CGPs. However, these can also increase the heterogeneity across CGPs, so any conserved signals that emerge from the meta-analyses of the CGPs are likely relatively robust16. Notwithstanding differences in meta-analysis methodologies, our analysis identified a larger number of pan-disease DE genes in human compared to an earlier, similar meta-analysis effort8 (1021 versus 210 UP and 976 versus 202 DOWN genes), likely in part because our analysis included a larger number of CGPs/studies curated by the Jamboree participants. This highlights the potential benefit of using crowdsourcing to amass a large multi-study dataset within a relatively short amount of time.\n\nUsing tools outside of OMiCC, gene set/pathway enrichment analysis revealed, as expected, a higher level of conservation across diseases at the pathway level than at the gene level. Some of the enriched KEGG and Reactome terms were consistent with previous reports, e.g., “cytokine signaling” was enriched in genes with increased expression in human SLE. It is well-established that SLE patients exhibit increased expression of IFN-inducible genes in blood compared to healthy controls17,18. The term “cytokine signaling” was also enriched (albeit to a lesser magnitude) in RA, as well as in the human and mouse pan-disease signatures, and it was furthermore conserved between human and mouse; these results are again consistent with previous reports8,19–21. Our pathway enrichment analysis also identified some less well-established, but potentially biologically interesting associations. For example, the KEGG term “Malaria” is enriched in the UP genes in RA due to genes such as CR1, GYPA, ICAM1, PECAM1, and TLR4. 
It is not clear whether this is related to the fact that anti-malarial drugs, such as hydroxychloroquine, have been used as a secondary treatment for RA for many years22. It has been suggested that hydroxychloroquine interferes with Toll-like receptor signaling23 to reduce immune cell activation and proliferation, although its exact mechanism of action in ameliorating RA is still not well understood. Another potentially interesting observation is the enrichment of platelet-related pathways in a number of signatures. The involvement of platelets has been implicated in various autoimmune diseases24, particularly in RA25, and platelets have been proposed as a potential therapeutic target for some of the autoimmune diseases assessed here26.\n\nOn reflection, there are several ways in which our Jamboree could have been improved, such as offering more extensive training using OMiCC prior to data exploration, providing early feedback on the construction of CGPs, and creating independent discovery and validation cohorts to strengthen the robustness of our preliminary observations. Despite some of the caveats associated with our analyses and results, overall we provided evidence that user-friendly crowdsourcing and analysis platforms, such as OMiCC, can potentially accelerate the pace at which public data can be utilized to generate and test hypotheses.\n\n\nData and software availability\n\nThe comparison group pairs (CGPs, e.g., RA versus healthy) created by the Jamboree participants and used in the post-Jamboree analyses have been made public in OMiCC at: https://omicc.niaid.nih.gov/. They are collected in compendia whose names have the format 2016-NIH-Jamboree-Species-Disease (species can be either Human or Mouse, while diseases include DM1, MS, RA, SLE, and Sarcoid). These compendia can be retrieved in OMiCC by using the compendia search function (on the OMiCC homepage: Search > On Compendia) and searching for the keyword '2016-NIH-Jamboree'. 
This information can also be retrieved from Dataset 1 listed below.\n\nTo retrieve the raw microarray data, a user can construct new compendia using selected CGPs from the Jamboree compendia collection (see the Community and Sharing Features section of the OMiCC Tutorial) and export the gene expression data from the web site.\n\nF1000Research: Dataset 1. R data file, 10.5256/f1000research.10465.d14699427\n\nF1000Research: Dataset 2. R markdown script to generate the data analysis report, 10.5256/f1000research.10465.d14699528\n\nF1000Research: Dataset 3. Meta-analysis output files exported from OMiCC, 10.5256/f1000research.10465.d14699629\n\nF1000Research: Dataset 4. Results of Toppgene analyses against KEGG, Reactome, and Gene Ontology (GO) Biological Process terms using the DE genes listed in Table S1 as input, 10.5256/f1000research.10465.d14699730", "appendix": "Author contributions\n\n\n\nWWL helped design the Jamboree, performed post-Jamboree data curation, designed and performed post-Jamboree data analysis, and wrote the manuscript; RS designed and organized the Jamboree, performed post-Jamboree data curation, and wrote the manuscript; OJWG participated in the Jamboree; JST conceived and guided the project, designed and helped organize the Jamboree, helped design post-Jamboree data analysis plan, helped post-Jamboree data curation, and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research was funded by the Intramural Programs of the National Institute of Allergy and Infectious Diseases (NIAID) and the Center for Information Technology (CIT) at the National Institutes of Health.\n\n\nAcknowledgements\n\nWe thank BCBB/OCICB of NIAID for providing computing support and web hosting; NIH Facilities for providing the OMiCC Jamboree hosting venue; and members of the J.S.T. 
lab for discussions.\n\n\nConsortium/Collective Authors\n\nThe OMiCC Jamboree Working Group\n\n(Listed alphabetically by last name)\n\nJames Austin1, Neha Bansal1, Julián Candia2, Ehren Dancy1, Karen L. Elkins3, Sara Faghihi-Kashani4, Julio Gomez-Rodriguez5, Liliana Guedez6, Yongjian Guo1, Maria J. Gutierrez7, Trung Ho8, Reiko Horai6, Sunmee Huh9, Chie Iwamura10, Jaimy Joy11, Ju-Gyeong Kang12, Sunil Kaul9, Laura B. Lewandowski13, Candace Liu1, Yong Lu1, Nathan P. Manes1, Mary J. Mattapallil6, Sarfraz Memon9, M. Jubayer Rahman10, Kameron B. Rodrigues10, Bruno Silva11, Amit Singh11, Anthony J. St. Leger6, Jessica Tang12, Abigail Thorpe1, Hang Xie3, Yongge Zhao9, Ofer Zimmerman1\n\n1. National Institute of Allergy and Infectious Diseases, National Institutes of Health (NIH), Bethesda, MD, USA, 20892\n\n2. Trans-NIH Center for Human Immunology, NIH\n\n3. Center for Biologics Evaluation and Research, Food and Drug Administration, Silver Spring, MD, USA, 20993\n\n4. National Institute of Environmental Health Sciences, NIH\n\n5. National Human Genome Research Institute, NIH\n\n6. National Eye Institute, NIH\n\n7. Johns Hopkins University School of Medicine, Baltimore, MD, USA, 21287\n\n8. Uniformed Services University of Health Sciences, Bethesda, MD, USA, 20814\n\n9. National Cancer Institute, NIH\n\n10. National Institute of Diabetes and Digestive and Kidney Diseases, NIH\n\n11. National Institute on Aging, NIH\n\n12. National Heart, Lung and Blood Institute, NIH\n\n13. National Institute of Arthritis and Musculoskeletal and Skin Diseases, NIH\n\n\nReferences\n\nRung J, Brazma A: Reuse of public genome-wide gene expression data. Nat Rev Genet. 2013; 14(2): 89–99. PubMed Abstract | Publisher Full Text\n\nSaez-Rodriguez J, Costello JC, Friend SH, et al.: Crowdsourcing biomedical research: leveraging communities as innovation engines. Nat Rev Genet. 2016; 17(8): 470–86. 
PubMed Abstract | Publisher Full Text\n\nCeli LA, Lokhandwala S, Montgomery R, et al.: Datathons and Software to Promote Reproducible Research. J Med Internet Res. 2016; 18(8): e230. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShah N, Guo Y, Wendelsdorf KV, et al.: A crowdsourcing approach for reusing and meta-analyzing gene expression data. Nat Biotechnol. 2016; 34(8): 803–6. PubMed Abstract | Publisher Full Text\n\nSparks R, Lau WW, Tsang JS: Expanding the immunology toolbox: embracing public-data reuse and crowdsourcing. Immunity. 2016. Publisher Full Text\n\nSeok J, Warren HS, Cuenca AG, et al.: Genomic responses in mouse models poorly mimic human inflammatory diseases. Proc Natl Acad Sci U S A. 2013; 110(9): 3507–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWebb DR: Animal models of human disease: inflammation. Biochem Pharmacol. 2014; 87(1): 121–30. PubMed Abstract | Publisher Full Text\n\nToro-Domínguez D, Carmona-Sáez P, Alarcón-Riquelme ME: Shared signatures between rheumatoid arthritis, systemic lupus erythematosus and Sjögren's syndrome uncovered through gene expression meta-analysis. Arthritis Res Ther. 2014; 16(6): 489. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGodec J, Tan Y, Liberzon A, et al.: Compendium of Immune Signatures Identifies Conserved and Species-Specific Biology in Response to Inflammation. Immunity. 2016; 44(1): 194–206. PubMed Abstract | Publisher Full Text\n\nTakao K, Miyakawa T: Genomic responses in mouse models greatly mimic human inflammatory diseases. Proc Natl Acad Sci U S A. 2015; 112(4): 1167–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHong F, Breitling R, McEntee CW, et al.: RankProd: a bioconductor package for detecting differentially expressed genes in meta-analysis. Bioinformatics. 2006; 22(22): 2825–7. PubMed Abstract | Publisher Full Text\n\nYu G, Wang LG, Han Y, et al.: clusterProfiler: an R package for comparing biological themes among gene clusters. 
OMICS. 2012; 16(5): 284–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu G, He QY: ReactomePA: an R/Bioconductor package for reactome pathway analysis and visualization. Mol Biosyst. 2016; 12(2): 477–9. PubMed Abstract | Publisher Full Text\n\nChen J, Bardes EE, Aronow BJ, et al.: ToppGene Suite for gene list enrichment analysis and candidate gene prioritization. Nucleic Acids Res. 2009; 37(Web Server issue): W305–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTseng GC, Ghosh D, Feingold E: Comprehensive literature review and statistical considerations for microarray meta-analysis. Nucleic Acids Res. 2012; 40(9): 3785–99. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSweeney TE, Haynes WA, Vallania F, et al.: Methods to increase reproducibility in differential gene expression via meta-analysis. Nucleic Acids Res. 2016; pii: gkw797. PubMed Abstract | Publisher Full Text\n\nBaechler EC, Batliwalla FM, Karypis G, et al.: Interferon-inducible gene expression signature in peripheral blood cells of patients with severe lupus. Proc Natl Acad Sci U S A. 2003; 100(5): 2610–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBennett L, Palucka AK, Arce E, et al.: Interferon and granulopoiesis signatures in systemic lupus erythematosus blood. J Exp Med. 2003; 197(6): 711–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHiggs BW, Liu Z, White B, et al.: Patients with systemic lupus erythematosus, myositis, rheumatoid arthritis and scleroderma share activation of a common type I interferon pathway. Ann Rheum Dis. 2011; 70(11): 2029–36. PubMed Abstract | Publisher Full Text\n\nLiu Z, Bethunaickan R, Huang W, et al.: Interferon-α accelerates murine systemic lupus erythematosus in a T cell-dependent manner. Arthritis Rheum. 2011; 63(1): 219–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTsokos GC, Lo MS, Reis PC, et al.: New insights into the immunopathogenesis of systemic lupus erythematosus. 
Nat Rev Rheumatol. 2016; 12(12): 716–30. PubMed Abstract | Publisher Full Text\n\nvan Vollenhoven RF: Treatment of rheumatoid arthritis: state of the art 2009. Nat Rev Rheumatol. 2009; 5(10): 531–41. PubMed Abstract | Publisher Full Text\n\nKyburz D, Brentano F, Gay S: Mode of action of hydroxychloroquine in RA-evidence of an inhibitory effect on toll-like receptor signaling. Nat Clin Pract Rheumatol. 2006; 2(9): 458–9. PubMed Abstract | Publisher Full Text\n\nHabets KL, Huizinga TW, Toes RE: Platelets and autoimmunity. Eur J Clin Invest. 2013; 43(7): 746–57. PubMed Abstract | Publisher Full Text\n\nBoilard E, Nigrovic PA, Larabee K, et al.: Platelets amplify inflammation in arthritis via collagen-dependent microparticle production. Science. 2010; 327(5965): 580–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoilard E, Blanco P, Nigrovic PA: Platelets: active players in the pathogenesis of arthritis and SLE. Nat Rev Rheumatol. 2012; 8(9): 534–42. PubMed Abstract | Publisher Full Text\n\nLau WW, Sparks R; OMiCC Jamboree Working Group, et al.: Dataset 1 in: Meta-analysis of crowdsourced data compendia suggests pan-disease transcriptional signatures of autoimmunity. F1000Research. 2016. Data Source\n\nLau WW, Sparks R; OMiCC Jamboree Working Group, et al.: Dataset 2 in: Meta-analysis of crowdsourced data compendia suggests pan-disease transcriptional signatures of autoimmunity. F1000Research. 2016. Data Source\n\nLau WW, Sparks R; OMiCC Jamboree Working Group, et al.: Dataset 3 in: Meta-analysis of crowdsourced data compendia suggests pan-disease transcriptional signatures of autoimmunity. F1000Research. 2016. Data Source\n\nLau WW, Sparks R; OMiCC Jamboree Working Group, et al.: Dataset 4 in: Meta-analysis of crowdsourced data compendia suggests pan-disease transcriptional signatures of autoimmunity. F1000Research. 2016. Data Source" }
[ { "id": "19356", "date": "16 Jan 2017", "name": "Hans Lehrach", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nLau and colleagues describe an interesting effort of a group of biologists without formal bioinformatics training to use a programming-free web-based platform to generate a sizable number of comparison group pairs (CGPs) during a day-long group exercise, using gene expression data from humans and mouse models. The subsequent gene and gene set enrichment analyses – performed after quality control of the generated CGPs – yield reasonable results for a number of autoimmune diseases, resulting in plausible enrichments identified for genes and gene sets associated with inflammation and immune processes.\nThe described effort is a potentially scalable method for analysis of very large data sets using the combined manpower of a large number of individuals. To produce more quantifiable data on this process, it would however be interesting to compare the results of duplicates (do individual groups working in isolation on the same question come up with the same or different results?). How does the result of such a one-day Jamboree compare with the results of a single expert working for a month? If you would rerun the exercise, how different would you expect the results to be? 
It might also be interesting to systematically eliminate one dataset at a time to quantitate its influence on the final result.\nOne major aspect of the study worth more detailed reporting is the way quality controls are carried out on the CGPs collected by the crowd. This aspect will become even more important when a large crowd is used, and more CGPs are collected, and constitutes one of the main pillars of all subsequent analyses. Therefore, I would suggest that the authors report in more detail their strategies and the conduct of the quality controls, along with more details on potential caveats and pitfalls.\n\nOther comments:\nMethods\nMajor comment:\n\"we considered gene sets and pathways with an adjusted p-value of <=0.05 to be significantly enriched\" Here it is not clear if p-values were adjusted for multiple testing, e.g. using Benjamini-Hochberg correction.\n\nA more detailed description should be given for the ToppGene analysis\n\nMinor comment:\nToppgene is actually spelled \"ToppGene\"", "responses": [] }, { "id": "21277", "date": "18 Apr 2017", "name": "Markus Riester", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper describes a thorough case study using the authors' recently published OMiCC web service. This service provides re-processed expression data and allows the curation and selection of datasets by disease experts without requiring bioinformatic expertise. 
Performing gene expression meta-analyses is challenging and time consuming for precisely the reasons this tool addresses and tools like OMiCC are therefore a welcome addition to the field.\nThe paper is clearly written and both design and implementation are in general solid.\nA shortcoming of the design is that the curation teams were all assigned different tasks. It would have been interesting to see the overlap of curations obtained by independent teams.\nIn addition, I have a few minor comments and optional suggestions regarding the analyses:\n\nA brief literature review of existing solutions (for example InsilicoDB) appears to be missing in both this manuscript and the main paper.\n\nA challenge of comparing array data from different platforms is that some genes might be captured with varying quality across platforms. It is unclear what was done to identify problematic probe sets or genes. Various R packages (e.g. metaArray) for example calculate Integrative Correlation scores. These scores identify probe sets which behave differently across platforms in terms of co-expressed genes.\n\nAnother challenge is the extensive reuse of specimens and data in public datasets. The authors write that duplicates were identified and removed. As a completely optional suggestion, we recently published the doppelgangR package that automates the identification of duplicates.\n\nIt is unclear if the software can generate more classical meta-analysis visualizations like forest plots.\n\nThe number of different platforms included in the meta-analysis and whether platform was a significant source of heterogeneity could be made clearer.\n\nI probably would have performed the gene set analysis using expression data collapsed to pathways, for example by GSVA, ssGSEA or related newer methods. These methods turn a gene-by-sample matrix into a pathway-by-sample matrix; the same gene-centric methods can be then applied to pathways. 
I am not aware of any existing literature comparing pathway meta-analysis methods and this is thus another optional comment. This might however be a cleaner approach than pooling the mouse and human datasets.\n\nAxis and legend labels sometimes use R variable names (such as \"gene.ratio\") instead of proper annotation (using xlab(), scale_fill_discrete() etc.)\n\nfRMA is in theory better for meta-analyses compared to standard RMA since all datasets then use the same reference pool for normalization. I am however again not aware of a systematic comparison and the impact on meta-analyses.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes", "responses": [] } ]
1
https://f1000research.com/articles/5-2884
https://f1000research.com/articles/5-2880/v1
20 Dec 16
{ "type": "Research Note", "title": "Analysis of distribution of chromatin marks across \"divergence islands\" in three-spined stickleback (Gasterosteus aculeatus)", "authors": [ "Alexey Sokolov", "Svetlana Zhenilo", "Sergey Rastorguev", "Alexander Mazur", "Egor Prokhortchouk", "Svetlana Zhenilo", "Sergey Rastorguev", "Alexander Mazur", "Egor Prokhortchouk" ], "abstract": "The three-spined stickleback (Gasterosteus aculeatus) is a well-known model organism for studying adaptations to water salinity. In this work, we investigate the dynamics of the epigenetic landscape of water salinity adaptation using three chromatin marks: H3K27ac, H3K4me1 and H3K4me3. The choice of marks was determined by the fact that some adaptive genomic loci are situated in gene-free regions, suggesting their regulatory role as enhancers. Histone modifications seem to be a promising mechanism that could regulate such regions. Differences between histone modifications in sea and fresh water - both in genes and intergenic enhancers - may contribute to the epigenetic plasticity of stickleback adaptation. As a result of this study, we found differential chromatin peaks in \"divergence islands\" at enhancer elements and promoters of genes, which are responsible for stress adaptation and homeostasis. However, a full-genome analysis is required to fully understand the mechanism of adaptation to water salinity.", "keywords": [ "divergence islands", "chromatin marks", "Gasterosteus aculeatus" ], "content": "Introduction\n\nThe three-spined stickleback (Gasterosteus aculeatus) is a model organism that can be used to elucidate molecular mechanisms of adaptation to various salinities. In this work, we analyze epigenetic signatures that can be specific for various salinities. 
We study three chromatin marks, H3K27ac, H3K4me1 and H3K4me3, chosen because they label active enhancers and promoters1, in marine and fresh water sticklebacks in their natural habitats, as well as in foreign (change of salinity) environments. These marks were studied in 19 “divergence islands”2, short genomic regions that are highly diverged between marine and fresh water species. In the present study, 17 of the divergence islands overlapped with protein-coding genes relevant to fresh water adaptation. In addition, we report the changes of histone modifications between marine and freshwater fish in promoters of protein-coding genes and enhancers.\n\nMethods\n\nIn this study, we used six marine and six fresh water sticklebacks, which were collected in August 2015 from the White Sea (near the Pertsov White Sea Biological Station of Lomonosov Moscow State University, Murmanskaya Oblast, Russia) and Mashinnoe Lake (located near the village of Chkalovsky, Republic of Karelia, Russia), respectively. The fish were placed for four days in various water tanks: half of the sticklebacks from each group were kept in water of their natural salinity (FF and MM for fresh and marine water, respectively) and the other half were kept in a modified environment (i.e. three fresh water fish were placed in salty water (FM) and three marine fish were placed in fresh water (MF)). The live fish were transported to the laboratory in Moscow.\n\nFor the chromatin immunoprecipitation (ChIP)-seq experiments, gills were collected from all 12 sticklebacks. 
Chromatin was prepared from gills, as described by Cell Signalling (https://www.cellsignal.com/common/content/content.jsp?id=chip-agarose) and ChIP was performed as described by Filion et al.3.\n\nReads were aligned to the gasAcu1 reference genome from the UCSC Genome Browser Gateway (https://genome-euro.ucsc.edu/) by Bowtie2 (http://bowtie-bio.sourceforge.net/bowtie2/index.shtml) with the \"very-sensitive-local\" parameter4,5. For peak calling we used MACS version 1.4.26. For the intersection of peaks with divergence islands and genes we used bedtools version 2.26.07.\n\n\nResults\n\nIn our study, we identified between 7138 and 20828 histone modification peaks (Table 1). We selected histone modification peaks in “divergence islands” regions, which are highly divergent between marine and freshwater populations of sticklebacks2. We observed that the majority (17 out of 19) of the islands showed the same chromatin marks (H3K4me1, H3K4me3 and H3K27ac) in fresh water species in their natural salinity (FF) and fresh water species placed into a marine environment (FM). The same was true for marine water species in their natural salinity (MM) and marine water species placed into a fresh water environment (MF). In addition, the majority of the islands (14 out of 19) showed the same histone modifications between the whole set of marine and fresh water species (MM+MF vs. FF+FM; Table 2).\n\nMM - marine species in natural environment; MF - marine species in fresh water; FF - fresh water species in natural environment; FM - fresh water species in marine water.\n\n“0” - no intersection; “1” – intersection. Regions with differential chromatin peaks are highlighted. MM - marine species in natural environment; MF - marine species in fresh water; FF - fresh water species in natural environment; FM - fresh water species in marine water. 
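The bedtools intersection step used here reduces to a simple interval-overlap test; below is a Python sketch of the presence/absence calls behind Tables 2–3 (the actual analysis used bedtools 2.26.0; the names and coordinates are illustrative):

```python
def peaks_in_islands(peaks, islands):
    """Flag each divergence island as 1 if any peak overlaps it.
    Intervals are half-open [start, end) tuples (chrom, start, end),
    mirroring a `bedtools intersect` presence/absence call."""
    flags = []
    for chrom_i, s_i, e_i in islands:
        hit = any(chrom_p == chrom_i and s_p < e_i and s_i < e_p
                  for chrom_p, s_p, e_p in peaks)
        flags.append(1 if hit else 0)
    return flags
```

For example, a peak at chrI:100-200 overlaps an island at chrI:150-300 but not one at chrI:400-500, yielding flags [1, 0] for those two islands.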
Islands with differential peaks are highlighted in red.\n\nNevertheless, we found that 3 out of 19 “divergence islands” demonstrated differential chromatin marks during short-term adaptation to water salinity in fish placed into foreign environments. Interestingly, one “divergence island” gained H3K4me1 in the promoter of the RPTOR gene in FM. The other two islands gained H3K27ac and lost H3K4me1 in FM, suggesting their role as enhancers for genes outside the island. Also, two islands gained H3K4me3 and H3K4me1 at the STC2 and PNPLA3 genes, respectively, in MF.\n\nFinally, 5 out of 19 islands demonstrated differential histone modifications between fresh water species and marine species placed into fresh water (FF vs MF) (Table 3). In all these cases, MF gained a mark that was absent in FF. For example, H3K4me1 was gained in the LRRC59 and BDH2 genes, which are involved in adaptation to stress and homeostasis8, suggesting that these genes might be activated after the placement of marine fish into a fresh water environment. In addition, 6 islands out of 19 demonstrated differential histone modifications between marine and fresh water species placed in marine water, with both gains and losses of marks (Table 3). Among these, we found that H3K4me3 was gained at STC2 in FM* compared to MM*, H3K4me1 at LRRC59 and BDH2 in MM* compared with FM*, and H3K27ac at RPTOR in MM* compared with FM*.\n\n“0” - no intersection; “1” – intersection. Regions with differential chromatin peaks are highlighted. MM - marine species in natural environment; MF - marine species in fresh water; FF - fresh water species in natural environment; FM - fresh water species in marine water. Islands with differential peaks are highlighted in red.\n\n\nConclusions\n\nIn this study, we analyzed the epigenetic profile of \"divergence islands\" with three chromatin marks, H3K4me1, H3K4me3 and H3K27ac. 
We report differential histone modifications that might be involved in the regulation of promoters and enhancers located in \"divergence islands\", and therefore contribute to adaptation to water salinity. Furthermore, we found differential chromatin peaks at promoters of genes that are responsible for stress adaptation and homeostasis. The results of this study contribute to our understanding of the molecular mechanisms of adaptation to water salinity. However, a genome-wide histone modification analysis is required in order to further understand these mechanisms of adaptation.\n\n\nData availability\n\nFastq files can be found in the SRA archive under accession number SRX2403902 (https://www.ncbi.nlm.nih.gov/sra/?term=SRX2403902).", "appendix": "Author contributions\n\n\n\nSR designed the experiment. SZ and AM prepared ChIP-seq libraries. AS and EP carried out the research. AS wrote the manuscript.\n\n\nCompeting interests\n\n\n\nThe authors declare no conflict of interest.\n\n\nGrant information\n\nThis work was supported by the Russian Science Foundation (RSF; grant #14-24-00175).\n\n\nAcknowledgements\n\nThe authors are grateful to Konstantin G. Skryabin and Yulia Medvedeva (Institute of Bioengineering, Research Center of Biotechnology of the Russian Academy of Sciences) for ongoing support and valuable comments throughout the preparation of the manuscript.\n\n\nReferences\n\nShlyueva D, Stampfel G, Stark A: Transcriptional enhancers: from properties to genome-wide predictions. Nat Rev Genet. 2014; 15(4): 272–286. PubMed Abstract | Publisher Full Text\n\nTerekhanova NV, Logacheva MD, Penin AA, et al.: Fast evolution from precast bricks: genomics of young freshwater populations of threespine stickleback Gasterosteus aculeatus. PLoS Genet. 2014; 10(10): e1004696. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFilion GJ, Zhenilo S, Salozhin S, et al.: A family of human zinc finger proteins that bind methylated DNA and repress transcription. 
Mol Cell Biol. 2006; 26(1): 169–181. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLangmead B, Trapnell C, Pop M, et al.: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3): R25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang Y, Liu T, Meyer CA, et al.: Model-based analysis of ChIP-Seq (MACS). Genome Biol. 2008; 9(9): R137. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuinlan AR, Hall IM: BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010; 26(6): 841–842. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDevireddy LR, Hart DO, Goetz DH, et al.: A mammalian siderophore synthesized by an enzyme with a bacterial homolog involved in enterobactin production. Cell. 2010; 141(6): 1006–1017. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "18627", "date": "03 Jan 2017", "name": "Alexey Ruzov", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nIn my opinion, the results on differential distribution of histone modifications in sea and freshwater three-spined sticklebacks may potentially be interesting. Despite this, I strongly believe that the text of the manuscript currently falls below the minimal standards required for paper indexing.\nSpecifically: the abstract does not describe the main findings of the paper in sufficient detail; the introduction does not provide adequate background information for the study but, instead, mainly repeats the abstract; any Discussion comparing the results with already available literature is missing; some crucial details of the methodologies used (e.g. 
which antibodies have been used for ChIP) are missing from the Methods; some sentences in the text are difficult to understand.\nI recommend that the authors rewrite the text of the manuscript to address these points.", "responses": [] }, { "id": "20811", "date": "20 Mar 2017", "name": "Ilkka Kronholm", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe study by Sokolov et al. investigates changes in chromatin marks among saline and freshwater populations of sticklebacks. While I like the idea of the study and I do appreciate that this manuscript is intended as a short note, the current state of the manuscript is not suitable for indexing. The authors need to rewrite most of the manuscript, as major revision is necessary. I've given some suggestions below.\n\nIntroduction: The introduction needs to be expanded so that the authors also give some background on the system; there is plenty of previous research on the stickleback system, and more of it needs to be cited here. Please also explain the ideas behind the divergence islands, give some information about these chromatin marks, etc.\n\nMethods: More information needs to be given about the chromatin immunoprecipitation experiments and the bioinformatic analysis. Currently the methods do not stand on their own. 
For example, at the moment the manuscript only states that MACS software was used for peak calling, with no information on how the method works or what parameters were used.\n\nResults: I was wondering whether it would be helpful if some of the results were shown as figures. Perhaps at least those peaks where differences were found.\n\nDiscussion: Currently the results are not really discussed at all. The authors should properly discuss their results. The authors found changes in certain genes, but the biological functions of those genes are barely mentioned, nor is it discussed whether these are candidates for explaining adaptation to marine and freshwater environments.", "responses": [] } ]
1
https://f1000research.com/articles/5-2880
https://f1000research.com/articles/5-2873/v1
19 Dec 16
{ "type": "Opinion Article", "title": "Grand challenges for global brain sciences", "authors": [ "Global Brain Workshop 2016 Attendees" ], "abstract": "The next grand challenges for science and society are in the brain sciences. A collection of 60+ scientists from around the world, together with 15+ observers from national, private, and foundation funders, spent two days together discussing the top challenges that we could solve as a global community in the next decade. We settled on three challenges, spanning anatomy, physiology, and medicine. Addressing all three challenges requires novel computational infrastructure. The group proposed the advent of The International Brain Station (TIBS) to address these challenges and launch the brain sciences to the next level of understanding.", "keywords": [ "Neuroscience", "neuroinformatics", "global brain" ], "content": "\n\nUnderstanding the brain and curing its diseases are among the most exciting challenges of our time. Consequently, national, transnational, and private parties are investing billions of dollars (USD). To efficiently join forces, the Global Brain Workshop 2016 was hosted at Johns Hopkins University’s Kavli Neuroscience Discovery Institute on April 7–8. A second workshop, Open Data Ecosystem in Neuroscience, took place July 25–26 in Washington D.C. to continue the discussion, specifically about computational challenges and opportunities. A third conference, Coordinating Global Brain Projects, took place in New York City on September 19th in association with the United Nations General Assembly. So vast are both the challenges and the opportunities that global coordination is crucial.\n\nTo find ways of synergistically studying the brain, the kick-off workshop welcomed over 60 scientists, representing 12 different countries and a wide range of brain science subdisciplines. 
They were joined by 15 observers from various national and international funding organizations, including NIH, NSF, IARPA, the Kavli Foundation, and the Simons Foundation. Participants were engaged weeks before the conference and charged with coming up with ambitious projects that are both feasible and internationally inclusive, on par with the International Space Station (i.e., worthy of a global, decade-long effort). Over the course of 36 hours, scientists discussed, debated, and gathered feedback, ultimately proposing several “grand challenges for global brain sciences” that were refined by working groups. The workshop was covered in a media piece in Science on April 15, 20161.\n\nThe group began with 60+ ideas, each forged independently by one of the scientific participants. Each participant proposed a unique challenge that was designed to meet the following desiderata:\n\n1. Significant: it will yield tangible societal, economic, and medical benefits to the world.\n\n2. Feasible: it can achieve major milestones within 10 years given existing funding opportunities.\n\n3. Inclusive: nations throughout the world can meaningfully contribute to and benefit from each challenge, and the collection of challenges is collectively scientifically diverse.\n\nInterestingly, many of the proposed ideas were similar to one another, and others were complementary. This allowed the group to converge on three grand challenges for global brain sciences, each depending on a common universal resource. As each of these four projects (the three challenges plus the shared resource) gains momentum, we encourage readers to get in touch (details provided below).\n\n\nChallenge 1: What makes our brains unique?\n\nBoth within and across species, brain structure is known to exhibit significant variability across many orders of magnitude in scale, including Anatomy, Biochemistry, Connectivity, Development, and gene Expression (ABCDE). 
It remains mysterious how and why the nervous system tightly regulates certain properties, while allowing others to vary. Understanding the design principles governing variability may hold the key to understanding intelligence and subjective experience, as well as the influence of variability on health and function.\n\nThis grand challenge is a global project to coordinate the construction of comprehensive multiscale maps of the ABCDE’s of multiple brains from multiple species using multiple cognitive and mental health disease models. Within a decade, we expect to have addressed this challenge in brains including, but not limited to, Drosophila, zebrafish, mouse, and marmoset, and to have developed tools to conduct massive neurocartographic analyses. Indeed, many existing datasets will play a crucial role in seeding this project, including data from the Human Brain Project, IARPA’s MICrONS project, and Z-Brain to name a few. The result will be a state-of-the-art “Virtual NeuroZoo” with fully annotated data and analytic tools for analysis and discovery. This virtual NeuroZoo can be utilized by neuroscientists and citizens alike, both as a reference and for educational materials. By incorporating disease models, we explicitly link this challenge with the third challenge. Global discussions around this project are now beginning via the tags “neurostorm” and “neurozoo” at the neuroinformatics discussion forum NeuroStars (https://neurostars.org/).\n\n\nChallenge 2: How does the brain solve complex computational problems?\n\nBrains remain the most computationally advanced machines for a large array of cognitive tasks - whether navigating hazardous terrain, translating languages, conducting surgery, or recognizing emotional states - despite the fact that modern computers can utilize millions of training samples, megawatts of power, and tons of hardware. 
While the ABCDEs establish the “wetware” upon which our brains can solve such computations, to understand the mechanisms we need to measure, manipulate, and model neural activity simultaneously across many spatiotemporal resolutions and scales - including wearables, embedded sensors, and actuators - while animals are exhibiting complex ecological behaviors in naturalistic environments.\n\nThis grand challenge is a global project to investigate a single naturalistic behavior that is ecologically relevant across phylogenies, such as foraging, and measure brain and body properties across spatial, temporal, and genetic scales. The challenge differs from previous efforts in three key ways. First, it requires studying animals in complex and naturalistic environments. Second, it requires coordinated attacks at many different scales by many different investigators while the animals are performing the same complex behaviors. We envision groups of 20–30 investigators all operating together on shared data and experimental design. Third, the richness of the mental repertoire of cognition suggests that deciphering its codes will require many parallel investigations to uncover different facets of brain function. These experiments in turn will produce multiscale models of neural systems with the potential to accomplish computational tasks that no current computer system can perform. Mechanistic studies, guided by theoretical models, will help to ask how perturbations of those systems lead to aberrant function, linking this challenge with the next one. Global discussions around this project are now beginning via the tags “neurostorm“ and “GlobalBrainLab” at NeuroStars.\n\n\nChallenge 3: How can we augment clinical decision-making to prevent disease and restore brain function?\n\nPsychiatric and neurological illnesses levy enormous burdens upon humanity: impairment, suffering, financial costs, and loss of productivity. 
Despite a growing awareness of the challenges, clinicians consistently battle the lack of objective tests to guide clinical decision-making (e.g., diagnosis, selection of treatments, prognosis). Compounding these limitations are societal stigmas regarding mental illness that increase the suffering of patients and their families. The ABCDEs of neurobiological variability, when coupled with multiscale mechanistic models of cognition, will provide new approaches to neurobiologically informed clinical decision-making.\n\nThis grand challenge is a global project to transform clinical decision-making by incorporating neural mechanisms of dysfunction. This will require collecting, organizing, and analyzing human and non-human anatomical and functional data. These data (such as ADNI and ADHD-200), and the tools developed to explore and discover novel treatment therapies, will be the foundation upon which the next decades of experiments and clinical decisions will be based. The distributed and multimodal nature of these datasets further motivates the need for an all-purpose computational platform, upon which models of disease can be developed, deployed, tested, and refined.\n\n\nA universal resource\n\nAll three of the grand challenges for global brain sciences represent severe methodological challenges, both technological and computational. The technological developments required for each of the challenges are non-overlapping. In contrast, regardless of the nature of the scientific questions or data modalities involved, each project will require computational capabilities including collecting, storing, exploring, analyzing, modeling, and discovering data. 
Although neuroscience has developed a large number of computational tools to deal with existing datasets (for example, resources in http://www.nitrc.org/), the datasets proposed here bring with them a whole suite of new challenges, including scale and complexity.\n\nThis resource would be a comprehensive computational platform, deployed in the cloud, that will provide web services for all the current “pain points” in daily neuroscience practice associated with big data. This resource will realize a new era of brain sciences, one in which the bottlenecks to discovery transition away from data collection and processing to data enriching, exploring, and modeling. While science has always benefitted from standing on the shoulders of giants, this will enable science to stand on the shoulders of everyone. Today, essentially every practicing neuroscientist’s productivity is limited by computational resources, by access to data or algorithms, or by the struggle of determining which data and algorithms are best suited to answer the most pressing questions of our generation. This resource will create a future in which those limitations will feel as archaic as fitting the data with paper and pencil feels today. For further details, see the article by the Neuro Cloud Consortium called “To the Cloud! A Grassroots Proposal to Accelerate Brain Science Discovery”2.\n\n\nSocietal considerations\n\nEach nation affords different opportunities and restrictions, owing to ethical, policy, and cultural considerations. Because these grand challenges are inherently inclusive, manifesting them will require understanding and mitigating issues that arise in cross-cultural endeavors. Indeed, addressing the vast diversity of partnerships in such an endeavor is a challenge in itself. We therefore recommend the following. First, form a ‘cultural sensitivity committee’ to consider and investigate potentially sensitive issues. 
Second, bolstered by their research, establish cross-cultural collaboration education materials, including written guidelines and videos, which will be recommended to all participating scientists. Third, to deepen the understanding of transnational collaborations, develop trainee exchange programs in which participating trainees will spend six months to a year working and training in a foreign country. This will also facilitate cross-cultural knowledge dissemination and cross-fertilization. Fourth, require frequent assessments to ensure maintenance of cultural sensitivities. These assessments will feed back into the educational material and be used to modify the exchange programs.\n\n\nNext steps\n\nCrucial to the success of this endeavor is a sequence of actionable steps that the community can follow. Because we are not proposing any additional funding, realizing the eventual goals of these grand challenges will rely on marshalling existing funds. Due to the incoming leadership changes, both on national and transnational levels, quick action is of the essence. Therefore, we have taken the following steps:\n\nWe have created a webpage, http://neurox.io, containing a bibliography of reports that resulted from this conference, as well as a list of all scientific participants and observers who attended the original brainstorming meeting in April that led to this document. We will also be monitoring comments on NeuroStars (https://neurostars.org/), a community forum for neuroscience and neuroinformatics related queries, with the tag “neurostorm” for further discussion. Finally, we held an outpost at the NeuroData booth #4126 at the 2016 Society for Neuroscience conference (https://www.sfn.org/annual-meeting/neuroscience-2016) to discuss these issues further. We were encouraged by visitors who felt inspired by this idea to join the discussion and engage.", "appendix": "Author contributions\n\n\n\nJTV and BM organized the event and the writing of the manuscript. 
All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nWe would like to thank the National Science Foundation (1637376) and the Kavli foundation for providing JTV with financial and organizational support.\n\n\n*Global Brain Workshop 2016 Attendees\n\nJoshua T. Vogelstein1,27,28,29,30,31, Katrin Amunts7,8, Andreas Andreou30, Dora Angelaki32, Giorgio A. Ascoli33, Cori Bargmann34, Randal Burns28,29, Corrado Cali11, Frances Chance35, George Church36, Hollis Cline37, Todd Coleman38, Stephanie de La Rochefoucauld39, Winfried Denk40, Ana Belén Elgoyhen41, Ralph Etienne Cummings42, Alan Evans5, Kenneth Harris43, Michael Hausser3, Sean Hill9, Samuel Inverso44, Chad Jackson45, Viren Jain46, Rob Kass47, Bobby Kasthuri13, Adam Kepecs15, Gregory Kiar1,27, Konrad Kording6, Sandhya P. Koushika10, John Krakauer48, Story Landis49, Jeff Layton50, Qingming Luo51, Adam Marblestone52, David Markowitz26, Justin McArthur53, Brett Mensh2,4, Michael P. Milham19, Partha Mitra15, Pedja Neskovic54, Miguel Nicolelis55, Richard O'Brien56, Aude Oliva57, Gergo Orban58, Hanchuan Peng14, Eric Perlman27, Marina Picciotto59, Mu-Ming Poo17, Jean-Baptiste Poline18, Alexandre Pouget60, Sridhar Raghavachari61, Jane Roskams14, Alyssa Picchini Schaffer20, Terry Sejnowski62, Friedrich T. Sommer63, Nelson Spruston4, Larry Swanson64, Arthur Toga65, R. Jacob Vogelstein, Anthony Zador15, Richard Huganir30,31, Michael I. Miller1,27,31\n\n1. Department of Biomedical Engineering, Institute for Computational Medicine, Johns Hopkins University, Baltimore, MD, USA\n\n2. Optimize Science, Mill Valley, CA USA; UCSF Kavli Institute for Fundamental Neuroscience, San Francisco, CA, USA\n\n3. Department of Physiology, University College London, London, UK\n\n4. Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA\n\n5. 
Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada\n\n6. Physical Medicine and Rehabilitation, Physiology, and Applied Mathematics, and Biomedical Engineering, Northwestern University, Chicago, IL, USA\n\n7. Institute for Neuroscience and Medicine, INM-1, Research Centre Juelich, Germany, C. and O. Vogt Institute for Brain Research, Forschungszentrum Jülich; University Hospital Duesseldorf, University Duesseldorf, Germany\n\n8. Human Brain Project, EPFL, Geneva, Switzerland\n\n9. Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland\n\n10. Department of Biological Sciences, Tata Institute of Fundamental Research, Navy Nagar, Colaba, Mumbai, India\n\n11. Biological and Environmental Science and Engineering, KAUST, Thuwal 23955-6900, Saudi Arabia\n\n12. Cuban Neuroscience Center, 190 e / 25 and 27, Cubanacan, Playa, Havana, CP 11600; University of Electronic Science and Technology of China, Shahe Campus: No. 4, Section 2, North Jianshe Road, 610054, Chengdu, Sichuan, P.R. China\n\n13. Argonne National Laboratory, Argonne, IL, USA\n\n14. Allen Institute for Brain Science, Seattle, WA, USA\n\n15. Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA\n\n16. Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA\n\n17. Institute of Neuroscience, CAS Center for Brain Science, 320 Yue Yang Road, Shanghai 200031, P.R. China; Intelligence Technology, Chinese Academy of Sciences, 319 Yueyang Road, Shanghai 200031, P.R. China\n\n18. Henry H. Wheeler Jr. Brain Imaging Center, Helen Wills Neuroscience Institute, 188 Li Ka Shing Center for Biomedical and Health Sciences, Henry H. Wheeler, Jr. Brain Imaging Center, Suite B107, University of California, Berkeley, CA 94720, USA\n\n19. Center for the Developing Brain, Child Mind Institute, New York, NY; Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA\n\n20. Simons Collaboration on the Global Brain, Simons Foundation, New York, NY, USA\n\n21. 
Israel Brain Technologies, Hakfar Hayarok, Ramat Hasharon, Israel\n\n22. Department of Physiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, Japan; RIKEN Brain Science Institute, Laboratory for Marmoset Neural Architecture, 2-1 Hirosawa, Wako, Saitama, Japan\n\n23. Mind Research Network, Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM, USA\n\n24. The Kavli Foundation, Oxnard, CA, USA\n\n25. Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA\n\n26. Intelligence Advanced Research Projects Activity (IARPA), Maryland Square Research Park, Riverdale Park, MD, USA\n\n27. Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA\n\n28. Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA\n\n29. Institute for Data Intensive Engineering and Sciences, Johns Hopkins University, Baltimore, MD, USA\n\n30. Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA\n\n31. Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA\n\n32. Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA\n\n33. Department of Molecular Neuroscience, George Mason University, Fairfax, VA, USA\n\n34. Howard Hughes Medical Institute, Rockefeller University, New York, NY, USA\n\n35. Sandia National Laboratories, Albuquerque, NM, USA\n\n36. Harvard Medical School, Harvard University, Boston, MA, USA\n\n37. Department of Molecular and Cellular Neuroscience, The Scripps Research Institute, La Jolla, CA, USA\n\n38. Department of Bioengineering, University of California, San Diego, CA, USA\n\n39. International Brain Research Organization (IBRO), Paris, France\n\n40. Max Planck Institute of Neurobiology, Martinsried, Germany\n\n41. Molecular Biology and Genetic Engineering Institute, CONICET, Argentina\n\n42. Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD, USA\n\n43. 
Department of Quantitative Neuroscience, University College London, London, England\n\n44. Department of Genetics, Harvard University, Boston, MA, USA\n\n45. U.S. Department of State, Washington D.C., USA\n\n46. Google, Mountain View, CA, USA\n\n47. Department of Statistics, Carnegie Mellon University, Pittsburgh, PA, USA\n\n48. Department of Neurology, Johns Hopkins Hospital, Baltimore, MD, USA\n\n49. National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health, Bethesda, MD, USA\n\n50. Amazon Web Services, Atlanta, GA, USA\n\n51. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan, 430074, China\n\n52. MIT Media Lab, Cambridge, MA, USA\n\n53. Department of Neurology, Johns Hopkins Hospital, Baltimore, MD, USA\n\n54. Office of Naval Research, Arlington, VA, USA\n\n55. Duke Institute for Brain Sciences, Duke University, Durham, NC, USA\n\n56. Department of Neurology, Duke University School of Medicine, Durham, NC, USA\n\n57. National Science Foundation, Arlington, VA, USA\n\n58. Department of Theoretical Physics, MTA Wigner Research Centre for Physics, Budapest, Hungary\n\n59. Department of Psychiatry, Yale University, New Haven, CT, USA\n\n60. Neuroscience Center, University of Geneva, Geneva, Switzerland\n\n61. National Science Foundation, Arlington, VA, USA\n\n62. Salk Institute for Biological Studies, La Jolla, CA, USA\n\n63. Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA, USA\n\n64. Department of Biological Sciences, University of Southern California, Los Angeles, CA, USA\n\n65. Institute for Neuroimaging and Informatics, University of Southern California, Los Angeles, CA, USA\n\n\nReferences\n\nUnderwood E: NEUROSCIENCE. International brain projects proposed. Science. 2016; 352(6283): 277–278. PubMed Abstract | Publisher Full Text\n\nNeuro Cloud Consortium: To the Cloud! 
A Grassroots Proposal to Accelerate Brain Science Discovery. Neuron. 2016; 92(3): 622–627. PubMed Abstract | Publisher Full Text" }
[ { "id": "19358", "date": "20 Jan 2017", "name": "Stephen J. Eglen", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis opinion article lists three grand challenges for global brain sciences. It concisely summarises discussions from workshops held in 2016 to outline collaborative challenges for brain sciences. I found the paper interesting to read as I had not heard about these initiatives before, and I think others in the field would also find them of interest.\nAs an opinion piece, there is no original research contained in this piece that requires technical evaluation.\nI do have some comments that I would like to see responses to before recommending the article for indexing.\nThe abstract is not informative enough. I think it should describe the three scientific challenges (perhaps a sentence each) so that those readers just seeing the abstract on Pubmed will see the challenges.\n\nThe last sentence of the abstract notes that the group proposed the \"TIBS\", but this is not elaborated on in the paper. Is the TIBS the same as the GlobalBrainLab?\n\nVery similar earlier versions of this paper are already available, on the front page http://brainx.io/ and https://arxiv.org/pdf/1608.06548v3.pdf -- I think this should be noted somewhere to help link up the literature.\n\nA key point of this paper seems to be to communicate the grand challenges to a wide audience. 
Tags (neurostorm, neurozoo, GlobalBrainLab) are listed for people to use on a website (neurostars.org), but when I just searched, I could find no hits for either neurozoo or GlobalBrainLab (there is one hit for neurostorm; I am aware however that neurostars lost large amounts of data last year, so perhaps earlier discussions have vanished). It seems a bit premature to say that discussions are \"now beginning\". Perhaps simply say that you encourage people to go to neurostars and use those tags if they wish to discuss them?\n\nNo tag for discussing grand challenge 3 has been listed.\n\nThe paper describes three challenges, but page 1 describes \"As each of these four projects gain momentum\". What is the fourth project? Is it the cloud-computing proposal described in reference 1 (the NeuroView article)? If so, it looks like since the workshops in Summer 2016 there has been sufficient momentum gained in this area to write a large article on this challenge. What has happened in the last six+ months to the other three challenges -- are people actively working on them? Having a bit more up-to-date information on the progress since the workshop would help the reader.\n\nHow might these global challenges interact with the research agendas of other large scale initiatives? There is brief mention of other large scale projects in challenge 1, but I see no strategy for ensuring how these large scale initiatives (Human Brain Project, and other National Brain projects) can work together with these challenges. As recognised in the article, there is no extra funding yet for these challenges, so interacting with these other initiatives is likely to be required (Huang and Luo, 2015). For my part, such coordination of large scale initiatives and challenges might best be led via the INCF (www.incf.org), as otherwise we might end up with the creation of another INCF. 
(Full disclosure: I am co-chair of the UK neuroinformatics node, which is a national node of the INCF.)\n\nI'm surprised to see only two references in the paper; at the very least I think Huang and Luo should be cited to give the reader some context of other large-scale initiatives. References to other projects would also be appropriate (e.g. ADNI, ADHD-200, Z-Brain, MICrONS).", "responses": [] }, { "id": "20967", "date": "14 Mar 2017", "name": "Sten Grillner", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nI have been asked to comment on the opinion article ”Grand challenges for global brain sciences”. I was not present in the meeting at Johns Hopkins in April 2016, which provides the basis for the report, but at the follow-up meeting at Rockefeller in September 2016, and in the meeting at United Nations in February 2017.\n\nThe reason for this global initiative is the fact that great investments in neuroscience have been made through primarily the Human Brain Project of the EU, the US Brain Initiative, and the Japanese Brain Mind Initiative. In addition, there are advanced plans in a similar vein in several countries, including China, Korea, and Australia. The report argues for collaboration between the different initiatives, and it would seem clear that one should strive for complementarity between the initiatives. 
Essentially, what would seem important is to create a collaborative spirit, rather than a competitive mindset.\n\nThe conclusion as presented is that the different projects chosen should be feasible in a 10-year perspective, be significant in the context of basic and clinical neuroscience, and be inclusive, that is, involve as many research communities worldwide as possible. With a global perspective it is clear that the human and infrastructure capabilities vary markedly in different parts of the world. It may be worthwhile to consider that some aspects of neuroscience (like computational neuroscience) can be conducted even under conditions when advanced experimental equipment is not available.\n\nThe members of the Johns Hopkins meeting ended up supporting three main challenges (original text in italics), as summarized below:\n\n”Challenge 1: What makes our brains unique? Both within and across species, brain structure is known to exhibit significant variability across many orders of magnitude in scale, including Anatomy, Biochemistry, Connectivity, Development, and gene Expression (ABCDE). It remains mysterious how and why the nervous system tightly regulates certain properties, while allowing others to vary. Understanding the design principles governing variability may hold the key to understanding intelligence and subjective experience, as well as the influence of variability on health and function.”\nI suppose the title implies that the question of what makes the human brain unique in comparison with that of other vertebrates should be in focus. What appears central in this context is the capacity to acquire language, because this allows us not only to interact regarding what goes on at a given moment, but also to discuss what happened many years ago, or different plans for the immediate or distant future. 
This possibility is something that other primates and mammals cannot enjoy (in some species, there is a complex behavioral repertoire for communication that can be individualized, but it is far from the human language). We can, however, assume that the neural circuits involved in motor learning in mammals have been tinkered with to provide this novel skill to produce the different words as in speech, and not unlikely there may have been a gradual development of this skill on the evolutionary line from the chimps to humans (Cro-Magnon). The language capability has been extended through the ability to transmit information in the written form, a critical addition for transmission of culture. Another aspect is the human cognitive ability to reason, which is unmatched among vertebrates. The many different areas mentioned in the quote above seem to include almost any type of neuroscience, rather than what makes the human brain unique. I believe focus is needed.\n\n“Challenge 2: How does the brain solve complex computational problems? Brains remain the most computationally advanced machines for a large array of cognitive tasks - whether navigating hazardous terrain, translating languages, conducting surgery, or recognizing emotional states - despite the fact that modern computers can utilize millions of training samples, megawatts of power, and tons of hardware.”\nThe human brain is unique in many aspects, but at the same time, we must realize that many of our fellow vertebrates are much more skillful in a variety of tasks. Consider for instance a bird navigating back to its nest of last year in the Northern hemisphere starting near the South Pole, the motor skills of a cheetah hunting for a prey, or an owl hunting down a mouse when it is pitch dark, a monkey swinging itself from branch to branch in an arboreal environment, an eagle identifying a prey from very high altitude, or a dog sniffing for detecting explosives. 
In understanding the neural bases of these complex behavioral skills, a variety of animal models will be useful. They are interesting in their own right, but they may also unravel the neural bases of similar mechanisms in humans. What may characterize the human nervous system is the versatility in inventing novel skills like those of a piano virtuoso or juggler or just writing in longhand. The astounding energy efficiency is another unexplained fact – the brain, with its billions of cells, demands only some 30 watts or so.\n\n”Challenge 3: How can we augment clinical decision-making to prevent disease and restore brain function? Psychiatric and neurological illnesses levy enormous burdens upon humanity: impairment, suffering, financial costs, and loss of productivity.”\nClearly, the whole medical area is important, and no less than one third of the costs for health care in Europe are due to diseases of the brain, whether psychiatric, neurological, or geriatric in nature. This entire field is of course of crucial importance, and any solution to the many chronic diseases will of course be a gift to mankind. Consider for instance the possibility that we would find a therapy for Alzheimer’s in an early stage, or treatment of MS or Parkinson’s! However, for this challenge no. 3 too, there is a lack of focus.\n\nTo summarize, I find these different challenges to represent very important aspects of basic or clinical neuroscience, but on the other hand, the areas are formulated in such broad general terms that they actually represent the larger part of the current research panorama. This would mean that the initiatives would primarily provide additional research support for neuroscience in general.\nProgress results, however, often from focused initiatives regarding particular functions of the brain or disease mechanisms. The current initiatives in Europe, the US and Japan have so far mainly focused on developing tools and infrastructure for research. 
This can be important in itself, but it is only when these tools are used for research that scientific progress is made. It would therefore be important, in the reviewer’s mind, that a set of crucial and solvable scientific problems come into focus for the Brain initiatives in a ten-year perspective.", "responses": [ { "c_id": "3437", "date": "19 Feb 2018", "name": "joshua vogelstein", "role": "Author Response", "response": "thank you very much for your feedback.  we entirely agree with you.  for reference, we do not plan to further update this manuscript. thank you." } ] } ]
1
https://f1000research.com/articles/5-2873
https://f1000research.com/articles/5-758/v1
26 Apr 16
{ "type": "Research Note", "title": "Use of cidofovir in pediatric patients with adenovirus infection", "authors": [ "Lakshmi Ganapathi", "Alana Arnold", "Sarah Jones", "Al Patterson", "Dionne Graham", "Marvin Harper", "Ofer Levy" ], "abstract": "Background: Adenoviruses contribute to morbidity and mortality among immunocompromised pediatric patients including stem cell and solid organ transplant recipients. Cidofovir (CDV), an antiviral compound approved by the FDA in 1996, is used for treatment of adenoviral (ADV) infections in immunocompromised patients despite concern of potential nephrotoxicity. Methods: We conducted a retrospective 5-year review at Boston Children’s Hospital of 16 patients (mean age = 6.5 years) receiving 19 courses of CDV. During therapy all pertinent data elements were reviewed to characterize potential response to therapy and incidence of renal dysfunction. Results: Of the 19 CDV courses prescribed, 16 courses (84%) were in patients who had a positive blood ADV polymerase chain reaction (PCR) alone or in combination with positive ADV PCR/direct immunofluorescence assay (DFA) at another site. Respiratory symptoms with or without pneumonia were the most common presentation (10/19, 53%). In the majority of blood-positive courses (10/16, 63%), viral clearance was also accompanied by clinical response. This was not the case in four courses where patients expired despite viral clearance, including one in which death was directly attributable to adenovirus. Reversible renal dysfunction was observed during the use of CDV. Conclusions: CDV appeared safe and reasonably tolerated for treatment of ADV in this pediatric population and was associated with viral response and clinical improvement in the majority of patients, but reversible renal dysfunction was a side effect. 
Further studies of the efficacy of CDV for immunocompromised children with ADV infection are warranted.", "keywords": [ "cidofovir", "anti-viral", "pediatric", "adenovirus", "stem-cell", "solid organ", "immunocompromised" ], "content": "Introduction\n\nAdenovirus (ADV) is a common cause of respiratory infection in childhood. ADV infections are usually self-limited and asymptomatic in the immunocompetent host but have been recognized as a cause of significant morbidity and mortality in immunocompromised pediatric patients such as recipients of hematopoietic stem cell transplant (HSCT) and solid organ transplant (SOT)1. In these patients, ADV is an opportunistic pathogen that may lead to severe localized disease including pneumonia/pneumonitis, hepatitis, hemorrhagic cystitis or disseminated disease with multiorgan failure2–4. Case fatality rates in immunocompromised patients with ADV pneumonia have been reported to be as high as 60%5. Currently, there is no FDA-labeled product available for treatment of ADV infection though several agents have been administered for this indication including ribavirin6,7, ganciclovir8, vidarabine9,10, immune globulin11 and cidofovir12–21.\n\nCidofovir (CDV), a nucleoside phosphonate analogue, is a broad-spectrum antiviral agent that inhibits viral DNA polymerase and has broad activity in vitro against multiple viruses including all serotypes of ADV22,23. CDV has an FDA indication for the treatment of cytomegalovirus (CMV) retinitis in patients with AIDS. Although this drug does not have an FDA indication for treating ADV, there is evidence of in vivo efficacy of CDV against ADV12,14. While CDV at a standard dose of 5 mg/kg has been reported as primary therapy for treatment of ADV infection in pediatric and adult hematopoietic stem cell transplantation (HSCT) patients12,21, concern exists regarding potential nephrotoxicity. 
These associated adverse effects have limited the use of CDV for treatment of ADV infections in pediatric patients. To minimize potential toxicity of CDV, modified dosing regimens, such as 1 mg/kg three times a week, have been utilized14.\n\nLimited information regarding safety and efficacy of CDV in pediatric patients prompted us to review prior published studies in the literature and conduct a retrospective review of our inpatient use of CDV at Boston Children’s Hospital (BCH).\n\n\nMethods\n\nFollowing Institutional Review Board (IRB) approval (IRB-P00015576), a retrospective chart review was conducted for all hospitalized patients at Boston Children’s Hospital (BCH), who were prescribed CDV for adenovirus infection from January 2006 through December 2010. The following data were collected: (1) demographic information, (2) underlying disease state, (3) type of transplant, (4) duration of cidofovir therapy, (6) serum creatinine (SCr) (baseline, peak during therapy, and level up to 2 weeks post last dose), (7) concomitant nephrotoxins prescribed (acyclovir, amikacin, cyclosporine, foscarnet, gentamicin, liposomal amphotericin B, tacrolimus, tobramycin, vancomycin, and intravenous contrast media), (8) sites of ADV detection by viral direct fluorescent antibody (DFA), nucleic acid test, and/or culture, (9) viral quantitative PCR surveillance in blood and other sites of infection (all specimens were tested at least weekly before, during and up to two weeks post last dose of CDV to evaluate for changes in viral load, with a minimum of three serial values being obtained before, during and at end of therapy); (10) symptoms of infection, and clinical course including response to therapy, (11) concomitant reduction of immunosuppression and (12) mortality and cause(s) of mortality. 
All blood sample testing for adenovirus quantitative PCR was performed at the Boston Children’s Hospital Virology Laboratory using our laboratory-developed test, the Argene adenovirus assay (bioMerieux, Cambridge, MA). The Argene adenovirus assay contains primers and probes selective for a 138 base pair (bp) sequence in the Hexon gene of the adenovirus. Using a 5’ nuclease assay, viral DNA is detected using the primers and fluorescent probes from the Argene assay kit by means of real-time PCR in a Cepheid SmartCycler (Cepheid, Sunnyvale, CA).\n\nAs there is no accepted definition for ADV infection or disease, we adopted definitions used in prior studies13. Specifically, definite adenovirus disease was defined as follows: Non-gastrointestinal locations: Symptoms and signs from the appropriate organ combined with histopathological documentation of adenovirus and/or adenovirus detection by culture, antigen test, or nucleic acid test from biopsy specimens (liver or lung), BAL fluid, or cerebrospinal fluid and without another identifiable cause; Gastrointestinal location: Symptoms together with detection of adenovirus from biopsy material by culture, antigen test, or nucleic acid test.\n\nProbable adenovirus disease was defined as follows: Gastrointestinal tract: Detection of adenovirus in stool by culture, antigen test, or nucleic acid test together with symptoms; Urinary tract: Symptoms of dysuria or hematuria combined with detection of adenovirus by culture, antigen test, or nucleic acid test without another identifiable cause; and Respiratory tract: Symptoms and signs of pneumonia/pneumonitis combined with detection of adenovirus by culture, antigen test, or nucleic acid test without another identifiable cause.\n\nAsymptomatic adenovirus infection was defined as follows: any detection of adenovirus in an asymptomatic patient from stool, blood, urine, or upper airway specimens by viral culture, antigen tests, or PCR.\n\nAdenoviremia was defined as the detection of >100 copies of ADV/mL of 
blood (this being the lower limit of detection of the assay). Viral clearance was defined as an ADV viral load of <100 copies in blood by quantitative PCR at the end of therapy. Viral response was defined as decrease in viremia by at least one log-fold. Clinical resolution was defined as resolution of symptoms and/or signs of infection. Renal dysfunction was defined as a ≥50% increase in SCr from baseline during the course of CDV therapy. The peak SCr during therapy was used to calculate the number of patients who experienced renal dysfunction.\n\nStatistical analyses employed Prism 5 for Windows Version 5.04 (GraphPad Software Inc, CA). The Mann-Whitney test was used to assess risk of renal dysfunction. Trends in adenoviremia including pre-treatment viral load, changes in viral load during therapy, and post-treatment viral load were graphed.\n\n\nResults\n\nFrom January 1, 2006 to December 31, 2010, a total of 16 pediatric patients received CDV for adenovirus infection at our hospital. These 16 patients received 19 courses (three patients received two separate CDV courses). The standard CDV dose of 5 mg/kg weekly was used in all courses unless there was concern for renal dysfunction at the start of therapy, in which case a dosing regimen of 1 mg/kg three times a week was used. Patient demographics, primary diagnosis, clinical symptoms and course, and sites of adenovirus detection appear in Table 1. Patient age ranged from 0.75–20 years (mean 6.5 years). Seven (44%) patients were male. Underlying primary diagnosis included 8 (50%) HSCT (1 autologous), 4 (25%) SOT, 2 (12.5%) leukemia, and 2 (12.5%) defined as other. 
Duration of CDV therapy ranged from 5–82 days (median 33.5 days).\n\nThe age and gender distribution, primary diagnosis, sites of adenovirus detection, symptoms and clinical course of the patients included in the study are shown.\n\nALL, acute lymphoblastic leukemia; AML, acute myeloid leukemia; CID, congenital immunodeficiency; CML, chronic myelogenous leukemia; HLH, hemophagocytic lymphohistiocytosis; MRD, matched related donor; MUD, matched unrelated donor; Pt #, patient number; SCID, severe combined immunodeficiency disorder; SCT, stem cell transplant; UD, unrelated donor; Yrs, years; Site of adenovirus detection: S, stool, Sp, sputum, B, blood, BAL, bronchoalveolar lavage, R, respiratory DFA, CSF, cerebrospinal fluid, U, Urine, PF, Pericardial Fluid; *indicates patient expired\n\nOf the 19 courses prescribed (Table 1), two courses were prescribed in a patient with definite adenovirus disease of the gastrointestinal tract, 15 courses were prescribed in patients with probable disease and two courses were prescribed in patients with asymptomatic infection. Sixteen courses (84%) were in patients who had a positive blood ADV PCR either in whole blood only or in combination with positive ADV PCR of sputum, stool, urine, broncho-alveolar lavage (BAL) fluid, pericardial fluid or positive sputum adenoviral DFA sample. Respiratory symptoms were the most common presentation in 10 courses (53%) of which six courses were prescribed for patients with respiratory symptoms and radiological evidence of pneumonia. Two courses were prescribed in patients who presented with prolonged fevers; four courses were prescribed in patients who had worsening diarrhea and colitis, two of which were biopsy proven adenovirus infection; four courses were prescribed in patients with viral sepsis with or without pneumonia; and two courses were administered in patients with severe hemorrhagic cystitis. 
Two courses were prescribed in patients with asymptomatic respiratory tract infection and asymptomatic gastrointestinal infection respectively.\n\nWe further examined the 16 blood-positive courses to assess trends in ADV viral load pre-, during and post- CDV therapy (Figure 1). A quantitative reduction in viral load was seen in 15 blood positive courses (94%) with viral clearance achieved in 14 (88%). Of note, all solid organ transplant recipients treated with CDV also had concomitant decrease in their immunosuppression. A single patient (Patient 6) did not demonstrate viral response to therapy and expired. The majority of adenovirus blood-positive CDV courses (10/16, 63%) were associated with clinical improvement with viral clearance, however this was not the case in four courses. Patients 7, 10, 11 and 16 expired despite demonstrating viral clearance. Patients 6, 7 and 16 had multiple other co-infections. Patients 11 and 16 developed severe hemorrhagic cystitis. Patient 11 experienced significant complications of hemorrhagic cystitis including urinary tract obstruction, renal failure and bladder perforation. Patient 16 also had concomitant BK Polyoma virus detected in the urine.\n\nViral loads of 16 patients with quantitative blood adenovirus PCR treated with cidofovir, are shown. Day of therapy ≤0 denotes pre-treatment viral loads. Up to two post-treatment values are shown where available and informative. Each patient’s individual treatment duration is shown in the legend. + denotes patient expired.\n\nEach patient’s medication profile was assessed to determine the number of additional nephrotoxic agents concomitantly prescribed during CDV therapy (from Day 1 to 7 days post last CDV dose). All 19 courses prescribed had at least one additional nephrotoxic agent prescribed during CDV therapy (Table 2). 
Four courses (21%) had only one additional nephrotoxic medication prescribed, five courses (26%) had two such medications prescribed, six courses (32%) had three prescribed, two courses (11%) had four prescribed, and two courses (11%) had five prescribed.\n\nAbsolute values for serum creatinine in mg/dL for each patient are represented pre-treatment, during treatment (peak serum creatinine), and post-treatment. Additional nephrotoxic agents that each patient received are also represented.\n\nPt #, patient number; Cr, creatinine; all creatinine values are in mg/dL.\n\nAdministration of CDV was significantly associated with occurrence of renal dysfunction when comparing the peak Cr measured during CDV therapy to the baseline serum Cr (p=0.0016). Eleven courses (58%) were associated with development of renal dysfunction. Cr increased by a mean of ~50% from baseline during CDV therapy (Figure 2). Of the courses with elevation in serum creatinine, 64% demonstrated return to pre-treatment creatinine levels following cessation of CDV therapy. There was no statistically significant difference when assessing for increased risk of renal dysfunction if patients received ≤ 1 additional nephrotoxic agent or ≥ 2 additional nephrotoxic agents.\n\nThe y axis represents change in serum creatinine (in percentage) compared to pre-treatment serum creatinine. During therapy, serum creatinine increased by a mean of 50%. *** indicates p<0.01; NS indicates change is not significant.\n\n\nDiscussion\n\nIn this retrospective review of patients treated with CDV for adenovirus infection at our hospital during a 5-year period, we assessed the safety and potential efficacy of the medication in pediatric patients. Our review yielded a case series of 16 patients. 
While the number of patients is modest, this series adds to the existing literature describing the use of CDV in pediatric recipients of HSCT, SOT and chemotherapy for oncologic diagnoses (Table 3).\n\nPublished studies in the literature describing use of cidofovir in pediatric HSCT and SOT recipients are summarized. Toxicities and clinical outcomes reported in each study are highlighted.\n\nSimilar to other studies, the majority of our patients had received an HSCT or had an oncologic diagnosis and received chemotherapy. We identified eight publications describing the use of CDV for adenovirus infection in the setting of HSCT or oncologic diagnoses treated with chemotherapy (Table 3). Three of these studies14,17,27 reported viral clearance in 89–100% of their patients. We observed similar rates of viral clearance (88%), but this was not consistently associated with clinical improvement.\n\nThere are very few reports on the use of CDV for adenovirus infection in pediatric SOT recipients; we identified four publications, each describing one to four children, all of whom had received liver or lung transplants16,20,24,25. Doan et al.25 described children who had received lung transplants with reported viral clearance in three of their four patients. Our case series contributes patients who received several types of SOT including lung, heart, combined kidney and liver, and multi-visceral transplants. All patients with SOT in our series demonstrated viral clearance as well as resolution of symptoms, which may have reflected both the antiviral effect of CDV and reduced immunosuppression.\n\nTwo-thirds of our patients experienced resolution of their symptoms and had an overall favorable clinical course with recovery. 
However, one-third died, all of whom were stem cell transplant recipients. With the exception of one patient, it is unclear whether adenovirus was the direct cause of mortality in these patients. Our observations are consistent with what has been reported in the literature pertaining to outcomes in stem cell transplant recipients with adenovirus infections who have been treated with CDV9,12,14,18,23,27. Among SCT patients, mortality remains high (10%–70%) even when clearance from blood is seen.\n\nIn our case series, renal dysfunction was common during CDV therapy, with patients experiencing an average 50% increase of serum creatinine from their baseline. However, renal dysfunction was transient in the majority of patients, with serum creatinine returning to baseline after cessation of CDV therapy. While some studies have reported no toxicities related to the use of CDV14,16,17,19,25, the transient nature of nephrotoxicity observed has been reported by other studies20,24. We were unable to detect any increased risk of nephrotoxicity associated with concomitant administration of additional nephrotoxic agents, but this may reflect our small number of study participants.\n\nOur study has several limitations. Most notably, the small number of patients precluded evaluation of other factors that may impact infection resolution, such as immunosuppressive regimens, and additional factors that may impact degree of renal dysfunction. Nevertheless, our study adds to the limited reported literature of pediatric ADV patients treated with CDV.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for Figure 1, 10.5256/f1000research.8374.d11732128\n\nF1000Research: Dataset 2. Raw data for Figure 2, 10.5256/f1000research.8374.d11732229", "appendix": "Author contributions\n\n\n\nOL conceived the study. LG, AA and SJ collected data and prepared the first draft of the manuscript. LG, AP and DG performed data analysis. MH contributed to the preparation of the manuscript. 
All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nOL’s laboratory is supported by a Boston Children’s Hospital Department of Medicine award to the Precision Vaccines Program as well as Global Health (OPPGH5284) and Grand Challenges Explorations (OPP1035192) awards from the Bill & Melinda Gates Foundation and by NIH grants 1R01AI100135-01 and 3R01AI067353- 05S1 and National Institute of Allergy & Infectious Diseases Adjuvant Discovery Program, Contract No. HHSN272201400052C. The Levy Laboratory has received sponsored research support from VentiRx Pharmaceuticals, 3M Drug Delivery Systems, MedImmune, and Crucell (Johnson & Johnson)- companies that develop adjuvants and vaccines.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nIson MG: Adenovirus infections in transplant recipients. Clin Infect Dis. 2006; 43(3): 331–339. PubMed Abstract | Publisher Full Text\n\nCarrigan DR: Adenovirus infections in immunocompromised patients. Am J Med. 1997; 102(3A): 71–74. PubMed Abstract\n\nHoward DS, Phillips II GL, Reece DE, et al.: Adenovirus infections in hematopoietic stem cell transplant recipients. Clin Infect Dis. 1999; 29(6): 1494–1501. PubMed Abstract | Publisher Full Text\n\nHale GA, Heslop HE, Krance RA, et al.: Adenovirus infection after pediatric bone marrow transplantation. Bone Marrow Transplant. 1999; 23(3): 277–282. PubMed Abstract\n\nHierholzer JC: Adenoviruses in the immunocompromised host. Clin Microbiol Rev. 1992; 5(3): 262–274. PubMed Abstract | Free Full Text\n\nGavin PJ, Katz BZ: Intravenous ribavirin treatment for severe adenovirus disease in immunocompromised children. Pediatrics. 2002; 110(1 Pt 1): e9. 
PubMed Abstract\n\nMcCarthy AJ, Bergin M, De Silva LM, et al.: Intravenous ribavirin therapy for disseminated adenovirus infection. Pediatr Infect Dis J. 1995; 14(11): 1003–1004. PubMed Abstract | Publisher Full Text\n\nChen FE, Liang RH, Lo JY, et al.: Treatment of adenovirus-associated haemorrhagic cystitis with ganciclovir. Bone Marrow Transplant. 1997; 20(11): 997–999. PubMed Abstract | Publisher Full Text\n\nKawakami M, Ueda S, Maeda T, et al.: Vidarabine therapy for virus-associated cystitis after allogeneic bone marrow transplantation. Bone Marrow Transplant. 1997; 20(6): 485–490. PubMed Abstract | Publisher Full Text\n\nKitabayashi A, Hirokawa M, Kuroki J, et al.: Successful vidarabine therapy for adenovirus type 11-associated acute hemorrhagic cystitis after allogeneic bone marrow transplantation. Bone Marrow Transplant. 1994; 14(5): 853–854. PubMed Abstract\n\nSaquib R, Melton LB, Chandrakantan A, et al.: Disseminated adenovirus infection in renal transplant recipients: the role of cidofovir and intravenous immunoglobulin. Transpl Infect Dis. 2010; 12(1): 77–83. PubMed Abstract | Publisher Full Text\n\nMuller WJ, Levin MJ, Shin YK, et al.: Clinical and in vitro evaluation of cidofovir for treatment of adenovirus infection in pediatric hematopoietic stem cell transplant recipients. Clin Infect Dis. 2005; 41(12): 1812–1816. PubMed Abstract | Publisher Full Text\n\nLjungman P, Ribaud P, Eyrich M, et al.: Cidofovir for adenovirus infections after allogeneic hematopoietic stem cell transplantation: a survey by the Infectious Diseases Working Party of the European Group for Blood and Marrow Transplantation. Bone Marrow Transplant. 2003; 31(6): 481–486. PubMed Abstract | Publisher Full Text\n\nHoffman JA, Shah AJ, Ross LA, et al.: Adenoviral infections and a prospective trial of cidofovir in pediatric hematopoietic stem cell transplantation. Biol Blood Marrow Transplant. 2001; 7(7): 388–394. 
PubMed Abstract | Publisher Full Text\n\nBhadri VA, Lee-Horn L, Shaw PJ: Safety and tolerability of cidofovir in high-risk pediatric patients. Transpl Infect Dis. 2009; 11(4): 373–379. PubMed Abstract | Publisher Full Text\n\nEngelmann G, Heim A, Greil J, et al.: Adenovirus infection and treatment with cidofovir in children after liver transplantation. Pediatr Transplant. 2009; 13(4): 421–428. PubMed Abstract | Publisher Full Text\n\nYusuf U, Hale GA, Carr J, et al.: Cidofovir for the treatment of adenoviral infection in pediatric hematopoietic stem cell transplant patients. Transplantation. 2006; 81(10): 1398–1404. PubMed Abstract | Publisher Full Text\n\nLegrand F, Berrebi D, Houhou N, et al.: Early diagnosis of adenovirus infection and treatment with cidofovir after bone marrow transplantation in children. Bone Marrow Transplant. 2001; 27(6): 621–626. PubMed Abstract\n\nAnderson EJ, Guzman-Cottrill JA, Kletzel M, et al.: High-risk adenovirus-infected pediatric allogeneic hematopoietic progenitor cell transplant recipients and preemptive cidofovir therapy. Pediatr Transplant. 2008; 12(2): 219–227. PubMed Abstract | Publisher Full Text\n\nWallot MA, Dohna-Schwake C, Auth M, et al.: Disseminated adenovirus infection with respiratory failure in pediatric liver transplant recipients: impact of intravenous cidofovir and inhaled nitric oxide. Pediatr Transplant. 2006; 10(1): 121–127. PubMed Abstract | Publisher Full Text\n\nRibaud P, Scieux C, Freymuth F, et al.: Successful treatment of adenovirus disease with intravenous cidofovir in an unrelated stem-cell transplant recipient. Clin Infect Dis. 1999; 28(3): 690–691. PubMed Abstract | Publisher Full Text\n\nNaesens L, Lenaerts L, Andrei G, et al.: Antiadenovirus activities of several classes of nucleoside and nucleotide analogues. Antimicrob Agents Chemother. 2005; 49(3): 1010–1016. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorfin F, Dupuis-Girod S, Mundweiler S, et al.: In vitro susceptibility of adenovirus to antiviral drugs is species-dependent. Antivir Ther. 2005; 10(2): 225–229. PubMed Abstract\n\nCarter BA, Karpen SJ, Quiros-Tejeira RE, et al.: Intravenous Cidofovir therapy for disseminated adenovirus in a pediatric liver transplant recipient. Transplantation. 2002; 74(7): 1050–1052. PubMed Abstract\n\nDoan ML, Mallory GB, Kaplan SL, et al.: Treatment of adenovirus pneumonia with cidofovir in pediatric lung transplant recipients. J Heart Lung Transplant. 2007; 26(9): 883–889. PubMed Abstract | Publisher Full Text\n\nSivaprakasam P, Carr TF, Coussons M, et al.: Improved outcome from invasive adenovirus infection in pediatric patients after hemopoietic stem cell transplantation using intensive clinical surveillance and early intervention. J Pediatr Hematol Oncol. 2007; 29(2): 81–85. PubMed Abstract | Publisher Full Text\n\nWilliams KM, Agwu AL, Dabb AA, et al.: A clinical algorithm identifies high risk pediatric oncology and bone marrow transplant patients likely to benefit from treatment of adenoviral infection. J Pediatr Hematol Oncol. 2009; 31(11): 825–831. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGanapathi L, Arnold A, Jones S, et al.: Dataset 1 in: Use of cidofovir in pediatric patients with adenovirus infection. F1000Research. 2016. Data Source\n\nGanapathi L, Arnold A, Jones S, et al.: Dataset 2 in: Use of cidofovir in pediatric patients with adenovirus infection. F1000Research. 2016. Data Source" }
[ { "id": "13586", "date": "03 May 2016", "name": "Miguel O’Ryan", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well-written, descriptive, retrospective study of a case series of immunocompromised children receiving cidofovir for treatment of mostly probable, a few definite or asymptomatic, adenovirus infections. Children had different underlying diseases, including a few with solid organ transplantation. The series review is transparent, showing viral and clinical evolution as well as renal compromise in association with treatment. Most children cleared adenovirus with treatment but, as the authors point out, also in association with improved immunity. Most but not all children clearing the virus improved clinically; a few died despite clearing the virus. Transient nephrotoxicity occurred in nearly 50% of children, but this occurred in association with other nephrotoxic treatments and was not a major problem. 
This review adds to the rather limited number of currently available series and, although not providing any definite conclusion (clinicians will still have to balance several variables before deciding whether or not to use cidofovir), it adds helpful information for treating physicians who have to make these hard decisions.", "responses": [] }, { "id": "13587", "date": "04 May 2016", "name": "Jeffrey Bergelson", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis paper reports the use of cidofovir for treatment of adenovirus infection in 16 pediatric patients, and provides detailed information about the patients’ clinical status, blood viral loads, and renal function throughout the treatment course. I have only one major point for the authors to consider. Most patients survived, and a number had decreases in viral load within several weeks of the start of cidofovir therapy. However, the patient group was quite heterogeneous (one case had cerebral palsy, but no apparent immune dysfunction); in some cases the viral loads rose for a long time before they began to fall; and 4 of the 7 patients who had undergone allogeneic stem cell transplant died. I think the discussion should point out explicitly that it is difficult to conclude from these data whether or not cidofovir provided any clinical benefit.\n\nMinor points: Clarify the assay used for viral loads. Is it the Argene assay, or an in-house assay using some components of the Argene kit? Clarify the definition of “viral response”. 
Is it a 10-fold decrease in titer? Perhaps mention briefly why the asymptomatic patients were treated with cidofovir. It appears that one was a lung transplant patient (considered to be at high risk) and one a SCT patient with recurrent viremia. Figure 2 adds little.\n\nTypos, spelling, glitches:\n\nPage 3. “These associated adverse effects”; THIS adverse EFFECT.\n\nPage 3. “1 mg/kg three times” WEEKLY.\n\nPage 3. “Respiratory tract”; italicize.\n\nPage 3. “The peak SCr during therapy was used to calculate the number…” should be deleted.\n\nTable 1. “multivisCeral”.\n\nPage 5. “Respiratory symptoms were the most common presentation in 10 courses”; should be “… most common presentation (10 courses) …”.\n\nPage 5. “Two courses were prescribed… respectively”; should be “One course was prescribed in asymptomatic x, and one in y.”\n\nPage 5. “improvement with viral clearance, however”; comma should be a semicolon.\n\nPage 6. “… one third died all of which”; should be “… died, all of WHOM …”.", "responses": [ { "c_id": "1959", "date": "06 May 2016", "name": "Jeffrey Bergelson", "role": "Reviewer Response", "response": "Because the paper provides no evidence about efficacy, the abstract should be changed, as well as the discussion. \"….was associated with viral response and clinical improvement\" should be softened." }, { "c_id": "2352", "date": "16 Dec 2016", "name": "Lakshmi Ganapathi", "role": "Author Response", "response": "Addressing Dr Bergelson’s report, in the new version of the manuscript we have expanded our discussion to point out more explicitly that, at least based on our data, while we did observe clearance of viremia in several patients, overall clinical benefit was somewhat harder to conclude. We have also mentioned the rationale behind treating asymptomatic patients (both treated patients were considered to be at high risk for complications). In the methods section, we have clarified the following: 1) the assay used to determine viral load, 2) what we defined as viral response. 
We have also corrected the minor spelling and punctuation issues. We agree with both Dr Bergelson and Dr Hunstad that Figure 2 adds little and have removed it from this version of the manuscript." } ] }, { "id": "13746", "date": "09 May 2016", "name": "David A Hunstad", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nGanapathi et al. describe a case series of 16 patients who received a total of 19 courses of cidofovir for adenovirus infection. The authors provide virological data to demonstrate a response to cidofovir therapy, and provide additional data regarding renal function and co-administration of renal toxic medications. Ganapathi et al. provide unique data regarding the peak creatinine level during therapy with cidofovir, though this value is difficult to interpret given the confounding factors of other nephrotoxic agents. This case series is an incremental addition to the number of published cases of pediatric patients infected by adenoviruses who were treated with cidofovir. The manuscript follows a similar format to the case series reported by Bhadri et al. (2009), with presentation of the effects on renal function, and a summarization of the literature.\n\nThe authors should summarize outcomes from all other case series to provide updated overall mortality and nephrotoxicity of CDV-treated adenovirus infections in hematopoietic stem cell transplants, solid organ transplants, or other patients.  
Such analyses may enhance the timeliness and utility of this manuscript. In the discussion, the authors describe the potential benefit of cidofovir in solid organ transplant patients, in conjunction with reduced immunosuppression. However, without a comparative cohort of children who did not receive cidofovir or reduced immunosuppression, it would be difficult to draw this conclusion from the data presented. In addition, the authors do not discuss two interesting published papers. Humar et al. (2005) described a cohort of adult SOT patients with adenovirus viremia who did not receive cidofovir; all had spontaneous resolution of viremia with no deaths. In contrast, Seidemann et al. (2004) reported that 3 out of 5 pediatric SOT with adenoviremia (one also had a HSCT) died despite receiving cidofovir. The authors should absolutely cite these studies, especially Seidemann as it included pediatric SOT patients. In addition, the authors should highlight whether the positive outcomes in their SOT population are aligned with the adult study by Humar et al. (and the conclusion in that paper that treatment may not be necessary for adenoviremia in SOT patients); and contrast their results with those of Seidemann et al.\n\nSpecific comments:\n\nMethods: The authors should report their study site’s protocol for testing for adenoviruses, specifically, whether testing for adenoviruses was only performed on symptomatic patients, or if high-risk patients were routinely screened for adenovirus. Methods: Probenecid is often co-administered with cidofovir. The authors should report if this drug was used in any of the courses of cidofovir, and what dosages. If probenecid was not used in all courses, the authors should analyze their data to investigate the role of probenecid in reducing nephrotoxicity. Methods: It is not clear if adenovirus testing is completed on whole blood or serum. Later in the results, the authors state whole blood, but this should be explained in the methodology. 
Methods: The authors should define pre-treatment/peak/post-treatment creatinine levels in their methodology. In addition, what defines pre-treatment creatinine? Is that the creatinine on the day of initiation of cidofovir? One week prior? Baseline from months prior? Defining this value is essential to the author’s statistical analysis. Table 1: Patients should be ordered by diagnosis for better organization (as opposed to patient number); for example, the hematopoietic stem cell transplant patients should be grouped together, then the solid organ transplant patients should be grouped together. A column of outcomes should be added for greater clarity of which cases resulted in mortality, instead of the usage of an asterisk. Results: Much of the information stated in the paragraph starting “Of the 19 courses prescribed” is redundant with the data presented in table 1, including the number of courses involving viremia, pneumonia, and GI symptoms. This paragraph can be consolidated, and the case characteristics presented as already implemented in Table 1. Results: The authors should include a description of the patient’s response to cidofovir in Table 1. This would provide better organization and clarity in interpreting which characteristics were associated with response (or lack thereof) to therapy. It also would make the paragraph “We further examined the 16 blood-positive…” redundant, and this could be consolidated. Figure 1: This figure might be easier to interpret if it were separated, for example, into two graphs of HSCT and SOT/other patients. It is difficult to follow 16 distinct virological lines in the current graph.  In addition, the color choices for patients 4, 9, and 13 are too similar. The statistical analysis of increased risk of renal dysfunction when receiving 1 vs >2 renal toxic medications should be omitted as currently presented. 
The authors do not describe whether the additional nephrotoxic agents were co-administered at the same time or for what duration. These are critical confounding variables that are necessary for interpretation of whether nephrotoxicity was associated with increased administration of additional nephrotoxic agents. Given that this review is retrospective, such information may or may not be available.  Figure 2 is a somewhat awkward way to describe SCr before and after CDV therapy.  In addition, it is unclear how, if only 64% of patients with elevated SCr returned to baseline, the mean SCr after CDV was equal to baseline (in the right bar of Figure 2). The table describes the SCr data adequately, and Figure 2 could therefore be omitted.  The publication search criteria used to generate Table 3 should be explained in the methods. In their literature review, the authors should incorporate other pediatric adenovirus-cidofovir case reports and series that were not reviewed in this manuscript, including: Seidemann et al. (2004), Leruez-Ville et al. (2004), Nagafuji et al. (2004). At least brief mention of the advent of brincidofovir for treatment of these patients with much reduced nephrotoxicity would be warranted.", "responses": [ { "c_id": "2351", "date": "16 Dec 2016", "name": "Lakshmi Ganapathi", "role": "Author Response", "response": "Addressing Dr Hunstad’s and Dr Janowski’s report, in the new version of the manuscript we have clarified the following in the methods section: 1) The protocol for testing for adenovirus infection, 2) Usage of probenecid and dose (it was used in all patients who received cidofovir), 3) how we defined pre-treatment creatinine and 4) the publication search criteria. 
While we mention the use of concomitant medications that are known to cause nephrotoxicity, we agree that the analysis is confounded by several factors and hence have omitted the statistical analysis of increased risk of renal dysfunction when receiving 1 versus >1 nephrotoxic medications. In the discussion section, we have expanded our discussion on the potential benefit of cidofovir in solid organ transplant recipients, and compare the findings of Humar et al (a study in which patients were all adults) with the findings reported by Seidemann et al and Leruez-Ville et al. As such, we added the latter two studies to table 3 for an updated literature review. However, we did opt to leave out the study by Nagafuji et al, which the reviewers had recommended including. The study by Nagafuji et al focused primarily on adult HSCT recipients and included only one pediatric patient. We feel that, given much larger pediatric-specific case series of HSCT recipients with adenovirus infection, this one reported case adds little. On the other hand, we did include case reports in the solid-organ patient literature review given the paucity of data in this population. We have also made a brief mention regarding the advent of brincidofovir. We updated table 1 to include a column of outcomes. While we recognize that other changes as recommended by the reviewers may contribute to better organization of text and data, we did not make too many other changes to table 1 or figure 1 given that these changes are stylistic and not necessarily content relevant." } ] } ]
1
https://f1000research.com/articles/5-758
https://f1000research.com/articles/5-2851/v1
15 Dec 16
{ "type": "Software Tool Article", "title": "The ISMARA client", "authors": [ "Panu Artimo", "Séverine Duvaud", "Mikhail Pachkov", "Vassilios Ioannidis", "Erik van Nimwegen", "Heinz Stockinger" ], "abstract": "ISMARA (ismara.unibas.ch) automatically infers the key regulators and regulatory interactions from high-throughput gene expression or chromatin state data. However, given the large sizes of current next generation sequencing (NGS) datasets, data uploading times are a major bottleneck. Additionally, for proprietary data, users may be uncomfortable with uploading entire raw datasets to an external server. Both these problems could be alleviated by providing a means by which users could pre-process their raw data locally, transferring only a small summary file to the ISMARA server. We developed a stand-alone client application that pre-processes large input files (RNA-seq or ChIP-seq data) on the user's computer for performing ISMARA analysis in a completely automated manner, including uploading of small processed summary files to the ISMARA server. This reduces file sizes by up to a factor of 1000, and upload times from many hours to mere seconds. The client application is available from ismara.unibas.ch/ISMARA/client.", "keywords": [ "bioinformatics", "data analysis", "motif activity response analysis", "genome", "command line tool", "Graphical User Interface" ], "content": "Introduction\n\nMotif activity response analysis (MARA) is a general method that models genome-wide expression or chromatin state data in terms of computationally predicted regulatory sites for transcription factors (TFs) and microRNAs to infer the key regulators, their targets, and regulatory interactions between regulators that are operating in a given system (Arnold et al., 2013; Balwierz et al., 2014; Suzuki et al., 2009). 
MARA has been successfully used to reconstruct core regulatory networks across a wide range of mammalian systems (e.g. see Balwierz et al., 2014 and citations therein) and has recently been implemented as a completely automated online system called ISMARA (Integrated System for Motif Activity Response Analysis; ismara.unibas.ch; Balwierz et al., 2014). ISMARA is also one of many resources that are part of Switzerland’s Service Delivery Plan in ELIXIR (http://www.elixir-europe.org). To run ISMARA, a user only needs to upload her/his raw data to the server, which can be either gene expression data (microarray or RNA-seq data) or chromatin state data (ChIP-seq data) from a set of biological samples. Although ISMARA is a highly popular tool, the current sizes of raw next-generation sequencing datasets are so large (up to hundreds of GB) that their upload to the web server can require many hours, and this has become a major bottleneck for many users.\n\nTo address this problem, we have developed a stand-alone client application (called the ISMARA client) that completely automates the process of pre-processing the user's raw data on her/his own computer, and transmits the much smaller resulting processed files to the ISMARA server for analysis. Since the processed files are many orders of magnitude smaller than the original raw files, the upload is short, even with slow Internet connection speeds.\n\nThe resulting processed file (typically several MB in size) is a simple tab-delimited file, which is sent to the ISMARA web server, where it is analyzed in exactly the same way as when raw data is uploaded. The pre-processing that the ISMARA client performs is also identical to the pre-processing that would otherwise take place on the ISMARA server. 
Overall, by reducing transfer load and therefore upload times, the ISMARA server is less busy with file transfers, can respond more quickly to client requests, and the end-user experience is generally improved by shorter waiting times.\n\nAnother important feature of the ISMARA client is that it allows users to only communicate highly summarized data to the ISMARA server. In many cases users may be uncomfortable with uploading entire raw datasets of potentially highly competitive data to an external server. By using the ISMARA client, the raw data stays on the user's premises, and only small summary information is sent to the ISMARA server for further processing.\n\n\nMethods and implementation\n\nIn developing the ISMARA client application, our main objectives were to reduce data transfer times and to provide a software application that is easy to install and use on several platforms, i.e., operating systems. We selected the framework Qt5 (www.qt.io) using QML (http://doc.qt.io/qt-5/qtqml-index.html) for the user interface and C++ for the platform-independent part. Several of the pre-processing steps that are currently performed on raw data by the ISMARA web server have been implemented on the client side, i.e., within the ISMARA client, and packaged as a native application for Mac OS X and Linux.\n\nThe ISMARA client can process microarray data (CEL files), and RNA-seq and ChIP-seq data (BAM/BED files). Depending on the data type, there are different processing procedures. For microarray data, the ISMARA client first performs background correction on the probe intensities, followed by correction and adjustment for non-specific binding, and then filters out consistently non-expressed probes. After this, it quantile-normalizes the intensities across the samples and log-transforms them. A list of microarray chips that are currently supported is available on the ISMARA website (cf “Usage” at ismara.unibas.ch/fcgi/mara). 
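The two microarray steps named above, quantile normalisation across samples followed by a log transform, can be sketched as follows. This is an illustrative NumPy re-implementation of the standard technique, not the ISMARA client's actual code; the pseudo-count in the log transform and the toy matrix are assumptions made for the example.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a probes-by-samples intensity matrix:
    every sample (column) is mapped onto a common reference
    distribution, the mean of the per-sample sorted intensities."""
    order = np.argsort(X, axis=0)                # sorting permutation per sample
    ranks = np.argsort(order, axis=0)            # rank of each probe within its sample
    reference = np.sort(X, axis=0).mean(axis=1)  # mean sorted column = reference
    return reference[ranks]

def log_transform(X, pseudocount=1.0):
    """Log2-transform intensities; the pseudo-count avoids log(0)."""
    return np.log2(X + pseudocount)

# Toy matrix: 4 probes x 3 samples.
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xq = quantile_normalize(X)
Xn = log_transform(Xq)  # after normalization, every sample holds the same set of values
```

Note that this simple rank-based version handles tied intensities crudely; production implementations (e.g. in R's microarray packages) average reference values over ties.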
For RNA-seq data, the client first sorts and indexes the input files, maps the reads to ISMARA's transcript set for the corresponding organism, uses ISMARA's associations between promoters and transcripts and the annotated transcript lengths to calculate normalized expression levels per promoter, and finally log-transforms the expression levels. ChIP-seq data files are sorted and indexed, reads that map to promoter regions (2kb regions centered on each promoter) are counted, and the counts are normalized and log-transformed. Detailed descriptions of all processing steps can be found in the original ISMARA paper (Balwierz et al., 2014).\n\nThe actual software application uses several external tools, including samtools (Li et al., 2009), htslib and bedops (Neph et al., 2012), as well as scripts and modules in R and Python. Additionally, a new internal interface has been developed on the ISMARA server that is used by the ISMARA client to automatically upload locally pre-processed data.\n\nFrom a user's point of view, the ISMARA client is a convenient tool that takes large raw data files as input, processes them locally (using several CPU cores in parallel) and then submits the results of the pre-processing as a tab-delimited text file to the ISMARA server. The server then performs MARA on this pre-processed data and displays the final results in a web page, i.e. exactly as when raw data are uploaded to the web server. The user experience of the client and the existing web application are very similar, i.e., the client follows the web site's look and feel. The user starts by selecting the data type (microarray, RNA-seq or ChIP-seq): for RNA-seq and ChIP-seq, the user is also requested to select a genome assembly [human genome versions hg18 or hg19 or mouse genome version 9 (mm9)].\n\nOnce the options are selected, a user can add files in CEL, BAM or BED formats. Next, the pre-processing is started by clicking on the “Process data” button. 
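The RNA-seq step described above — combining read counts per promoter with annotated transcript lengths into normalized, log-transformed expression levels — can be illustrated with a TPM-like calculation. This is only a sketch of the general technique: the exact normalisation ISMARA applies is described in Balwierz et al. (2014), and the function name, pseudo-count, and example numbers here are invented for illustration.

```python
import math

def promoter_expression(counts, lengths_kb, pseudocount=1.0):
    """Turn raw read counts per promoter into length- and
    depth-normalized, log2-transformed expression values
    (a TPM-like scheme). `counts` and `lengths_kb` are parallel
    lists: reads assigned to each promoter and the annotated
    transcript length in kilobases."""
    rates = [c / l for c, l in zip(counts, lengths_kb)]   # reads per kb of transcript
    total = sum(rates)
    per_million = [r / total * 1e6 for r in rates]        # normalize for sequencing depth
    return [math.log2(x + pseudocount) for x in per_million]

# Toy example: three promoters; the first two have equal reads-per-kb
# rates, so they receive identical normalized expression values.
expr = promoter_expression(counts=[100, 300, 0], lengths_kb=[1.0, 3.0, 2.0])
```

Dividing by transcript length before the per-million scaling is what makes expression comparable between short and long transcripts, which is why the annotated lengths are needed at all.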
Note that, if present, the “Email” and “Project name” fields can be used by the ISMARA server to send a notification when processing of a specific job has finished.\n\nAdditionally, the ISMARA client also implements a new functionality that is currently not available on the web server: several jobs, i.e., processing/submission requests, can be managed with the client application. In particular, the client stores all on-going and finished jobs of the user, including their download URLs, so that it is easy to manage multiple sets of experiments. Detailed log information is also available and can be copy-pasted for further communication with the ISMARA team in case of problems or questions.\n\nIn order to allow and test for platform-independence, the application was developed on several Linux flavours (Linux Mint, CentOS and Ubuntu), as well as on Mac OS X using bash UNIX shell as the main glue between scripts and external applications. Original plans also included supporting MS Windows natively (Qt5 allows this), but external dependencies on scripting and bioinformatics software, such as Python, samtools, R, and Bash, for which support is limited on MS Windows, could not be resolved without considerable re-engineering efforts. Therefore, we decided to use VirtualBox (http://www.virtualbox.org) to create disk images that can also be run on Windows machines. Specifically, an Ubuntu image of the ISMARA client can be installed and run in VirtualBox on MS Windows, allowing Windows users to make use of the ISMARA client.\n\nIn summary, easily installable binary applications of the ISMARA client are currently provided on-line for Ubuntu 15.04 and Mac OS X (10.10 and 10.11). Additionally, other Linux flavours and/or virtual machine images via VirtualBox can be provided on demand. 
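The job management described above — the client storing all on-going and finished jobs together with their download URLs across restarts — can be as small as a persistent name-to-record mapping. The following is a minimal sketch; the class, the on-disk JSON format, and the result URL shown are all hypothetical, not the client's actual storage scheme.

```python
import json
import os
import tempfile

class JobStore:
    """Persist submitted jobs (name -> status and result URL) to disk,
    so finished and on-going jobs survive a client restart."""

    def __init__(self, path):
        self.path = path
        self.jobs = {}
        if os.path.exists(path):
            with open(path) as fh:
                self.jobs = json.load(fh)

    def record(self, name, status, url=None):
        self.jobs[name] = {"status": status, "url": url}
        with open(self.path, "w") as fh:  # write-through on every update
            json.dump(self.jobs, fh)

path = os.path.join(tempfile.mkdtemp(), "jobs.json")
JobStore(path).record("rnaseq-batch-1", "finished",
                      "https://ismara.unibas.ch/results/123")  # hypothetical URL
reloaded = JobStore(path)  # a fresh instance, as after a restart, sees the stored job
```

Writing the whole mapping on every update keeps the sketch simple; a real client would also guard against concurrent writes and corrupted files.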
The ISMARA client can be installed on a machine with modest requirements: 4 GB of RAM, and fairly recent versions of R (3.2.0 and 3.1.2 for Mac and Linux, respectively) and Python (2.7.6 and 2.7.9 for Mac and Linux, respectively) need to be preinstalled. Notably, because experimental files can be several tens of GB in size, the client allows machines with limited amounts of disk space to make use of external hard drives. Importantly, usage of an external hard drive has no significant impact on the pre-processing performance and can be easily set up from the ISMARA client’s preferences.\n\n\nResults\n\nTo assess the performance of the client in comparison with usage of the ISMARA webserver directly, we compared two scenarios that we denoted S1 and S2 (cf. Table 1): S1 uses the ISMARA client to pre-process data (P1), uploads small summary files to the server (Upload), and then performs the final analysis on the server (P2). Scenario S2 uploads all data (i.e., large files) to the ISMARA server directly, without using the ISMARA client, and lets the server perform both the pre-processing and final analysis (P1+P2). We tested both scenarios on networks with different speeds and used two different datasets: a set of RNA-seq files (GEO accession, GSE30611) with a total size of 30.2 GB, and a set of ChIP-seq files (GEO accession, GSE26386) with a total size of 3.6 GB.\n\nThe analysis used a client with 4 cores and a server with 12 cores, on both fast and slow networks. Tests were done in July 2016.\n\nTo investigate the performance gains of the ISMARA client for transferring data of reduced size, we compared the sizes of the original input files with the data file sizes that are obtained from the pre-processing by the client (P1). We analysed expression and ChIP-seq data on middle-range desktop machines (Intel core i7 quadcore processors) running Linux Mint or Mac OS X using the example data available on the ISMARA server in the ‘sample data’ section. 
The pre-processing of ChIP-seq and RNA-seq data on the client led to file size reductions by a factor of about 300 to more than 1000 (10.4 MB and 17.4 MB compared to the original file sizes of 3.6 GB and 30.2 GB, respectively). A smaller file size reduces network transfer times significantly (Stockinger et al., 2002), particularly on high-latency wide-area network connections. For the RNA-seq example in Table 1, uploading the original 30.2 GB files took from 30 to 60 min on fast networks (1 Gbit/s network speed) to 5–6 hours on “normal” mid-size/home network links (10 Mbit/s speed). In contrast, uploading the pre-processed data file of 17.4 MB took only several seconds on both fast and slow links.\n\nNext, we compared end-to-end processing times of scenarios S1 and S2 (cf. column ‘Total’ in Table 1). For the S1 scenario, using 4 cores for the ISMARA client, we observed a total processing time of 2h45 for RNA-seq, including client-side processing, upload and web-server-side processing. Upload time was negligible due to the small size of the pre-processed data file. For the S2 scenario, in which 30.2 GB of data is first uploaded to the server before all processing is done on the 12-core ISMARA server, we observed the following two total processing times: 2h15–2h45 for a 1 Gbit/s network and 7h45 for a 10 Mbit/s network. In summary, using the client on 10 Mbit/s (“slower”) networks was always faster than using the server only (S2). Even for fast networks, the observed total processing time was similar for S1 and S2.\n\nFor the ChIP-seq data (Table 1), overall execution times of scenarios S1 and S2 were similar. Finally, we did not observe any file size reductions for microarray experiments (GEO accession, GSE26386), due to the fact that input file sizes were much smaller (e.g. 36.9 MB) for microarray data in comparison with RNA-seq and ChIP-seq data. 
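The upload times quoted above are consistent with a simple size-over-bandwidth estimate. The following back-of-the-envelope check computes ideal transfer times (no protocol overhead), which land in the same order of magnitude as the reported figures: hours for the raw 30.2 GB dataset on a 10 Mbit/s link, seconds for the 17.4 MB summary file.

```python
def upload_seconds(size_bytes, link_mbit_per_s):
    """Ideal transfer time: payload bits divided by raw link bandwidth."""
    return size_bytes * 8 / (link_mbit_per_s * 1e6)

GB, MB = 1e9, 1e6
raw_upload = upload_seconds(30.2 * GB, 10)       # full RNA-seq dataset, 10 Mbit/s home link
summary_upload = upload_seconds(17.4 * MB, 10)   # pre-processed summary file, same link
# raw_upload is on the order of hours; summary_upload on the order of seconds.
```

Real uploads deviate from these ideal times because of TCP and HTTP overhead and link contention, but the roughly thousand-fold ratio between the two times is exactly the file-size reduction factor.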
Notably, the client pre-processed data files that were uploaded remained relatively small for microarray data as well. Overall, the total processing times for scenarios S1 and S2 with microarray data showed no significant differences.\n\n\nConclusion\n\nThe ISMARA client works very well for medium to large datasets by reducing both data transfer times and in many cases also the overall execution times.\n\n\nSoftware availability\n\nISMARA client available from: https://ismara.unibas.ch/ISMARA/client/\n\nISMARA client source code: https://gitlab.isb-sib.ch/ST/ismara-client\n\nISMARA client archived source code at time of publication: DOI, 10.5281/zenodo.192284 (Artimo et al., 2016)\n\n(https://zenodo.org/record/192284#.WEbJSNWLTcs)\n\nLicence: GPL v2", "appendix": "Author contributions\n\n\n\nPA and SD developed the client application in Qt5/QML, integrating server-side scripts (in Python and R) that were developed and provided by MP. MP provided guidance on ISMARA's functionality, code and data. VI and HS did testing and project management. EvN provided the initial idea and overall supervision for the project/application. All authors contributed to writing this article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nSwiss State Secretariat for Education, Research and Innovation (SERI), in part. Development work on ISMARA in the van Nimwegen group is supported by the University of Basel, and by the CellPlasticity and BrainstemX grant of the Swiss National Science Foundation in the context of the SystemsX.ch initiative.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nArnold P, Schöler A, Pachkov M, et al.: Modeling of epigenome dynamics identifies transcription factors that mediate Polycomb targeting. Genome Res. 2013; 23(1): 60–73. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nArtimo P, Duvaud S, Pachkov M, et al.: ISMARA Client [Data set]. Zenodo. 2016. Data Source\n\nBalwierz PJ, Pachkov M, Arnold P, et al.: ISMARA: automated modeling of genomic signals as a democracy of regulatory motifs. Genome Res. 2014; 24(5): 869–884. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFANTOM Consortium, Suzuki H, Forrest AR, et al.: The transcriptional network that controls growth arrest and differentiation in a human myeloid leukemia cell line. Nat Genet. 2009; 41(5): 553–62. PubMed Abstract | Publisher Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeph S, Kuehn MS, Reynolds AP, et al.: BEDOPS: high-performance genomic feature operations. Bioinformatics. 2012; 28(14): 1919–1920. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStockinger H, Samar A, Holtman K, et al.: File and Object Replication in Data Grids. Cluster Comput. 2002; 5(3): 305–314. Publisher Full Text" }
[ { "id": "18518", "date": "18 Jan 2017", "name": "Carsten O. Daub", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nArtimo et al. present a software tool to pre-process microarray, RNA-Seq and ChIP-Seq data for server-based ISMARA motif activity response analysis. With the novel client tool, the data transfer from the user to the ISMARA server is dramatically reduced, saving time and keeping the primary data confidential.\n\nThe developed client tool is a very useful complement to the ISMARA server. It makes the ISMARA server much more user friendly. The manuscript is well written with a sufficient level of detail.\n\nI have two minor suggestions:\nThe client logfile is replaced after each start of the client. It might be helpful to be able to access logfiles for each of the jobs individually, as well as after restarting the client. It was unclear to me which genome version the sample data was mapped to. 
It might also help to state the species for the sample data in case a user does not read the GEO entries.", "responses": [] }, { "id": "19973", "date": "07 Feb 2017", "name": "Josep Lluís Gelpi", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe paper reports client software for the ISMARA server at SIB. The rationale of the application is to pre-process data at the user’s premises, reducing the amount of time required to upload raw data to the server. This is indeed very reasonable and, given the usual size of input data, it represents a significant time saving. The client installs fine and the software requirements are reasonable. Also, the interface is friendly and easy to follow. Some comments/suggestions follow:\n\n1. Instructions to install in a virtual machine for Windows are confusing. Links go to the installation packages of VirtualBox and Ubuntu desktop. I understand that users are expected to install the software in an empty Ubuntu VM after VirtualBox is available. This needs some skills in system administration. An easier way would be to download a VirtualBox VM with the software already installed. Consider providing such a ready-to-run VM, or alternatively a container (Docker or other).\n\n2. Data are uploaded automatically after pre-processing. Does the server calculation also start automatically after upload? The results page should auto-reload when the calculation is completed.\n\n3. Consider making the upload optional (although it can be the default). 
Users may be interested in checking the intermediate files before running the ISMARA calculation, and in manually uploading the relevant ones. Users may also store the intermediate files or re-use them for other analyses.\n\n4. Although the client is linked to a GUI, presumably the pre-processing work can also be done from a command line. If this is the case, help on the command-line instructions and parameters would be useful. In this way, experienced users could prepare a batch pre-processing job, or perhaps chain this into a larger workflow. Details of the upload procedure should be indicated.\n\n5. The source is made open, but no indication of the contribution policy is available.\n\n6. In the openSUSE KDE desktop, the interface shows some visual problems: the Data type menu is cut off (Use miRNA does not appear) and links in the output are not clickable. Also, the FAQ and Technical Support links are missing.\n\n7. The URL to the ISMARA results page does not appear in the log file, although the text indicates it should.", "responses": [] } ]
1
https://f1000research.com/articles/5-2851
https://f1000research.com/articles/5-2414/v1
28 Sep 16
{ "type": "Software Tool Article", "title": "Building pathway graphs from BioPAX data in R", "authors": [ "Nirupama Benis", "Dirkjan Schokker", "Frank Kramer", "Mari A. Smits", "Maria Suarez-Diez" ], "abstract": "Biological pathways are increasingly available in the BioPAX format, which uses an RDF model for data storage. We can retrieve the information in this data model in the scripting language R using the package rBiopaxParser, which converts the BioPAX format to one readable in R. It also has a function to build a regulatory network from the pathway information; here we describe an extension of this function. The new function will also include non-regulatory interactions in the pathway and thus allow extraction of maximum information. This function will be available as part of the rBiopaxParser distribution from Bioconductor.", "keywords": [ "rBiopaxParser", "R", "pathways", "BioPAX" ], "content": "Introduction\n\nBiological pathways represent signaling and/or metabolic events involving protein and non-protein molecules. They are increasingly used in gene and protein expression studies to provide an aggregate score for gene sets encoding defined biological events1. Several pathway databases, either curated or not, have adopted the BioPAX [RRID:SCR_009881] (Biological Pathway Exchange) language as a standard for pathway representation using the RDF (Resource Description Framework) data model2.\n\nThe structure of BioPAX is founded upon groupings, called classes, for physical entities and interactions, with hierarchical networks of their sub-classes. Interactions between physical entities are represented such that conjoint interactions may form a specific pathway with defined, but different, types of interactions between the involved physical entities. 
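The RDF data model underlying BioPAX stores each statement as a subject-predicate-object triple. The following is a purely illustrative sketch of that idea, written in Python for brevity rather than the R used by rBiopaxParser; the triples and the helper function are hypothetical and not drawn from any real BioPAX file:

```python
# Illustrative only: a toy RDF-style triple store mimicking the shape of
# BioPAX statements. Real BioPAX files are parsed with dedicated tools
# such as rBiopaxParser; all identifiers below are made up.
from collections import defaultdict

triples = [
    ("Catalysis1", "rdf:type", "Catalysis"),
    ("Catalysis1", "controller", "Protein1"),
    ("Catalysis1", "controlled", "BiochemicalReaction1"),
    ("BiochemicalReaction1", "rdf:type", "BiochemicalReaction"),
    ("BiochemicalReaction1", "left", "SmallMolecule1"),
    ("BiochemicalReaction1", "right", "SmallMolecule2"),
]

def by_predicate(statements):
    """Group (subject, predicate, object) triples by their predicate."""
    groups = defaultdict(list)
    for subject, predicate, obj in statements:
        groups[predicate].append((subject, obj))
    return dict(groups)

groups = by_predicate(triples)
# The rdf:type predicate tells us which class each instance belongs to.
print(groups["rdf:type"])
```

SPARQL queries operate on exactly this kind of triple structure, which is why they can be used to cross-check the output of a parser.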
The BioPAX format is being actively developed, with BioPAX level 2 format focusing on metabolic pathways and BioPAX level 3 introducing full support for signaling pathways.\n\nSPARQL (Simple Protocol And RDF Query Language) is a query language able to retrieve and manipulate data stored in RDF. Pathway information is often combined with statistical data analysis using tools such as R3. The rBiopaxParser [RRID:SCR_002744]4 is an R package to retrieve data stored in a BioPAX RDF format. It comes with several options that are useful to probe the data and extract specific information from it, for example participants of a pathway, stoichiometric conditions to be fulfilled for an interaction, etc.\n\nOne such option is the pathway2RegulatoryGraph (P2RG) function that converts a pathway into a graphical structure. This is extremely useful for visual representation and subsequent graph-based network analysis. The P2RG function returns the parts of a pathway that are regulated (activated or inhibited) by proteins or protein complexes. Here we present an adaptation of P2RG, called pathway2Graph (P2G) which can be used to build a graph of the entire pathway. P2G is specifically aimed at retrieving results from Reactome BioPAX level 3. We have verified P2G results by directly querying the original BioPAX data using SPARQL.\n\n\nMethods\n\nThe classes of PhysicalEntity and Interaction that are used in Reactome v51 to represent information on pathways are shown in Figure 1. This graph was generated using the tool RDF2Graph5 on the Reactome Level 3 RDF file. The nodes in Figure 1 represent classes and the edges show the possible relationships, called predicates, these classes could have in the database. As depicted in Figure 1, the node Pathway will have one or more PathwaySteps that consist of different types of Interaction sub-classes. 
All the Interaction nodes shown in Figure 1 describe interactions between PhysicalEntities, and hence are connected to them by particular types of predicates, as indicated in the edge labels. The Interaction classes are interconnected because they can be dependent on each other. The Control interaction and its sub-classes (Catalysis and Modulation) represent signaling events. They regulate BiochemicalReaction and Degradation interactions, which mostly represent metabolic reactions.\n\nThis figure shows a network of the Interaction and PhysicalEntity classes that are part of a pathway in Reactome v51 BioPAX level 3. Nodes are classes and the directed edges are links between them in the database. The green nodes are the Pathway and PathwayStep classes, the blue nodes are Interaction classes and the orange nodes are PhysicalEntity classes.\n\nTo create a regulatory graph, the P2RG function starts with the Control, Catalysis and Modulation interactions that are either activating or inhibiting other interactions. This method provides a graph with plenty of information on the regulatory components of the pathway. The nodes of this graph are physical entities like Proteins or SmallMolecules, and the directed edges are either activation or inhibition events. However, interactions that are not regulated by Control interactions can be missed, which could result in the loss of valuable information in the graphical representation of the pathway.\n\nThe new function P2G can start with any type of interaction to obtain a graph with all possible physical entities involved in the pathway. Similar to the result of the P2RG function, the P2G function gives a graph with nodes that are physical entities, but the edges are not strictly activation or inhibition events. The directed edges could represent several types of events, like translocation of a protein or cleavage of DNA. In some cases there is more than one documented connection between the same physical entities. 
In this case only the first connection is used as an edge in the final pathway graph.\n\n\nComparison of two methods: P2G vs P2RG\n\nThe Reactome database (v51) categorizes pathways into 27 branches; we worked only with pathways that have more than one interaction, resulting in 1,666 pathways. Using P2RG, graphs for 1,548 pathways were retrieved. By using the new P2G function, we were able to retrieve information on all 1,666 pathways. The highest number of pathways was obtained, using either method, in the “Disease” category (P2RG: 3,396 pathways, P2G: 4,888 pathways). In 85% of the cases, pathways retrieved using P2G consisted of more physical entities (nodes) than those retrieved using P2RG. 19% of the pathways have at least twice the number of nodes, and 60% have at least twice the number of interactions between nodes (edges), in the P2G version compared to the P2RG version. For example, the pathway ‘Apoptosis induced DNA fragmentation’ has seven nodes when built with the P2RG function and 23 nodes when built with P2G, as shown in Figure 2. The total numbers of nodes and edges in important Reactome categories are given in Table 1. Missing information causes the appearance of disconnected graphs when reconstructing pathways. By using the new P2G function, the percentage of disconnected pathways is reduced by 9%. Additionally, P2G has the option of only retrieving the biggest connected component. The pathways have directed edges because most of the interactions have direction. Edges without a direction are represented as bidirectional edges.\n\nBoth graphs were extracted from the same BioPAX file. A) Graph recovered using the new P2G function; B) Graph recovered using the P2RG function. In both panels, blue nodes are proteins or protein complexes and white nodes are non-protein entities. 
Black encircled nodes are found in both graphs and red encircled nodes are only detected with the new P2G function.\n\nThe number of nodes and edges of ten different pathways (Reactome Categories) are indicated as obtained after application of P2RG and P2G on the same set of BioPAX RDF information.\n\n\nConclusion\n\nThe P2G function (pathway2Graph) is currently available in the development version of the rBiopaxParser package and will be part of the package in the Bioconductor 3.4 release. It is a useful addition to the rBiopaxParser package because it retrieves all the components of a pathway from the database and provides complete graphical information for both signaling and metabolic pathways.\n\n\nData availability\n\nThe input data for this package is the BioPAX format of any pathway database. We used the Reactome database, which is freely available for download in different formats from the website www.reactome.org. A subset of this database is given as Supplementary file 1.\n\n\nSoftware availability\n\nSoftware available from: The function pathway2Graph is currently available in the development version of the R package rBiopaxParser, accessible through the following commands in R.\n\nlibrary(devtools)\n\ninstall_github(repo = \"rBiopaxParser\", username = \"frankkramer-lab\")\n\nLatest source code: https://github.com/frankkramer-lab/rBiopaxParser/tree/2.12.0\n\nArchived source code as at the time of publication: http://dx.doi.org/10.5281/zenodo.616186\n\nSoftware license: GPL-2", "appendix": "Author contributions\n\n\n\nNB built the new function and prepared the manuscript. DS guided the process and edited the manuscript. FK tested the function, added it to the package and contributed to the manuscript. MS contributed significantly to the manuscript. 
MSD guided the building of the function, tested it and edited the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work has been financially supported by the Systems Biology Investment Programme of Wageningen University, KB-17-003.02-022. Frank Kramer’s work is funded by the German Ministry of Education and Research (BMBF) grants FKZ01ZX1508 and FKZ031L0024A.\n\n\nSupplementary material\n\nSubset of Reactome database.\n\nThis .owl file contains information on four pathways from the Reactome v51 BioPAX level 3 database. This format can be loaded into the R environment using the rBiopaxParser package and used to test the P2G function and obtain graphs which were used as the basis for Figure 2. More information on loading and processing this file format can be found in the package documentation.\n\nClick here to access the data\n\n\nReferences\n\nMitrea C, Taghavi Z, Bokanizad B, et al.: Methods and approaches in the topology-based analysis of biological pathways. Front Physiol. 2013; 4: 278. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDemir E, Cary MP, Paley S, et al.: The BioPAX community standard for pathway data sharing. Nat Biotechnol. 2010; 28(9): 935–942. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKramer F, Bayerlová M, Beißbarth T: R-based software for the integration of pathway data into bioinformatic algorithms. Biology (Basel). 2014; 3(1): 85–100. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKramer F, Bayerlová M, Klemm F, et al.: rBiopaxParser--an R package to parse, modify and visualize BioPAX data. Bioinformatics. 2013; 29(4): 520–522. PubMed Abstract | Publisher Full Text\n\nvan Dam JC, Koehorst JJ, Schaap PJ, et al.: RDF2Graph a tool to recover, understand and validate the ontology of an RDF resource. J Biomed Semantics. 2015; 6: 39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKramer F: rBiopaxParser 2.12.0 [Data set]. Zenodo. 2016. 
Data Source" }
[ { "id": "17096", "date": "02 Nov 2016", "name": "Lynn Fink", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis article describes the addition of a new function to the extant rBiopaxParser R library. This new function converts a BioPAX-formatted pathway of gene or protein interactions into a graphical structure that is human-viewable. This function supersedes an earlier function which performed the same task but was unable to convert all nodes and edges in the pathway, resulting in a loss of information. This loss is now remedied with the new and improved function.\nI installed the package and tested it and it seemed to work flawlessly, although I didn't try anything fancy. I imagine that the new function is a significant advance for people working on these pathways routinely, as it must have been frustrating to have missing data.\nMy only reservation about the paper is that I was unclear on the point of the new function until I'd read most of the paper. Perhaps the authors could extend the last paragraph of the introduction to be clearer about why the new function was necessary and how it has improved the R package. Although this request is a small change, the article is confusing as it is and making this change would be a big improvement.", "responses": [ { "c_id": "2339", "date": "12 Dec 2016", "name": "Nirupama Benis", "role": "Author Response", "response": "Thank you for your comments. 
The Introduction has been expanded with an image (now Figure 1) to emphasize the differences between the two functions." } ] }, { "id": "17305", "date": "07 Nov 2016", "name": "Stephen N. Floor", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors have developed a new function in the rBiopaxParser package to generate figures from BioPAX-formatted biological pathway data. This new function, called pathway2Graph (P2G), replaces an older function called pathway2RegulatoryGraph (P2RG). P2G includes more interaction terms than P2RG, and therefore generates more complete interaction graphs. That said, as written the changes are mysterious: why did the original function limit the interaction types when generating graphs, and what is advantageous about including more interactions? Addition of the P2G function is a small improvement to the rBiopaxParser package that will be useful, but the discussion of P2G's advantages and disadvantages should be expanded.\nSpecific points:\nThe language throughout is highly technical, potentially compromising its readability to end users.\n\nThe edge labels in Figure 1 are very difficult to read. As these are discussed in the text, their font size and/or weight should be increased.\n\nIt would increase the readability of this software tool article if the authors described a (biological) scenario where the P2G function would be uniquely useful compared to P2RG. 
It’s obvious that including more interaction types will lead to more complete graphs, but in what scenario would this be useful for a user?\n\nWhy were all interaction types not originally included in P2RG? The advantages and any potential disadvantages of including all interaction types should be discussed more.\n\nCurrently, the difference between P2G and P2RG is rather minor (1,666 vs 1,548) – might this difference change in the future if more edges are added through the interaction types that are unique to P2G?\n\nThe work is technically sound and presents a useful extension to the rBiopaxParser package, but the paper describing this work will be useful to a broader audience if changes similar to those above are incorporated.", "responses": [ { "c_id": "2338", "date": "12 Dec 2016", "name": "Nirupama Benis", "role": "Author Response", "response": "Thank you for your comments. In the new version we have expanded the Introduction with non-technical specifics that should explain the basic differences between the functions to a broader audience. Figure 2 (Figure 1 in the previous version) has been changed to increase visibility of edge labels. In order to explain the differences between the outputs of the two functions we have added a section in the Methods and Results section where we describe a particular pathway (‘Apoptosis induced DNA fragmentation’) in terms of the extra information gained by using the new function. A new table (Table 2) with more biological information on this pathway has also been added." 
} ] }, { "id": "17940", "date": "24 Nov 2016", "name": "Hilary Ann Coller", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors have developed a new function that allows the user to build a regulatory network in a graph format based on pathway information. In the version that the authors developed, the output graph includes regulatory and non-regulatory interactions and allows the viewer to more fully comprehend the underlying network. Nodes in the network represent classes, while the edges show the relationships among these classes. An example of the approach is provided with the Reactome database. The authors’ approach, P2G, was used to analyze data for 1,666 pathways. P2G returned more nodes than was retrieved by the earlier version in 85% of the cases. The software will be available as part of the Bioconductor 3.4 release. This will likely be a valuable addition to the Bioconductor package that will provide scientists with a means for generating graphical and intuitive networks from gene expression and metabolic data. The manuscript is clear with an appropriate title and abstract. The article is clearly written and the conclusions are based on the data.", "responses": [ { "c_id": "2337", "date": "30 Nov 2016", "name": "Nirupama Benis", "role": "Author Response", "response": "Thank you for your comments." 
} ] }, { "id": "17476", "date": "02 Dec 2016", "name": "Kyle Ellrott", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe authors of this paper describe a new function provided by the rBiopaxParser library, which is an R-based system for parsing BioPax documents. BioPax is coded in RDF, which is a linked data format that describes the subject matter using graph triples in the form of a Subject, Predicate and Object. Their pathway2Graph and the old pathway2RegulatoryGraph essentially attempt to do graph transformations, taking the RDF graph and converting it to a graph that is suitable for analysis.\n\nThe paper’s primary metric for demonstrating improvements in their method is measuring the number of non-zero pathways retrieved after extracting data from Reactome. 
In addition, I tested the code and their supplied supplemental data.\n\n```\n> library(rBiopaxParser)\n> biopax = readBiopax(\"19c8ac7b-96b2-4db4-a78a-c2defed535ae.owl\")\n> a <- pathway2RegulatoryGraph(biopax, \"Pathway1020\")\n> b <- pathway2Graph(biopax, \"Pathway1020\")\n> nodes(a)\n [1] \"CASP3\"          \"CASP3(176-277)\"\n [3] \"DFFA(1-?)\"      \"DFFB\"\n [5] \"DFFA(118-331)\"  \"DFFA(1-117)\"\n [7] \"DFFA(225-331)\"  \"DFFA(118-224)\"\n [9] \"Histone H1\"     \"DFF40 homodimer/homooligomer\"\n[11] \"HMGB1/HMGB2\"\n> nodes(b)\n [1] \"Protein8776\"       \"Protein8777\"       \"Complex4232\"\n [4] \"Complex4233\"       \"Complex4234\"       \"Complex4235\"\n [7] \"Complex4169\"       \"Complex4238\"       \"Complex4236\"\n[10] \"Complex4239\"       \"Complex4240\"       \"Protein8779\"\n[13] \"Protein8784\"       \"Protein8785\"       \"Protein8783\"\n[16] \"Complex4241\"       \"PhysicalEntity567\" \"Complex2061\"\n[19] \"Complex4242\"       \"Protein8786\"       \"PhysicalEntity109\"\n[22] \"Complex4243\"       \"Complex4244\"\n```\nFrom the old version (a) to the new extraction (b), for this particular pathway the number of nodes went from 11 to 23. Interestingly, the names of the vertices became less descriptive in the newer method, going from gene names like ‘CASP3’ to strings extracted from the RDF URLs, like ‘Protein8776’.\nThe authors' claim about the number of elements extracted from the BioPax files appears accurate. But what is slightly confusing is the significance of this change. Was the original method faulty and this is a bug fix? Or is the method being updated to deal with the BioPax standard as it changes from version 1 to 2 to 3?\n\nThis seems like a minor but necessary change to the library that makes it better at extracting information from BioPax. 
More explanation of the nature of this change, and of how the new parsing strategy improved the library's handling of the various relationship classes in the BioPax format, would go a long way toward illustrating the improvements described in this paper.", "responses": [ { "c_id": "2390", "date": "28 Dec 2016", "name": "Nirupama Benis", "role": "Reader Comment", "response": "Thank you for your comments. We uploaded the second version of the paper before we received your comments. In the new version we have explained in more detail the differences between the two methods using the details of a pathway as an example. The new method is an extension of the existing function and simply serves to extract information on the regulatory and non-regulatory parts of a pathway. We hope that the current version of the paper sufficiently addresses your concerns." } ] } ]
1
https://f1000research.com/articles/5-2414
https://f1000research.com/articles/5-2841/v1
12 Dec 16
{ "type": "Software Tool Article", "title": "Integration of EGA secure data access into Galaxy", "authors": [ "Youri Hoogstrate", "Chao Zhang", "Alexander Senf", "Jochem Bijlard", "Saskia Hiltemann", "David van Enckevort", "Susanna Repo", "Jaap Heringa", "Guido Jenster", "Remond J.A. Fijneman", "Jan-Willem Boiten", "Gerrit A. Meijer", "Andrew Stubbs", "Jordi Rambla", "Dylan Spalding", "Sanne Abeln" ], "abstract": "High-throughput molecular profiling techniques are routinely generating vast amounts of data for translational medicine studies. Secure, access-controlled systems are needed to manage, store, transfer and distribute these data due to their personally identifiable nature. The European Genome-phenome Archive (EGA) was created to facilitate access to, and management of, the long-term archival of bio-molecular data. Each data provider is responsible for ensuring a Data Access Committee is in place to grant access to data stored in the EGA. Moreover, the transfer of data during upload and download is encrypted. ELIXIR, a European research infrastructure for life-science data, initiated a project (2016 Human Data Implementation Study) to understand and document the ELIXIR requirements for secure management of controlled-access data. As part of this project, a full ecosystem was designed to connect archived raw experimental molecular profiling data with interpreted data and the computational workflows, using the CTMM Translational Research IT (CTMM-TraIT) infrastructure (http://www.ctmm-trait.nl) as an example. Here we present the first outcomes of this project, a framework to enable the download of EGA data to a Galaxy server in a secure way. 
Galaxy provides an intuitive user interface for molecular biologists and bioinformaticians to run and design data analysis workflows. More specifically, we developed a tool, ega_download_streamer, that can download data securely from EGA into a Galaxy server, where they can subsequently be further processed. This tool will allow a user to run, within the browser, an entire analysis containing sensitive data from EGA, and to make this analysis available for other researchers in a reproducible manner, as shown with a proof-of-concept study. The tool ega_download_streamer is available in the Galaxy tool shed: https://toolshed.g2.bx.psu.edu/view/yhoogstrate/ega_download_streamer.", "keywords": [ "Galaxy", "EGA", "bioinformatics", "workflows", "translational research", "data management" ], "content": "Introduction\n\nWith the advent of high-resolution and high-throughput experimental platforms, the field of biomedical research has become more complex, with major shifts in data diversity and dimensions. Consequently, solutions for the increasing demands of data processing, storage and workflow management are required for translational research. Due to the privacy issues related to the clinical nature of translational research and personal footprints in molecular data, there is a need for a secure framework to store and analyse data. The aim of the CTMM Translational Research IT (CTMM-TraIT) project is to provide a multi-domain IT infrastructure as an end-to-end solution where researchers can capture, process, and share their study data. To achieve this, CTMM-TraIT makes use of large community-driven open-source software, including tranSMART1–3 and Galaxy4,5. 
In a collaboration between ELIXIR, CTMM-TraIT and the European Genome-phenome Archive (EGA), a full ecosystem was designed, as shown in Figure 1, to connect the storage of raw molecular profiling data with processed data and the computational workflows.\n\nThe clinical data of an experiment describe the clinical-pathological context, including tissue and patient information. Descriptors of the samples combined with these variables are stored in tranSMART. Molecular profiling data are derived from samples of patients: these samples are processed in the laboratory to obtain tissue derivatives, such as isolated DNA, RNA and proteins, which are subsequently analysed by high-throughput experimental techniques to obtain the raw molecular profiling data; the descriptions of the performed experiments are also stored in tranSMART. The actual raw data produced by the high-throughput analysis are physically stored in repositories like EGA, while the interpreted data, processed by extensive computational workflows, and references to the raw data are stored in tranSMART. The ability to reanalyse the raw data is provided by Galaxy. Note that the work described here, indicated by red arrows, implements a data connection, allowing a user to retrieve raw data from EGA in Galaxy and run subsequent workflows constructed from tools in the Galaxy tool shed.\n\nFacilitating the long-term storage and management of raw, interpreted and clinical data (patient and tissue information), supported by provenance of computational workflows, is a key aim of the CTMM-TraIT project; special attention to security is necessary, due to the privacy-sensitive nature of the data. EGA is a service that provides long-term archiving and distribution of identifiable genetic and phenotypic data resulting from biomedical research projects. Data stored at the EGA are collected from individuals whose consent agreements authorise release only for specific research use to bona fide researchers. 
Strict protocols govern how information and data are managed, stored, transferred and distributed by the EGA project, and each data provider is responsible for ensuring a Data Access Committee is in place to grant access to their data. However, EGA only functions as a long-term storage facility and does not facilitate analysis. Within the CTMM-TraIT project, we agreed upon a workflow in which the interpreted data, such as the BAM files, and the clinical-pathological data would be stored in tranSMART; the raw and uninterpreted data, such as FASTQ and BAM files, would be stored and archived in EGA. Figure 1 demonstrates how the clinical-pathological and interpreted data are managed by tranSMART, which links to the raw data in EGA, which in turn can be accessed and (re)analysed from within a Galaxy environment.\n\nWithin EGA, data are separated into different layers: 1) raw data, produced by high-throughput platforms; 2) metadata describing the raw data, e.g. machines and protocols used and descriptions of treatments and tissues; and 3) interpreted data, produced by running analyses on the first two layers. Since the EGA is ideally placed to facilitate continued data access and management for funded projects after their completion, and the data from layer 3 should be reproducible from the data in the other layers, only the data from layers 1 and 2 will go to EGA for archival storage.\n\nIn the ecosystem we use Galaxy, a popular and user-friendly web-based bioinformatics platform that provides an intuitive user interface to run and design workflows, to perform integrated analysis from multiple domains (genomics, transcriptomics and proteomics), and to share and communicate both results and methodologies. It makes use of tools and libraries provided by the bioinformatics community as plugins. Tools are embedded as plugins in such a way that each of them becomes a modular block that can be plugged into the next block (tool or visualisation)6. 
To directly import and hence analyse data stored in EGA within Galaxy, it is necessary to implement an interface from EGA to Galaxy as a plugin (a Galaxy tool wrapper).\n\nHere we present an end-to-end interface to a framework that seeks to extend data accessibility, ensure long-term archiving and facilitate downstream analysis by utilising EGA. The framework embeds EGA access into Galaxy, and allows subsequent workflows using (novel) Galaxy tools. An advantage of setting up an analysis in this way is that both the tools and the data are connected and centralised, and can be shown, shared, and reproduced. We further demonstrate the setup with an RNA-Seq use case.\n\n\nMethods\n\nWe have embedded the EGA download client (https://ega-archive.org/using-ega-download-client) into Galaxy as a tool wrapper, including dependency management. The tool is named ega_download_streamer and can be installed on Galaxy systems from the main tool shed. Before the tool can be used, Galaxy needs to be configured with an EGA account, as explained further in Supplementary File 1. Hereafter, we call this tool \"Galaxy EGA download streamer\".\n\nTo allow access to EGA, the tool interfaces with the EGA download client, ensuring that data are transferred from EGA in encrypted form. The Galaxy EGA download streamer gets data from EGA directly into the user’s history. On the Galaxy side, the tool presents a form requesting a unique EGA file identifier. After submission, it logs in with the configured credentials, creates an encryption key and sends this over a secure connection to EGA, requesting EGA to encrypt the file that corresponds to the identifier with the given key. Once the request is made, the encrypted package becomes available and is downloaded; subsequently the connection with EGA is closed. The package is locally decrypted and, if it is a file archive, extracted. Galaxy determines, with its built-in sniffing system, the file type (FASTQ, BAM, GTF, etc.) 
and eventually puts the files into the user’s history.\n\nGalaxy version 16.07 or above is required to use this tool, because automatic data type detection within a workflow is only supported from this version onwards. In addition, at least 30 GB RAM and 100 GB hard disk space is required to run the use case in the next section. Other system requirements for the installation of Galaxy can be found in the official Galaxy documentation (https://wiki.galaxyproject.org/Admin/GetGalaxy).\n\n\nUse case\n\nAs a proof of concept we show how the Galaxy EGA download streamer may be used in a workflow to detect fusion genes from RNA-seq data. To demonstrate this workflow we use cell line data that can be made publicly available. We use an RNA-Seq dataset of the VCaP cell line in the Galaxy workflow shown in Figure 2. Since the recurrent fusion TMPRSS2-ERG is found in more than 50% of diagnosed patients7, we test for the presence of fusion genes, and because VCaP contains TMPRSS2-ERG8 we can use TMPRSS2-ERG as a positive control. We use the tool STAR-Fusion (https://github.com/STAR-Fusion/STAR-Fusion/wiki), which can be used as a separate module after running the RNA-STAR aligner9.\n\nThe workflow first obtains the raw data from EGA and then allows reanalysis of the data in a workflow of multiple components to derive interpreted data. The raw forward and backward FASTQ sequencing reads are imported from EGA by ega_download_streamer; subsequently, the tool FASTQ Groomer performs a consistency check of the data formats; then, with Sickle, low-quality bases (Q<30) are trimmed and reads trimmed to fewer than 25 bases are discarded, so that only high-quality sequencing reads are output. Afterwards, these reads are aligned to the hg19 (GenBank Assembly ID GCA_000001405.1) reference genome with RNA STAR. STAR-Fusion is then used for predicting the fusion genes, which also requires two reference files as auxiliary inputs. 
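As an illustration of the kind of content-based type detection that the Methods attribute to Galaxy's built-in sniffing system, the sketch below guesses a datatype from a file's first line. The heuristics are deliberately simplified stand-ins and do not reflect Galaxy's actual sniffer implementations:

```python
def sniff_filetype(first_line: str) -> str:
    """Guess a datatype from the first line of a file.

    Simplified illustration of content-based sniffing; Galaxy's
    real sniffers inspect more of the file and many more formats.
    """
    if first_line.startswith(("@HD", "@SQ", "@RG", "@PG")):
        return "sam"    # SAM header record
    if first_line.startswith("@"):
        return "fastq"  # FASTQ read identifier
    if first_line.startswith(">"):
        return "fasta"  # FASTA sequence header
    if first_line.count("\t") == 8:
        return "gtf"    # nine tab-separated GTF/GFF columns
    return "unknown"
```

In practice Galaxy also recognises binary formats such as BAM (via the BGZF/gzip magic bytes), which a line-based sketch like this cannot cover.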
The output goes through two filters to only keep predictions having more than two split reads and more than two spanning reads.\n\nBesides the Galaxy EGA download streamer accessing EGA, this workflow also required adapting the RNA-STAR Galaxy wrapper from the IUC group (https://toolshed.g2.bx.psu.edu/repos/iuc/rgrnastar) by adding a specific fusion-detection settings preset, and creating a new Galaxy wrapper for STAR-Fusion. The workflow starts with obtaining data from EGA, which for this study are the raw paired-end FASTQ sequencing reads. These files correspond to the EGA identifiers EGAF00001210838 (forward) and EGAF00001210839 (reverse) and are the input for the Galaxy EGA download streamer. Because FASTQ has several sub-formats10 and we want to ensure the handshake with other tools, it is desirable to proceed with a FASTQ-sanger encoded file, which is ensured by the tool FASTQ Groomer11. Note that the search space for alignment is larger for fusion gene detection than for most other alignment purposes, such as determining expression levels; hence we would like to have a high base quality to avoid misalignments and unnecessary computation. We improve the base quality by trimming low-quality bases (Q<30) and discarding reads trimmed to fewer than 25 bases with the tool Sickle (12; https://github.com/najoshi/sickle). These high-quality sequencing reads were aligned to the hg19 (GenBank Assembly ID: GCA_000001405.1) reference genome. As proposed by the authors of STAR-Fusion, we use fusion gene detection-specific settings, available as the “Use parameters suggested for STAR-Fusion” preset in IUC’s Galaxy RNA-STAR wrapper. Besides a classical alignment file, it also produces an alignment file for the discordant reads and an equivalent junction file. STAR-Fusion uses the junction file to predict fusion genes and requires two additional reference files (Data and software availability). 
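The quality-trimming step can be sketched as follows; note that this is a simplified stand-in for Sickle, which slides a window over the qualities rather than performing the plain end-trimming shown here. The thresholds mirror those in the text (Q<30, minimum length 25):

```python
def trim_read(seq: str, quals: list, q_min: int = 30, min_len: int = 25):
    """Trim low-quality bases from the 3' end of a read.

    Returns the trimmed sequence, or None when the remaining
    high-quality portion is shorter than `min_len` (the read is
    then discarded, as in the workflow's Sickle step).
    """
    end = len(seq)
    while end > 0 and quals[end - 1] < q_min:  # drop trailing bases with Q < q_min
        end -= 1
    trimmed = seq[:end]
    return trimmed if len(trimmed) >= min_len else None
```

Reads whose high-quality portion survives the length threshold are kept; the rest are excluded from alignment, which reduces misalignments in the enlarged fusion-detection search space.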
STAR-Fusion produces a list that contains many candidates, including predictions with a rather low confidence level (fewer than 3 split or spanning reads). Therefore we end the workflow with two filters that only keep predictions that have more than two split reads and more than two spanning reads.\n\nThe results on the VCaP data contain candidate fusion genes with a high number of supporting reads: HNRNPC-KIAA0586, USP10-ZDHHC7, TMPRSS2-ERG, PIK3C2A-TEAD1. Except for HNRNPC-KIAA0586, the others were previously reported in RNA-Seq and DNA-Seq analyses11,13,14.\n\n\nDiscussion and conclusions\n\nThe EGA-TraIT implementation study sets out to design an entire ecosystem for molecular profiling data in clinical research, with a focus on security. Here we demonstrate with a proof-of-concept study that it is possible to connect EGA and Galaxy as designed within this system. This study is part of an ongoing effort to make EGA data comply with the FAIR (Findable, Accessible, Interoperable, and Reusable) data principles15, which will result in further recommendations on the EGA data model and ontologies in the near future. Here we highlight the implementation of the storage component and demonstrate how to use it in an analysis context. Its key value is that it allows tracking and redistributing the entire workflow and data jointly, from beginning to end, ensuring the provenance of all intermediate layers up to the final results. 
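The two closing filters of the workflow amount to a simple row filter over the STAR-Fusion candidate table. In the sketch below the `split` and `spanning` keys are illustrative names, not STAR-Fusion's actual column headers:

```python
def filter_predictions(predictions, min_split=3, min_spanning=3):
    """Keep fusion candidates supported by more than two split reads
    and more than two spanning reads (i.e. at least three of each)."""
    return [p for p in predictions
            if p["split"] >= min_split and p["spanning"] >= min_spanning]
```

Applied to a candidate list, this retains well-supported fusions such as TMPRSS2-ERG while discarding low-confidence predictions.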
As a result, we have:\n\nshared molecular data via EGA;\n\ncreated new Galaxy tools;\n\nshared the workflow, including all parameters, via a URL for Galaxy (as a shared history) and via myExperiment16;\n\nshared the interpreted data as a Galaxy history;\n\nshared a manual as a Galaxy page on how to set up such an experiment.\n\nFurther to the work described here, the implementation study continues until the end of 2016, and the complete outcomes from this project, with recommendations on structuring metadata, will be presented in a future report. This implementation study and the IMI OncoTrack implementation study (https://www.elixireurope.org/news/elixir-and-oncotrack-examine-solutions-long-term-management-translational-data) have provided complementary use cases for EGA to shape linkage with external databases, such as tranSMART.\n\nA limitation of the working prototype of the Galaxy EGA download streamer is that it requires setting up a generic EGA account for the entire public Galaxy server. This means that any user can only access the data files that are available for that generic account rather than for a personal account. We considered several solutions:\n\nA secure input type for passwords. However, Galaxy currently does not support password input types, and textual input types are recorded in the database, which means they could be exposed when history items are shared with other users.\n\nAdapt EGA so that it issues tokens that allow download of a particular file within a particular time window. However, re-running the tool would require selecting a new token. For this setup, it would be ideal to have a non-persisted input type.\n\nAn authentication management mechanism within Galaxy. 
If a user configures credentials within Galaxy, Galaxy can manage these and automatically connect to EGA on request (OAuth model).\n\nDue to the current limitations of data protection and access control in a public Galaxy service, a private Galaxy instance seems to be a practical solution to this problem, keeping data access limited to a small research group. This does require extra expertise to properly establish the service in a secure environment.\n\n\nData and software availability\n\nThe software can be found in the main Galaxy tool shed at the following URL: https://toolshed.g2.bx.psu.edu/view/yhoogstrate/ega_download_streamer\n\nThe source code of the Galaxy EGA download streamer is available at Github: https://github.com/ErasmusMC-Bioinformatics/galaxytools-emc\n\nArchived source code of the Galaxy EGA download streamer at the time of publication: https://zenodo.org/record/167330; doi: 10.5281/zenodo.16733017. License: MIT\n\nThe workflow is publicly accessible and can be downloaded at: https://bioinf-galaxian.erasmusmc.nl/galaxy/u/yhoogstrate/w/ega-vcap-rna-seq-demo\n\nThe workflow with corresponding data is explained in more detail at the published Galaxy page, including a description of the results: https://bioinf-galaxian.erasmusmc.nl/galaxy/u/yhoogstrate/p/egavcap-rna-seq-star-fusion-demo\n\nThe workflow is also described at myExperiment: http://www.myexperiment.org/workflows/4924.html\n\nThe data are accessible at the following URL: http://bioinf-galaxian.erasmusmc.nl/galaxy/u/yhoogstrate/h/galaxy-ega-star-fusion-demo\n\nThe raw paired-end FASTQ sequencing reads used in the \"Use case\" section can be downloaded from EGA using EGA File identifiers: EGAF00001210838 (forward) and EGAF00001210839 (reverse). 
These files belong to EGA dataset EGAD00001001626, which belongs to the study that can be accessed from https://ega-archive.org/studies/EGAS00001001476.\n\nTwo additional reference files required in the workflow can be downloaded from the \"Shared Data\" section of Galaxy: https://bioinf-galaxian.erasmusmc.nl/galaxy/library/list#folders/F8f0c64b106db6693", "appendix": "Author contributions\n\n\n\nYH, SH and A.Se developed the Galaxy EGA download streamer; YH developed and updated the Galaxy wrappers for RNA-STAR and STAR-Fusion and made upstream changes in Galaxy for workflow compatibility; YH, A.St, DS, SA, JR, SH, JB, GM, DvE, RF, JWB, GJ, JH designed the workflows and the data ecosystem and provided the use cases; YH and CZ did the testing; SA, DS and SR coordinated the implementation study; YH, CZ and SA prepared the first draft of the manuscript; all the authors were involved in revising the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis EGA-TraIT implementation study is supported by ELIXIR. YH, CZ, JB, SH, RF, JWB, GM, A.St and SA are all supported by CTMM-TraIT (grant agreement number 05T-401).\n\nA.Se and DS are supported by ELIXIR; the research is supported by ELIXIR-EXCELERATE, ELIXIR and the European Molecular Biology Laboratory. ELIXIR-EXCELERATE is funded by the European Commission within the Research Infrastructures programme of Horizon 2020 (grant agreement number 676559); ELIXIR is the research infrastructure for life-science data.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Justin Paschall and Serena Scollen for helpful suggestions and comments. 
Jochem Bijlard contributed his work mostly when he was working at the computer science department, Vrije Universiteit Amsterdam.\n\n\nSupplementary material\n\nSupplementary File 1: A user guide for Galaxy EGA download streamer, including installation and usage.\n\nClick here to access the data\n\n\nReferences\n\nBierkens M, van der Linden W, van Bochove K, et al.: tranSMART. J Clin Bioinforma. 2015; 5(Suppl 1): S9. Publisher Full Text | Free Full Text\n\nLappalainen I, Almeida-King J, Kumanduri V, et al.: The European Genome-phenome Archive of human data consented for biomedical research. Nat Genet. 2015; 47(7): 692–695. PubMed Abstract | Publisher Full Text\n\nScheufele E, Aronzon D, Coopersmith R, et al.: tranSMART: An Open Source Knowledge Management and High Content Data Analytics Platform. AMIA Jt Summits Transl Sci Proc. 2014; 2014: 96–101. PubMed Abstract | Free Full Text\n\nTaylor J, Schenck I, Blankenberg D, et al.: Using galaxy to perform large-scale interactive data analyses. Curr Protoc Bioinformatics. 2007; Chapter 10: Unit 10.5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHillman-Jackson J, Clements D, Blankenberg D, et al.: Using Galaxy to perform large-scale interactive data analyses. Curr Protoc Bioinformatics. 2012; Chapter 10:Unit10.5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlankenberg D, Von Kuster G, Bouvier E, et al.: Dissemination of scientific software with Galaxy ToolShed. Genome Biol. 2014; 15(2): 403. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJhavar S, Reid A, Clark J, et al.: Detection of TMPRSS2-ERG translocations in human prostate cancer by expression profiling using GeneChip Human Exon 1.0 ST arrays. J Mol Diagn. 2008; 10(1): 50–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTomlins SA, Laxman B, Varambally S, et al.: Role of the TMPRSS2-ERG gene fusion in prostate cancer. Neoplasia. 2008; 10(2): 177–188. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDobin A, Davis CA, Schlesinger F, et al.: STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013; 29(1): 15–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCock PJ, Fields CJ, Goto N, et al.: The Sanger FASTQ file format for sequences with quality scores, and the Solexa/Illumina FASTQ variants. Nucleic Acids Res. 2010; 38(6): 1767–1771. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlankenberg D, Gordon A, Von Kuster G, et al.: Manipulation of FASTQ data with Galaxy. Bioinformatics. 2010; 26(14): 1783–1785. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJoshi NA, Fass JN: Sickle: a sliding-window, adaptive, quality-based trimming tool for FastQ files. 2011. Reference Source\n\nTeles Alves I, Hiltemann S, Hartjes T, et al.: Gene fusions by chromothripsis of chromosome 5q in the VCaP prostate cancer cell line. Hum Genet. 2013; 132(6): 709–713. PubMed Abstract | Publisher Full Text\n\nKim D, Salzberg SL: TopHat-Fusion: an algorithm for discovery of novel fusion transcripts. Genome Biol. 2011; 12(8): R72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoble CA, Bhagat J, Aleksejevs S, et al.: myExperiment: a repository and social network for the sharing of bioinformatics workflows. Nucleic Acids Res. 2010; 38(Web Server issue): W677–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoogstrate Y, Hiltemann S: ErasmusMC-Bioinformatics/galaxytoolsemc: v1.0 ega_download_streamer [Data set]. Zenodo. 2016. Data Source" }
[ { "id": "19733", "date": "02 Feb 2017", "name": "Hervé Ménager", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nHoogstrate et al. present a Galaxy plugin (tool) to enable direct access to the European Genome-phenome Archive. The purpose of this work is to enable direct access from the Galaxy workbench to the data stored in the EGA, whose access is restricted due to their personal nature (personally identifiable biomedical data). Such components are indeed a requirement to implement user-level access to restricted data, and are aligned with the goals of CTMM-TraIT and ELIXIR. The proposed solution is simple, pragmatic and effective.\nThe current limitation of this work is the authentication for users, who have to share their credentials at the level of the Galaxy instance. It is very clearly explained, and I agree that this restricts the usage of this tool to small-group or private instances. This very problem is however discussed by the Galaxy community (see https://github.com/galaxyproject/galaxy/pull/393, https://github.com/galaxyproject/galaxy/pull/3121, https://github.com/galaxyproject/galaxy/pull/3383) and hopefully solving it will enable such use cases on institution-wide and public Galaxy servers.\nOne aspect I would like to see discussed, although it is probably a bit beyond the scope of this paper, is the implementation of the first analysis step (figure 1) of the designed ecosystem which produces the interpreted data stored into TransMart. 
Ideally, if the Galaxy system is used for reanalysis of the raw and interpreted data, one would expect that the \"Analysis\" uses the same Galaxy workflows. It does not seem to be the case here, and I would be very interested in hearing what motivates selecting the alternative that was preferred to this solution.", "responses": [] }, { "id": "19969", "date": "06 Feb 2017", "name": "Anton Nekrutenko", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis manuscript describes a proof of principle for accessing the data contained within the European Genome-phenome Archive (EGA) via the Galaxy analysis environment. This is a very timely and important development, as it would dramatically increase the utility of resources such as EGA. This is because the continuous accumulation of new data and the development of new technologies will undoubtedly result in the need to reanalyze previously generated datasets and to combine them with newly acquired research outcomes. Having the ability to securely request and analyze the data via Galaxy makes such re-analyses straightforward and convenient, resulting in more researchers performing these tasks. My hope is that this would push other sites with similar data (e.g., dbGaP) to implement similar software solutions.\nI have several comments:\nTo access the data a user must have the necessary security credentials. Once the data is transferred into Galaxy it leaves the EGA filesystem. How do the authors secure this particular Galaxy instance? 
Other sites housing protected data will need this information to make the decision on whether to adopt a similar strategy.\nIt is necessary to explain why the Galaxy instance for EGA access requires a minimum of 30 GB RAM and 100 GB of disk space. It seems that these requirements are dictated by data size and the types of tools (i.e., STAR tools require a considerable amount of memory to perform analyses of RNAseq data). Again, other sites adopting Hoogstrate et al.'s approach should be aware of the fact that Galaxy itself requires few resources; instead, the underlying tools and data would dictate hardware specifications.\nThe Galaxy Toolshed is introduced in the paper with little explanation of what it is. A couple of sentences describing this “AppStore” will be helpful to readers who are not familiar with Galaxy’s ecosystem.\nWhile it is a good starting example, it would be helpful to show that one can generate more concrete results with Galaxy beyond just filtering the STAR-Fusion output. Is there any interesting bit of biological information that can be added to the paper?\nWhen I tried to access a history at https://bioinf-galaxian.erasmusmc.nl/galaxy/u/yhoogstrate/w/ega-vcap-rna-seq-demo the browser timed out.", "responses": [] } ]
1
https://f1000research.com/articles/5-2841
https://f1000research.com/articles/5-2834/v1
08 Dec 16
{ "type": "Research Article", "title": "Effect of diclofenac suppository on pain control during flexible cystoscopy-A randomized controlled trial", "authors": [ "Mehwash Nadeem", "M Hammad Ather", "Mehwash Nadeem" ], "abstract": "TRIAL DESIGN: To compare, in a randomized controlled trial, the difference in pain score during flexible cystoscopy between patients undergoing the procedure with plain lubricating gel only and those receiving plain gel with a diclofenac suppository. METHODS: A total of 60 male patients with an indication for flexible cystoscopy were enrolled in a prospective, randomized controlled study. Patients were randomized into two groups. In group “A”, patients received a diclofenac suppository one hour prior to the procedure, while group “B” did not. Both groups received 10 ml of intra-urethral plain gel for lubrication during flexible cystoscopy. Pain score was recorded immediately after the procedure using the visual analogue scale (VAS). Pre- and post-procedure pulse rate and systolic blood pressure were also recorded. Statistical analyses were performed using the chi-square test and Student's t-test. Regression analysis was performed to address confounding variables. RESULTS: Both groups were comparable for variables including age, duration of procedure, level of operating surgeon and indication for the procedure. The most common indication for flexible cystoscopy was removal of a double J stent. There was a statistically significant difference in the mean pain score between the two groups (p = 0.012). The difference in post-procedure mean pulse rate between the two groups was statistically significant (p = 0.01); however, no difference was observed in mean post-procedure systolic blood pressure. Regression analysis showed that none of the confounding variables significantly affected pain perception. CONCLUSIONS: An intra-rectal diclofenac suppository provides simple and effective pre-emptive analgesia. 
We recommend its routine use during flexible cystoscopy for better pain control.", "keywords": [ "diclofenac suppository", "pain control", "flexicystoscopy", "office urology" ], "content": "Introduction\n\nThe earliest reported use of a flexible endoscope for examination of the bladder neck was by Tsuchida and Sugawara1. It is now one of the most commonly performed diagnostic as well as therapeutic urologic interventions2. Pain associated with cystoscopy varies from patient to patient, and there is a continuous effort, using various methods, to reduce pain during and after the procedure and so improve patient compliance with flexible cystoscopy. The majority of patients require local anesthesia or a lubricant solution only, but some patients may require intravenous sedation3 or inhalation analgesia (nitrous oxide)4. Factors contributing to the severity of pain include lubrication, use of topical anesthesia and duration of cystoscopy5–7, but the available evidence for best practice in terms of treatment is continuously evolving8. The important issues regarding the correct use of intra-urethral gels are, for the most part, left to individual preference9. The effect of different intra-urethral gels, their dosage, temperature and time of instillation on pain perception has been evaluated in the literature. In a randomized controlled trial, 2% lidocaine gel in two different doses (10 and 20 ml) and plain lubricating gel were found to be equally effective for pain control during flexible cystoscopy (p=0.406)10. A meta-analysis by Aaronson et al.11 reported less pain with lidocaine than with plain lubricating gel, while another meta-analysis, by Patel et al., reported no statistical difference between the two gels for pain control12. 
In a study by Komiya et al., oral zaltoprofen was used as pre-emptive analgesia for rigid cystoscopy and was shown to provide better pain control than 2% lidocaine gel alone (11.35 versus 13.69, with a difference in pain score of -2.8, p-value 0.0087)13. Intra-rectal diclofenac suppository administration, used by Irer et al., has a proven role in reducing pain and improving patients’ tolerance of transrectal ultrasound-guided prostate biopsy14.\n\nDiclofenac is an anti-inflammatory drug with local and systemic effects; the local effects include reducing the impact of pain mediators. The diclofenac suppository, in comparison to the oral form, has a rapid onset and a slower rate of absorption. The maximal plasma level is reached within 2 hours and is maintained for up to 12 hours, which forms the basis for using a suppository rather than an oral NSAID in our study15. In the current study we assess the use of a diclofenac suppository as pre-emptive analgesia during flexible cystoscopy.\n\n\nMethodology\n\nThe Ethical Review Committee of the Aga Khan University and the Clinical Trial Unit approved the study protocol. The study was registered at www.clinicaltrials.gov (ClinicalTrials.gov identifier: NCT01812928). This trial was conducted at the surgical day care unit from February 2013 to July 2013.\n\nDetails of recruitment and the flow of the study are shown in the CONSORT flow diagram (Figure 1). The principal investigator of this study obtained written consent from all qualified patients before randomization. All male patients 18 years of age and older with an indication for flexible cystoscopy were assessed for recruitment into the trial. We included all adult males who attended for evaluation of hematuria or lower urinary tract symptoms and those for removal of a double J ureteral stent. All patients undergoing the procedure had a urinalysis and culture to exclude UTI. 
Patients were requested to empty the bladder immediately prior to the procedure or within 30 minutes. Prior to the procedure, the visual analog scale (VAS; a score of zero means no pain and 10 means worst pain) was explained to the patients. Eligible patients were randomized into either Group A (patients who received a diclofenac suppository prior to the procedure) or Group B (patients who did not) by a computer-generated list (web-based random number generator: RANDOM.ORG, Dublin, Ireland; https://www.random.org) and sealed envelopes. The diclofenac suppository (100 mg) was administered rectally 1 hour prior to the procedure in the pre-operative area. Both groups received 10 ml of plain lubricating gel immediately before the procedure for the purpose of lubrication.\n\nThe procedure was performed at the surgical day care unit in the supine position by a consultant urologist or a senior urology resident (residency years 5 and 6) who was blinded to the randomization group. A second resident collected the data (pain score) in the operating room immediately following the procedure. The VAS consists of a straight line with the endpoints defining extreme limits such as ‘no pain at all’ and ‘pain as bad as it could be’16. The investigator was blinded to the group (independent assessor). Operative time was recorded from the operating room time log. Pre- and post-procedure pulse rate and blood pressure were recorded for all participants.\n\nData were analyzed using SPSS™ version 17.0. Results were described in terms of mean and standard deviation for age, duration of procedure and pain score, while frequency and percentage were given for categorical variables. Student's t-test (independent samples, one-tailed) was used to determine the statistical significance of the difference in VAS pain scores between groups A and B. Confounders and effect modifiers, i.e. 
age, level of the person performing the procedure, indication for the procedure and duration of the procedure, were analyzed using linear regression analysis. A p-value of <0.05 was considered statistically significant.\n\n\nResults\n\nSeventy-three patients were evaluated for inclusion in the study. A total of sixty patients were recruited into the trial and analyzed. The mean age was 46.75 ± 16.12 years (range: 18–80). The most common indication for flexible cystoscopy was removal of a double J stent (n=38, 63.3%); the others were evaluation of hematuria (n=16, 26.7%) and lower urinary tract symptoms (n=6, 10%). Year 5 and 6 urology residents performed the majority of the procedures (n=56). The mean duration of the procedure was 5.52 ± 2.13 minutes (range: 2–10 minutes). On the 11-point VAS the mean pain score was 3.63 with a standard deviation of 1.46 for the entire group (range: 0–7). The highest pain score, 7 on the VAS, was reported by only one patient, from group B.\n\nThe mean ages of the patients in group A and group B were 48.53 ± 17.81 years and 44.97 ± 14.31 years, respectively, with no statistically significant difference (p= 0.53). The pre-procedure pulse and systolic blood pressures were comparable in both groups. The mean duration of the procedure in group A was 5.76 ± 2.25 minutes and in group B was 5.28 ± 2.00 minutes; this difference was not statistically significant (p=0.82). Indications for the procedure and level of operating surgeon were also comparable between the groups.\n\nThe mean pain score in group A was 3.16 ± 1.53 and in group B was 4.10 ± 1.24. This difference in the mean pain score was found to be statistically significant (p= 0.012). None of our patients required additional analgesia in either group. 
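The reported comparison of pain scores can be checked from the summary statistics alone. The sketch below computes the independent-samples t statistic from the group means and standard deviations, assuming 30 patients per group (implied by the 60-patient total but not stated per group); with equal group sizes the pooled-variance and Welch forms coincide:

```python
import math

def t_from_summary(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic from summary statistics."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)  # standard error of the mean difference
    return (m1 - m2) / se

# Group A (diclofenac): 3.16 +/- 1.53; group B (plain gel only): 4.10 +/- 1.24
t = t_from_summary(3.16, 1.53, 30, 4.10, 1.24, 30)
# t is approximately -2.6, in line with the reported p = 0.012
```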
The difference in post-procedure pulse rate between the groups was statistically significant (p=0.01); however, no statistically significant difference (p=0.15) was observed in systolic blood pressure between the two groups (Table 1).\n\nLinear regression analysis was performed. None of the confounding factors (age, indication for procedure, level of operating surgeon and duration of procedure) had a significant impact on the outcome parameter (r2 = 0.026, standard error of estimate = 1.479; Table 2).\n\nTable 2 (model summary); predictors: (constant), indication for procedure, duration of procedure.\n\n\nDiscussion\n\nWe examined the effect of pre-emptive analgesia on pain perception during flexible cystoscopy and found that a diclofenac suppository significantly reduces pain when administered as pre-emptive analgesia before flexible cystoscopy.\n\nA meta-analysis of randomized studies of lidocaine versus plain gel by Patel et al.12, which included 817 patients, showed that intraurethral lidocaine gel had no statistically significant effect on pain on a 100-point VAS (95% CI: -9.6 to 0.385). This meta-analysis challenged the commonly held belief among clinicians that intraurethral lidocaine gel is more efficacious than plain gel for decreasing pain during flexible cystoscopy12. In contrast to the findings of Patel et al.12, Cornel et al. observed slightly less pain (statistically non-significant) in the test group, and pain perception was the same between patients with previous experience of cystoscopy and those undergoing their first cystoscopy17. To avoid this bias, we applied strict inclusion criteria and excluded all patients with previous experience of flexible cystoscopy.\n\nThe present study has demonstrated a significant reduction in pain perception during flexible cystoscopy in male patients with the use of a diclofenac suppository as pre-emptive analgesia. 
The sample size was calculated a priori to detect the effect, according to Lwanga et al.18 We followed stringent criteria for enrollment of patients in this trial to eliminate confounding factors for pain. Computer-generated sequences were used for randomization, giving all recruited patients an equal chance of allocation to either group.\n\nFlexible cystoscopy is often performed repeatedly, in particular during the follow-up of urothelial cancer. As repeated cystoscopy did not increase patients' tolerance of the pain associated with cystoscopy, Müezzinoglu noted the need for more effective anesthesia to improve tolerability during the procedure and maintain the quality of life of patients under long-term follow-up with repeated cystoscopies19. To date, various techniques have been used to ameliorate the perception of pain during flexible cystoscopy. The use of NSAIDs as pre-emptive analgesia has been tested for various surgical procedures20,21. Komiya and co-workers examined the effect of the anti-inflammatory drug (NSAID) zaltoprofen, which inhibits the generation of prostaglandins as well as the pain induced by bradykinin, during rigid cystoscopy13. The mean age of the patients in their study was 69.3 ± 8.2 years (range: 41–83), while our study included relatively younger subjects (mean ± SD: 46.75 ± 16.1 years; range: 18–80 years), who are presumably more anxious and have a lower pain threshold. Despite this, the diclofenac suppository significantly reduced pain perception and, on regression analysis, proved effective regardless of age. Another matter of debate is the statistical method used in the study by Komiya et al.13, who used a “one-sample Wilcoxon test” to compare the two groups; this is an inappropriate test to demonstrate the effect. The one-sample Wilcoxon signed-rank test is a non-parametric alternative to a one-sample t-test, determining whether the median of the sample is equal to some specified value. 
Data should be distributed symmetrically about the median. In the present study we used regression analysis, which is a more stringent method to demonstrate the effect.\n\nIn our study, we used a diclofenac suppository as pre-emptive analgesia. The pharmacokinetics of the suppository form are quite different from those of the orally administered agent. It acts as an anti-inflammatory drug both locally and systemically, by minimizing the effects of local mediators involved in the pain response. Diclofenac has been marketed internationally since 1973 and is currently available in oral, rectal, parenteral and topical preparations15. The efficacy of the diclofenac suppository is due to a more rapid onset of effect, together with a slower rate of absorption (complete absorption takes approximately 4.5 hours), than oral enteric-coated tablets. The maximal plasma level is attained within 2 hours and is maintained for up to 12 hours15. The terminal half-life of diclofenac in plasma is 1 to 2 hours. The major route of excretion is the urine (~60%), with a small percentage excreted through bile in the feces22. The suppository proved effective for pain control during transrectal ultrasound-guided prostate biopsy in the study by Haq et al.23, in which the investigators noted that it is a simple and safe method. Irer et al.14 showed an additional benefit of using lidocaine gel for pain control during the same procedure, but the statistical significance of that study is in question due to its smaller sample size.\n\nIn the present study, an appropriate sample size, stringent criteria for recruitment, computer-generated randomization, and proper statistical methods and analysis have increased the scientific rigor. The trial was not placebo-controlled, as various rectally administered medications or “dummy drugs” may themselves have some local inflammatory effect.\n\n\nConclusion\n\nAn intrarectal diclofenac suppository is a simple and effective method to reduce pain during flexible cystoscopy, regardless of age. 
We recommend its routine use for better tolerability of pain and to increase patient compliance.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for ‘Effect of diclofenac suppository on pain control during flexible cystoscopy-A randomized controlled trial’, 2016, 10.5256/f1000research.9519.d145268", "appendix": "Author contributions\n\n\n\nM Nadeem: conception, study conduct, data analysis, writing manuscript.\n\nMH Ather: conception and study design, writing manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by a University Research Council Grant (70823) to HA.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nCONSORT checklist and original study protocol.\n\n\nReferences\n\nTsuchida S, Sugawara H: A new flexible fibercystoscope for visualization of the bladder neck. J Urol. 1973; 109(5): 830–1. PubMed Abstract\n\nBeaghler M, Grasso M 3rd, Loisides P: Inability to pass a urethral catheter: the bedside role of the flexible cystoscope. Urology. 1994; 44(2): 268–70. PubMed Abstract | Publisher Full Text\n\nSong YS, Song ES, Kim KJ, et al.: Midazolam anesthesia during rigid and flexible cystoscopy. Urol Res. 2007; 35(3): 139–42. PubMed Abstract | Publisher Full Text\n\nCalleary JG, Masood J, Van-Mallaerts R, et al.: Nitrous oxide inhalation to improve patient acceptance and reduce procedure related pain of flexible cystoscopy for men younger than 55 years. J Urol. 2007; 178(1): 184–8; discussion 188. PubMed Abstract | Publisher Full Text\n\nKobayashi T, Nishizawa K, Ogura K: Is instillation of anesthetic gel necessary in flexible cystoscopic examination? A prospective randomized study. Urology. 2003; 61(1): 65–8. 
PubMed Abstract | Publisher Full Text\n\nKobayashi T, Nishizawa K, Mitsumori K, et al.: Instillation of anesthetic gel is no longer necessary in the era of flexible cystoscopy: a crossover study. J Endourol. 2004; 18(5): 483–6. PubMed Abstract | Publisher Full Text\n\nHerr HW, Schneider M: Immediate versus delayed outpatient flexible cystoscopy: final report of a randomized study. Can J Urol. 2001; 8(6): 1406–8. PubMed Abstract\n\nSoomro KQ, Nasir AR, Ather MH: Impact of patient's self-viewing of flexible cystoscopy on pain using a visual analog scale in a randomized controlled trial. Urology. 2011; 77(1): 21–3. PubMed Abstract | Publisher Full Text\n\nTzortzis V, Gravas S, Melekos MM, et al.: Intraurethral lubricants: a critical literature review and recommendations. J Endourol. 2009; 23(5): 821–6. PubMed Abstract | Publisher Full Text\n\nMcFarlane N, Denstedt J, Ganapathy S, et al.: Randomized trial of 10 mL and 20 mL of 2% intraurethral lidocaine gel and placebo in men undergoing flexible cystoscopy. J Endourol. 2001; 15(5): 541–4. PubMed Abstract | Publisher Full Text\n\nAaronson DS, Walsh TJ, Smith JF, et al.: Meta-analysis: does lidocaine gel before flexible cystoscopy provide pain relief? BJU Int. 2009; 104(4): 506–9; discussion 9–10. PubMed Abstract | Publisher Full Text\n\nPatel AR, Jones JS, Babineau D: Lidocaine 2% gel versus plain lubricating gel for pain reduction during flexible cystoscopy: a meta-analysis of prospective, randomized, controlled trials. J Urol. 2008; 179(3): 986–90. PubMed Abstract | Publisher Full Text\n\nKomiya A, Endo T, Kobayashi M, et al.: Oral analgesia by non-steroidal anti-inflammatory drug zaltoprofen to manage cystoscopy-related pain: a prospective study. Int J Urol. 2009; 16(11): 874–80. 
PubMed Abstract | Publisher Full Text\n\nIrer B, Gulcu A, Aslan G, et al.: Diclofenac suppository administration in conjunction with lidocaine gel during transrectal ultrasound-guided prostate biopsy: prospective, randomized, placebo-controlled study. Urology. 2005; 66(4): 799–802. PubMed Abstract | Publisher Full Text\n\nIdkaidek NM, Amidon GL, Smith DE, et al.: Determination of the population pharmacokinetic parameters of sustained-release and enteric-coated oral formulations, and the suppository formulation of diclofenac sodium by simultaneous data fitting using NONMEM. Biopharm Drug Dispos. 1998; 19(3): 169–74. PubMed Abstract | Publisher Full Text\n\nFreyd M: The graphic rating scale. J Educ Psychol. 1923; 14(2): 83–102. Publisher Full Text\n\nCornel EB, Oosterwijk E, Kiemeney LA: The effect on pain experienced by male patients of watching their office-based flexible cystoscopy. BJU Int. 2008; 102(10): 1445–6. PubMed Abstract | Publisher Full Text\n\nLwanga SK, Lemeshow S: Sample size determination in health studies. WHO: Geneva, 1991. Reference Source\n\nMüezzinoglu T, Ceylan Y, Temeltaş G, et al.: Evaluation of pain caused by urethrocystoscopy in patients with superficial bladder cancer: a perspective of quality of life. Onkologie. 2005; 28(5): 260–4. PubMed Abstract | Publisher Full Text\n\nNagatsuka C, Ichinohe T, Kaneko Y: Preemptive effects of a combination of preoperative diclofenac, butorphanol, and lidocaine on postoperative pain management following orthognathic surgery. Anesth Prog. 2000; 47(4): 119–24. PubMed Abstract | Free Full Text\n\nBuvanendran A, Kroin JS: Multimodal analgesia for controlling acute postoperative pain. Curr Opin Anaesthesiol. 2009; 22(5): 588–93. PubMed Abstract | Publisher Full Text\n\nZmeili S, Hasan M, Najib N, et al.: Bioavailability and pharmacokinetic properties of 2 sustained-release formulations of diclofenac sodium, Voltaren vs inflaban: effect of food on inflaban bioavailability. Int J Clin Pharmacol Ther. 
1996; 34(12): 564–70. PubMed Abstract\n\nHaq A, Patel HR, Habib MR, et al.: Diclofenac suppository analgesia for transrectal ultrasound guided biopsies of the prostate: a double-blind, randomized controlled trial. J Urol. 2004; 171(4): 1489–91. PubMed Abstract | Publisher Full Text\n\nAther MH, Nadeem M: Dataset 1 in: Effect of diclofenac suppository on pain control during flexible cystoscopy-A randomized controlled trial. F1000Research. 2016. Data Source" }
[ { "id": "18368", "date": "19 Dec 2016", "name": "Noor Buchholz", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nK Das and N Buchholz: Comment on “Nadeem M and Ather MH. Effect of diclofenac suppository on pain control during flexible cystoscopy-A randomized controlled trial [version 1; referees: awaiting peer review]. F1000Research 2016, 5:2834 (doi: 10.12688/f1000research.9519.1)”\nFirstly, I congratulate the authors for a well-chosen study topic and a well-designed trial. Flexible cystoscopy being a commonly practised office procedure, usually without the involvement of anesthetists, the choice of appropriate analgesia always remains a dilemma for the practising urologist. A gamut of approaches, ranging from local anaesthetics and lubricant solutions to inhalational agents, has been practised with mixed reports. Use of a diclofenac suppository, as advocated in this study, appears to be a simple and effective approach for conducting this procedure. The statistics support the authors' claim regarding the efficacy of this drug. 
Considering the ease of administration, it will be a good option for the practising urologist conducting office flexible cystoscopy.\nIt would be interesting to know the authors' feedback on the following aspects from their experience in this study:\nWas there a difference in pain perception between individuals undergoing diagnostic flexible cystoscopy and those undergoing flexible cystoscopy for stent removal, as the latter often involves an added pain stimulus due to additional instrumentation?\n\nWas the benefit of the diclofenac suppository equally perceived in males and females? Our personal observation is that flexible cystoscopies are presumably easier and better tolerated in females than in males.\n\nThe last sentence of the introduction says “In the current study we have attempted to assess the use of diclofenac suppository as a pre-emptive analgesia during flexible ureteroscopy”. We presume this is a typographical error and should read “flexible cystoscopy” instead of “flexible ureteroscopy”; if so, this needs to be corrected.", "responses": [ { "c_id": "2388", "date": "29 Dec 2016", "name": "M Hammad Ather", "role": "Author Response", "response": "We thank Drs. Buchholz and Das for their valuable comments on the above submission. In response to the questions raised: No, we did not find any difference between diagnostic cystoscopy and interventional flexible cystoscopy (JJ stent removal). The subgroup analysis, not shared in the data (as the numbers were small), was insignificant; however, our own experience is that there is indeed a momentary discomfort when the stent is being pulled through the bladder neck. The second question is also an important point, as due to the length of the urethra there is a potential for greater discomfort in men during flexible cystoscopy. We, however, notice no significant difference in our practice (perhaps due to the routine use of low-pressure irrigation during introduction of the scope to keep the urethra open). 
However, in our current work we only studied men. Indeed, it is a typographical error; many thanks for pointing this out." } ] }, { "id": "20345", "date": "17 Feb 2017", "name": "Michael Chrisofos", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nI congratulate the authors for running a well-designed trial of acceptable scientific standard; based on its results, the use of a diclofenac suppository seems to be an excellent choice.\n\nThe issue of procedures performed without an anesthetist, as with any examination like flexible cystoscopy, is always a common concern for the practicing urologist. Worldwide, many local anesthetic drugs or substitutes providing transient analgesia are used, with little success at best and side effects at worst. However, an analgesic that can be readily administered, avoids the adverse effects associated with poor gastric tolerability, and is sufficiently effective as a pain reliever during flexible cystoscopy is not only desirable but imperative. We will certainly take into account the fact that during both rigid and flexible cystoscopy there is a potential for greater momentary discomfort when a stent is being pulled through the bladder neck and due to the length of the urethra. 
An effective analgesic will not only help the patient tolerate the procedure well, it will also allow the urologist to perform the cystoscopy more calmly.\n\nMoreover, it would be interesting to know the authors' feedback on the response of patients to the administration of diclofenac according to sex and age.", "responses": [ { "c_id": "2501", "date": "20 Feb 2017", "name": "M Hammad Ather", "role": "Author Response", "response": "We appreciate the comments and input of Dr Chrisofos. He raised an important point concerning differences in pain perception related to gender and age. The current study was conducted only in men, as we were expecting significant differences in pain perception between the genders due to the length of the urethra. All patients in the current work were men. The data indicate that there was no difference in pain perception between the various age groups; however, this finding has no statistical value, as the numbers in the subgroups were small. We do feel that a larger study with an appropriate sample size calculation could show a difference in pain perception between genders and, similarly, that younger patients feel greater discomfort compared with elderly men and women." } ] }, { "id": "20039", "date": "27 Feb 2017", "name": "Christian Bach", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors are to be congratulated for this randomised controlled trial comparing pain during flexible cystoscopy with or without rectal diclofenac. 
The methodology is sound and the paper is conclusively written. The discussion covers the relevant literature and the conclusion is supported by the data. A step up in methodology would have been a double-blind, placebo-controlled trial, which would further validate the results, but this limitation is mentioned by the authors. Overall, I recommend indexing of the article in its current form and do not really see leeway for improvement.", "responses": [] } ]
1
https://f1000research.com/articles/5-2834
https://f1000research.com/articles/5-1540/v1
29 Jun 16
{ "type": "Opinion Article", "title": "Not Communicating science? Aiming for national impact", "authors": [ "Andreas Prokop", "Sam Illingworth" ], "abstract": "Communicating science to wider lay audiences is of increasing importance and is becoming an ever larger part of a scientist's remit which also offers important opportunities. We discuss here the current state of science communication in the field of the natural sciences in the UK, and the enormous improvements that could be achieved through putting more weight on objective-driven long-term initiatives, ideally in the form of interdisciplinary networks, to achieve higher impact. We describe the barriers that stand in the way of such developments and make a number of suggestions how funding organisations in particular could play a major role in overcoming these barriers.", "keywords": [ "Science communication", "funding", "natural sciences" ], "content": "Communicating science: importance and opportunities\n\nFrom the various media outlets one could get the impression that natural sciences and engineering have never been more popular; however, is this true and, if not, do we do enough to raise awareness and encourage participation? A Public Attitudes to Science survey in the United Kingdom, conducted by Ipsos MORI in partnership with the British Science Association (Castell et al., 2014), reported a clear deficit amongst the general public, with only 45% of respondents (n=1749, age >16) feeling aware of science in general and 51% stating they received too little information. A recent report led by the Wellcome Trust (TNS BMRB & PSI, 2015) found that public engagement is more firmly embedded in the context of the arts, humanities and social sciences than it is among researchers in Science, Technology, Engineering and Mathematics (STEM) subjects. Do STEM scientists communicate their science effectively? Do they miss out on important opportunities to engage lay audiences (i.e. 
not only non-scientists, but also those who are not experts in the particular field) with their research, in spite of the fact that many funding bodies worldwide made it an obligation, over a decade ago, that researchers explain their research to lay audiences (see e.g. Holbrook, 2005)? In this article we will discuss our view of the current practice of science communication in the UK, focussing on the natural sciences. Here we use the term ‘science communication’ as an umbrella term to avoid confusion with other descriptors commonly used, such as outreach, public engagement and widening participation activities (Illingworth et al., 2015). We will discuss desirable standards and barriers to improvement, and make suggestions on how to overcome these barriers. But first, we will briefly summarise why science communication is an important and worthwhile activity for scientists in general.\n\nOne of the most popular arguments for effective science communication is that the general public has a right to be informed about the scientific research which is paid for by tax money or charity funds. There is also a moral obligation to ensure that political discussions are based on sound scientific evidence. To this end, active science communication can be an important means to counter existing misconceptions in the public sphere, to explain the pros and cons when scientific opinion is divided, and to promote trust in science and education policies and practices (Bubela et al., 2009; Scheufele, 2014). Failing to establish effective two-way communication can result in public consensus establishing opinions that are then difficult to reverse, as exemplified by the public debates on genetically modified crops and geoengineering (e.g. Borch & Rasmussen, 2005; Luokkanen et al., 2014). 
As a more direct incentive for scientists, effective science communication can also help to raise the visibility of a subject, paving the way towards wider acceptance and generating opportunities to exert sustained influence on public opinion as well as policy makers, and to impact positively on political funding decisions (Rowe et al., 2010).\n\nAs discussed elsewhere (Baram-Tsabari & Osborne, 2015), science communication and science education are two sides of the same coin, hence science communication can contribute to the improvement of science education, thus achieving a greater understanding within a scientifically more literate and better informed public, ultimately impacting positively on society and innovation (Rull, 2014).\n\nScience communication can also be of important professional and personal benefit to the scientists. Well-designed science communication activities can become a valuable time investment; communication with lay audiences encourages deeper thinking, by compelling researchers to find the phrases, terms and images that can help to engage diverse audiences in their science. Finding such language can have a positive impact on scientific perspectives and communication strategies towards fellow scientists, with arguments fundamental enough to convince and excite lay audiences, also powerful in presentations, publications and funding applications aimed at expert scientists (Patel & Prokop, 2015).\n\nFinally, genuine engagement in science communication improves career opportunities inside and outside of academia. For the large number of young researchers taking up careers outside scientific research (Allen, 2010), science communication offers fantastic opportunities to transcend the academic environment by developing important transferable skills, for example didactic qualities for future teachers, or experience with production, press, marketing and audience dynamics for those wanting to work in the media. 
Likewise, for researchers wishing to stay in academia, the additional skills in communication, teaching, project and people management can help to make candidates stand out from their contemporaries in future grant and job applications (Illingworth & Roop, 2015).\n\n\nThe current state: barriers and scope for improvement\n\nThe overarching long-term societal goal of science communication in the field of the natural sciences (from now on referred to simply as science) has to be to bring down barriers between scientists and lay audiences, and to improve the general understanding, appreciation of and fascination for science nationwide.\n\nGood long-term science communication strategies have been adopted by patient interest groups and disease-specific societies, which naturally have very receptive, emotionally involved audiences with personal long-term interests (e.g. Acquadro et al., 2003; Smith, 2006). In contrast, the communication of STEM-based science subjects cannot build on these personal and emotional motivations, but must utilise curiosity, risk awareness and economic gain as means to connect to their target audiences. In order to engage meaningfully with these audiences over a long time period, the development of co-operative, long-term strategies has a number of important advantages.\n\nFirst, they allow gradual and cumulative development, including the publication of high quality activities and resources, and the implementation of evaluation strategies. Second, once visible resources build up, they can generate their own dynamics and legacy; this can then develop momentum by inspiring other scientists from the same field to use them, which may even culminate in topic-specific science communication networks of researchers. 
Third, long-term strategies make it possible to widen the scope by including a broader range of activities, target audiences and partnerships, such as science fairs, interactions with schools, public presentations of any kind, art-science collaborations, media work, involvement of science celebrities, or the use of citizen science projects (Silvertown, 2009). Fourth, long-term strategies make it possible to draw in interdisciplinary expertise. For example, artists can bring new creative dimensions and appeal, whilst the conceptual view of communication processes by social scientists can encourage a greater emphasis on two-way dialogue with target groups as a key prerequisite for strategy optimisation. Expert science communicators (from now on simply referred to as communicators) can contribute their knowledge about engagement strategies (Viseu, 2015) including formative research to guide objective setting (Davies, 2008; Nisbet & Scheufele, 2009), approaches to frame-setting and media work, and awareness of a wider range of communication and impact evaluation strategies (Bubela et al., 2009; Jensen, 2014). Furthermore, expert communicators can contribute expertise in production and project management, marketing, audience dynamics, and an understanding of the specific requirements for different target audiences. Any of these collaborations will raise professional standards and impact, but clearly require a long-term approach - so that a common language can be found, allowing scientists to gain an understanding of communication strategies and, vice versa, giving the communicators the chance to grasp and appreciate the content and importance of the communicated science and the journey of its discovery.\n\nBut what are the barriers that might stand in the way of such long-term initiatives? First of all, kick-starting a new long-term initiative and leading it to impact is hard work that requires stamina, belief, dedication and time. 
Therefore, the first and most obvious barriers are lack of time, issues of self-perception (status, competence) and attitude (lack of interest, viewing outreach as subsidiary to research and university teaching), as well as a lack of measurable and externally recognised reward and recognition (Andrews et al., 2005; Ecklund et al., 2012). These barriers lead to the often-heard view that involvement in science communication is more damaging to careers than helpful, potentially draining initiatives that may have started with great enthusiasm (TNS BMRB & PSI, 2015).\n\nSecondly, it is difficult to secure funding for long-term projects because, in our experience, many funding organisations focus their support on creative new ideas or attractive \"one-off\" events rather than successful ongoing initiatives - a policy that is not well suited to drive outreach to momentum and long-lasting impact. Furthermore, even high quality applications might be turned down not because of content but simply because the proposed implementation does not align well with the current strategy of a funding organisation.\n\nThird, even after having developed strategies and resources, their dissemination is not trivial and can become yet another barrier. Dissemination requires an extra layer of communication which we refer to as meta-communication. Vertical meta-communication involves distributing developed ideas and resources to target audiences (e.g. animating teachers to use the developed educational resources). Even though established dissemination platforms might exist (e.g. for school resources), they often will publish new resources only if copyright agreements are signed, which can mean that resources are withdrawn from access for further development. Horizontal meta-communication is used to recruit fellow scientists to support and contribute to a science communication initiative. 
However, even well-established science communities often do not have the means to communicate horizontally, and many fellow scientists will never hear of and/or benefit from existing ideas and resources. In our view, the barriers to meta-communication are also held unnecessarily high by the tendency of funding organisations to be too selective in providing access to their dissemination machinery, and of many science journals to give low priority to articles about science communication strategies or resources.\n\nFinally, the possibilities for scientists to obtain training and support in science communication strategies and impact evaluation are limited (Besley & Tanner, 2011), so that valuable time is lost through learning by doing and re-inventing the wheel, rather than capitalising on well-established methods, strategies and infrastructure. Even where training is provided as continued professional development (CPD), it is not always well advertised and/or recognised by academic staff to be of value. All this said, institutions increasingly employ public engagement officers to provide support for scientists, and funders are actively demanding such provision and are often willing to consider funding it on grants. 
Whether this support is then efficiently capitalised on still depends on local institutional policies, which are not necessarily guided by an in-depth understanding of the intricacies of science communication, a situation that demands better national frameworks (see below).\n\n\nImproving science communication: future directions\n\nTo instil a solid culture of science communication and achieve a better understanding and appreciation of science, scientists themselves need to dedicate more thought to explaining the essence and importance of their own research to wider audiences, and to set long-term objectives, ideally involving multi-disciplinary networks; this would enable them to achieve higher quality and maximised impact, as well as improved professional and personal benefit (Patel & Prokop, 2015). However, any efforts to implement good practice need to be facilitated by barrier-free and supportive environments. In our view, the most powerful means to achieve this lies in the hands of funding organisations. Here we make a few suggestions that we believe would drive change towards an improved science communication culture:\n\nFirst, an important step in instilling a culture of effective science communication would be true collaboration across funding organisations. A number of attempts have been made in the UK to achieve this. For example, the National Co-ordinating Centre for Public Engagement (NCCPE) was established in 2008 as part of the Beacons for Public Engagement Initiative, funded by the four UK Funding Councils, Research Councils UK and the Wellcome Trust. Unfortunately, there are no signs that this move has led to an effective harmonisation or alignment of public engagement strategies across these organisations. 
The Concordat for Engaging the Public with Research, signed by an impressive list of UK science funders (RCUK, 2010), has all the right intentions but remains rather vague in its statements, without providing a concrete implementation strategy. The science communication survey mentioned earlier (TNS BMRB & PSI, 2015) was likewise a collaborative effort of UK funders of public research; whilst it provides a valuable description of the current state of public engagement, only a few conclusions were drawn within that survey, and no recommendations were made for how the current situation could be improved. In our opinion, funding organisations should take their collaboration to the next step and formulate a common strategy for improving engagement and education at the national level, based on clear, long-term objectives which aim to instil a solid, nation-wide culture of science communication and to steadily improve open resources and enhance their accessibility. Once momentum is achieved, it will be easier to sustain. Certainly, finding the right indicators to guide implementation and to measure the success of such objectives is a challenge that will need careful consideration. Perhaps government involvement is required to set the direction, building on the code of conduct by the Council for Science and Technology, which stated the need for scientists to communicate their research to wider society (Poliakoff & Webb, 2007).\n\nSecond, close collaboration between funding organisations could be used to develop a professional framework for science communication. We need effective policies and guidelines for funders, institutions and researchers to facilitate the implementation of best practice and suitable local protocols, which should also consider professional reward and recognition for public engagement as a crucial motivator. 
The development of such a framework should be done on a national basis, and the aforementioned Concordat for Engaging the Public with Research (RCUK, 2010) indicates that such a collaboration is feasible. However, much more would be needed to turn the principles laid out in this concordat into tangible actions. Independent bodies such as the British Science Association or the NCCPE could help as facilitators during this process, capitalising on experiences, for example, from the Beacons initiative (2008 to 2012), which aimed to support, recognise, reward and build capacity for public engagement (Duncan & Manners, 2012), but unfortunately failed to leave a visible legacy. If funders, universities and independent bodies collaborated closely, this would improve our chances of developing transparent, comprehensible and effective frameworks and of establishing a solid culture and chartered status for science communication. Implementation would be most effective at the institutional level, with compliance being a factor impacting on funding allocations - a procedure that has been successfully implemented by the Athena SWAN charter to address gender equality (Donald et al., 2011).\n\nThird, funders should take a well-balanced approach. They should continue to fund new projects and initiatives, since these are an essential breeding ground for creative innovation. In parallel, funders should look out for ongoing science communication initiatives which are driven by clear, long-term vision and objectives that match the wider societal goals of raising the general appreciation and understanding of science. We need funding policies that consider sustained funding of successful initiatives, as long as they demonstrate a creative drive and a clear commitment to improving their quality, momentum and impact. 
Such a funding strategy would align and strengthen good practice at the levels of implementers and funders; it would also reflect a sensible long-term investment in science communication, and would help to embed science communication from the outset in a meaningful and demonstrable way. We believe that a common fund for science communication, centrally ‘owned’ by all of the contributors and co-ordinated with appropriate representation, would be an efficient tool to facilitate the development of new and overarching funding models that align with the fundamental societal goals of improving the general understanding and appreciation of science nationwide. Although such a common fund may initially be more difficult for individual funding bodies to justify, it would be easier to shield from organisation-specific objectives and policies and would eventually be recognised as good practice in enhancing science communication - be it by inspiring newcomers to start science communication projects, or by helping successful initiatives to build momentum and impact. Long-term, well thought-out evaluation studies that measure this impact are also essential.\n\nFourth, as explained above, certain funders already provide professional support and advice, e.g. by supporting the appointment of public engagement officers, who can help scientists to develop better science communication strategies and impact evaluation practices. However, to achieve the long-term societal goals, this practice needs to become the norm and to be improved further strategically. Successful examples of collaborative science communication initiatives, and other good practices, need to be shared at a national level and used to develop frameworks that foster efficient local policies. 
As another example, social scientists and professional science communicators should sit on all, rather than only some, funding committees, so that the content and strategies of proposed science communication projects can be judged on equal terms. They could also give constructive feedback to applicants, thus actively helping them to develop more effective communication strategies.\n\nFifth, funding organisations should use their capacity and influence to facilitate meta-communication. An immediate improvement would be to give scientists easier access to the powerful dissemination channels maintained by most funding organisations, including magazines and social media - and such a service should not be offered exclusively to those funded by a particular institution. In the long term, we would need a nationally recognised central Internet platform for science communication, which would also be a powerful facilitator for disseminating the frameworks and policies discussed above. Furthermore, similar to the success of open access policies (e.g. Harnad et al., 2004), funders could use their influence to change journal policies towards accepting science communication articles for publication, which would also provide an important path towards professional reward.\n\nFinally, funders could use their capacity to make the jungle of science communication resources, nationally or even worldwide, more transparent. For example, dedicated search engines or databases would relieve all of us of time-consuming web searches and make it less likely that the wheel is constantly being re-invented. 
Such infrastructure could also be used to filter out the noise produced by low-quality resources and to promote the sharing of resources and strategies, for example by showcasing the value and impact of listed resources with standardised metrics and comments.\n\n\nFinal thoughts\n\nThis article is an opinion piece, based on the experience of two long-standing but very different personal science communication histories and backgrounds. We recognised an astonishing congruence in our views and experiences, and were strongly encouraged by the very positive comments from experienced and competent colleagues (see acknowledgements), and by the numerous in-depth discussions we had with them. The main purpose of this article is therefore not to assure the accuracy of every claim we make, but to provoke discussion and encourage those who already follow good practice to come forward and make themselves heard. We feel passionately about the need to improve standards and to instil a solid culture of science communication; this will require re-thinking at all levels, including scientists, local institutions and national funding organisations, all of which will have to collaborate and align their efforts. 
We hope that this article facilitates this development and captures the most important arguments and issues that will have to be discussed and considered.", "appendix": "Author contributions\n\n\n\nAP and SI both contributed equally to all aspects of this article.\n\n\nCompeting interests\n\n\n\nThe authors declare that there are no competing interests, either financial or otherwise.\n\n\nGrant information\n\nScience communication activities of AP are supported by the BBSRC (BB/M007553/1, BB/L000717/1).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nWe are grateful to Dame Nancy Rothwell and Matthew Cobb for helpful discussions in the preparation of this manuscript, and would also like to thank a number of expert colleagues for helpful comments: Robert Dingwall, Mhairi Stewart, Sujata Kundu, Sheena Cruickshank, Catarina Vicente, Jan Barfoot, Stuart Allen, Tim Harrison and Kingsley Purdham.\n\n\nReferences\n\nAcquadro C, Berzon R, Dubois D, et al.: Incorporating the patient's perspective into drug development and communication: an ad hoc task force report of the Patient-Reported Outcomes (PRO) Harmonization Group meeting at the Food and Drug Administration, February 16, 2001. Value Health. 2003; 6(5): 522–531. PubMed Abstract | Publisher Full Text\n\nAllen HL: Lessons from the United Kingdom’s Royal Society. Thought and Action. 2010; 26: 115–120. Reference Source\n\nAndrews E, Weaver A, Hanley D, et al.: Scientists and public outreach: Participation, motivations, and impediments. Journal of Geoscience Education. 2005; 53(3): 281–293. Reference Source\n\nBaram-Tsabari A, Osborne J: Bridging science education and science communication research. J Res Sci Teach. 2015; 52(2): 135–144. Publisher Full Text\n\nBesley JC, Tanner AH: What science communication scholars think about training scientists to communicate. Sci Commun. 2011; 33(2): 239–263. 
Publisher Full Text\n\nBorch K, Rasmussen B: Refining the debate on GM crops using technological foresight—the Danish experience. Technol Forecast Soc Change. 2005; 72(5): 549–566. Publisher Full Text\n\nBubela T, Nisbet MC, Borchelt R, et al.: Science communication reconsidered. Nat Biotechnol. 2009; 27(6): 514–518. PubMed Abstract | Publisher Full Text\n\nCastell S, Charlton A, Clemence M, et al.: Public attitudes to science 2014. London, Ipsos MORI Social Research Institute. 2014; 194. Reference Source\n\nDavies SR: Constructing communication: Talking to scientists about talking to the public. Sci Commun. 2008; 29(4): 413–434. Publisher Full Text\n\nDonald A, Harvey PH, McLean AR: Athena SWAN awards: Bridging the gender gap in UK science. Nature. 2011; 478(7367): 36. PubMed Abstract | Publisher Full Text\n\nDuncan S, Manners P: Embedding Public Engagement within Higher Education: Lessons from the Beacons for Public Engagement in the United Kingdom. Higher Education and Civic Engagement. Springer, 2012; 221–240. Publisher Full Text\n\nEcklund EH, James SA, Lincoln AE: How academic biologists and physicists view science outreach. PloS One. 2012; 7(5): e36240. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarnad S, Brody T, Vallières F, et al.: The access/impact problem and the green and gold roads to open access. Serials review. 2004; 30(4): 310–314. Publisher Full Text\n\nHolbrook JB: Assessing the science-society relation: The case of the US National Science Foundation's second merit review criterion. Technology in Society. 2005; 27(4): 437–451. Publisher Full Text\n\nIllingworth S, Redfern J, Millington S, et al.: What's in a Name? Exploring the Nomenclature of Science Communication in the UK [version 2; referees: 3 approved, 1 approved with reservations]. F1000Res. 2015; 4: 409. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nIllingworth S, Roop H: Developing Key Skills as a Science Communicator: Case Studies of Two Scientist-Led Outreach Programmes. Geosciences. 2015; 5(1): 2–14. Publisher Full Text\n\nJensen E: The problems with science communication evaluation. Journal of Science Communication. 2014; 13(1). Reference Source\n\nLuokkanen M, Huttunen S, Hildén M: Geoengineering, news media and metaphors: Framing the controversial. Public Underst Sci. 2014; 23(8): 966–81. PubMed Abstract | Publisher Full Text\n\nNisbet MC, Scheufele DA: What’s next for science communication? Promising directions and lingering distractions. Am J Bot. 2009; 96(10): 1767–1778. PubMed Abstract | Publisher Full Text\n\nPatel S, Prokop A: How to develop objective-driven comprehensive science outreach initiatives aiming at multiple audiences. bioRxiv. 2015; 023838. Publisher Full Text\n\nPoliakoff E, Webb TL: What factors predict scientists' intentions to participate in public engagement of science activities? Sci Commun. 2007; 29(2): 242–263. Publisher Full Text\n\nRCUK: Concordat for engaging the public with research. Swindon, RCUK. 2010. Reference Source\n\nRowe G, Rawsthorne D, Scarpello T, et al.: Public engagement in research funding: A study of public capabilities and engagement methodology. Public Underst Sci. 2010; 19(2): 225–239. PubMed Abstract | Publisher Full Text\n\nRull V: The most important application of science: As scientists have to justify research funding with potential social benefits, they may well add education to the list. EMBO Rep. 2014; 15(9): 919–922. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScheufele DA: Science communication as political communication. Proc Natl Acad Sci U S A. 2014; 111(Suppl 4): 13585–13592. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSilvertown J: A new dawn for citizen science. Trends Ecol Evol. 2009; 24(9): 467–471. 
PubMed Abstract | Publisher Full Text\n\nSmith RD: Responding to global infectious disease outbreaks: lessons from SARS on the role of risk perception, communication and management. Soc Sci Med. 2006; 63(12): 3113–3123. PubMed Abstract | Publisher Full Text\n\nTNS BMRB & PSI: Factors Affecting Public Engagement By Researchers: A study on behalf of a Consortium of UK public research funders, Wellcome Trust. 2015. Reference Source\n\nEngineeringUK: Engineering UK 2015: The state of engineering. London, EngineeringUK. 2015. Reference Source\n\nViseu A: Integration of social science into research is crucial. Nature. 2015; 525(7569): 291. PubMed Abstract | Publisher Full Text
[ { "id": "14992", "date": "04 Aug 2016", "name": "Viviane Callier", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nI found the article to be written in vague, abstract terms, making it difficult to extract any take-home message. The problem that was to be solved wasn't clearly articulated, nor was the proposed \"solution\" clear to me. I found the article to be a poor example of science communication.", "responses": [ { "c_id": "2295", "date": "08 Dec 2016", "name": "Andreas Prokop", "role": "Author Response", "response": "What have we changed in this version?   This version of the opinion piece has been substantially edited to make the message clearer that we are trying to communicate, i.e. the need for objective-driven, long-term science communication in the UK. We have streamlined these arguments, and present now more coherently the potential barriers to such a vision, as well as actionable suggestions for how to overcome them. We have also changed the structure and title of the paper to be more fitting to the content, and make it much clearer from the outset that this is an opinion piece and not a research article. Please, see our detailed responses below. We look forward to our further discussion. Response to Reviewer 3   I found the article to be written in vague, abstract terms, making it difficult to extract any take-home message. The problem that was to be solved wasn't clearly articulated, nor was the proposed \"solution\" clear to me. 
I found the article to be a poor example of science communication.   The brevity of this comment makes it difficult to respond to and engage with in a detailed manner. However, we take the point that there were aspects of the opinion piece that were vague, and have addressed this thoroughly, as was explained above. We hope that the new version of the article now clearly describes the problem and the potential solution(s).   If this is not the case, then we would like to ask the reviewer to please be a little bit more constructive in their criticism, so that we can improve the manuscript further." } ] }, { "id": "14993", "date": "09 Aug 2016", "name": "Massimo Caine", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nThis opinion article seeks to describe standards, barriers and possible initiatives that may be relevant for the development of science communication in the UK. Even if opinions concerning a certain topic (in this case science communication) may be very wide and call into consideration several aspects of the subject, I find that the manuscript lacks a clear focus, which, in turn, makes the flow of the text very hard to follow.\n\nIn particular, in the initial part of the manuscript, authors state that they “will discuss the current practice of science communication in the UK”. 
However, despite a very general description of the relevance of science communication practice for (i) policy making; (ii) education; (iii) scientific networking and (iv) professional development, there is a substantial lack of any description of experience, best practice, event or initiative that may have taken place in the UK. Again, when describing the several barriers that hamper a proper diffusion and development of science communication, the observations are rather general, superficial and seemingly not focused on the UK scenario (at least this is the perception I have as non-UK reader). Consequently, the suggestions on how to overcome such barriers are extremely hard to be contextualized within and they seem to me rather focused on funding agencies and their policies rather than on the good practice/development of science communication.\n\nIn light of that, despite the good will that authors put in place for the cause of science communication, I regret to say that I cannot recommend this opinion article to be indexed as it is.", "responses": [ { "c_id": "2294", "date": "08 Dec 2016", "name": "Andreas Prokop", "role": "Author Response", "response": "What have we changed in this version?   This version of the opinion piece has been substantially edited to make the message clearer that we are trying to communicate, i.e. the need for objective-driven, long-term science communication in the UK. We have streamlined these arguments, and present now more coherently the potential barriers to such a vision, as well as actionable suggestions for how to overcome them. We have also changed the structure and title of the paper to be more fitting to the content, and make it much clearer from the outset that this is an opinion piece and not a research article. Please, see our detailed responses below. We look forward to our further discussion. 
Response to Reviewer 2   This opinion article seeks to describe standards, barriers and possible initiatives that may be relevant for the development of science communication in the UK. Even if opinions concerning a certain topic (in this case science communication) may be very wide and call into consideration several aspects of the subject, I find that the manuscript lacks a clear focus, which, in turn, makes the flow of the text very hard to follow.   Thank you for your comments. As explained in our response to Reviewer 1, we concede that the central thesis may not have come across clearly enough. We have now re-focussed and edited the paper, and we hope that we make the main objective much clearer, i.e. that there is a problem (a lack of long-term, objective-led science communication), why this problem exists, and some suggested solutions for how we might best tackle this problem. In particular, in the initial part of the manuscript, authors state that they “will discuss the current practice of science communication in the UK”. However, despite a very general description of the relevance of science communication practice for (i) policy making; (ii) education; (iii) scientific networking and (iv) professional development, there is a substantial lack of any description of experience, best practice, event or initiative that may have taken place in the UK.   We agree that the statement about general practice did not match well with the material provided, and it has been removed. To provide concrete descriptions of good practice, we have added a whole paragraph on objective-driven long-term initiatives that exist in the UK and beyond.   Again, when describing the several barriers that hamper a proper diffusion and development of science communication, the observations are rather general, superficial and seemingly not focused on the UK scenario (at least this is the perception I have as non-UK reader). 
Consequently, the suggestions on how to overcome such barriers are extremely hard to be contextualized within and they seem to me rather focused on funding agencies and their policies rather than on the good practice/development of science communication.   To address the critique that comments and opinions that we expressed are too general and potentially over-reaching, we have tightened these up significantly and provide now a more rigorous justification for their inclusion. However, we have still chosen to focus on the role that funding agencies and their policies play, because we believe that they are in the best position to lower barriers and facilitate effective science communication." } ] }, { "id": "15848", "date": "24 Aug 2016", "name": "Kathryn B. H. Clancy", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThe opinions shared in this piece were undercited, vague, and not novel.\n\nTitle and abstract: The authors should consider a title that describes the take-home message of the article. They should also decide on one way of capitalizing the title (all or nothing).\n\nArticle content: The organization of this paper was not clear, and it also wasn’t clear what this opinion piece was intended to do. Was this written towards funding organizations, individual scientists? What are the actionable items that might allow for a funding organization to test out some of these ideas? 
A conceptual model or figure would have greatly helped clarify this.\n\nConclusions: The authors might get more traction by developing a more concrete conceptual model with testable hypotheses. This would also help them fund the work to show if their ideas would lead to improved science literacy among the general public.", "responses": [ { "c_id": "2292", "date": "08 Dec 2016", "name": "Andreas Prokop", "role": "Author Response", "response": "What have we changed in this version?   This version of the opinion piece has been substantially edited to make the message clearer that we are trying to communicate, i.e. the need for objective-driven, long-term science communication in the UK. We have streamlined these arguments, and present now more coherently the potential barriers to such a vision, as well as actionable suggestions for how to overcome them. We have also changed the structure and title of the paper to be more fitting to the content, and make it much clearer from the outset that this is an opinion piece and not a research article. Please, see our detailed responses below. We look forward to our further discussion. Response to Reviewer 1   Title and abstract: The authors should consider a title that describes the take-home message of the article. They should also decide on one way of capitalizing the title (all or nothing).   Thank you for your comments. We take your points regarding the title, and as such have modified it accordingly.  Article content: The organization of this paper was not clear, and it also wasn’t clear what this opinion piece was intended to do. Was this written towards funding organizations, individual scientists? What are the actionable items that might allow for a funding organization to test out some of these ideas? A conceptual model or figure would have greatly helped clarify this.   Conclusions: The authors might get more traction by developing a more concrete conceptual model with testable hypotheses. 
This would also help them fund the work to show if their ideas would lead to improved science literacy among the general public.   The opinions shared in this piece were undercited, vague, and not novel.    Organisation and objective: The paper has been edited so that it is now much clearer in terms of its objectives, and at your suggestion we have included a list of actionable items that we believe would be of benefit to funding organisations. In relation to a conceptual model with a testable hypothesis, this is, in our view, not something that would sit well in an opinion piece. However, we now present some actionable items as potential solutions to overcoming the barriers that we present. Regarding the audience of this article, please see the next bullet point.   Novelty and target audience: This article is less aimed at academic science communicators than at STEM scientists who engage in science communication and at funding bodies that support STEM research. The aim is not to present novel ideas (although these thoughts will be novel to most STEM scientists working at the coal-face of science communication), but to alert to shortcomings and reinforce thinking about possible actionable ways in which to improve on this, many of which are being underused now or have been thought about but were never properly implemented (see last paragraph of section 3). Citations and vagueness: We believe that this version of the manuscript has addressed these issues. We are much clearer in the scope of the paper (identification of problem, context for why it exists, and potential solution(s)), and are also more rigorous in our use of referencing. We have now included a paragraph providing concrete examples of long-term objective-driven initiatives. We also believe that some of the solutions that we propose are novel, and that this is now easier to see given the edit of the paper." } ] } ]
1
https://f1000research.com/articles/5-1540
https://f1000research.com/articles/5-2376/v1
26 Sep 16
{ "type": "Method Article", "title": "Creating a driving profile for older adults using GPS devices and naturalistic driving methodology", "authors": [ "Ganesh M. Babulal", "Cindy M. Traub", "Mollie Webb", "Sarah H. Stout", "Aaron Addison", "David B. Carr", "Brian R. Ott", "John C. Morris", "Catherine M. Roe", "Cindy M. Traub", "Mollie Webb", "Sarah H. Stout", "Aaron Addison", "David B. Carr", "Brian R. Ott", "John C. Morris" ], "abstract": "Background/Objectives: Road tests and driving simulators are most commonly used in research studies and clinical evaluations of older drivers. We adapted an existing, commercial, off-the-shelf, in-vehicle device for naturalistic, longitudinal research to better understand daily driving behavior in older drivers. Design: The Azuga G2 Tracking DeviceTM was installed in each participant’s vehicle, and we collected data over 5 months (speed, latitude/longitude) every 30-seconds when the vehicle was driven.  Setting: The Knight Alzheimer’s Disease Research Center at Washington University School of Medicine. Participants: Five individuals enrolled in a larger, longitudinal study assessing preclinical Alzheimer disease and driving performance.  Participants were aged 65+ years and had normal cognition. Measurements:  Spatial components included Primary Location(s), Driving Areas, Mean Centers and Unique Destinations.  Temporal components included number of trips taken during different times of the day.  Behavioral components included number of hard braking, speeding and sudden acceleration events. Methods:  Individual 30-second observations, each comprising one breadcrumb, and trip-level data were collected and analyzed in R and ArcGIS.  Results: Primary locations were confirmed to be 100% accurate when compared to known addresses.  Based on the locations of the breadcrumbs, we were able to successfully identify frequently visited locations and general travel patterns.  
Based on the reported time from the breadcrumbs, we could assess number of trips driven in daylight vs. night.  Data on additional events while driving allowed us to compute the number of adverse driving alerts over the course of the 5-month period. Conclusions: This pilot study indicated that Driving Profiles for older adults can be created and compared month-to-month or year-to-year, allowing researchers to identify changes in driving patterns that are unavailable in controlled conditions.", "keywords": [ "naturalistic driving", "global positioning data acquisition systems", "geographic information system", "in-vehicle technology", "older adults", "Alzheimer’s disease" ], "content": "Background\n\nOur research program seeks to understand driving behavior among older adults, particularly as it occurs on a day-to-day basis as people travel in their own environments. However, evaluation of driving behavior in older adults largely occurs with methodologies that use controlled conditions such as on-the-road tests and driving simulators, and to a lesser extent, self-report and diaries1–4. To better meet our research needs, we explored newer methodologies to study naturalistic driving behavior longitudinally, in a cost effective and unobtrusive manner5.\n\nRecent technological advances in global positioning systems (GPS) and geographic information systems (GIS) techniques allow evaluation of driving behavior in the actual environments in which individuals drive6. Newer in-vehicle GPS devices are unobtrusive and typically provide data on date, time, speed, longitude and latitude regarding where a vehicle is driven7,8. In-vehicle GPS/GIS devices are an emerging methodology employed to better understand driving in situ and compare differences between driver self-report and GPS data obtained from a vehicle9. 
As a result, naturalistic driving research employing this methodology seeks to understand driving behavior by analyzing continuous, objective data collected by in-vehicle devices to determine patterns and the influence of personal, temporal and environmental factors7,10.\n\nThe evolving field of naturalistic driving and the proliferation of custom and commercial off-the-shelf (COTS) in-vehicle devices have resulted in numerous different outcomes and GIS analytical techniques8,11. However, some challenges accompany GPS data use, including extensive post-processing of large volumes of data, variability in the temporal and spatial aspects of the data, and the higher cost associated with the technology and data collection. Consequently, the monitoring periods in many recent studies are limited to capturing data for analysis over a timespan ranging from weeks to 2 months9,11. However, these short periods may be too brief to capture relevant driving behaviors.\n\nTo more accurately monitor key naturalistic driving behaviors, we piloted a new methodology adapting a COTS in-vehicle device to study naturalistic driving behavior longitudinally, in a cost-effective and unobtrusive manner. Our objective for this pilot was to describe the methodological challenges associated with adapting a COTS in-vehicle device that captures and synthesizes GPS data for processing and analysis using GIS techniques. We also quantify spatial and temporal patterns associated with driving behavior to construct driver profiles and evaluate how driving behavior changes longitudinally.\n\n\nMethods\n\nParticipant data. Data were collected from participants enrolled in a longitudinal study assessing preclinical Alzheimer’s disease and driving performance (R01 AG043434) at Washington University School of Medicine in St. Louis. 
Participants had normal cognition, were 65 years or older, had a valid driver’s license, drove at least once per week in a non-adapted vehicle, met minimal visual acuity for state requirements, and had Alzheimer’s disease biomarkers (cerebrospinal fluid or brain imaging) objectively measured and available within the last 2 years. All study protocols, consent documents and questionnaires were approved by the Washington University Human Research Protection Office.\n\nData collection and processing. We used the COTS Azuga G2 Tracking DeviceTM (Model 850: Azuga Inc, San Jose, California), which we refer to as a global positioning data acquisition system (GPDAS). The GPDAS plugs into the on-board diagnostic systems port (OBDII) and is powered by the vehicle’s battery. Installation requirements limited vehicles to those manufactured in 1996 or later, since earlier models were not equipped with an OBDII port. Data (vehicle speed, latitude, longitude) were collected from the moment the ignition was turned on until it was turned off, with a collection interval set at every 30 seconds. Individual 30-second observations are referred to as a “breadcrumb”. Location data were also collected every 3 hours when the ignition was off. Additionally, aggressive driving incidents such as hard braking, speeding and sudden acceleration were recorded in the trip log. Data were collected and simultaneously transmitted via Bluetooth Low Energy to secured servers. On a daily basis, the data were aggregated by Azuga and made available for download via secured servers.\n\nTwo distinct file types available from Azuga were used in our analysis – Breadcrumb files and Activity files. Within the daily Breadcrumb comma separated values (csv) file, each row consisted of one observation (\"breadcrumb\"), typically at a 30-second interval for a specific vehicle at an instant of time. 
Each breadcrumb identified the vehicle by a 10-digit code and additionally reported latitude, longitude, vehicle speed, nearest address (reverse geocoded by Azuga), coordinated universal time (UTC) and date, odometer reading, and event type. The event type field identified whether the given breadcrumb was associated with a regular observation or a special event such as ignition on/off or aggressive driving. The event type field could also contain codes indicating specific issues such as a low battery level in the vehicle, a connection or disconnection of the device, or a malfunction in the device hardware. Additional fields gave data about the peak speed and average speed of an over-speeding event, as well as initial and final speeds of braking or acceleration events characterized by a rapid change in the vehicle’s velocity.\n\nThe second file type received from Azuga was the daily Activity csv file. Each row in the Activity file represented one trip taken by a single vehicle. Available observations about each trip included the date and start time (in UTC), the starting and ending locations (latitude, longitude, and reverse-geocoded address), the duration of the trip in seconds and its length in distance (rounded to the nearest tenth of a kilometer, then reported in miles), the average and maximum vehicle speed, and the number/duration of aggressive driving events such as sudden acceleration, hard braking, and over-speeding. Preliminary data processing used a PowerShell script to compare headers from the incoming Breadcrumb and Activity files to ensure the structure was consistent, and then combined the daily files from the time period of interest into two large comprehensive csv files (one each for Breadcrumbs and for trip-level Activity). These two large csv files were read into the statistical analysis program R as data tables for further analyses. 
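The header-consistency check and daily-file concatenation described above can be sketched in Python (the study used a PowerShell script for this step; the file contents and column names below are illustrative, not the vendor's actual schema):

```python
import csv
import io

def combine_daily_files(daily_csv_texts):
    """Combine daily CSV exports into one table, first verifying that every
    file carries the same header as the first (mirroring the header
    comparison described in the text)."""
    combined_rows = []
    expected_header = None
    for text in daily_csv_texts:
        reader = csv.reader(io.StringIO(text))
        header = next(reader)
        if expected_header is None:
            expected_header = header          # header of the first daily file
        elif header != expected_header:
            raise ValueError(f"Inconsistent header: {header}")
        combined_rows.extend(reader)          # append the data rows
    return expected_header, combined_rows

# Two illustrative daily Breadcrumb files with a consistent structure.
day1 = "vehicle_id,utc_time,lat,lon,speed\nA123,2016-07-01T12:00:00Z,38.6,-90.2,35\n"
day2 = "vehicle_id,utc_time,lat,lon,speed\nA123,2016-07-02T12:00:30Z,38.7,-90.3,40\n"
header, rows = combine_daily_files([day1, day2])  # 2 combined data rows
```

The combined table would then be handed to R (or, here, any downstream analysis) as a single data frame per file type.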
For the remainder of the manuscript, the term breadcrumb refers to a single observation of one vehicle at a specific location and single moment in time, while a trip represents a set of locations (breadcrumbs) occurring between the ignition on and ignition off of a specific vehicle. Over the first 5 months, more than 400,000 breadcrumbs representing approximately 12,000 trips were collected for the 20 vehicles.\n\nInitial processing steps taken in R examined the condition of the incoming data for errors and anomalies, then created additional fields for use in aggregating the data, as well as for the spatial processing stages. Since all times were reported in UTC and our participants were in the continental United States, time zone calculations were performed to accurately transform the incoming timestamp to local time. Many points in the Central Standard Time Zone were classified as such within R using a bounding rectangle with maximum/minimum latitude and longitude encapsulating the majority of the Central Standard Time Zone. For points close to the boundary of time zones, GIS was used to determine the appropriate zone. This was done by comparing the breadcrumb location against a set of polygons representing the extent of each time zone to determine in which time zone polygon the breadcrumb location fell. Local time was needed to understand driving activity or avoidance during specific times of day (rush hour, daylight, etc.). The R package lubridate was used to convert UTC time to local time, while the R package RAtmosphere allowed for computations of sunrise and sunset at a given latitude/longitude. These computations were added as additional columns in the data tables. 
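As an illustration of this time-zone handling, the following Python sketch mimics the bounding-rectangle fast path and the UTC-to-local conversion (the study used R's lubridate and time-zone polygons in GIS; the rectangle bounds here are rough placeholders, not the study's values):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Rough interior bounds for the US Central time zone (illustrative values,
# not the study's actual bounding rectangle).
CENTRAL_BOUNDS = {"lat_min": 29.0, "lat_max": 49.0,
                  "lon_min": -101.0, "lon_max": -89.0}

def classify_time_zone(lat, lon):
    """Fast path: points well inside the rectangle are Central; anything
    else would be resolved against time-zone polygons in GIS, as the text
    describes (None here signals deferral to that polygon overlay)."""
    b = CENTRAL_BOUNDS
    if b["lat_min"] <= lat <= b["lat_max"] and b["lon_min"] <= lon <= b["lon_max"]:
        return "America/Chicago"
    return None

def to_local(utc_str, tz_name):
    """Convert a 'YYYY-mm-dd HH:MM:SS' UTC timestamp to local time
    (the role lubridate played in the R workflow)."""
    utc = datetime.strptime(utc_str, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return utc.astimezone(ZoneInfo(tz_name))

tz = classify_time_zone(38.63, -90.20)          # a St. Louis breadcrumb
local = to_local("2016-07-15 18:30:00", tz)     # 13:30 Central Daylight Time
```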
A summary of the workflow is given in Figure 1.\n\nTo clean the incoming data and prepare them for spatial processing, data were checked to ensure that two criteria were met: (1) each observation occurred within the continental United States and (2) no two observations for the same vehicle had identical timestamps. Certain device actions (being connected, disconnected, or plugged into a different vehicle) caused the GPDAS to report latitude and longitude values of 0, or in one case, those of a location in Egypt. Additionally, for some vehicles that started a trip immediately after plugging in the GPDAS, the time delay required to connect to a sufficient number of GPS satellites to register locational data caused a sequence of observations with latitude and longitude equal to 0. Due to uncertainty about the location of the vehicle at times when the latitude or longitude was reported outside the continental United States, those trips, including associated breadcrumbs, were removed from analyses. The number of breadcrumbs impacted was less than 1.6% of all incoming breadcrumbs, with the vast majority (6392 out of 6529) representing one vehicle whose GPDAS had a malfunction causing no locational data to be collected for multiple weeks. Removing the vehicle with a faulty GPDAS from the computation reduced the number of breadcrumbs removed by the first criterion to 137, less than 0.04% of the total number of breadcrumbs collected. The second criterion removed 12 breadcrumbs that were exact duplicates of other breadcrumbs.\n\nFurther data cleaning was required to compile a set of complete trips taken by each driver. Trip-level data were accessible in two ways from the incoming data stream. The Activity files contained summary information about the start, end, and length of each trip, while the Breadcrumb files offered a finer level of locational detail within the trip. 
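The two cleaning criteria can be sketched as a simple filter. Note one simplification: the study removed whole trips containing out-of-bounds breadcrumbs, whereas this sketch drops individual observations; the bounding-box values and field names are illustrative.

```python
def clean_breadcrumbs(breadcrumbs):
    """Apply the two cleaning criteria: (1) drop observations outside a
    rough continental-US bounding box (device glitches report 0,0 or
    wildly wrong coordinates); (2) drop exact duplicate vehicle/timestamp
    pairs."""
    LAT_MIN, LAT_MAX = 24.5, 49.5     # assumed continental-US bounds
    LON_MIN, LON_MAX = -125.0, -66.0
    seen, kept = set(), []
    for bc in breadcrumbs:
        if not (LAT_MIN <= bc["lat"] <= LAT_MAX and LON_MIN <= bc["lon"] <= LON_MAX):
            continue  # criterion 1: outside the continental United States
        key = (bc["vehicle_id"], bc["utc_time"])
        if key in seen:
            continue  # criterion 2: duplicate timestamp for this vehicle
        seen.add(key)
        kept.append(bc)
    return kept

raw = [
    {"vehicle_id": "A", "utc_time": "t1", "lat": 38.6, "lon": -90.2},
    {"vehicle_id": "A", "utc_time": "t1", "lat": 38.6, "lon": -90.2},  # duplicate
    {"vehicle_id": "A", "utc_time": "t2", "lat": 0.0, "lon": 0.0},    # device glitch
]
kept = clean_breadcrumbs(raw)  # only the first observation survives
```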
Approximately 1.6% (n=203) of the incoming activity records contained NA values as the latitude and longitude of the trip end. Typically, this was caused by either a loss of GPS signal (such as parking in an underground structure) or a peculiarity of the incoming activity data stream, in which a second recorded trip start occurred several seconds after the first, which was then \"abandoned\" as a meaningful trip in the data stream. An additional 1.8% of reported trips (n=229) contained a value of 0 for the starting latitude or longitude. Most of these (217) were from the aforementioned known defective device that transmitted large numbers of zeros within the breadcrumb data. These were marked for removal.\n\nAnalysis. Data analysis and management for spatial operations in GIS used ArcGIS 10.3.1 and the ArcPy Python site package (Environmental Systems Research Institute, Redlands, CA, USA). Spatial data were stored as feature classes in file geodatabase format. Time zone computations were exported from ArcGIS as a csv file and merged back in with the data table in R for further computations.\n\nSpatial analysis. Using the latitude and longitude for each breadcrumb, point feature classes were created for each driver by exporting the results of the Make XY Event Layer geoprocessing operation. These point feature classes served as the basis for all subsequent spatial analysis.\n\nRoad analysis. To determine the characteristics of the road over which the participant was traveling at the time of breadcrumb recording, proximity analysis was performed on each breadcrumb relative to a street centerline dataset. The Near geoprocessing operation was used to identify the street centerline feature closest to each breadcrumb. The output of the Near geoprocessing operation is the addition of two attributes to the breadcrumb feature class. 
These attributes are NEAR_FID, the unique identifier of the nearest street feature, and NEAR_DIST, the distance from the target breadcrumb to the nearest street feature. The NEAR_FID value was used to retrieve attributes of the street feature nearest to the breadcrumb, such as the road name, Census Feature Class Code (CFCC), road type, and average speed (a proxy for speed limit). Figure 2 shows a sample of breadcrumbs and their proximity to the street centerline features. Attribute values from the nearest street feature were applied to each breadcrumb using a series of Cursors. Cursors are iterator tools available in the ArcPy code library that can read, update and create features in existing spatial datasets (ArcGIS Help 2015).\n\nDriving Areas. Driving Area was defined as the smallest polygon that encompassed all breadcrumbs for a driver during a given time period. The Minimum Bounding Geometry geoprocessing operation was used to produce convex hull polygons representing the weekly and monthly Driving Areas for each driver.\n\nMean Center. The Mean Center was defined as the geographic center of all breadcrumbs for a driver during a given time period. The Mean Center geoprocessing operation was used to produce points representing the weekly and monthly Mean Center for each driver. The operation was based on the spatial location of the breadcrumbs only and was not weighted by any attribute.\n\nPrimary Location. The participants’ most commonly visited locations (Primary Locations) were identified in order to perform spatial analysis on aspects of the drivers’ behavior relative to familiar areas. The participant’s home and/or workplace were assumed to be the most frequent origin or destination of the majority of the trips recorded by the GPDAS. It was crucial that these locations be identified in a dynamic and automated way to achieve scalability of the data processing workflow. 
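The Driving Area and Mean Center computations above have simple planar equivalents; the sketch below uses Andrew's monotone-chain convex hull and the shoelace formula in place of ArcGIS's Minimum Bounding Geometry and Mean Center operations, assuming coordinates already projected to planar units:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula (planar units; a real workflow projects first)."""
    n = len(hull)
    s = sum(hull[i][0] * hull[(i + 1) % n][1] - hull[(i + 1) % n][0] * hull[i][1]
            for i in range(n))
    return abs(s) / 2

def mean_center(points):
    """Unweighted geographic center, as in the Mean Center operation."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # interior point excluded from hull
hull = convex_hull(pts)
area = polygon_area(hull)   # 12.0 for the 4x3 rectangle
center = mean_center(pts)   # (2.0, 1.4); pulled toward the interior point
```

Note that the Mean Center uses all breadcrumbs, so it is not the hull centroid; this mirrors the text's observation that Mean Centers drift toward frequently visited areas.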
A visual examination of the data for a small sample of participants showed that an often-visited location could appear as a dense cluster of breadcrumbs. It was assumed that the densest cluster, or the cluster with the most ignition on breadcrumbs, would be the Primary Location. Clusters of ignition on breadcrumbs were identified using the Aggregate Points geoprocessing operation. The Aggregate Distance parameter was set to 20 feet after visually locating and measuring ignition on breadcrumb clusters on a small sample of participants. The output of the Aggregate Points operation was a polygon feature class with features encompassing clusters of three or more points within the Aggregate Distance parameter value. The breadcrumbs located within each polygon were counted and compared to the total number of ignition on breadcrumbs for the participant to determine if the polygon represented a Primary Location. The Feature To Point geoprocessing operation was used to produce a point feature at the centroid of each Primary Location polygon, thus providing a single point that was used as the Primary Location in further analyses (Figure 3).\n\nUnique destinations. Unique destinations are defined as separate locations visited by participants during a given timeframe. The Buffer geoprocessing operation was used to create circular polygons with radii of 100, 250 and 500 feet around each breadcrumb indicating an ignition on event. The varying buffer operations were performed to establish a threshold at which two or more distinct breadcrumbs occurring within the same time period would be combined as the same destination. For example, a participant who visited a shopping center twice in the same month may park at opposite ends of the large parking area for each separate visit. 
However, this shopping center should be counted as a single destination for the target time period.\n\nThe Dissolve geoprocessing operation was used to merge the circular polygons so breadcrumbs within the three distance thresholds would be counted as a single destination. Figure 4 shows a sample of ignition on breadcrumbs in a selected area during a single month. The groups of breadcrumbs within close proximity to each of the commercial buildings occur on different days within the same month. The 500 foot buffer polygon encompassed all four separate commercial destinations and would be counted as a single destination for that month. The 250 foot buffer would combine the northernmost commercial area with the two areas to the southwest, creating a single destination from three distinct destinations. The 100 foot buffer separated the three separate destinations into two destinations, combining only the two smallest commercial areas into a single destination (Figure 4).\n\n\nResults\n\nComprehensive driver profiles. A breadcrumb is one data point in time (at 30-second intervals) that contains location, time, date and speed of a vehicle. A single trip could have hundreds of breadcrumbs that are aggregated and over time can provide specific information about driving patterns and behaviors. The steps discussed in the methodology section resulted in the creation of a driving profile for each driver that could be examined over the course of a study. Driver profiles included spatial, temporal and behavioral components. Spatial components included Primary Location(s), Driving Areas, Mean Centers and Unique Destinations. Temporal components included number of trips taken during different times of day. Behavioral components included number of hard braking, speeding and sudden acceleration events.\n\nPrimary Locations. A driver’s Primary Location was designated as the location that encompassed at least 10 percent of the driver’s Ignition On breadcrumbs. 
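A minimal stand-in for the Primary Location workflow (point aggregation at 20 feet, clusters of three or more points, and the 10-percent ignition-on threshold) can be written with single-linkage clustering. Coordinates are assumed projected to feet, the example data are invented, and the cluster centroid stands in for the centroid-extraction step in ArcGIS:

```python
import math

def cluster_points(points, max_dist=20.0, min_size=3):
    """Single-linkage clustering as a stand-in for ArcGIS Aggregate Points:
    points within max_dist of any cluster member are merged, and clusters
    smaller than min_size are discarded."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= max_dist:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return [c for c in clusters.values() if len(c) >= min_size]

def primary_locations(ignition_on_points, share=0.10):
    """Clusters holding at least 10% of ignition-on breadcrumbs become
    Primary Locations, represented by their centroid."""
    total = len(ignition_on_points)
    result = []
    for c in cluster_points(ignition_on_points):
        if len(c) / total >= share:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            result.append((cx, cy))
    return result

home = [(0, 0), (5, 5), (10, 0), (3, 8)]   # dense cluster of ignition-on points
scattered = [(1000, 1000), (5000, 200)]    # one-off destinations, no cluster
primaries = primary_locations(home + scattered)  # one Primary Location
```

The brute-force pairwise distance pass is quadratic; for study-scale data a spatial index (or the GIS operation itself) would be used instead.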
Since participants were over the age of 65, most had a single Primary Location, assumed to be their home/residence, though some participant results showed two Primary Locations. In most cases, the count for the cluster polygon with the highest count of breadcrumbs was significantly higher than the counts for the other two polygons. The exception was Participant C, for whom two cluster polygons had breadcrumb counts over 10 percent of the total breadcrumbs (Figure 5). Participant C therefore has two Primary Locations based on the percentage of the driver’s ignition on events. Primary Locations were compared against the known addresses from the participants and confirmed to be 100 percent accurate, including participant C, who is known to have two homes.\n\nDriving Area and Mean Center. The Driving Area polygons resulting from the methodology varied based on the extent of the breadcrumbs for each driver over time. Analysis showed that a participant's driving areas could often have large portions of overlap from week to week or month to month. Mean Centers were expected to be clustered around the participant’s Primary Location. However, this was not the case when participants had more than one Primary Location. See Participant C in Figure 5. Participant C had two designated Primary Locations and, as a result, the Mean Centers for this participant tended to be located between the two Primary Locations. The combined Driving Area polygons, Mean Centers and Primary Locations make up the spatial profile for study participants. Spatial profiles for a sample of participants are visualized in Figure 5. Each grey polygon represents the driving area for a single month for each participant. Monthly Mean Centers are represented with white boxes, and red stars indicate the Primary Location for each participant.\n\nDriving Areas can vary greatly month to month for some participants, while other participants tend to have little monthly variation in their driving area. 
The monthly Driving Area polygons for participants C and D show large portions of overlap, while participants A, B and E show large portions of Driving Area unique to a single month’s timeframe. Common Driving Area can be quantified by calculating the overlapping area from month to month and the overall overlapping area for the 5-month study period. The month-to-month variation in overlapping Driving Area is shown in Figure 6 and reinforces the large amount of overlap from month to month for participants C and D.\n\nThe ratio of overlapping Driving Area to total Driving Area examines the relationship between commonly driven routes and total driving space. In Figure 6, participant C shows little variation in monthly driving area during the study timeframe, with over 70% of the total Driving Area being common to all months. Participant E shows the least amount of overlapping area, with less than 15% of the total Driving Area being common to all months.\n\nUnique Destination. Applying the Unique Destinations methodology produced varying results by driver. While some drivers showed similar counts of Unique Destinations each month, other drivers showed counts of Unique Destinations that varied greatly from month to month (Figure 8). In most months for many drivers, the counts of Unique Destinations derived by using the 100, 250 and 500 feet buffers varied by buffer size. However, if a driver’s destinations were particularly spread out, the buffer size was less consequential. Overall, the results show that the 100 feet buffer should be used to obtain the most accurate count of unique destinations for the participants within each time period.\n\nTrips driven in daylight vs. night-time. The number of trips driven during the day vs. at night showed variation across individuals and intra-individual change across different months. Figure 9 displays the number of trips driven during day and night for five participants from July (7) to November (11). 
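Classifying trips as daylight vs. night-time can be sketched as below; the fixed sunrise and sunset times are placeholders for the per-date, per-location values the study computed with the RAtmosphere package:

```python
from datetime import datetime, time

def classify_trips(trip_start_local_times, sunrise=time(6, 0), sunset=time(20, 0)):
    """Tally trips as day vs. night from their local start times.
    Real sunrise/sunset vary by date and latitude/longitude; the fixed
    defaults here are illustrative only."""
    counts = {"day": 0, "night": 0}
    for t in trip_start_local_times:
        if sunrise <= t.time() < sunset:
            counts["day"] += 1
        else:
            counts["night"] += 1
    return counts

# Three illustrative trip start times (already converted to local time).
starts = [datetime(2016, 7, 1, 9, 30),
          datetime(2016, 7, 1, 14, 0),
          datetime(2016, 7, 1, 21, 15)]
counts = classify_trips(starts)  # {'day': 2, 'night': 1}
```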
Night driving is associated with a three times greater risk of traffic death and with increased fatigue and perceived danger12,13. The majority of trips driven by four of the five participants were driven during the day. For participants B and D, the number of trips generally declined from month 1 to month 5 without a significant change in their number of trips driven at night. However, Participant C had a higher total number of trips for months 4–5 and increased night driving compared to months 5–7. Participant A reduced their night driving and total number of trips taken from month 3 to 5, while participant B showed little change in night driving behavior. Given that the time window of our study represents months when the hours of daylight available are steadily decreasing, the decrease in total number of trips combined with the lack of a corresponding increase in trips taken at night may suggest that driver A in our study made deliberate adjustments to avoid night-time driving. Finally, more trips were started at dusk than at dawn.\n\nAdverse driving behavior. The three alerts (speeding, hard braking, and hard acceleration) identified by the GPDAS are a reflection of adverse driving behavior independent of the environmental driving context. The GPDAS does not capture data on traffic flow or congestion, weather patterns, inclement conditions, or other factors (e.g. altered mental state) that may impact driver behavior. Figure 10 presents data on hard braking, sudden acceleration and speeding for 5 participants across the 5 months. Similar to the spatial and temporal analyses, there was wide variation among the participants. The difference between the least and most aggressive drivers shown here is dramatic: Participant B recorded 25 total alerts while Participant C recorded 400. Participants B and D had no speeding alerts, while participant C recorded all three types of aggressive driving patterns in all 5 months. 
Participant D recorded three times as many braking events as speeding and sudden acceleration events combined. In months 10 and 11, Participants D and E showed a marked increase in aggressive habits, while Participants A and C appeared to decline in aggression. The inter-individual variation in driving alerts over the 5 months may be a reflection of driver preference or style, the driving environment, or the interaction between the two. While the data presented in Figure 10 are a total count of alerts, it is possible to examine the frequency of trips containing one or more alerts. It is unlikely that the high number of alerts among some participants (e.g. C, E) can be solely attributed to the driving environment.\n\nIn summary, the COTS GPDAS device was able to capture objective driving behavior. The data obtained provided a foundation for creating a Naturalistic Driving Profile that included spatial, temporal and behavioral components. This methodology allows us to track a number of variables describing the driving behaviors and patterns of participants over time. We were able to confirm the accuracy of the methodology in identifying the Primary Locations by comparing the results to the actual addresses reported by the participants. Based on the locations of the breadcrumbs, we were able to successfully identify frequently visited locations and general travel patterns. Based on the reported time from the breadcrumbs, we could assess the number of trips driven in daylight vs. night-time. Data capturing special events allowed us to compute the number of adverse driving alerts over the 5-month period.\n\nDiscussion\n\nThis pilot study demonstrated the feasibility of adapting a COTS GPS device to examine daily driving behavior and associated changes in a cohort of cognitively normal older adults. 
The ability to understand changes in driving behavior in the actual environments in which people drive has been unavailable until recently.\n\nThe GPDAS provided continuous driving data that were used to develop a unique Naturalistic Driving Profile combining spatial, temporal and behavioral aspects of driving. Specifically, we were able to obtain date, time, location and a set of metrics that balanced the ability to measure consistency and change in driving behavior, without over-collecting data or over-burdening research participants. The complexities and obstacles of working with large datasets have been well documented14. The key methodological challenges in this research included: 1) synchronizing data collection from the GPDAS and the vendor servers, 2) efficiently processing and error checking the ‘big data’ on a daily basis, 3) developing data cleaning procedures for common errors (e.g. device removal or signal loss) and uncommon errors (e.g. device failure), and 4) synthesizing the data for management and analyses in R and ArcGIS.\n\nThis naturalistic driving methodology provides several advantages over conventional methodologies for understanding driving behavior. The GPDAS can be used to simultaneously monitor real-time driving behavior in a large cohort across the continental United States. The GPDAS’ great strength lies in the ability to observe individuals and compare intra-individual change over a long period of time. Additionally, the ease of installation (less than 1 minute), the absence of vehicle modification, the minimal effort required from participants, and the seamless data acquisition and transmission strengthen its utility.\n\nHowever, there are some limitations in using this device and methodology. We were not able to detect under-speeding as robustly as we had hoped, due to a variety of confounding factors such as traffic, construction-related speed limit changes, and the granularity of the breadcrumbs. 
At the time of this analysis, driver identification was limited to participant self-report. The vendor now offers a Bluetooth Low Energy (BLE) beacon the size and weight of a credit card that may be placed in a wallet or purse. The BLE beacon automatically pairs with the GPDAS when the participant is in the driver’s seat to identify the driver. This simple solution is automatic, requires no participant effort, and conveniently syncs with the device’s data stream and is downloaded with the device’s data. The unknown product life of GPDAS devices poses a particular challenge for longitudinal naturalistic driving studies. We identified several potential warning signs of device failure, and were able to take proactive steps to order replacement devices when these signs appeared. However, such replacement is not simple for some study participants, since it requires travel to our facility for a new device and may reduce their willingness to remain in the study. Finally, it is important to consider the goals, outcomes and amount of participant burden when selecting a methodology for longitudinal studies assessing driving performance and behavior.\n\nEthical approval and consent to participate: All participants were recruited and tested at Washington University School of Medicine. Written informed consent to use and publish clinical details was obtained from all participants. All aspects of the study were approved by the Washington University Institutional Review Board.\n\nF1000Research: Dataset 1. 
Dataset: Creating a driving profile for older adults using GPS devices and naturalistic driving methodology, 10.5256/f1000research.9608.d13584315", "appendix": "Author contributions\n\n\n\nGMB: study concept and design, acquisition of subjects and data, analysis and interpretation of data, and preparation of manuscript; CMT: acquisition data, analysis and interpretation of data, and preparation of manuscript; MW: acquisition data, analysis and interpretation of data, and preparation of manuscript; SHS: analysis and interpretation of data, and preparation of manuscript; AA: study concept and design and preparation of manuscript; DBC: interpretation of data, and preparation of manuscript; BRO: interpretation of data, and preparation of manuscript; JCM: interpretation of data, and preparation of manuscript; CMR: study concept and design, analysis and interpretation of data, and preparation of manuscript.\n\n\nCompeting interests\n\n\n\nGanesh Babulal, Cindy Traub, Molly Webb, Sarah Stout and Aaron Addison declare they have no competing interest.\n\nBrian Ott: Grants and funds: Eli Lily, Avid, Roche, TauRX, Merck, Univita, NIH/NIA; Honoria: NHTSA: Medscape; Consultant: Accera (DSMB).\n\nDavid Carr: Grants and funds: Missouri Department of Transportation; Honorarium: Harvard Speaker; Consultant: Traffic Injury Research Foundation, Advanced Drivers Education Products and Training; Medscape.\n\nJohn Morris: Grants/Funds: Healthy Aging and Senile Dementia, Antecedent Biomarkers for AD: The Adult Children Study, The Dominantly Inherited Alzheimer Network, and Alzheimer Disease Research Center. Honoraria: Cherkin Lecture, Chinese Society, 13th Eibsee Meeting (Keynote Speaker), Korean Dementia Association, and DZNE research center Magdeburg Symposium. Consultant: Lilly USA; ISIS Pharmaceuticals; Charles Dana Foundation. Royalties: Blackwell Medical Publishers; Taylor & Francis. 
Board Member: Board of Directors American Academy of Neurology (AAN).\n\nCatherine Roe: Grants/Funds: NIH/NIA\n\n\nGrant information\n\nNational Institute on Aging [R01AG043434, R01AG43434-03S1, P50-AG05681, P01-AG03991, P01-AG026276]; Fred Simmons and Olga Mohan, and the Charles and Joanne Knight Alzheimer’s Research Initiative of the Washington University Knight Alzheimer’s Disease Research Center.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript\n\n\nReferences\n\nDuchek JM, Carr DB, Hunt L, et al.: Longitudinal driving performance in early-stage dementia of the Alzheimer type. J Am Geriatr Soc. 2003; 51(10): 1342–7. PubMed Abstract | Publisher Full Text\n\nHunt LA, Murphy CF, Carr D, et al.: Reliability of the Washington University Road Test. A performance-based assessment for drivers with dementia of the Alzheimer type. Arch Neurol. 1997; 54(6): 707–12. PubMed Abstract | Publisher Full Text\n\nOdenheimer GL, Beaudet M, Jette AM, et al.: Performance-based driving evaluation of the elderly driver: safety, reliability, and validity. J Gerontol. 1994; 49(4): M153–9. PubMed Abstract | Publisher Full Text\n\nOtt BR, Davis JD, Papandonatos GD, et al.: Assessment of driving-related skills prediction of unsafe driving in older adults in the office setting. J Am Geriatr Soc. 2013; 61(7): 1164–9. PubMed Abstract | Publisher Full Text\n\nBabulal GM, Addison A, Ghoshal N, et al.: Development and interval testing of a naturalistic driving methodology to evaluate driving behavior in clinical research [version 1; referees: 1 approved, 1 approved with reservations]. F1000Research. 2016; 5: 1716. Publisher Full Text\n\nGrengs J, Wang X, Kostyniuk L: Using GPS data to understand driving behavior. J Urban Technol. 2008; 15(2): 33–53. Publisher Full Text\n\nBlanchard RA, Myers AM, Porter MM: Correspondence between self-reported and objective measures of driving exposure and patterns in older drivers. 
Accid Anal Prev. 2010; 42(2): 523–9. PubMed Abstract | Publisher Full Text\n\nWang X, Grengs J, Kostyniuk L: Visualizing travel patterns with a GPS dataset: How commuting routes influence non-work travel behavior. J Urban Technol. 2013; 20(3): 105–25. Publisher Full Text\n\nKelly P, Krenn P, Titze S, et al.: Quantifying the difference between self-reported and global positioning systems-measured journey durations: a systematic review. Transport Rev. 2013; 33(4): 443–59. Publisher Full Text\n\nMolnar LJ, Eby DW: The relationship between self-regulation and driving-related abilities in older drivers: an exploratory study. Traffic Inj Prev. 2008; 9(4): 314–9. PubMed Abstract | Publisher Full Text\n\nCrizzle AM, Myers A, Vrkljan B, et al.: Using in-vehicle devices to examine exposure and patterns in drivers with Parkinson’s disease compared to an age-matched control group. Proc 6th int driving symp hum factor driver assess, training and vehicle design; 2011. Reference Source\n\nNational Highway Traffic Safety Administration: Traffic Safety Facts 2012 data: Older population. 2014. Reference Source\n\nNational Highway Traffic Safety Administration: Fatality Analysis Reporting System: Fatal Crash Trends 2012. 2015. Reference Source\n\nFan J, Han F, Liu H: Challenges of Big Data Analysis. Natl Sci Rev. 2014; 1(2): 293–314. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBabulal G, Traub C, Webb M, et al.: Dataset 1 in: Creating a driving profile for older adults using GPS devices and naturalistic driving methodology. F1000Research. 2016. Data Source" }
[ { "id": "16603", "date": "21 Oct 2016", "name": "Frank-Dietrich Knoefel", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWith the current lack of an accepted standardized approach to assessing changing driving risk associated with aging, and with emerging technology, naturalistic driving is an important area of research. The authors of this paper have a strong track record in publishing in the field of aging, cognition and driving.\nThis paper provides the results of a pilot study of 5 participants using an off-the-shelf global positioning data acquisition system connected to the OBDII port of the older driver’s car. It describes how 5 months of data, collected by the company selling the device, were cleaned and processed to identify such features as spatial components (primary location, unique destinations and driving areas), and temporal components (time of day by season). They also used data obtained from the company on adverse driving behavior: speeding, hard braking and hard acceleration. Using these features they were able to show differences between the drivers. It may be implied that changes in these parameters over time could be used in the future to help assess changing driving risk or help determine interventions to keep older adults driving longer, safely.\nMajor issues:\nWhile this paper is well-written and describes in some detail the work this group has done, it is not clear what the new contribution of this work is. 
While the Background suggests that the objective was to “describe methodological challenges associated with adapting a COTS in-vehicle device that captures and synthesizes GPS data for processing and analysis using GIS techniques” the abstract does not refer to this.\n\nImportantly, the design of this sensor system and analysis of data were not compared to other systems recently described in the literature, for instance by Marshall et al. (CanDrive project), Eby et al.(2012), and the SHRP2 study by Skog & Handel (2014). Specifically, work on destination identification has been published by Wallace et al. (2013). Our group has further addressed the issue of how to anonymize locations to protect individual driver identity (especially home address), a requirement of most research ethics committees (Wallace et al. 2015). Data cleaning has been discussed in papers by Porter and Wallace. The sensor information (breadcrumbs) logged every 30 seconds should be compared to other groups, for instance we sampled data every 1 to 5 seconds. The length of the pilot data is short compared to our published work using 1 year of driving data, from 7 years of available CanDrive data set. Finally, Wallace et al. (2014) have published work on driver identification that is not referenced\n\nSimilarly, to be able to compare to other literature we need the definitions used for the alerts, e.g. how much over the speed limit was considered “speeding” (absolute or percent), and how this was determined (via GIS data?). 
Similarly we do not have definitions for hard accelerating and hard braking.\n\nMinor issues:\nBackground\nPage 3: “recent studies are limited to capturing data… from weeks to 2 months.” CanDrive has collected driving data for 7 years from several hundred drivers – downloaded every 4 to 6 months.\nMethods\nUnique destinations paragraph: for clarity: suggest “two or more distinct breadcrumbs occurring within the same radius during the same time period would be combined….” I would move the sections describing the results of breadcrumbs (Fig 2) and the section how changing the buffer from 100 to 500 feet affects the grouping of breadcrumbs (Fig 3 and 4) into the Results section.\nResults:\nThe Abstract refers to 5 participants as a subset of the study participants, but we don’t have a description of these drivers (age, gender, cognitive scores). The first paragraph states: “A single trip could have hundreds of breadcrumbs that are aggregated and over time can provide specific information about driving patterns and behaviors.” Why were we not given the mean and range of breadcrumbs for trips, or time and distance of trips? Figure 6 – legend: the lines for participants B and D look similar. Page 7: Figure 6 is referred to twice – the second time should read Figure 7 (percentages).", "responses": [ { "c_id": "2329", "date": "07 Dec 2016", "name": "Ganesh Babulal", "role": "Author Response", "response": "Dr. Knoefel: We thank you for your thorough review and the additional references. We have made the following edits to the manuscript in consideration of your comments: We have added in text into the abstract to clarify our objective for this paper.   We added information comparing the GPDAS device and data collection process to prior research studies that used naturalistic driving methodologies. The references have also been added as appropriate to the text.   
We have added definitions for the alerts of speeding, hard braking and hard acceleration in the results section under the subsection, adverse driving behavior.\n\nThe sentence in the background section was modified to point out that some, not all, studies have a short collection interval. We also referenced the CanDrive study and its extensive data collection interval in the discussion section.\n\nThe clarification was made in the methods section to specify the radius. We chose to retain the structure of the methods section to reflect a similar layout in the results section.\n\nDemographics on the five participants were not included in this methodology article since the focus was on the GPDAS device, data collection and processing. Results examining differences among participants in driving behavior will be published later after more driving data are collected.\n\nThe mean and standard deviation for total number of trips and average miles per trip have been added for the group of participants.\n\nWe have also clarified the graph lines for participants B and D in the text and corrected the reference to Figure 7 in the text." } ] }, { "id": "17216", "date": "26 Oct 2016", "name": "Xueqin Qian", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors adapted an in-vehicle device to measure driving behavior in individuals with Alzheimer’s disease. Data from five participants were collected using the COTS Azuga G2 Tracking Device. 
Results from this study suggest that this technology was able to accurately identify locations and travel patterns of the participants.\nGiven the increasing number of individuals diagnosed with Alzheimer’s disease and the important role that driving plays in a person’s life, this study has the potential to provide a measurement tool that can be used reliably to study driving involving seniors with Alzheimer’s disease. I have two recommendations for the authors to consider:\n\nFor the introduction, can the authors provide a little background information on the driving behavior of senior citizens with Alzheimer’s disease? It may be helpful to orient readers who are unfamiliar with this topic. As for the results section, do you have data on the situations in which the alert behaviors happen most frequently? If so, would it be possible to add them to the results section?", "responses": [ { "c_id": "2328", "date": "07 Dec 2016", "name": "Ganesh Babulal", "role": "Author Response", "response": "Dr. Qian: Thank you for your prompt review and comments on this manuscript. On your first point—since this is a methodological article, we limited the amount of information on driving performance and Alzheimer’s disease in the background section. However, we have added a sentence in the introduction with a recent reference (2016) to a systematic review that summarizes the evidence on driving in early-stage AD. On your second point—we have the location (latitude and longitude), time and date of where/when the alert for adverse driving behaviors occurred. We are working to analyze these breadcrumbs, but given the high volume of data for each participant we do not have results available at this time that can be added into the results section of the current manuscript." } ] }, { "id": "17017", "date": "29 Nov 2016", "name": "Monique M. 
Williams", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors describe their adaptation of a commercial off-the-shelf (COTS) in-vehicle device that captures GPS data. These data are analyzed using GIS techniques.\nData were collected from a total of five participants who were members of a longitudinal cohort study. The five participants were aged 65 years and older and were cognitively normal. They possessed a valid driver’s license and drove at least once a week. In addition, all participants had provided Alzheimer’s disease biomarker data (brain imaging or cerebrospinal fluid studies) in the past two years.\nThe authors used the COTS Azuga G2 Tracking Device for data collection. The data included longitude, latitude, and vehicle speed. Data were collected every thirty seconds from the time that the ignition was turned on. In addition, aggressive driving behaviors such as speeding, hard braking, and sudden acceleration were logged. A driving profile was developed for each participant. The profiles included spatial, temporal, and behavioral components.\nThis pilot study demonstrates that the COTS device provides accurate data regarding daily driving behavior in older adults. 
The driving profiles for participants can be compared month-to-month or year-to-year and thus, provide the opportunity for observing changes in driving behavior.\nWith the aging of the population, the pilot study provides data that can be applied to the development of further studies to provide additional characterization of driving profiles of older adults.", "responses": [] } ]
1
https://f1000research.com/articles/5-2376
https://f1000research.com/articles/5-1822/v1
26 Jul 16
{ "type": "Research Article", "title": "Selective inhibition of ASIC1a confers functional and morphological neuroprotection following traumatic spinal cord injury", "authors": [ "Liam M. Koehn", "Qing Dong", "Sing-Yan Er", "Lachlan D. Rash", "Glenn F. King", "Katarzyna M. Dziegielewska", "Norman R. Saunders", "Mark D. Habgood", "Liam M. Koehn", "Qing Dong", "Sing-Yan Er", "Lachlan D. Rash", "Glenn F. King", "Katarzyna M. Dziegielewska", "Norman R. Saunders" ], "abstract": "Tissue loss after spinal trauma is biphasic, with initial mechanical/haemorrhagic damage at the time of impact being followed by gradual secondary expansion into adjacent, previously unaffected tissue. Limiting the extent of this secondary expansion of tissue damage has the potential to preserve greater residual spinal cord function in patients. The acute tissue hypoxia resulting from spinal cord injury (SCI) activates acid-sensing ion channel 1a (ASIC1a). We surmised that antagonism of this channel should provide neuroprotection and functional preservation after SCI. We show that systemic administration of the spider-venom peptide PcTx1, a selective inhibitor of ASIC1a, improves locomotor function in adult Sprague Dawley rats after thoracic SCI. The degree of functional improvement correlated with the degree of tissue preservation in descending white matter tracts involved in hind limb locomotor function. Transcriptomic analysis suggests that PcTx1-induced preservation of spinal cord tissue does not result from a reduction in apoptosis, with no evidence of down-regulation of key genes involved in either the intrinsic or extrinsic apoptotic pathways. We also demonstrate that trauma-induced disruption of blood-spinal cord barrier function persists for at least 4 days post-injury for compounds up to 10 kDa in size, whereas barrier function is restored for larger molecules within a few hours. 
This temporary loss of barrier function provides a “treatment window” through which systemically administered drugs have unrestricted access to spinal tissue in and around the sites of trauma. Taken together, our data provide evidence to support the use of ASIC1a inhibitors as a therapeutic treatment for SCI. This study also emphasizes the importance of objectively grading the functional severity of initial injuries (even when using standardized impacts) and we describe a simple scoring system based on hind limb function that could be adopted in future studies.", "keywords": [ "Spinal trauma", "Neuroprotection", "Psalmotoxin", "PcTx1", "Blood-spinal cord barrier", "Acid-sensing ion channel", "Ischaemia", "Transcriptomic" ], "content": "Introduction\n\nTraumatic spinal cord injuries (SCIs) are devastating for patients due to the sudden and irreversible loss of motor, sensory and autonomic functions at and below the level of injury (Hou & Rabchevsky, 2014; Scivoletto et al., 2014). It is estimated that as many as 500,000 people suffer a spinal cord injury every year (World Health Organization, 2013) with each incident requiring lifelong medical care amounting to approximately $USD 3.5–6.8 million over the course of a lifetime. There are currently no effective pharmacological treatments available to reverse trauma-induced tissue loss and restore lost functions for patients. The prevalence of this devastating and currently untreatable injury makes it a prominent issue for biomedical research.\n\nThe extent of functional losses following a SCI is largely determined by two factors: (i) the level at which the injury occurs (tetraplegia with cervical injuries or paraplegia with thoracic and lumbar injuries) and (ii) the extent of tissue damage at the lesion site (complete or incomplete). The pathology of tissue damage after SCI occurs in a biphasic manner. 
Initial physical (primary) damage at the time of injury due to mechanical compression, stretching and shearing of tissue is typically localised to the central grey matter (Ek et al., 2010; Schwab et al., 2006; Tator & Fehlings, 1991; Wolman, 1965). This is followed by a period of ‘secondary’ expansion of the lesion into surrounding undamaged tissue (the peri-injury zone) over hours to days after injury (Ek et al., 2012; Zhang et al., 2012). Our previous work has shown that secondary loss of grey matter is mostly complete within the first 24 h, whereas secondary loss of surrounding white matter tracts continues for several days post-injury (Ek et al., 2010; Ek et al., 2012). Since this secondary loss is tissue that survived the initial impact, it is potentially salvageable if suitable neuroprotective treatments can be identified, administered and gain access to the injury site before it has become irreversibly damaged.\n\nPrimary tissue loss is characterized by extensive necrotic cell death similar to haemorrhagic stroke. Indeed, magnetic resonance imaging (MRI) studies of human SCIs have shown positive correlations between the extent of spinal haemorrhage/oedema and the extent of permanent functional deficits in patients (Boldin et al., 2006; Flanders et al., 1999; Parashari et al., 2011). In contrast, secondary tissue loss largely occurs by apoptotic cell death (Byrnes et al., 2007; Yong et al., 1998). The mechanisms underlying this secondary apoptotic period are thought to involve structural, cellular, biochemical and vascular changes in the region surrounding the primary injury site (reviewed in Schwartz & Fehlings, 2002 and Zhang et al., 2012). Vascular compromise appears to play a prominent role, with evidence from animal and human studies showing marked reductions in blood flow at the site of the spinal injury and in adjacent proximal regions (Rivlin & Tator, 1978; Soubeyrand et al., 2013; Tei et al., 2005). 
Compression and rupture of central grey matter blood vessels not only disrupt blood supply to the central grey matter, but also to the deeper layers of surrounding white matter that are supplied by these damaged central vessels (Koyanagi et al., 1993; Losey & Anthony, 2014; Losey et al., 2014; Tator & Koyanagi, 1997). The resulting hypoperfusion creates a zone of acute tissue hypoxia and ischaemia surrounding the physical lesion, commonly referred to as an ischaemic penumbra. Trauma-induced ischaemia, and the consequent hypoxia, are widely regarded as central initiators of the cascade of events underlying secondary tissue damage after SCI (Amar & Levy, 1999; Rowland et al., 2008; Tator & Fehlings, 1991; Tator & Koyanagi, 1997) and have also been shown to promote apoptosis in rats (Linnik et al., 1993) and piglets (Mehmet et al., 1994).\n\nThe mechanisms by which hypoxia/ischaemia induce cell death are not clearly understood. Acute tissue acidosis in regions of hypoperfusion is sufficient to activate acid-sensing ion channel 1a (ASIC1a), a proton-gated ion channel that mediates influx of sodium and calcium into cells (Yermolaieva et al., 2004). Preventing ASIC1a activation with intravenous HCO3- reduced tissue loss and functional deficits in a traumatic brain injury model (Yin et al., 2013), and genetic ablation or pharmacological inhibition of ASIC1a reduces neuronal injury following ischaemic stroke (Xiong et al., 2004; Yin et al., 2013). It has been proposed that excessive Ca2+ influx via ASIC1a may induce mitochondrial dysfunction (Friese et al., 2007; Sherwood et al., 2011) and promote activation of intrinsic apoptotic pathways including alterations in BAX/BCL2 ratios and activation of caspase-3 (Smaili et al., 2003).\n\nASIC1a is expressed by most neurons in the central and peripheral nervous systems and multiple studies have confirmed expression on spinal cord neurons (Baron et al., 2008; Wu et al., 2004). 
ASIC1a expression on oligodendrocyte lineage cells has also been reported, implicating this channel in both grey and white matter damage (Feldman et al., 2008).\n\nPsalmotoxin (PcTx1) is a 40-residue, 4.6 kDa peptide from venom of the Trinidad chevron tarantula, Psalmopoeus cambridgei (Escoubas et al., 2000). PcTx1 is the most potent described inhibitor of ASIC1a with an IC50 of ~1 nM, and it has minimal effect on other ASIC subtypes (Escoubas et al., 2000). In addition to PcTx1, ASIC1a is inhibited to a lesser extent by the diuretic amiloride (IC50 ~10 μM; (Gründer & Chen, 2010)), non-steroidal anti-inflammatory drugs such as flurbiprofen (IC50 ~350 μM; (Voilley et al., 2001)) and alkaloids such as sinomenine (IC50 ~0.27 μM; (Wu et al., 2011)). While all of these non-selective ASIC blockers are neuroprotective in rodent stroke models (Mishra et al., 2011; Wu et al., 2011; Xiong et al., 2004; Zheng et al., 2007), PcTx1 provides the best level of neuroprotection (Pignataro et al., 2007; Xiong et al., 2004; Xiong et al., 2006; Xiong et al., 2008). Intracerebroventricular administration of recombinant PcTx1 at 2 h post-stroke was found to reduce infarct volume by ~70% in a rat model of middle cerebral artery occlusion, which correlated with improvements in neurological score and motor function as well as preservation of neuronal architecture (McCarthy et al., 2015). There has been one report of neuroprotection in a SCI model using intrathecal administration of a P. cambridgei venom extract containing PcTx1 (Hu et al., 2011). Unfortunately, PcTx1 constitutes only ~0.4% of the protein content of P. cambridgei venom (McCarthy et al., 2015), which contains hundreds of other bioactive peptides that act on a range of ligand- and voltage-gated ion channels. 
Thus, the results obtained with crude venom extracts cannot be taken as definitive evidence that ASIC1a antagonism by PcTx1 is neuroprotective in SCI.\n\nHere we show conclusively that pure recombinant PcTx1 delivered systemically is neuroprotective after SCI in rats, reducing the extent of secondary tissue loss and improving locomotor function. In addition, transcriptomic analysis revealed a significant effect of initial lesion size on the differential expression of genes in different regions of the spinal cord in relation to the injury centre. Inflammatory associated genes predominated in the rostral penumbra, whereas genes associated with blood vessels (e.g. actin and myosin) dominated at the lesion centre, possibly indicative of vascular remodelling. Together the transcriptomic data suggest that the mechanism of action by which PcTx1 confers neuroprotection involves modification of inflammatory and vascular responses to SCI.\n\n\nMaterials and methods\n\nRecombinant PcTx1 was produced as described previously via expression in the periplasm of Escherichia coli (Saez et al., 2011; Saez et al., 2015). Briefly, recombinant His6-MBP-PcTx1 fusion protein was isolated from cell lysates by passage over Ni-NTA Superflow resin (QIAGEN) and the His6-MBP tag was then removed by cleavage of the fusion protein with tobacco etch virus (TEV) protease. The released recombinant PcTx1 containing a non-native N-terminal serine to facilitate TEV cleavage (Saez et al., 2011) was isolated to >95% purity using reverse-phase HPLC. The isolated recombinant PcTx1 peptide is equipotent with native PcTx1 (Saez et al., 2011).\n\nAll procedures involving animals were approved by The University of Melbourne Animal Ethics Committee (Approval number: 1212637) and conducted in compliance with Australian National Health and Medical Research guidelines. 
Adult female Sprague Dawley rats (weight range 205–285 g) were supplied by The University of Melbourne Biological Research Facility and housed in groups of 2–4 per cage on a 12 h light/dark cycle with ad libitum access to food and water. A total of 34 rats were randomly assigned into three treatment groups: PcTx1-treated (n=12), saline-treated (n=16) or uninjured/untreated controls (n=6).\n\nRats were deeply anaesthetized with inhaled isoflurane (3% in oxygen at 1.5 L/min, Lyppard Australia). The thoracic spinal cord was exposed at the T10 vertebral level via a skin incision and vertebral laminectomy. The vertebral column was then stabilized in a stereotaxic frame with clamps attached to the T9 and T11 dorsal vertebral spines. A single contusion injury was applied to the dorsal surface of the exposed spinal cord using a computer-controlled impactor (Ek et al., 2010; Ek et al., 2012), with impact parameters previously determined to produce moderate incomplete spinal cord lesions mostly confined to the central grey matter.\n\nMini-osmotic pumps (Alzet®1003D, BioScientific, Australia) were filled with a solution of PcTx1 in sterile saline (1.03 mg/ml) and primed for 12–24 h at 37°C in phosphate buffered saline (PBS). Immediately after the contusion injury, animals assigned to the PcTx1-treated group (n=12) received an intraperitoneal loading dose of PcTx1 (12.5 μg/kg) and a mini-osmotic pump containing PcTx1 was implanted subcutaneously between the shoulder blades and the wound site closed with several layers of sutures. The role of the pump was to slowly release PcTx1 (1.08 μg/h) subcutaneously to compensate for renal losses and maintain a stable plasma concentration of the drug over a 48 h period. Saline-treated animals (n=16) received an equivalent intraperitoneal injection of saline, but no osmotic pump was implanted. 
Uninjured, untreated rats (n=6) were included as reference controls.\n\nAn inherent feature of all spinal contusion models is inter-animal variance in the size of the spinal lesions produced. Whilst the impactor device used in this study delivers standardized and highly reproducible impacts (Ek et al., 2010; Ek et al., 2012), differences in the number and location of ruptured blood vessels will result in differences in the extent of tissue ischaemia, hypoxia and ultimately tissue loss. Accordingly, the functional severity of the initial injuries in each animal was assessed upon full recovery from anaesthesia and again at 24 h post-injury using a simplified injury severity (SIS) scale ranging from 0 (normal locomotion) to 3 (complete flaccid hind limb paralysis; see Table 2). This scale was independently developed, but the assessment criteria are similar to those described by Herrmann et al. (2008) and Anderson et al. (2016), and shown in Table 2. Each hind limb was assessed separately and an average of both hind limbs recorded. The criteria for inclusion in the study were an SIS score >1 and <3. Only one saline-treated animal fell outside this range (1.0) and was excluded. From this point on the study was blinded, ensuring that the researchers performing subsequent analyses (functional, morphological and transcriptomic) were not aware of which SCI treatment group the rats belonged to. Animal identities were decoded at the end of the data collection.\n\nA number of commonly used functional tests were conducted at 6 weeks post-injury (±0.5 weeks) by 2–3 independent observers. All observers were blinded to animal identity and treatment group during the testing. Open field locomotion and gait were assessed using the Basso, Beattie and Bresnahan (BBB) scale (Basso et al., 1995) as animals walked across a flat surface. 
Complex coordinated motor function was assessed by the animals’ ability to traverse a horizontal ladder of 76 cylindrical metal bars (3 mm thick, 7 mm apart) in which 15 bars had been removed (bars: 2, 6, 14, 22, 26, 33, 42, 44, 49, 54, 56, 62, 64, 66, 74 were selected using a random number generator; Haahr, 1998). Animals were video recorded during three attempts at crossing the ladder and the footage analysed by a blinded observer to count the number of hind limb foot faults (foot slipping below the ladder between rungs). A tapered beam test was used to analyse complex coordinated motor function and balance. This test requires animals to walk along a narrowing beam suspended 1.2 m above the ground. The beam tapered from a width of 60 mm to 15 mm over a distance of 1.35 m. A slim ledge 1 cm below the beam on either side automatically caught and counted a foot fault each time one of the animal’s legs slipped off the beam and came into contact with the ledge. This task was repeated six times and the average number of foot faults recorded. The limb pattern during swimming in a tank of water (27–31°C) was also video recorded and analysed to determine if the animals used alternating hind limb movements during swimming, indicative of supra-spinal connections.\n\nAt the end of each experimental period (24 h or 6 weeks post-injury) injured animals (together with age-matched controls) were terminally anaesthetized with an overdose of inhaled isoflurane (Lyppard Australia) and transcardially perfused with 50 ml heparinised (5 IU/ml) PBS (80 ml/min/kg) followed by 150 ml of 4% paraformaldehyde solution.\n\nThe spinal cords were dissected out and a 10 mm segment centred on and enclosing the injury site removed and post-fixed overnight in Bouin’s fixative (Sigma-Aldrich, Australia). Cords were dehydrated through increasing concentrations of ethanol then cleared overnight in chloroform before being mounted in paraffin wax (Paramat, VWR International). 
The embedded cord segments were serially sectioned in the transverse plane (Leica RM2125RT microtome, 5 μm thick sections) and sequential ribbons of 10 sections mounted on numbered glass slides. Slides were dewaxed, cleared through histolene (Fronine, Australia) and hydrated through decreasing concentrations of ethanol. Standard procedures were used for Hematoxylin and Eosin (H&E; Sigma-Aldrich) and Luxol Fast Blue (LFB; Gurr UK) staining. For immunohistochemical stains, peroxidase and protein blockers were applied (DAKO, 2 h each) before overnight incubation with primary antibodies (1:500 monoclonal mouse anti-CNPase, Sigma-Aldrich C5922 or 1:500 polyclonal rabbit anti-FOX3, abcam Ab104225) at 4°C. Secondary antibodies (1:100 polyclonal horse anti-mouse biotinylated, VECTOR BA2001 or 1:200 polyclonal swine anti-rabbit, DAKO Z0196) were administered and incubated for 1.5 h before application of either ABC kit (DAKO) or polyclonal rabbit PAP (Sigma-Aldrich P1291) complexes respectively. For both methods the final reaction product was developed with the DAB+ kit (DAKO).\n\nTissue sections were viewed and photographed (with scale bars) using a light-microscope (Olympus BX50 fitted with a DP60 digital camera). Areas of positive stained tissue were measured using ImageJ (Abràmoff et al., 2004). The embedded scale bar in each image was used to calibrate the pixel-to-distance ratio, the perimeter of positive staining outlined manually and the enclosed area measured. In sections containing a central cystic cavity, the area of the cystic cavity was subtracted from the total cord cross-sectional area of the section to determine the area of remaining tissue.\n\nSpecific tissue regions containing white matter tracts involved in hind limb motor function were outlined and measured separately. The dorsal column corticospinal tract (dcCST) located at the base of the dorsal column was not measured because it was no longer present at 6 weeks post-SCI. 
The rubrospinal tracts (RST) and dorsolateral corticospinal tracts (dlCST) are located in the dorsolateral white matter (dlWM) just below the dorsal horns. Preserved dlWM area was defined as positive stained tissue bounded by the lower edge of the dorsal horn at the dorsal extremity and a line drawn at right angles to the pial surface of the cord at a point located 0.3 mm down the outer circumference at the ventral extremity (shown in Figure 1). For each cord, the sum of the left and right dlWM preserved areas was recorded.\n\nFigure 1. Left is from an uninjured control animal and right is from the injury centre in an animal 6 weeks after SCI. The outlined regions (black dashed lines) mark the areas used for measurements of dorsolateral and ventromedial white matter (dlWM and vmWM respectively). The red outline indicates the position of the dorsal column corticospinal tract (dcCST) at the base of the dorsal column. The dlWM regions were defined as all stained white matter from the ventral side of the dorsal horn down to 0.3 mm along the outer circumference of the cord. Both the left and right dlWM areas were measured. The vmWM was defined as the total area of stained tissue within a rectangle measuring 0.4 mm × 0.6 mm centred on the sulcal fissure (small black arrow in left image). The absence of most of the dorsal column and dcCST at 6 weeks post-injury is indicated by the large arrow in the right image.\n\nThe ventromedial white matter (vmWM) showed substantial tissue damage (“holes”) in all injured animals and thus simply outlining the perimeter of positive staining did not provide an accurate assessment of the true extent of tissue preservation. To estimate the actual area of positive tissue staining, a 0.4 mm × 0.6 mm rectangle was placed over the ventromedial region centred on and enclosing the sulcal fissure (Figure 1). 
The number of pixels within the rectangle that were above a baseline threshold colour (the minimum colour considered to reflect positive staining) was recorded and the corresponding area calculated using the known pixel-to-distance ratio for the image (i.e. the embedded scale bar). All dlWM and vmWM measurements were recorded as the average of two sections (5 to 10 sections, or 25–50 μm, apart) from the centre of the injury site. The centre of the injury in each cord segment (the section with the smallest preserved tissue area) was determined from serial reconstructions of tissue area in H&E sections equally spaced along the entire length of the cord segment. All LFB and CNPase dlWM and vmWM measurements were made within ± 450 μm of the lesion centre (i.e. ± 9 slides).

To count FOX3-positive grey matter neurons, immunopositive cells in images of the stained spinal sections were counted manually using the count tool in Photoshop (Adobe Systems). This places a sequentially numbered spot on top of each immunopositive cell. The image together with its numbered spots was saved and compared with the same image counted by the other assessors.

The functional performance of PcTx1-treated and saline-treated animals was compared using two-way analysis of covariance (ANCOVA). This method fits least squares linear regression lines to the raw data for individual animals and then compares the slopes and intercepts of the regression lines. Inter-animal differences in the initial injury severity (SIS scores) introduce a covariate that makes a substantial contribution to the total observed variance in each treatment group. The presence of this covariate negates the use of simple parametric testing such as ANOVA and Student's t-test.

Blood-spinal cord barrier (BSCB) function at the lesion site was assessed between 2 h and 7 days post-SCI for different-sized permeability tracers (n = 2–3 per tracer) in a separate series of untreated, injured rats.
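The ANCOVA-style comparison described above, in which both treatment groups share a common slope against the SIS covariate and differ only in elevation, can be sketched with ordinary least squares. The synthetic data and effect sizes below are illustrative assumptions, not the study's raw scores:

```python
import numpy as np

def common_slope_fit(covariate, outcome, treated):
    """Least-squares fit of outcome = b0 + b1*covariate + b2*treated.

    Both groups share one slope (b1) against the covariate (here, the
    SIS score); b2 is the vertical offset (elevation) of the treated
    group's regression line relative to the control group's line."""
    X = np.column_stack([np.ones_like(covariate), covariate, treated])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coef  # (intercept, common slope, treatment elevation)

# Synthetic example: the treated line sits 2 units above control at every
# SIS value, with a shared slope of -6 and small measurement noise.
rng = np.random.default_rng(1)
sis = np.tile(np.linspace(1.5, 2.75, 8), 2)
treated = np.repeat([1.0, 0.0], 8)
outcome = 25.0 - 6.0 * sis + 2.0 * treated + rng.normal(0, 0.2, 16)
b0, slope, elevation = common_slope_fit(sis, outcome, treated)
```

A significance test on the elevation term (e.g. an F-test comparing this model against one without the treatment column) is what distinguishes full ANCOVA from the plain fit sketched here.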
At 30 minutes prior to the time of BSCB assessment, animals were anaesthetised with urethane (1–1.5 g/kg, i.p.) and given a femoral vein injection (100 μl) of one of the following permeability tracers dissolved in sterile saline: 286 Da biotin ethylene diamine (BED, A-1593 Life Technologies Australia), 3 kDa biotin-dextran-amine (BDA, D3308, Life Technologies Australia), 10 kDa dextran rhodamine B (DRB, D-1824, Life Technologies Australia), 44 kDa horseradish peroxidase (HRP, P-8375 Sigma-Aldrich, Australia) or 70 kDa dextran fluorescein (DF-70kDa, A-1823, Life Technologies Australia). After 20 minutes of circulation time the animals were administered an overdose of anaesthetic (inhaled isoflurane). A 10 mm segment of spinal cord enclosing the injury site was removed post-mortem and immersion-fixed in 4% paraformaldehyde. For animals injected with HRP or BED, the cord segments were embedded in 4% agar, serially sectioned (70 μm thickness) on a vibrating microtome (Leica VT1000S) and stained with DAB (D-5905, Sigma-Aldrich, Australia). For animals injected with the dextran tracers, the spinal segments were embedded in paraffin wax, serially sectioned in the transverse plane at 5 μm thickness and mounted on glass slides (10 sequential sections per slide). Sections approximately 0.5 mm apart along the entire length of the cord segment were inspected under fluorescence microscopy. BSCB function was interpreted as disrupted where tracer was visible outside blood vessels within the spinal tissue and as intact where the tracer was completely confined to the lumen of blood vessels.

PcTx1-treated (n=3) and saline-treated animals (n=3) were sacrificed 24 h after injury (overdose of inhaled isoflurane), the thoracic region of the spinal cord was removed under RNase-free conditions and 3 mm long segments collected from the injury centre, the adjacent cord rostral to the injury centre and the adjacent cord caudal to the injury centre.
Each 3 mm segment was placed into a separate cryovial containing RNA-later solution (Life Technologies Australia) for 2 h before being snap frozen by placing the cryovials into liquid nitrogen. Total mRNA in each sample was extracted using commercial kits (RNeasy, Qiagen) and stored at -80°C. Transcriptome datasets were generated using RNAseq analysis (Illumina HiSeq 2000 platform, 100 bp single end reads, Australian Genome Research Facility, AGRF, Melbourne, Australia). Analysis of differential transcript expression between PcTx1-treated and saline-treated animals was conducted using edgeR software with sample weights (Robinson et al., 2010; Saunders et al., 2014; Zhou et al., 2014). The significance threshold was set at a fold change (FC) greater than 2 in either the positive or negative direction.

Analysis of transcript variance revealed effects of initial lesion severity (SIS scores) between individual animals within each treatment group (see Transcriptomic analysis below). Accordingly, subsequent analyses compared only datasets from animals with similar-sized initial injuries.

FC values were calculated from FPKM (fragments per kilobase of transcript per million mapped reads) values in the PcTx1-treated vs saline-treated groups for animals with similar SIS scores. Positive fold changes indicate that the genetic transcript was present in a higher proportion in PcTx1-treated animals in that part of the cord compared to saline-treated animals, with negative values indicating lower proportions in PcTx1-treated animals. Gene set testing was conducted to identify enriched pathways using the gene ontology 'Panther' classification system (Mi et al., 2013).


Results

Although all animals received the same mechanical impact to the spinal cord (2 mm diameter tip, 0.30 m/s ± 0.04 S.D. impact velocity, 1.44 mm ± 0.01 S.D. penetration depth, 0.997 s ± 0.004 S.D. compression time), the resulting initial injury severity (SIS) scores ranged from 1.50 to 2.75 (Table 1). This reflects the inherent variability of spinal tissue injuries, even when they are produced using a well-validated impactor (Ek et al., 2010) operated by an experienced experimenter. There was no correlation between any of the impact parameters and SIS scores (data not shown). There was also no significant difference between the average SIS scores of the PcTx1-treated and saline-treated groups (Table 1). However, within each treatment group there was a range of initial injury severities and for this reason each animal's results were plotted against its initial SIS score for analysis.

All animals received the same mechanical impact to the exposed lower thoracic (T10) spinal cord.

Animals are first assessed by the presence (scores 0–1.5) or absence (scores 2–3) of weight-supported hind limb stepping. Those exhibiting weight support are then graded on the basis of the visual severity of any gait abnormality (none, slight or gross). Those without weight support are graded on the basis of how many joints (hip, knee, ankle) they are able to voluntarily flex/extend (none, one or more than one) when the animal is restrained by holding the tail, gently lifting the hind quarters and lowering them back down. Each hind limb is assessed independently and the average of the two recorded. The most equivalent grades on the scale used by Herrmann et al. (2008) and Anderson et al. (2016) are shown on the right.

At 6 weeks post-injury, there was an inverse correlation between initial injury severity (SIS) scores and residual hind limb function (BBB gait scores, Figure 2a). Over the entire range of initial injury severities, PcTx1-treated animals scored higher on the BBB scale than saline-treated rats.
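The SIS grading criteria in Table 2 can be encoded as a small decision rule. The table specifies the ranges (0–1.5 with weight support, 2–3 without) and the grading criteria, but the exact numeric value assigned to each grade within those ranges is an assumption for illustration:

```python
def sis_limb_score(weight_support, gait_abnormality=None, joints_moved=None):
    """SIS score for one hind limb, following the Table 2 criteria.

    With weight-supported stepping the limb scores in the 0-1.5 range,
    graded by visual gait abnormality; without it, the 2-3 range applies,
    graded by how many joints (hip, knee, ankle) can be voluntarily
    flexed/extended. The specific value per grade is an assumption."""
    if weight_support:
        return {"none": 0.0, "slight": 0.75, "gross": 1.5}[gait_abnormality]
    # categories: two or more joints, exactly one joint, no joints movable
    return {2: 2.0, 1: 2.5, 0: 3.0}[min(joints_moved, 2)]

def sis_score(left, right):
    """Each hind limb is assessed independently; the average is recorded."""
    return (sis_limb_score(**left) + sis_limb_score(**right)) / 2.0
```

Under this mapping, bilateral absence of any joint movement yields the maximum score of 3 (complete paralysis), matching the scale's upper bound.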
Linear regression and ANCOVA analysis revealed that the data were best described by two separate lines with the same slope, but with the PcTx1-treated group having a significantly higher elevation compared to the saline-treated group (p=0.002, n=6–10). Uninjured control rats (diamond symbols in Figure 2a, n=4) all scored 21 on the BBB scale.

Locomotor functional data are plotted against the initial injury severity (SIS scores); (a) BBB functional scores, (b) horizontal ladder and (c) tapered beam. Each data point represents an individual animal: PcTx1-treated (blue circles), saline-treated (red open circles) and uninjured controls (grey diamonds). Correlation lines in (a) were fitted by least squares linear regression. The data were best described by two separate lines of the same slope, but with a statistically significant difference in elevation (p=0.002; ANCOVA).

Figure 2b shows the correlation between the initial SIS scores for individual rats and their average number of hind limb foot faults on the horizontal ladder task (average of three repeat attempts). For a given SIS score, PcTx1-treated rats tended to make fewer hind limb errors than saline-treated animals (n=6 per group). No significant correlation was observed between SIS scores and foot faults. Control rats (n=4) made no hind limb errors along the ladder. There was no significant difference between the two treatment groups on the tapered beam task (Figure 2c) and no correlation between initial SIS scores and average foot faults (from six attempts) on the tapered beam (not shown).

In the swimming test, all PcTx1- and saline-treated rats were able to kick their hind limbs in a coordinated manner, indicating the presence of some supraspinal control of hind limb function.
There was no observable difference between the two treatment groups in the swimming test (not shown).

Analyses of FOX3 immunopositive neurons, total cross-sectional tissue area (as shown by H&E staining) and general cross-sectional white matter area (as shown by CNPase and LFB staining) showed no significant differences between PcTx1-treated and saline-treated rats at any point along the spinal cord at 6 weeks post-injury. An in-depth analysis of regions of locomotor importance at the injury centre was therefore conducted.

For a white matter tract in the spinal cord to be functional it requires an uninterrupted connection between the brain nuclei from which the tract axons originate and synaptic connections further down the spinal cord. Thus only the tracts that run all the way through the injury centre would have functional significance for hind limb locomotion. The injury centre was determined by plotting the H&E stained area along the cord (Figure 3) and selecting the point of least remaining tissue. In the rodent spinal cord, supraspinal motor fibres descend in a number of areas of the cord. The main corticospinal tracts run in the dorsal column (dcCST) and the dorsolateral white matter (dlCST) (Steward et al., 2004). Rubrospinal and reticulospinal tracts are also thought to be involved in hind limb locomotion (Watson & Harrison, 2012). In this study, the dcCST was completely destroyed and no longer present at 6 weeks post-injury in all rats (see Figure 1). The area of LFB staining in the ventromedial white matter (vmWM) at the injury centre was highly variable and showed no difference between treatment groups. There was, however, a noticeable difference in the appearance of myelin in this region between treatment groups, as illustrated in Figure 4.

Areas of preserved tissue were calculated as the total cord area minus the area of the central cystic cavity. PcTx1-treated rats (blue lines; n=6) and saline-treated rats (red lines; n=6) are shown.
Negative values on the x-axis indicate regions rostral to the injury centre (0) and positive values indicate regions caudal to the injury centre. A visual reconstruction of serial H&E transverse sections corresponding to equivalent points along the graph is shown. The relative size and position of the impactor tip at the time of injury is also indicated.

Luxol fast blue (LFB) stained transverse cross-sections from the injury site of uninjured control (a and b), saline-treated (c and d) and PcTx1-treated (e and f) animals. The outlined areas in (a), (c) and (e) are shown at higher magnification in (b), (d) and (f) to illustrate the preservation of myelin in the ventromedial areas of these cords. Note the increased numbers and sizes of myelin-devoid spaces in the saline-treated cord (d) compared to the PcTx1-treated cord (f). The saline-treated rat (c and d) had an initial simplified injury severity score (SIS) of 1.73 and the PcTx1-treated rat (e and f) had an SIS of 1.9.

In PcTx1-treated animals, the myelin showed fewer axotomised tracts and a denser staining pattern compared to saline-treated SCI controls (Figure 4). In addition, PcTx1-treated animals had significantly larger areas of preserved dorsolateral white matter (dlWM) compared to saline-treated animals (Figure 5). ANCOVA revealed that two separate lines with the same slope best described the data, with the PcTx1-treated group having a significantly higher elevation (Figure 5; p=0.003, n=6). Similar results were obtained from adjacent sections stained with H&E or CNPase (data not shown).

Saline-treated rats (red open circles; n=6), PcTx1-treated rats (blue circles; n=6) and control rats (grey diamonds; n=2) are shown. All measurements were the average of two sections from the injury centre of each animal. The correlation lines shown were fitted by least squares linear regression.
The data were best described by two separate lines of the same slope, but with a statistically significant difference in elevation (p=0.003, ANCOVA).

A positive correlation was observed between the preserved area of LFB staining in the dlWM at the injury centre and functional performance assessed using BBB scores (Figure 6a) and the horizontal ladder task (Figure 6b). There was no correlation between LFB area in the dlWM and performance on the beam task (not illustrated). PcTx1-treated rats typically had larger areas of preserved dlWM that correlated with better functional outcomes.

Cross-sectional areas of LFB staining in each animal at 6 weeks post-injury were measured as the average of two sections from the centre of the injury. (a) BBB gait analysis; (b) horizontal ladder foot faults. Each data point represents an individual animal: PcTx1-treated rats (blue circles; n=6) and saline-treated rats (red open circles; n=6).

Although 6 weeks post-injury is a suitable time-point for long-term functional assessment, it is well after the response to injury has concluded. Analysis was therefore also conducted at an earlier 24 h post-injury time-point, when white matter loss is still continuing (Ek et al., 2010; Ek et al., 2012), to determine the potential mechanisms of PcTx1 neuroprotection.

Assessments of tissue area using H&E and LFB staining could not be performed accurately at this time because of the presence of significant amounts of blood within the cord and because the distinction between live and recently dead tissue is not yet apparent; both factors affect interpretation of staining. The only morphological analyses that could be conducted meaningfully were immunocytochemistry for FOX3 and CNPase.
No significant differences were observed between PcTx1-treated and saline-treated groups at 24 h post-injury in the average numbers of FOX3 immunopositive neurons at any point along the 10 mm length of cord segment (not illustrated).

Total white matter area as determined by measurements of CNPase-positive staining revealed preservation of tissue at the injury centre in PcTx1-treated animals with higher severity injuries (SIS ~2.5, 1.27 mm² and 1.05 mm² in PcTx1-treated versus 0.631 mm² and 0.734 mm² in saline-treated), but not in animals with lower severity injuries (SIS 1.5, 1.38 mm² in PcTx1-treated versus 1.40 mm² in saline-treated). This greater effect of the PcTx1 treatment in animals with more severe injuries was also apparent in the dlWM area measurements (SIS ~2.5, 0.166 mm² and 0.137 mm² in PcTx1-treated versus 0.09 mm² and 0.056 mm² in saline-treated, compared to SIS 1.5, 0.143 mm² in PcTx1-treated versus 0.147 mm² in saline-treated). This suggests that PcTx1 treatment was more effective at preserving white matter in animals with more severe injuries at 24 h post-SCI.

RNAseq transcriptomic analysis of the spinal cord at 24 h after a contusion injury was performed on PcTx1-treated and saline-treated animals (n=3 per group) using the Illumina platform. The SIS scores in the PcTx1-treated animals were 1.9, 2.5 and 2.75, while in the saline-treated group they were 2.5, 2.5 and 2.75. Initial differential analysis of the datasets compared the means (n=3) of the PcTx1-treated and saline-treated groups. This highlighted a large number of genes that were significantly differentially expressed (FC>2 or FC<-2): 516 genes rostral to the injury site, 136 genes at the injury centre and 133 genes caudal to the injury site.

However, marked inter-animal variance was apparent in the transcript numbers of many of the differentially expressed genes.
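The signed fold-change convention used for these FPKM comparisons (positive when a transcript is more abundant in PcTx1-treated animals, negative when less abundant) can be expressed as a small helper. This is a sketch of the convention described in Methods, not the edgeR implementation, and it assumes nonzero FPKM values:

```python
def signed_fold_change(fpkm_treated, fpkm_saline):
    """Signed FC between two FPKM values (assumes both are nonzero).

    Positive: transcript more abundant in the PcTx1-treated sample;
    negative: less abundant (the ratio reported with a minus sign)."""
    if fpkm_treated >= fpkm_saline:
        return fpkm_treated / fpkm_saline
    return -fpkm_saline / fpkm_treated

def is_differential(fc, cutoff=2.0):
    """Apply the FC > 2 or FC < -2 significance threshold used here."""
    return fc > cutoff or fc < -cutoff
```

Note that values between -2 and 2 (e.g. a 1.5-fold increase) fall below the threshold and are treated as unchanged.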
When transcript counts for the top 50 up- and down-regulated genes were plotted against the initial injury severity (SIS score) for each rat, it was apparent that the severity of the initial injury had a profound effect on gene expression levels (see Figure 7). Accordingly, transcript fold changes were calculated between animals with similar-sized injuries. Figure 8 shows gene expression changes between PcTx1-treated and saline-treated animals with initial injury severities (SIS) of 2.5 and 2.75. Not only was there a difference in the number of up-regulated and down-regulated genes between the two injury sizes, but there were also marked differences in the panels of differentially expressed genes in different segments of the spinal cord. Table 3 lists the top 50 genes identified as being significantly up- or down-regulated by PcTx1 treatment for each spinal cord segment in animals with SIS scores of 2.5 (Table 3a) and 2.75 (Table 3b). Datasets for all genes with FC >2 or <-2 can be found in Tables S1 (see Supplementary file 1). As seen in Table 3a (SIS 2.5), the number of highly up-regulated genes (FC>20) was highest in the rostral (10 genes) and injury (5 genes) segments, whilst in the caudal segment no gene was up-regulated by more than FC 7.5, with most being less than FC 5. The largest FC in the down-regulated dataset, however, was identified in the caudal segment (FC -35.3). The three segments showed very little, if any, overlap in the top 50 up- or down-regulated genes (Tables 3a & 3b).

Data are shown for the injury centre segment and the immediately adjacent rostral and caudal segments for animals with SIS 2.5 injuries (a) or SIS 2.75 injuries (b). Fold changes (FC) were calculated from FPKM values (see Methods).

(a) shows the top 50 up-regulated genes, (b) the middle 50 unchanged genes and (c) the bottom 50 down-regulated genes (Table 3). Each line represents a single gene.
Data shown are for the rostral segment; similar correlations were observed in the injury and caudal segments. Note that transcript numbers increased (a) or decreased (c) for each gene with increasing injury severity (SIS) score. This highlights the importance of using similar-sized initial injuries for comparative studies.

Datasets were obtained from animals with 2.5 and 2.75 SIS scores. Negative values (red) indicate the number of down-regulated genes while positive values (blue) indicate the number of up-regulated genes.

In contrast, a different pattern of expression was observed in animals with more severe injuries (SIS score of 2.75, Table 3b). In the rostral segment only one transcript was up-regulated by FC>20 and none were down-regulated by more than FC<-20. However, in both the injury and caudal segments there were several transcripts that changed by more than 20 FC (both up- and down-regulated). The largest FC values were obtained for the caudal segment: two transcripts were up-regulated by FC>70 while three were down-regulated by FC<-30 (Table 3b). Again, there was little, if any, overlap in the top 50 up- or down-regulated genes between the three segments.

The top 50 up-regulated and down-regulated genes for each segment were analysed using 'String', which orders them into functional protein association networks (Szklarczyk et al., 2015). These are illustrated in Supplementary file 2. A common feature of the protein association pathways was chemokine signalling (and immune response) related proteins. In both the SIS 2.5 and SIS 2.75 groups, PcTx1-treated rats showed up-regulation of chemokine-based signalling in the rostral segment and a down-regulation in the caudal segments. In the injury segment of the spinal cord, initial injury severity appeared to be the key contributor to chemokine signalling, with a cluster of genes up-regulated in the SIS 2.5 group and a cluster down-regulated in the SIS 2.75 group.
Along with chemokine-related proteins, smooth muscle associated proteins were up-regulated in the injury centre of the SIS 2.5 group. These results suggest PcTx1 may act through modifying the inflammatory response to SCI.

All of the gene transcripts with significant changes (i.e. FC>2 or FC<-2) for the two injury sizes (SIS 2.5 and 2.75) were separated into their biological categories (Figure 9) using the 'Panther' gene classification system (Mi et al., 2013). Genes classed as cellular (~21%) and metabolic (~18%) processes showed the most changes. Gene changes were also observed for apoptotic (~2%) and immune system (~5%) responses, categories of potential importance to tissue preservation following SCI. However, when canonical genes known to be involved in the intrinsic and extrinsic apoptotic pathways (Elmore, 2007) were analysed individually, no differences between PcTx1-treated and saline-treated rats were detected at any injury severity or in different regions of the cord (Supplementary file 3). The change in caspase-3 levels barely reached significance (FC 2.2) in the rostral segment of SIS 2.5 PcTx1-treated compared to saline-treated animals. Taken as a whole, these results suggest that PcTx1 treatment may not have a major influence on apoptotic pathways at the 24 h post-SCI time point.

Separate datasets were obtained from animals with SIS scores of 2.5 (top, lighter bars) and 2.75 (lower, darker bars).

Lists of markers for the main cellular components of central nervous tissue were obtained from Anderson et al. (2016); see Supplementary file 4. Transcripts for neuronal markers were down-regulated for SIS 2.5 injuries in the rostral segment (rbFOX3 FC -2.8; SYT1 FC -2.3) and up-regulated in the injury segment for SIS 2.75 injuries (rbFOX3 FC 6.6; SYT1 FC 4.5).
A similar profile was observed for markers of astrocytes, with down-regulation in the rostral segment of SIS 2.5 injured animals (Gfap FC -2.6; Aqp4 FC -3.1; Slc1a2 FC -4.2) and up-regulation in the injury segment of SIS 2.75 injured animals (Gfap FC 2.3; Aqp4 FC 2.5; Slc1a2 FC 3.3). No statistically significant differences were observed for oligodendrocyte markers (Olig1 and Olig2) or for myelin (MAG) at any injury size anywhere along the cord. Similarly, there were no significant differences in microglial markers anywhere along the cord (apart from Trem2 in the rostral segment of SIS 2.5 injured animals, FC 2.2). However, CD68 (a macrophage marker) was significantly up-regulated in rostral regions (FC 6.9 in SIS 2.5 and FC 2.7 in SIS 2.75), but down-regulated for larger injury sizes in both the injury and caudal segments (SIS 2.75, FC -4.8 at the injury centre and FC -2.2 in the caudal segment). CD14 and CCL2 showed similar profiles to CD68. Thus, in contrast to apoptotic pathways, immune responses (either local or systemic) and astrocyte involvement appear to be affected by PcTx1 treatment at 24 h post-injury.

The different-sized permeability tracers exhibited different temporal patterns of BSCB dysfunction after SCI (Figure 10e). At 2 h and 12 h post-SCI, HRP (44 kDa) was observed diffusing radially out from regions of tissue damage into surrounding tissue (Figure 10a). At 24 h post-SCI, both HRP and the larger dextran fluorescein (70 kDa) were only visible within the lumen of intact blood vessels (see Figure 10b), indicating early restoration of BSCB function for large permeability tracers. All three of the smaller permeability tracers (up to 10 kDa in size) were observed outside blood vessels in and around the injury centre at 24 h, 2 days and 4 days post-SCI (Figure 10c).
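The size-dependent pattern of barrier disruption summarised in Figure 10e can be captured as a simple lookup. The cut-offs below paraphrase the tracer observations reported here and are an illustrative simplification, not a quantitative permeability model:

```python
def bscb_open(tracer_kda, days_post_sci):
    """True if a tracer of the given size is expected to leak into
    spinal tissue at the given time post-SCI, following the pattern
    in Figure 10e: large tracers (>= ~44 kDa) leak only in the first
    hours, while tracers up to 10 kDa leak for about 4 days. An
    illustrative simplification with assumed hard cut-offs."""
    if tracer_kda <= 10:
        return days_post_sci <= 4
    return days_post_sci < 1  # HRP leaked at 2-12 h, confined by 24 h

# Which tracers would still be leaking at 2 days post-SCI?
windows = {kda: bscb_open(kda, 2) for kda in (0.3, 3, 10, 44, 70)}
```

On this reading, a 4.6 kDa peptide such as PcTx1 falls within the ~4-day window for small molecules.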
By 5 days post-SCI, BSCB function was restored for the smaller tracers, with all confined to the lumen of blood vessels and the lesion core, and no evidence of diffusion out into surrounding tissue (Figure 10d).

(a) Extravasation of HRP (44 kDa) extending radially out from sites of injury at 2 h post-injury. (b) HRP is confined to the lumen of blood vessels at 24 h post-injury with no visible leakage around sites of injury. (c) Extensive leakage of the 10 kDa dextran rhodamine B is observed around sites of injury at 24 h post-injury. (d) At 5 days post-injury, 10 kDa dextran rhodamine B was always confined to the lumen of blood vessels. (e) Diagrammatic summary of the period of blood-spinal cord barrier disruption after SCI vs the molecular size of the permeability tracers. The tracers (dextran fluorescein, DF, 70 kDa; horseradish peroxidase, HRP, 44 kDa; dextran rhodamine B, DRB, 10 kDa; biotin-dextran-amine, BDA, 3 kDa; biotin-ethylene-diamine, BED, 0.3 kDa) were injected systemically 20 minutes prior to collection of tissue. A tick mark indicates the presence of visible tracer extravasation whereas a cross indicates that the tracers were confined to the lumen of blood vessels.


Discussion

In this study we investigated whether ASIC1a inhibition improves functional outcomes after traumatic SCI. We also determined the temporal pattern of blood-spinal cord barrier (BSCB) disruption after SCI to define the length of the “treatment window” during which there is unrestricted access into the spinal cord. Transcriptomic analysis of spinal cord tissue at the injury centre and in adjacent rostral and caudal regions was performed to better understand the cellular pathways involved in PcTx1-mediated neuroprotection.
Taken together, our data indicate that (1) the severity of the initial injury directly influences long-term functional outcomes and (2) blockade of ASIC1a using a systemically administered peptide inhibitor significantly improves functional outcomes over a range of initial injury severities.


Blood-spinal cord barrier dysfunction

PcTx1-mediated inhibition of acid-sensing ion channels on spinal neurons and glial cells is crucially dependent on the peptide being able to access spinal cord tissue. Our data show there is a post-trauma “treatment window” for drug delivery that is inversely related to the size of the tracer. For compounds up to 10 kDa in size, the “treatment window” allows 4 days of unrestricted access to spinal cord tissue after SCI. Thus PcTx1 (4.6 kDa) would have been able to access spinal tissue for the entire 48 h treatment period investigated. In this study, dextran rhodamine B (10 kDa) was used as a surrogate tracer for the smaller PcTx1 because it can be directly visualised in fixed tissue sections; this avoids the effects that labelling PcTx1 would have on its physicochemical properties, pharmacological activity, solubility profile and bio-distribution in vivo. In addition, our data show that BSCB disruption is highly localised to sites of injury, thus delivering PcTx1 directly to where it is needed (Figure 10).

In order to assess the efficacy of treatments targeted at reducing the secondary expansion of tissue damage in the central nervous system, it is essential to use experimental models in which secondary tissue loss is a prominent feature. This requirement generally means using small to medium-sized lesions, as more severe primary lesions that occupy most or all of the cross-sectional area of the cord leave little surrounding undamaged tissue into which secondary expansion can occur.
For example, the absence of any significant neuroprotective effect in a large animal trial of MgPEG (Streijger et al., 2016), previously shown to be neuroprotective in rodents (Kwon et al., 2009), is probably due to the absence of any viable tissue to protect at the injury centre. In the present study, we used controlled impacts to the exposed thoracic spinal cord to produce moderate spinal cord lesions that are initially confined to the central grey matter. Thoracic level injuries enable hind limb function to be used as an assessment of preservation of white matter locomotor tracts. Lower level primary injuries involving lumbar grey matter will cause the loss of the lower motor neurons that connect the spinal cord to individual hind limb muscles. Without lower motor neuron connections, preservation of upper motor neurons in white matter tracts will not be functionally detectable.

To reliably assess the therapeutic effectiveness of treatments to limit secondary tissue loss after SCI, it is essential that comparisons are made between animals that started with similar-sized initial injuries; otherwise it cannot be determined whether smaller lesions in treated animals at later time points are due to the treatment effect or to smaller initial injuries in those animals. Most experimental impactor devices give control over one or more aspects of the impact (force, velocity, depth of tissue penetration and length of compression). It is generally assumed that uniform impacts will produce uniform initial injuries; however, this has not been well investigated. The results obtained in this study using highly reproducible contusion impacts to the spinal cord (0.30 m/s ± 0.04 S.D. velocity, 1.44 mm ± 0.01 S.D. penetration, 0.997 s ± 0.004 S.D. compression time) showed marked differences in the severity of initial functional deficits between animals (SIS range 1.9–2.75, Figure 2a), scores that reflect substantial differences in the animals' abilities to flex hind limb joints (Table 2).
This variance is most likely due to inter-animal differences in the number and location of blood vessels damaged by the impact, the extent of the ensuing haemorrhage and the size of the resulting region of downstream hypoperfusion. Thus caution needs to be exercised when using aggregate analysis to compare group means between treated and untreated SCI animals. Assessing the severity of functional deficits (or MRI assessments of the extent of tissue damage) soon after injuries are made enables treatment effects to be followed over time within individual animals. The SIS grading scale developed in this study used a small number of simple yes/no objective criteria (does the animal show hind limb weight support; how many hind limb joints can it extend and flex, Table 2), yet yielded highly predictive results. Larger initial SIS scores were predictive of greater total amounts of tissue loss (Figure 5) and poorer residual function (Figure 2a) by 6 weeks post-injury. Early assessment of hind limb function has only been employed in a small number of previous SCI studies (Anderson et al., 2016; Herrmann et al., 2008), but given the variability of lesion sizes observed using highly standardised impacts, we would encourage routine inclusion of initial injury severity measurements in future SCI studies.

Hind limb function is not exclusively driven and modulated by inputs from the brain. Proprioceptive and cutaneous sensory inputs have been shown to be able to drive local motor pattern generators in the spinal cord and generate hind limb stepping locomotion in the absence of any spinal cord connections to or from the brain (Grillner & Wallen, 1985; Rossignol, 1996). However, these local inputs are greatly reduced, if not absent, during swimming, and this provides a useful objective test for the presence of descending supraspinal drive of hind limb locomotor function (Saunders et al., 1998; Wheaton et al., 2011).
All of the SCI animals (PcTx1-treated and saline-treated) were able to swim using alternating kicks of their hind limbs when placed in a tank of water, indicating preservation of some supraspinal drive to the lumbar motor centres of the hind limbs. If no functional connections between the brain and lumbar motor centres were present, then differences in hind limb function would not reflect preservation of descending white matter tracts.

Most of the locomotor function tests used in this study indicated greater preservation of function at 6 weeks post-injury in the PcTx1-treated animals. The BBB locomotor analysis showed a highly significant improvement in hind limb function of approximately 2 points on the BBB scale in the PcTx1-treated animals (Figure 2a) that was apparent across the entire range of injury severities investigated. Although that is a modest increase numerically, the effect on locomotor function can be disproportionately large. An increase from 11 to 12 on the BBB scale, for example, corresponds to the difference between the presence and absence of fore limb–hind limb coordination. The PcTx1-treated animals also exhibited fewer foot faults in the horizontal ladder test compared to saline-treated equivalents, indicating improved ability to perform coordinated motor tasks following SCI. Two animals with very high SIS scores (one PcTx1-treated and one saline-treated) had many more foot faults than the other rats (Figure 2b). As an SIS score of 3 indicates complete paralysis (and therefore more extensive initial trauma), it is not surprising that animals with scores close to this would perform poorly on the ladder test.

In contrast to results from the BBB and ladder tests, PcTx1 treatment did not improve motor function on the tapered beam test. A range of factors may have contributed to this.
The animals making the most foot faults had a wide range of SIS scores, suggesting that walking with legs pressed close together under the body when on the thinnest part of the beam might be difficult for rats of any injury severity. It was also apparent that, after a few trials, many rats learnt that they could walk along the beam using the lower counting ledge without falling and did not even attempt to stay on the narrow beam in subsequent trials. This made it difficult to distinguish between voluntary and involuntary use of the error-counting ledge.\n\nA striking feature of the present study was the complete loss of the central grey matter and of large amounts of surrounding white matter at the injury centre (Figure 4). Immunohistological analysis of serial sections spanning the length of the injured cord segment revealed no significant differences in the number of FOX3-positive neurons between the PcTx1-treated and saline-treated animals at either 24 h or 6 weeks post-injury. In addition, there was no significant change in the number of FOX3-positive neurons between 24 h and 6 weeks, which is consistent with our earlier studies showing that trauma-induced secondary loss of neurons in the central grey matter is largely complete by 24 h post-injury (Ek et al., 2010; Ek et al., 2012).\n\nCNPase staining of white matter also showed no significant differences in total white matter area between the PcTx1-treated and saline-treated animals at 6 weeks post-injury, despite the significant improvements in locomotor function that were observed in these animals. We investigated further whether the improvements in hind limb function might be due to greater preservation of individual descending white matter tracts that are known to be involved in motor function. 
A study using injections of biotinylated dextran amine into the motor cortex of mice (Steward et al., 2004) highlighted three main regions of descending corticospinal motor tracts; at the base of the dorsal column (dcCST), in the dorsolateral white matter on the ventral side of the dorsal horns (dlWM) and in the ventromedial white matter (vmWM) either side of the sulcal fissure. In the present study, the dcCST which normally carries the majority of the descending motor fibres (Kathe et al., 2014; Steward et al., 2004) was completely absent at 6 weeks post-injury (Figure 1 & Figure 4) in all of the PcTx1-treated and saline-treated animals. Since all of these animals were able to swim using their hind limbs and recorded high BBB scores this suggests that the dcCST is not essential for driving coordinated hind limb motor function. Detailed analysis of the dlWM region using three separate histological stains (LFB, CNPase and H&E) on adjacent slides from the injury centre showed a greater preservation of white matter in the PcTx1-treated animals compared to saline-treated controls at 6 weeks post-injury (Figure 5). The total area of preserved dlWM showed an inverse correlation with initial injury severity scores (Figure 5) and a positive correlation with BBB functional scores (Figure 6a).\n\nPreservation of tissue in the vmWM region that normally contains reticulospinal tracts was highly variable and there were no apparent correlations between white matter area in this region and any of the behavioural tests or with the severity of the initial injuries as measured by SIS score.\n\nThere was however a noticeable difference in the appearance of the myelin within this region between the treatment groups (Figure 4). In PcTx1-treated animals, there was an improvement in the quality of the myelin and a decrease in the amount of axotomised axons (Figure 4). 
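The correlation analyses described above (preserved dlWM area against initial SIS score, and against 6-week BBB score) can be sketched as follows. This is a minimal illustration only: the per-animal values below are hypothetical, not the study's data, and `numpy` is assumed to be available.

```python
import numpy as np

# Hypothetical per-animal measurements for illustration (not the study's data):
# initial injury severity (SIS), preserved dlWM area and 6-week BBB score.
sis_score = np.array([1.5, 1.75, 2.0, 2.25, 2.5, 2.75])
dlwm_area = np.array([0.42, 0.38, 0.31, 0.27, 0.20, 0.15])  # mm^2, hypothetical
bbb_score = np.array([14.0, 13.0, 12.5, 11.5, 10.0, 9.0])

# Pearson correlation coefficients, matching the relationships reported above:
# dlWM area correlates negatively with SIS and positively with BBB score.
r_sis = np.corrcoef(sis_score, dlwm_area)[0, 1]
r_bbb = np.corrcoef(dlwm_area, bbb_score)[0, 1]
print(f"dlWM vs SIS: r = {r_sis:.2f}")
print(f"dlWM vs BBB: r = {r_bbb:.2f}")
```

With real data, the coefficients would be accompanied by significance tests (e.g. `scipy.stats.pearsonr`) and inspection of the underlying scatter plots, as shown in Figure 5 and Figure 6a.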
Taken together, these results suggest that it is preservation of axons in the dlWM that is mainly responsible for the improvements in hind limb function in the PcTx1-treated animals.\n\nRNAseq differential expression analysis revealed a number of genes whose transcript levels were significantly altered in the PcTx1-treated SCI animals compared to saline-treated SCI animals. Subsequent analysis of transcript variance of the up- and down-regulated genes also revealed marked differences in transcript numbers between animals with different injury severities (Figure 7). Animals with less severe injuries (e.g. SIS 1.9) exhibited much lower gene transcript numbers for the up-regulated genes and much higher transcript numbers for the down-regulated genes when compared to animals with more severe injuries (e.g. SIS 2.5). This may be due to injury-size-dependent differences in the magnitude of biological responses (such as inflammation) that are sensitive to PcTx1. It might also be due to greater vascular disruption in the larger injuries allowing greater extravasation of PcTx1 into the injury site. These pronounced effects of injury severity on gene expression and hind limb functional performance highlight the need to conduct early post-surgery analysis of injury severity in order to compare animals with very similar initial injuries. Aggregating SCI data to conduct parametric statistical analysis using group means is likely to increase the incidence of type II statistical errors.\n\nUnique profiles were also observed between the different cord regions (rostral, injury centre and caudal) in terms of the number and complement of genes that changed their expression levels (Figure 8). This may partly reflect different pathological processes in the different regions: ‘primary’ necrotic damage localised to the injury centre (Ek et al., 2012) and secondary ischaemic damage away from it. 
Another contributing factor could be the predominantly rostral to caudal direction of arterial blood flow at the T10 level of the spinal cord and the rostral to caudal angular orientation of sulcal arteries supplying the central grey matter (Figure 11b). Thus disruption of grey matter vessels at the injury site might result in less perfusion (and consequently greater hypoxia-ischaemia) below the injury site compared to higher rostral segments (Ek et al., 2010). These results highlight that the pathophysiology of SCI is not homogeneous along the injured cord segment and that neuroprotective treatments may exhibit different efficacies in different areas of the spinal cord and for different injury severities.\n\nFigure 11. Vibratome sections (70 μm thick) from the T10 spinal level, stained with a DAB+ kit (DAKO) to highlight blood vessels. (a) is a transverse section. Note the higher blood vessel density in the central grey matter region (yellow star). The deeper layers of the surrounding white matter are supplied by blood vessels originating from the central grey matter (white arrows), whereas the outer layers of white matter are supplied by radial blood vessels penetrating in from the pial surface (arrowheads). (b) is a longitudinal section through the centreline of the cord. Note the rostral to caudal angle of the sulcal arteries which branch off from the anterior spinal artery to supply the central grey matter.\n\nPcTx1 treatment did not significantly alter the expression of genes for myelin or oligodendrocytes, which is in contrast to the histological results showing greater preservation of myelin. Similarly, down-regulation of neuronal markers in PcTx1-treated SIS 2.5 animals in the rostral segment and up-regulation in the injury segment in PcTx1-treated SIS 2.75 animals was also at odds with the histological results. The discrepancy between these results may indicate that the main transcriptomic changes occurred at different time-points from the protein changes. 
It could also mean that individual animals respond to treatment and injury at varying speeds within the first few days.\n\nAn interesting observation was an inverse relationship between expression of neuronal and inflammatory markers. In the rostral region of PcTx1-treated SIS 2.5 animals, transcript numbers for neuronal and astrocyte markers were significantly decreased, whilst transcript numbers for the inflammatory markers CD68, CCL2 and CD14 were significantly increased. Conversely, transcript numbers for neuronal and astrocyte markers were significantly increased at the injury centre in PcTx1-treated SIS 2.75 animals, whilst transcript numbers for the three inflammatory markers were significantly decreased. Attenuation of the enriched immune-system-related pathways (such as CCL2 and CD14) has been proposed as a potential treatment for SCI (Wen et al., 2016). Our results suggest that PcTx1-mediated effects on the immune response may be an integral component of functional recovery following SCI.\n\nPcTx1 also affected expression of genes encoding smooth-muscle-related proteins. This was particularly apparent at the injury centre in animals with SIS 2.5 injuries (Supplementary file 2). Given the prominent role that vascular disruption plays in determining spinal injury severity, alterations in blood flow might contribute to the observed effects of PcTx1; this warrants further investigation.\n\nIt has been proposed that the neuroprotective effects of ASIC1a inhibitors may be due to prevention of activation of the intrinsic apoptotic pathway (Friese et al., 2007; Gao et al., 2005; Xiong et al., 2004; Yermolaieva et al., 2004). It has been postulated that acidosis in areas of acute tissue ischaemia opens ASIC1a channels, allowing the influx of Na+ and Ca2+ into neurons and glia. 
This is thought to cause mitochondrial membrane depolarisation, release of cytochrome c and subsequent activation of intrinsic apoptotic cell death pathways (Friese et al., 2007; Sherwood et al., 2011; Smaili et al., 2003). A review by Elmore (2007) describes the key proteins involved in the intrinsic, extrinsic and final execution apoptotic pathways. We did not observe down-regulation of any of the key genes in these apoptotic pathways in the PcTx1-treated animals at 24 h post-SCI (Dataset 1, Table S3). Thus the neuroprotective effects of PcTx1 may not be mediated by a modulation of apoptotic cell death. There remains, however, the possibility that PcTx1 could inhibit apoptotic pathway activation at earlier stages of tissue damage prior to 24 h post-SCI. Previous research in rats has shown increased expression of many inflammatory genes (e.g. IL1β, IL6, MIP2 and MIP1α) in the initial 6–12 h post-SCI before declining to values approaching control levels between 24 h and 48 h (Carmel et al., 2001). As the transcriptomic analysis in this study was only conducted at a single time point (24 h), there remains the possibility that PcTx1 has even greater effects on earlier phases of the inflammatory response to SCI.\n\nIn the spinal cord, “tissue is function” and it is widely accepted that the extent of functional losses after SCI reflects the extent of tissue loss (injury size). It remains to be determined why PcTx1 appeared to primarily preserve white matter (or myelin levels) in the dlWM regions and not in other myelinated tracts within the cord. One possibility is that the model of injury used in this study (central dorsal contusion) disproportionately affects some regions of the cord compared to others. The central grey matter and vmWM regions, for example, lie immediately under the impact site whilst the dlWM is to the side and less directly impacted. Another possibility is that there could be important structural differences between the different areas of the cord. 
The dlWM may contain a greater proportion of blood vessels originating from the outer pial surface, which would retain greater perfusion, compared to areas with blood vessels originating from the central sulcal (grey matter) supply (Figure 11a). Revealing whether there is preservation in other areas not visible at the light microscope level would require higher-resolution analysis of white matter structure at the EM level. For example, greater numbers of myelin sheath wrappings around axons could explain some of the difference in myelin integrity observed between PcTx1-treated and saline-treated animals in the vmWM (Figure 4).\n\nTranscriptomic analysis highlighted several possible mechanisms for the neuroprotective effects of PcTx1 after trauma. Whilst PcTx1 did not appear to affect apoptotic cell death pathways at 24 h post-injury, it did alter expression levels of genes involved in inflammatory and immune responses. Further transcriptomic analysis at earlier time points may lead to a better understanding of the mechanisms involved and their timing. In conclusion, this study shows that PcTx1 is effective at preserving white matter tracts involved in hind limb function and thereby improving behavioural outcomes following SCI in the rat.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw transcript data for PcTx1- and saline-treated adult rats 24 h after spinal cord injury, 10.5256/f1000research.9094.d128164 (Koehn et al., 2016).", "appendix": "Author contributions\n\n\n\nMDH, KMD, NRS, LDR and GFK conceived and planned the study. LMK and QD performed most of the analytical work; MDH was responsible for all animal surgeries; KMD supervised morphological analyses; S-YE and LDR provided the PcTx1 peptide. Transcriptomic analysis was performed by MDH and LMK. 
All authors were involved in writing the manuscript and have agreed to the final version.\n\n\nCompeting interests\n\n\n\nThe authors state that there are no conflicts of interest in the authorship of this study.\n\n\nGrant information\n\nThis work was supported by Australian National Health & Medical Research Council (NHMRC) Project Grant APP1049287 to M.D.H and N.R.S, Project Grant APP1063798 to G.F.K. and a NHMRC Principal Research Fellowship to G.F.K.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary file S1. PcTx1-induced fold changes (FC) in gene expression after SCI.\n\nLists of genes with FC >2 or FC <-2 changes in transcript counts at 24 h post-SCI in response to PcTx1 treatment in the rostral (Table S1a), injury centre (Table S1b) and caudal spinal cord segments (Table S1c).\n\nClick here to access the data.\n\nSupplementary file S2. Functional protein association networks for gene changes in PcTx1-treated animals after SCI.\n\nFunctional protein association networks of transcripts that were up-regulated or down-regulated in the injury segment of PcTx1-treated animals compared to saline-treated animals. (a) SIS 2.5 and (b) SIS 2.75 animals. Connecting lines indicate associations between the protein products as determined by the ‘String’ network system (Szklarczyk et al., 2015). Transcripts were obtained from RNAseq (Illumina).\n\nClick here to access the data.\n\nSupplementary file S3. Effects of PcTx1 on apoptotic pathway expression after SCI.\n\nFold changes (FC) of transcript counts for canonical genes in the intrinsic, extrinsic and converging execution apoptotic pathways at 24 h after SCI. Positive values indicate that transcript counts were up-regulated in PcTx1-treated animals compared to saline-treated animals and negative values indicate lower transcript counts for the PcTx1-treated animals. 
Lists of genes were obtained from Elmore (2007). Transcript counts were obtained from RNAseq (Illumina).\n\nClick here to access the data.\n\nSupplementary file S4. Effects of PcTx1 on expression of cell-specific markers after SCI.\n\nFold changes (FC) of transcript counts of genes for cellular markers for astrocytes, neurons, oligodendroglia, microglia and endothelial cells at 24 h after SCI. Positive values indicate that transcript counts were up-regulated in PcTx1-treated animals compared to saline-treated animals and negative values indicate lower transcript counts for the PcTx1-treated animals. Lists of genes were obtained from Anderson et al. (2016). Transcript counts were obtained from RNAseq (Illumina).\n\nClick here to access the data.\n\n\nReferences\n\nAbràmoff MD, Magalhães PJ, Ram SJ: Image processing with ImageJ. Biophotonics Intern. 2004; 11: 36–42. Reference Source\n\nAmar AP, Levy ML: Pathogenesis and pharmacological strategies for mitigating secondary damage in acute spinal cord injury. Neurosurgery. 1999; 44(5): 1027–1039; discussion 1039–40. PubMed Abstract | Publisher Full Text\n\nAnderson MA, Burda JE, Ren Y, et al.: Astrocyte scar formation aids central nervous system axon regeneration. Nature. 2016; 532(7598): 195–200. PubMed Abstract | Publisher Full Text\n\nBaron A, Voilley N, Lazdunski M, et al.: Acid sensing ion channels in dorsal spinal cord neurons. J Neurosci. 2008; 28(6): 1498–1508. PubMed Abstract | Publisher Full Text\n\nBasso DM, Beattie MS, Bresnahan JC: A sensitive and reliable locomotor rating scale for open field testing in rats. J Neurotrauma. 1995; 12(1): 1–21. PubMed Abstract | Publisher Full Text\n\nBoldin C, Raith J, Fankhauser F, et al.: Predicting neurologic recovery in cervical spinal cord injury with postoperative MR imaging. Spine (Phila Pa 1976). 2006; 31(5): 554–559. 
PubMed Abstract | Publisher Full Text\n\nByrnes KR, Stoica BA, Fricke S, et al.: Cell cycle activation contributes to post-mitotic cell death and secondary damage after spinal cord injury. Brain. 2007; 130(Pt 11): 2977–2992. PubMed Abstract | Publisher Full Text\n\nCarmel JB, Galante A, Soteropoulos P, et al.: Gene expression profiling of acute spinal cord injury reveals spreading inflammatory signals and neuron loss. Physiol Genomics. 2001; 7(2): 201–213. PubMed Abstract | Publisher Full Text\n\nEk CJ, Habgood MD, Callaway JK, et al.: Spatio-temporal progression of grey and white matter damage following contusion injury in rat spinal cord. PLoS One. 2010; 5(8): e12021. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEk CJ, Habgood MD, Dennis R, et al.: Pathological changes in the white matter after spinal contusion injury in the rat. PLoS One. 2012; 7(8): e43484. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElmore S: Apoptosis: a review of programmed cell death. Toxicol Pathol. 2007; 35(4): 495–516. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEscoubas P, De Weille JR, Lecoq A, et al.: Isolation of a tarantula toxin specific for a class of proton-gated Na+ channels. J Biol Chem. 2000; 275(33): 25116–21. PubMed Abstract | Publisher Full Text\n\nFeldman DH, Horiuchi M, Keachie K, et al.: Characterization of acid sensing ion channel expression in oligodendrocyte lineage cells. Glia. 2008; 56(11): 1238–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFlanders AE, Spettell CM, Friedman DP, et al.: The relationship between the functional abilities of patients with cervical spinal cord injury and the severity of damage revealed by MR imaging. AJNR Am J Neuroradiol. 1999; 20(5): 926–34. PubMed Abstract\n\nFriese MA, Craner MJ, Etzensperger R, et al.: Acid-sensing ion channel-1 contributes to axonal degeneration in autoimmune inflammation of the central nervous system. Nat Med. 2007; 13(12): 1483–9. 
PubMed Abstract | Publisher Full Text\n\nGao J, Duan B, Wang DG, et al.: Coupling between NMDA receptor and acid-sensing ion channel contributes to ischemic neuronal death. Neuron. 2005; 48(4): 635–46. PubMed Abstract | Publisher Full Text\n\nGrillner S, Wallen P: Central pattern generators for locomotion, with special reference to vertebrates. Annu Rev Neurosci. 1985; 8: 233–61. PubMed Abstract | Publisher Full Text\n\nGründer S, Chen X: Structure, function, and pharmacology of acid-sensing ion channels (ASICs): focus on ASIC1a. Int J Physiol Pathophysiol Pharmacol. 2010; 2(2): 73–94. PubMed Abstract | Free Full Text\n\nHaahr M: List randomiser. 1998; [Accessed 22nd March 2015]. Reference Source\n\nHerrmann JE, Imura T, Song B, et al.: STAT3 is a critical regulator of astrogliosis and scar formation after spinal cord injury. J Neurosci. 2008; 28(28): 7231–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHou S, Rabchevsky AG: Autonomic consequences of spinal cord injury. Compr Physiol. 2014; 4(4): 1419–53. PubMed Abstract | Publisher Full Text\n\nHu R, Duan B, Wang D, et al.: Role of acid-sensing ion channel 1a in the secondary damage of traumatic spinal cord injury. Ann Surg. 2011; 254(2): 353–362. PubMed Abstract | Publisher Full Text\n\nKathe C, Hutson TH, Chen Q, et al.: Unilateral pyramidotomy of the corticospinal tract in rats for assessment of neuroplasticity-inducing therapies. J Vis Exp. 2014; 15(94). PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoehn L, Dong Q, Er SY, et al.: Dataset 1 in: Selective inhibition of ASIC1a confers functional and morphological neuroprotection following traumatic spinal cord injury. F1000Research. 2016. Data Source\n\nKoyanagi I, Tator CH, Theriault E: Silicone rubber microangiography of acute spinal cord injury in the rat. Neurosurgery. 1993; 32(2): 260–268; discussion 268. 
PubMed Abstract\n\nKwon BK, Roy J, Lee JH, et al.: Magnesium chloride in a polyethylene glycol formulation as a neuroprotective therapy for acute spinal cord injury: preclinical refinement and optimization. J Neurotrauma. 2009; 26(8): 1379–1393. PubMed Abstract | Publisher Full Text\n\nLinnik MD, Zobrist RH, Hatfield MD: Evidence supporting a role for programmed cell death in focal cerebral ischemia in rats. Stroke. 1993; 24(12): 2002–2008, discussion 2008–9. PubMed Abstract | Publisher Full Text\n\nLosey P, Anthony DC: Impact of vasculature damage on the outcome of spinal cord injury: a novel collagenase-induced model may give new insights into the mechanisms involved. Neural Regen Res. 2014; 9(20): 1783–1786. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLosey P, Young C, Krimholtz E, et al.: The role of hemorrhage following spinal-cord injury. Brain Res. 2014; 1569: 9–18. PubMed Abstract | Publisher Full Text\n\nMcCarthy CA, Rash LD, Chassagnon IR, et al.: PcTx1 affords neuroprotection in a conscious model of stroke in hypertensive rats via selective inhibition of ASIC1a. Neuropharmacology. 2015; 99: 650–657. PubMed Abstract | Publisher Full Text\n\nMehmet H, Yue X, Squier MV, et al.: Increased apoptosis in the cingulate sulcus of newborn piglets following transient hypoxia-ischaemia is related to the degree of high energy phosphate depletion during the insult. Neurosci Lett. 1994; 181(1–2): 121–125. PubMed Abstract | Publisher Full Text\n\nMi H, Muruganujan A, Casagrande JT, et al.: Large-scale gene function analysis with the PANTHER classification system. Nat Protoc. 2013; 8(8): 1551–1566. PubMed Abstract | Publisher Full Text\n\nMishra V, Verma R, Singh N, et al.: The neuroprotective effects of NMDAR antagonist, ifenprodil and ASIC1a inhibitor, flurbiprofen on post-ischemic cerebral injury. Brain Res. 2011; 1389: 152–160. 
PubMed Abstract | Publisher Full Text\n\nParashari UC, Khanduri S, Bhadury S, et al.: Diagnostic and prognostic role of MRI in spinal trauma, its comparison and correlation with clinical profile and neurological outcome, according to ASIA impairment scale. J Craniovertebr Junction Spine. 2011; 2(1): 17–26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPignataro G, Simon RP, Xiong ZG: Prolonged activation of ASIC1a and the time window for neuroprotection in cerebral ischaemia. Brain. 2007; 130(Pt 1): 151–158. PubMed Abstract | Publisher Full Text\n\nRivlin AS, Tator CH: Regional spinal cord blood flow in rats after severe cord trauma. J Neurosurg. 1978; 49(6): 844–853. PubMed Abstract | Publisher Full Text\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRossignol S: Neural control of stereotypic limb movements. In Comprehensive Physiology. American Physiological Society, Oxford; 1996; 173–216. Publisher Full Text\n\nRowland JW, Hawryluk GW, Kwon B, et al.: Current status of acute spinal cord injury pathophysiology and emerging therapies: promise on the horizon. Neurosurg Focus. 2008; 25(5): E2. PubMed Abstract | Publisher Full Text\n\nSaez NJ, Deplazes E, Cristofori-Armstrong B, et al.: Molecular dynamics and functional studies define a hot spot of crystal contacts essential for PcTx1 inhibition of acid-sensing ion channel 1a. Br J Pharmacol. 2015; 172(20): 4985–4995. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaez NJ, Mobli M, Bieri M, et al.: A dynamic pharmacophore drives the interaction between psalmotoxin-1 and the putative drug target acid-sensing ion channel 1a. Mol Pharmacol. 2011; 80(5): 796–808. 
PubMed Abstract | Publisher Full Text\n\nSaunders NR, Kitchener P, Knott GW, et al.: Development of walking, swimming and neuronal connections after complete spinal cord transection in the neonatal opossum, Monodelphis domestica. J Neurosci. 1998; 18(1): 339–355. PubMed Abstract\n\nSaunders NR, Noor NM, Dziegielewska KM, et al.: Age-dependent transcriptome and proteome following transection of neonatal spinal cord of Monodelphis domestica (South American grey short-tailed opossum). PLoS One. 2014; 9(6): e99080. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwab JM, Brechtel K, Mueller CA, et al.: Experimental strategies to promote spinal cord regeneration--an integrative perspective. Prog Neurobiol. 2006; 78(2): 91–116. PubMed Abstract | Publisher Full Text\n\nSchwartz G, Fehlings MG: Secondary injury mechanisms of spinal cord trauma: a novel therapeutic approach for the management of secondary pathophysiology with the sodium channel blocker riluzole. Prog Brain Res. 2002; 137: 177–190. PubMed Abstract | Publisher Full Text\n\nScivoletto G, Tamburella F, Laurenza L, et al.: Who is going to walk? A review of the factors influencing walking recovery after spinal cord injury. Front Hum Neurosci. 2014; 8: 141. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSherwood TW, Lee KG, Gormley MG, et al.: Heteromeric acid-sensing ion channels (ASICs) composed of ASIC2b and ASIC1a display novel channel properties and contribute to acidosis-induced neuronal death. J Neurosci. 2011; 31(26): 9723. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmaili SS, Hsu YT, Carvalho AC, et al.: Mitochondria, calcium and pro-apoptotic proteins as mediators in cell death signaling. Braz J Med Biol Res. 2003; 36(2): 183–190. PubMed Abstract | Publisher Full Text\n\nSoubeyrand M, Laemmel E, Court C, et al.: Rat model of spinal cord injury preserving dura mater integrity and allowing measurements of cerebrospinal fluid pressure and spinal cord blood flow. 
Eur Spine J. 2013; 22(8): 1810–1819. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSteward O, Zheng B, Ho C, et al.: The dorsolateral corticospinal tract in mice: an alternative route for corticospinal input to caudal segments following dorsal column lesions. J Comp Neurol. 2004; 472(4): 463–477. PubMed Abstract | Publisher Full Text\n\nStreijger F, Lee JH, Manouchehri N, et al.: The evaluation of magnesium chloride within a polyethylene glycol formulation in a porcine model of acute spinal cord injury. J Neurotrauma. 2016. PubMed Abstract | Publisher Full Text\n\nSzklarczyk D, Franceschini A, Wyder S, et al.: STRING v10: protein-protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 2015; 43(Database issue): D447–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTator CH, Fehlings MG: Review of the secondary injury theory of acute spinal cord trauma with emphasis on vascular mechanisms. J Neurosurg. 1991; 75(1): 15–26. PubMed Abstract | Publisher Full Text\n\nTator CH, Koyanagi I: Vascular mechanisms in the pathophysiology of human spinal cord injury. J Neurosurg. 1997; 86(3): 483–492. PubMed Abstract | Publisher Full Text\n\nTei R, Kaido T, Nakase H, et al.: Secondary spinal cord hypoperfusion of circumscribed areas after injury in rats. Neurol Res. 2005; 27(4): 403–408. PubMed Abstract | Publisher Full Text\n\nVoilley N, de Weille J, Mamet J, et al.: Nonsteroid anti-inflammatory drugs inhibit both the activity and the inflammation-induced expression of acid-sensing ion channels in nociceptors. J Neurosci. 2001; 21(20): 8026–8033. PubMed Abstract\n\nWatson C, Harrison M: The location of the major ascending and descending spinal cord tracts in all spinal cord segments in the mouse: actual and extrapolated. Anat Rec (Hoboken). 2012; 295(10): 1692–1697. 
PubMed Abstract | Publisher Full Text\n\nWen T, Hou J, Wang F, et al.: Comparative analysis of molecular mechanism of spinal cord injury with time based on bioinformatics data. Spinal Cord. 2016; 54(6): 431–8. PubMed Abstract | Publisher Full Text\n\nWheaton BJ, Callaway JK, Ek CJ, et al.: Spontaneous development of full weight-supported stepping after complete spinal cord transection in the neonatal opossum, Monodelphis domestica. PLoS One. 2011; 6(11): e26826. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWolman L: The disturbance of circulation in traumatic paraplegia in acute and late stages: a pathological study. Paraplegia. 1965; 2: 213–226. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Fact sheet 384 – spinal cord injury. 2013; [Accessed: 30th May 2016]. Reference Source\n\nWu LJ, Duan B, Mei YD, et al.: Characterization of acid-sensing ion channels in dorsal horn neurons of rat spinal cord. J Biol Chem. 2004; 279(42): 43716–43724. PubMed Abstract | Publisher Full Text\n\nWu WN, Wu PF, Chen XL, et al.: Sinomenine protects against ischaemic brain injury: involvement of co-inhibition of acid-sensing ion channel 1a and L-type calcium channels. Br J Pharmacol. 2011; 164(5): 1445–1459. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXiong ZG, Chu XP, Simon RP: Ca2+ -permeable acid-sensing ion channels and ischemic brain injury. J Membr Biol. 2006; 209(1): 59–68. PubMed Abstract | Publisher Full Text\n\nXiong ZG, Pignataro G, Li M, et al.: Acid-sensing ion channels (ASICs) as pharmacological targets for neurodegenerative diseases. Curr Opin Pharmacol. 2008; 8(1): 25–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXiong ZG, Zhu XM, Chu XP, et al.: Neuroprotection in ischemia: blocking calcium-permeable acid-sensing ion channels. Cell. 2004; 118(6): 687–698. 
PubMed Abstract | Publisher Full Text\n\nYermolaieva O, Leonard AS, Schnizler MK, et al.: Extracellular acidosis increases neuronal cell calcium by activating acid-sensing ion channel 1a. Proc Natl Acad Sci U S A. 2004; 101(17): 6752–6757. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYin T, Lindley TE, Albert GW, et al.: Loss of Acid sensing ion channel-1a and bicarbonate administration attenuate the severity of traumatic brain injury. PLoS One. 2013; 8(8): e72379. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYong C, Arnold PM, Zoubine MN, et al.: Apoptosis in cellular compartments of rat spinal cord after severe contusion injury. J Neurotrauma. 1998; 15(7): 459–472. PubMed Abstract | Publisher Full Text\n\nZhang N, Yin Y, Xu SJ, et al.: Inflammation & apoptosis in spinal cord injury. Indian J Med Res. 2012; 135: 287–96. PubMed Abstract | Free Full Text\n\nZheng Z, Schwab S, Grau A, et al.: Neuroprotection by early and delayed treatment of acute stroke with high dose aspirin. Brain Res. 2007; 1186: 275–280. PubMed Abstract | Publisher Full Text\n\nZhou X, Lindsay H, Robinson MD: Robustly detecting differential expression in RNA sequencing data using observation weights. Nucleic Acids Res. 2014; 42(11): e91. PubMed Abstract | Publisher Full Text | Free Full Text" }
[ { "id": "15759", "date": "31 Aug 2016", "name": "David Magnuson", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis is a very well-written paper that clearly describes the fairly complex experimental design and rationale and nicely presents the primary data. The main question addressed is if immediate systemic application of recombinant PcTx1, a spider venom toxin, can reduce tissue damage and improve function in male SD rats given mild contusive spinal cord injuries. All of the primary functional, histological and transcriptomic data were collected and analyzed in a blinded fashion. However, the statistical analysis appears to be unusual for this kind of study and the overall outcome is rather unclear.\nThe investigators chose to use a novel SIS (simplified injury severity) score at the time of injury and again at 24 hours post-injury. The SIS is based on a rather coarse and certainly non-linear 6-point scale (0-3 in 0.5 unit increments). Nonetheless, the 24h SIS score plays a very important role in the paper. Functional assays at 24h, whether they be visual scales like the BBB or video based kinematics, are poor predictors of terminal hindlimb function when moderate to severe injuries are used because most of the animals will exhibit flaccid paralysis at that early time point. 
In this paper they show that the SIS at 24 hours is predictive of terminal hindlimb function (BBB scores) following the mild injuries employed, even when only part of the scale's range is used (1.5 – 2.75).\nThe other functional assessments included the BBB Open Field Locomotor Score (without subscore), horizontal ladder and tapered beam tasks and swimming, the last three utilizing video to assist with counting footfalls/slips and to assess hindlimb alternation. Interestingly, the investigators found significant functional differences only using ANCOVA analysis of BBB scores at 6 weeks vs initial injury severity score (SIS). No differences were seen in ladder, tapered beam or swimming, although the pattern of data points is somewhat suggestive that PcTx1 treatment reduced errors on the horizontal ladder.\nThis paper also presents a substantial amount of data on gene expression at 24h post-injury using the Illumina platform and marker extravasation at various time points. These data provide a lot of food for thought but unfortunately do not really allow any strong conclusions to be drawn except that the blood spinal cord barrier remains leaky to small markers for up to 4 days after a mild contusive spinal cord injury, thus providing a window of pharmacological opportunity wherein small molecules, like PcTx1, may enter the spinal cord from the bloodstream to confer protection.\n\nMajor Concerns:\nThe primary finding of the paper, that PcTx1 improves locomotor function of adult SD rats after thoracic SCI, is only partly supported by the data. Traditionally, the BBB and other functional (and hopefully objective) assessments would be done weekly and changes over time would be observed. Group differences in BBB or BBB subscore along with objective kinematic or gait analysis would be used to conclude that a tissue-sparing strategy resulted in functional improvements or it did not. 
In this case the authors chose to assess the locomotor function at 6 weeks only and to reduce the effect of inter-animal variability using the 24h SIS scores and ANCOVA analysis.\n\nUsing ANCOVA is unusual in this kind of experiment, but can be done only if the correlation coefficients of the two groups are equal, and this information has not been provided. For each of the regression lines a 95% confidence interval should be shown. For the BBB vs SIS data, it should also be explained what test was done to determine that the PcTx1-treated group had a “significantly higher elevation compared to the saline-treated group”. It is assumed that this is part of the ANCOVA analysis, but more details should be given. Similarly, for the dlWB vs SIS data, it should be explained what test was done to determine significance and 95% confidence intervals should be shown. The outcomes of these tests should also be provided for the other data sets shown in Figures 2 and 6, even if there are no significant differences.\n\nMinor Concerns:\nIn the subsection “Functional Testing” the authors suggest that hindlimb alternation during swimming is indicative of supra-spinal connections without giving a reference. Subsequently, in the results section they indicate that there were no differences in swimming between the groups. This is meaningful only if the analysis was extended beyond the presence or absence of alternation and I believe more information is necessary or perhaps the reference to swimming should be removed as it adds nothing to the paper. We have used swimming extensively in my laboratory and have seen animals with a wide range of spared white matter accomplish hindlimb alternation.\n\nIf notes were taken during the BBB scoring it should be possible to determine the BBB subscores that have proven to be very helpful in other studies using mild thoracic injuries.\n\nIn the section “Blood-spinal cord barrier integrity” the authors reference using an overdose of inhaled anesthetic. 
It is recommended in the AVMA Guidelines on Euthanasia that a secondary method of ensuring death be employed and this should be described here.\n\nIn this same section it is obvious that an additional cohort (or cohorts) of animals were employed to develop the time course/size of tracer relationship and these animals should be included at the beginning of the methods section to indicate the real total of animals used in the studies.\n\nFor histological analysis the investigators chose to assess dorsolateral funiculus, ventral white matter and corticospinal tracts using a myelin stain. While this is acceptable, the rationale for these choices includes “white matter tracts involved in hind limb motor function”. While this statement is true, it is not complete. There are many examples in the literature demonstrating that the ventrolateral and lateral white matter are at the top of the hierarchy for hindlimb function. Thus, including the lateral and ventrolateral white matter in the analysis would be ideal, but at the very least the statement should be amended to indicate that the areas chosen are among those involved in hindlimb function after SCI. With this caveat, the investigators have done an excellent job of analyzing spared white matter and pointing out the poor appearance of ventral tracts.\n\nComments in the discussion point to the non-linearity of the BBB scale and suggest that a mean difference of 1 or 2 points may be small (on the scale) but functionally significant. This may be true for certain ranges of the BBB (8-10 or 11-13), but is arguably not true for the range used here (16-18). A more sensitive gait assessment capturing the order and timing of paw placement would have been helpful in this situation.\n\nFigures:\nThe images in Figure 1 have quite different backgrounds which may add unwarranted contrast to the injured image (right). This should be corrected.\n\nThe indicator of significant differences in Figures 2 and 5 should be made more obvious. 
Increasing the size and blackness of the font and line/arrows should suffice.\n\nThe size of the font used in the legends is inconsistent and in some cases is too small (Figure 6).\n\nThe image in Figure 11b is too small and the text is way too small.", "responses": [ { "c_id": "2304", "date": "07 Dec 2016", "name": "Mark Habgood", "role": "Author Response", "response": "The investigators chose to use a novel SIS (simplified injury severity) score: The injury severity scale was used primarily to provide an index of the degree of initial dysfunction resulting from the contusion injury. In many experimental studies it is generally assumed that uniform mechanical impacts applied to the same spinal cord locations will produce very similar injuries with similar degrees of dysfunction. However, we have found that this is not the case and that the degree of initial dysfunction is much more variable and appears to depend on the extent and location of the initial damage and vascular disruption/haemorrhage. The scale was intentionally developed to use hind limb motion features that are readily apparent to an observer and do not require more subjective interpretation (e.g. whether or not the animal can weight-support its hindquarters). The scale is undoubtedly non-linear, a feature in common with most locomotor functional scales used in the field (e.g. BBB scale), but it is proportional. Less severe injuries give lower values than more severe injuries. The scale does play an important role in the paper and this is a major point that we are trying to emphasise. The inherent variability of the initial injuries needs to be taken into account individually when assessing improvements in functional performance. 
\"Functional assays at 24h, whether they be visual scales like the BBB or video based kinematics, are poor predictors of terminal hindlimb function when moderate to severe injuries are used because most of the animals will exhibit flaccid paralysis at that early time point.\" Temporary flaccid paralysis is much less common after light to moderate injuries where less tissue damage is involved. In our study, none of the animals exhibited flaccid paralysis; all had some residual ability to move some or all of their joints. The scores assessed during the first 24 hours post-injury were found to correlate with long term functional outcomes. Less severe injuries were intentionally used in this study because the focus was treatment-induced reductions of secondary tissue damage expansion using hind limb function as one assay of effectiveness. With thoracic spinal cord injuries, it is the white matter tracts that are primarily involved in supraspinal control of hind limb function. With more severe injuries, the primary damage zone often includes large areas of the white matter tracts which leaves little room for radial secondary expansion. In this study, the initial injuries (confirmed by histology at 15 minutes post-SCI) were confined to the central grey matter with little involvement of surrounding white matter tracts (Ek et al., 2010). Major Concerns: 1. It is well known in the field that recovery of function after spinal contusion injury gradually improves over the course of several weeks until it reaches a plateau. We chose 6 weeks post-injury as being a time point well after the plateau in functional recovery. Our objective was to determine what was the ultimate level of functional recovery that could be achieved following 48 hours of treatment commencing immediately after injury (i.e. were there long-term functional improvements). The rate of functional recovery was not an objective of the study. 
We agree that kinematic analysis could provide valuable detailed analysis; however, this is often not available in many parts of the world. Therefore we wanted to demonstrate that, even using simple and readily available techniques, targeting ASIC1a produces significant differences, provided the experimental design takes into account the variance of the initial injuries. Use of first 24 hours SIS scores: In this study we have shown that almost identical mechanical contusion impacts (see Results for impact velocity, tissue penetration and compression time) result in a variable range of initial dysfunction and thus the assumption that all animals are starting from the same functional level is not correct. Furthermore, we have also shown that the long-term functional outcome of individual animals correlates linearly with the degree of initial dysfunction (i.e. within each group, the differences in initial injuries are contributing to the observed differences in ultimate functional outcomes). Our opinion is that ignoring the presence of a known source of variance in order to use parametric testing will increase the likelihood of making a Type 2 statistical error (i.e. by assuming that the covariate has no influence on the measurements being made). 2. Tests for equal slope (correlation coefficients) are part of the ANCOVA. The ANCOVA software (PRISM 6.0, Graphpad) uses the general linear model approach which fits linear regression lines to the datasets and then tests whether the slopes (correlation coefficients) are similar. If the slopes are confirmed to be the same, differences in the intercepts (elevations) can then be tested. If the slopes are not the same, differences in the intercepts cannot be calculated. As part of the statistical analysis, the two data groups being compared (i.e. control and PcTx1-treated) were analysed to determine if they were best described by (a) a single data group (i.e. 
there were no differences in slopes or intercepts), (b) two separate groups with different slopes or (c) two separate groups with the same slope, but different intercepts. In the text we have used the phrase “the data were best described by two separate lines with the same slope” to indicate that the slopes (correlation coefficients) were found to be the same. We have also added additional statistical information to the figure legends where appropriate (F statistic with degrees of freedom). The suggestion of adding 95% confidence intervals is excellent and these have been added to figures where appropriate.  “It is assumed the test to determine significantly higher elevations is part of the ANCOVA.” Yes, this is a fundamental part of the ANCOVA. The slopes (correlations) are tested first and if confirmed to be the same, differences in the intercepts (elevations) are then tested. The mathematics involved in the software is an F test. We have revised the figure legends to include the pooled slopes where significant differences are shown, and the F statistics for the comparisons. We have also added the name of the software package used for the ANCOVA (PRISM 6, Graphpad) and further explanation of the ANCOVA method to the Methods section. Figure 6a shows raw data for BBB scores vs dorso-lateral white matter area. The data for the PcTx1-treated animals fall on the same general correlation as for the untreated animals, albeit the treated group had larger areas of preserved tissue compared to the untreated group. Initial analysis revealed the data were best described as a single dataset with the same slope and intercept. The figure emphasises the point that BBB performance appears to relate primarily to the amount of preserved white matter in both groups. Similarly, Figure 6b shows raw data for horizontal ladder foot faults vs dorso-lateral white matter area. 
This figure emphasises that the number of foot faults made on the horizontal ladder depends on the amount of preserved white matter. Minor Concerns: 1. We apologise for omitting the references to the swimming test; these have now been added to the Methods section (Saunders et al., 1998; Smith et al., 2006; Magnuson et al., 2009). The swimming test was included primarily to determine whether there was evidence of voluntary use of the hind limbs (when the limbs are not weight supporting) that would indicate some preservation of supra-spinal control. We have previously shown that opossums with complete spinal cord transections (no remaining supraspinal connections) do not move their hind limbs when swimming, but they do use them when climbing out of the tank and walking over a hard surface (Wheaton et al., 2011). All animals in this study used their hind limbs in a brisk alternating pattern similar to uninjured controls. BBB subscores were not analysed. 2. The secondary method approved by our Animal Ethics and Welfare Committee was immediate opening of the chest cavity and transcardial perfusion fixation with paraformaldehyde. This information has been added to the Histological analysis section. 3. The 38 animals that were used for the tracer experiments have been added to the Methods section. 4. Amended as suggested. 5. We accept this recommendation; such an additional, less subjective assessment could help confirm treatment-induced improvements in BBB function. Figures: Suggested changes made." 
} ] }, { "id": "15249", "date": "30 Sep 2016", "name": "Helen Marie Bramlett", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis manuscript reports on the use of a novel injury severity score for spinal cord injury to determine treatment efficacy on histological, functional and transcriptomic outcome measures. The treatment chosen was a selective inhibitor of ASIC1a, PcTx-1. The authors compared extensively through correlation how PcTx-1 improves function as well as white matter preservation based on the individual animal’s severity score. Overall, this treatment appears to improve several outcome measures but did not significantly impact the transcriptome analysis targeting the apoptosis pathway. However, the use of the injury severity score has merit when determining drug efficacy because individual animals are analyzed based on this score. When using this type of analysis, the heterogeneity of SCI can be taken into consideration.\nSpecific Comments:\nOverall, this is a well-written manuscript. However, several minor points are at issue. The authors state that the use of the osmotic pump for PcTx1 release resulted in stable plasma concentrations of the drug over a 48 hr period, how was this determined? Were levels taken or was this an assumption? 
In using the beam test, animals are usually pretrained on this task but there is no mention of this in the methods section.", "responses": [ { "c_id": "2303", "date": "07 Dec 2016", "name": "Mark Habgood", "role": "Author Response", "response": "\"The authors state that the use of the osmotic pump for PcTx1 release resulted in stable plasma concentrations of the drug over a 48 hr period, how was this determined? Were levels taken or was this an assumption?\" Yes, this was an assumption. The role of the pump was to slowly and steadily release PcTx1 (1.08 μg/h) subcutaneously to compensate for estimated renal and tissue losses over a 48 h period. PcTx1 is a cysteine knot peptide, which makes it very stable and resistant to plasma peptidases, with a half-life >12 hours (unpublished). Thus, in vivo, the main plasma losses are due to renal excretion. \"Pre-training\": The animals were pre-trained on the beam apparatus prior to commencement of the surgical and treatment protocols. On the day of functional testing, the animals were allowed 3 practice runs prior to recording. This information has been added to the Methods." } ] } ]
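The two-step ANCOVA procedure described in the author responses above (first test whether the two groups share a common slope, then compare the elevations of parallel regression lines) can be sketched with ordinary least squares and extra-sum-of-squares F-tests. This is a minimal illustration on simulated data, not the study's analysis or its PRISM implementation; the group sizes, effect sizes and noise level are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20                                    # animals per group (invented)
sis = rng.uniform(1.5, 2.75, 2 * n)       # covariate: 24 h SIS score
treated = np.repeat([0.0, 1.0], n)        # 0 = saline, 1 = PcTx1
# Simulate a common slope with a higher elevation in the treated group
bbb = 25 - 4 * sis + 1.5 * treated + rng.normal(0, 0.5, 2 * n)

def rss(X, y):
    """Residual sum of squares from an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def f_test(rss_reduced, rss_full, df_extra, df_resid):
    """Extra-sum-of-squares F-test for nested linear models."""
    F = ((rss_reduced - rss_full) / df_extra) / (rss_full / df_resid)
    return F, float(stats.f.sf(F, df_extra, df_resid))

ones = np.ones_like(sis)
X_full = np.column_stack([ones, sis, treated, sis * treated])  # separate slopes
X_add = np.column_stack([ones, sis, treated])                  # common slope
X_one = np.column_stack([ones, sis])                           # single line

N = len(bbb)
# Step 1: are the slopes the same? (is the interaction term needed?)
_, p_slopes = f_test(rss(X_add, bbb), rss(X_full, bbb), 1, N - 4)
# Step 2: given a common slope, do the elevations (intercepts) differ?
_, p_elev = f_test(rss(X_one, bbb), rss(X_add, bbb), 1, N - 3)
print(f"common-slope test p = {p_slopes:.3f}; elevation test p = {p_elev:.2g}")
```

As the authors note, the elevation comparison in step 2 is only meaningful if step 1 fails to reject the common-slope hypothesis.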
1
https://f1000research.com/articles/5-1822
https://f1000research.com/articles/5-2083/v1
26 Aug 16
{ "type": "Research Article", "title": "Estimating limits for natural human embryo mortality", "authors": [ "Gavin E. Jarvis" ], "abstract": "Natural human embryonic mortality is generally considered to be high. Values of 70% and higher are widely cited. However, it is difficult to determine accurately owing to an absence of direct data quantifying embryo loss between fertilisation and implantation. The best available data for quantifying pregnancy loss come from three published prospective studies (Wilcox, Zinaman and Wang) with daily cycle by cycle monitoring of human chorionic gonadotrophin (hCG) in women attempting to conceive. Declining conception rates cycle by cycle in these studies indicate that a proportion of the study participants were sub-fertile. Hence, estimates of fecundability and pre-implantation embryo mortality obtained from the whole study cohort will inevitably be biased. This new re-analysis of aggregate data from these studies confirms the impression that discrete fertile and sub-fertile sub-cohorts were present. The proportion of sub-fertile women in the three studies was estimated as 28.1% (Wilcox), 22.8% (Zinaman) and 6.0% (Wang). The probability of conceiving an hCG pregnancy (indicating embryo implantation) was, respectively, 43.2%, 38.1% and 46.2% among normally fertile women, and 7.6%, 2.5% and 4.7% among sub-fertile women. Pre-implantation loss is impossible to calculate directly from available data although plausible limits can be estimated. 
Based on this new analysis and a model for evaluating reproductive success and failure it is proposed that a plausible range for normal human embryo and fetal mortality from fertilisation to birth is 40-60%.", "keywords": [ "early pregnancy loss", "embryo mortality", "human chorionic gonadotrophin", "fecundability" ], "content": "Introduction\n\nEstimates of natural human embryo mortality have been derived using speculative calculations1, mathematical modelling2, pregnancy surveys3, and a unique collection of surgical material4,5. Three well-designed studies (henceforth referred to as the Wilcox6, Zinaman7 and Wang8 studies) have shown that approximately two-thirds of menstrual cycles in which elevated human chorionic gonadotrophin (hCG) is detected approximately 1 week after ovulation proceed to a live birth. hCG is produced by the trophoblast cells of the embryo9 and its earliest detection indicates that implantation has commenced10–12. Hence, these studies provide no direct measure of embryo loss before implantation. The only measure of pre-implantation loss is the “scanty data of Hertig”13 which have generated estimates4,5 that are “difficult to defend with any precision”2. Estimates of embryo mortality from fertilisation onwards are therefore subject to considerable uncertainty owing to the absence of suitable data for the 5–7 day period between fertilisation and implantation.\n\nFecundability is the probability of reproductive success per cycle. Compared to other animals, fecundability in humans is low and has been estimated at <35%14,15. Red deer hinds, by contrast, achieve pregnancy rates of >85% per natural mating16. Clearly, as fecundability increases, the range of plausible values for embryo mortality narrows. Crude estimates of live birth fecundability can be calculated from prospective study data: 19.2% (136 births from 707 cycles6), 18.2% (79 births from 432 cycles7) and 23.9–25.9% (373 births and 31 ongoing pregnancies from 1,561 cycles8). 
These represent lower limits for fecundability, since optimal conditions for reproductive success were not achieved in every cycle17. However, some published estimates of embryo mortality, e.g., 76%2,18 and 78%1 can only be reconciled with these data if it is assumed that almost every non-birth cycle in these studies resulted in successful fertilisation and subsequent embryonic or fetal death, an extreme and improbable condition. Higher estimates of embryo mortality, including >85%19 and 90%20, are even less plausible. Furthermore, it is self-evident that not all observed reproductive failure is necessarily due to embryo or fetal mortality: other biological causes include mistimed coitus and failure of fertilisation despite in vivo co-localisation of ovum and sperm. Estimates of embryo mortality based on fecundability must take this into account.\n\nThe objective of this study is to obtain plausible estimates of fecundability and early human embryo mortality from available published data6–8. To do this, a simple quantitative framework is proposed to define a successful reproductive cycle. Hence, for a menstrual cycle to conclude with a live infant several distinct biological stages must be completed, each with its own probability (π) of success. These stages (and conditional probabilities) are defined as follows: (1) sexual activity within a cycle resulting in sperm-ovum-co-localisation (πSOC); (2) subsequent successful fertilisation (πFERT); (3) initiation of implantation approximately 1 week after fertilisation as indicated by increased levels of hCG (πHCG); (4) progression to a clinical pregnancy (πCLIN): the earliest typical clinical indication is an absent menstrual period approximately 14 days after fertilisation, although definitions of clinical pregnancy vary between studies; (5) survival of a clinical pregnancy to a live birth (πLB). It is therefore possible to calculate four different fecundabilities (broadly following Leridon21):\n\n1. 
Total (All fertilisations): FECTOT = πSOC × πFERT\n\n2. Detectable (Implantation): FECHCG = πSOC × πFERT × πHCG\n\n3. Apparent (Clinical): FECCLIN = πSOC × πFERT × πHCG × πCLIN\n\n4. Effective (Live Birth): FECLB = πSOC × πFERT × πHCG × πCLIN × πLB\n\nQuantitative differences between these fecundabilities reflect intrauterine mortality at different developmental stages. Hence, the probability that a fertilised egg will perish prior to implantation is [1 − πHCG], and prior to clinical recognition is [1 – (πHCG × πCLIN)]. In theory, embryonic mortality may be estimated at all stages although in practice this depends on available data.\n\nIn 1969, Barrett & Marshall analysed the relationship between coital patterns and conception and concluded that fecundability increased with coital frequency up to 68% for daily intercourse22. Schwartz’s re-analysis of the same data revealed a similar pattern, although at higher coital frequencies estimated fecundability was lower, at 49% for daily intercourse23. These analyses indicate that failure to conceive at coital frequencies of less than once per day is, in part, due to mistimed coitus and not solely failure of fertilisation and/or embryo mortality. The difference in their estimates of fecundability arises because of key differences between the two analyses. Firstly, Schwartz analysed 2,192 cycles, 294 more than Barrett & Marshall. Secondly, the measures of conception differed: Barrett & Marshall used “absence of menstruation, after ovulation”, approximately 2 weeks after ovulation, whereas for Schwartz conception was “defined as a pregnancy lasting at least 2 months from the last menstrual period”, i.e., approximately 6 weeks from the day of ovulation. It is not surprising therefore that Schwartz’s values were lower since they will not have captured pregnancies that failed between 2 and 6 weeks post-fertilisation. 
Thirdly, and importantly, Schwartz introduced a new term, ‘cycle viability’, into the analytical model.\n\nSchwartz modelled the probability of conceiving during a cycle (i.e., fecundability, FEC) as the product of three conditional probabilities as follows: FEC = PoPfPv. Po, Pf and Pv were the probabilities that (i) a fertilisable egg is produced (Po), (ii) it is fertilised once produced (Pf), and (iii) it survives to be detected as a conception (Pv). Pf was modelled as a function of coital frequency. Cycle viability (k) was defined as k = PoPv, and allows for the possibility that optimally-timed coitus would not result in a detected conception. It implies that there is a proportion of cycles that are infertile irrespective of coital activity. Although Schwartz did not explicitly report statistical data demonstrating that the extra parameter (k = 52%) improved the quality of the model, a comparison of the Barrett & Marshall and Schwartz models using the Wilcox study data6 provided compelling statistical evidence to this effect, and concluded that only 37% of cycles were ‘viable’24.\n\nSince cycle viability (k) includes terms defining reproductive success both before (Po = successful ovulation) and after (Pv = embryo survival) fertilisation, it is not possible to use this term to make direct inferences about early embryo mortality. Nevertheless, Schwartz assumed that Po = 100%, thereby interpreting all cycle non-viability as a consequence of embryo loss at a rate of 48% during the first 6 weeks after fertilisation. Similar logic applied to the Wilcox study24 would conclude an equivalent estimate of 63% embryo mortality. Schwartz also concluded that Pf = 94% for daily intercourse (0.49/0.52). Hence, Schwartz attributed almost all the observed reproductive inefficiency to embryo mortality and other processes of the reproductive process were, by implication, considered to work almost perfectly. 
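The arithmetic of the stage-probability framework, and of Schwartz's decomposition, can be made concrete. The stage probabilities below are illustrative placeholders, not estimates from the paper; only k = 52% and the 49% daily-intercourse fecundability are taken from the Schwartz figures quoted above.

```python
# Illustrative stage probabilities (invented values, not the paper's estimates)
p_soc, p_fert, p_hcg, p_clin, p_lb = 0.80, 0.70, 0.80, 0.85, 0.85

fec_tot = p_soc * p_fert            # 1. total: all fertilisations
fec_hcg = fec_tot * p_hcg           # 2. detectable: implantation (hCG)
fec_clin = fec_hcg * p_clin         # 3. apparent: clinical pregnancy
fec_lb = fec_clin * p_lb            # 4. effective: live birth

loss_preimplant = 1 - p_hcg         # fertilised eggs lost before implantation
loss_preclin = 1 - p_hcg * p_clin   # fertilised eggs lost before clinical recognition

# Schwartz's decomposition FEC = Po * Pf * Pv with cycle viability k = Po * Pv,
# using the values quoted in the text for daily intercourse
k, fec_daily = 0.52, 0.49
pf_daily = fec_daily / k            # implied fertilisation probability given optimal coitus
embryo_loss_po1 = 1 - k             # 48% "embryo loss" only if ovulation is assumed perfect
```

The last two lines make explicit how the 48% embryo-loss figure rests entirely on the assumption Po = 100%: any ovulatory failure folded into k would reduce the loss attributed to embryos.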
By contrast, referring to fertilisation, Hertig noted that “it seems unlikely that such a complicated process should work perfectly every time”5. It has also been correctly pointed out that preimplantation loss is statistically indistinguishable from other causes of cycle non-viability including male factors15. It seems that this interpretation of reproductive inefficiency has contributed to a widespread impression that early human embryo mortality is very high.\n\nWhat are the potential explanations for cycle non-viability? Incorporation of a between-couple random effect into the modelling of these data has confirmed that cycle viability is heterogeneous between couples15. A subject-specific random effects modelling approach also resulted in a more consistent cycle by cycle estimate of cycle viability25. These analyses formally demonstrate that within the cohorts of women used in this study, there were individual differences in fecundability. Furthermore, in the Wilcox study, 14 out of 221 women were unable to conceive within 24 months6: this observation alone suggests that a proportion of the study participants were sub-fertile.\n\nEach of the three hCG studies sought to recruit normally fertile, non-contracepting women who intended to conceive. Subjects either had “no known fertility problems”6, or were excluded if they had any “known risk factors for infertility”7 or “had tried unsuccessfully to get pregnant for ≥1 year at any time in the past”8. However, such criteria cannot guarantee complete exclusion of sub-fertile or infertile couples, and in each study pregnancy rates declined in successive cycles as the presumed proportion of sub-fertile women remaining increased. Hence, calculations based on overall aggregate data underestimate fecundability in normally fertile women. Even estimates based on first cycle data are likely to be biased since a proportion of sub-fertile women would be in the starting cohort. 
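The mechanism behind this bias, fertile women conceiving and leaving the cohort faster than sub-fertile women, can be sketched with a deterministic toy model. The parameter values are illustrative only (loosely Wilcox-like), and the sketch ignores drop-outs and hCG-only pregnancies; the function name is mine.

```python
def conception_rates(n_start, frac_fertile, fec_fert, fec_subf, n_cycles):
    """Expected per-cycle conception rate for a two-subcohort population.
    Each cycle, women who conceive leave; the rest continue to the next cycle."""
    n_fert = n_start * frac_fertile
    n_subf = n_start - n_fert
    rates = []
    for _ in range(n_cycles):
        preg = n_fert * fec_fert + n_subf * fec_subf
        rates.append(preg / (n_fert + n_subf))
        n_fert *= 1 - fec_fert          # fertile subcohort depletes quickly
        n_subf *= 1 - fec_subf          # sub-fertile subcohort depletes slowly
    return rates

# Illustrative, loosely Wilcox-like numbers: 221 women, ~72% fertile
rates = conception_rates(221, 0.72, 0.43, 0.08, 9)
```

The per-cycle rate declines monotonically even though each subcohort's fecundability is constant, so an aggregate estimate averaged over all cycles understates fecundability among the normally fertile women.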
The extent of the bias of such estimates will depend on factors including the heterogeneity of the population and the number of cycles studied.\n\nEstimates for FECHCG of 30%7 and 40%8, and for FECCLIN of 30%8 and 25%6 probably underestimate the fecundability of reproductively healthy women owing to a mixed fertile/sub-fertile population in these studies. The object of the present analysis was to determine whether the published aggregate data supported this hypothesis and to estimate fecundability for any sub-cohorts identified. The modelling approach is conceptually simple; nevertheless, the results strongly indicate that the hypothesis is true and therefore provide less biased estimates of fecundability for reproductively normal women. These higher estimates of fecundability narrow the range of plausible values for embryo mortality in normal fertile women.\n\n\nMethods\n\nData were obtained from Table 2 of Wilcox6, Table 3 and Figure 1 of Zinaman7 and Table 2 of Wang8 studies. Fourteen women who did not conceive after 24 months were included in the analysis of the Wilcox data (1 reproductive cycle per month was assumed). A subsequent publication reported an extra cycle and an extra hCG pregnancy26; however, it is not clear in which cycle this occurred, and so the original report data6 have been used. In Wilcox and Wang, for each study cycle, the number of (i) women starting each cycle, (ii) hCG pregnancies, and (iii) clinical pregnancies were recorded. The number of women who finished the study without becoming clinically pregnant and the number of women who dropped out at the end of each cycle were also reported. Women who conceived an hCG positive pregnancy but not a clinical pregnancy in a cycle continued in the study. Wilcox reported data for a maximum of nine cycles per subject and Wang for 14. The Zinaman study was similar, except that hCG data were obtained for only the first three study cycles. 
In the subsequent nine cycles only clinical pregnancy was recorded. Also, only the first pregnancy, whether hCG or clinical was reported.\n\nProbabilities and percentages were estimated as logits (base 10). Standard errors are shown. Actual probabilities with 95% confidence intervals are reported in Figure 1. Two alternatively parameterised (Model 0 & Model 00) but statistically identical models were used to obtain standard errors for FECHCG and FECCLIN since FECCLIN = FECHCG × πCLIN (ELS = extended least squares; dof = degrees of freedom.)\n\nDegrees of freedom (dof) is the difference in the number of estimated parameters between the models. χ2 is the difference in objective function values (ELS) for the two models. P values were calculated using likelihood ratio tests. The models are defined in brackets. H0 is the null hypothesis. H1 is the alternative hypothesis. NONMEM control files are named according to the study and the model, e.g., Model 0 for the Wang data is WANG0.ctl.\n\nEstimates of hCG (FECHCG) and clinical (FECCLIN) fecundabilities and πCLIN are derived from three hCG pregnancy studies as described in the text. πLB is calculated from published values in Wilcox6, Zinaman7 and Wang8 study reports. Estimates of fertilised egg loss up to implantation, clinical recognition and birth are provided, based on three scenarios: (i) high implantation probability (πHCG = 90%); (ii) equal implantation and fertilisation probabilities (πFERT = πHCG); (iii) high fertilisation probability (πFERT = 90%). The probability of sperm-ovum-co-localisation (πSOC) was assumed to be 0.80.\n\nEach panel shows the data value from the study for each point (○ = women starting cycle; + = hCG pregnancies; × = clinical pregnancies). The line indicates the best fit models as defined in Table 1. 
Parameter estimates and [95% confidence intervals] from these models are also shown.\n\nObserved data were modelled to estimate the following parameters: (1) %fert(1) = the percentage of fertile women in the starting cohort; (2) FECHCG = the probability of conceiving an hCG pregnancy per cycle; (3) FECCLIN = the probability of becoming clinically pregnant per cycle. Alternative parameterisation allowed the probability of an hCG pregnancy progressing to a clinical pregnancy (πCLIN) to also be determined. The percentage of sub-fertile women in the starting cohort was %subf(1) = 100% – %fert(1). FECHCG, FECCLIN and πCLIN were determined for both fertile and sub-fertile sub-cohorts. The following expressions define the relationship between the parameters and the modelled estimates.\n\n[Equations (1)–(9), which express these relationships, are not reproduced here.]\n\nWhere: N(#) is the number of women starting cycle # (for cycle 1, N(1) was fixed for each set of study data; Wilcox = 221; Zinaman = 200; Wang = 518); NFERT(#) and NSUBF(#) are the modelled number of fertile and sub-fertile women starting cycle #; PREGHCG/FERT(#) and PREGCLIN/FERT(#) are predicted numbers of hCG and clinical pregnancies in fertile women in cycle # (and analogously for sub-fertile women); FIN(#) is the number of women who finished the study without becoming clinically pregnant in cycle #; DROP(#) is the number of women who withdrew from the study at the end of cycle #; %fert(#) is the percentage of women starting cycle # who were fertile (and analogously for sub-fertile women); NONPREG(#) is the number of non-pregnant women after # cycles (equation (9) was only used to incorporate 14 non-pregnant women after 24 months into the Wilcox data model). Model expansion to allow three fertility sub-cohorts and contraction to a single fertility sub-cohort enabled hypotheses about parameters and sub-cohorts to be statistically evaluated.\n\nAll probabilities and percentages were estimated as logits (base 10). 
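The cycle-by-cycle structure of the model can be illustrated in code. The sketch below is not the NONMEM implementation: it is a simplified forward simulation that ignores finishers and dropouts (FIN(#) and DROP(#)), uses a common πCLIN for both sub-cohorts (as the fitted models do), and takes illustrative parameter values loosely in the spirit of the Wang estimates; `inv_logit10` shows the base-10 logit back-transformation used for the probability parameters.

```python
def inv_logit10(theta):
    """Back-transform a base-10 logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + 10.0 ** (-theta))

def simulate_cohort(n_start, pct_fert, fec_hcg_fert, fec_hcg_subf, pi_clin, n_cycles):
    """Predicted hCG and clinical pregnancies per cycle for a cohort mixing
    fertile and sub-fertile women.  Women leave the risk set only on
    clinical pregnancy; study finishers and dropouts are ignored here."""
    n_fert = n_start * pct_fert
    n_subf = n_start * (1.0 - pct_fert)
    rows = []
    for cycle in range(1, n_cycles + 1):
        preg_hcg = n_fert * fec_hcg_fert + n_subf * fec_hcg_subf
        preg_clin = preg_hcg * pi_clin  # FEC_CLIN = FEC_HCG x pi_CLIN
        rows.append((cycle, n_fert + n_subf, preg_hcg, preg_clin))
        # only clinically pregnant women exit the risk set
        n_fert -= n_fert * fec_hcg_fert * pi_clin
        n_subf -= n_subf * fec_hcg_subf * pi_clin
    return rows

# Illustrative (not fitted) values: N(1) = 518 as in Wang, 94% fertile,
# FEC_HCG of 0.46 (fertile) and 0.05 (sub-fertile), common pi_CLIN of 0.75.
rows = simulate_cohort(518, 0.94, 0.46, 0.05, 0.75, 14)
```

Because the fertile sub-cohort conceives and exits faster, the predicted pregnancies per cycle fall steeply at first and then flatten as the risk set becomes dominated by sub-fertile women, which is the qualitative pattern the two-sub-cohort model captures.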
Residual unexplained variance (RUV) was modelled as a function of predicted values (PRED). [The variance function is not reproduced here.]\n\nData were analysed with NONMEM 7.3.0 (ICON plc, Dublin, Ireland) and implemented using Wings for NONMEM (http://wfn.sourceforge.net/). Parameters were estimated using a maximum likelihood algorithm (First Order Conditional Estimation with Interaction) and standard errors derived using the inverse Hessian (MATRIX = R). The objective function in NONMEM is the Extended Least Squares (ELS)27. Statistical hypotheses of nested models (Table 2) were tested using likelihood ratio tests (LRT). Control and data files are available online. Control files are named after the study and the model, e.g., WANG0.ctl is the control file for Model 0 applied to the Wang study data.\n\n\nResults\n\nFigure 1 shows the original data values and the fitted models plotted by cycle. Parameter estimates are also shown and output from the models is given in Table 1. These models incorporate discrete fertile and sub-fertile sub-cohorts with differing FECHCG but common πCLIN values. Statistical comparison of alternative models strongly indicated that reducing the dimensionality of the model to a single FECHCG value substantially reduced its quality (Table 2, Hypothesis 1), whereas expanding the model to allow for three different FECHCG values did not improve the quality of the model (Table 2, Hypothesis 2). These statistical results indicate that the data are consistent with bi-modal study populations comprising two distinct fertility sub-cohorts. There was no statistical indication that πCLIN differed between these sub-cohorts (Table 2, Hypothesis 3). Evidence for heteroscedasticity in the residual error was strong for the Wilcox and Wang studies, and weak for the Zinaman study (Table 2, Hypothesis 4).\n\nFigure 2 illustrates the estimated parameter values. Notwithstanding the differences between the studies, there is considerable agreement in the estimates. 
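The likelihood ratio tests behind Table 2 follow the standard recipe: the drop in the objective function (a −2 log-likelihood scale) between nested models is referred to a χ2 distribution with dof equal to the difference in the number of estimated parameters. A minimal sketch, restricted to dof = 1 (where the χ2 survival function has the closed form erfc(√(x/2))) and using only the Python standard library; the objective-function values are invented for illustration:

```python
import math

def lrt_pvalue_1dof(objfn_reduced, objfn_full):
    """Likelihood ratio test for nested models differing by one parameter.
    Objective functions are on a -2 log-likelihood scale, so their
    difference is chi-square under H0; for 1 dof, P = erfc(sqrt(x / 2))."""
    x = objfn_reduced - objfn_full
    if x < 0:
        raise ValueError("the full model should fit at least as well as the reduced model")
    return math.erfc(math.sqrt(x / 2.0))

# A drop of 3.84 in the objective function gives P close to 0.05.
p = lrt_pvalue_1dof(1203.84, 1200.0)
```

For comparisons differing by more than one parameter, the same difference is referred to a χ2 distribution with the appropriate degrees of freedom, e.g. via `scipy.stats.chi2.sf`.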
One noteworthy difference is in the proportion of sub-fertile women. This was low (6.0%) in the Wang study, compared to approximately 25% in the other two. Zinaman et al. commented on the high proportion of apparently infertile women in their study despite their efforts during recruitment7. The estimate of 22.8% sub-fertile women is consistent with their estimate of 18% infertility, bearing in mind that sub-fertile women may conceive, albeit with a lower probability. The Wang study was conducted in young Chinese women and had the highest FECHCG/FERT (46.2%) and lowest πCLIN (75.4%) values. This may reflect the Bayesian methodology used to detect hCG positive cycles, the identification of DDT (dichlorodiphenyltrichloroethane), present at unusually high levels in this group28, as a positive predictor of pre-clinical pregnancy loss29, or even a higher incidence of gestational trophoblastic disease in Asian women30.\n\nValues are shown for Wilcox (□), Zinaman (▲) and Wang () studies. Panel A shows the proportions in the starting cohorts modelled as fertile or sub-fertile (%fert(1) & %subf(1)). Panel B shows the hCG (FECHCG) and clinical (FECCLIN) fecundabilities and the probability of hCG pregnancies progressing to clinical pregnancies (πCLIN). Values are derived from modelled parameter estimates (Table 1) and error bars indicate 95% confidence intervals.\n\nThe analysis also indicates that fewer hCG pregnancies in the Zinaman study (12.5%) failed to progress to clinical recognition, compared to either the Wilcox (21.7%) or Wang (24.6%) studies. This may reflect differences in methodology for detecting hCG, the fact that they made fewer hCG measurements, or differences in the definition of clinical pregnancy. Wilcox and Wang defined clinical pregnancy as those that lasted at least 6 weeks after the last menstrual period6,8,17,26. In Zinaman, clinical pregnancy was determined following serum testing if a woman’s anticipated menses was just one day late7. 
Hence, the window for pre-clinical embryo loss was approximately 1–4 weeks post-fertilisation for Wilcox and Wang and 1–2 weeks for Zinaman. This different definition of clinical pregnancy would not only contribute to the higher πCLIN value from Zinaman but also to the increased clinical loss of 21.0% compared to 12–13% observed by Wilcox and Wang.\n\nQuantifying the outcome of clinical pregnancies is relatively straightforward. Excluding those lost to follow-up and induced abortions, the probability of a clinical pregnancy progressing to a live birth (πLB) was: Wilcox, 87.7% (136/155); Zinaman, 79.0% (79/100); and Wang, 87.1% (373/428). Combining these values with the modelled πCLIN provides an estimate for embryo loss from implantation to live birth of 31.3% (Wilcox), 30.9% (Zinaman) and 34.2% (Wang) (Table 3).\n\nEstimating embryo loss prior to hCG detection is less straightforward. For sub-fertile participants, it is impossible to know why they struggled to become pregnant: there are many causes of sub-fertility31. However, for normally fertile women the modelled hCG fecundability values can be used to put limits on fertilisation (πFERT) and implantation (πHCG) conditional probabilities. As noted above, fecundability is the product of the conditional probabilities of success for each stage of the reproductive cycle. Hence for Wang:\n\nFECHCG = πSOC × πFERT × πHCG = 0.462\n\nSince probabilities cannot be greater than 1, the lowest possible value for πHCG must be 0.462, indicating a maximum possible loss from fertilisation up to implantation in these women of 53.8%. However, it is unlikely that all other probabilities equal 1. Sperm-ovum co-localisation is dependent on both behavioural and biological factors. As previously noted, the analyses of Barrett & Marshall22,32 and Schwartz23 show that daily intercourse is more reproductively effective than alternate day intercourse. Hence, at coital frequencies less than once per day, πSOC must be less than 1. 
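This bounding argument is plain arithmetic and can be checked directly. The sketch below uses the Wang values quoted above (FECHCG = 0.462, πCLIN = 0.754, πLB = 0.871) together with the assumed πSOC of 0.80 and, illustratively, a high fertilisation probability of 0.90:

```python
def implied_pi_hcg(fec_hcg, pi_soc=1.0, pi_fert=1.0):
    """Invert FEC_HCG = pi_SOC * pi_FERT * pi_HCG for pi_HCG."""
    return fec_hcg / (pi_soc * pi_fert)

def total_embryo_loss(pi_hcg, pi_clin, pi_lb):
    """Loss from fertilisation to live birth: one minus the product of
    the per-stage survival probabilities."""
    return 1.0 - pi_hcg * pi_clin * pi_lb

# Absolute bound: with pi_SOC = pi_FERT = 1, pi_HCG cannot be below FEC_HCG,
# so the maximum pre-implantation loss is 1 - 0.462 = 53.8%.
min_pi_hcg = implied_pi_hcg(0.462)

# Under pi_SOC = 0.80 and pi_FERT = 0.90, pi_HCG rises to ~0.64
# (~36% pre-implantation loss), and the implied total loss from
# fertilisation to live birth is ~58%.
pi_hcg = implied_pi_hcg(0.462, pi_soc=0.80, pi_fert=0.90)
loss = total_embryo_loss(pi_hcg, 0.754, 0.871)
```

How far below 1 πSOC actually sits is an empirical question about the timing and frequency of intercourse and the incidence of anovulatory cycles.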
Specifically, a reduction of fecundability from 0.49 with daily to 0.39 for alternate day intercourse23 points towards a reduction in πSOC of approximately 20%. Volunteers in these hCG studies wished to become pregnant and were undoubtedly aware of the importance of well-timed intercourse. However, they were not required to have daily intercourse and it is likely that in some of the 3,137 cycles intercourse was not always ideally timed. Indeed, in 360/625 cycles in the Wilcox study, intercourse occurred from zero to two times during the 6 days before ovulation, and intercourse occurred on only 40% of the 6 pre-ovulatory days in 625 cycles17. It seems likely therefore that πSOC and hence fecundability were not maximised in these studies.\n\nFurthermore, not all cycles are ovulatory. Leridon suggested that levels of anovulation lie between 5 and 15%33. Among normal healthy women, the incidence of anovulation ranged from 5.5–12.8% depending on the detection method used34. Therefore, considering behavioural and biological factors together, it seems reasonable to suppose that πSOC < 1.\n\nIt also seems unlikely that either fertilisation or implantation probabilities equal 1. Hence, Table 3 shows derived values for πFERT and πHCG assuming that πSOC = 0.80, and under conditions where: (i) πFERT = 0.90; (ii) πFERT = πHCG; and (iii) πHCG = 0.90. Based on this analysis, a plausible range for total embryo loss from fertilisation to birth is 40–60%. This is consistent with estimates from both older35 and more recent36 text books. Even with the wide range of mathematically possible outcomes, it is likely that estimates of 90%20, 83%37, 80–85%38, 78%1, 76%2 and 70%10,12 total human embryonic loss are excessive.\n\n\nDiscussion\n\nIn 1980, Schwartz wrote that Barrett & Marshall’s estimate of fecundability of 0.68 for daily intercourse “seems to be high”. It implies an absolute maximum limit of embryo mortality of 32%. 
Schwartz contrasted this with Leridon’s estimate of 44% embryo loss in the first 6 weeks following fertilisation3. However, Leridon’s estimates for early intrauterine mortality are substantially dependent on data and analysis from Hertig4,5, which are themselves of questionable precision2,13,39. Widespread pessimism about human reproductive efficiency may have become a self-fulfilling prophecy in the absence of relevant good quality data.\n\nNevertheless, Schwartz’s analysis is a useful improvement on that of Barrett & Marshall and points clearly to the presence of infertile or non-viable cycles. The challenge arises in assigning a mechanistic cause for this “non-viability”. Previous reports draw attention to the difficulty of teasing apart distinct components, e.g., egg viability versus uterine receptivity24, or male and female factors15, and alternative modelling approaches will yield “different interpretations of the parameters related to cycle viability”15. The advantage of the present models is that the unit of analysis remains the cycle, i.e., fecundability, but the heterogeneity of the population is also acknowledged and explicitly incorporated. The model for estimating embryo loss also accommodates other plausible mechanisms for reproductive failure, rather than attributing all unaccounted reproductive inefficiency to pre-implantation embryo mortality. Although the model does not provide a definitive answer, it does offer plausible limits within which the answer may lie.\n\nThe results of this analysis offer a statistically clear picture of bi-modal study populations comprising couples with two discrete levels of fertility. Expanding the model to three levels does not improve this picture and the published data do not support a model of uni-modal, albeit varied, fecundability. Put simply, there was a significant proportion of couples in these studies who were, for unknowable reasons, infertile or clearly sub-fertile. 
Incorporation of data derived from such couples in calculations to determine normal fecundability will therefore result in biased estimates. By analytically separating the study population into reproductively normal and sub-fertile sub-cohorts, more accurate estimates for normal reproductive function and embryo mortality have been obtained. The analysis presented here cannot be satisfactorily completed owing, in part, to a lack of data on fertilisation success rates in vivo40,41. Consequently, the range for pre-implantation loss, at approximately 10–40%, is wide, although inclusive of Hertig’s pre-implantation loss estimate of 30%4,5. Despite the imperfections and weaknesses in the available data, it is apparent that plausible values for embryo mortality are considerably less than some figures published in the scientific literature. It is concluded that a plausible range for natural human embryo mortality from fertilisation to live birth in normal healthy women is approximately 40–60%.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data Wilcox et al. study, 10.5256/f1000research.9479.d133951\n\nF1000Research: Dataset 2. Raw data Zinaman et al. study, 10.5256/f1000research.9479.d133952\n\nF1000Research: Dataset 3. Raw data Wang et al. study, 10.5256/f1000research.9479.d133953", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThanks are due to Professor David Paton for providing helpful comments and suggestions during the writing of this paper.\n\n\nReferences\n\nRoberts CJ, Lowe CR: Where have all the conceptions gone? Lancet. 1975; 305(7905): 498–9. Publisher Full Text\n\nBoklage CE: Survival probability of human conceptions from fertilization to term. Int J Fertil. 1990; 35(2): 75, 79–80, 81–94. PubMed Abstract\n\nLeridon H: Intrauterine Mortality. Human Fertility: The Basic Components. 
Chicago: The University of Chicago Press; 1977; 48–81. Reference Source\n\nHertig AT, Rock J, Adams EC, et al.: Thirty-four fertilized human ova, good, bad and indifferent, recovered from 210 women of known fertility; a study of biologic wastage in early human pregnancy. Pediatr. 1959; 23(1 Part 2): 202–11. PubMed Abstract\n\nHertig AT: The Overall Problem in Man. In: Benirschke K, editor. Comparative Aspects of Reproductive Failure: An International Conference at Dartmouth Medical School. Berlin: Springer Verlag; 1967; 11–41. Publisher Full Text\n\nWilcox AJ, Weinberg CR, O'Connor JF, et al.: Incidence of early loss of pregnancy. N Engl J Med. 1988; 319(4): 189–94. PubMed Abstract | Publisher Full Text\n\nZinaman MJ, Clegg ED, Brown CC, et al.: Estimates of human fertility and pregnancy loss. Fertil Steril. 1996; 65(3): 503–9. PubMed Abstract | Publisher Full Text\n\nWang X, Chen C, Wang L, et al.: Conception, early pregnancy loss, and time to clinical pregnancy: a population-based prospective study. Fertil Steril. 2003; 79(3): 577–84. PubMed Abstract | Publisher Full Text\n\nCole LA: hCG, the wonder of today's science. Reprod Biol Endocrinol. 2012; 10: 24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChard T: Frequency of implantation and early pregnancy loss in natural cycles. Baillieres Clin Obstet Gynaecol. 1991; 5(1): 179–89. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, Wehmann RE, et al.: Measuring early pregnancy loss: laboratory and field methods. Fertil Steril. 1985; 44(3): 366–74. PubMed Abstract | Publisher Full Text\n\nMacklon NS, Geraedts JP, Fauser BC: Conception to ongoing pregnancy: the 'black box' of early pregnancy loss. Hum Reprod Update. 2002; 8(4): 333–43. PubMed Abstract | Publisher Full Text\n\nBiggers JD: Risks of In Vitro Fertilization and Embryo Transfer in Humans. In: Crosignani PG, Rubin BL, editors. In Vitro Fertilization and Embryo Transfer. 
London: Academic Press, 1983; 393–410.\n\nBenagiano G, Farris M, Grudzinskas G: Fate of fertilized human oocytes. Reprod Biomed Online. 2010; 21(6): 732–41. PubMed Abstract | Publisher Full Text\n\nZhou H, Weinberg CR, Wilcox AJ, et al.: A random-effects model for cycle viability in fertility studies. J Am Stat Assoc. 1996; 91(436): 1413–22. PubMed Abstract | Publisher Full Text\n\nAsher GW: Reproductive cycles of deer. Anim Reprod Sci. 2011; 124(3–4): 170–5. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, Baird DD: Timing of sexual intercourse in relation to ovulation. Effects on the probability of conception, survival of the pregnancy, and sex of the baby. N Engl J Med. 1995; 333(23): 1517–21. PubMed Abstract | Publisher Full Text\n\nDrife JO: What proportion of pregnancies are spontaneously aborted? Brit Med J. 1983; 286(6361): 294. Reference Source\n\nBraude PR, Johnson MH: The Embryo in Contemporary Medical Science. In: Dunstan GR, editor. The Human Embryo: Aristotle and the Arabic and European Traditions. Exeter: University of Exeter Press; 1990; 208–21.\n\nOpitz JM: Human Development - The Long and the Short of it. In: Furton EJ, Mitchell LA, editors. What is Man O Lord? The Human Person in a Biotech Age; Eighteenth Workshop for Bishops. Boston, MA: The National Catholic Bioethics Center; 2002; 131–53.\n\nLeridon H: Fecundability. Human Fertility: The Basic Components. Chicago: The University of Chicago Press; 1977; 22–47. Reference Source\n\nBarrett JC, Marshall J: The risk of conception on different days of the menstrual cycle. Popul Stud (Camb). 1969; 23(3): 455–61. PubMed Abstract | Publisher Full Text\n\nSchwartz D, Macdonald PD, Heuchel V: Fecundability, coital frequency and the viability of ova. Popul Stud (Camb). 1980; 34(2): 397–400. PubMed Abstract | Publisher Full Text\n\nWeinberg CR, Gladen BC, Wilcox AJ: Models relating the timing of intercourse to the probability of conception and the sex of the baby. Biometrics. 
1994; 50(2): 358–67. PubMed Abstract | Publisher Full Text\n\nZhou H, Weinberg CR: Potential for bias in estimating human fecundability parameters: a comparison of statistical models. Stat Med. 1999; 18(4): 411–22. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, Baird DD: Risk factors for early pregnancy loss. Epidemiology. 1990; 1(5): 382–5. PubMed Abstract\n\nSheiner LB, Beal SL: Pharmacokinetic parameter estimates from several least squares procedures: superiority of extended least squares. J Pharmacokinet Biopharm. 1985; 13(2): 185–201. PubMed Abstract | Publisher Full Text\n\nLongnecker MP: Invited Commentary: Why DDT matters now. Am J Epidemiol. 2005; 162(8): 726–8. PubMed Abstract | Publisher Full Text\n\nVenners SA, Korrick S, Xu X, et al.: Preconception serum DDT and pregnancy loss: a prospective study using a biomarker of pregnancy. Am J Epidemiol. 2005; 162(8): 709–16. PubMed Abstract | Publisher Full Text\n\nTham BW, Everard JE, Tidy JA, et al.: Gestational trophoblastic disease in the Asian population of Northern England and North Wales. Br J Obstet Gynaecol. 2003; 110(6): 555–9. PubMed Abstract | Publisher Full Text\n\nEdwards RG: Introduction. Conception in the Human Female. London: Academic Press, 1980; 1–22.\n\nBarrett JC: Fecundability and coital frequency. Popul Stud (Camb). 1971; 25(2): 309–13. PubMed Abstract | Publisher Full Text\n\nLeridon H: The Physiological Basis. Human Fertility: The Basic Components. Chicago: The University of Chicago Press; 1977; 5–16. Reference Source\n\nLynch KE, Mumford SL, Schliep KC, et al.: Assessment of anovulation in eumenorrheic women: comparison of ovulation detection algorithms. Fertil Steril. 2014; 102(2): 511–518.e2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdwards RG: Sexuality and Coitus. Conception in the Human Female. London: Academic Press; 1980; 525–72 at 561.\n\nJones RE, Lopez KH: Pregnancy. Human Reproductive Biology. 4th ed. Amsterdam: Elsevier, 2014; 175–204. 
Publisher Full Text\n\nHarris J: Stem cells, sex and procreation. Camb Q Healthc Ethics. 2003; 12(4): 353–71. PubMed Abstract | Publisher Full Text\n\nJohnson MH, Everitt BJ: Chapter 15: Fertility. Essential Reproduction. 5th ed. Oxford: Wiley-Blackwell; 2000; 251–74.\n\nPotts M, Diggory P, Peel J: Spontaneous Abortion. Abortion. Cambridge: Cambridge University Press, 1977; 45–64.\n\nShort RV: When a conception fails to become a pregnancy. Ciba Found Symp. 1978; (64): 377–94. PubMed Abstract | Publisher Full Text\n\nKline J, Stein Z, Susser M: Conception and Reproductive Loss: Probabilities. Conception to Birth. Epidemiology of Prenatal Development. New York: OUP, 1989; 43–68.\n\nJarvis GE: Dataset 1 in: Estimating Limits for Natural Human Embryo Mortality. F1000Research. 2016. Data Source\n\nJarvis GE: Dataset 2 in: Estimating Limits for Natural Human Embryo Mortality. F1000Research. 2016. Data Source\n\nJarvis GE: Dataset 3 in: Estimating Limits for Natural Human Embryo Mortality. F1000Research. 2016. Data Source" }
[ { "id": "15931", "date": "19 Sep 2016", "name": "Stephen J. Senn", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is an interesting and generally well-written article. I am unfamiliar with the field of reproductive physiology and female fertility regulation and so cannot be described as an expert reviewer. However, I do have expertise in the field of statistics and modelling and felt I understood the issues much better having read this article, and that is a tribute to its general clarity.\nNevertheless, at one or two points I felt the clarity could have been improved. The author is not always completely explicit on two points. The first is whether a conditional probability is being estimated (and if so conditional on what) and the second is the precise details of the mixed model being used.\nSince readers will not necessarily be familiar with the software an author uses, and since the more complex the subject the more likely an algorithm will differ between packages, the inevitable problems in a field of this complexity are 1) that it is quite likely that readers will not be familiar with some details of implementation and 2) that results might differ somewhat from package to package. The author has used NONMEM, a package that is popular in nonlinear mixed effect modelling in pharmacokinetics but less well-known in other fields. This is a limitation of the article. (Not because NONMEM is not a suitable package to use but because it is the only package used.) 
For example, Makubate and Senn, modelling the effects of cross-over trials in infertility, found some differences depending on whether SAS, GenStat or R were used to implement what was ostensibly the same model, or indeed to program it from scratch using Mathcad, and in the field of estimating values below the limit of quantitation, Senn, Holford and Hockey got different standard errors using NONMEM compared to SAS, GenStat and R, although such differences are not necessarily inherent to packages but may reflect implementation.\nOn a more technical matter, the author has used a discrete mixture which some might regard as being excessively restrictive and a little unrealistic, although the author does claim "the published data do not support a model of uni-modal, albeit varied, fecundability". A further issue is that, unlike for causal studies such as clinical trials, the degree to which the subjects studied are representative of a population of interest is important. Lacking knowledge of this particular field and the studies cited, I cannot judge whether this condition is satisfied. It seems at least plausible that sub-fertile couples are more likely to be studied than those of average fertility.\nNevertheless, this seems to be an interesting and valuable exercise in modelling a difficult field.", "responses": [ { "c_id": "2331", "date": "29 Nov 2016", "name": "Gavin Jarvis", "role": "Author Response", "response": "I would like to thank Professor Senn for his remarks1. I respond to his comments as follows:\n\nRepresentative Populations: This is an important point and Prof. Senn’s intuitive concern is well-founded. The analysis attempts to introduce a little more focus to the data from the three particular studies2-4. 
Despite the similarities between the quantitative conclusions from the three studies, extrapolation to a general population is risky, given the known and likely variances in fertility associated with age, health and social status, level of education, ethnicity, etc. However, the strength of these studies lies in the detail and density of the data, which is rare among other similar studies, and I believe this re-analysis does yield some additional insight. I have addressed some of the concerns regarding differences in populations and other sources of reproductive variance in another article5.\n\nConditional Probabilities: I hope the following makes my intention more explicit. If P(A|B) is the probability of event A, conditional on event B, then:\n(i) πLB = P(A|B), where A is a live birth, and B is a clinical pregnancy\n(ii) πCLIN = P(A|B), where A is a clinical pregnancy, and B is a positive hCG test\n(iii) πHCG = P(A|B), where A is a positive hCG test, and B is successful fertilisation\n(iv) πFERT = P(A|B), where A is successful fertilisation, and B is the in vivo co-localisation of ovum and sperm\n(v) πSOC = P(A|B), where A is the in vivo co-localisation of ovum and sperm, and B is a single menstrual cycle\n\nDiscrete mixture model: I agree that dividing the cohort into two discrete populations is a little unrealistic. However, in the absence of the original raw data, there is little else that could be done. The data conform markedly better to this bi-modal distribution, as compared to either a uni-modal or tri-modal model. I doubt if this model captures all the quantitative subtlety that subsists in the data; nevertheless, it does both confirm and quantify, albeit perhaps a little brutally, the clear impression expressed in the original reports that the study populations included subjects who were sub-fertile.\n\nModelling packages: The point about differing outputs from different modelling software packages is also well made. 
My own instinct (which is admittedly not as refined as Prof. Senn’s) is that any difference is likely to be small, since, although I used NONMEM, there is no random effect (i.e., OMEGA) modelling other than the residual error variance (i.e., SIGMA). However, there are perhaps other reasons why performing the analysis in GenStat or R is preferable. At some point I will repeat the analysis and report back any differences in the output. I shall incorporate changes relating to points 1, 2 and 3 into a second version of the article. References Senn SJ: Referee Report For: Estimating limits for natural human embryo mortality [version 1; referees: 2 approved]. F1000Research. 2016; 5: 2083. Wilcox AJ, Weinberg CR, O'Connor JF, et al.: Incidence of early loss of pregnancy. N Engl J Med. 1988; 319(4): 189-94. PubMed PMID: 3393170. Zinaman MJ, Clegg ED, Brown CC, et al.: Estimates of human fertility and pregnancy loss. Fertil Steril. 1996; 65(3): 503-9. PubMed PMID: 8774277. Epub 1996/03/01. eng. Wang X, Chen C, Wang L, et al.: Conception, early pregnancy loss, and time to clinical pregnancy: a population-based prospective study. Fertil Steril. 2003; 79(3): 577-84. PubMed PMID: 12620443. Epub 2003/03/07. eng. Jarvis GE: Early embryo mortality in natural human reproduction: What the data say [version 1; referees: awaiting peer review]. F1000Research. 2016; 5: 2765." } ] }, { "id": "16765", "date": "03 Oct 2016", "name": "Alan O. 
Trounson", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis is a well thought out analysis of the data available on human pregnancy wastage. The conclusions are valid from the data explored but the major variant of failed fertilization and very early embryo loss cannot be estimated. Failed fertilization will always be an unknown in such studies but early embryo loss can be estimated from the large amount of IVF data published on embryo survival to the blastocyst stage (day 5-7). However, the vast bulk of this data comes from superovulated patients and this may not represent embryo loss in the natural ovulatory cycle. This data suggests that only 30% of conceptions end up as live babies at delivery (Macklon et al., 2002).\nIndeed, embryonic arrest before day 5 can be attributed to whole chromosome abnormalities in more than half of human embryos (McCoy et al., 2015). The embryonic losses due to mitotic and meiotic errors support the high embryonic wastage in human reproduction. It is a pity the authors didn’t include this genetic data in support of their hypothesis.", "responses": [ { "c_id": "2330", "date": "29 Nov 2016", "name": "Gavin Jarvis", "role": "Author Response", "response": "I would like to thank Professor Trounson for his remarks1. I respond to his comments as follows:\n\nUse of IVF data: The study was intentionally restricted to the analysis of hCG data from a natural reproductive context, and from the three studies in particular2-4. As Prof. 
Trounson indicates, data from IVF may not be representative of natural cycles. Extrapolation of conclusions from a specific to a wider or different context should always be done with caution (see review by Prof. Senn5). The use of IVF data to inform our understanding of natural reproduction can be particularly difficult, as I note elsewhere6.   Macklon et al., 20027: This is a well-known and frequently cited review. I discuss the referenced value of 30% elsewhere6. However, contrary to what Prof. Trounson seems to imply, it is neither a summary of nor an extrapolation from IVF data. It is part of Macklon’s “overview of the outcome of spontaneous human pregnancy”, and is copied directly from an earlier review on the “frequency of implantation and early pregnancy loss in natural cycles” by Prof. Tim Chard8. Macklon is explicit in stating that the conditions from which in vitro data are obtained are “far from ideal” and “do not reflect the normal situation”7. He reviews several hCG studies concluding that many problems associated with these were addressed by Wilcox2 and subsequently Zinaman3. Surprisingly however, the numerical estimates in Macklon (including the 30% survival value) do not reflect the outcome from these two studies, as I explain elsewhere6.   McCoy et al., 20159: It is difficult to incorporate quantitative conclusions from McCoy into a model of natural human embryo loss. All McCoy’s data are from IVF embryos and are susceptible to criticism regarding their accuracy as a description of natural reproduction. As he himself states: “specific rates of meiotic and mitotic error reported in this study are likely particular to the IVF population” and “studies also demonstrated that ovarian stimulation and IVF culture conditions can both influence rates of chromosome abnormalities”9. McCoy cites Macklon7 as authority for a 70% loss of all conceptions in human reproduction. 
Moreover, he goes further by associating this loss specifically with “young, otherwise fertile couples”9. McCoy also cites a published summary10 of Edmonds, 198211, an early hCG study. Edmonds' data lie at one extreme of the pre-Wilcox studies, with 56.8% of hCG+ cycles failing prior to clinical recognition. By contrast, Walker, 198812 reported no pre-clinical losses of hCG+ pregnancies. Neither Edmonds’ nor Walker’s estimates have been replicated since Wilcox, 19882. Subsequent studies report a post-implantation, pre-clinical loss of approximately 20%6. Elsewhere13, McCoy states that “Fewer than ~30% of conceptions result in successful pregnancy”, citing Wilcox. However, Wilcox2 reports that “The total rate of pregnancy loss after implantation, including clinically recognized spontaneous abortions, was 31%”. Therefore, McCoy’s conclusion that high levels of aneuploidy observed in IVF embryos can explain natural human embryo loss is not well-founded, since natural embryo loss is substantially lower than he claims. An alternative view is that the high level of aneuploidy observed in vitro is, at least in part, an artefact of the handling of human ova and the associated interventions of assisted reproductive technology, as suggested by Chard (“it is possible that at least some of the abnormalities are the result of the experimental procedures themselves”8) and Braude (“Experiments in our laboratories have suggested that the in vitro handling of oocytes can produce chromosomal aberrations at alarmingly high frequencies”14). The critical question is – how large is that part? These issues are addressed in more detail elsewhere6.   References: Trounson A: Referee Report For: Estimating limits for natural human embryo mortality [version 1; referees: 2 approved]. F1000Research 2016; 5: 2083. Wilcox AJ, Weinberg CR, O'Connor JF, et al.: Incidence of early loss of pregnancy. N Engl J Med. 1988; 319(4): 189-94. PubMed PMID: 3393170. 
Zinaman MJ, Clegg ED, Brown CC, et al.: Estimates of human fertility and pregnancy loss. Fertil Steril. 1996; 65(3): 503-9. PubMed PMID: 8774277. Epub 1996/03/01. eng. Wang X, Chen C, Wang L, et al.: Conception, early pregnancy loss, and time to clinical pregnancy: a population-based prospective study. Fertil Steril. 2003; 79(3): 577-84. PubMed PMID: 12620443. Epub 2003/03/07. eng. Senn SJ: Referee Report For: Estimating limits for natural human embryo mortality [version 1; referees: 2 approved]. F1000Research. 2016; 5: 2083. Jarvis GE: Early embryo mortality in natural human reproduction: What the data say [version 1; referees: awaiting peer review]. F1000Research. 2016; 5: 2765. Macklon NS, Geraedts JP, Fauser BC: Conception to ongoing pregnancy: the 'black box' of early pregnancy loss. Hum Reprod Update. 2002; 8(4): 333-43. PubMed PMID: 12206468. Epub 2002/09/11. eng. Chard T: Frequency of implantation and early pregnancy loss in natural cycles. Baillieres Clin Obstet Gynaecol. 1991; 5(1): 179-89. PubMed PMID: 1855339. Epub 1991/03/01. eng. McCoy RC, Demko ZP, Ryan A, et al.: Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development. PLoS genetics. 2015; 11(10): e1005601. PubMed PMID: 26491874. Pubmed Central PMCID: 4619652. Epub 2015/10/23. eng. Edmonds DK, Lindsay KS, Miller JF, et al.: Early embryonic mortality in women. Obstetrical and Gynecological Survey. 1983; 38(7): 433-34. eng. Edmonds DK, Lindsay KS, Miller JF, et al.: Early embryonic mortality in women. Fertil Steril. 1982; 38(4): 447-53. PubMed PMID: 7117572. Epub 1982/10/01. eng. Walker EM, Lewis M, Cooper W, et al.: Occult biochemical pregnancy: fact or fiction? Br J Obstet Gynaecol. 1988; 95(7): 659-63. PubMed PMID: 3273753. Epub 1988/07/01. eng. McCoy RC, Demko Z, Ryan A, et al.: Common variants spanning PLK4 are associated with mitotic-origin aneuploidy in human embryos. Science. 2015; 348(6231): 235-8. PubMed PMID: 25859044. Epub 2015/04/11. eng. 
Braude PR, Johnson MH, Pickering SJ, et al.: Mechanisms of Early Embryonic Loss In Vivo and In Vitro. In: Chapman M, Grudzinskas G, Chard T, editors. The Embryo: Normal and Abnormal Development and Growth. London: Springer-Verlag; 1991. p. 1-10." } ] } ]
1
https://f1000research.com/articles/5-2083
https://f1000research.com/articles/5-1356/v1
13 Jun 16
{ "type": "Method Article", "title": "DRIMSeq: a Dirichlet-multinomial framework for multivariate count outcomes in genomics", "authors": [ "Malgorzata Nowicka", "Mark D. Robinson", "Malgorzata Nowicka" ], "abstract": "There are many instances in genomics data analyses where measurements are made on a multivariate response. For example, alternative splicing can lead to multiple expressed isoforms from the same primary transcript. There are situations where the total abundance of gene expression does not change (e.g. between normal and disease state), but differences in the relative ratio of expressed isoforms may have significant phenotypic consequences or lead to prognostic capabilities. Similarly, knowledge of single nucleotide polymorphisms (SNPs) that affect splicing, so-called splicing quantitative trait loci (sQTL), will help to characterize the effects of genetic variation on gene expression. RNA sequencing (RNA-seq) has provided an attractive toolbox to carefully unravel alternative splicing outcomes and recently, fast and accurate methods for transcript quantification have become available. We propose a statistical framework based on the Dirichlet-multinomial distribution that can discover changes in isoform usage between conditions and SNPs that affect splicing outcome using these quantifications. The Dirichlet-multinomial model naturally accounts for the differential gene expression without losing information about overall gene abundance and by joint modeling of isoform expression, it has the capability to account for their correlated nature. The main challenge in this approach is to get robust estimates of model parameters with limited numbers of replicates. We approach this by sharing information and show that our method improves on existing approaches in terms of standard statistical performance metrics. 
The framework is applicable to other multivariate scenarios, such as Poly-A-seq or where beta-binomial models have been applied (e.g., differential DNA methylation). Our method is available as a Bioconductor R package called DRIMSeq.", "keywords": [ "DRIMSeq", "genomics", "single nucleotide polymorphism", "RNA-seq", "splicing", "statistical framework" ], "content": "Introduction\n\nWith the development of digital high-throughput sequencing technologies, the analysis of count data in genomics has become an important theme motivating the investigation of new, more powerful and robust approaches that handle complex overdispersion patterns while accommodating the typical small numbers of experimental units.\n\nThe basic distribution for modeling univariate count responses is the Poisson distribution, which also approximates the binomial distribution. One important limitation of the Poisson distribution is that the mean is equal to the variance, which is not sufficient for modeling, for example, gene expression from RNA sequencing (RNA-seq) data where the variance is higher than the mean due to technical sources and biological variability1–5. A natural extension of the Poisson distribution that accounts for overdispersion is the negative-binomial distribution, which has been extensively studied in the small-sample situation and has become an essential tool in genomics applications1–3.\n\nAnalogously, the fundamental distribution for modeling multivariate count data is the multinomial distribution, which models proportions across multiple features. To account for overdispersion, the multinomial can be extended to the Dirichlet-multinomial (DM) distribution6. Because of its flexibility, the DM distribution has found applications in forensic genetics7, microbiome data analysis8, the analysis of single-cell data9 and for identifying nucleosome positions10. 
Another extension of the multinomial is the Dirichlet negative multinomial distribution11, which allows modeling of correlated count data and was applied in the analysis of clinical trial recruitment12. Notably, the beta-binomial distribution, used for example in differential methylation analyses of bisulphite sequencing data13–15, represents a special case of the DM.\n\nExpressed transcripts are generated by alternatively including exons into mature mRNAs. Hence, gene expression can be viewed as a multivariate expression of transcripts or exons, and such a representation allows one to study not only the overall gene expression, but also how it is composed of different isoforms. Changes in the relative ratios of different isoforms can have significant phenotypic consequences and their aberrations may be associated with disease16,17. Thus, in addition to differential gene expression, biologists are interested in using RNA-seq data to discover changes in isoform usage between conditions; this is also referred to as differential splicing (DS).\n\nAlternative splicing is a process regulated by complex protein-RNA interactions that can be altered by genetic variation. Knowledge of single nucleotide polymorphisms (SNPs) that affect splicing, known as splicing quantitative trait loci (sQTL), can help to characterize this layer of regulation. From an inference point of view, sQTL analyses are equivalent to DS analyses where covariates are defined by genotypes.\n\nIn this article, we propose the DM distribution to model relative usage of isoforms. The DM model treats transcript expression as a multivariate response and allows for flexible small-sample estimation of overdispersion. We address the challenge of obtaining robust estimates of the model parameters, especially dispersion, when only a small number of replicates is available, by applying an empirical Bayes approach to share information, similar to approaches proven successful in negative-binomial frameworks1,18.
In particular, weighted likelihood is used to moderate the gene-wise dispersion toward a common or trended value.\n\nThe Dirichlet-multinomial framework, implemented as a Bioconductor R package called DRIMSeq, supports both DS analysis and sQTL analysis. It has been evaluated and compared to the current best methods in extensive simulations and in real RNA-seq data analysis using transcript and exon counts, highlighting that DRIMSeq performs best with transcript counts. Furthermore, the framework can be applied to other types of emerging multivariate genomic data, such as PolyA-seq, where the collection of polyadenylated sites for a given gene is measured19, and to settings where the beta-binomial is already applied (e.g., differential methylation, allele-specific differential gene expression).\n\n\nApproaches to DS and sQTL analyses\n\nRNA-seq has provided an attractive toolbox to unravel alternative splicing outcomes. There are various methods designed explicitly to detect DS based on samples from different experimental conditions20–22. Independently, a set of methods was developed for detecting genetic variation associated with changes in splicing (sQTLs). While sQTL detection represents a different application, it is essentially DS between groups defined by genotypes. In the following overview, we do not distinguish between applications but rather between the general concepts used to detect differences in splicing.\n\nDS can be studied in three main ways: as differential isoform usage; in a more local context, as differential exon or exon junction usage; or as specific splicing events (e.g., exon skipping). All have their advantages and disadvantages. A survey of the main methods can be found in Table S1 (Supplementary File). From the quantification perspective, exon-level abundance estimation is straightforward since it is based on counting read-region overlaps (e.g., featureCounts23).
Exons from different isoforms may have different boundaries; thus, the authors of DEXSeq24 use HTSeq25 to quantify non-overlapping bins defined by projecting all exons onto the linear genome. However, this strategy does not utilize the full information from junction reads. Such reads are counted multiple times (in all exons that they overlap with), artificially increasing the total number of counts per gene and ignoring that junction reads support the isoforms that explicitly contain the combinations of exons spanned by these reads. This issue is addressed by Altrans26, which quantifies exon links (exon junctions), and by MISO27, rMATS28 and SUPPA29, all of which calculate splicing event inclusion levels expressed as percentage spliced in (PSI). Such events capture not only cassette exons but also alternative 3’ and 5’ splice sites, mutually exclusive exons or intron retention. GLiMMPS30 and Jia et al.31, with quantification from PennSeq32, use event inclusion levels for detecting SNPs that are associated with differential splicing. However, there are (hypothetical) instances where changes in splicing pattern may not be captured by exon-level quantifications (Figure 1A in Monlong et al.33). Furthermore, detection of more complex transcript variations remains a challenge for exon junction or PSI methods (see Figure S5 in Ongen et al.26). Soneson et al.22 considered counting strategies that accommodate various types of local splicing events, such as exon paths traced out by paired reads, junction counts or events that correspond to combinations of isoforms; in general, the default exon-based counting resulted in the strongest performance for DS gene detection.\n\nThe above methods allow for detection of differential usage of local splicing features, which can serve as an indicator of differential transcript usage but often without knowing specifically which isoforms are differentially regulated.
This can be a disadvantage in cases where knowing the isoform ratio changes is important, since isoforms are the ultimate determinants of proteins. Moreover, exons are not independent transcriptional units but building blocks of transcripts. Thus, the main alternative is to assess DS using isoform-level quantifications. A vast number of methods is available for gene isoform quantification, such as MISO27, BitSeq34, casper35, Cufflinks36, RSEM37, FlipFlop38 and, more recently, extremely fast pseudoalignment-based methods, such as Sailfish39, kallisto40 and Salmon41. Additionally, Cufflinks, casper and FlipFlop allow for de novo transcriptome assembly. Recently, the performance of various methods was extensively studied42,43, including a web tool43 that allows further comparisons. Despite this progress, it remains a complex undertaking to quantify isoform expression from short cDNA fragments since there is a high degree of overlap between transcripts in complex genes; this is a limitation of the technology, not the algorithms. In the case of incomplete transcript annotation, local approaches may be more robust and can detect differential changes due to transcripts that are not in the catalog22,26. Nevertheless, DS at the resolution of isoforms is the ultimate goal within the DRIMSeq framework, and with the emergence of longer reads (fragments), transcript quantifications will become more accurate and methods for multivariate transcript abundances will be needed.\n\nWhether the differential analysis is done at the transcript or local level, modeling and testing each transcript44,45 or exon ratio46 independently ignores the correlated structure of these quantities (e.g., proportions must sum to 1).
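To see why joint modeling matters, note that proportions constrained to sum to one are negatively correlated by construction. A quick Python check using Dirichlet draws (illustrative only; the arbitrary parameters are for demonstration):

```python
import numpy as np

# Sample proportion vectors from a Dirichlet distribution and verify that
# every pair of components has negative covariance, as the sum-to-one
# constraint forces one proportion down whenever another goes up.
rng = np.random.default_rng(0)
p = rng.dirichlet([5.0, 3.0, 2.0], size=50000)
cov = np.cov(p, rowvar=False)
assert cov[0, 1] < 0 and cov[0, 2] < 0 and cov[1, 2] < 0
```

This is the correlation that per-feature tests silently ignore, and that a multivariate model captures for free.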
Similarly, separate modeling and testing of exon junctions (Altrans26) or splicing events (rMATS28, GLiMMPS30,31,47) of a gene leads to non-independent statistical tests, although the full effect of this on calibration (e.g., controlling the rate of false discoveries) is not known. Nevertheless, with a larger number of tests, the multiple testing correction becomes more severe. In sQTL analyses, this burden is even larger since there are many SNPs tested for each gene. There, the issue of multiple comparisons is usually accounted for by applying a permutation scheme in combination with the false discovery rate (FDR) estimation26,30,33,44,46–48.\n\nDEXSeq and voom-diffSplice4,5 take another approach, where the modeling is done per gene. DEXSeq fits a generalized linear model (GLM), assuming that (exonic) read counts follow the negative-binomial distribution. A bin is deemed differentially used when its corresponding group-bin interaction term is significant. The exact details of voom-diffSplice are not published. Nevertheless, exons are again treated as independent in the gene-level model.\n\nIn contrast, MISO27, Cuffdiff36,49 and sQTLseekeR33 model alternative splicing as a multivariate response. MISO is designed for DS analyses only between two samples and does not handle replicates. Variability among replicates is captured within Cuffdiff via the Jensen-Shannon divergence metric on probability distributions of isoform proportions as a measure of changes in isoform relative abundances between samples. sQTLseekeR tests for the association between genotype and transcript composition, using an approach similar to a multivariate analysis of variance (MANOVA), without assuming any probability distribution, and using the Hellinger distance as a dissimilarity measure between transcript ratios.
Very recently, LeafCutter50 provides intron usage quantifications that can be used for both DS analyses (also using the DM model) and sQTL analyses via a correlation-based approach with FastQTL48.\n\nsQTLseekeR, Altrans, LeafCutter and other earlier methods for the sQTL analysis33,44–46 employ feature ratios to account for the overall gene expression. A potential drawback of this approach is that feature ratios do not take into account whether they are based on high or low expression, even though ratios derived from low expression carry more uncertainty. DRIMSeq naturally builds this in via the multinomial model.\n\n\nDirichlet-multinomial model for relative transcript usage\n\nIn the application of the DM model to DS, we refer to features of a gene. These features can be transcripts, exons, exonic bins or other multivariate measurable units, which for DS, contain information about isoform usage and can be quantified with (estimated) counts.\n\nAssume that a gene has q features with relative expression defined by a vector of proportions π = (π1,…,πq), and the feature counts Y = (Y1,…,Yq) are random variables. Let y = (y1,…,yq) be the observed counts and m = ∑j=1q yj. Here, m is treated as an ancillary statistic since it depends on the sequencing depth and gene expression, but not on the model parameters. The simplest way to model feature counts is with the multinomial distribution with probability function defined as:\n\nf(y; π, m) = {m!/(y1!⋯yq!)} ∏j=1q πj^yj,      (1)\n\nwhere the mean and the covariance matrix of Y are 𝔼(Y) = mπ and 𝕍(Y) = m{diag(π) – ππT}, respectively.\n\nTo account for overdispersion due to true biological variation between experimental units as well as technical variation, such as library preparation and errors in transcript quantification, we assume the feature proportions, Π, follow the (conjugate) Dirichlet distribution, with density function:\n\nf(π; γ) = {Γ(γ+)/∏j=1q Γ(γj)} ∏j=1q πj^(γj–1),      (2)\n\nwhere γj, j = 1,…,q are the Dirichlet parameters and γ+ = ∑j=1q γj.
The mean and covariance matrix of random proportions Π are 𝔼(Π) = γ/γ+ = π and 𝕍(Π) = {γ+diag(γ) – γγT}/{γ+²(γ+ + 1)}, respectively. We can see that the expected proportions are proportional to γ and their variance is inversely related to γ+, which is called the concentration or precision parameter. As γ+ gets larger, the proportions are more concentrated around their means.\n\nWe can derive the marginal distribution of Y by multiplying densities (1) and (2) and integrating over π. Then, feature counts Y follow the DM distribution6 with probability function defined as:\n\nf(y; γ) = {m!/(y1!⋯yq!)} {Γ(γ+)/Γ(γ+ + m)} ∏j=1q {Γ(γj + yj)/Γ(γj)}.      (3)\n\nThe mean of Y is unchanged at 𝔼(Y) = 𝔼{𝔼(Y|Π)} = 𝔼(mΠ) = mγ/γ+ = mπ, while the covariance matrix of Y is given by 𝕍(Y) = cm{diag(π) – ππT}, where c = (m+γ+)/(1+γ+) is the factor by which the Dirichlet-multinomial covariance is inflated relative to the ordinary multinomial covariance. c depends on the concentration parameter γ+, which controls the degree of overdispersion: the larger γ+, the smaller the variance of Y.\n\nWe can represent the DM distribution using an alternative parameterization: π = γ/γ+ and θ = 1/(1 + γ+); then, the covariance of Y can be represented as 𝕍(Y) = m{diag(π) – ππT}{1 + θ(m – 1)}, where θ can be interpreted as a dispersion parameter. When θ grows (γ+ gets smaller), the variance becomes larger. From the recurrence of the gamma function, xΓ(x) = Γ(x + 1), we can write Γ(α + x)/Γ(α) = ∏r=1x {α + (r – 1)}. Hence, the DM density function becomes:\n\nf(y; π, θ) = {m!/(y1!⋯yq!)} [∏j=1q ∏r=1yj {πj(1 – θ) + (r – 1)θ}] / [∏r=1m {(1 – θ) + (r – 1)θ}],      (4)\n\nsuch that for θ = 0, the DM reduces to the multinomial.\n\n\nDetecting DS and sQTLs with the DM model\n\nWithin DRIMSeq, the DM method can be used to detect the differential usage of gene features between two or more conditions. For simplicity, suppose that features of a gene are transcripts and the comparison is done between two groups. The aim is to determine whether transcript ratios of a gene are different in these two conditions. Formally, we want to test the hypothesis H0 : π1 = π2 against the alternative H1 : π1 ≠ π2.
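Before turning to estimation, the DM probability function above can be checked numerically. A minimal Python sketch (illustrative only; DRIMSeq itself is an R package, and the parameter values here are arbitrary):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import multinomial

def dm_logpmf(y, pi, gamma_plus):
    """Dirichlet-multinomial log-pmf with gamma_j = pi_j * gamma_plus."""
    y = np.asarray(y, dtype=float)
    m = y.sum()
    gamma = np.asarray(pi) * gamma_plus
    return (gammaln(m + 1) - gammaln(y + 1).sum()
            + gammaln(gamma_plus) - gammaln(gamma_plus + m)
            + (gammaln(gamma + y) - gammaln(gamma)).sum())

y = np.array([5, 3, 2])
pi = np.array([0.5, 0.3, 0.2])

# As theta -> 0 (gamma_plus -> infinity), the DM reduces to the multinomial
assert abs(dm_logpmf(y, pi, 1e7) - multinomial.logpmf(y, n=10, p=pi)) < 1e-4

# Overdispersion: sampling proportions from the Dirichlet before the
# multinomial draw inflates the variance by c = (m + gamma_plus)/(1 + gamma_plus)
rng = np.random.default_rng(1)
m, gp = 1000, 10.0
counts = np.array([rng.multinomial(m, rng.dirichlet(pi * gp))
                   for _ in range(20000)])
c = (m + gp) / (1 + gp)
expected_var = c * m * pi[0] * (1 - pi[0])
assert abs(counts[:, 0].var() / expected_var - 1) < 0.1
```

The two-stage sampling (Dirichlet proportions, then a multinomial draw) is exactly the hierarchical construction by which the DM marginal is derived.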
For the convenience of parameter estimation, we decide to use the DM parameterization with precision parameter γ+, which can take any non-negative value, instead of dispersion parameter θ, which is bounded to values between 0 and 1. Because our goal is to compare the proportions from two groups, γ+ is a nuisance parameter that gets estimated in the first step (see the following Section). Let l(π1, π2, γ+) be the joint log-likelihood function. Assuming γ+ = γ̂+, the maximum likelihood (ML) estimates of π1, π2 are the solution of ∂l/∂(π1, π2) = 0 evaluated at γ+ = γ̂+. Under the null hypothesis H0 : π1 = π2 = π, the ML estimate of π is the solution of ∂l/∂π = 0 evaluated at π1 = π2 = π and γ+ = γ̂+. We test the null hypothesis using a likelihood ratio statistic of the form\n\nD = 2{l(π̂1, π̂2, γ̂+) – l(π̂, π̂, γ̂+)},\n\nwhich asymptotically follows the chi-squared distribution χ²q–1 with q – 1 degrees of freedom. In comparisons across c groups, the number of degrees of freedom is (c – 1) × (q – 1). After all genes are tested, p-values can be adjusted for multiple comparisons with the Benjamini-Hochberg method.\n\nIn a DS analysis, groups are defined by the design of an experiment and are the same for each gene. In sQTL analyses, the aim is to find nearby (bi-allelic) SNPs associated with alternative splicing of a gene. Model fitting and testing is performed for each gene-SNP pair, and grouping of samples is defined by the genotype, typically translated into the number of minor alleles (0, 1 or 2). Thus, sQTL analyses are similar to DS analyses with the difference that multiple models are fitted and tested for each gene. Additional challenges to be handled in sQTL analyses include a large number of tests per gene with highly variable allele frequencies (models) and linkage disequilibrium, which can be accounted for in the multiple testing corrections. As in other sQTL studies33,47,48, we apply a permutation approach to empirically assess the null distribution of associations and use it for the adjustment of nominal p-values.
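The permutation idea can be sketched generically: shuffle group labels to build an empirical null for any association statistic. This is an illustrative Python helper with hypothetical names and a toy statistic, not DRIMSeq's implementation (which permutes its own likelihood-based statistics):

```python
import numpy as np

def permutation_pvalue(stat, labels, data, n_perm=1000, rng=None):
    """Empirical p-value: compare the observed statistic against the
    null distribution obtained by shuffling the group labels."""
    rng = rng if rng is not None else np.random.default_rng(0)
    labels = np.asarray(labels)
    obs = stat(data, labels)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = stat(data, rng.permutation(labels))
    # add-one correction keeps the estimate away from exactly zero
    return (1 + (null >= obs).sum()) / (1 + n_perm)

# Toy statistic: absolute difference in mean first-feature proportion
def stat(counts, labels):
    props = counts[:, 0] / counts.sum(axis=1)
    return abs(props[labels == 0].mean() - props[labels == 1].mean())

rng = np.random.default_rng(5)
counts = rng.multinomial(100, [0.5, 0.3, 0.2], size=8)  # 8 null samples
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
p = permutation_pvalue(stat, labels, counts, rng=rng)
assert 0 < p <= 1
```

With data simulated under the null, the resulting p-value is roughly uniform; in an sQTL setting the labels would be genotype groups and the statistic a gene-level association measure.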
Details are given in Supplementary Note 2 (Supplementary File). For computational efficiency, SNPs within a given gene that exhibit the same genotypes are grouped into blocks. In this way, blocks define unique models to be fit, reducing computation and the degree of multiple testing correction.\n\n\nDispersion estimation with adjusted profile likelihood and moderation\n\nAccurate parameter estimation is a challenge when only a small number of replicates is available. Following the edgeR ideology1,2,51, we propose multiple approaches for dispersion estimation, all based on the maximization and adjustment of the profile likelihood, since standard maximum likelihood (ML) is known to produce biased estimates as it tends to underestimate variance parameters by not allowing for the fact that other unknown parameters are estimated from the same data52,53.\n\nIn the DM model parameterization of our choice, we are interested in estimating the precision (concentration) parameter, γ+ (inversely related to the dispersion θ). Hence, at this stage, proportions π1 and π2 can be considered nuisance parameters and the profile log-likelihood for γ+ can be constructed by maximizing the log-likelihood function with respect to proportions π1 and π2 for fixed γ+:\n\nlPL(γ+) = maxπ1,π2 l(π1, π2, γ+) = l(π̂1(γ+), π̂2(γ+), γ+).\n\nThe profile likelihood is then treated as an ordinary likelihood function for estimation and inference about parameters of interest. Unfortunately, with large numbers of nuisance parameters, this approach can produce inefficient or even inconsistent estimates52,53. To correct for that, one can apply an adjustment proposed by Cox and Reid54:\n\nlAPL(γ+) = lPL(γ+) – ½ log det{I(π̂1, π̂2)},\n\nwhere det denotes the determinant and I is the observed information matrix for π1 and π2. The interpretation of the correction term in APL is that it penalizes values of γ+ for which the information about π1 and π2 is relatively large.
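A stripped-down numerical sketch of this estimate-then-test procedure follows (Python for illustration; DRIMSeq itself is an R package, the Cox-Reid determinant adjustment is omitted for brevity, and the coarse grid search stands in for a proper optimizer):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln
from scipy.stats import chi2

def dm_loglik(counts, pi, gp):
    """DM log-likelihood over samples (rows of counts); the multinomial
    coefficient, constant in (pi, gp), is dropped."""
    g = pi * gp
    ll = 0.0
    for y in counts:
        m = y.sum()
        ll += gammaln(gp) - gammaln(gp + m) + (gammaln(g + y) - gammaln(g)).sum()
    return ll

def profile_loglik(counts, gp, q):
    """Profile out the proportions (softmax-parameterized) for fixed gamma_plus."""
    def neg(z):
        e = np.exp(z - z.max())
        return -dm_loglik(counts, e / e.sum(), gp)
    return -minimize(neg, np.zeros(q), method="Nelder-Mead").fun

rng = np.random.default_rng(7)
q, m, gp_true = 3, 200, 30.0
pi0 = np.array([0.6, 0.3, 0.1])
sample = lambda: np.array([rng.multinomial(m, rng.dirichlet(pi0 * gp_true))
                           for _ in range(4)])
grp1, grp2 = sample(), sample()  # two groups simulated under the null

# Step 1: estimate gamma_plus by maximizing the raw profile likelihood on a grid
grid = np.geomspace(1, 1000, 25)
prof = [profile_loglik(grp1, g, q) + profile_loglik(grp2, g, q) for g in grid]
gp_hat = grid[int(np.argmax(prof))]

# Step 2: likelihood-ratio test of H0: pi1 = pi2, gamma_plus held at gp_hat
ll_alt = profile_loglik(grp1, gp_hat, q) + profile_loglik(grp2, gp_hat, q)
ll_null = profile_loglik(np.vstack([grp1, grp2]), gp_hat, q)
lr = 2 * (ll_alt - ll_null)
pval = chi2.sf(max(lr, 0.0), df=q - 1)
assert 0.0 <= pval <= 1.0
```

Because the data are simulated with identical proportions in both groups, the likelihood-ratio statistic should be small and the p-value unremarkable; a real analysis would repeat this per gene and then apply Benjamini-Hochberg.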
When the data consist of many samples, one can use gene-wise dispersion estimates, i.e., the dispersion is estimated for each gene g = 1,…,G separately:\n\nγ̂+g = argmaxγ+ lAPL,g(γ+).\n\nThese estimates become more unstable as the sample size decreases. At the other extreme, one can assume a common dispersion for all genes and use all genes to estimate it:\n\nγ̂+ = argmaxγ+ ∑g=1G lAPL,g(γ+).\n\nCommon dispersion estimates are more stable but the assumption of a single dispersion for all genes is rather strong, given that some genes are under tighter regulation than others55,56. Thus, moderated dispersion is a trade-off between gene-wise and common dispersion and estimates are calculated with an empirical Bayes approach, which uses a weighted combination of the individual and common likelihood:\n\nγ̂+g = argmaxγ+ {lAPL,g(γ+) + W · (1/G) ∑g′=1G lAPL,g′(γ+)}.\n\nIf a dispersion-mean trend is present (see Figure S16, Figure S17, Figure S28 and Figure S29 in Supplementary File), as commonly observed in gene-level differential expression analyses1,3, one can apply shrinkage towards this trend instead of to the common dispersion:\n\nγ̂+g = argmaxγ+ {lAPL,g(γ+) + W · (1/|C|) ∑g′∈C lAPL,g′(γ+)},\n\nwhere C is a set of genes that have gene expression similar to gene g and W is a weight defining the strength of moderation (see Supplementary Note 1 for further details).\n\n\nEstimation and inference: simulations from the Dirichlet-multinomial model\n\nWe first investigated the performance of the DM model and the approach for parameter estimation and inference in the case where only a few replicates are available. We performed simulations that correspond to a two-group comparison with no DS (i.e. null model) where feature counts were generated from the DM distribution with identical parameters in both groups. Simulations were repeated 50 times for 1000 genes. In these simulations, we can vary the overall expression (m), number of features (q), proportions (prop) and sample size in one condition (n). Proportions follow a uniform or decaying distribution or are estimated based on kallisto transcripts or HTSeq exon counts from Kim et al. and Brooks et al.
data (more details on these datasets below). In the first case, all genes have the same (common) dispersion, and in the second one, each gene has different (genewise) dispersion. Simulations for evaluating the dispersion moderation are intended to better resemble a real dataset. For these instances (repeated 25 times for 5000 genes), genes have expression, dispersion and proportions that were estimated from the real data. See Supplementary Note 3 for additional details.\n\nFigure 1A and Figure S1 confirm that using the Cox-Reid adjustment (CR) improves the estimation (in terms of median absolute error and extreme error values) of the concentration parameter γ+ in comparison to raw profile likelihood (PL) estimates. Additionally, the median error of concentration estimates for Cox-Reid adjusted profile likelihood is always lower than for PL or maximum likelihood (ML) used in the dirmult package7 (Figure 1B, Figure S2). This translates directly into inference performance, where the CR approach leads to a lower false positive (FP) rate than other approaches (Figure 1C, Figure S3).\n\nFigure 1: In the first scenario, all genes have the same (common) dispersion, and in the second one, each gene has a different (genewise) dispersion. All genes have expression equal to 1000 and 3 or 10 features with the same proportions estimated from kallisto counts from the Kim et al. data set. For each scenario, common and genewise dispersion (with and without moderation to the common dispersion) are estimated with maximum likelihood using the dirmult R package, the raw profile likelihood and the Cox-Reid adjusted profile likelihood. A: Median absolute error of concentration γ+ estimates. B: Median raw error of γ+ estimates. C: False positive (FP) rate for the p-value threshold of 0.05 of the null two-group comparisons based on the likelihood ratio statistics. Dashed line indicates the 0.05 level.
Additionally shown are the FP rates when the true concentration values are used in the inference (gray boxplot).\n\nAccurate estimates of dispersion do not always lead to the expected control of the FP rate. Notably, using the true concentration parameters in genes with many features (with decaying proportions) results in higher than expected nominal FP rates (Figure 1C, Figure S3 and Figure S5A). Meanwhile, for genes with uniform proportions, even with many features, the FP rate for true dispersion is controlled (Figure S3 and Figure S5C). Also, the Cox-Reid adjustment tends to underestimate the concentration (overestimate dispersion) for genes with many features and decaying proportions, especially for very small sample sizes (Figure 1B, Figure S2, Figure S4A, Figure S4C, Figure S4D), which, in these cases, leads to accurate FP rate control that is not achieved even with the true dispersion (Figure S5A).\n\nAs expected, common dispersion estimation is effective when all genes indeed have the same dispersion, though this cannot be generally assumed in most real RNA-seq datasets (see results of simulations in the following section). In contrast, pure gene-wise estimates of dispersion lead to relatively high estimation error in small sample sizes (Figure 1A, Figure S1 and Figure S7). Thus, we share information about concentration (dispersion) between genes by moderating the gene-wise (adjusted) profile likelihood toward a common or trended value. This improves concentration estimation in terms of median error (Figure S7) and shrinks extremely large values (on the boundary of the parameter space, see Figure S6) toward the common or trended concentration. Therefore, moderated gene-wise estimates lead to better control of the nominal FP rate (Figure S9).\n\n\nComparison on simulations that mimic real RNA-seq data\n\nNext, we use the simulated data from Soneson et al.22, where RNA-seq reads were generated such that 1000 genes had isoform switches between two conditions of the two most abundant transcripts.
Altogether, we summarize results for three scenarios: i) Drosophila melanogaster with no differential gene expression; ii) Homo sapiens without differential gene expression; iii) Homo sapiens with differential gene expression.\n\nThe aim of these analyses is to compare the performance of DRIMSeq against DEXSeq, which emerged among the top-performing methods for detection of DS from RNA-seq data22. Additionally, for DRIMSeq, we consider different dispersion estimates: common, gene-wise with no moderation and with moderation-to-common and to-trended dispersion. We use the exonic bin counts provided by HTSeq (the same input as the DEXSeq pipeline), and transcript counts obtained with kallisto. Additionally, we use HTSeq and kallisto counts that are re-estimated after the removal of lowly expressed transcripts (less than 5% in all samples) from the gene annotation (pre-filtering) as proposed by Soneson et al.22, and kallisto filtered counts that exclude the lowly expressed transcripts. DRIMSeq returns a p-value per gene. To make results comparable, we used the module within DEXSeq that summarizes exon-level p-values to a gene-level adjusted p-value. As expected, common dispersion estimates lead to worse performance (lower power and higher FDR) compared to gene-wise dispersions. DRIMSeq achieves the best performance with moderated gene-wise dispersion estimates, while the difference in performance between moderating to common or to trended dispersion is quite small, with moderated-to-trended dispersion estimates being slightly more conservative (Figure 2 and Figure S12).\n\nFigure 2: DRIMSeq was run with different dispersion estimation strategies: common dispersion and genewise dispersion with no moderation (genewise_grid_none), moderation to common dispersion (genewise_grid_common) and moderation to trended dispersion (genewise_grid_trended).
Results are presented for Drosophila melanogaster and Homo sapiens simulations with DS (nonull) and no differential gene expression (node) using transcript counts from kallisto and exonic counts from HTSeq. Additionally, filtered counts (kallistofiltered5, htseqprefiltered5) are used. When the achieved FDR is smaller than the threshold, circles are filled with the corresponding color; otherwise, they are white.\n\nAs noted by Soneson et al.22, detecting DS in fruit fly is easier than in human; all methods have a much smaller false discovery rate (FDR) there. Nevertheless, none of the methods manages to control the FDR at a given threshold in either of the simulations.\n\nAnnotation pre-filtering, suggested as a solution to mitigate high FDRs22, affects DEXSeq and DRIMSeq in different ways. For DEXSeq, it strongly reduces the FDR. For DRIMSeq, it increases power without a strong reduction of FDR. Moreover, the results for kallisto filtered and pre-filtered are almost identical (Figure S12 and Figure S21), which means that the re-estimation step based on the reduced annotation is not necessary for kallisto when used with DRIMSeq or DEXSeq. Additionally, we have considered how other filtering approaches affect DS detection. From Figure S21, we can see that DS analyses based on transcript counts are more robust to different variations of filtering, and indeed some filtering improves the inference. For exonic counts, filtering should be less stringent, and the pre-filtering approach is the best-performing strategy.\n\nDRIMSeq performs well when coupled with transcript counts from kallisto. When no filtering is applied to the data, it outperforms DEXSeq. When transcript counts are pre-filtered, both methods have very similar performance (Figure S15). For both differential engines, the performance decreases substantially with an increasing number of transcripts per gene, with DRIMSeq having slightly more power when genes have only a few transcripts (Figure S14).
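The transcript-level filter described above (drop transcripts contributing less than 5% of their gene's expression in all samples) can be sketched as follows; this is a hypothetical helper for illustration, not the filtering code used in the paper:

```python
import numpy as np

def prefilter_transcripts(counts, genes, min_prop=0.05):
    """Keep a transcript if its within-gene proportion reaches min_prop in
    at least one sample. counts: transcripts x samples; genes: gene id per row."""
    genes = np.asarray(genes)
    keep = np.zeros(len(genes), dtype=bool)
    for g in np.unique(genes):
        rows = genes == g
        totals = counts[rows].sum(axis=0)                 # per-sample gene totals
        props = counts[rows] / np.maximum(totals, 1)      # per-sample proportions
        keep[rows] = (props >= min_prop).any(axis=1)
    return keep

counts = np.array([[90, 95], [8, 4], [2, 1],   # gene A: 3 transcripts
                   [50, 50], [50, 50]])        # gene B: 2 transcripts
genes = ["A", "A", "A", "B", "B"]
keep = prefilter_transcripts(counts, genes)
# the third transcript of gene A is below 5% in every sample and is dropped
assert keep.tolist() == [True, True, False, True, True]
```

In the pre-filtering variant, the annotation would be reduced to the kept transcripts and abundances re-estimated, whereas plain filtering simply discards the low-proportion rows.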
DRIMSeq has poor performance on the exonic counts in the human simulation, where achieved FDRs of more than 50% are observed for an expected 5%; consequently, we recommend the use of DRIMSeq on transcript counts only. On the other hand, the concordance of the DRIMSeq and DEXSeq top-ranked genes is quite high, and similarly so even for exonic counts (Figure S13).

The p-value distributions highlight a better fit of the DM model to transcript counts than to exonic counts (for transcript counts, the distribution is uniform with a sharp peak close to zero). Similarly, dispersion estimation gives better results for transcript counts (Figure S16 and Figure S17). In particular, for exonic counts, a large number of genes have concentration parameter estimates at the boundary of the parameter space, unlike the situation for transcript counts (Figure S19 and Figure S20).

DS analysis

To compare the methods further, we consider two public RNA-seq data sets. The first is the pasilla dataset of Brooks et al. [57], where the aim was to identify genes regulated by pasilla, the Drosophila ortholog of the mammalian splicing factors NOVA1 and NOVA2. In this experiment, libraries were prepared from seven biologically independent samples: four control samples and three samples in which pasilla was knocked down. Libraries were sequenced using a mixture of single-end and paired-end reads, as well as different read lengths. The second data set, from Kim et al. [58], consists of matched human lung normal and adenocarcinoma samples from six Korean female non-smoking patients, sequenced with paired-end reads.

Both datasets have a more complex design than those used in the simulations; in addition to the grouping variable of interest, there are additional covariates to adjust for (e.g., library layout for the fruit fly data, patient identifier for the paired human study). To account for such effects, one should use a regression approach, which is currently not supported by DRIMSeq but can be applied within DEXSeq's GLM framework.
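The calibration property used as a diagnostic above (null p-values approximately Uniform(0,1), with a spike near zero contributed by true positives) can also be quantified numerically. A rough sketch using a Kolmogorov-Smirnov-style distance to the uniform distribution; this is illustrative only and not part of the DRIMSeq package:

```python
import random

def ks_uniform(pvals):
    """Kolmogorov-Smirnov distance between the empirical CDF and Uniform(0,1)."""
    s = sorted(pvals)
    n = len(s)
    return max(max((i + 1) / n - p, p - i / n) for i, p in enumerate(s))

random.seed(7)
calibrated = [random.random() for _ in range(2000)]          # well-calibrated null
miscalibrated = [random.random() ** 3 for _ in range(2000)]  # excess small p-values
# ks_uniform(calibrated) is close to 0; ks_uniform(miscalibrated) is much larger,
# mimicking the poorer model fit seen for exonic counts.
```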
To make the comparison fair, we fit multiple models. For the pasilla dataset, we compare the four control samples versus the three pasilla knock-down samples without taking the library layout into account (model full), and we also compare only the paired-end samples, which removes this covariate (model full paired). So as not to penalize DEXSeq for its ability to fit GLMs, we additionally run it with a model for the four-control versus three-knock-down comparison that includes library layout as a covariate (model full glm). For the adenocarcinoma data, we perform a two-group comparison of six normal versus six cancer samples (model full), and for DEXSeq, we fit an extra model that takes patient effects into account (model full glm). Additionally, we perform so-called "mock" analyses, in which samples from the same condition are compared (model null); the expectation is to detect no DS, since these are within-condition comparisons (see Supplementary Note 5 for the exact definition of these null models).

In the full comparisons with transcript counts, DRIMSeq calls a similar number of, or fewer, DS genes than DEXSeq, and the majority of them are contained within the DEXSeq calls (Figure S24), showing high concordance between DRIMSeq and DEXSeq and the slightly more conservative nature of DRIMSeq. Accounting for covariates in DEXSeq using the GLM, or performing the analysis on a subgroup without covariates (model full paired), results in more DS genes detected (Figure S26 and Figure S27).

In the "mock" analyses, as expected, both methods detect considerably fewer DS genes, with two exceptions. First, in the pasilla data (model null 3), the two-versus-two comparison of control samples placed single-end libraries in one group and paired-end libraries in the other, effectively comparing batches; here, both methods found more DS genes than in the control versus knock-down comparison, showing that the "batch" effect is very strong.
Second, in the adenocarcinoma data (model null normal 1), the two groups of individuals (each consisting of three women) happened to be very distinct (Figure S22). We therefore exclude these two cases when referring to the null models.

Overall, in the full comparisons, more DS genes are detected based on differential transcript usage than on differential exon usage (Figure S23). For DEXSeq, this is also the case in the null comparisons, which suggests that DEXSeq works better with exonic counts than with transcript counts. DRIMSeq, on the other hand, performs better on transcript counts, for which it calls fewer DS genes in the null analyses than with exonic counts. The p-value distributions also indicate that the DM model fits transcript counts better (Figure S28 and Figure S29).

Method comparisons based on real data are very challenging, as the truth is simply not known. In this sense, the pasilla data is very valuable, as the authors of that study validated alternative usage of exons in 16 genes using RT-PCR. Of course, these validations represent an incomplete truth, and ideally, large-scale independent validation would be needed to comprehensively compare DS detection methods. Figure 3, Figure S30, Figure S31 and Figure S32 again show that DRIMSeq is slightly more conservative than DEXSeq. DRIMSeq performs poorly on exon-level quantification but shows strong performance on transcript-level quantification (e.g., kallisto), and even outperforms DEXSeq when the sample size is very small (model full paired).

Figure 3. On each curve, "X" indicates the number of DS genes detected at an FDR of 0.05. Model full: comparison of 4 control samples versus 3 knock-down samples. Model full paired: comparison of 2 versus 2 paired-end samples.
Model full glm: as model full, but including the information about library layout (GLM fitting is available only in DEXSeq).

sQTL analysis

To demonstrate the application of DRIMSeq to sQTL analysis, we use data from the GEUVADIS project [44], in which 465 RNA-seq samples from lymphoblastoid cell lines were sequenced, 422 of which were also sequenced in the 1000 Genomes Project Phase 1. Here, we present the analysis of 91 samples corresponding to the CEU population and 89 samples from the YRI population. Expected transcript counts (obtained with FluxCapacitor) and genotype data were downloaded from the GEUVADIS project website. We choose to compare the performance of DRIMSeq with sQTLseekeR because it is a recent tool that performs well [33], can be applied directly to transcript count data, and models transcript usage as a multivariate outcome.

For both methods, we investigate only the bi-allelic SNPs with a minor allele present in at least five samples (minor allele frequency of approximately 5%) and at least two alleles present in a population. Given a gene, we keep the SNPs located within 5 kb upstream or downstream of the gene. We use the same pre-filtered counts in DRIMSeq and sQTLseekeR, so that the comparison of the statistical engines offered by these packages has the same baseline. We keep the protein-coding genes that have at least 10 counts in 70 or more samples and at least two transcripts left after transcript filtering, which retains transcripts with at least 10 counts and a usage of at least 5% in 5 or more samples. The numbers of tested and associated genes and sQTLs are indicated in Figure 4, Figure S35 and Figure S36.

Figure 4. A: Concordance between sQTLseekeR and DRIMSeq. "X" indicates the number of sQTLs detected at FDR = 0.05. Panels B, C and D show characteristics of sQTLs and genes detected by sQTLseekeR or DRIMSeq at FDR = 0.05. Values in brackets indicate the numbers of sQTLs or genes in a given set.
The dark gray line corresponds to sQTLs or genes identified by both methods (overlap). B: Distance to the closest exon for intronic sQTLs; the light gray line (non sqtl) corresponds to intronic sQTLs that were not called by either method. C: Distribution of mean gene expression for genes associated with sQTLs. D: Distribution of the number of expressed transcripts for genes associated with sQTLs. The light gray lines (all genes) represent the corresponding features for all analyzed genes.

In Figure 4A and Figure S37, we can see that the concordance between DRIMSeq and sQTLseekeR is quite high, reaching 75%. Nevertheless, there are considerable differences in the number and type of genes uniquely identified by each method. sQTLseekeR finds more genes with alternative splicing associated with genetic variation (Figure S35 and Figure S36), but these genes have fewer expressed transcripts and lower overall expression than the genes detected by DRIMSeq (Figures 4C and 4D). Moreover, sQTLs detected by DRIMSeq show stronger enrichment for splicing-related features, such as location within exons (Table 1) and distance to the closest exon (Figure 4B), than sQTLseekeR sQTLs, suggesting that by accounting for the overall gene expression, one can detect more meaningful associations.

Discussion

We have created a statistical framework called DRIMSeq, based on the Dirichlet-multinomial distribution, to model alternative usage of transcript isoforms from RNA-seq data. We have shown that this framework can be used for detecting differential isoform usage between experimental conditions as well as for identifying sQTLs. In principle, the framework is suitable for differential analysis of any type of multinomial-like response.
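To build intuition for why a Dirichlet-multinomial, rather than a plain multinomial, is needed for such data, the following purely illustrative simulation (not code from the DRIMSeq package) draws per-replicate transcript proportions from a Dirichlet and then counts from a multinomial, producing the between-replicate overdispersion the DM model captures:

```python
import random

random.seed(42)

def sample_dm(n, alpha):
    """One Dirichlet-multinomial draw: Dirichlet proportions, then n categorical draws."""
    gammas = [random.gammavariate(a, 1.0) for a in alpha]  # Dirichlet via gammas
    total = sum(gammas)
    props = [g / total for g in gammas]
    counts = [0] * len(alpha)
    for _ in range(n):
        u, acc = random.random(), 0.0
        for k, p in enumerate(props):
            acc += p
            if u <= acc:
                counts[k] += 1
                break
        else:
            counts[-1] += 1   # guard against floating-point rounding
    return counts

# Same mean usage (1/3 each), but a small concentration => high dispersion.
alpha = [0.5, 0.5, 0.5]
reps = [sample_dm(100, alpha) for _ in range(200)]
first = [r[0] for r in reps]
# Var(first) far exceeds the multinomial variance n*p*(1-p) = 100*(1/3)*(2/3) ~ 22.
```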
Across our simulations and real-data analyses, for both DS and sQTL detection, DRIMSeq seems better suited to modeling transcript counts than exonic counts.

Overall, there are many tradeoffs to be made in DS analyses. For example, deriving transcript abundances from RNA-seq data is more difficult (e.g., for complicated overlapping genes at medium to low expression levels) than directly counting exon inclusion levels of specific events. On the other hand, local splicing events may not be able to capture biologically interesting splicing changes (e.g., switching between two different transcripts), but they ultimately have more ability to detect DS when the transcript catalog is incomplete. Despite these tradeoffs, and given the results observed here, DRIMSeq finds its place as a method for downstream calculations on transcript quantifications. With emerging technologies that sequence longer DNA fragments (either truly or synthetically), we may in the near future see more direct counting of full-length transcripts, making transcript-level quantification more robust and accurate. Even with current-standard RNA-seq data, ultrafast and lightweight quantification methods make transcript counting more accessible, and users will want to run comparative analyses directly on these estimates.

In principle, existing DS methods that allow multiple group comparisons could be adapted to the sQTL framework and vice versa; DRIMSeq is one of the few tools that bridge these two applications. In particular, parameter estimation in DRIMSeq is suited both to situations where only a few replicates are available per group (common in DS analysis) and to analyses over larger sample sizes (typical in sQTL analysis). For small sample sizes, accurate dispersion estimation is especially challenging.
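A common remedy, used across several count-based genomics frameworks, is to shrink noisy gene-wise dispersion estimates toward a shared value. As a toy illustration only (a weighted average on the log scale; this is not DRIMSeq's actual estimator, which is based on adjusted profile likelihoods):

```python
import math

def moderate(genewise, target, weight=0.5):
    """Shrink gene-wise dispersions toward a shared target on the log scale.

    weight = 0 keeps the raw gene-wise estimates; weight = 1 collapses all
    genes onto the target (a common value, or a mean-expression trend).
    """
    return [math.exp((1 - weight) * math.log(g) + weight * math.log(target))
            for g in genewise]

raw = [0.02, 0.4, 8.0]        # noisy per-gene dispersion estimates
shrunk = moderate(raw, target=0.5, weight=0.5)
# Each moderated value lies between the raw estimate and the target.
```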
Thus, we incorporate estimation techniques analogous to those used in negative binomial frameworks, such as the Cox-Reid adjusted profile likelihood; perhaps not surprisingly, raw profile likelihood or standard maximum likelihood approaches do not perform as well in our tests of estimation performance. In addition, as with many successful genomics modeling frameworks, sharing information across genes leads to more stable and accurate estimation, and therefore better inference (e.g., better control of nominal false positive rates).

In comparison to other available methods, DRIMSeq appears to be more conservative than both DEXSeq (using transcript counts) and sQTLseekeR, identifying fewer DS genes and sQTLs, respectively. On the other hand, DEXSeq is known to be somewhat liberal [22]. Moreover, the sQTL associations detected by DRIMSeq show stronger enrichment in splicing-related features than sQTLseekeR sQTLs, which could be due to the fact that DRIMSeq accounts for the higher uncertainty of lowly expressed genes by using transcript counts instead of transcript ratios.

Our DM framework is general enough to be applied to other genomic data with multivariate count outcomes. For example, PolyA-seq data quantify the usage of multiple RNA polyadenylation sites. During polyadenylation, poly(A) tails can be added at different sites, and thus more than one transcript can be produced from a single gene (alternative polyadenylation); comparisons between groups of replicates can be conducted with DRIMSeq. As mentioned, the DM distribution is a multivariate generalization of the beta-binomial distribution, just as the binomial and beta distributions are the univariate versions of the multinomial and Dirichlet distributions, respectively. Although untested here, the DRIMSeq framework could be applied to analyses where the beta-binomial distribution is used, with the advantage of naturally accommodating small-sample datasets.
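This relationship is easy to verify numerically. Below is an illustrative sketch of the DM probability mass function in its standard parameterization (not code from the DRIMSeq package): with two categories it reduces exactly to the beta-binomial.

```python
import math

def dm_pmf(counts, alpha):
    """Dirichlet-multinomial pmf for the transcript counts y_1..y_K of one gene."""
    n, a0 = sum(counts), sum(alpha)
    logp = math.lgamma(n + 1) + math.lgamma(a0) - math.lgamma(n + a0)
    for y, a in zip(counts, alpha):
        logp += math.lgamma(y + a) - math.lgamma(a) - math.lgamma(y + 1)
    return math.exp(logp)

def beta_binomial_pmf(k, n, a, b):
    """Beta-binomial pmf: the univariate (two-category) special case of the DM."""
    return math.comb(n, k) * math.exp(
        math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

# With K = 2, the DM pmf for counts (k, n - k) equals the beta-binomial pmf.
p_dm = dm_pmf([3, 7], [2.0, 5.0])
p_bb = beta_binomial_pmf(3, 10, 2.0, 5.0)   # identical up to rounding
```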
Interesting beta-binomial-based analyses include differential methylation from bisulphite sequencing data, where counts of methylated and unmethylated cytosines (a bivariate outcome) at specific genomic loci are compared, and allele-specific gene expression, where the expression of two alleles (again, a bivariate outcome) is compared across experimental groups.

One particularly important future enhancement is a regression framework, which would allow direct analysis of more complex experimental designs. For example, covariates such as batch, sample pairing or other factors could be adjusted for in the model. In sQTL analysis, it would allow studying samples from pooled populations, with the subpopulation as a covariate, allowing larger sample sizes and increased power to detect interesting changes. Another potential limitation is that DRIMSeq treats transcript estimates as fixed, even though they carry different uncertainty, depending on the read coverage and the complexity of the set of transcripts within a gene. Although untested here, this uncertainty could be propagated by incorporating observational weights that are inversely proportional to the estimated uncertainties or, in the case of fast quantification methods like kallisto, by making effective use of bootstrap samples. At this stage, there is no consensus on how these approaches will perform, and they may ultimately require considerable additional computation.

Software availability

The Dirichlet-multinomial framework described in this paper is implemented within an R package called DRIMSeq. In addition to a user-friendly workflow for DS and sQTL analyses, it provides plotting functions that generate diagnostic figures, such as dispersion versus mean gene expression plots and histograms of p-values.
Users can also generate figures of the observed proportions and the DM-estimated ratios for genes of interest, to visually investigate their individual splicing patterns.

The release version of DRIMSeq is available on Bioconductor (http://bioconductor.org/packages/DRIMSeq), and the latest development version can be found on GitHub (https://github.com/markrobinsonuzh/DRIMSeq).

Data availability

Data for the simulations that mimic real RNA-seq were obtained from Soneson et al. [22], where all details on data generation and accessibility are available.

Differential splicing analyses were performed on the publicly available pasilla dataset, downloaded from NCBI's Gene Expression Omnibus (GEO) under accession number GSE18508, and the adenocarcinoma dataset under accession number GSE37764.

Data for the sQTL analyses were downloaded from the GEUVADIS project website.

All details about data availability and preprocessing are described in the Supplementary Materials.

DRIMSeq analyses for this paper were performed with version 0.3.3, available on Zenodo (http://dx.doi.org/10.5281/zenodo.5308459), and Bioconductor release 3.2. Source code used for the analyses in this paper is available on Zenodo (http://dx.doi.org/10.5281/zenodo.5305960).

Author contributions

MN drafted the manuscript, designed the analyses, analyzed the data and implemented the DRIMSeq R package. MDR drafted the manuscript and designed the overall study. All authors read and approved the final manuscript and have agreed to its content.

Competing interests

No competing interests were disclosed.

Grant information

MN acknowledges funding from a Swiss Institute of Bioinformatics (SIB) Fellowship.
MDR would like to acknowledge funding from a Swiss National Science Foundation (SNSF) Project Grant (143883).

Acknowledgments

The authors wish to thank Magnus Rattray, Torsten Hothorn and members of the Robinson lab for helpful discussions, with special acknowledgment to Charlotte Soneson and Lukas Weber for careful reading of the manuscript.

Supplementary Material

Supplementary File 1. Contains supplementary figures and tables referred to in the text. It also contains descriptions of dispersion moderation and p-value adjustment in sQTL analysis, and details about the simulations and real data analyses.

References

McCarthy DJ, Chen Y, Smyth GK: Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012; 40(10): 4288–4297.
Robinson MD, Smyth GK: Small-sample estimation of negative binomial dispersion, with applications to SAGE data. Biostatistics. 2008; 9(2): 321–332.
Anders S, Huber W: Differential expression analysis for sequence count data. Genome Biol. 2010; 11(10): R106.
Ritchie ME, Phipson B, Wu D, et al.: limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7): e47.
Law CW, Chen Y, Shi W, et al.: voom: precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014; 15(2): R29.
Mosimann JE: On the compound multinomial distribution, the multivariate β-distribution, and correlations among proportions. Biometrika. 1962; 49(1–2): 65–82.
Tvedebrink T: Overdispersion in allelic counts and θ-correction in forensic genetics. Theor Popul Biol.
2010; 78(3): 200–210.
Chen J, Li H: Variable selection for sparse Dirichlet-multinomial regression with an application to microbiome data analysis. Ann Appl Stat. 2013; 7(1): 418–442.
Finak G, McDavid A, Chattopadhyay P, et al.: Mixture models for single-cell assays with applications to vaccine studies. Biostatistics. 2014; 15(1): 87–101.
Samb R, Khadraoui K, Belleau P, et al.: Using informative Multinomial-Dirichlet prior in a t-mixture with reversible jump estimation of nucleosome positions for genome-wide profiling. Stat Appl Genet Mol Biol. 2015; 14(6): 517–532.
Mosimann JE: On the compound negative multinomial distribution and correlations among inversely sampled pollen counts. Biometrika. 1963; 50(1/2): 47–54.
Farewell DM, Farewell VT: Dirichlet negative multinomial regression for overdispersed correlated count data. Biostatistics. 2013; 14(2): 395–404.
Sun D, Xi Y, Rodriguez B, et al.: MOABS: model based analysis of bisulfite sequencing data. Genome Biol. 2014; 15(2): R38.
Park Y, Figueroa ME, Rozek LS, et al.: MethylSig: a whole genome DNA methylation analysis pipeline. Bioinformatics. 2014; 30(17): 2414–2422.
Feng H, Conneely KN, Wu H: A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data. Nucleic Acids Res. 2014; 42(8): e69.
Wang GS, Cooper TA: Splicing in disease: disruption of the splicing code and the decoding machinery. Nat Rev Genet. 2007; 8(10): 749–761.
Tazi J, Bakkour N, Stamm S: Alternative splicing and disease. Biochim Biophys Acta. 2009; 1792(1): 14–26.
Robinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140.
Derti A, Garrett-Engele P, Macisaac KD, et al.: A quantitative atlas of polyadenylation in five mammals. Genome Res. 2012; 22(6): 1173–1183.
Hooper JE: A survey of software for genome-wide discovery of differential splicing in RNA-Seq data. Hum Genomics. 2014; 8(1): 3.
Alamancos GP, Agirre E, Eyras E: Methods to study splicing from high-throughput RNA sequencing data. Methods Mol Biol. 2014; 1126: 357–397.
Soneson C, Matthes KL, Nowicka M, et al.: Isoform prefiltering improves performance of count-based methods for analysis of differential transcript usage. Genome Biol. 2016; 17(1): 12.
Liao Y, Smyth GK, Shi W: featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2014; 30(7): 923–930.
Anders S, Reyes A, Huber W: Detecting differential usage of exons from RNA-seq data. Genome Res. 2012; 22(10): 2008–2017.
Anders S, Pyl PT, Huber W: HTSeq - a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015; 31(2): 166–169.
Ongen H, Dermitzakis ET: Alternative splicing QTLs in European and African populations. Am J Hum Genet. 2015; 97(4): 567–575.
Katz Y, Wang ET, Airoldi EM, et al.: Analysis and design of RNA sequencing experiments for identifying isoform regulation. Nat Methods. 2010; 7(12): 1009–1015.
Shen S, Park JW, Lu ZX, et al.: rMATS: robust and flexible detection of differential alternative splicing from replicate RNA-Seq data. Proc Natl Acad Sci U S A. 2014; 111(51): E5593–E5601.
Alamancos GP, Pagès A, Trincado JL, et al.: Leveraging transcript quantification for fast computation of alternative splicing profiles. RNA. 2015; 21(9): 1521–1531.
Zhao K, Lu ZX, Park JW, et al.: GLiMMPS: robust statistical model for regulatory variation of alternative splicing using RNA-seq data. Genome Biol. 2013; 14(7): R74.
Jia C, Hu Y, Liu Y, et al.: Mapping splicing quantitative trait loci in RNA-Seq. Cancer Inform. 2014; 13(Suppl 4): 35–43.
Hu Y, Liu Y, Mao X, et al.: PennSeq: accurate isoform-specific gene expression quantification in RNA-Seq by modeling non-uniform read distribution. Nucleic Acids Res. 2014; 42(3): e20.
Monlong J, Calvo M, Ferreira PG, et al.: Identification of genetic variants associated with alternative splicing using sQTLseekeR. Nat Commun. 2014; 5: 4698.
Glaus P, Honkela A, Rattray M: Identifying differentially expressed transcripts from RNA-seq data with biological variation. Bioinformatics. 2012; 28(13): 1721–1728.
Rossell D, Stephan-Otto Attolini C, Kroiss M, et al.: Quantifying alternative splicing from paired-end RNA-sequencing data. Ann Appl Stat. 2014; 8(1): 309–330.
Trapnell C, Williams BA, Pertea G, et al.: Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat Biotechnol. 2010; 28(5): 511–515.
Li B, Dewey CN: RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinformatics. 2011; 12: 323.
Bernard E, Jacob L, Mairal J, et al.: Efficient RNA isoform identification and quantification from RNA-Seq data with network flows. Bioinformatics. 2014; 30(17): 2447–2455.
Patro R, Mount SM, Kingsford C: Sailfish enables alignment-free isoform quantification from RNA-seq reads using lightweight algorithms. Nat Biotechnol. 2014; 32(5): 462–464.
Bray NL, Pimentel H, Melsted P, et al.: Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016; 34(5): 525–527.
Patro R, Duggal G, Kingsford C: Salmon: accurate, versatile and ultrafast quantification from RNA-seq data using lightweight alignment. bioRxiv. 2015; 021592.
Kanitz A, Gypas F, Gruber AJ, et al.: Comparative assessment of methods for the computational inference of transcript isoform abundance from RNA-seq data. Genome Biol. 2015; 16(1): 150.
Teng M, Love MI, Davis CA, et al.: A benchmark for RNA-seq quantification pipelines. Genome Biol. 2016; 17(1): 74.
Lappalainen T, Sammeth M, Friedländer MR, et al.: Transcriptome and genome sequencing uncovers functional variation in humans. Nature. 2013; 501(7468): 506–511.
Battle A, Mostafavi S, Zhu X, et al.: Characterizing the genetic basis of transcriptome diversity through RNA-sequencing of 922 individuals. Genome Res. 2014; 24(1): 14–24.
Pickrell JK, Marioni JC, Pai AA, et al.: Understanding mechanisms underlying human gene expression variation with RNA sequencing. Nature. 2010; 464(7289): 768–772.
Montgomery SB, Sammeth M, Gutierrez-Arcelus M, et al.: Transcriptome genetics using second generation sequencing in a Caucasian population. Nature. 2010; 464(7289): 773–777.
Ongen H, Buil A, Brown AA, et al.: Fast and efficient QTL mapper for thousands of molecular phenotypes. Bioinformatics. 2016; 32(10): 1479–1485.
Trapnell C, Hendrickson DG, Sauvageau M, et al.: Differential analysis of gene regulation at transcript resolution with RNA-seq. Nat Biotechnol. 2013; 31(1): 46–53.
Li YI, Knowles DA, Pritchard JK: LeafCutter: annotation-free quantification of RNA splicing. bioRxiv. 2016.
Robinson MD, Smyth GK: Moderated statistical tests for assessing differences in tag abundance. Bioinformatics. 2007; 23(21): 2881–2887.
Reid N, Fraser DAS: Likelihood inference in the presence of nuisance parameters. 2003; 7.
McCullagh P, Tibshirani R: A simple method for the adjustment of profile likelihoods. J R Stat Soc Series B Stat Methodol. 1990; 52(2): 325–344.
Cox DR, Reid N: Parameter orthogonality and approximate conditional inference. J R Stat Soc Series B Stat Methodol. 1987; 49(1): 1–39.
Choi JK, Kim YJ: Intrinsic variability of gene expression encoded in nucleosome positioning sequences. Nat Genet. 2009; 41(4): 498–503.
Singh A, Soltani M: Quantifying intrinsic and extrinsic variability in stochastic gene expression models. PLoS One. 2013; 8(12): e84301.
Brooks AN, Yang L, Duff MO, et al.: Conservation of an RNA regulatory map between Drosophila and mammals. Genome Res. 2011; 21(2): 193–202.
Kim SC, Jung Y, Park J, et al.: A high-dimensional, deep-sequencing study of lung adenocarcinoma in female never-smokers. PLoS One. 2013; 8(2): e55596.
Nowicka M, Robinson M: Source code of the R package used for analyses in "DRIMSeq: a Dirichlet-multinomial framework for multivariate count outcomes in genomics". Zenodo. 2016.
Nowicka M, Robinson M: Source code of the analyses in "DRIMSeq: a Dirichlet-multinomial framework for multivariate count outcomes in genomics". Zenodo. 2016.
[ { "id": "14338", "date": "24 Jun 2016", "name": "Alejandro Reyes", "expertise": [], "suggestion": "Approved", "report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nNowicka and Robinson propose a novel method, called DRIMSeq, to test for differential transcript usage between groups of samples using RNA-seq. The method is based on the Dirichlet-multinomial distribution. The authors evaluate different existing approaches to estimate the parameters of their model using simulated experiments with a small number of replicates, which is a common scenario of high-throughput sequencing experiments. Furthermore, Nowicka et al. provide a proof of principle of their method by applying it to both simulated and real RNA-seq data. They also compare the performance of DRIMSeq with DEXSeq and sQTLseekeR in detecting differential transcript usage and splicing quantitative trait loci (sQTLs), respectively. DRIMSeq shows high concordance with DEXSeq. Furthermore, the authors demonstrate that DRIMSeq performs better than DEXSeq when using transcript-level counts. DRIMSeq and sQTLseekeR were also highly concordant. Nevertheless, sQTL genes detected by DRIMSeq were expressed higher than those detected by sQTLseekeR, and sQTLs detected by DRIMSeq were in closer proximity to exons compared to sQTLs detected by sQTLseeker. DRIMSeq is implemented as an R/Bioconductor package.\n\nOverall, the manuscript is well presented and is scientifically sound. 
The description of the method is clear, the comparisons are fair, and the conclusions are supported by data and analyses.\n\nBelow some minor comments:\n\nTranscription of multiple isoforms from a single gene can be the consequence of differences in the following molecular mechanisms: transcription start sites, splicing, and termination of transcription.  The terms “differential splicing” and “splicing QTLs”, which are used throughout the manuscript and the package vignette, focus only on splicing. Consider a hypothetical example of an isoform switch between conditions in which the two isoforms only diverge by the transcription start site of the first exon. DRIMSeq should also detect this difference, and this would not be due to differential splicing. Thus, the authors could use more generic terminology that describes all possible interpretations of the outcome of their test. Perhaps “differential transcript usage” or “transcript usage QTLs”?\n\nIn equations 6-11, PL and APL are understandable from the context but are not defined in the text.\n\nIt would be useful for the reader to include more information of the simulated data from Sonseson et al. (2016) in the main text of this manuscript (for example, number of replicates per condition).\n\nThe authors describe how DEXSeq can account for additional covariates in complex experimental designs. This paragraph, as well as the figures and supplementary material associated to it, could be understood as if DEXSeq fits GLMs only for complex experimental designs. In reality, DEXSeq always fits GLMs, even for simple two-group comparisons.\n\nThere are some panels from the supplementary figures where data are missing. Specifically, Fig. S13 has 3 empty panels and Fig. 
S21 is missing the data in the left panels for “dexseq.prefilter5” and “drimseq_genewise_grid_trended.filter5”.\n\nThe list of software for splicing event quantification is already very extensive; however, a citation to the Bioconductor package SGSeq (Goldstein et al., 2016) could also be added.\n\nAs for the readability of the supplementary information, some abbreviations are not defined in each supplementary figure caption. For example, in Fig. S5, n, m, DM, FP and nr_features are not defined in its caption (some of them, however, are defined in previous captions). Since many abbreviations repeat several times throughout the supplementary information, it would be useful to include a glossary of all abbreviations at the beginning of all supplementary figures.", "responses": [ { "c_id": "2319", "date": "06 Dec 2016", "name": "Mark Robinson", "role": "Author Response", "response": "Thank you for taking the time to read and review our paper. As per your suggestion, we have now stressed that DRIMSeq can be applied to differential transcript usage (DTU), which accounts for not only differential splicing but also differences in transcription start sites and differential transcript termination. In the QTL analysis, as we test for associations between genotypes and transcript usage and not only splicing, following your suggestion, we have also changed the term from splicing QTLs (sQTLs) to transcript usage QTLs (tuQTLs). We have addressed all the other minor comments, which include: defining the abbreviations of profile likelihood (PL) and adjusted profile likelihood (APL); adding the sample size information about the simulations from Soneson et al. [1]; changing the names of the models used in the real data analysis from \"model full glm\" to \"model full 2\" and paraphrasing the corresponding manuscript sections, in order to remove the misleading suggestion that DEXSeq fits GLMs only in complex designs; including results for the panels with missing data in Supplementary Figures S15, S16 and S24; including a citation to SGSeq [2], the Bioconductor package for analyzing splice events from RNA-seq data; and preparing a section in the Supplementary Materials explaining the abbreviations used in the subsequent Supplementary Figures. References [1] Charlotte Soneson, Katarina L Matthes, Malgorzata Nowicka, Charity W Law, and Mark D Robinson. Isoform prefiltering improves performance of count-based methods for analysis of differential transcript usage. Genome Biology, 17(1):1–15, 2016. [2] Leonard D Goldstein, Yi Cao, Gregoire Pau, Michael Lawrence, Thomas D Wu, Somasekar Seshagiri, and Robert Gentleman. Prediction and Quantification of Splice Events from RNA-Seq Data. PLoS ONE, 11(5):e0156132, May 2016." } ] }, { "id": "14580", "date": "06 Jul 2016", "name": "Robert Castelo", "expertise": [], "suggestion": "Approved With Reservations", "report": "Approved With Reservations\n\nThis article introduces a new statistical method, called DRIMSeq and implemented in an R/Bioconductor package of the same name, to detect isoform expression changes between two conditions from RNA-seq data. 
The same method can be used to search for significant associations between SNPs and isoform quantifications also obtained from RNA-seq data (sQTLs). The main novelty of this method with respect to the existing literature on this problem is the joint modelling of transcript quantification values derived from isoforms of the same gene, by using a Dirichlet-multinomial model. This allows the method to account for the intrinsic dependency between quantification values of these isoforms.\nThe assessment of DRIMSeq on differential isoform usage provides a comparison of its performance with DEXSeq1, a statistical method for differential exon inclusion from RNA-seq data, as a function of two different \"isoform\" quantification strategies: exonic-bin (not really \"isoform\") count values calculated with HTSeq and transcript-quantification values calculated with kallisto2.\nThe experimental results make perfect sense: DRIMSeq works better than DEXSeq with transcript-quantification values and DEXSeq works better than DRIMSeq with exonic-bin count values. However, while both methods, and both types of \"isoform\" quantification input data, allow one to study the post-transcriptional processing of RNA transcripts, the kinds of questions that can be addressed with each of them are different. Exonic-bin count values and DEXSeq can be used to investigate differential exon inclusion across conditions, which is a consequence of differential isoform usage, while transcript-quantification values and DRIMSeq can be used to directly investigate differential isoform usage.\nA potentially interesting outcome of this comparison in the paper could be some sort of guidelines about when it is more sensible to investigate differential exon inclusion or differential isoform usage, depending on factors such as the biological question at hand, sequencing depth or number of biological replicates. 
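The reviewer's summary of the Dirichlet-multinomial model can be made concrete with a short generative sketch in numpy (the function and parameter names below are invented for illustration and are not part of the DRIMSeq package):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_dm_counts(base_props, precision, depths):
    """Simulate transcript counts for one gene under a
    Dirichlet-multinomial model: each sample's transcript
    proportions are drawn from a Dirichlet centred on
    `base_props` (spread controlled by `precision`; larger
    means less dispersion), and counts are then multinomial
    given that sample's sequencing depth."""
    counts = []
    for depth in depths:
        props = rng.dirichlet(precision * np.asarray(base_props))
        counts.append(rng.multinomial(depth, props))
    return np.array(counts)

# A gene with 3 transcripts measured in 4 samples: the joint
# modelling captures the dependency between isoform counts,
# since proportions must sum to one within each sample.
counts = simulate_dm_counts([0.6, 0.3, 0.1], precision=20.0,
                            depths=[500, 800, 650, 700])
```

Testing for differential usage then amounts to comparing `base_props` between two groups while accounting for the extra-multinomial variability governed by `precision`.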
However, this is apparently beyond the scope of this paper and the experimental results are in principle geared towards convincing the reader that DRIMSeq improves on existing approaches to discover changes in isoform usage, as suggested in the abstract. In my view, the experimental results do not address this question and I would suggest that the authors compare DRIMSeq with methods that also work with transcript-quantification values and assess differential isoform usage, such as, for instance, Cuffdiff3 or sleuth4.\nThe experimental results on searching for sQTLs compare DRIMSeq favourably with an existing tool for that purpose, sQTLseekeR5. Evaluating performance in this context is challenging and the idea of assessing enrichment with respect to splicing-related features is a good one. However, the (two) presented features in Table 1 could be made more precise. It is unclear that a SNP close to a GWAS hit should necessarily be related to splicing, and it is also unclear why one should expect splicing-related enrichment more than a few hundred nucleotides away from the intervening exon. While it is technically interesting to see a method being used to address two completely different research questions, in my view, mixing both types of analyses makes the article less focused. I would argue that both questions deserve separate papers, which would allow the authors to investigate in depth critical aspects of both types of analysis that are not addressed in the current article.\nIn summary, this article provides an interesting new methodology for the analysis of differential isoform usage from RNA-seq data; it is well written and the implemented software runs smoothly and is well documented. 
However, in my view, the current experimental results of the article are not that informative for the reader to learn what advantages DRIMSeq provides over other tools for differential isoform usage analysis, and to decide whether he/she should be doing a differential isoform usage or a differential exon inclusion analysis, if this were a goal of the comparison with DEXSeq.\nMinor comments:\nI would replace the term \"edgeR ideology\" on page 5 with \"edgeR strategy\". On page 9 it is described that the distributions of raw p-values shown in Supplementary Figures S28 and S29 fit \"better\" when derived from transcript quantification values than from exonic-bin count values, but in fact in both cases the distributions are non-uniform for p-values distributed under the null hypothesis. This can be easily shown with the data from the vignette of the DRIMSeq package when skipping the step that reduces the transcript set to analyze to speed up the building time of the vignette. This is not openly discussed in the paper but I would argue that it is quite critical to know under what technical assumptions the proposed hypothesis test leads to uniform raw p-values under the null, as this has a direct consequence on the control of the probability of the type-I error. The sQTL analysis described on pages 9, 10 and 11 uses transcript-quantification values from FluxCapacitor. Given that the entire first part of the paper shows the performance metrics of DRIMSeq using kallisto, in my view it would make more sense to use kallisto for this analysis as well. With regard to the implementation in the R/Bioconductor software package DRIMSeq, the authors have implemented a specialized S4 object class called 'dmDSdata' to act as a container for counts and information about samples. 
Since the package forms part of the Bioconductor project, I think it would be better for both the end-user and the developers if the package re-used the 'SummarizedExperiment' class as a container for counts and sample information. This would facilitate the integration of DRIMSeq into existing or new workflows for the analysis of RNA-seq data. As an example of the limitations derived from providing a completely new specialized object class, the dimensions of a 'dmDSdata' object in terms of number of features and number of samples cannot be figured out using the expected call to the 'dim()' accessor method. Of course the authors may add that method to the 'dmDSdata' object class but, in general, there are obvious advantages derived from enabling data interoperability through the use of common data structures across Bioconductor software packages6.", "responses": [ { "c_id": "2320", "date": "06 Dec 2016", "name": "Mark Robinson", "role": "Author Response", "response": "Thank you for taking the time to read and review our paper. DEXSeq is a package designed for differential exon usage (DEU) analysis and returns exon-level p-values, which can also be summarized to the gene level. In principle, DEXSeq’s implementation could be used to address the question of differential isoform/transcript usage (DTU) as well, which was done, for example, in the simulation study by Soneson et al. [1]. They use different counting strategies, among them transcript quantifications from kallisto [2], coupled with DEXSeq’s differential engine to detect differential transcript usage. DRIMSeq, based on the Dirichlet-multinomial model, was developed to detect differential usage of any kind of multivariate genomic features at the gene level. Thus, potentially, both DEXSeq and DRIMSeq can be applied to exon counts and to transcript quantifications. 
However, from our comparisons, which were performed at the gene level, the performance of DEXSeq and DRIMSeq differs on these different types of counts: DEXSeq performs better on exon counts and DRIMSeq on transcript counts. We have not used Cuffdiff [3] in our comparisons here because, in the study by Soneson et al. [1], it performed poorly compared to DEXSeq. In particular, Cuffdiff was very conservative, achieving a low false discovery rate (FDR) at the cost of very low power for detecting DTU. The conservative nature of Cuffdiff for differential transcript expression was also pointed out by Frazee et al. [4]. We decided to compare DRIMSeq only to the top performing method, DEXSeq. The other tool proposed by the Reviewer, sleuth [5], is meant for differential transcript expression analyses, not DTU. The scope of this paper was not to justify exon- or transcript-level analysis (for that, one could refer to the comparison paper by Hooper [6]), but to propose a methodologically sound tool for differential isoform usage analysis, or for detecting transcript usage QTLs, based on transcript quantifications. We propose to use DRIMSeq since it outperformed DEXSeq in this type of analysis and there are no other tools for differential transcript usage that were intended for transcript-level quantifications from the latest generation of fast quantification tools, such as kallisto [2] or Salmon [7]. Importantly, DEXSeq returns p-values per feature (exon or transcript), which can also be summarized to the gene level. DRIMSeq performs gene-level tests and returns p-values per gene only. When the interest is in detecting the specific exons or isoforms that change, one should use DEXSeq, because currently DRIMSeq does not provide any post hoc analysis (although in many cases, the relevant information can be deduced from looking at the relative transcript expression in DRIMSeq’s plots). 
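As a toy illustration of the feature-to-gene summarization mentioned here, a conservative gene-level p-value can be obtained from per-feature p-values with a Bonferroni-adjusted minimum (DEXSeq's perGeneQValue uses a more refined scheme; this stand-in only makes the distinction between per-feature and per-gene tests concrete):

```python
def gene_level_p(feature_pvals):
    """Collapse per-feature (exon or transcript) p-values for one
    gene into a single gene-level p-value via a Bonferroni-adjusted
    minimum. Conservative, but valid under arbitrary dependence
    between the features of a gene."""
    m = len(feature_pvals)
    return min(1.0, m * min(feature_pvals))

# One strongly significant transcript drives the gene-level call.
p_gene = gene_level_p([0.004, 0.52, 0.91])
```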
We have not investigated the differences in performance due to sequencing depth or number of biological replicates, but we believe that the requirements would be basically the same in these terms for both of the methods. What matters is the completeness of annotation. Detecting DTU based on exon counts is generally more robust than that based on transcript quantifications when the annotation is incomplete, which was investigated in detail by Soneson et al. [1]. To compare the performance of DRIMSeq and sQTLseekeR, we use the splicing-related features that were also used in the sQTLseekeR paper [8] to compare sQTLseekeR against other methods. The Reviewer suggested considering other splicing-related features, such as exonic splicing enhancers (ESEs), exonic splicing silencers (ESSs) and splice sites. We have added the frequency of tuQTLs overlapping splice sites to Table 1. However, we have not performed analyses on ESEs and ESSs since Lalonde et al. [9] concluded from their study that \"ESE predictions themselves are a poor indicator of the effect of SNPs on splicing patterns\". By addressing differential splicing and sQTLs in one paper, our aim was to show that the methods used for these analyses are based on statistical approaches that ultimately tackle the same question: differential splicing between conditions. Both analyses employ the same methods for gene feature quantification, and potentially one main differential engine could be used with slight analysis-specific adjustments, such as information sharing between genes for small sample size data, or using genotypes as the grouping factor, which is done in DRIMSeq. We believe we have addressed in sufficient depth aspects of both of these analyses, providing comparisons on simulated and real data. Addressing the minor comments: We have replaced the term \"edgeR ideology\" on page 5 with \"edgeR strategy\". 
As suggested, we have investigated in more depth, based on simulations from the DM model, the DRIMSeq p-value distributions under the null hypothesis of no differential transcript usage (Figures 1, S4, S6, S11, S14). Overall, using the Cox-Reid adjusted profile likelihood and the dispersion moderation leads to p-value distributions that in most cases are closer to the uniform distribution (Figures 1D, S4 and S11). The better fit of the DM model to transcript counts in comparison to exon counts can be seen in Figure S14, where the p-value distributions are more uniform for simulations that mimic kallisto counts than for simulations that mimic HTSeq counts. Yes, using kallisto counts would be more consistent with the rest of our manuscript. Nevertheless, we decided to use the Flux Capacitor counts because they were already available on the GEUVADIS project website and have been used extensively in other projects, for example, in the sQTLseekeR paper. Moreover, we think that using other counts should not affect the comparison between DRIMSeq and sQTLseekeR. We had already considered the SummarizedExperiment class while developing the DRIMSeq package. However, it does not provide the features and functionality that we need for storing the count data and DRIMSeq results. In particular, the dimensions of Assays in a SummarizedExperiment must be the same. That is not the case for us, for two reasons. Firstly, each gene has multiple transcripts and, for example, the table with proportion estimates per transcript is larger than a table with dispersion estimates, which are available per gene. Secondly, in the QTL analysis, the table with transcript counts has different dimensions than the table with genotypes. Additionally, we use matrices instead of data frames to store our data because the former occupy less space. 
Specifically, we have created a class called MatrixList, which is designed to store data where each gene has multiple quantified features and allows quick access to these counts on a per-gene basis. We have not implemented the dim() method on dmDSdata or dmSQTLdata because we want to keep consistency between them and, for example, dmSQTLdata contains transcript counts and genotypes, which have different dimensions. Thus we decided to make the dim() methods available for the counts and genotypes slots in these classes, but not for the classes themselves. References [1] Charlotte Soneson, Katarina L Matthes, Malgorzata Nowicka, Charity W Law, and Mark D Robinson. Isoform prefiltering improves performance of count-based methods for analysis of differential transcript usage. Genome Biology, 17(1):1–15, 2016. [2] Nicolas L Bray, Harold Pimentel, Pall Melsted, and Lior Pachter. Near-optimal probabilistic RNA-seq quantification. Nature Biotechnology, advance online publication, Apr 2016. [3] Cole Trapnell, David G Hendrickson, Martin Sauvageau, Loyal Goff, John L Rinn, and Lior Pachter. Differential analysis of gene regulation at transcript resolution with RNA-seq. Nature Biotechnology, 31(1):46–53, 2013. [4] A. C. Frazee, G. Pertea, A. E. Jaffe, B. Langmead, S. L. Salzberg, and J. T. Leek. Flexible isoform-level differential expression analysis with Ballgown. bioRxiv, pages 0–13, 2014. [5] Harold J Pimentel, Nicolas Bray, Suzette Puente, Páll Melsted, and Lior Pachter. Differential analysis of RNA-Seq incorporating quantification uncertainty. bioRxiv, Jun 2016. [6] Joan E Hooper. A survey of software for genome-wide discovery of differential splicing in RNA-Seq data. Human Genomics, 8:3, 2014. [7] Rob Patro, Geet Duggal, and Carl Kingsford. Salmon: Accurate, Versatile and Ultrafast Quantification from RNA-seq Data using Lightweight-Alignment. bioRxiv, page 021592, 2015. [8] Jean Monlong, Miquel Calvo, Pedro G. Ferreira, and Roderic Guigó. 
Identification of genetic variants associated with alternative splicing using sQTLseekeR. Nature Communications, 5(May):4698, Aug 2014. [9]  Emilie Lalonde, Kevin C H Ha, Zibo Wang, Amandine Bemmo, Claudia L Kleinman, Tony Kwan, Tomi Pastinen, and Jacek Majewski. RNA sequencing reveals the role of splicing polymorphisms in regulating human gene expression. Genome Research, 21(4):545–554, Apr 2011." } ] } ]
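Reviewer Castelo's point above, that raw p-values should be uniform under the null hypothesis, can be checked with a small simulation. The sketch below uses a plain Pearson chi-square test on pooled group counts as a stand-in for DRIMSeq's likelihood-ratio test (all names and settings here are illustrative, not from the package):

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2_pvalue_2x3(g1, g2):
    """Pearson chi-square test on a 2x3 contingency table of
    transcript counts from two groups. With df = 2, the chi-square
    survival function simplifies to exp(-x/2)."""
    table = np.vstack([g1, g2]).astype(float)
    expected = (table.sum(axis=1, keepdims=True)
                * table.sum(axis=0, keepdims=True) / table.sum())
    stat = ((table - expected) ** 2 / expected).sum()
    return np.exp(-stat / 2.0)

# Simulate 500 genes with IDENTICAL transcript proportions in both
# groups (the null); the p-values should be roughly uniform on [0, 1].
pvals = []
for _ in range(500):
    props = rng.dirichlet([5.0, 5.0, 5.0])
    pvals.append(chi2_pvalue_2x3(rng.multinomial(3000, props),
                                 rng.multinomial(3000, props)))
pvals = np.array(pvals)
```

With adequate counts the mean p-value sits near 0.5 and roughly 5% of p-values fall below 0.05; systematic deviation from uniformity would signal miscalibration of the test, which is exactly the reviewer's concern about control of the type-I error.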
1
https://f1000research.com/articles/5-1356
https://f1000research.com/articles/5-2394/v1
27 Sep 16
{ "type": "Research Note", "title": "RNA-seq assembler artifacts can bias expression counts and differential expression analysis - application of YeATS on the chickpea transcriptome", "authors": [ "Sandeep Chakraborty" ], "abstract": "Background: The unprecedented volume of genomic and transcriptomic data analyzed by software pipelines makes verification of inferences based on such data, albeit theoretically possible, a challenging proposition. The availability of intermediate data can immensely aid re-validation efforts. One such example is the transcriptome, assembled from raw RNA-seq reads, which is frequently used for annotation and quantification of genes transcribed. The quality of the assembled transcripts influences the accuracy of inferences based on them. Method: Here the publicly available transcriptome from Cicer arietinum (ICC4958; Desi chickpea, http://www.nipgr.res.in/ctdb.html) was analyzed using YeATS. Results and Conclusion: The analysis revealed that a majority of the highly expressed transcripts (HET) encoded multiple genes, strongly indicating that the counts may have been biased by the merging of different transcripts. TC00004 is ranked in the top five HET for all five tissues analyzed here, and encodes both a retinoblastoma-binding-like protein (E-value=0) and a senescence-associated protein (E-value=5e-108). Fragmented transcripts are another source of error. The ribulose bisphosphate carboxylase small chain (RBCSC) protein is split into two transcripts with an overlapping amino acid sequence “ASNGGRVHC”, TC13991 and TC23009, with lengths 201 and 332 nucleotides and expression counts 17.90 and 1403.8, respectively. The huge difference in counts indicates an error in the normalization algorithm used to determine the counts. It is well known that RBCSC is highly expressed and, as expected, TC23009 ranks fifth among HETs in the shoot. 
Furthermore, some transcripts are split into open reading frames that map to the same protein, although this should not have any significant bearing on the counts. It is proposed that studies analyzing differential expression based on the transcriptome should consider these artifacts, and providing intermediate assembled transcriptomes should be mandatory, possibly with a link to the raw sequence data (Bioproject).", "keywords": [ "RNA-seq", "transcriptome", "Computational genomics", "chickpea", "Cicer arietinum", "re-validation", "Intermediate assembly data", "Big Data", "Bioproject" ], "content": "Introduction\n\nThe lack of reproducibility of results in biology is a contentious subject1,2. In computational studies, the exact replication of the output of most computer programs is difficult as most non-trivial algorithms use heuristics. The problem is compounded by recent technological advances generating \"Big Data\" involving multiple programs and pipelines3,4. However, inferences based on these results should not be subject to the same, or ideally any, unpredictability. The availability of software used at each stage and the intermediate data generated is key in enabling debugging and tracking the veracity of results by subsequent researchers5.\n\nChickpea (Cicer arietinum L.) is an important pulse crop having numerous nutritional and health benefits6. Several online resources exist for chickpea genomes and transcriptomes (http://www.cicer.info/databases.php, http://www.nipgr.res.in/ctdb.html, http://gigadb.org/dataset/100076, http://nipgr.res.in/CGAP/7). Interestingly, the 68th United Nations General Assembly has declared 2016 as the International Year of Pulses (IYP).\n\nThe RNA-seq8,9 derived transcriptome of chickpea has also been sequenced10. In contrast to other traditional methods like RNA:DNA hybridization11 and short sequence-based approaches12, RNA-seq detects transcripts with very low expression levels. 
YeATS is a work-flow for analyzing RNA-seq data13, and was used to detect a second homolog of a polyphenol oxidase gene and ~130 genes in the large gallate 1-β-glucosyltransferase family in walnut14. YeATS analysis of RNA-seq data from 20 different tissues of walnut in California unravelled detailed, tissue-specific information on ~400 transcripts encoded by a large family of resistance (R) genes and elucidated the biodiversity and possible plant–microbe interactions15.\n\nIn the current work, errors arising from the assembly step, as identified by YeATS, are shown to have a bearing on the transcript quantification. The chickpea transcriptome (http://www.nipgr.res.in/ctdb.html10) and its quantification in different tissues provided by an online interface is analyzed. This demonstrates that transcripts which are tagged with high counts predominantly encode multiple genes. While some studies provide the assembled sequences16,17, the author could not find the relevant data, even after personal communication18. It is also proposed that the availability of intermediate assembly sequences be made mandatory, in line with the recent initiative Global Open Data (GODAN: http://www.godan.info/).\n\n\nMaterials and methods\n\nThe transcriptome of the Cicer arietinum (transHybrid.fasta, ICC4958; Desi chickpea) “represents optimized de novo hybrid assembly of 454 and short-read sequence data. About 2 million 454 reads were assembled using Newbler v2.3 followed by hybrid assembly with 53 409 transcripts generated by optimized short-read data assembly reported previously10 using TGICL program” (http://www.nipgr.res.in/ctdb.html10). Quantification of transcripts in different tissues is provided by an online interface. The chickpea genome (Cicer_arietinum_GA_v1.0.fa) is obtained from http://gigadb.org/dataset/10007619.\n\nYeATS13 analyzed the post-assembly transcripts, and first excluded transcripts that did not align to the genome. 
A BLAST database of protein peptides (plantpep.fasta:1M sequences) using ~30 organisms (list.plants) from the Ensembl genome was created20. The three longest ORFs were obtained using the ‘getorf’ utility in the EMBOSS suite21. These ORFs were BLAST’ed22 against ‘plantpep.fasta’. We identify three classes of errors. A Type I error occurs when a single transcript has multiple ORFs with significant matches to different genes. In a Type II error, a single gene is broken into two separate transcripts. For a Type III error, a single transcript has multiple ORFs, but they all map to the same gene. Multiple sequence alignment was done using MAFFT (v7.123b)23, and sequence alignment figures were generated using the ENDscript server24.\n\n\nResults and discussion\n\nThe chickpea transcriptome (transHybrid.fasta:n=3476010) was first mapped to the chickpea genome (Cicer_arietinum_GA_v1.0.fa) obtained from http://gigadb.org/dataset/100076. There were 60 unmapped transcripts, some of which are mitochondrial transcripts (list.mito in Dataset1), some are metagenomic contamination (list.meta in Dataset1), and the rest have no match in the complete BLAST ‘nt’ database (list.nomatchinNT in Dataset1). The metagenomic transcripts were removed from further processing.\n\nSubsequently, each transcript was split into three ORFs (list.3ORFS:n=104K in Dataset1), each of which was BLAST’ed22 against a subset of plant proteins created from the Ensembl20 database (see Methods).\n\nThere are ~1300 transcripts encoding more than one significant peptide (see list.duplicate in Dataset1). The top five highly expressed transcripts (HET) from five tissues - flower bud (FB), mature leaf (ML), root (RT), shoot (SH), young plant (YP) - were obtained from http://www.nipgr.res.in/ctdb.html (tissues.txt in Dataset1). 
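The three error classes defined in the Methods can be sketched as a simple classification over ORF-to-gene BLAST assignments. The data structure and gene labels below are hypothetical, chosen only to mirror the examples in this note; this is not YeATS code:

```python
from collections import defaultdict

def classify_errors(orf_hits):
    """orf_hits maps a transcript ID to the list of genes that its
    significant ORFs match (one entry per ORF).
    Type I:   one transcript whose ORFs hit different genes (merged).
    Type II:  one gene recovered from two or more transcripts (split gene).
    Type III: one transcript with several ORFs hitting the same gene."""
    type1, type3 = set(), set()
    gene_to_transcripts = defaultdict(set)
    for tx, genes in orf_hits.items():
        for g in genes:
            gene_to_transcripts[g].add(tx)
        if len(set(genes)) > 1:
            type1.add(tx)   # different genes merged into one transcript
        elif len(genes) > 1:
            type3.add(tx)   # split ORFs, all mapping to the same gene
    type2 = {g for g, txs in gene_to_transcripts.items() if len(txs) > 1}
    return type1, type2, type3

# Hypothetical assignments echoing the transcripts discussed here.
hits = {
    "TC00004": ["RBLP", "senescence-associated"],  # Type I
    "TC13991": ["RBCSC"], "TC23009": ["RBCSC"],    # Type II
    "TC01688": ["AT1G05840.1", "AT1G05840.1"],     # Type III
}
t1, t2, t3 = classify_errors(hits)
```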
The number of transcripts encoding multiple genes, as found from ’list.duplicate’, were as follows - FB:4, ML:5, RT:3, SH:4, YP:5 - indicating an over-representation of merged transcripts among the HET.\n\nThe top five HET in the root, and the genes encoded by them, are listed in Table 1. The ORFs of TC00004 encoded in the reverse direction map to a partial retinoblastoma-binding-like protein (~500 out of 549 amino acids) and the complete senescence-associated protein (157 amino acids) (Figure 1). TC00004 is computed to be highly transcribed (ranked in the top five) in all five tissues studied here (tissues.txt in Dataset1). The top ranking transcript, TC00002, has two ORFs - ORF.11 and ORF.38 (TC00002.orf in Dataset1) - which align to an ATP synthase subunit beta (327 aa) and a senescence-associated protein (615 aa), respectively. This transcript is highly fragmented and encodes on both strands in an overlapping manner.\n\nTrLen: length of transcript, PLen: length of the full protein, OLen: length of ORF, RPKM: Reads Per Kilobase of transcript per Million mapped reads. Three out of the five transcripts have Type I errors, where two different transcripts are merged.\n\nThe RBLP ORFs are fragmented, and combine to ~500 aa. TC00004 is computed to be highly transcribed (ranked in the top five) in all five tissues studied here.\n\nSince the merged transcripts are proximally located in the genome, it is possible that these loci are under the same transcriptional control and the expression counts are correct. However, the over-representation of such merged transcripts in HET suggests that there might be some errors in counting.\n\nThere are transcripts that encode fragmented ORFs which map to the same protein. YeATS has a merging algorithm that identifies overlapping amino acid sequences in transcripts. 
For example, TC13991 (35 aa) and TC23009 (110 aa) have an overlapping amino acid sequence \"ASNGGRVHC\", and both map to a ribulose bisphosphate carboxylase small chain (RBCSC, 180 aa) family protein (Figure 2). TC13991 has a count of 17.90, while TC23009 has a count of 1403.8. The large difference indicates an error in the normalization algorithm, since there is only one expressed transcript of this gene (the ORF of TC13991 has no other significant match among other transcripts), although there are other genomic variants. TC23009 is ranked fifth among HET in the shoot, while all higher-ranked transcripts have Type I errors. Furthermore, there is a missing 35 aa stretch in the C-terminal peptide \"IGFDNVRQVQCISFIAHTPKEF\", which has no match in the transcripts. A similar scenario with fragmented transcripts and a missing fragment was detected by YeATS in a polyphenol oxidase gene in walnut14.\n\nThere are ~3000 transcripts which encode more than one ORF mapping to the same peptide (list.splitORF in Dataset1). TC01688 is one such transcript, having ORF.70 and ORF.89 (see TC01688.orf in Dataset1) mapping to an aspartyl protease family protein (TAIRid:AT1G05840.1) with BLAST bitscores 250 and 285, respectively. Merging ORF.70 and ORF.89 (inserting ‘ZZZ’) results in an increased BLAST bitscore of 507 (Figure 3). This should have minimal effects on the counts, unless Type I or Type II errors also occur simultaneously.\n\n\nConclusions\n\nIn the current work, assembler errors have been categorized into three types. These errors have been analyzed for the chickpea transcriptome sequence, and anomalies in the quantification have been detected. The availability of the assembled transcriptome sequence has enabled such analysis. It is proposed that sequences of assembled transcriptomes be linked to the Bioproject. Such initiatives have been adopted in the Global Open Data statement of purpose (http://www.godan.info/pages/statement-purpose) to “make agricultural and nutritionally relevant data available, accessible, and usable for unrestricted use worldwide”.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for YeATS on chickpea transcriptome, 10.5256/f1000research.9667.d13681625", "appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nI gratefully acknowledge Mridul Bhattacharjee and Nitin Salaye for logistic support.\n\n\nReferences\n\nMoonesinghe R, Khoury MJ, Janssens AC: Most published research findings are false-but a little replication goes a long way. PLoS Med. 2007; 4(2): e28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIoannidis JP: How to make more published research true. PLoS Med. 2014; 11(10): e1001747. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarx V: Biology: The big challenges of big data. Nature. 2013; 498(7453): 255–260. PubMed Abstract | Publisher Full Text\n\nStephens ZD, Lee SY, Faghri F, et al.: Big Data: Astronomical or Genomical? PLoS Biol. 2015; 13(7): e1002195. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHurley DG, Budden DM, Crampin EJ: Virtual Reference Environments: a simple way to make research reproducible. Brief Bioinform. 2015; 16(5): 901–903. 
\n\nJukanti AK, Gaur PM, Gowda CL, et al.: Nutritional quality and health benefits of chickpea (Cicer arietinum L.): a review. Br J Nutr. 2012; 108(Suppl 1): S11–S26.\n\nJain M, Misra G, Patel RK, et al.: A draft genome sequence of the pulse crop chickpea (Cicer arietinum L.). Plant J. 2013; 74(5): 715–729.\n\nWang Z, Gerstein M, Snyder M: RNA-seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009; 10(1): 57–63.\n\nFlintoft L: Transcriptomics: digging deep with RNA-seq. Nat Rev Genet. 2008; 9(8): 568.\n\nGarg R, Patel RK, Tyagi AK, et al.: De novo assembly of chickpea transcriptome using short reads for gene discovery and marker identification. DNA Res. 2011; 18(1): 53–63.\n\nClark TA, Sugnet CW, Ares M Jr: Genomewide analysis of mRNA processing in yeast using splicing-specific microarrays. Science. 2002; 296(5569): 907–910.\n\nKodzius R, Kojima M, Nishiyori H, et al.: CAGE: cap analysis of gene expression. Nat Methods. 2006; 3(3): 211–222.\n\nChakraborty S, Britton M, Wegrzyn J, et al.: YeATS - a tool suite for analyzing RNA-seq derived transcriptome identifies a highly transcribed putative extensin in heartwood/sapwood transition zone in black walnut [version 2; referees: 3 approved]. F1000Res. 2015; 4: 155.\n\nMartínez-García PJ, Crepeau MW, Puiu D, et al.: The walnut (Juglans regia) genome sequence reveals diversity in genes coding for the biosynthesis of non-structural polyphenols. Plant J. 2016; 87(5): 507–532.
\n\nChakraborty S, Britton M, Martínez-García PJ, et al.: Deep RNA-Seq profile reveals biodiversity, plant-microbe interactions and a large family of NBS-LRR resistance genes in walnut (Juglans regia) tissues. AMB Express. 2016; 6(1): 12.\n\nJain M, Srivastava PL, Verma M, et al.: De novo transcriptome assembly and comprehensive expression profiling in Crocus sativus to gain insights into apocarotenoid biosynthesis. Sci Rep. 2016; 6: 22456.\n\nHara Y, Tatsumi K, Yoshida M, et al.: Optimizing and benchmarking de novo transcriptome sequencing: from library preparation to assembly evaluation. BMC Genomics. 2015; 16(1): 977.\n\nBaba SA, Mohiuddin T, Basu S, et al.: Comprehensive transcriptome analysis of Crocus sativus for discovery and expression of genes involved in apocarotenoid biosynthesis. BMC Genomics. 2015; 16(1): 698.\n\nVarshney RK, Song C, Saxena RK, et al.: Genomic data of the chickpea (Cicer arietinum). 2014.\n\nKersey PJ, Allen JE, Armean I, et al.: Ensembl Genomes 2016: more genomes, more complexity. Nucleic Acids Res. 2016; 44(D1): D574–D580.\n\nRice P, Longden I, Bleasby A: EMBOSS: the European Molecular Biology Open Software Suite. Trends Genet. 2000; 16(6): 276–277.\n\nCamacho C, Madden T, Ma N, et al.: BLAST Command Line Applications User Manual. 2013.\n\nKatoh K, Standley DM: MAFFT multiple sequence alignment software version 7: improvements in performance and usability. Mol Biol Evol. 2013; 30(4): 772–780.\n\nRobert X, Gouet P: Deciphering key features in protein structures with the new ENDscript server.
Nucleic Acids Res. 2014; 42(Web Server issue): W320–W324.\n\nChakraborty S: Dataset 1 in: RNA-seq assembler artifacts can bias expression counts and differential expression analysis - application of YeATS on the chickpea transcriptome. F1000Research. 2016." }
[ { "id": "17160", "date": "11 Nov 2016", "name": "Björn Voß", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article \"RNA-seq assembler artifacts can bias expression counts and differential expression analysis - application of YeATS on the chickpea transcriptome\" reports putative assembly artifacts in the chickpea transcriptome and discusses possible impacts on the expression counts. The main message is that it should become mandatory to make intermediate data, such as assembled transcripts, accessible. With respect to this the title is not appropriate. Additionally the title should not explicitly mention YeATS, and the fact the artifacts can bias expression counts is not surprising. The title needs to be changed.\nThere are some wrong claims in the introduction, but overall the intro is acceptable. In contrast, the Methods section is insufficient in its current form. It lacks information to reproduce the results. Interestingly, this is a point that the author stresses in the introduction. Furthermore, some of the used tools and datasets seem inappropriate (e.g. getorf to translate transcripts to peptides. Further concerns are provided in the detailed comments below).\nIn the Results, the author presents some examples of putative assembly artifacts found in highly expressed transcripts. Unfortunately, no information about the actual errors that result from the artifacts is provided.
Furthermore, finding such artifacts in a single dataset does not satisfy to claim these as a widespread problem.\nOverall, the manuscript is hard to read. One reason is the quality of the language that makes it hard to follow the authors reasoning. Another reason, is that there seems to be no clear story, that is to be presented. What is the main point the author wants to make? Is it reproducibility, the problem of assembly artifacts or that more data needs to be made freely accessible?\np.3: 'In computational studies, the exact replication of the output of most computer programs is difficult as most non-trivial algorithms use heuristics' Comment: Heuristics do not contradict reproducibility.\n\np.3: 'The three longest ORFs were obtained using the ‘getorf’ utility in the EMBOSS suite 2' Comment: Is assured that these ORFs do not overlap on either strand?\n\np.3: 'BLAST’ed 2' Comment: Provide details, like E-value cutoff, and so on.\n\np.3: 'Type I error occurs when a single transcript has multiple ORFs with significant matches to different genes' Comment: How is significance assessed? How are multiple significant matches of the same ORF treated? Is the best selected?\n\np.3: 'the ‘plantpep.fasta’' Comment: It is not clear to me why the author uses a database of plant proteins rather than looking at the mapping to the genome, to identify assembly errors. 
Another option would be to use chickpea protein sequences from the genome.\n\np.3: 'In a Type II error a single gene is broken into two separate transcripts' Comment: How is this detected from the BLAST results?\n\np.3: 'For a type III error, a single transcript has multiple ORFs, but they all map to the same gene' Comment: Are the ORFs perhaps overlapping?\n\np.3: 'was first mapped to the chickpea genome' Comment: How?\n\np.3: 'The RNA-seq [8,9] derived transcriptome of chickpea has also been sequenced [10]' Comment: This sentence makes not much sense.\n\np.3: 'short sequence-based approaches [12], RNA-seq detects transcripts with very low expression levels.' Comment: RNA-seq is also based on short sequence reads, which depends on the used sequencing technology. The detection of low expressed transcripts heavily depends on the used sequencing depth and is not an inherent feature of it.\n\np.3: 'metagenomic contamination' Comment: This term does not exist. Simply say contamination.\n\np.3: 'split ' Comment: Split is misleading.\n\np.3: 'significant' Comment: How is significance measured?\n\np.3: 'FB:4, ML:5, RT:3, SH:4 YP:5' Comment: Some transcripts occur several times in different tissues, TC00004 for example. What would be the non-redundant numbers? Does it at all make sense to differentiate between tissues for this study?\n\np.3: 'are' Comment: as\n\np.3: 'retinoblastoma-binding-like protein' Comment: Does this annotation make sense for a plant protein?\n\np.3: 'have' Comment: be\n\np.3: 'This transcript is highly fragmented and encodes on both strands in an overlapping manner' Comment: This sound like this is a transcript from a pseudogene and the predicted ORFs are wrong.\n\np.4: 'TC13991 has a count of 17.90, while TC23009 has a count of 1403.8. 
The large difference indicates an erroneous normalization algorithm, since there is only one expressed transcript of this gene (the ORF of TC13991 has no other significant match among other transcripts), although there are other genomic variants.' Comment: First, the author argues that these two transcripts should be merged, but then he is concerned about the fact that, although different genomic variants of the corresponding gene exist, there is only one transcript. It looks like the transcripts resemble the genomic situation, although very poorly. It would be interesting to see the alignment on the nucleotide level.\n\np.4t: 'Furthermore, there is a missing 35 aa stretch in the C-terminal peptide “IGFDNVRQVQCISFIAHTPKEF”, which has no match in the transcripts' Comment: What is meant with the missing 35aa stretch? The missing C-terminal part has 22aa.\n\np.4: 'merged transcripts are proximally located in the genome' Comment: Please clarify what this is intended to mean. Which merged transcripts?", "responses": [ { "c_id": "2317", "date": "24 Nov 2016", "name": "Sandeep Chakraborty", "role": "Author Response", "response": "Dear Dr Voß, I would like to thank you for taking the time to review this paper in detail, and providing constructive criticism on the overall manuscript. I also appreciate the opportunity to make suitable changes where appropriate, and defend some of the critiques that are not correct in my opinion. Reference numbering below is based on the main manuscript. It is apparent to me that the concept of identifying assembly errors by breaking a transcript into ORFs is not lucid. Mapping a transcript to the genome will not identify such errors, since there is no way to differentiate introns and inter-gene sequences. However, I have attempted to the best of my ability to clarify your doubts. 
Getting access to the assembled transcriptome is not the general case, even in cases when inferences are made based on quantification (maybe not in this particular chickpea study). In another case, I have been unable to obtain the transcriptome through personal communication. (http://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-015-1894-5). Another paper (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4725360/) makes inferences on differential gene expression under drought and salinity stress, but I could not find intermediate data. Although, I have not communicated with the authors of this paper, the point here is that one should not have to make personal contact, as not getting responses to valid queries is not a particularly good feeling. So, I think studies that make inferences on transcriptomes should always provide them. In general, I am focused on studies that make inferences based on transcriptomic data since I believe they have a degree of impenetrability due to complex pipelines. For example, certain studies (which have provided intermediate data) have made inferences on the saffron transcripts, without removing extraneous transcripts from other genomes [16]. In a different study (pre-print) it has been shown that there is a fungal transcript annotated as a saffron gene (and possibly others) [23]. Below I outline a point-by-point response to your comments. The article \"RNA-seq assembler artifacts can bias expression counts and differential expression analysis - application of YeATS on the chickpea transcriptome\" reports putative assembly artifacts in the chickpea transcriptome and discusses possible impacts on the expression counts. The main message is that it should become mandatory to make intermediate data, such as assembled transcripts, accessible. With respect to this the title is not appropriate. Additionally the title should not explicitly mention YeATS, and the fact the artifacts can bias expression counts is not surprising. 
The title needs to be changed. The title has been changed as suggested. However, I would not accept \"the fact the artifacts can bias expression counts is not surprising\" since understanding the basis for these biases can possibly lead to algorithms that fix it (maybe an assembler that does a quick analysis and does not merge transcripts that map to different genes). There are some wrong claims in the introduction, but overall the intro is acceptable. Please point them out so that they can be fixed, even if it takes multiple revisions. In contrast, the Methods section is insufficient in its current form. It lacks information to reproduce the results. Interestingly, this is a point that the author stresses in the introduction. I agree that, if true, this is a major oversight on my part. I assumed, wrongly, that these would be really trivial to reproduce. (1) Find the top five transcripts. (2) Find the ORFs - choose the three longest. (3) Find whether they are annotated using BLAST (possibly on a smaller database as I have done, but best done with the ’nr’ database). (4) See if the most highly transcribed transcripts encode more than one gene. The complete analysis of all transcripts needs to be done for identifying Type II errors, which highlights another problem of discrepancies in counts of split transcripts from the same gene. While using the full ’nr’ BLAST database is the best option, a BLAST database of protein peptides (plantpep.fasta:1M sequences) using ∼30 organisms (list.plants) from the Ensembl genome was created to reduce computational times. I have re-written the Methods - hopefully it is more lucid now. Furthermore, some of the used tools and datasets seem inappropriate (e.g. getorf to translate transcripts to peptides. Further concerns are provided in the detailed comments below). I disagree with this point. ‘getorf’ from the EMBOSS suite is a well-known tool for getting ORFs.
While it is simple, and I have my own version written before I found this tool, I use this standardized version. In the Results, the author presents some examples of putative assembly artifacts found in highly expressed transcripts. Unfortunately, no information about the actual errors that result from the artifacts is provided. Furthermore, finding such artifacts in a single dataset does not satisfy to claim these as a widespread problem. All that is shown here is that RNA-seq assembly merges transcripts (already shown before for a different plant and assembler [2]), which casts suspicion on counts - and the fact that most highly expressed genes in the chickpea database analyzed are merged further strengthens this doubt. Admittedly, the particular chickpea study made no inferences based on the counts - but that still does not refute the point that counts might be wrong, even in this single study. I use the word ‘might’, since I have not analyzed the downstream raw data to establish this. And if there are insinuations in the manuscript that this is widespread (which I suspect is true, but will take time to establish), kindly point it out so that I may correct that. Overall, the manuscript is hard to read. One reason is the quality of the language that makes it hard to follow the authors reasoning. Please let me know how to improve this. Another reason, is that there seems to be no clear story, that is to be presented. What is the main point the author wants to make? Is it reproducibility, the problem of assembly artifacts or that more data needs to be made freely accessible? The ‘problem of assembly artifacts (Point 1)’ (Trinity) encountered during the Walnut genome project [14] has led to the development of methods to detect these artifacts [2], which the current work could reproduce (Point 2) for another transcriptome (and a different assembler, Newbler) as it was possible to find ‘freely accessible data’ (Point 3).
Point 2 and 3 are well-known issues, the little contribution in the current paper is to emphasize Point 1. p.3: ’In computational studies, the exact replication of the output of most computer programs is difficult as most non-trivial algorithms use heuristics’ Comment: Heuristics do not contradict reproducibility. Agreed, I have changed the statement to ‘as most non-trivial algorithms are non-deterministic.’ p.3: ’The three longest ORFs were obtained using the ‘getorf’ utility in the EMBOSS suite 2’ Comment: Is assured that these ORFs do not overlap on either strand? No, there is no such assumption. However, this is deliberate since the E-value of matches of these ORFs will determine their relevance. p.3: ’BLAST’ed 2’ Comment: Provide details, like E-value cutoff, and so on. Done. p.3: ’Type I error occurs when a single transcript has multiple ORFs with significant matches to different genes’ Comment: How is significance assessed? How are multiple significant matches of the same ORF treated? Is the best selected? Significance is assessed using: E-value=1E-8, BLAST bitscore=∼75. Yes, the best match is selected for multiple significant matches of the same ORF. p.3: ’For a type III error, a single transcript has multiple ORFs, but they all map to the same gene’ Comment: Are the ORFs perhaps overlapping? No, these ORFs would never be overlapping. They have been created by sequencing or assembly errors, resulting in the false insertion of stop codons. Note, that the ’getorf’ program always ends at a stop codon, but does not (need to) start from a start codon. p.3: ’was first mapped to the chickpea genome’ Comment: How? Using BLAST through the YEATS pipeline. Mentioned in the text now. p.3: ’The RNA-seq [8,9] derived transcriptome of chickpea has also been sequenced [10]’ Comment: This sentence makes not much sense. 
Changed to ‘The transcriptome of chickpea has been sequenced [1] using RNA-seq [10, 11].’ p.3: ’short sequence-based approaches [12], RNA-seq detects transcripts with very low expression levels.’ Comment: RNA-seq is also based on short sequence reads, which depends on the used sequencing technology. The detection of low expressed transcripts heavily depends on the used sequencing depth and is not an inherent feature of it. I concur, changed the line to ‘... RNA-seq can detect transcripts with very low expression levels by increasing sequencing depth.’ p.3: ’metagenomic contamination’ Comment: This term does not exist. Simply say contamination. Corrected. p.3: ’split ’ Comment: Split is misleading. Corrected. p.3: ’significant’ Comment: How is significance measured? E-value=1E-8, BLAST bitscore=∼75. Mentioned in the text. p.3: ’FB:4, ML:5, RT:3, SH:4 YP:5’ Comment: Some transcripts occur several times in different tissues, TC00004 for example. What would be the non-redundant numbers? Does it at all make sense to differentiate between tissues for this study? Having multiple tissues strengthens the main point highlighted in this paper, that expression counts might be biased due to merged transcript. The claim would be weaker if there were just one tissue in which this were true. There are common (TC00004) transcripts, but there are specific ones too (TC00462 in mature leaf). Also, the very high RPKM of these merged transcripts compared to other transcripts indicates the possibility of some errors in counting. p.3: ’are’ Comment: as Corrected. p.3: ’retinoblastoma-binding-like protein’ Comment: Does this annotation make sense for a plant protein? Yes - http://nar.oxfordjournals.org/content/27/17/3527.full. Even though this is correct, annotation of genes is not critical to the narrative here. p.3: ’have’ Comment: be Corrected. 
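Putting together the thresholds quoted in these responses (best BLAST hit per ORF; E-value of 1E-8 and bitscore of ~75), the Type I versus Type III distinction might be sketched as below; the data structures and gene names are illustrative, not taken from the chickpea analysis:

```python
E_CUTOFF = 1e-8          # E-value threshold quoted in the responses
BITSCORE_CUTOFF = 75.0   # approximate bitscore threshold quoted there

def classify_transcript(orf_hits):
    """Classify one transcript from the best BLAST hit of each of its ORFs.

    orf_hits: list of (gene_id, bitscore, evalue) tuples, one per ORF.
    Type I:   significant ORFs map to different genes (merged transcript).
    Type III: several significant ORFs map to the same gene (false stops).
    """
    significant = [(g, b, e) for g, b, e in orf_hits
                   if e <= E_CUTOFF and b >= BITSCORE_CUTOFF]
    genes = {g for g, _, _ in significant}
    if len(genes) > 1:
        return "Type I"
    if len(genes) == 1 and len(significant) > 1:
        return "Type III"
    return "unclassified"

# Hypothetical best hits for the two ORFs of one transcript
print(classify_transcript([("geneA", 250.0, 1e-60), ("geneB", 285.0, 1e-70)]))
```

Type II (one gene split across several transcripts) cannot be decided from a single transcript's hits; it requires comparing best hits across the whole transcriptome.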
p.3: ’This transcript is highly fragmented and encodes on both strands in an overlapping manner’ Comment: This sound like this is a transcript from a pseudogene and the predicted ORFs are wrong. This is not a pseudogene, by virtue of being transcribed. It seems there is an assembly error, for example Uniprot:A0A072THK0 Senescence-associated protein has three homologous segments in TC00002 - fwd:249-485, reverse:211-35 and fwd:730-894 with E-values 2e-34, 2e-14 and 9e-14, respectively. p.4: ’TC13991 has a count of 17.90, while TC23009 has a count of 1403.8. The large difference indicates an erroneous normalization algorithm, since there is only one expressed transcript of this gene (the ORF of TC13991 has no other significant match among other transcripts), although there are other genomic variants.’ Comment: First, the author argues that these two transcripts should be merged, but then he is concerned about the fact that, although different genomic variants of the corresponding gene exist, there is only one transcript. It looks like the transcripts resemble the genomic situation, although very poorly. It would be interesting to see the alignment on the nucleotide level. This is an excellent suggestion, I have looked at the nucleotide mapping of these transcripts to the genome more closely. There is one new paragraph addressing this (three new tables). p.4t: ’Furthermore, there is a missing 35 aa stretch in the C-terminal peptide “IGFDNVRQVQCISFIAHTPKEF”, which has no match in the transcripts’ Comment: What is meant with the missing 35aa stretch? The missing C-terminal part has 22aa. Corrected. p.4: ’merged transcripts are proximally located in the genome’ Comment: Please clarify what this is intended to mean. Which merged transcripts? I have rephrased the statement. ‘When two genes are adjacent to each other in the genome, and are both transcribed it is possible that an assembler merges these into one transcript. 
Also, it is possible that these loci are under the same transcriptional control and the expression counts are correct. Thus, high levels of expression of one gene should correlate with high expression level of the other, assuming a proper normalization. However, the over-representation of such merged transcripts in HET suggests that there might be some errors in counting.’ I hope that your concerns have been addressed suitably. Best regards, Sandeep" } ] }, { "id": "17521", "date": "05 Dec 2016", "name": "Lilah Toker", "expertise": [], "suggestion": "Not Approved", "report": "Not Approved\n\nThe article \"RNA-seq assembler artifacts can bias expression counts and differential expression analysis application of YeATS on the chickpea transcriptome\" by Sandeep Chakraborty describes the use of a workflow developed by the author for identification of assembly artifacts in the chickpea transcriptome.
The author suggests a possible impact of these artifacts on the expression counts and concludes that when de-novo transcriptome assembly is implemented as part of an RNA-seq study, the intermediate assembly transcripts should be provided by the authors.\nWhile the concluding statement is true regardless of the content of the manuscript, there are several issues that need to be addressed for the readers to benefit from this study:\nThe mention of YeATS in the title and the abstract without providing an explanation of what it is unnecessarily complicates the understanding of the text.\n\nThe title and the abstract of the manuscript are quite misleading. It is true that assembly artifacts can bias expression counts and differential expression analysis; in fact, it is a quite trivial statement. Nevertheless, nothing in the manuscript addresses this claim. In the best case, the author might have detected several artifacts in the assembly of the chickpea transcriptome, speculating about their impact on the expression counts, but he has not evaluated the impact of the proposed artifacts on expression counts or differential expression analysis. In addition, the author analysed a single dataset and thus should avoid generalizing his statements.\n\nThe manuscript is entirely based on a workflow developed by the author. The workflow was never validated and was not previously used by other researchers, questioning the reliability of the results. The main step of the workflow is based on ORFs identified by the getorf tool, which by default defines ORFs as regions between two STOP codons. This is quite puzzling, since traditionally ORFs are defined as sequences beginning with a START codon and might or might not contain a STOP codon.
The author should validate his results using a tool based on the conventional definition of ORF such as NCBI’s ORFfinder.\n\nMore generally, the author should evaluate the performance of his workflow and verify the proposed artifacts by analysing a well-annotated organism such as Arabidopsis. Do the identified “errors” indeed represent artifacts or do they represent biological truth? For example, one intuitive explanation for a single ORF aligned to two proteins would be that the two proteins are truly transcribed from a single transcript, as often observed in other organisms. Similarly, the “Type II error” can represent cases where multiple isoforms exist for a single protein (e.g. due to alternative splicing events).\n\np.4 – “However, the overrepresentation of such merged transcripts in HET suggests that there might be some errors in counting”. This is entirely based on the author’s opinion. Is there any evidence that this is the case?\n\nThe manuscript needs to be proofread in order to improve its readability. For example, the interchangeable use of past and present tenses while describing the work makes it difficult to understand the workflow of the study (e.g. p.3, 4th paragraph).\n\np.3, 3rd paragraph: “In contrast to other traditional methods like RNA:DNA hybridization and short sequence-based approaches”. Why does the author mention the other methods? This sentence doesn’t really make any sense, especially in light of the fact that RNAseq methodology is also a short sequence based approach.\n\np.3, 3rd paragraph: “RNA-seq detects transcripts with very low expression levels”. This sentence is irrelevant to the manuscript. Moreover, it is not entirely valid since the ability of the method to detect low abundant transcripts is related to the depth of the sequencing.\n\np.3, 3rd paragraph: “YeATS is a work-flow for analyzing RNA-seq data”.
The author should clearly indicate that the YeATS workflow was developed and implemented by the author himself instead of presenting it as a well-established approach.", "responses": [] } ]
1
https://f1000research.com/articles/5-2394
https://f1000research.com/articles/5-2814/v1
05 Dec 16
{ "type": "Opinion Article", "title": "COP-eration for global food security", "authors": [ "Erick de la Barrera" ], "abstract": "Mexico is hosting the 13th Conference of the Parties (COP-13) on the Convention on Biological Diversity. Participants will have another opportunity to \"integrate biodiversity for wellbeing.\" Considering that food production is a major driver for the loss of biological diversity, despite the fact that ample genetic reservoirs are crucial for the persistence of agriculture in a changing world, food can be a conduit for bringing biodiversity into people's minds and government agendas. If this generation is going to \"live in harmony with nature,\" as the Aichi Biodiversity Targets indicate, such an integration needs to be developed between the agricultural and environmental sectors throughout the world, especially as an increasingly urban civilization severs its cultural connections to food origin.", "keywords": [ "Biodiversity", "COP-13", "food justice", "global environmental change", "human development", "planetary boundaries", "sustainability" ], "content": "\n\nThe world is witnessing an accelerated loss of biological species owing to human activities, which are also putting the integrity of other life-support systems of the planet at risk1,2. In response, the governments of 196 countries (86% have signed thus far) established the Convention on Biological Diversity (CBD; http://www.cbd.int/) in 1992 with the objectives of 1) conserving existing biodiversity, 2) utilizing the components of biodiversity in a sustainable manner, and 3) ensuring that the benefits stemming from the use of genetic resources are distributed in a fair and equitable manner. Given concerns about the development of new biotechnologies, a series of provisions based on the precautionary principle have also been established in the Cartagena Protocol on Biosafety (https://bch.cbd.int/protocol/), which entered into force in 2003.
In addition, given the disparities between economic gains reaped by large multinational corporations and the conditions facing first nations, who are regarded as the custodians of biodiversity, the Nagoya Protocol became effective in 2014 to regulate access to genetic diversity and the sharing of its benefits (https://www.cbd.int/abs/).\n\nCancún is host to the 13th Conference of the Parties (COP-13; 28th November - 17th December 2016) on the CBD with the guiding theme of \"integrating biodiversity for wellbeing.\" This is no simple task, especially when considering that our western civilization relies on a conceptual separation of humans and nature. Perhaps food can be a useful conduit for incorporating the notion of biodiversity into everyday life. Indeed, the link between ecosystem integrity and food production is deceptively strong. On one hand, agro-food production is a main driver for global environmental change. For instance, advancing the agricultural frontier results in the loss of natural ecosystems in agricultural exporter regions, such as the Brazilian Amazon for production of cereals, soybean, and cattle husbandry, the Argentine Pampas for production of soybean, and the Mexican cloud forests for the production of avocados. In addition, over 10% of the global greenhouse gas emissions stem from agriculture, including from the production of synthetic fertilizer and other agrochemicals, the operation of machinery, such as water pumps and tractors, and transporting inputs and produce to and from the sites of final consumption3. On the other hand, agriculture is also one of the most vulnerable sectors to environmental change. A recent example is the multi-year drought in California, where more than 2 billion dollars and almost 20,000 jobs had been lost by 2014, in what otherwise was the most important agricultural state in the USA4.
Also, the production of coffee, the most traded global commodity after hydrocarbons, is likely to decrease during the present century, especially at lower elevations5.\n\nFood supply inherently relies on biodiversity. For instance, an ample genetic diversity for cultivated species and their wild relatives is necessary for the selection of materials with desirable traits for the development of improved varieties. Thus, the reservoirs of genetic diversity, along with traditional cultural practices, need to be protected, including for Coffea arabica in Ethiopia, Zea mays in the American continent, and Oryza sativa throughout Asia6–8. Also, a high diversity of edible species has originated from the prevalence of family agriculture, already responsible for 80% of the global food production (http://www.fao.org/3/b-mm296e.pdf). It is precisely in home gardens throughout the world where domestication has occurred for numerous species that have been incorporated to local menus and pharmacopeias over the course of history. Finally, these catalogues of species will allow the development of new \"climate-ready\" crops, such as the cultivation of agaves for production of fiber, sugars, and ethanol in arid lands, reduction of methane emissions during rice cultivation systems, and bringing to mainstream agriculture new fruits, sources of starch, and vegetables, as our diet shifts to a less meat-intensive food system9–11.\n\nHowever, people are increasingly disconnected from their food's origin. This is the result of multiple factors, including an increasingly urban population, the consequent rapid growth in the demand for food with a long shelf life, and the fact that large-scale cattle husbandry and industrialized foods rely on a mere handful of species. 
This is especially true for the urban poor, who only have access to highly processed food from convenience stores, and, to a lesser extent, for the middle classes, for whom eating out is aspirational and a parameter considered when quantifying wealth (http://documents.worldbank.org/curated/en/269121467991958460).\n\nIt thus seems paradoxical that the prevalent government model has ministries of environment and agriculture as discrete entities, with the environmental sector often regarded as obstructive to economic development. Examples include exemptions from environmental compliance for agricultural activities in the USA12, the recent reopening of fisheries within an exclusion zone adjacent to oil platforms off the coast of Campeche in the Gulf of Mexico (http://dof.gob.mx/nota_detalle.php?codigo=5456197&fecha=11/10/2016), and the internationally pervasive disconnection between government investment in the agricultural sector and a meager to nil improvement in food security and food-related health issues13. This antagonism makes sense from a historical perspective, though. After all, agriculture is the activity that allowed the rise of humans as Earth's dominant species. In contrast, our understanding of planetary life support systems only developed during the last century. Now we know that nature is not an external entity created for human utilization, but that the evolutionary and ecological success and the (sometimes debatable) cognitive awareness that characterize our species come with the responsibility of looking after the planet. 
An example of this paradigm shift is Laudato si' (http://w2.vatican.va/content/francesco/en/encyclicals/documents/papa-francesco_20150524_enciclica-laudato-si.html), the papal encyclical whose second chapter argues against the literal interpretation of the so-called creation mandate that has justified the human (ab)use of nature over the course of many centuries.\n\nThe most important task of the CBD is perhaps to bring biological diversity to center stage in the minds of citizens and on government agendas, in a similar way to what the Framework Convention on Climate Change achieved after 23 years of existence (http://unfccc.int/paris_agreement/items/9485.php). Indeed, the link between the optical properties of certain gases and ongoing changes in temperature and precipitation, with consequences for the economy and the planet's viability, is now considered straightforward. While the link between biological diversity and planetary viability appears to be subtler, monitoring its integrity can be done by direct observation, so it can involve the public, a necessary step if we are to transition towards the Aichi Biodiversity Targets of living in harmony with nature. The food nexus may be an initial stepping stone.", "appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was conceived and drafted while the author held a generous Fulbright NEXUS Fellowship (2014-2016), and was completed under the generous support of the Dirección General del Personal Académico, Universidad Nacional Autónoma de México (PAPIIT IN205616).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nSala OE, Chapin FS 3rd, Armesto JJ, et al.: Global biodiversity scenarios for the year 2100. Science. 2000; 287(5459): 1770–1774. PubMed Abstract | Publisher Full Text\n\nRockström J, Steffen W, Noone K, et al.: A safe operating space for humanity. Nature. 
2009; 461(7263): 472–475. PubMed Abstract | Publisher Full Text\n\nIPCC: Summary for policymakers. In: Pachauri RK, Meyer LA, eds. Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. IPCC, Geneva, Switzerland, 2014; SPM1–32. Reference Source\n\nHowitt R, Medellín-Azuara J, MacEwan D, et al.: Economic Analysis of the 2014 Drought for California Agriculture. Center for Watershed Sciences, University of California, Davis. 2014. Reference Source\n\nOvalle-Rivera O, Läderach P, Bunn C, et al.: Projected shifts in Coffea arabica suitability among major global producing regions due to climate change. PLoS One. 2015; 10(4): e0124155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoege E: El patrimonio biocultural de los pueblos indígenas de México: Hacia la conservación in situ de la biodiversidad y agrodiversidad en los territorios indígenas. Instituto Nacional de Antropología e Historia, Mexico City. 2008. Reference Source\n\nAerts R, Berecha G, Gijbels P, et al.: Genetic variation and risks of introgression in the wild Coffea arabica gene pool in south-western Ethiopian montane rainforests. Evol Appl. 2013; 6(2): 243–252. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumagai M, Kanehara M, Shoda S, et al.: Rice Varieties in Archaic East Asia: Reduction of Its Diversity from Past to Present Times. Mol Biol Evol. 2016; 33(10): 2496–2505. PubMed Abstract | Publisher Full Text\n\nGarcia-Moya E, Romero-Manzanares A, Nobel PS: Highlights for Agave productivity. Glob Change Biol Bioenergy. 2011; 3(1): 4–14. Publisher Full Text\n\nXia L, Xia Y, Li B, et al.: Integrating agronomic practices to reduce greenhouse gas emissions while increasing the economic return in a rice-based cropping system. Agr Ecosyst Environ. 2016; 231: 24–33. Publisher Full Text\n\nGarnett T: Plating Up Solutions. Science. 2016; 353(6305): 1202–1204. 
PubMed Abstract | Publisher Full Text\n\nRuhl JB: Farms, their environmental harms, and environmental law. Ecol Law Q. 2000; 27(2): 263–350. Publisher Full Text\n\nInternational Food Policy Research Institute (IFPRI): Global Nutrition Report 2015: Actions and accountability to advance nutrition and sustainable development. IFPRI, Washington, D.C. 2015. Publisher Full Text" }
[ { "id": "18238", "date": "28 Dec 2016", "name": "Mónica E. Riojas", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript addresses three avenues related to the relationship between biodiversity conservation and food production if we are meant to manage our natural resources in a reasonable way to assure the provision of environmental services in the future. These ideas are (1) the importance of the conservation of the ample genetic diversity of domesticated varieties of plants and their wild relatives; (2) the food acquisition habits of the population; and (3), in my opinion, the strongest idea, the integration of the ministries of agriculture and environment into one single entity. The three key ideas are important, but they are not developed to their full expression, and do not connect clearly with the central theme of the manuscript. I suggest that the author develops the first two ideas more comprehensively, and goes into more depth on the third, so he can round out the message he wants to convey.", "responses": [] }, { "id": "18233", "date": "03 Jan 2017", "name": "Enrico A. 
Yépez", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nIn this article, the author transmits a very important message regarding means for bringing biodiversity into people's minds and government agendas (a rather difficult task) by connecting biodiversity to the everyday need for food coming from agriculture. The author presents a positive attitude by listing some examples of the environmental effects of agriculture and the tight connection with biodiversity, which succinctly underlines the need to raise concern for biodiversity as a planetary driver, as we have done in the last two decades with the awareness of climate change.", "responses": [] } ]
1
https://f1000research.com/articles/5-2814
https://f1000research.com/articles/5-2813/v1
05 Dec 16
{ "type": "Software Tool Article", "title": "CanVar: A resource for sharing germline variation in cancer patients", "authors": [ "Daniel Chubb", "Peter Broderick", "Sara E. Dobbins", "Richard S. Houlston" ], "abstract": "The advent of high-throughput sequencing has accelerated our ability to discover genes predisposing to disease and is transforming clinical genomic sequencing. In both contexts knowledge of the spectrum and frequency of genetic variation in the general population and in disease cohorts is vital to the interpretation of sequencing data. While population level data is becoming increasingly available from publicly accessible sources, as exemplified by The Exome Aggregation Consortium (ExAC), the availability of large-scale disease-specific frequency information is limited. These data are of particular importance to contextualise findings from clinical mutation screens and small gene discovery projects. This is especially true for cancer, which is typified by a number of hereditary predisposition syndromes. Although mutation frequencies in tumours are available from resources such as COSMIC and The Cancer Genome Atlas, a similar facility for germline variation is lacking. Here we present the Cancer Variation Resource (CanVar), an online database which has been developed using the ExAC framework to provide open access to germline variant frequency data from the sequenced exomes of cancer patients. In its first release, CanVar catalogues the exomes of 1,006 familial early-onset colorectal cancer (CRC) patients sequenced at The Institute of Cancer Research. 
It is anticipated that CanVar will host data for additional cancers, providing a resource for others studying cancer predisposition and an example of how the research community can utilise the ExAC framework to share sequencing data.", "keywords": [ "exome sequencing", "ExAC", "CanVar", "cancer", "colorectal cancer", "NGS", "Germline", "database" ], "content": "Introduction\n\nWith the widespread adoption of high-throughput sequencing as a tool for disease gene discovery and clinical diagnostics there is a need to evaluate candidate disease predisposition genes through defining the spectrum and frequency of genetic variation in the general population and in specific disease cohorts. For this to be meaningful, large sample sizes are required in order that variant frequencies are accurately defined. Such data is often only acquired through combining multiple datasets. Although these data are being rapidly produced by both large consortia and individual research groups, their acquisition and integration are subject to logistical, computational and ethical challenges. When undertaken by multiple agencies working independently, this work is considerably duplicated, and its products may not be widely shared. It is therefore desirable for large, processed sequencing datasets to be made easily accessible to the community. Recently, a paradigm for sharing has been provided by the Exome Aggregation Consortium1,2 (ExAC). ExAC have aggregated and analysed a set of 60,706 exomes from over twenty different studies, providing this information as an intuitive online resource. The ExAC website presents these data as variant frequencies stratified by different ethnic groups alongside additional sequencing quality metrics and transcript based annotations.\n\nSimilar resources providing frequencies of variants in specific disease associated cohorts are not widely available. 
Such datasets are of particular importance for small-scale studies, where the confirmation of rare variant frequencies in genes of interest is critical to determine the importance of candidate genes. Furthermore, in the case of clinical genetic testing, they aid in the interpretation of variants of unknown significance. This is especially true for cancer, where it is estimated that 5–10% of cases have a strong heritable basis3. The identification of genes involved in hereditary cancers not only provides valuable biological insight but can also allow for screening of at-risk individuals, providing an opportunity for early diagnosis, which is key to long-term survival. To address the deficiency of germline frequency data in the realm of cancer research, we have produced CanVar, an online resource derived from cancer patient germline exome sequencing data. CanVar has been produced by adapting the ExAC framework2 to provide cancer type specific variant frequencies, presenting them in a familiar and intuitive online interface modelled after the ExAC browser.\n\nCanVar currently catalogues frequency data for 1,006 early-onset familial colorectal cancer cases4. In total, 1,096,907 variant sites are catalogued in CanVar: specifically 981,491 single nucleotide variants (SNVs) and 115,416 insertion-deletions (indels). As previous studies have observed, rare variation is itself common; indeed, 52% of these variants are observed in only one sample.\n\nIt is beneficial to be able to compare variant frequencies in cancer cases with those observed in population control data. We have therefore annotated each cancer variant with ExAC allele frequency data excluding samples from The Cancer Genome Atlas (TCGA, n=53,105, henceforth referred to as non-TCGA ExAC). 
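As an illustration of the kind of case-versus-reference comparison this enables, the sketch below computes allele frequencies from summary allele counts and a crude case/reference ratio. All function names and counts here are invented for illustration; this is not CanVar or ExAC code.

```python
# Illustrative sketch only (hypothetical helper names and counts, not CanVar
# code): allele frequency from summary counts, and a crude comparison against
# a background set such as non-TCGA ExAC.

def allele_frequency(allele_count, allele_number):
    """AF = AC / AN, where AN is the total number of genotyped alleles."""
    if allele_number == 0:
        return None  # site not called in this cohort
    return allele_count / allele_number

def enrichment_ratio(case_af, reference_af):
    """Crude case/reference frequency ratio; a formal test would need the
    underlying counts (e.g. Fisher's exact test) rather than this ratio."""
    if case_af is None or reference_af is None or reference_af == 0:
        return None
    return case_af / reference_af

# Hypothetical variant: 12 alternate alleles among 1,006 diploid cases
# (2,012 chromosomes) versus illustrative reference counts.
case_af = allele_frequency(12, 2 * 1006)
ref_af = allele_frequency(30, 106210)
ratio = enrichment_ratio(case_af, ref_af)  # a ratio > 1 suggests enrichment in cases
```

In practice the reference AN varies per site with coverage, which is one reason CanVar exposes call rates and depth metrics alongside the frequencies.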
Links are also provided to the relevant ExAC browser entries at the gene and variant levels in order to assess loss-of-function tolerance and overall gene burden.\n\nCanVar utilises an adapted ExAC framework, providing SNV and INDEL frequency data, and can be accessed via http://canvar.icr.ac.uk. The interface mirrors the ExAC browser available at http://exac.broadinstitute.org/2 and is divided into three main parts: the front page (Figure 1), the gene page (Figure 2) and the variant page (Figure 3).\n\nThe front page (Figure 1) contains a search bar where either genes or individual variants can be queried. Genes are queried by entering either an HGNC gene name or an Ensembl gene ID. Individual transcripts within a gene can also be queried by entering an Ensembl transcript ID. Variants are queried either by dbSNP rsid or by entering the chromosome, position, reference and alternate alleles. Additionally, whole regions can be queried, which opens a page similar to the gene view, providing coverage data and variants present in the queried region.\n\nThe gene page (Figure 2) first provides metadata and external links, followed by a per-base resolution coverage plot on top of the exon-intron structure of the gene of interest. These features default to the Ensembl canonical transcript but different transcripts can either be searched from the front page or selected from a drop-down menu. A table provides frequency information and annotations for each variant identified within the gene, assuming the worst effect in any transcript. The quality of a variant in the gene table is assessed by its filter status, obtained from the variant recalibration step of the GATK pipeline (Methods). To simplify the table display, users can select the cancers of interest. Non-TCGA ExAC frequencies are also displayed for each variant. 
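The different query types accepted by the search bar could be distinguished along these lines. This is an illustrative sketch with made-up patterns and labels, not the actual CanVar/ExAC routing code:

```python
import re

# Hypothetical classifier for search-bar input: dbSNP rsid, chrom-pos-ref-alt
# variant, genomic region, Ensembl gene/transcript ID, or an HGNC gene symbol
# as the fallback. Patterns and return labels are invented for this example.

def classify_query(query):
    q = query.strip()
    if re.fullmatch(r"rs\d+", q, re.IGNORECASE):
        return "dbsnp_variant"
    # chrom-pos-ref-alt, e.g. 7-140453136-A-T (checked before the region form)
    if re.fullmatch(r"(chr)?[0-9XYM]+[-:]\d+[-:][ACGT]+[-:][ACGT]+", q, re.IGNORECASE):
        return "variant"
    # chrom-start-end, e.g. 22-30000000-31000000
    if re.fullmatch(r"(chr)?[0-9XYM]+[-:]\d+[-:]\d+", q, re.IGNORECASE):
        return "region"
    if re.fullmatch(r"ENSG\d{11}", q):
        return "gene_id"
    if re.fullmatch(r"ENST\d{11}", q):
        return "transcript_id"
    return "gene_symbol"  # e.g. an HGNC name such as APC
```

Note that the variant pattern must be tried before the region pattern, since a region query is the same shape minus the allele fields.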
Selecting a variant will open up the appropriate variant page.\n\nA) Metadata and external links, including the ExAC page for a given gene; B) coverage plot and exon/intron structure; C) table containing annotations and variant frequencies for each variant identified within a gene. The ExAC_AF column refers to the frequency from non-TCGA ExAC. The variant table has a menu (C.1) which is used to select which cancer frequencies are displayed. Currently only NSCCG CRC samples are available.\n\nMore detailed quality and frequency information is provided in the variant page (Figure 3). Links are provided to external resources such as the equivalent ExAC page, and users can explore genotype, depth and site quality metrics. The call rate of each variant according to the QC thresholds (Methods) is provided at the top of the page. Care should be taken when interpreting variants with lower call rates as they are typically more likely to be false positives. Annotations particular to different transcripts can be browsed, along with an assessment of loss-of-function variant quality according to the Loss-Of-Function Transcript Effect Estimator (LOFTEE - https://github.com/konradjk/loftee). The frequency of the variant across studies included within CanVar is also provided in a sortable table.\n\nA) Call rate of a given variant; B) metadata and external links, including the equivalent ExAC page; C) quality metrics; D) transcript annotations; E) frequency information in different studies.\n\n\nDiscussion\n\nExAC, the most comprehensive attempt at a large-scale aggregation of sequencing data, has been a great success, proving the usefulness of providing open-access population-level genetic data for the research community. Here we present an adaptation of the ExAC framework to create CanVar, a cancer-specific online resource for germline sequencing data.\n\nCanVar currently provides SNV and INDEL frequency data, with associated annotations. 
As ExAC introduce new features it is anticipated that these will be merged into future versions of CanVar.\n\nThe data currently catalogued in CanVar will provide a valuable resource for researchers investigating genetic predisposition to colorectal cancer and those engaged in the delivery of clinical cancer genetic testing programs. It is expected that the utility of CanVar will increase as additional sequencing data is integrated through a number of different mechanisms: firstly, in-house sequencing of ongoing projects at The Institute of Cancer Research; secondly, applications for publicly available data, e.g. samples deposited in the European Genome-phenome Archive (EGA) and dbGaP; and thirdly, collaborations with others engaged in the germline sequencing of cancer patients.\n\nOnly when the community fully embraces a policy of data sharing will resources such as ExAC and CanVar fulfil their potential. We therefore encourage all researchers engaged in cancer germline sequencing projects to consider sharing their data (email canvar@icr.ac.uk). Where consent or other factors preclude the sharing of individual level data, we encourage others to adopt the ExAC framework to make their data available. To facilitate this we have made our adapted ExAC code available.\n\n\nMethods\n\n\n\n\nImplementation\n\nCanVar is built upon the Python-based framework designed to accommodate the ExAC database, downloaded from https://github.com/konradjk/exac_browser. A full description of the framework’s construction and optimisation is available from the ExAC browser publication2.\n\nBriefly, custom python scripts parse input data into a mongoDB database. 
These data consist of variant calls with VEP annotations (from VCF files) and sample coverage metrics (derived from BAM files) in addition to other annotation data in the form of downloaded flat files from dbSNP (for rsids), Gencode v19 (for transcript and gene structure), dbNSFP (for gene names and aliases) and OMIM (to link to the relevant OMIM entry).\n\nThe python Flask framework is then used to serve variant frequencies and associated annotations from mongoDB to webpages based upon HTML templates.\n\nHardcoded paths contained within the original code were altered and additional changes were made to the provided HTML templates to remove ExAC specific references and to make specific changes in the interface. For example, the gene results page was altered to annotate CanVar frequencies with ExAC frequency data and to allow for multiple studies to be viewed on the same table.\n\nFull installation instructions with all software dependencies are provided at https://github.com/danchubb/CanVar/blob/master/readme.txt. The required python modules, installed using the pip package management system are described in https://github.com/danchubb/CanVar/blob/master/requirements.txt.\n\nCanVar runs on a Dell PowerEdge R310 with 1x Intel i3-540 CPU and 4 GB DDR3 RAM using Apache version 2.4.6. The variant and associated annotation mongoDB files are 55GB in size.\n\nThe CanVar website itself can be accessed using any modern internet browser.\n\nCanVar currently contains summary level exome sequencing data from 1,006 early-onset familial CRC cases4 from the National Study of Colorectal Cancer Genetics (NSCCG)5. All samples had previously undergone quality control, ensuring the removal of those with: non-northern European ancestry, high levels of heterozygosity, sex discrepancy, poor call rate and contamination. The full sequencing and analysis pipeline is described in detail in the dataset’s publication4. 
Briefly: all samples underwent exome capture utilising Illumina's TruSeq 62 Mb expanded exome enrichment kit, followed by sequencing using Illumina HiSeq 2500 technology. Alignment to build 37 (hg19) of the human reference genome was performed using Stampy (v1.0.17)6 and BWA (v0.5.9)7 software. Alignments were processed using the Genome Analysis Tool Kit (GATK v3) pipeline according to best practices8,9. Analysis was restricted to capture regions defined in the TruSeq 62 Mb BED file plus 100bp padding. Combined individual level VCF files generated using the GATK 3 pipeline were assessed using variant quality score recalibration (VQSR). In this step a variant is assigned a tranche which represents the sensitivity threshold required to call a given variant; the higher the tranche, the less confidence is given to a call. Variants are assigned a PASS value if they fall below the 99.0 tranche for SNVs and the 95.0 tranche for indels. Above these values, the sensitivity required for a given variant is reported in increments of 0.1 to provide users with the most accurate assessment of variant quality. The CRC cases were jointly called and subjected to VQSR alongside a larger set of exomes; therefore calls may differ from those reported in previous publications. Finally, each variant was annotated using the Ensembl Variant Effect Predictor (VEP v78)10 before being converted to the summary level site format required by the ExAC framework using custom python scripts.\n\nThe ExAC framework requires individual level variant and coverage files to be converted into specific summary formats before they can be parsed into mongoDB.\n\nVariant frequency and annotation. Individual level vcf files are converted into a summary site format, providing allele count and frequency data for different groups in addition to depth and genotype quality data. 
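The tranche-based filter status described for the VQSR step can be pictured with a small sketch. The function and status-string names here are invented for illustration; the real assignment is performed by GATK's VQSR, not by code like this:

```python
# Illustrative sketch of the tranche thresholds described above: SNVs PASS
# below the 99.0 sensitivity tranche, indels below 95.0; otherwise the
# tranche the variant fell into is reported. Names are hypothetical.

SNV_PASS_TRANCHE = 99.0    # SNVs PASS below this tranche
INDEL_PASS_TRANCHE = 95.0  # indels PASS below this one

def filter_status(tranche, is_snv):
    """Return 'PASS', or the sensitivity tranche the variant fell into."""
    threshold = SNV_PASS_TRANCHE if is_snv else INDEL_PASS_TRANCHE
    if tranche < threshold:
        return "PASS"
    # Tranches above the threshold are reported in 0.1 increments.
    return f"Tranche{tranche:.1f}"
```

A higher reported tranche means a larger fraction of true sites would have to be accepted to keep the call, i.e. lower confidence in that variant.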
For ExAC these groups correspond to ethnic groups, whereas CanVar utilises this facility to group samples into separate phenotypic classes, allowing the expansion of the database to contain data from a variety of malignancies. This process is accomplished using a custom python script https://github.com/danchubb/CanVar/blob/master/vcf_to_site_canvar.py which takes as input a VCF file and a list of the populations (or phenotypes) to which each contained sample belongs. Variant frequencies and VEP annotations are then output according to QC parameters. In order to provide maximum sensitivity for users, only minimal variant QC is imposed: a site must be called in >50% of samples, and an individual sample call must have a depth of >2 reads with GQ >20. All female Y chromosome calls are removed, as are male heterozygous Y and X calls.\n\nCoverage data. Per-base coverage files are generated for each sample using the GATK DepthOfCoverage command. Individual coverage files are then indexed using the tabix tool and average coverage across all captured bases is calculated across all samples using a custom python script: https://github.com/danchubb/CanVar/blob/master/average_coverage_calculate.py.\n\nThe CanVar website is available at: https://canvar.icr.ac.uk\n\nLatest source code: https://github.com/danchubb/CanVar\n\nArchived source code as at the time of publication: 10.5281/zenodo.16801911\n\nLicense: The source code is licensed using the same MIT open source license as ExAC (https://github.com/danchubb/CanVar/blob/master/LICENSE).\n\nRaw alignment (BAM files) data on the 1,006 CRC samples have been deposited at the European Genome-phenome Archive with accession number EGAS00001001666. The availability of individual level data for future datasets included within CanVar will be specific to each study.", "appendix": "Author contributions\n\n\n\nConception and design: Daniel Chubb, Sara E. Dobbins, Peter Broderick, Richard S. 
Houlston. Collection and assembly of data: Daniel Chubb, Peter Broderick, Sara E. Dobbins. Implementation: Daniel Chubb. Manuscript writing: All authors. Final approval of manuscript: All authors.\n\n\nCompeting interests\n\n\n\nThe authors declare no competing interests.\n\n\nGrant information\n\nThis work was supported by grant funding from Cancer Research UK (C1298/A8362), the European Union Seventh Framework Programme (FP7/2007–2013) under grant 258236, FP7 collaborative project SYSCOL, and Bloodwise (LRF05001). All grants assigned to Richard S Houlston.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThanks to Nikolas Pontikos (https://github.com/pontikos/uclex_browser) for his assistance with the ExAC framework and data parsing.\n\n\nReferences\n\nLek M, Karczewski KJ, Minikel EV, et al.: Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016; 536(7616): 285–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKarczewski KJ, Weisburd B, Thomas B, et al.: The ExAC Browser: Displaying reference data information from over 60,000 exomes. bioRxiv. 2016. Publisher Full Text\n\nNagy R, Sweet K, Eng C: Highly penetrant hereditary cancer syndromes. Oncogene. 2004; 23(38): 6445–6470. PubMed Abstract | Publisher Full Text\n\nChubb D, Broderick P, Dobbins SE, et al.: Rare disruptive mutations and their contribution to the heritable risk of colorectal cancer. Nat Commun. 2016; 7: 11883. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPenegar S, Wood W, Lubbe S, et al.: National study of colorectal cancer genetics. Br J Cancer. 2007; 97(9): 1305–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLunter G, Goodson M: Stampy: a statistical algorithm for sensitive and fast mapping of Illumina sequence reads. Genome Res. 2011; 21(6): 936–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKenna A, Hanna M, Banks E, et al.: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010; 20(9): 1297–303. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDePristo MA, Banks E, Poplin R, et al.: A framework for variation discovery and genotyping using next-generation DNA sequencing data. Nat Genet. 2011; 43(5): 491–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLaren W, Pritchard B, Rios D, et al.: Deriving the consequences of genomic variants with the Ensembl API and SNP Effect Predictor. Bioinformatics. 2010; 26(16): 2069–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\ndanchubb : danchubb/CanVar: Canvar code beta 0.1 F1000. Zenodo. 2016. Data Source" }
[ { "id": "18219", "date": "20 Dec 2016", "name": "Laura Valle", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nIn this article Chubb et al.1 describe a software tool (CanVar) that the authors have developed to make publicly available germline variant frequency information from sequenced exomes of cancer patients. Most importantly, they have used the Exome Aggregation Consortium (ExAC) framework, a tool with which the scientific community is currently very familiar, in order to facilitate its access and use. The incalculable value of the open accessibility to the germline variation data obtained from >60,000 exomes provided by the Exome Aggregation Consortium (ExAC) seems finally to be reaching disease-specific cohorts, as is the case with CanVar. Hopefully this will soon become a reality not only for cancer but also for other common diseases. This information, together with the variation frequencies observed in the general population, is key when trying to evaluate the pathogenic relevance of disease-predisposing genes and/or variants, not only for novel candidate genes but also for well-known susceptibility genes.\n\nSo far, the data available through CanVar correspond to the 1,006 exomes of early onset familial colorectal cancer cases recently studied by the same group2. 
While this is a very insightful cohort in the field of colorectal cancer predisposition, much remains to be done to make CanVar a relevant routine tool for the scientific community, and it is the responsibility of all of us to make this possible. The tool is already available, so I encourage all researchers with germline exome sequencing data in cancer patients to submit their data to CanVar, as a larger representation of tumor types, populations and patients in general is required. I also would like to encourage researchers to use these extremely useful data in their cancer predisposition studies and to increase the visibility of CanVar among their colleagues and peers.\n\nDespite the so far limited availability of germline exome sequencing data from cancer patients, a huge amount of data has been gathered in the last years from genome-wide association studies and exome SNP arrays. This information would be of value if added to CanVar, at least the variants included in exome arrays and rare exonic variants included in genotyping arrays.\n\nAnother issue that needs to be considered is the implementation of filters for ethnicities/studies, anticipating the inclusion of data from other groups. Alternatively, as occurs in ExAC, data could be itemized by ethnicity/geographic origin and study.", "responses": [] }, { "id": "18681", "date": "21 Dec 2016", "name": "Pavel Vodička", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe study of Chubb et al. 
presents an online database and software tool, the Cancer Variation Resource (CanVar), developed on the basis of the Exome Aggregation Consortium (ExAC) framework (sequenced exomes of cancer patients). The main aim of the database is to enable open access to germline variant frequencies. CanVar focuses on colorectal cancer, as it summarizes exome sequencing data from more than 1,000 familial early-onset patients with this disease. Strikingly, the CanVar database catalogues almost 1.1 million variants, including more than 100,000 insertions/deletions. An additional advantage for the user is the data on associated annotations of variants and insertions/deletions.\nThe information that may be acquired from the published database may provide valuable insight into gene variants and indels in populations in a disease-specific context. The data that could be mined with the help of the present database may also find use in clinics, particularly in the context of mutational screening in cancer, which is becoming routine.\nA sentence in the Introduction (When undertaken by multiple agencies…) would benefit from re-phrasing into a more reader-friendly form.", "responses": [] }, { "id": "18217", "date": "03 Jan 2017", "name": "Wolfgang Huber", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nShort synopsis. The authors describe an online resource for exploring disease-associated germline variants. 
The Cancer Variation Resource (CanVar) browser is inspired by the ExAC browser at the Broad Institute. Currently CanVar is limited to those germline variants that were identified as risk-associated in a study of 1,006 familial early-onset colorectal cancer (CRC) patients published by the authors in Nature Communications in 2016.\nOverall impression. CanVar is a useful tool for mining variants implicated in CRC, and the authors do a good job at explaining how to use the resource. The authors also provide some background information on the underlying data, methodology and technologies. We have only a few minor suggestions as to how its presentation could be improved.\nSuggestions. In the Introduction, the sentence “CanVar has been produced by adapting the ExAC framework” is a bit vague. You could be clearer what is meant by framework: software? APIs? some or all of the concepts, and which? datasets?\nAbstract and introduction can be confusing (at least to the rushed reader, of which there are many) as to whether CanVar also contains or interfaces to the ExAC 60,000 exome data on top of the CRC data. This is clarified in the “CanVar datasets” subsection, but in our view this should be clarified earlier. For instance, abstract and introduction talk more, and earlier, about ExAC than about the data that are actually contained in CanVar; in the “CanVar website” section, links to the ExAC browser and to the CanVar website are provided side by side, which might lead to further confusion. Since both websites are almost look-alikes, readers might even be led to expect that both sites might also mirror each other. Perhaps, it would be better to provide the links in a more asymmetric manner.\nIn “CanVar datasets” it is mentioned that each variant is annotated “with ExAC allele frequency data excluding samples from the TCGA”. 
Perhaps, a short explanation of why this has been done could be provided, as not all readers may be familiar with the source of non-TCGA samples in the ExAC dataset.\nIt may be time to remove the “Beta” state from the resource. Bring it to a good enough state to warrant release, and then do not be afraid to update it later with new releases.", "responses": [] }, { "id": "18218", "date": "06 Jan 2017", "name": "James Ware", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nSummary & Impression The authors describe an online resource for exploring germline cancer variants. The resource currently contains data from 1,006 samples representing a single cancer type, a single ethnicity, and a single centre. The authors have invited collaborators to help expand this to other cancer types, which will, over time, add further value to what is already an excellent tool.\nWe share the authors’ enthusiasm for intuitive data sharing, and agree that presenting variant frequencies from disease cases, as well as reference samples, is hugely valuable. Overall, the manuscript is very clear, and we anticipate that the intuitive web resource will be well received.\nWe have a couple of high-level comments, and some minor suggestions for the authors to consider.\nComments\nThe authors describe two uses for this data: variant-level analyses (i.e. interpreting individual variants, primarily in established disease genes), and gene-level analyses (assessing candidate disease predisposition genes).
The resource in its current form is primarily suitable for the first. The case data and control data are unlikely to be technically matched sufficiently for case/control association testing at the gene level (burden tests), and the authors wisely do not provide this sort of comparative data on the gene page. So, while variant frequency data is important in interpreting genes, in our opinion the present resource is primarily valuable for variant interpretation.\n\nA critical strength of the ExAC project was the joint and unified analysis of aggregated data. The authors describe adopting the “ExAC framework”, but at present this represents data from only a single source. As well as adopting the ExAC web architecture, it would be interesting to hear the authors' plans for data analysis going forwards - will CanVar seek to harmonise variant calling and analysis on data from disparate sources as it grows?\n\nSuggestions for consideration\nReaders are likely to know ClinVar as the go-to resource for germline variants in inherited diseases. It may be worth highlighting the complementary value of ClinVar & CanVar - i.e. the addition of consistent frequency data.\n\nI would provide a little more detail on the case series in the section on “CanVar datasets” (introduction). In particular, are cases all unrelated probands? I would add here that cases are all European (given in methods). Drs Huber & Kim note that variants were limited to “germline variants that were identified as risk-associated” - I did not appreciate this from the manuscript, and think this is important to note if the dataset does not include all variants in protein-coding regions.\n\nGene view\nAre variants annotated with ExAC frequencies even if they do not ‘PASS’ filters in ExAC? It may be helpful to display the ExAC filter status as well as the CanVar filter status.\n\nIt would be helpful to indicate whether variants absent in ExAC were well covered - i.e.
give some summary measure of ExAC coverage for all variant sites, since capture platforms and coverage profiles may be very different\n\nWe understand that ExAC_AF on this page is the non-TCGA ExAC frequency. Is this ethnicity-matched too (since samples are all European)?\n\nVariant view\nIt would be invaluable to incorporate ExAC frequencies into the frequency table (Fig 3e) - especially since the non-TCGA data is not accessible via click-through to the web browser (only by download). Users may be misled by a link to the full ExAC dataset (with TCGA included).\n\nAs data is added, it would be desirable to stratify by cancer type AND ethnicity\n\nThe call rate is reported with 12 decimal places of precision\n\nMethods\n\"Curation of colorectal cancer exome data within CanVar\" It may be helpful to indicate which of the technical parameters differ from ExAC where relevant.\n\nIt would be interesting to hear about any challenges encountered in reconciling the data sets that may be relevant to others attempting something similar - Were there any problems with multi-allelic sites? e.g. GATK filters by site, rather than variant. Are multi-nucleotide polymorphisms phased and jointly interpreted? Any other technical challenges?\n\nData availability\nIs the sites-only vcf available for download? This may be useful in addition to raw data available via application to EGA.\n\nA source code link could be added to the final sentence of the discussion.", "responses": [] } ]
1
https://f1000research.com/articles/5-2813
https://f1000research.com/articles/5-1442/v1
21 Jun 16
{ "type": "Software Tool Article", "title": "Bringing your tools to CyVerse Discovery Environment using Docker", "authors": [ "Upendra Kumar Devisetty", "Kathleen Kennedy", "Paul Sarando", "Nirav Merchant", "Eric Lyons", "Kathleen Kennedy", "Paul Sarando", "Nirav Merchant", "Eric Lyons" ], "abstract": "Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse’s Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse’s production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which will not only allow users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but will also help them share their apps with collaborators and release them for public use.", "keywords": [ "CyVerse", "virtualization platform", "Discovery Environment" ], "content": "Introduction\n\nCyVerse (formerly iPlant Collaborative)1 is the national life sciences cyberinfrastructure funded by the National Science Foundation (NSF). The infrastructure’s purpose is to scale science, domain experts and knowledge by providing a variety of computational tools, services, and platforms for storing, sharing, and analyzing large and diverse biological datasets.
In addition, CyVerse provides a variety of resources to train scientists with diverse backgrounds to make the best use of CyVerse’s infrastructure and leverage advanced computational resources, including high-performance and cloud computing. The Discovery Environment (DE) in CyVerse provides a modern web interface for running powerful computing, data, and analysis applications. By providing a consistent user interface for accessing the tools and computing resources needed for specialized scientific analyses, the DE facilitates data exploration and scientific discovery. Because much of the complexity is hidden from the user, the DE makes it easy for non-technical users to run their analyses and for computationally savvy users to share their apps with collaborators. Scientists do not need to master command-line analysis tools or learn new software for every type of analysis. All aspects of bioinformatics data management and analysis may be handled within the DE.\n\nIt is common in bioinformatics to build new analysis methods utilizing multiple programs, libraries, and modules, e.g., SAMtools or R with Bioconductor. However, each analysis that uses these tools requires specific versions of the operating system and underlying programs, such as Ubuntu version 14.04, Bioconductor version 3.2, R version 3.2.2, and SAMtools 1.3. In order to reproduce results, the same versions of software are often required, including supporting libraries and the underlying operating system. This delicate balance of dependencies, often called Dependency Hell, adversely impacts the reproducibility of analyses, and makes it challenging to share programs, workflows and analysis methods with collaborators and users who do not have access to identical systems. In the past, these issues have made it challenging for users to integrate new applications and analysis methods in the DE, as the underlying execution platform could only support a limited number of versions of the same software.
For example, if your program expects to find BWA program version 0.7.122 in /usr/local/bin, but another program expects version 0.7.13, the two versions could not coexist without modifications to your code. With advances in container-based virtualization technology, these issues are now easily resolved, with support for customized execution environments for every analysis.\n\nDocker3 is a container technology that wraps software of interest (e.g., a bioinformatics tool) together with all its software dependencies so it can run in a reproducible manner regardless of the environment. Compared to the previous method of tool integration in the DE, Docker images allow users to install multiple versions of software on the same system, streamlining the application integration process and ensuring that the final DE app will function as the user intended while developing it on their own compute platform (desktop, laptop or server). It also enables more complicated and difficult-to-install software (e.g., software with many additional dependencies) to be easily integrated in the DE. Even though containerizing applications has its own advantages, it does have certain limitations, such as limited access to large data or no web user interface. The DE helps solve some of these limitations by providing a web interface for integrated data management and task execution, while also streamlining the ability to upload, organize, edit, view, and share data and analyses with collaborators.\n\nCyVerse has adopted Docker for integrating software apps that run in the CyVerse DE’s Compute Cluster, which uses HTCondor for its resource-job-management system (RJMS). The user creates a Dockerfile, which is sent to CyVerse and used to build the Docker image containing the tool. After the image has been deployed on the DE’s compute cluster, the user can build a web app in the DE to use the tool.
The Docker engine runs three different containers (Figure 1) in the DE:\n\nThe data staging-in container delivers the data on which you want to operate from its location in the Data Store.\n\nThe app container, based on your integrated Dockerized tool, runs with the data visible to it as a union file system.\n\nThe data staging-out container returns data from the analysis that uses the app to the Data Store.\n\nThis compartmentalizes each major step of data movement and running analyses so that updates to each part can happen more easily.\n\n\nMethods\n\nSteps for Dockerizing a tool in the DE: Before you can use a Dockerized image in DE, you must complete a few prerequisites:\n\nInstall Docker and any other dependencies:\n\n– Linux: The installation procedure involves the use of tools such as curl, or the use of APT (Advanced Package Tool) and Yum repositories for your installation.\n\n– Mac OS X and Windows: Docker Toolbox is a quick and easy way to install and set up a Docker environment for Mac OS X and Windows.\n\n– Virtual Machine: Docker can be installed in a virtual machine environment through Virtual Box or Kitematic, which runs containers through a simple and powerful user interface.\n\nEnsure the tool you want to Dockerize is available from a reliable URL:\n\nA reliable source is a website that hosts files/binaries necessary for tool installation and which can be relied upon for future builds (e.g., ubuntu apt repos, redhat/centos yum repos, GitHub).
An unreliable source is a public Dropbox link, a lab computer, a personal computer, etc.\n\n– If the tool and all its executables are available from reliable sources, use that URL for Dockerization of the tool.\n\n– If installation files cannot be retrieved from a reliable source, they should be version controlled with the Dockerfile by deposition in a code repository such as GitHub.\n\nThe following steps (Figure 2) along with this video tutorial serve as a guide for Dockerizing a tool in the DE.\n\nSTEP 1: Check if the tool and correct version are already installed in the DE: Before requesting installation of a new tool or new version of an existing tool, check the list of all tools in the DE to make sure that the tool and version you want is not already available:\n\n1. Log in to the Discovery Environment by going to https://de.iplantcollaborative.org/de/, entering your CyVerse username and password, and clicking LOGIN. If you have not already done so, you will need to sign up for a CyVerse account. If you need to reset your password or forgot your CyVerse username, click Need to reset your password? and complete the form.\n\n2. Click the Apps icon to open the Apps window.\n\n3. Click the Apps menu item and then click Create New.\n\n4. In the Tool Used field in the middle section, click the search icon to open the Installed Tools window.\n\n5. In the search field, enter the first few letters of the tool name and then click the browse button, or scroll through the list until you find the tool to use.\n\nAll tools now run inside Docker containers and are installed as Docker images in the DE. If the tool you want is not already available, you must first create a Dockerfile for the tool before requesting its installation in the DE.\n\nSTEP 2: Create the Dockerfile Construction of each tool’s image needs to be reviewed in order to ensure that it is reproducible and contains no security issues.
The Dockerfile satisfies this goal of documenting how a tool and its software dependencies, including the base operating system, were installed. It is recommended that you adhere to the Docker community’s specific set of instructions. Additionally, follow these CyVerse Dockerfile best practices:\n\nInclude all installation steps in the Dockerfile. For example, a Dockerfile should not copy and run a script that performs Ubuntu APT commands; instead, the APT commands should be in the Dockerfile.\n\nWrite the Dockerfile to fail fast. This means that if anything goes wrong, construction of the image will fail immediately and the Docker image will not be deployed.\n\nEncapsulate the tool execution to avoid external dependencies unless by design (e.g., tool provides integration with external service).\n\nDerive the tool from an official Docker image (e.g., ubuntu:14.04.3 for the operating system).\n\nSample Dockerfile: hisat2—Dockerfile for installing hisat2 in a Docker container based on the Ubuntu:14.04.03 image:\n\n\n\nSTEP 3: Test the Dockerized tool Before you request the installation of the Dockerized tool you should test the new image, using the docker run command:\n\nYour tool will most likely require inputs and produce outputs. If the tool’s image was built from a Dockerfile with an ENTRYPOINT and tagged with your/docker-image, then place some test input files into a scratch directory (e.g., ~/my-scratch-dir) and run a command like the following in that directory:\n\nThe -v option mounts the current working directory on the host machine into the /working-dir directory inside the container. The -w option sets the working directory inside the container to that same /working-dir directory.\n\nNote:\n\nThe DE will run a tool’s Docker image using a combination of the docker run -w and -v flags in order to mount the Condor node’s working directory to some arbitrary working directory inside the container.
All inputs will be placed inside this working directory and the DE expects the tool to generate outputs in this working directory as well.\n\nAdditionally, all arguments specified by the user in the DE’s app interface will be passed as command-line arguments to the docker run command following the your/docker-image name. From the example command above, these would be user-input-1 user-input-2 .... Exceptions are the Environment Variable fields, which will be passed to the docker run command as -e flags.\n\nReference Genome/Sequence/Annotation input arguments are passed to the tool differently from other arguments, so if your tool requires these types of inputs, please inform the team of this requirement when you request installation of the tool.\n\nIf the tool’s container produced outputs in that host’s scratch directory, then this tool should be ready for the DE.\n\nSTEP 4: Submit the request for installation of the Dockerized tool\n\n1. In the DE Apps window, click Apps and then click Request Tool.\n\n2. In the Request New Tool Installation window:\n\n• Enter the name of the tool (executable or binary).\n\n• Enter a brief description of the tool as you want it to appear in the Apps window information section for apps that use this tool.\n\n– Enter the attribution for the tool, such as the person who created the tool (optional).\n\n3. To submit your Dockerfile, in How do you want to submit data for your tool’s source, either:\n\n• Click Enter a link and enter or paste the URL.\n\n• Click New upload and browse to upload the file from your personal folder.\n\n• Click Select from existing and then browse to the source file location in the DE Data Store.\n\n4. Enter the tool’s version.\n\n5. Select the software architecture for the tool.\n\n6. Specify if the file is multi-threaded. For information on threading, see Thread (computing) on the Wikipedia website.\n\n7. 
If necessary, click to expand the Other Information section:\n\n• In the How do you want to submit additional data field, select either New upload (to upload the file to your personal Data folder), or Select from existing (to select a file that already exists in your personal Data folder), and then browse to select the first test data file. If choosing New upload, the filename must be unique.\n\n• Enter instructions for how to use the tool in the Unix environment.\n\n8. To upload a second test data file, click Browse in the How do you want to submit additional data field and browse to select the second file (optional).\n\n9. Enter any additional information that might be useful (optional).\n\n10. Click Submit.\n\nNote: Reference Genome/Sequence/Annotation input arguments are passed to the tool differently from other arguments, so if your tool requires these types of inputs, please include that requirement on the form when you request installation.\n\nYour request is sent to CyVerse Support. When the new tool is installed, you will receive an email from CyVerse Support.\n\nSTEP 5: Create and save the new app interface in the DE Once the Dockerized tool is installed, you can create the DE app interface for the tool. The Create App window consists of four distinct sections (Figure 4):\n\nThe first section contains the different app items that can be added to your interface. To add an app item, select the one to use (hover over the object name for a brief description) and drag it into position in the middle section.\n\nThe second section is the landing place for the objects you dragged and dropped from the left section, and it updates to display how the app will look when presented to a user.\n\nThe third section (Details) displays all of the available properties for the selected item.
As you customize the app in this section, the middle section updates dynamically so you can see how it will look and act.\n\nFinally, the fourth section at the bottom (Command line view) contains the command-line commands for the current item’s properties. As you update the properties in the Details section, the command-line view updates as well to let you make sure that you are passing the correct arguments in the correct order.\n\nCreating a new app interface requires that you know how to use the tool. With that knowledge, you create the interface according to how you want options to be displayed to a user. An app interface in the DE is arranged in a hierarchy of two main pieces:\n\nThe framework consists of one or more groups. A group creates a conceptual boundary in the interface to organize options. For example, many DE apps have a first panel called File Inputs.\n\nInside each group, you add those user interface objects you need to facilitate the collection of user inputs. There are a number of different user interface objects from which to select, including input file fields, selection and checkbox fields, text and numerical input fields, and output file fields.\n\nIn the example below (Figure 5), we see three groups, with the Select input data group expanded.\n\nAt any step in creating or editing an app, you can preview the app and its underlying JSON code. Previewing the app allows you to see how it will appear to a user, and to test how the app looks and functions. You also can preview and save the JSON code to a txt file.\n\nOnce you begin creating the app, it is highly recommended that you save your app frequently.
Once saved, the app is available immediately in your workspace Apps under development folder (but not available to others to use).\n\nSTEP 6: Test your app in the DE After creating the new app according to your design, test your app in your Apps under development folder in the DE to make sure it works properly.\n\nIf your app works the way you expect it to, skip to Optional steps, below.\n\nIf your app still needs a bit of work or if you need to make changes to your Dockerfile, go back to Step 2 and repeat.\n\nOptional steps: Complete the additional steps as needed for your tool.\n\nEditing an unshared app: If you have not yet shared the app with the public (that is, it is still listed in your Apps under development folder in your personal workspace), you can modify its interface and create a new Dockerfile. If you create a new Dockerfile, email CyVerse Support to have it updated in the DE.\n\nSharing (publishing) your app in the DE: Once the app is working to your satisfaction, you can share it with your collaborators or make it public for anyone to use. To share it with other users or make it public, see Sharing your App or Workflow and Editing the User Manual.\n\nDeleting/editing a publicly shared app: After an app has been made public, it cannot be deleted because of CyVerse’s commitment to supporting reproducible science. You can, however, make a new version of the app and/or Dockerfile. Additionally, if you want to modify the app (e.g., expose more options in the interface), you can make a copy of the app and then modify that copy’s interface.\n\nRequesting a different category for your app: When you share your app with the public, you will indicate the category or categories where it may be found.
To request that your app be moved or added to a different or additional category, email CyVerse Support with the app name, current category or categories, and desired target category or categories.\n\n\nUse cases\n\nBefore you Dockerize a tool, it is important that you understand program dependencies (check the program documentation/manual thoroughly).\n\nUse case 1: Dockerizing a simple bioinformatics tool — Kallisto The Kallisto Docker image was built on an Ubuntu-64 bit Virtual Machine using Virtual Box.\n\n1. Install Docker (see the Methods section).\n\n2. Create the Dockerfile.\n\n\n\n3. Build the image.\n\n\n\n4. Test the built Kallisto image.\n\n\n\nUse case 2: Dockerizing a bioinformatics tool - ParaAT The ParaAT Docker image was built on Mac OS X using Docker Toolbox (quick start terminal).\n\n1. Install Docker Toolbox for Mac OS X (see Methods section)\n\n2. Create a paraAT git repo on GitHub\n\n3. Create a paraAT git repo locally on your computer\n\n\n\n4. Clone the paraAT git repo to local\n\n\n\n5. Download the paraAT files, and then add and commit them to the local paraAT GitHub repo folder\n\n\n\n6. Push the local paraAT repo to the remote repo\n\n\n\n7. Create a Dockerfile\n\n\n\n8. Build the paraAT image\n\n\n\n9. Test the paraAT image\n\n\n\n\nSummary\n\nThe CyVerse Discovery Environment provides a simple yet powerful web portal for managing data, analyses, and workflows. The recent adoption of Docker for deploying new tools in the DE makes it easier and faster for users to incorporate and deploy their own tools in the DE with minimal effort. In addition, users can more easily share their tools, workflows, and knowledge with other researchers.\n\n\nData and software availability\n\n1. URL link to the Kallisto Dockerfile along with the test data - https://github.com/iPlantCollaborativeOpenSource/docker-builds/blob/master/kallisto/Dockerfile, doi: 10.5281/zenodo.538344\n\n2.
URL link to the ParaAT Dockerfile along with the test data - https://github.com/jdebarry/paraat, doi: 10.5281/zenodo.538615", "appendix": "Author contributions\n\n\n\nUpendra Kumar Devisetty wrote most of the manuscript. Example Dockerfiles were provided by Kapeel Chogule and Jeremy DeBarry. All authors helped prepare the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nCyVerse is funded by NSF award numbers DBI-0735191 and DBI-1265383.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe thank Roger Barthelson for helping with the video tutorial and members of the DE team for their comments and suggestions regarding the manuscript.\n\n\nReferences\n\nGoff SA, Vaughn M, McKay S, et al.: The iPlant Collaborative: Cyberinfrastructure for Plant Biology. Front Plant Sci. 2011; 2: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–1760. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMerkel D: Docker: Lightweight Linux containers for consistent development and deployment. Linux J. 2014; (239). Reference Source\n\nDevisetty UK, Kennedy K, Sarando P, et al.: F1000Research/docker-builds. Zenodo. 2016. Data Source\n\nDevisetty UK, Kennedy K, Sarando P, et al.: F1000Research/Paraat Dockerfile. Zenodo. 2016. Data Source" }
[ { "id": "14493", "date": "01 Jul 2016", "name": "Steven B Cannon", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThis Software Tool Article describes the way Docker has been incorporated into the CyVerse Discovery Environment.\n\nThe article makes a good case for the use of Docker for reproducibility, to circumvent \"dependency hell\", and to enable more efficient sharing of complex software stacks.\nThe article is clearly written, with nice examples and use cases.\nPoints recommended for the review:\n- The title is appropriate.\n- The abstract provides an adequate summary of the article.\n- There is a clear description of the use of Docker in the CyVerse DE environment, with clear examples and sample code.", "responses": [] }, { "id": "16925", "date": "18 Oct 2016", "name": "Thomas S. B. Schmidt", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nIn this Software Tool Article, the authors describe how the CyVerse Discovery Environment has been extended to integrate custom/user-provided (bioinformatics) applications using Docker.
The article is clean and concise, contains several examples and downloadable data/code to recapitulate the described use cases. As an application note, the article provides an informative entry point for professional users and tool developers to get acquainted with CyVerse DE.\nIn general, reproducibility is a very topical issue in various fields of life science research, and the use of approaches such as Docker-based environments presents one way of tackling this. In this sense, the present article and the tool(s) it describes are timely and of use for the community.\nMy only (minor) criticism is that in some parts the authors use jargon and overly technical language which might make the text less accessible to non-expert readers (\"Dependency Hell\", \"BWA program\", etc.). However, given that the main target audience are arguably bioinformatics tool developers, this is only a very minor point.\nOverall, I believe that the article and the work presented therein are a useful and constructive contribution to the field, and I am confident that the tools provided may greatly facilitate everyday research work for a wide range of potential users.", "responses": [] }, { "id": "16927", "date": "14 Nov 2016", "name": "Robert Cannon", "expertise": [], "suggestion": "Approved", "report": "Approved\n\nThe authors present a clear description of how to use Docker to add tools to the CyVerse Discovery Environment.
As a relatively recent development, Docker is having a major impact on the way many businesses deploy and update software for exactly the reasons described in the article. Clearly, the scientific use case is just as compelling in providing a way to deploy different tools on the same machine where they may have incompatible dependencies.\n\nThe steps for Dockerizing a tool and the various sample Dockerfiles provide a helpful guide to the whole process in the context of the CyVerse infrastructure.", "responses": [] } ]
1
https://f1000research.com/articles/5-1442
https://f1000research.com/articles/5-2810/v1
02 Dec 16
{ "type": "Research Article", "title": "Identification of genetic pathways driving Ebola virus disease in humans and targets for therapeutic intervention", "authors": [ "Daniel A. Achinko", "Anton Dormer", "Mahesh Narayanan", "Elton F. Norman", "Muneer Abbas", "Anton Dormer", "Mahesh Narayanan", "Elton F. Norman", "Muneer Abbas" ], "abstract": "Introduction: LCK gene, also known as lymphocyte-specific proto-oncogene, is expressed in lymphocytes, and associated with coordinated expression of MHC class I and II in response to physiological stimuli, mediated through a combined interaction of promoters, suppressors, and enhancers. Differential usage of LCK promoters, transcribes dysfunctional transcript variants leading to leukemogenesis and non-induction of MHC class I gene variants. Viruses use C-type lectins, like CD209, to penetrate the cell, and inhibit Pattern Recognition Receptors (PRR), hence evading immune destruction. Given that Ebolavirus (EBOV) disease burden could result from a dysfunctional LCK pathway, identification of the genetic pathway leading to proper immune induction is a major priority. Methods: Data for EBOV related virus samples were obtained from Gene Expression Omnibus database and RMEAN information per gene per sample were entered into a table of values. R software v.3.3.1 was used to process differential expression patterns across samples for LCK, CD209 and immune-related genes. Principal component analysis (PCA) using ggbiplot v.0.55 was used to explain the variance across samples. Results: Data analyses identified three viral clusters based on transmission patterns as follows: LCK-CD209 dependent, LCK-dependent specific to EBOV, and CD209 dependent. Compared to HLA class II gene variants, HLA class I (A, B and C) variants were <2 fold expressed, especially for EBOV samples. 
PCA analyses classified TYRO3, TBK1 and LCK genes independently of the data, leading to identification of a possible pathway involving LCK, IL2, PI3k, TBK1, TYRO3 and MYB genes with downstream induction of immune T-cells. Discussion: This is the first study undertaken to understand the non-functional immune pathway leading to EBOV disease pathogenesis and high fatality rates. Our lab currently exploits cutting-edge genetic technology to understand the interplay of the identified genes required for proper immune induction. This will guide antiviral therapy and possible markers for viral disease identification during outbreaks.", "keywords": [ "Ebolavirus", "LCK", "HLA", "viruses", "T-cells", "GEO", "pathways", "pathogenesis" ], "content": "Introduction\n\nEbola virus (EBOV), a known member of the filoviruses, causes hemorrhagic fever related epidemics, resulting in disease severity and high case fatality rates (Sanchez et al., 2006). The severity of the illness is an end result of pathogenetic and complex mechanisms exploited by the virus to suppress immune responses, both innate and adaptive (Hoenen et al., 2006). EBOV remains an important threat to public health, specifically in central Africa, but also worldwide, due to imported infections and fear of misuse in biological terrorism. Of the four known EBOV species (Zaire, Sudan, Cote d’Ivoire and Reston), only the Zaire and Sudan species have resulted in large epidemics with intense hemorrhagic fever (Johnson et al., 1977; Georges et al., 1999). Due to several outbreaks and high fatality rates recorded in many Zaire EBOV cases, this species is the most extensively studied (Mahanty & Bray, 2004). Infections from EBOV induce a response to systemic inflammation resulting in dysfunctional patterns in blood clotting and in the vascular and immune systems. This systemic imbalance leads to failure in several body organs, a phenomenon similar to septic shock (Achinko & Dormer, 2015). 
Fatalities resulting from EBOV infections, combined with the lack of treatment and approved vaccines, keep EBOV on the list of the most important public health and category A bio-threat pathogens (Hoenen et al., 2006). Although much work is currently ongoing to identify the genetic mechanisms exploited by the virus in its host leading to pathogenesis, little is currently known of its pathogenetic pathway.\n\nEnveloped EBOV particles consist of an RNA genome, which is negatively stranded and non-segmented. The fourth gene of the linear genome encodes different and specific glycoproteins, with the precursor for the viral soluble nonstructural glycoprotein (pre-sGP) known as the main product of this gene. After posttranslational cleavage by furin, pre-sGP matures to sGP and delta peptide, which are both secreted from infected cells, with sGP also identified in the serum of infected individuals (Sanchez et al., 1996; Volchkova et al., 1998). Among the Mononegavirales, EBOV is the only virus to produce a soluble glycoprotein (GP) (Volchkova et al., 1998). sGP has been associated with viral pathogenesis, due to its relative abundance in circulation and the intense bystander apoptosis of non-infected B- and T-cells through a previously unknown mechanism, which suggests a possible interaction between sGP and important related immune cells (Baize et al., 2000; Geisbert et al., 2000). An attempt by Wolf et al. (2011) to prove this concept using recombinant sGP alone or together with death receptors (tumor necrosis factor-related apoptosis-inducing ligand and FAS) showed neither a decrease nor an increase in apoptosis in Jurkat cells (a human lymphocytic cell line), exempting it from bystander apoptosis induction.\n\nThe known EBOV entry model leading to infection occurs through minute skin and mucosa lesions of the host, further infecting macrophages and dendritic cells (DCs) as the main target cells (Schnittler & Feldmann, 1998). 
In mice, macrophages are primary targets (Bray et al., 1998; Gibb et al., 2001) and may be infected as early as two days post infection in non-human primates (Geisbert et al., 2003; Ryabchikova et al., 1999). Several protein molecules may play a role in cellular attachment for EBOV, including: b1-integrin receptor, DC specific C-type lectins or lymph node/liver specific (L) intercellular adhesion molecule 3 grabbing nonintegrin (DC-SIGN and L-SIGN, respectively; CD209), C-type lectin of the sinusoidal cells of liver and lymph nodes, C-type lectin with galactose and N-acetylgalactosamine specific to human macrophages, and factors related to DC-SIGN (Hensley et al., 2005). During an innate immune response from EBOV infection, the interferon (IFN) response is considered very critical for disease outcome in mice infected by EBOV-wild-type (Bray, 2001), and mice will die within a week when treated with anti-IFN antibodies. Mice that lack receptors for IFN-a/b or signal transducer and activator of transcription-1 (STAT-1), which are necessary for IFN signaling, are susceptible to EBOV infection (Bray, 2001). The block of IFN signaling was also associated with p38 phosphorylation inhibition, a central molecule for the mitogen activated-protein-kinase (MAPK) signaling pathway linked to IFN. Comparatively, DCs, which are crucially involved with both innate and adaptive immunity, do not seem to properly execute their function post-EBOV infection. In vitro demonstrations showed that DCs infected with the virus failed to generate pro-inflammatory cytokines and expression of co-stimulatory molecules, like CD80 or CD86, resulting in the absence of T-cell proliferation and abnormal maturation (Bosio et al., 2003; Mahanty et al., 2003). 
EBOV infection in humans shows evidence of intravascular apoptosis and loss of mRNA for CD3, CD8 and T-cell receptor (TCR)-Vb from peripheral blood mononuclear cells (Baize et al., 1999).\n\nIn our last study (Achinko & Dormer, 2015), we showed that the Lymphocyte Tyrosine Kinase (LCK) was dominantly associated with immune related functions in an EBOV and host protein interaction map. With this information, further studies on LCK guided our understanding of the role it plays in EBOV infection. LCK is involved in a pathway for TCR activation (Palacios & Weiss, 2004) and is dominantly expressed in T-cells with localization in microdomains. It plays crucial roles in several immune related processes, such as activation of early immune events, cell cycle progression, differentiation of the thymus and Th1/Th2 cells (Yamashita et al., 1998), homeostatic proliferation and apoptosis of naïve T-cells. LCK is organized in domains: a C-terminal kinase domain, homology domains to Src (SH4, SH3 and SH2) and the N-terminal region. SH3 is important in protein-protein interactions and recruits downstream molecules like phosphatidylinositol-4,5-biphosphate 3-kinase catalytic subunit (PI3K) and MAPK (Togni et al., 2004). LCK protein expression significantly decreases in different types of cancers (Majolini et al., 1999) and type 1 diabetes (Nervi et al., 2002), but it has not been associated with chronic viral related diseases like EBOV. The human LCK gene is 14kb in size with 12 exons, implicating it in alternative splicing events (Modrek et al., 2001). Two structurally different promoter regions (distal and proximal), separated by ~35kb, control the gene (Takadera et al., 1989). Through differential usage of each promoter, type I mRNA is produced from the proximal promoter and type II mRNA from the distal promoter. 
Alternative splicing at the distal promoter, with about five different start sites, produces three different mRNAs, two of which are dominantly oncogene-related, leading to a loss of interaction with CD4 and CD8 (Huse et al., 1998), and CD3/TCR (Goldman et al., 1998). Interleukin (IL)2 is a T-cell growth factor and is known to induce human peripheral T-cell lymphocytes to re-enter the cell cycle. Due to a lack of intrinsic catalytic activity at the IL2 receptor, early responses to IL2 stimulation originate from cytoplasmic enzymes with receptor association. It has been shown in peripheral blood lymphocytes that IL2 is stimulated through a tyrosine phosphorylation-dependent pattern engaging the LCK SH2 domain (Vogel & Fujita, 1995).\n\nThe evidence associating LCK with EBOV and immune activation prompted the design of this study, exploring data from the Gene Expression Omnibus (GEO) database (Wilhite & Barrett, 2012) hosted at the National Center for Biotechnology Information (NCBI; http://www.ncbi.nlm.nih.gov/), which was created in 2000 for storing high throughput gene expression data, exploited for research and dissemination (Edgar et al., 2002). Functional genomics datasets deposited here are both from microarray experiments and sequence-based technologies (Wilhite & Barrett, 2012).\n\nThe only EBOV data documented in GEO is from the Zaire species. Through NCBI Entrez Query (Sayers et al., 2009), related EBOV data, with humans as hosts, was obtained by the present study and the relative mean (RMEAN) values of genes per sample were used to perform expression variation analysis across datasets, which grouped samples in three clusters based on LCK and/or CD209 expression. All samples showed a low expression of major histocompatibility complex (MHC) class I human leukocyte antigen (HLA) immune genes, which are critical for viral clearance. 
This study brings to light a possible immune pathway exploited by EBOV and other related viruses for T-cell infection and immune suppression. Given the need for sequenced data on various EBOV species to enable intra-species comparative analysis, our group is currently trying to decipher a possible EBOV immune pathway, with implications for identifying potent antiviral therapeutic targets and possible markers to differentiate viral disease.\n\n\nMethods\n\nBased on our previous publication on differential immune responses on relative antigenic surfaces presented by EBOV (Achinko & Dormer, 2015), we aimed to understand the variation in viral envelope GP surfaces and their post signaling responses within the cell. Data was extracted from GEO (RRID: SCR_005012), which has three divisions: a) GEO Database (www.ncbi.nlm.nih.gov/geo/), a public repository which provides tools to query functional genomics data and download experiments and gene expression curated profiles from deposited sequenced and micro-array datasets; b) GEO Datasets (www.ncbi.nlm.nih.gov/gds), a storage site for molecular abundance and curated gene expression datasets assembled from the GEO repository; and c) GEO Profiles (www.ncbi.nlm.nih.gov/geoprofiles/), a storage site for individual molecular abundance and gene expression profiles assembled from the GEO repository. We focused on GEO Profiles, and using the terms “GP (viral Envelope glycoprotein)” and “virus”, all data related to experiments targeting these terms, including EBOV, was extracted for further analysis.\n\nThe data obtained per sample had the following parameters: sample identity, title of the study, relative standard deviation (RSTD), relative mean (RMEAN), gene name, GenBank accession number, taxon and GEO dataset type (Dataset 1; Achinko et al., 2016a). 
Genes of Interest, with signaling as their molecular function and previously shown to be dominantly involved in an EBOV gene interaction map (Achinko & Dormer, 2015) (MVP, POLG, HLA, SRC, LCK, MERTK, TYRO3, UFO, ACK1, BMPR2, YES, FYN, ROCK2, PAK1, TBK1, TTBK1, IKKE, IMB1, IKBA, IL1, IL2, IL3, IL4, IL5, IL6, IL7, IL8, IL9, IL10, CD209) (Supplementary Table 1), were used as a basis to extract gene related expression data across samples for differential expression analysis. Their molecular functions, in relation to genetic interactions between host and virus as represented in the NCBI gene database (http://www.ncbi.nlm.nih.gov/gene), were considered across samples. Two other parameters considered were whether the genes were up- or downregulated at least 2-fold in EBOV samples; these were extracted to complete the total list of genes (452 genes) used for expression comparative analysis across samples. The dominant genes observed on the EBOV interaction map, as selected above, were extracted separately into a gene expression heatmap with the aim of comparing how the LCK gene, previously shown to be highly involved with immune functions (Achinko & Dormer, 2015), was related to other signaling genes, with special emphasis on the various MHC class I and II genes and CD209. LCK gene variants were compared to those seen for HLA and CD209 with the aim of understanding gene functional variation (Supplementary Table 1) and population immune variation related to the disease. The main MHC I HLA immune gene variants (A, B and C) most observed in the data were individually analyzed to see which immune variant occurred more often within the various samples.\n\nFor each gene, the RMEAN data was extracted and processed in two sets using a unique code (Achinko, 2016). One set considered the sum of all extracted values per gene per sample and then normalized across samples, while the other set was not normalized. 
Gene values per sample were entered into a table of values using a shell script. Data analysis for various statistical parameters was later processed using R version 3.3.1 (R Development Core Team, 2008; RRID: SCR_001905) (Dataset 1; Achinko et al., 2016a). For the normalized table, gene values were summed across samples (GS, gene summed), all genes with GS = 0 were removed from the table, and a minimal value of 0.001 was added to every value, so as to eliminate zeros from the table. The final normalized value per gene per sample was Log2 [(Sample value + 0.001)/GS], and all values were considered for cluster analysis using Cluster version 2.0.4 with algorithm PAM, wherein the input is a distance matrix from daisy (Kaufman & Rousseeuw, 1990) with metric=gower, a dissimilarity coefficient (Gower, 1971), and a cluster coefficient of 2. PAM is more robust, as it minimizes the sum of dissimilarities in values. The graphical representation from PAM clustering used the silhouette parameter (Rousseeuw, 1987) for interpretation and validation of clusters by showing which objects (genes) properly sit within their cluster or somewhere in between. The silhouette width is considered for clustering evaluation, with the average used for validating clusters and hence for selecting the number of clusters (e.g., two clusters in our case). Linear model analysis (Gelman & Hill, 2007) was used for t-statistics (R Core Team, 2013), which compared different parameters (skewness and kurtosis), and F-test statistics (R Core Team, 2013) were considered as a basis to accept or reject the null hypothesis of equal variance between samples at P value < 0.05. A two-way factorial ANOVA (R Core Team, 2013) was used to compare pairwise expression patterns between samples from identified clusters. 
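The normalization step described above can be sketched as follows. This is an illustrative Python translation (the authors' own pipeline used a shell script and R, so the function and variable names here are hypothetical): sum each gene's RMEAN values across samples (GS), drop genes with GS = 0, add 0.001 to eliminate zeros, then take Log2 of the ratio against GS.

```python
import math

def normalize(expr):
    """expr: dict mapping gene -> list of RMEAN values across samples.

    Returns a dict of log2-normalized values, following the normalization
    described in the Methods: Log2[(sample value + 0.001) / GS].
    """
    out = {}
    for gene, values in expr.items():
        gs = sum(values)              # GS: gene sum across samples
        if gs == 0:                   # drop genes with no expression anywhere
            continue
        # add 0.001 to every value to eliminate zeros, then log2-scale
        out[gene] = [math.log2((v + 0.001) / gs) for v in values]
    return out
```

Because every per-sample value is divided by the gene's total across samples, each gene's normalized profile is comparable across samples regardless of its absolute expression level.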
Using gplots version 2.1.0 (Gao et al., 2015) and RColorBrewer version 1.1-2 (Neuwirth, 2014), the data was subjected to hierarchical clustering (R Core Team, 2013; RRID: SCR_009154) and plotted using heatmap.2. Principal component analysis (PCA) on the data was performed using ggbiplot library version 0.55 (Vu, 2011), which uses a covariance biplot scaled to one, wherein the covariance is approximated to the inner product between the variables while distances between points approximate the Mahalanobis distance (Mahalanobis, 1936).\n\n\nResults\n\nBased on the query parameters, 54 samples were identified in GEO (Dataset 2; Achinko et al., 2016b), with genes ranging from 8,000 to 154,000, as submitted by original studies (Figure 1). These samples were further filtered for Homo sapiens only and 34 samples, including Zaire EBOV, formed the final list used for further analyses. The final list of 34 samples (Table 1) cut across different viral related diseases known to infect humans as host, with different pathogenic effects.\n\nA total of 54 samples were identified for both human and mouse hosts. Expressed gene counts varied from 8,000 to 154,000 across samples. EBOV, Ebola virus; GP-V_Sample_1, Glycoprotein and virus sample 1.\n\nIn total, 34 viral samples related to studies involving humans as hosts were obtained and the Relative mean (RMEAN) was further used to evaluate differential expression patterns across samples. The sample codes on the left were used for data graphs and the heatmap codes on the right were used for heatmap figures.\n\nFrom the collected data, normalization showed a relative range from 0 to 140 RMEAN frequency values (Figure 2). This large mean variation observed in some samples resulted in a linear model analysis of the mean in relation to skewness and kurtosis of the dataset. The regression model showed an adjusted R-squared value of 0.3814, implying non-linearity. 
The median of residuals was equal to -0.01912, with many samples below the regression line and having negative values. The F-statistic value was 11.17 > F critical value of 2.1646, for 2 and 31 degrees of freedom. This was significant with P value = 0.0002219 and α=0.05, hence rejecting the null hypothesis and supporting strong differences within samples. The t statistics for the data showed significant values at α=0.05 (0.01496) for skewness and α=0.01 (0.00814) for kurtosis, indicating significant differences in the respective parameters. Plotting the graph for residuals (y-axis) against fitted sample values in the model (x-axis) (Figure 3) showed that residual values ranged from -0.1 to +0.2, and though random, a few points deviated considerably from the regression line; this pattern was observed in three main samples [EB_Sample_1, EB_Sample_2 and EB_Sample_3 (Table 1)], which also clustered together. The log2 transformation of the data, as a means to observe relative gene expression patterns across samples, showed mean gene values per sample greater than zero, but with confidence intervals ranging from -5 to +5 (Figure 4). The total relative mean variation per sample showed no large difference between samples. This distribution of the samples led to a random selection of the sample per cluster with the most gene related data, which were: sample 1 (EB_GEO_Sample_1) from the kurtosis graph cluster (KGC) (Figure 3), which also included samples 2 and 3; sample 27 (EB_GEO_Sample_28) from the large mean deviation cluster (MDC) (Figure 2), which also included samples 28 and 29; and sample 30 (EB_GEO_Sample_32) from the EBOV cluster (EBOVC), which also included samples 31, 32, 33 and 34. The selected samples were from studies concerning the following: sample 1, c-Myb and oncogenic variant v-Myb transcriptional activities; sample 27, Sindbis virus-induced cell death; sample 30, Zaire EBOV time course infection of macrophages. 
Samples 1 and 27 were each paired with sample 30 for comparative analysis. For gene count per sample versus log2 expression values, it was observed that samples 1 and 30 had a wide log2 expression coverage (Figure 5A), with the former showing a fold change >2 for ≥ 30 genes. This distribution could account for the low values observed for skewness and kurtosis in these samples. Comparatively, samples 27 and 30 showed a similar log2 expression coverage pattern, but the former had most of its gene expression concentrated between 0 to +2 fold change with very high gene counts (Figure 5B), possibly accounting for the high skewness and kurtosis values observed previously. The comparative analysis for each pair of samples representing the clusters reflected the gene expression patterns observed within those clusters. Statistical analysis using ANOVA with a two-way factorial design showed a statistical difference for sample 30 (EBOV) compared to sample 27, F value = 191.415 and P value < 2e-16, and for sample 30 compared to sample 1, F value = 18.712 and P value = 1.88e-05. The difference between sample 1 and sample 27 was not significant (Figure 5C) with F value = 0.248 and P value = 0.619. Therefore, EBOVC showed significant differential expression patterns with KGC and MDC, but KGC and MDC showed no significant differential expression patterns.\n\nIn total, 34 samples related to human hosts showed gene expression mean frequency variation ranging from 0 to 140. All of the samples had their mean close to 0, but with significant deviations seen in a few samples. EB_sample_32 was specific to EBOV. EB_sample, Ebola virus related sample; EBOV, Ebola virus.\n\nA linear regression analysis for mean against skewness and kurtosis (mean ~ skew + kurtosis) showed that though the residuals showed a random distribution, the points were not well fitted by a straight line. 
EB_Sample_1, EB_Sample_2 and EB_Sample_3 clustered together with residuals showing a higher positive value. The residual median value = -0.01912; hence, several points were negative. The regression statistics showed adjusted R2 = 0.3814, F statistic = 11.17 with P value = 0.0002219. T statistics showed P value = 0.01496 for skewness and P value = 0.00814 for kurtosis. This showed that the variation in the data resulted more from kurtosis.\n\nThe log2 fold change ranged from -5 to +5, and in all the samples it was observed that the mean distribution was <0, suggesting that the majority of genes in all samples were upregulated. EB_sample_1, EB_sample_2 and EB_sample_3 were more upregulated than the rest of the samples and were considered the kurtosis group cluster (KGC) because they also clustered in the kurtosis graph (Figure 3). EB_sample_27, EB_sample_28 and EB_sample_29 were considered the mean deviation cluster (MDC) because of the large mean variation observed in these samples (Figure 2). EB_sample_30, EB_sample_31, EB_sample_32, EB_sample_33 and EB_sample_34 were considered the Ebola virus cluster (EBOVC) because of their grouping with the Zaire Ebola sample (EB_GEO_Sample_32).\n\nComparative gene expression profile between sample 1 (KGC) and sample 30 (EBOVC), with the former showing the majority of its genes (25 to 35) between +2 and +4 upregulation, and the latter having the majority of its genes (15 to 20) between 0 and +2 upregulation. The difference was statistically significant with F statistics = 18.712 and P value = 1.88e-05. KGC, kurtosis group cluster; EBOV, Ebola virus; EBOVC, EBOV related cluster.\n\nComparative gene expression profile between sample 27 (MDC) and sample 30 (EBOVC), with the former showing the majority of its genes (20 to 40) between +1 and +2 upregulation. Both samples showed a statistical difference with F statistics = 191.415 and P value < 2e-16. 
MDC, mean deviation cluster; EBOV, Ebola virus; EBOVC, EBOV related cluster.\n\nComparative gene expression profile between sample 1 (KGC) and sample 27 (MDC) showed no statistical significance between samples. F = 0.248 and P = 0.619. KGC, kurtosis group cluster; MDC, mean deviation cluster.\n\nCluster analysis using the PAM package on the 452 genes extracted for this study identified two main clusters for the relative gene expression profile across samples (Figure 6A), with the following gene distribution patterns: cluster 1 (401 genes) with average silhouette width = 0.53; and cluster 2 (51 genes) with average silhouette width = 0.16. The total average silhouette width = 0.49, implying very few genes showed a substantial variation across samples. The expression heatmap (Figure 6B) showed a similar gene distribution pattern for hierarchical clustering as observed with PAM analysis, resulting in grouping sample 30 (Zaire EBOV) with samples 13, 30, 31, 33, 34. Though sample 13 showed a similar RMEAN distribution pattern with those first mentioned in the EBOVC (Figure 2), emphasis remained on EBOVC (blue side color) as originally indicated, for further discussion. This cluster showed more genes with a relative expression ranging from 0 to +2-fold upregulation, while a few genes were relatively downregulated up to 6-fold, correlating with the pattern observed for the 51 genes in the PAM cluster. The cluster for samples 1, 2 and 3 (grey side color) observed in the regression graph also stood out on the heatmap, showing a similar and relatively high gene expression pattern across these samples. 
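The silhouette widths reported above (0.53 and 0.16 for the two PAM clusters) measure how well each gene sits within its assigned cluster. The authors computed them with R's cluster package on a Gower dissimilarity matrix; the sketch below is a simplified Python illustration of the underlying formula s(i) = (b - a) / max(a, b), using Euclidean distance for brevity (the function names and the distance metric are assumptions, not the authors' code). It assumes every cluster has at least two members.

```python
def euclid(p, q):
    """Euclidean distance between two points (tuples of coordinates)."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def silhouette_widths(points, labels):
    """Silhouette width per point: s(i) = (b - a) / max(a, b), where
    a = mean distance from point i to the other members of its own cluster,
    b = smallest mean distance from point i to any other cluster."""
    widths = []
    clusters = set(labels)
    for i, p in enumerate(points):
        by_cluster = {c: [] for c in clusters}
        for j, q in enumerate(points):
            if j != i:
                by_cluster[labels[j]].append(euclid(p, q))
        a = sum(by_cluster[labels[i]]) / len(by_cluster[labels[i]])
        b = min(sum(d) / len(d)
                for c, d in by_cluster.items() if c != labels[i] and d)
        widths.append((b - a) / max(a, b))
    return widths
```

Values near +1 indicate a point deep inside its cluster, values near 0 a point between clusters, and negative values a likely misassignment, which is why the average width is used to validate the chosen number of clusters.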
An extraction of the three clusters (KGC, MDC and EBOVC) into a heatmap (Figure 6C) showed that sample 30 had an expression pattern much closer to MDC than EBOVC. The heatmap for the selected groups of genes (Figure 6D) dominantly involved with signaling (LCK-HLA heatmap) was plotted against all HLA gene variants identified across samples, and though EBOV still clustered in the same pattern with similar samples, sample 13 clustered with KGC, not EBOVC, indicating that immune centric gene expression could show more segregation on related samples. An extracted HLA centric heatmap (Figure 6E) for the three clusters showed a similar pattern to that seen on the cluster specific heatmap observed for all the genes. Similarly, sample 30 clustered more with MDC than EBOVC. Based on EBOVC, differential gene expression patterns showed no expression (green color in heatmap) for some genes (MHCII: HLA-DPA2, HLA-DPB2, HLA-DQB2, HLA-DQA2; MHCI: HLA-H, HLA-L, HLA-F-AS1; IL1 receptor; and CD209), a ≥ -2 to ≤ +2 fold change (black color on heatmap) for other genes, including the main MHCI genes (HLA-A, HLA-B, HLA-C, HLA-E and HLA-F), MHCII HLA-DRB1, IL2 receptor alpha and SRC proto-oncogene non-receptor tyrosine kinase, and some genes showed fold changes of > 2, including MHCII HLA-DQB1, HLA-DPA1, HLA-DRB4, HLA-DRA, HLA-DOB, HLA-DPB1, HLA-DMA, HLA-DMB, HLA-DRB3, HLA-DRB4, HLA-DRB5, HLA-DRB6, IL2 and LCK. The most expressed genes on the cluster specific HLA centric heatmap were HLA-DRB2, a MHC class II gene, and MYB (a proto-oncogene transcription factor), with an upregulation of ≥ +4. Relative to other clusters, gene variations observed for CD209, LCK, IL2 and MYB were further discussed given their role in immune activation.\n\nA) Silhouette cluster differentiation plot for all 452 genes used in the analysis. The PAM cluster package with metric gower was used to identify two clusters. 
The larger cluster contained 401 genes and the second cluster 51 genes, with some genes extending to the negative side of the silhouette width axis.\n\nB) Heatmap for all 452 genes in this study with hierarchical clustering for genes on the y-axis and all 34 samples on the x-axis. Samples grouped into three main clusters: KGC with 3 samples (EB_GEO_Sample_1, EB_GEO_Sample_2, EB_GEO_Sample_3), MDC with 3 samples (EB_GEO_Sample_27, EB_GEO_Sample_28, EB_GEO_Sample_29) and EBOVC with 5 samples (EB_GEO_Sample_30, EB_GEO_Sample_31, EB_GEO_Sample_32, EB_GEO_Sample_33, EB_GEO_Sample_34). These were highlighted with the following color codes for further analysis: KGC, gray; MDC, pink; and EBOVC, blue. EB_GEO_Sample_32 was specific to Zaire EBOV. KGC stood out with high expression profiles seen across its genes compared to all other samples. C) Heatmap for 452 genes specific to clusters of interest. The color codes define clusters of interest and EB_GEO_Sample_30 showed that it was closer to MDC than EBOVC, as originally observed. D) Heatmap of 57 genes with functions related to signaling and immune response. Genes were extracted from the total data, but all samples were considered for clustering pattern comparative analysis with the total gene heatmap. The clustering pattern showed a variation on samples, with the specific observation that EB_GEO_Sample_13, which originally clustered with EBOVC, now clustered with KGC. Color code represents clusters of interest. E) Heatmap for 57 genes specific to clusters of interest, showing a similar pattern to previous heatmaps for clusters and all genes. Genes of interest were CD209 (DC-SIGN), a C-type lectin expressed in KGC and MDC, but absent in EBOVC, and LCK, a lymphocyte specific tyrosine kinase, which is expressed in KGC and EBOVC, but not MDC. 
KGC, kurtosis group cluster; MDC, mean deviation cluster; EBOV, Ebola virus; EBOVC, EBOV related cluster.\n\nThe frequency distribution of all MHCI gene variants identified across samples was evaluated. It was observed that relative to HLA-B, which had the highest frequency distribution (100%), HLA-C had an 85% frequency distribution and HLA-A was 38%, while the top three MHCII genes ranked as follows: HLA-DRB1, 25%; HLA-DQB1, 16%; and HLA-DPB1, 15% (Figure 7). The three top MHCI gene variants considered also showed different frequency patterns across samples, with the following top 5 alleles: HLA-B*2704, HLA-A*2711, HLA-B*2702, HLA-B*1537 and HLA-B27052. The only HLA class I allele observed specifically for EBOV was HLA-B*2702. The top three MHCII gene variants considered belonged to: HLA-DPB1 with HLA-DPB1*0202, HLA-DPB1*0501, and HLA-DPB1*1301 alleles (accession number: NM_002121) and HLA-DQB1 and HLA-DRB1 with HLA-DR2-Dw12, DRw6, DQw1, and DQw9 alleles (accession number: NM_002123). The MHCII HLA variant alleles specific for EBOV were HLA-DR2-Dw12, DR7, DQw9, DR2.3 and DQw2.\n\nImmune genes for MHC HLA class I & II were extracted from the total data, to which three more genes of interest were added: CD209 (DC-SIGN), a C-type lectin; LCK; and MYB, a transcriptional activator. HLA class I genes had the highest frequency: HLA-B was the highest (100%), followed by HLA-C (85%) and HLA-A (38%). For HLA class II genes, the highest was HLA-DRB1 (25%), then HLA-DQB1 (16%) and HLA-DPB1 (15%). LCK, lymphocyte specific tyrosine kinase; DC-SIGN, dendritic cell specific C-type lectin intercellular adhesion molecule 3 grabbing nonintegrin.\n\nLCK is a known proto-oncogene of the Src family of tyrosine kinases and is an important molecule involved in signaling related to the selection and maturation of developing T-cells (http://www.ncbi.nlm.nih.gov/gene/3932). 
EBOV and related viruses, such as HIV, are known to interact through their envelope GP molecule with CD209 on the surface of dendritic cells to penetrate the host cell (Lin et al., 2003). It is not clear whether LCK and CD209 interact directly, but it has been shown that C-type lectins possessing the intracellular signaling immunoreceptor tyrosine-based activation motif (ITAM) interact via the SH2 domain of SRC kinases (Sancho & Reis e Sousa, 2012), and LCK has a similar domain. Based on our previous publication on EBOV antigenic peptides favoring successful viral transmission (Achinko & Dormer, 2015), LCK was identified as the main host signaling protein involved in immune-related functions. The three clusters of interest, KGC, MDC, and EBOVC, showed three distinct patterns on the heatmap (Figure 6E) for LCK and CD209 gene expression. MDC showed positive expression for CD209 [NM_021155 (Hijazi et al., 2011; Rappocciolo et al., 2006) and AF290886 (Bashirova et al., 2001; van Vliet et al., 2009)], but not LCK. EBOVC showed positive expression for LCK [M26692 (Takadera et al., 1989), M36881 (Lesley et al., 2002; Wu et al., 1995) and U23852 (Vogel et al., 1995)], but not CD209. KGC showed expression for both LCK [NM_005356 and U07236 (Wright et al., 1994)] and CD209 [AF290886 (Bashirova et al., 2001; van Vliet et al., 2009), AY042224 and NM_021155 (variant 1: Hijazi et al., 2011; Rappocciolo et al., 2006)]. The differential expression pattern observed for the IL2 and MYB genes was also considered, because the former is known to be induced by LCK as its effector, leading to T-lymphocyte division (Vogel et al., 1995), while the latter is an IL2-responsive gene activated downstream of IL2 (Beadling & Smith, 2002).
IL2 was expressed only in the KGC samples [NM_000586 (James et al., 2002; Lupino et al., 2010) and M22005 (Weir et al., 1988)] and EBOVC-related samples [M22005 (Weir et al., 1988), S82692 (Chernicky et al., 1996) and X00695 (Holbrook et al., 1984)], while MYB was expressed in all three clusters, with the highest expression (> +2-fold upregulation) seen in EBOVC. MYB variants for the various clusters were: EBOVC [M13666 (Slamon et al., 1986), M15024 (Dash et al., 1996) and U22376 (Beadling & Smith, 2002)], KGC [NM_005375 (variant 2: Lorenzo et al., 2011) and AI357042 (V-myb myeloblastosis viral oncogene homolog (avian))] and MDC [NM_005375 (variant 2: Lorenzo et al., 2011) and AI357042 (V-myb myeloblastosis viral oncogene homolog (avian))].

For the PCA, the covariance biplot at scale = 1 generated scatter plots for the two main components considered, and data points were plotted within a correlation circle of radius = 1, based on the Mahalanobis distance, which measures how far a point deviates from the mean: points with a distance of at most 1 are treated as belonging to the dataset, and those with a distance > 1 as not belonging to it. For the total dataset, the y-axis (PC2) showed that gene-related variation explained 19.2% of the variation among samples, while on the x-axis (PC1) sample-related variation explained 22.8% of the variation across genes (Figure 8A). For the HLA immune-centric genes, the y-axis (PC2) explained 18.0% gene-related variation among samples, while on the x-axis (PC1) sample-related variation explained 31.3% of the variation across genes (Figure 8B).
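The two PCA quantities used here, the percent variance explained per component and the Mahalanobis distance behind the unit-radius criterion, can be sketched as follows. This is not the study's code: the 34 x 5 matrix, the seed, and the deliberately shifted outlier row are all invented for illustration.

```python
# Minimal PCA sketch on toy data: variance explained per component via
# eigendecomposition of the covariance matrix, plus Mahalanobis distances
# to flag points far from the multivariate mean (the >1 criterion above).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 5))   # 34 "samples" x 5 "genes" (illustrative only)
X[0] += 8.0                    # one deliberate outlier row

Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
evals = np.linalg.eigh(cov)[0][::-1]          # eigenvalues, descending
explained = 100.0 * evals / evals.sum()       # percent variance per PC
print("PC1 %.1f%%, PC2 %.1f%%" % (explained[0], explained[1]))

# Mahalanobis distance of each sample from the mean.
inv_cov = np.linalg.inv(cov)
d = np.sqrt(np.einsum("ij,jk,ik->i", Xc, inv_cov, Xc))
print("farthest sample index:", int(np.argmax(d)))
```

With real expression data the same two lines of algebra give the 22.8%/19.2% figures quoted above and the out-of-circle gene list discussed next.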
Several genes were localized outside the correlation circle in the HLA immune gene-centric group, including X-linked IL1 receptor accessory protein-like 2, IL1 receptor-like 2, IL1 beta (IL1B), tyrosine-protein kinase receptor (TYRO3), serine/threonine-protein kinase (TBK1), LCK, HLA-DRB2, HLA-H and HLA-L.

This is a covariance biplot analysis, which considers the variance across genes and samples and plots them within a correlation circle of radius = 1, wherein a point deviating from the center beyond the circle is considered as not belonging to that dataset. A) Biplot for all 452 genes in the dataset and all 34 samples. In total, 19.2% variation could be explained across samples, while 22.8% variation could be explained across genes. This higher variation in genes compared to samples can be seen in the grouping of samples on the plot compared to a more scattered pattern for genes, with LCK, one of the genes of interest, falling outside the plot. B) Biplot for 57 genes with signaling and immune response functions across all 34 samples. In total, 18.0% variation could be explained across samples, while 31.3% variation was seen across genes. The gene-related variation placed LCK outside the correlation circle, as not part of the dataset.

Discussion

The concept of differential gene expression has traditionally referred to datasets where, for a particular diseased condition of interest, data are collected for the diseased state (experimental) and the non-diseased state (control), and the difference in gene profiles between the two conditions helps explain the gene regulation pattern within the cell at a given time. This becomes a challenge when no comparative, well-studied related dataset is available for post-analysis; the data then raise more questions than they provide solutions to address the diseased condition.
This was the case when addressing the EBOV disease burden in 2014, which led to the death of numerous people and left many families in pain (Chiappelli et al., 2015). Therefore, there is a need to exploit the currently documented EBOV disease data alongside data from viruses with similar genetic underpinnings, to better understand the common and related disease pathways that could be further molecularly characterized to better arrest EBOV in the future.

The viral GP envelope remains the main genetic component identified by CD209 on human DCs as a target for generating an immune response (Achinko & Dormer, 2015). For the 34 human-related virus data samples obtained for analysis based on our search terms, the gene-related data ranged from 8,000 to 154,000 genes for different samples. This genetic profile depicts a virus-related expression variation pattern in a wide majority of genes within the host, with a large number of them co-regulated in a similar fashion across different viral diseases, as seen in the collected samples. Hence, comparative analysis of gene expression profiles between related samples could give a clearer picture of immune regulatory patterns and viral targets, which could be exploited for disease prevention. Regression analysis of the mean against skewness and kurtosis showed a non-linear pattern across residual points, though a random pattern was observed in the residuals. The statistical analysis, with P value < 0.05, confirmed variation in the samples, with greater significance for kurtosis than skewness. Kurtosis, a measure of the heaviness of the tails of a distribution, treats higher values as reflecting extreme but infrequent deviations of the variance in the data, as opposed to frequent, modestly sized deviations (Westfall, 2014).
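The per-sample summary statistics discussed above can be sketched in a few lines. This is a hedged illustration, not the study's code: it uses the simple population (moment-based) estimators, which the study does not specify, and the expression-like vectors are invented.

```python
# Sketch of per-sample skewness and excess kurtosis as standardized third
# and fourth moments (population form, an assumption).

def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = var ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0  # excess kurtosis
    return mean, skew, kurt

# A heavy-tailed, symmetric expression-like vector: mostly moderate values
# with a few extreme ones, giving high positive kurtosis (the MDC-like
# pattern of extreme but infrequent deviations described above).
heavy = [1.0] * 20 + [-1.0] * 20 + [12.0, -12.0]
light = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(moments(heavy)[2], moments(light)[2])  # large positive vs negative
```

A vector dominated by a handful of extreme expression values thus scores high on kurtosis even when its mean and skewness are near zero, which is exactly why kurtosis separated the clusters better than skewness.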
Therefore, the high kurtosis values in MDC suggest that gene expression distribution is not uniform across each sample, with some genes showing extremely high and low expression patterns (Figure 5B). Samples in MDC showed variable genetic expression patterns for the genes of interest, that is, upregulation of CD209 and MYB and downregulation of LCK and IL2. This cluster shared similarly expressed genes of interest with KGC but not EBOVC. Virus samples in this cluster showed an inhibition of IFNs, which are activated downstream of pattern recognition receptors (PRRs), such as Toll-like receptors (TLRs), on epithelial and dendritic cells. Proinflammatory cytokines are also activated through this pathway; therefore, the absence of LCK suggests that this gene could be activated downstream of this pathway, and the induction of IL2, a non-inflammatory cytokine, by LCK further suggests a downstream pathway specific to the activation of immune T-cells. Sample 27 (Laine et al., 2004) showed that Sindbis virus is an arthritis-related arbovirus of the alphavirus group and is transmitted by insect vectors. Arboviruses, like EBOV, commonly cause disease outbreaks, resulting in high fatality rates. Sindbis virus is able to infect neurons, hence decreasing immune responses associated with type I and II IFNs (Griffin, 2010) through the Janus kinase (JAK)/STAT pathway (Simmons et al., 2010). The observed expression pattern in this sample was similar to sample 29, the response of a bronchial epithelial cell line to various airway pathogens. An example of an airway pathogen with this pattern is Parainfluenza Virus 3, shown to block downstream antiviral mediators like IFN by modulating STAT1 phosphorylation (Eberle et al., 2015). Sample 28 (Schlee et al., 2004) showed that Epstein-Barr Virus (EBV) is a B-lymphotropic herpesvirus; after infection of B-cells, nuclear antigen 2 (EBNA2) is one of the first viral genes expressed.
EBNA2 is a transcriptional activator regulating the expression of viral and cellular genes that initiate and maintain cell proliferation. EBNA2 is needed for the proliferation of virus-infected cells, establishing viral latency in large numbers of B-cells (Babcock et al., 2000). The strong transactivation domain of EBNA2 does not interact with DNA directly, but rather with several components of the RNA polymerase II transcription complex (Tong et al., 1995). Through sequence-specific DNA interactions, EBNA2 is recruited to the promoters of target genes and can upregulate genes such as CD23 and CD209. MDC showed that the inhibition of the IFN pathway in these related viruses leads to B-cell infection and possible evasion of the MHC class I pathway, shown to occur through EBNA1. The interaction of EBNA2 with c-myc confirms the upregulation of CD209 in these viral samples. Dendritic cells are professional antigen-presenting cells with immune-related function. Upon antigen capture from the periphery, their activation results in migration to lymphoid tissues and presentation of antigens to T-lymphocytes, creating a pathogen-specific immune response (Janeway et al., 2004). C-type lectins on DCs are known to possess carbohydrate recognition domains for interaction with GPs on pathogens, but some lectins possess a C-type lectin-like domain, which is non-specific to carbohydrate binding and is found on C-type lectin-like receptors (Geijtenbeek et al., 2004). This is the case for CD209, which belongs to the DC-SIGN lectin group. CD209 lacks the ITAM required for LCK interaction and activation for immune induction (Sancho & Reis e Sousa, 2012), but it presents a non-immunoreceptor tyrosine-based motif and is specific to DCs (Geijtenbeek et al., 2000).
CD209 contains two dileucine internalization motifs and is known to mediate interactions between DCs and resting T-cells through intercellular adhesion molecules (ICAM-2 and -3); however, the use of antibodies against DC-SIGN inhibits the induction of T-cells by DCs by >60% (Geijtenbeek et al., 2000), suggesting that DC-SIGN (CD209) acts as an immune escape route. Through receptor-mediated antigen endocytosis on DCs, antigens are processed by the endosomal or lysosomal pathway to form MHC class II peptide complexes for further presentation to CD4+ T-cells (Pieters, 2000). In our data, MHC HLA class II peptides were highly expressed in all clusters, with HLA-DRB2 being specific to EBOV. The ITAM domain interacts with the LCK gene through SH2 domains, and the low expression profiles observed for MHC HLA class I genes in the MDC cluster suggest that the lack of interaction between CD209 and LCK is critical for MHCI activation and, subsequently, CD8+ T-cell induction. Therefore, DC-SIGN should induce MHC HLA class II genes through ICAM motifs, while other DCs with the ITAM motif, which interacts with LCK, should induce MHC HLA class I genes. Other possible DC routes therefore require urgent identification for antiviral therapy, and immunotherapy needs to take into consideration the inhibition of the CD209 route to completely eliminate viral infection.

For KGC, the kurtosis values were not high, but their pattern on the graph (Figure 3) showed considerable deviation of residuals from the regression line, with a high positive value. This pattern was consistent with upregulation of almost all genes in these samples, much higher than that observed for EBOVC, as seen in Figure 5A. This observed pattern could suggest that strongly positive samples with high residual deviation on a regression plot of the mean against skewness and kurtosis present with >90% of genes upregulated, including immune-related genes (HLA class I and II).
This cluster showed a statistical difference with EBOVC but not MDC, suggesting expression patterns similar to the latter and not the former. KGC shared similar CD209 and MYB gene variants with MDC, while with EBOVC it shared similar IL2 genes but different LCK and MYB genes. Sample 1 (Quintana et al., 2011) showed that the c-Myb transcription factor is involved in hematopoietic proliferation, and its inhibition causes lethal anemia and a lack of T- and B-cell development in mouse embryos. v-Myb is a transcriptional activator and is also considered an oncogene transduced by the retroviruses avian leukemia virus E26 (ALV) and avian myeloblastosis virus (AMV), due to truncation and mutations (Lipsick & Wang, 1999) at its C-terminal, also known as the 3' end. The C-terminal of c-Myb presents four domain types conserved in chicken, mice and humans, which include a transcriptional activation domain (Dubendorff et al., 1992), FAETL, an oncogenic activity domain (Fu & Lipsick, 1996), TPTPF, which is conserved in MYB genes, and EVES (negative regulation and molecular interaction) (Dash et al., 1996). Mim-1 was the first chicken gene identified to be regulated by c-Myb and E26 v-Myb, but not AMV v-Myb. The three consecutive binding sites at the Mim-1 promoter region are known to interact with v-Myb (Ness et al., 1989), and both c-Myb and v-Myb activate their genes through the PyAACG/TG motif (Biedenkapp et al., 1988). It has been shown that enhanced alternative RNA splicing in leukemia samples can result in ~60 different mRNA variants encoding at least 20 different c-Myb versions, since at least 6 alternative exons and several splice donors and acceptors occur in its standard exons.
The many c-Myb variants differ only at the C-terminal site, which leads them to differentially target genes based on selected promoter interactions (O'Rourke & Ness, 2008), supporting the view that enhanced splicing in leukemias produces oncogenic, truncated c-Myb variants contributing to leukemogenesis. Sample 2 (Sirois et al., 2011) showed that HIV-1 also infects macrophages, using them as a potent reservoir to rapidly disseminate throughout the body and to infect CD4+ T-cells through a viral synapse. HIV-1 infection of macrophages induces IFNs and interferon-stimulated genes (ISGs) through the innate immune response, which further stimulates apolipoprotein B mRNA-editing enzyme catalytic polypeptide-like 3A (A3A), known to be critical for monocyte resistance to infection. Decreased A3A expression during differentiation of macrophages leads to an HIV-1-vulnerable target cell population (Peng et al., 2006), hence circumventing the IFN protective effect in these cells. Sample 3 (Lung et al., 2014) showed that EBV is a major player in the development of nasopharyngeal carcinoma, an epithelial cancer. As previously seen for sample 28, EBV, through EBNA2, preferentially infects B-cells and other immune cells, leading to tumorigenesis, which could actively occur through its direct activation of the c-myc oncogene (Kaiser et al., 1999). KGC showed a dominant regulatory pattern of genes through transcription factor binding and mRNA splice variants. It demonstrates a genomic pattern in cells similar to tumors, mainly caused by oncogenic variants of transcription factors involving two avian virus types (ALV and AMV).
The C-terminal mutations associated with v-Myb and c-Myb have been implicated in oncogenesis; given that c-Myb C-terminal mutations are involved in leukemogenesis, chronic lymphocytic leukemia also shows wide intron retention resulting from mutations in the spliceosome factor SF3B1, preventing it from properly splicing the 3' ends of nascent mRNAs (Wan & Wu, 2013). In this cluster we also observed evasion of the immune system and direct infection of immune cells. The transcription-directed role of c-Myb and v-Myb suggests an active involvement and specificity of these proteins in immune cells, leading to leukemogenesis.

In the EBOV cluster, Sample 31 (Zilliox et al., 2006) demonstrated that measles virus is associated with mortality, while survival confers lifelong protection. It has been shown that measles virus can infect monocyte-derived immature dendritic cells, with replication occurring through dendritic cell maturation and subsequent activation of T-cells (Servet-Delprat et al., 2000). Infection of dendritic cells by measles virus leads to down-regulation of some signaling components (CD40-CD40L signaling co-stimulating TCR/CD3, IL-12 and CD4+ T-cells) and upregulation of TNF-related apoptosis-inducing ligand to initiate apoptosis in T-cells. Sample 32 (An et al., 2005) showed that Kaposi's sarcoma-associated herpesvirus could be the main cause of Kaposi sarcoma, primary effusion lymphoma (a lymphoproliferative disease) and Castleman's disease. Most cells with these malignancies are latently infected, mostly expressing the latency-associated nuclear antigen (LANA), a viral latency gene (Dupin et al., 1999). LANA is a multifunctional protein, functionally related to the EBNA-1 of EBV discussed earlier (Sample 28), which was seen to escape the MHC class I pathway. Samples 33 & 34 (Rieger et al., 2004) showed that radiation toxicity in EBV-immortalized B-cells was mostly associated with DNA damage leading to abnormal transcriptional responses.
The virus distribution pattern in EBOVC showed that viral infections affect transcriptional responses in infected cells, leading to DNA damage. The pattern in dendritic cells showed a TNF-upregulated, apoptosis-inducing T-cell pathway, suggesting massive destruction of cells. The upregulation of dysfunctional LCK in this cluster suggests a viral transcriptional regulatory pattern in infected cells, possibly resulting from related genes such as LANA, EBNA1 and EBNA2, leading to several LCK variants expressed in DCs. Downregulation of CD209 in this cluster shows that some viruses possibly use an alternate route through dendritic cells; for measles virus, it has been shown that CD150 (De Witte et al., 2008), a signaling lymphocytic activation molecule [SLAM (http://www.uniprot.org/uniprot/Q13291)] involved in the interaction of both the innate and adaptive immune systems, is most used. SLAM is mainly expressed on thymocytes, lymphocytes, macrophages and mature DCs (Yanagi et al., 2006). SLAMF1, the T- and B-lymphocyte version of SLAM, was shown to recruit FYN (a tyrosine protein kinase) through its cytoplasmic adapters for its phosphorylation, and FYN was seen clustering with LCK in our previous work (Achinko & Dormer, 2015). SLAMF1 also mediates proliferation of activated T-lymphocytes independently of IL-2 during an immune response (Aversa et al., 1997), and this IL2-related activation has been shown previously for LCK (Vogel & Fujita, 1995). For innate immune responses, SLAM recruits the PI3K complex for the response against Gram-negative bacteria in macrophages (Berger et al., 2010), and PI3K also acts downstream of LCK. SLAM could be a possible route used by EBOV, through its soluble glycoprotein molecule, to penetrate the cell, unlike the structural glycoprotein used by various viruses.
Differentially mutated LCK variants were expressed in KGC for related virus-infected immune cells and in EBOVC for infected dendritic cells, suggesting that the dual role played by SLAM could engage the LCK protein in different functional pathways due to its differential regulatory pattern in lymphocytes, dendritic cells and macrophages. Herpesviruses associated with EBOVC have been shown to encode proteins that inhibit MHC class I antigen presentation molecules, hence helping the virus evade cytotoxic T-cells. MHC class I molecules are located in the endoplasmic reticulum (ER), where they fold, assemble and bind antigenic peptides of 8–10 amino acids, all required for the assembly process. Viral peptides therefore need to be translocated to the ER, a process facilitated by the transporter associated with antigen processing (TAP) protein (van Endert et al., 2002), which further presents a scaffold for binding while ER chaperones fold novel MHC class I molecules. The tapasin protein forms the critical link facilitating the binding of pathogen-derived peptides and MHC class I-derived variants to TAP, after which the MHCI-peptide complexes are selectively transported through cargo vesicles to the Golgi apparatus (Spiliotis et al., 2000). The herpes simplex virus ICP47 cytosolic protein exploits the role of TAP in MHCI-peptide assembly by acting as a competitive inhibitor of peptide binding (Neumann et al., 1997), hence preventing formation of the MHCI-peptide complex. The MHC class I molecules were most downregulated in EBOVC compared to the other clusters, but in general the expression in all clusters was < 2-fold. The effects of herpesvirus proteins had functional similarities with EBV, whose proteins showed different regulatory effects through EBNA1, EBNA2 and c-myc in virus-infected cells.
The differential regulatory pattern of LCK in dendritic cells and immune cells suggests an interaction pathway that could be key to proper TAP and MHCI functioning and is exploited by several viruses for their pathogenic gain. Therefore, proper understanding of the viral inhibition pathway and LCK interaction could help in developing therapies favoring immune activation against viruses through the MHC class I molecules. The earlier observation in Sample 27 (MDC), that inhibition of the IFN pathway could result in LCK downregulation, suggests a possible activation route for LCK, whose upregulation in this cluster resulted downstream of IFN and cytokine production after DC activation. During an infection (viral or bacterial), the innate immune response results in activation of PRRs, such as Toll-like receptors and CD209, on DCs, histiocytes, Kupffer cells and macrophages, with subsequent release of inflammatory mediators (histamine, bradykinin, serotonin, leukotrienes and prostaglandins), which produce pain sensation and vasodilation of local blood vessels, hence attracting neutrophils as the main phagocytes to the affected site (Stvrtinová et al., 1995). This phenomenon may be the cause of the exacerbation of gene expression and disease shock syndrome, commonly known as the cytokine storm (Mackenzie & Lever, 2007), seen in those infected by EBOV in the 2014 epidemic. Therefore, downregulation of LCK could lead to viral inhibition of MHCI in infected cells, permitting these cells to directly infect CD4+ T-cells when they migrate to lymphoid tissues.

LCK is a T-cell-specific, leukemia-related gene found in several cancer cell types, possessing two promoter regions whose differential usage leads to LCK type I mRNA from the proximal promoter and type II mRNA from the distal promoter. Type II mRNAs have been shown to be dominantly expressed in T-cell leukemia and two colon cancer cell lines (Takadera et al., 1989).
It was observed that type IIB mRNA lacked exon 1, which encodes the LCK N-terminal domain, and hence lacks the sequence motif required for interaction with the CD4 and CD8 co-receptors (Huse et al., 1998). Alternative splicing at the distal promoter, with many splice sites, produces another mRNA lacking exon 7 and retaining intron B, which lacks the ATP binding site and is also very unstable (Nervi et al., 2005). This mRNA, lacking exon 7, shows a deficient signaling pattern downstream of the CD3/TCR complex in an early-stage immunodeficient patient (Goldman et al., 1998). LCK has also been documented to regulate CD28, one of the most efficient receptors required for T-cell activation, and also PI3K (Gibson et al., 1996). IL2 has been shown to induce human peripheral T-cell lymphocytes to re-enter the maturation phase from rest (G0) (Meyerson & Harlow, 1994). It is activated through a receptor associated with catalytic activity by LCK. c-Myb is known to lie downstream of IL2 (Lauder et al., 2001); therefore, the LCK interaction pathway could involve SLAM, TAP, PI3K, IL2 and c-Myb, whose differential interaction in related cells drives gene transcription related to T-cell activity. This is the first time a suggested immune pathway involving LCK, relative to T-cell immune induction and required to arrest viral pathogenesis, has been identified. Further investigation of this information could be key to preventing the viral disease burden.

The CD209 variant in KGC showed that, upon interaction with HIV-1, T-cell infection and pathogenesis by the virus is enhanced (Bashirova et al., 2001). Although it is not clear whether there is any direct genetic interaction between CD209 and LCK, the CD209 signaling complex involves lymphocyte-specific protein 1 (LSP1) and Raf-1, a serine/threonine-protein kinase, which upon activation results in a T-cell infection common to HIV (Gringhuis et al., 2009). The LSP1 gene suggests a functional class similar to LCK, which is also lymphocyte-specific.
In a general sense, viruses tend to use the DC-SIGN route to escape lysosomal degradation and thereby evade immune surveillance (Ludwig et al., 2004), leading to CD4+ T-cell infection. Therefore, infection of CD4+ helper T-cells, common to HIV and KGC, suggests a pathway used by HIV and related viruses, like EBV, to escape immune clearance by the cell. The difference between clusters, as depicted by the sample comparative analysis (Figure 5A and B), was statistically significant for EBOVC compared to KGC and MDC, suggesting that the lack of CD209 expression (EBOVC) and differential regulation of LCK expression (KGC) are critical for host antiviral competence. However, since KGC possessed both genes yet showed a statistical difference with EBOVC, understanding the viral routes critical to LCK will be important in understanding EBOV pathogenesis in its host. The PCA also showed the role played by LCK and IL2 in immune activation, given that the former was far from the correlation circle of unit radius and the latter was at the border of the circle, suggesting a high deviation from immune-related function, with possible interaction in a pathway that requires proper immune activation. LCK and other signaling genes, including TYRO3 [a receptor kinase involved in signal transduction from the extracellular to the intracellular matrix, and also in the PI3K pathway (Shimojima et al., 2006)], TBK1 [a serine/threonine kinase involved in regulating inflammatory responses to foreign agents, such as viruses and bacteria, which acts downstream of Toll-like receptors (Buss et al., 2004)] and IL1B (a potent inflammatory cytokine involved in prostaglandin induction and the activation of neutrophils and T- and B-cells), were outside the correlation circle, which suggests that they may all be involved in a similar immune pathway for virus clearance.
This pattern of genes out of correlation was also observed in the silhouette plot, where some genes in the second cluster lay entirely on the negative side of the silhouette axis. Observations from this study suggest a viral selective process to better engage host cell receptors for penetration and infection by subduing the MHC class I HLA molecules necessary for antiviral functions. Deciphering this pathway is critical for understanding and differentiating virus-related clinical manifestations and for identifying the right genetic approach to arrest viral epidemics in the future.

This is the first study carried out to understand viral differential usage of host cell routes to penetrate and infect the cell. Relative to EBOV, for which a great deal of scientific research is currently under way to identify a possible therapy to prevent future occurrences and to arrest the disease in case of an epidemic, other viral diseases show similar manifestations and use similar entry routes and cellular pathways to infect the cell, and could serve as good controls for understanding the genetic mechanisms of viral pathogenesis. The three main clusters showed clearly that viruses differentially regulate LCK in their related cells of interest. KGC shows that viruses directly infect immune cells, mutating LCK; in MDC, viruses downregulate LCK, hence escaping the MHCI pathway and directly infecting B-cells; while in EBOVC, LCK is expressed in dendritic cells and mutated, with inhibition of the MHCI pathway. Several transcripts that could be critical to the differential functional pattern of LCK in either the innate or the adaptive immune pathway were identified, and understanding their role in proper immune activation is critical, because these viruses all escape the MHC class I peptide recognition pathway, hence infecting immune T- and B-cells.
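The silhouette criterion invoked here (genes with negative widths sit closer to a neighbouring cluster than their own) can be sketched in pure Python. This is an illustrative reconstruction, not the study's code; the two-dimensional "gene" points and labels are invented, with the last point deliberately mislabelled to produce a negative width.

```python
# Sketch of silhouette widths for a clustering: s(i) = (b - a) / max(a, b),
# where a = mean distance to own cluster, b = mean distance to the nearest
# other cluster. Negative s(i) marks a point assigned to the wrong cluster.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def silhouette_widths(points, labels):
    widths = []
    clusters = set(labels)
    for i, p in enumerate(points):
        own = [euclidean(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own) if own else 0.0
        b = min(
            sum(euclidean(p, q) for j, q in enumerate(points)
                if labels[j] == c)
            / sum(1 for j in range(len(points)) if labels[j] == c)
            for c in clusters if c != labels[i]
        )
        widths.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return widths

# Two tight toy clusters plus one in-between point mislabelled into cluster 0.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (2.6, 2.5)]
lab = [0, 0, 1, 1, 0]
print(silhouette_widths(pts, lab))
```

The mislabelled point lands slightly on the negative side of the silhouette axis, which is the pattern reported for part of the second gene cluster above.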
The differential expression of LCK in dendritic cells and immune cells suggests pathways required for complete immune activation and clearance of viruses, which is under viral control in infected cells. This mechanism results in poor transcription of LCK through differential promoter usage and prevention of proper LCK pathway activation for immune induction. The viruses included in this study could be classified into three groups, as follows: CD209-dependent (MDC), LCK-dependent (EBOVC), and CD209- and LCK-dependent (KGC). The differential regulation of LCK for proper T-cell induction through differential promoter usage in EBOV disease is currently under investigation by our group. Markers to identify and classify viral disease types in humans based on CD209-LCK-dependent pathways are currently being explored.

Software and data availability

RMEAN extractor software available from: https://github.com/brack123/GEO_RMEAN_Data/tree/V.1.0.0

Archived source code as at time of publication: doi, 10.5281/zenodo.166757 (Achinko, 2016)

License: GNU

Dataset 1: Ebola related virus dataset in different hosts: Total sample data collected for this study, based on keywords "glycoprotein" and "virus", doi: 10.5256/f1000research.9778.d143382 (Achinko et al., 2016a). The data are available in a .txt or .xls file. RSTD, relative standard deviation; RMEAN, relative mean; GBACC, gene bank accession; gdsType, dataset type.

Dataset 2: Ebola related virus dataset in human host: Titles and ranking of all 54 samples obtained for this study prior to filtering for humans, doi: 10.5256/f1000research.9778.d143384 (Achinko et al., 2016b). GP-V_Sample_1, glycoprotein and virus sample 1; GEO, Gene Expression Omnibus.

Author contributions

All authors actively engaged and contributed to the successful production of this manuscript. D.A.A conceived and designed the work, collected the data and did the analysis required to put the manuscript together.
D.A.A, D.A.O, M.A, M.N, E.F.N and M.A proofread and edited the manuscript to bring it to its successful conclusion.

Competing interests

No competing interests were disclosed.

Grant information

This work is supported by grant G12 MD007597 from NIMHD, NIH to the RCMI program at Howard University.

Supplementary Material

Supplementary Table 1: LCK-immune related gene functional differences. Genes of interest in signaling and immune related function used for expression variation analyses.

References

Achinko D: brack123/GEO_RMEAN_Data: GEO virus data extraction software. Zenodo. 2016.

Achinko D, Dormer A: Epitope specificity and protein signaling interactions driving epidemic occurrences of Ebola disease [version 1; referees: awaiting peer review]. F1000Res. 2015; 4: 166.

Achinko D, Dormer A, Narayanan M, et al.: Dataset 1 in: Identification of genetic pathways driving Ebola virus disease in humans and targets for therapeutic intervention. F1000Research. 2016a.

Achinko D, Dormer A, Narayanan M, et al.: Dataset 2 in: Identification of genetic pathways driving Ebola virus disease in humans and targets for therapeutic intervention. F1000Research. 2016b.

An FQ, Compitello N, Horwitz E, et al.: The latency-associated nuclear antigen of Kaposi's sarcoma-associated herpesvirus modulates cellular gene expression and protects lymphoid cells from p16 INK4A-induced cell cycle arrest. J Biol Chem. 2005; 280(5): 3862–74.

Aversa G, Chang CC, Carballido JM, et al.: Engagement of the signaling lymphocytic activation molecule (SLAM) on activated T cells results in IL-2-independent, cyclosporin A-sensitive T cell proliferation and IFN-gamma production. J Immunol. 1997; 158(9): 4036–44.
Babcock GJ, Hochberg D, Thorley-Lawson AD: The expression pattern of Epstein-Barr virus latent genes in vivo is dependent upon the differentiation stage of the infected B cell. Immunity. 2000; 13(4): 497–506.

Baize S, Leroy EM, Georges-Courbot MC, et al.: Defective humoral responses and extensive intravascular apoptosis are associated with fatal outcome in Ebola virus-infected patients. Nat Med. 1999; 5(4): 423–426.

Baize S, Leroy EM, Mavoungou E, et al.: Apoptosis in fatal Ebola infection. Does the virus toll the bell for immune system? Apoptosis. 2000; 5(1): 5–7.

Bashirova AA, Geijtenbeek TB, van Duijnhoven GC, et al.: A dendritic cell-specific intercellular adhesion molecule 3-grabbing nonintegrin (DC-SIGN)-related protein is highly expressed on human liver sinusoidal endothelial cells and promotes HIV-1 infection. J Exp Med. 2001; 193(6): 671–678.

Beadling C, Smith KA: DNA array analysis of interleukin-2-regulated immediate/early genes. Med Immunol. 2002; 1(1): 2.

Berger SB, Romero X, Ma C, et al.: SLAM is a microbial sensor that regulates bacterial phagosome functions in macrophages. Nat Immunol. 2010; 11(10): 920–927.

Biedenkapp H, Borgmeyer U, Sippel AE, et al.: Viral myb oncogene encodes a sequence-specific DNA-binding activity. Nature. 1988; 335(6193): 835–837.

Bosio CM, Aman MJ, Grogan C, et al.: Ebola and Marburg viruses replicate in monocyte-derived dendritic cells without inducing the production of cytokines and full maturation. J Infect Dis. 2003; 188(11): 1630–1638.
Bray M: The role of the type I interferon response in the resistance of mice to filovirus infection. J Gen Virol. 2001; 82(Pt 6): 1365–1373.

Bray M, Davis K, Geisbert T, et al.: A mouse model for evaluation of prophylaxis and therapy of Ebola hemorrhagic fever. J Infect Dis. 1998; 178(3): 651–661.

Buss H, Dörrie A, Schmitz ML, et al.: Constitutive and interleukin-1-inducible phosphorylation of p65 NF-κB at serine 536 is mediated by multiple protein kinases including IκB kinase (IKK)-α, IKKβ, IKKε, TRAF family member-associated (TANK)-binding kinase 1 (TBK1), and an unknown kinase and couples p65 to TATA-binding protein-associated factor II31-mediated interleukin-8 transcription. J Biol Chem. 2004; 279(53): 55633–43.

Chernicky CL, Tan H, Burfeind P, et al.: Sequence of interleukin-2 isolated from human placental poly A+ RNA: possible role in maintenance of fetal allograft. Mol Reprod Dev. 1996; 43(2): 180–186.

Chiappelli F, Bakhordarian A, Thames AD, et al.: Ebola: translational science considerations. J Transl Med. 2015; 13: 11.

Dash AB, Orrico FC, Ness SA: The EVES motif mediates both intermolecular and intramolecular regulation of c-Myb. Genes Dev. 1996; 10(15): 1858–1869.

De Witte L, de Vries RD, van der Vlist M, et al.: DC-SIGN and CD150 have distinct roles in transmission of measles virus from dendritic cells to T-lymphocytes. PLoS Pathog. 2008; 4(4): e1000049.

Dubendorff JW, Whittaker LJ, Eltman JT, et al.: Carboxy-terminal elements of c-Myb negatively regulate transcriptional activation in cis and in trans. Genes Dev. 1992; 6(12B): 2524–2535.
Dupin N, Fisher C, Kellam P, et al.: Distribution of human herpesvirus-8 latently infected cells in Kaposi's sarcoma, multicentric Castleman's disease, and primary effusion lymphoma. Proc Natl Acad Sci U S A. 1999; 96(8): 4546–4551.

Eberle KC, McGill JL, Reinhardt TA, et al.: Parainfluenza virus 3 blocks antiviral mediators downstream of the interferon lambda receptor by modulating Stat1 phosphorylation. J Virol. 2015; 90(6): 2948–2958.

Edgar R, Domrachev M, Lash AE: Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 2002; 30(1): 207–210.

Fu SL, Lipsick JS: FAETL motif required for leukemic transformation by v-Myb. J Virol. 1996; 70(8): 5600–5610.

Gao TT, Qin ZL, Ren H, et al.: Inhibition of IRS-1 by hepatitis C virus infection leads to insulin resistance in a PTEN-dependent manner. Virol J. 2015; 12: 12.

Geijtenbeek TB, Torensma R, van Vliet SJ, et al.: Identification of DC-SIGN, a novel dendritic cell-specific ICAM-3 receptor that supports primary immune responses. Cell. 2000; 100(5): 575–585.

Geijtenbeek TB, van Vliet SJ, Engering A, et al.: Self- and nonself-recognition by C-type lectins on dendritic cells. Annu Rev Immunol. 2004; 22: 33–54.

Geisbert TW, Hensley LE, Gibb TR, et al.: Apoptosis induced in vitro and in vivo during infection by Ebola and Marburg viruses. Lab Invest. 2000; 80(2): 171–86.

Geisbert TW, Hensley LE, Larsen T, et al.: Pathogenesis of Ebola hemorrhagic fever in cynomolgus macaques: evidence that dendritic cells are early and sustained targets of infection.
Am J Pathol. 2003; 163(6): 2347–2370.

Gelman A, Hill J: Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge: Cambridge University Press; 2007.

Georges AJ, Leroy EM, Renaut AA, et al.: Ebola hemorrhagic fever outbreaks in Gabon, 1994–1997: epidemiologic and health control issues. J Infect Dis. 1999; 179(Suppl 1): S65–75.

Gibb TR, Bray M, Geisbert TW, et al.: Pathogenesis of experimental Ebola Zaire virus infection in BALB/c mice. J Comp Pathol. 2001; 125(4): 233–242.

Gibson S, August A, Kawakami Y, et al.: The EMT/ITK/TSK (EMT) tyrosine kinase is activated during TCR signaling: LCK is required for optimal activation of EMT. J Immunol. 1996; 156(8): 2716–2722.

Goldman FD, Ballas ZK, Schutte BC, et al.: Defective expression of p56lck in an infant with severe combined immunodeficiency. J Clin Invest. 1998; 102(2): 421–429.

Gower JC: A general coefficient of similarity and some of its properties. Biometrics. 1971; 27(4): 857–871.

Griffin DE: Recovery from viral encephalomyelitis: immune-mediated noncytolytic virus clearance from neurons. Immunol Res. 2010; 47(1–3): 123–133.

Gringhuis SI, den Dunnen J, Litjens M, et al.: Carbohydrate-specific signaling through the DC-SIGN signalosome tailors immunity to Mycobacterium tuberculosis, HIV-1 and Helicobacter pylori. Nat Immunol. 2009; 10(10): 1081–1088.

Hensley LE, Jones SM, Feldmann H, et al.: Ebola and Marburg viruses: pathogenesis and development of countermeasures. Curr Mol Med. 2005; 5(8): 761–772.
Hijazi K, Wang Y, Scala C, et al.: DC-SIGN increases the affinity of HIV-1 envelope glycoprotein interaction with CD4. PLoS One. 2011; 6(12): e28307.

Hoenen T, Groseth A, Falzarano D, et al.: Ebola virus: unravelling pathogenesis to combat a deadly disease. Trends Mol Med. 2006; 12(5): 206–215.

Holbrook NJ, Lieber M, Crabtree GR: DNA sequence of the 5' flanking region of the human interleukin 2 gene: homologies with adult T-cell leukemia virus. Nucleic Acids Res. 1984; 12(12): 5005–5013.

Huse M, Eck MJ, Harrison SC: A Zn2+ ion links the cytoplasmic tail of CD4 and the N-terminal region of Lck. J Biol Chem. 1998; 273(30): 18729–18733.

James TW, Humphrey GK, Gati JS, et al.: Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia. 2002; 40(10): 1706–1714.

Janeway CA, Travers P, Walport M, et al.: Immunobiology. Garland Publishing; 2004.

Johnson KM, Lange JV, Webb PA, et al.: Isolation and partial characterisation of a new virus causing acute haemorrhagic fever in Zaire. Lancet. 1977; 1(8011): 569–71.

Kaiser C, von Stein O, Laux G, et al.: Functional genomics in cancer research: identification of target genes of the Epstein-Barr virus nuclear antigen 2 by subtractive cDNA cloning and high-throughput differential screening using high-density agarose gels. Electrophoresis. 1999; 20(2): 261–268.

Kaufman L, Rousseeuw PJ: Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, New York; 1990.

Laine M, Luukkainen R, Toivanen A: Sindbis viruses and other alphaviruses as cause of human arthritic disease. J Intern Med.
2004; 256(6): 457–471.

Lauder A, Castellanos A, Weston K: c-Myb transcription is activated by protein kinase B (PKB) following interleukin 2 stimulation of T cells and is required for PKB-mediated protection from apoptosis. Mol Cell Biol. 2001; 21(17): 5797–5805.

Lesley SA, Graziano J, Cho CY, et al.: Gene expression response to misfolded protein as a screen for soluble recombinant protein. Protein Eng. 2002; 15(2): 153–160.

Lin Y, Roberts TJ, Sriram V, et al.: Myeloid marker expression on antiviral CD8+ T cells following an acute virus infection. Eur J Immunol. 2003; 33(10): 2736–2743.

Lipsick JS, Wang DM: Transformation by v-Myb. Oncogene. 1999; 18(19): 3047–3055.

Lorenzo PI, Brendeford EM, Gilfillan S, et al.: Identification of c-Myb target genes in K562 cells reveals a role for c-Myb as a master regulator. Genes Cancer. 2011; 2(8): 805–817.

Ludwig IS, Lekkerkerker AN, Depla E, et al.: Hepatitis C virus targets DC-SIGN and L-SIGN to escape lysosomal degradation. J Virol. 2004; 78(15): 8322–8332.

Lung ML, Cheung AK, Ko JM, et al.: The interplay of host genetic factors and Epstein-Barr virus in the development of nasopharyngeal carcinoma. Chin J Cancer. 2014; 33(11): 556–568.

Lupino E, Buccinnà B, Ramondetti C, et al.: In CD28-costimulated human naïve CD4+ T cells, I-κB kinase controls the expression of cell cycle regulatory proteins via interleukin-2-independent mechanisms. Immunology. 2010; 131(2): 231–241.

Mackenzie I, Lever A: Management of sepsis. BMJ. 2007; 335(7626): 929–932.
Mahalanobis PC: On the generalised distance in statistics. Proceedings of the National Institute of Sciences of India. 1936; 2(1): 49–55.

Mahanty S, Bray M: Pathogenesis of filoviral haemorrhagic fevers. Lancet Infect Dis. 2004; 4(8): 487–98.

Mahanty S, Hutchinson K, Agarwal S, et al.: Cutting edge: impairment of dendritic cells and adaptive immunity by Ebola and Lassa viruses. J Immunol. 2003; 170(6): 2797–2801.

Majolini MB, Boncristiano M, Baldari CT: Dysregulation of the protein tyrosine kinase LCK in lymphoproliferative disorders and in other neoplasias. Leuk Lymphoma. 1999; 35(3–4): 245–254.

Meyerson M, Harlow E: Identification of G1 kinase activity for cdk6, a novel cyclin D partner. Mol Cell Biol. 1994; 14(3): 2077–2086.

Modrek B, Resch A, Grasso C, et al.: Genome-wide detection of alternative splicing in expressed sequences of human genes. Nucleic Acids Res. 2001; 29(13): 2850–2859.

Nervi S, Guinamard R, Delaval B, et al.: A rare mRNA variant of the human lymphocyte-specific protein tyrosine kinase LCK gene with intron B retention and exon 7 skipping encodes a putative protein with altered SH3-dependent molecular interactions. Gene. 2005; 359: 18–25.

Nervi S, Nicodeme S, Gartioux C, et al.: No association between lck gene polymorphisms and protein level in type 1 diabetes. Diabetes. 2002; 51(11): 3326–3330.

Ness SA, Marknell A, Graf T: The v-myb oncogene product binds to and activates the promyelocyte-specific mim-1 gene. Cell. 1989; 59(6): 1115–1125.
Neumann L, Kraas W, Uebel S, et al.: The active domain of the herpes simplex virus protein ICP47: a potent inhibitor of the transporter associated with antigen processing. J Mol Biol. 1997; 272(4): 484–92.

Neuwirth E: RColorBrewer: ColorBrewer palettes. R package version 1.1-2. 2014.

O'Rourke JP, Ness SA: Alternative RNA splicing produces multiple forms of c-Myb with unique transcriptional activities. Mol Cell Biol. 2008; 28(6): 2091–2101.

Palacios EH, Weiss A: Function of the Src-family kinases, Lck and Fyn, in T-cell development and activation. Oncogene. 2004; 23(48): 7990–8000.

Peng G, Lei KJ, Jin W, et al.: Induction of APOBEC3 family proteins, a defensive maneuver underlying interferon-induced anti-HIV-1 activity. J Exp Med. 2006; 203(1): 41–46.

Pieters J: MHC class II-restricted antigen processing and presentation. Adv Immunol. 2000; 75: 159–208.

Quintana AM, Liu F, O'Rourke JP, et al.: Identification and regulation of c-Myb target genes in MCF-7 cells. BMC Cancer. 2011; 11: 30.

R Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2013. ISBN 3-900051-07-0.

R Development Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2008. ISBN 3-900051-07-0.

Rappocciolo G, Piazza P, Fuller CL, et al.: DC-SIGN on B lymphocytes is required for transmission of HIV-1 to T lymphocytes. PLoS Pathog. 2006; 2(7): e70.
Rieger KE, Hong WJ, Tusher VG, et al.: Toxicity from radiation therapy associated with abnormal transcriptional responses to DNA damage. Proc Natl Acad Sci U S A. 2004; 101(17): 6635–6640.

Rousseeuw PJ: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math. 1987; 20: 53–65.

Ryabchikova EI, Kolesnikova LV, Luchko SV: An analysis of features of pathogenesis in two animal models of Ebola virus infection. J Infect Dis. 1999; 179(Suppl 1): S199–S202.

Sanchez A, Geisbert TW, Feldmann H: Filoviridae: Marburg and Ebola viruses. In: Knipe DM, Howley PM, editors. Fields Virology. Philadelphia: Lippincott Williams & Wilkins. 2006; 1409–1448.

Sanchez A, Trappier SG, Mahy BW, et al.: The virion glycoproteins of Ebola viruses are encoded in two reading frames and are expressed through transcriptional editing. Proc Natl Acad Sci U S A. 1996; 93(8): 3602–7.

Sancho D, Reis e Sousa C: Signaling by myeloid C-type lectin receptors in immunity and homeostasis. Annu Rev Immunol. 2012; 30: 491–529.

Sayers EW, Barrett T, Benson DA, et al.: Database resources of the National Center for Biotechnology Information. Nucleic Acids Res. 2009; 37(Database issue): D5–D15.

Schlee M, Krug T, Gires O, et al.: Identification of Epstein-Barr virus (EBV) nuclear antigen 2 (EBNA2) target proteins by proteome analysis: activation of EBNA2 in conditionally immortalized B cells reflects early events after infection of primary B cells by EBV. J Virol. 2004; 78(8): 3941–3952.
Schnittler HJ, Feldmann H: Marburg and Ebola hemorrhagic fevers: does the primary course of infection depend on the accessibility of organ-specific macrophages? Clin Infect Dis. 1998; 27(2): 404–406.

Servet-Delprat C, Vidalain PO, Bausinger H, et al.: Measles virus induces abnormal differentiation of CD40 ligand-activated human dendritic cells. J Immunol. 2000; 164(4): 1753–1760.

Shimojima M, Takada A, Ebihara H, et al.: Tyro3 family-mediated cell entry of Ebola and Marburg viruses. J Virol. 2006; 80(20): 10109–10116.

Simmons JD, Wollish AC, Heise MT: A determinant of Sindbis virus neurovirulence enables efficient disruption of Jak/STAT signaling. J Virol. 2010; 84(21): 11429–11439.

Sirois M, Robitaille L, Allary R, et al.: TRAF6 and IRF7 control HIV replication in macrophages. PLoS One. 2011; 6(11): e28125.

Slamon DJ, Boone TC, Murdock DC, et al.: Studies of the human c-myb gene and its product in human acute leukemias. Science. 1986; 233(4761): 347–351.

Spiliotis ET, Manley H, Osorio M, et al.: Selective export of MHC class I molecules from the ER after their dissociation from TAP. Immunity. 2000; 13(6): 841–51.

Stvrtinová V, Jakubovský J, Hulín I: Pathophysiology: Principles of Diseases. Computing Centre, Slovak Academy of Sciences. 1995.

Takadera T, Leung S, Gernone A, et al.: Structure of the two promoters of the human lck gene: differential accumulation of two classes of lck transcripts in T cells. Mol Cell Biol. 1989; 9(5): 2173–2180.
Togni M, Lindquist J, Gerber A, et al.: The role of adaptor proteins in lymphocyte activation. Mol Immunol. 2004; 41(6–7): 615–630.

Tong X, Drapkin R, Reinberg D, et al.: The 62- and 80-kDa subunits of transcription factor IIH mediate the interaction with Epstein-Barr virus nuclear protein 2. Proc Natl Acad Sci U S A. 1995; 92(8): 3259–3263.

van Endert PM, Saveanu L, Hewitt EW, et al.: Powering the peptide pump: TAP crosstalk with energetic nucleotides. Trends Biochem Sci. 2002; 27(9): 454–61.

van Vliet SJ, Steeghs L, Bruijns SC, et al.: Variation of Neisseria gonorrhoeae lipooligosaccharide directs dendritic cell-induced T helper responses. PLoS Pathog. 2009; 5(10): e1000625.

Vogel LB, Arthur R, Fujita DJ: An aberrant lck mRNA in two human T-cell lines. Biochim Biophys Acta. 1995; 1264(2): 168–72.

Vogel LB, Fujita DJ: p70 phosphorylation and binding to p56lck is an early event in interleukin-2-induced onset of cell cycle progression in T-lymphocytes. J Biol Chem. 1995; 270(6): 2506–11.

Volchkova VA, Feldmann H, Klenk HD, et al.: The nonstructural small glycoprotein sGP of Ebola virus is secreted as an antiparallel-orientated homodimer. Virology. 1998; 250(2): 408–14.

Vu VQ: ggbiplot: a ggplot2 based biplot. R package version 0.55. 2011.

Wan Y, Wu CJ: SF3B1 mutations in chronic lymphocytic leukemia. Blood. 2013; 121(23): 4627–4634.

Weir MP, Chaplin MA, Wallace DM, et al.: Structure-activity relationships of recombinant human interleukin 2. Biochemistry. 1988; 27(18): 6883–6892.
Westfall PH: Kurtosis as peakedness, 1905–2014. R.I.P. Am Stat. 2014; 68(3): 191–195.

Wilhite SE, Barrett T: Strategies to explore functional genomics data sets in NCBI's GEO database. Methods Mol Biol. 2012; 802: 41–53.

Wolf K, Beimforde N, Falzarano D, et al.: The Ebola virus soluble glycoprotein (sGP) does not affect lymphocyte apoptosis and adhesion to activated endothelium. J Infect Dis. 2011; 204(Suppl 3): S947–S952.

Wright DD, Sefton BM, Kamps MP: Oncogenic activation of the Lck protein accompanies translocation of the Lck gene in the human HSB2 T-cell leukemia. Mol Cell Biol. 1994; 14(4): 2429–2437.

Wu X, Knudsen B, Feller SM, et al.: Structural basis for the specific interaction of lysine-containing proline-rich peptides with the N-terminal SH3 domain of c-Crk. Structure. 1995; 3(2): 215–226.

Yamashita M, Hashimoto K, Kimura M, et al.: Requirement for p56(lck) tyrosine kinase activation in Th subset differentiation. Int Immunol. 1998; 10(5): 577–591.

Yanagi Y, Takeda M, Ohno S: Measles virus: cellular receptors, tropism and pathogenesis. J Gen Virol. 2006; 87(Pt 10): 2767–2779.

Zilliox MJ, Parmigiani G, Griffin DE: Gene expression patterns in dendritic cells infected with measles virus compared with other pathogens. Proc Natl Acad Sci U S A. 2006; 103(9): 3363–3368.
Open Peer Review

Reviewer Report 13 Mar 2017

Priya Duggal

Status: Not Approved

Achinko and colleagues utilize publicly available data to identify important genetic pathways that may drive Ebola virus disease and could be used as targets for intervention. However, the analysis as presented raises some concerns that the authors should address and clarify.

Background: The introduction needs to emphasize the potential relationship between LCK and EBOV. Many of the opening paragraphs appear to be making the case for, and speculating on, future research instead of setting up why LCK should even be considered. The main aim of the paper is embedded in the results section and should be moved to the introductory paragraphs. The link between glycoproteins, LCK and Ebola virus, which is the motivation of the analysis, is also not explained.

Methods: The authors use data from the publicly available GEO website and aim to look for an association between the viral envelope glycoprotein and the post-signaling response in cells. They suggest that they are restricting their analysis to EBOV, but the list of profiles ranges from hepatitis C to measles to dengue viral infections. Did you filter out samples from viruses not related to EBOV? If you are only focusing on EBOV, then a search of GEO profiles would return only 12 samples (however, 34 samples are included in this research study). This discrepancy confuses the manuscript: what are we looking at?
What does a difference in RMEAN samples mean, given that we are looking at a full spectrum of genes?

It is also not clear how you can evaluate differences in gene expression at the cellular level when you have different tissues and thus heterogeneous cell composition.

The genes you list range from 8,000 to 154,000, which when merged together are likely to create a bias. How do you account for this heterogeneity and bias from different studies, different depths of coverage, cell types, and viruses?

The authors also refer to EBOV as a chronic viral infection. Many of us consider it an acute infection as well. Is the focus of this manuscript on samples from people who survive EBOV and then persist with infection, or on those with acute infection? And how does that change the conclusions of the paper and the title when you are considering intervention?

For the principal components methods: what is this biplot telling us? It is very difficult to interpret, and further confounded by the 34 samples vs. 12 samples.

Additional clarifications on the background, methods, type I and type II mRNA, and the importance and relevance of the genes identified and of LCK are all critical. Some of these can be clarified by a figure that outlines the immunologic pathway in primates and what is actually known vs. what is hypothesized, with more details in the text explaining the logic. We appreciate the knowledge on mice, but how does this translate to humans and why is it relevant?

These issues raise concerns about using this type of data without careful consideration of heterogeneity and bias. Please address these issues.
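The reviewer's question about what a difference in RMEAN values means can be made concrete with a small sketch. The paper's exact formulas are not reproduced in this section, so the definitions below (a gene's mean expression relative to the overall dataset mean, and its standard deviation relative to its own mean) are illustrative assumptions only, not the authors' documented method:

```python
import numpy as np

def relative_stats(expr):
    """Per-gene relative mean (RMEAN) and relative standard deviation
    (RSTD) across samples. expr is a genes x samples array. Both
    definitions here are illustrative assumptions: RMEAN as the gene
    mean divided by the overall dataset mean, RSTD as the coefficient
    of variation (sample std / gene mean)."""
    gene_mean = expr.mean(axis=1)
    rmean = gene_mean / expr.mean()              # scale-free expression level
    rstd = expr.std(axis=1, ddof=1) / gene_mean  # within-gene variability
    return rmean, rstd

# Toy matrix: 3 hypothetical genes x 4 samples.
expr = np.array([[1.0, 1.0, 1.0, 1.0],
                 [2.0, 2.0, 2.0, 2.0],
                 [3.0, 3.0, 3.0, 3.0]])
rmean, rstd = relative_stats(expr)
# Constant rows yield RSTD of 0, while RMEAN tracks each gene's
# expression level relative to the dataset mean of 2.0.
```

Under these assumed definitions, a difference in RMEAN between two samples or groups reflects a shift in a gene's expression relative to the dataset baseline, which is why merging studies with very different gene counts and depths (the heterogeneity the reviewer raises) would shift that baseline and bias the comparison.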
https://f1000research.com/articles/5-2810